India has implemented some of the world's strictest regulations on deepfake content, requiring social media platforms to remove AI-generated fake videos and images within three hours of receiving a complaint. The new rules represent a landmark attempt to control the rapidly evolving technology.

The regulations also mandate that all synthetic content be clearly labeled and traceable to its source. Platforms that fail to comply face significant penalties, including the possibility of being barred from operating in the world's most populous country.

The move comes amid growing concerns about the misuse of deepfake technology for political manipulation, non-consensual intimate imagery, and financial fraud. India has seen several high-profile cases of deepfakes being used to create fake news and defamatory content.

Tech companies have expressed concerns about the feasibility of the three-hour deadline, arguing that reliably detecting sophisticated deepfakes takes time and expertise. Indian officials, however, insist that platforms must invest in better detection tools.

For Gen Z Indians, who are among the world's heaviest social media users, the regulations represent a government attempt to protect citizens from AI harms while potentially limiting creative expression. The balance between online safety and free expression remains hotly debated.