Technology

Meta Plans To Tackle Deepfakes In India With Invisible Watermark Tool

The new technology that will be deployed to detect deepfakes will not impact encryption on WhatsApp, according to a Meta official

Meta (formerly Facebook)

WhatsApp-owner Meta is working on a labelling mechanism to identify deepfakes spreading through its social media platforms, a senior Meta official told Outlook Business. Following the crackdown on deepfakes announced by the Ministry of Electronics and IT (MeitY), Meta and other social media companies in the country are under pressure to comply with government measures on the detection and removal of harmful AI-generated visuals from their platforms.

Deepfakes—synthetic visuals made with artificial intelligence (AI)—have come under increased scrutiny from the government after a fake video of Telugu actor Rashmika Mandanna went viral last month. Following a meeting with social media companies earlier this week, IT Minister of State Rajeev Chandrasekhar said that advisories would be issued to ensure 100 per cent compliance by platforms.


The technology Meta plans to deploy is being developed by Facebook AI Research (FAIR) in collaboration with the French research institute Inria. Called ‘Stable Signature’, Meta describes the labelling mechanism as “an invisible watermarking technique we created to distinguish when an image is created by an open-source generative AI model.”

The watermark applied by the new technology will not be visible to the human eye but can be detected using algorithms. According to Meta’s AI research team, this invisible watermark cannot be removed from the visual even if it is altered, cropped or otherwise edited.
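To illustrate the general embed-and-detect idea behind invisible watermarking, the toy Python sketch below hides a key-selected bit pattern in the least-significant bits of an image and then measures how many of those bits are recovered. This is not Meta’s Stable Signature method, which bakes the watermark into the image generator itself and is designed to survive cropping and editing; this simplified example is not robust to such edits and is offered only as a conceptual sketch.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least-significant bits of pixels chosen by a secret key."""
    rng = np.random.default_rng(key)
    flat = image.flatten()  # flatten() returns a copy, so the original image is untouched
    positions = rng.choice(flat.size, size=bits.size, replace=False)
    flat[positions] = (flat[positions] & 0xFE) | bits  # overwrite only the lowest bit
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, key: int, bits: np.ndarray) -> float:
    """Return the fraction of embedded bits recovered; close to 1.0 means the mark is present."""
    rng = np.random.default_rng(key)
    flat = image.flatten()
    positions = rng.choice(flat.size, size=bits.size, replace=False)
    return float(((flat[positions] & 1) == bits).mean())

# Toy usage: an 8-bit grayscale image and a 64-bit signature known only to the detector
image = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
signature = np.random.default_rng(7).integers(0, 2, size=64, dtype=np.uint8)
marked = embed_watermark(image, key=42, bits=signature)
print(detect_watermark(marked, key=42, bits=signature))  # 1.0 on the watermarked image
print(detect_watermark(image, key=42, bits=signature))   # ~0.5 (chance level) on the original
```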

Not Ending Encryption


Although the research behind this technology was first announced in October this year, it now has to be made “more robust” to suit Indian requirements. “Through our advanced AI, we can now identify deepfakes and pull them. But it’s much more challenging in an encryption context,” said the Meta official, who did not wish to be named.

One key concern around the government’s crackdown on deepfakes has been its potential impact on privacy. Since deepfake-enabled misinformation spreads easily via encrypted messaging platforms like WhatsApp, detecting deepfakes there could require breaking end-to-end encryption (E2EE). E2EE is a secure communication process that prevents anyone apart from the original sender(s) and recipient(s) from accessing the contents of a message.
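As a minimal illustration of the E2EE principle, the sketch below uses the PyNaCl library’s public-key Box (an assumed stand-in, not the Signal protocol that WhatsApp actually uses): a message encrypted for a specific recipient can be read only with that recipient’s private key, so a platform relaying the ciphertext learns nothing about its contents.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each party generates a key pair; only the public halves are ever shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"please verify this clip before forwarding")

# A relaying server only ever sees the ciphertext bytes.
# Only the recipient, holding the matching private key, can decrypt the message.
recipient_box = Box(recipient_key, sender_key.public_key)
print(recipient_box.decrypt(ciphertext))
```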

The Meta official clarified that this technology will not involve breaking encryption in any way. “[Our] engineers are working on a technology that does not involve traceability but that can recognise deepfakes at inception. The goal is to do this without breaking encryption,” said the official. The new tool will also be applied to other Meta-owned platforms such as Instagram and Facebook.

Since the Rashmika Mandanna incident, MeitY has held three meetings with social media companies to discuss ways to tackle deepfakes. In the most recent meeting, the firms were reminded to comply with the relevant provisions of the IT Rules on identifying and taking down deepfakes. Since these provisions are mapped to corresponding sections of the Indian Penal Code (IPC), companies that fail to comply may face criminal consequences.
