Meta has shared further details of its policy on political ads, saying it will now require advertisers to disclose when they use artificial intelligence to manipulate images or videos in certain political ads.
Nick Clegg, Meta’s president of global affairs, outlined the updated ad policies, claiming they align with the platform’s previous advertising rules during election cycles.
However, a key change ahead of the upcoming elections concerns advertisers' growing use of AI to create computer-generated content. Building on an earlier announcement, Clegg said that starting next year, Meta will require advertisers to disclose their use of AI or other digital editing techniques “in creating or modifying political or social issue ads in specific cases.”
Clegg clarified the criteria for disclosure: it applies to ads containing photorealistic images or realistic audio that have been digitally created or altered to depict people saying or doing things they did not. The policy also covers ads showing realistic people or events that do not exist, manipulated footage of real events, or realistic events that have been fabricated.
Meta has previously faced criticism, notably during the 2016 U.S. presidential election, for doing too little to curb misinformation on its platforms, including Facebook and Instagram. In 2019, a digitally doctored video of Nancy Pelosi circulated on the site, edited to make her appear inebriated, although it was not an ad.
The ascent of AI in crafting deceptive ads poses a fresh challenge for the platform, especially after substantial layoffs within its trust-and-safety team as part of cost-cutting measures this year.
Moreover, Meta will enforce a ban on new political, electoral, and social issue ads in the final week of U.S. elections, mirroring its approach in previous years. These restrictions will be lifted the day following the election.
As Meta tightens its rules on AI-altered political ads, Taiwan fought its own battle against manipulated media last month. Taiwan’s Deputy Premier Cheng Wen-tsang denied the authenticity of a deepfake video implicating him and pursued legal action ahead of upcoming elections. Similar incidents involving other officials underscore the pressing need to address AI-generated misinformation in politics, and the urgency of regulating such deceptive practices grows amid concerns over electoral integrity and the manipulation of public perception.