Facebook parent company Meta Platforms has introduced new disclosure requirements for political advertisers, asking them to reveal any use of AI in their content ahead of the upcoming Canadian federal elections.
The move is part of Meta's broader effort to curb misinformation and restore confidence in political communication as Canada heads to the polls.
Meta extends ban to ads with non-existent persons
A Reuters article indicates that the disclosure mandate applies if an advertisement contains a realistic image, video, or realistic-sounding audio that has been digitally created or altered to depict a person saying or doing something they did not say or do.
According to the company, the new rules also apply to digitally altered political or social advertisements, in a move to minimize misinformation ahead of the Canadian federal polls.
Canadian Prime Minister Mark Carney is reportedly set to trigger an early election this weekend, with a vote expected on April 28, according to an official who was not authorized to speak publicly and therefore remained anonymous.
Meta’s move comes amid a proliferation of AI-generated content flooding social media platforms and websites and misleading unsuspecting audiences.
Now, the disclosure requirement extends to ads that show a non-existent person or a seemingly realistic event that never happened. It also covers footage of a real event that has been altered, or depictions of an event that allegedly occurred but are not a true image, video, or audio recording of it.
Last November, Meta indicated that it would extend its ban on new political ads beyond the US election, in response to rampant misinformation during the presidential campaign.
In 2023, the social media giant also barred political campaigns and advertisers in other regulated industries from using its new generative AI advertising products.
However, Meta earlier this year also ended its US fact-checking programs and eased curbs on discussions around contentious topics such as immigration and gender identity. The shift came as the social networking firm bowed to pressure from conservatives and implemented the biggest overhaul yet of its approach to managing political content.
The Facebook and Instagram owner also said in December last year that generative AI had limited impact across its apps in 2024, with AI-driven influence campaigns failing to build significant audiences on Facebook and Instagram or to use AI effectively.
Meta may increase its operating costs with this initiative
The firm has also added a feature that lets people disclose when they share AI-generated images, video, or audio so the content can be labeled.
According to Nasdaq, the steps Meta has taken on AI-generated political ads show proactive leadership in combating misinformation, which could boost user trust and regulatory goodwill.
The policy could also give the social media giant a competitive edge by positioning it as a more proactive and trustworthy platform for political discourse, attracting both advertisers and users concerned about transparency.
Though a noble idea in the fight against misinformation and disinformation, there are concerns the requirements may deter some political advertisers, potentially reducing ad revenue from this sector.
Another challenge, according to Nasdaq, lies in implementing and enforcing the policy, which could prove difficult and resource-intensive, increasing operational costs for the social media giant.
The policy may also be a clear indicator of the prevalence of AI-generated content on the platform, raising user concerns about authenticity and potentially reducing overall engagement.
The availability and accessibility of generative AI tools has made it possible for users to create images, audio, and videos that seem real, making it easy to mislead the unsuspecting public.
In 2024, as many countries including the US held elections, the world witnessed a rise in misinformation and disinformation as unscrupulous individuals used AI tools to create misleading content. Notable instances include a viral picture of the Pope wearing a stylish puffer jacket.
Another case involved calls mimicking then US President Joe Biden’s voice, urging voters in New Hampshire to boycott the presidential primaries. The current president, Donald Trump, has had his own share of AI manipulation; at one point, a deepfake picture of him tussling with police went viral.