
Facebook Parent Meta Planning to Label All AI-Generated Content Starting Next Month

The labelling initiative aims to address concerns about misleading content on its platforms

Staff Writer, TLR

Published on April 6, 2024, 14:01:08

meta, artificial intelligence, facebook, ai generated content

Meta Platforms, the parent company of Facebook, Instagram and Threads, announced plans to introduce labels for artificial intelligence-generated audio, image, and video content starting next month. The labelling initiative aims to address concerns about misleading content on its platforms.

The company clarified that it will specifically label content generated using AI technology and will refrain from removing it unless it violates platform policies or presents significant risks.

Meta acknowledged that its current policy, established in 2020, is too narrow as it only addresses videos altered or created through AI. Monika Bickert, Meta's vice-president of content policy, highlighted the rapid evolution of AI technology, noting the emergence of realistic AI-generated audio and photos over recent years.

In response to feedback from its oversight board, which engaged with over 120 stakeholders across 34 countries, Meta conducted a public opinion poll involving more than 23,000 respondents from 13 countries. The poll revealed strong support (82 per cent of respondents) for adding warning labels to AI-generated content.

The global AI industry is projected to attract investments of up to $200 billion by 2025, with a potentially significant impact on GDP, according to a report by Goldman Sachs Economic Research published in August.

Despite the industry's growth, regulatory bodies are struggling to keep pace with technological advancements. In December, the EU introduced the landmark Artificial Intelligence Act, imposing fines exceeding €35 million ($38.4 million) for non-compliance.

Meta emphasised a commitment to freedom of expression and revealed that its oversight board recommended a "less restrictive" approach to addressing manipulated media through contextual labelling.

Meta will employ its own detection methods to identify AI-generated content and will label media based on user disclosures of AI use during uploads.

In cases where digitally created or altered content poses a significant risk of public deception, Meta may apply more prominent labels to provide additional context.

Meta clarified that content removal, whether AI-generated or human-created, will be reserved for select cases violating platform rules, such as those pertaining to voter interference, bullying, violence, or incitement as outlined in its community standards.

Additionally, Meta works with nearly 100 independent fact-checkers who can demote false or altered content in users' feeds and attach overlay labels to provide further context.

For any enquiries or information, contact ask@tlr.ae or call us on +971 52 644 3004. Follow The Law Reporters on WhatsApp Channels.
