Executive Briefing: 5 Risks and Harms Related to Meta’s New AI Image Generator

--

Meta’s recent release of the “Imagine with Meta AI” image generator, powered by the Emu image-synthesis model, raises a number of potential risks and harms, particularly for children. The Emu model was trained on an extensive dataset of 1.1 billion publicly visible Facebook and Instagram images, which leads to several noteworthy considerations.

1. Privacy and Consent:

The use of publicly visible images to train the AI model means the training data likely includes images of children who never explicitly consented to this use.

The “Imagine with Meta AI” experience requires a Meta account, meaning minors with accounts can use the tool without fully understanding the implications of how their images and prompts may be used.

2. Ethical Considerations:

The dataset draws on Instagram and Facebook, platforms widely used by children, raising ethical concerns about the use of their data without explicit consent.

The lack of information regarding the specific origin of the training data raises questions about the ethical handling of children’s images within the dataset.

3. Content Generation and Control:

The AI image generator allows users to create images from written prompts, potentially leading to the generation of inappropriate or harmful content involving children.

Adversarial testing reveals that while violent and explicit content is filtered, the system may still allow the creation of images that place well-known commercial characters in inappropriate or potentially harmful contexts.

4. Transparency and Traceability:

Meta’s approach to handling harmful outputs involves filters, a proposed watermarking system, and a disclaimer about potential inaccuracies or inappropriateness.

The effectiveness of these measures in preventing harm, especially to minors, remains uncertain, and the proposed watermarking system is not yet operational.

5. Lack of Disclaimers on Harmful Content:

Unlike other AI companies, Meta’s research paper on the Emu model lacks explicit disclaimers regarding the potential creation of reality-warping disinformation or harmful content.

The absence of such disclaimers may downplay the risks associated with generating misleading or harmful images, especially those involving children.

The “Imagine with Meta AI” image generator introduces potential risks and harms related to privacy, ethics, content control, and transparency. It is imperative for Meta to address these concerns comprehensively, particularly when it comes to protecting children and ensuring responsible AI usage. Ongoing monitoring, transparency, and user education are essential to mitigating the risks associated with deploying this technology.

--

Jeff Kluge, FHCA- AI Governance & Audits

AI governance advocate, bridging business and ethical design of responsible tech, with a resolute focus on a brighter future for children.