Executive Brief: The Perils of Prioritizing Open-Source Over Ethics in AI
by Jeff Kluge, FHCA Digital Services Act & Children’s Code Auditor
Meta’s challenging journey in developing large language models underscores the critical need for robust ethical governance frameworks, beyond a reliance on open-source values, to ensure responsible AI practices.
The premature launch of Galactica in November 2022 drew swift criticism for its flawed outputs, and the public demo was withdrawn within days. Meta’s chief scientist, Yann LeCun, defended Galactica as an open research project, but its exclusive reliance on open collaboration proved counterproductive. The subsequent model, Llama, also lacked comprehensive governance, including ethical review.
LeCun’s position exposes the risks of prioritizing openness over concrete oversight in AI development. While open-source methodologies contribute to transparency, they alone cannot address ethical risks. Galactica lacked crucial safeguards, such as bias testing, usage guidelines, and risk assessments, that could have caught problems before launch. Even a basic pre-release assessment would have shown that a model prone to hallucination is a poor fit for its intended audience of researchers and scientists.
Meta must pair openness with accountability structures grounded in a publicly documented ethical framework. Vital steps include establishing an ethical review board, enforcing accountability measures for misuse, and implementing user training. A balanced approach harnesses the benefits of openness while keeping responsible safeguards in place.
The progression from Galactica to Llama suggests that Meta’s learning curve on AI governance remains incomplete. Applying the same process and expecting different results is unwise. Further advancement requires aligning models with societal values through rigorous governance covering transparency, accountability, and continuous improvement.
Meta’s experience demonstrates that openness alone cannot guarantee ethically sound AI. True responsibility demands supplementing existing, technically focused practices with concrete ethical oversight mechanisms and robust risk-mitigation processes. Such a balanced approach can foster collaborative innovation while upholding the principles essential for public-facing AI. As repeated press releases and whistleblower complaints make evident, much of the responsibility falls on CEO Mark Zuckerberg. Both he and Mr. LeCun must recognize that ethical governance, not just openness, is essential to earning public trust.