Meta disbanded its Responsible AI team

A new report says members of Meta’s Responsible AI team have been moved onto other AI teams.

Illustration by Nick Barclay / The Verge

Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.

According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.

Meta AI communications representative Nisha Deo told The Verge in an email that the change is intended to aid the development of AI features, but that the company will “continue to prioritize and invest in safe and responsible AI development.” Deo added that though members of the RAI team are now in the generative AI organization, they will “continue to support relevant cross-Meta efforts on responsible AI development and use.”

The team already saw a restructuring earlier this year, which Business Insider wrote included layoffs that left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.

RAI was created to identify problems with Meta’s AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems such as a Facebook translation error that caused a false arrest, WhatsApp AI sticker generation that produced biased images for certain prompts, and Instagram algorithms that helped people find child sexual abuse materials.

Moves like Meta’s, and a similar one by Microsoft earlier this year, come as world governments race to create regulatory guardrails for artificial intelligence development. The US government entered into agreements with AI companies, and President Biden later directed government agencies to come up with AI safety rules. Meanwhile, the European Union has published its AI principles and is still struggling to pass its AI Act.

Update November 25th, 2023, 12:37PM ET: Updated with statement from Meta.