Key Takeaways:
- A group of ex-OpenAI employees has petitioned attorneys general in California and Delaware to prevent OpenAI’s structural transition to a for-profit model.
- The former staff, joined by AI experts and Nobel laureates, argue this change could undermine OpenAI’s original nonprofit mission.
- OpenAI insists the new structure will maintain its public-oriented goals through a hybrid model.
- Concerns center around accountability, safety, and the ethical development of advanced AI technologies.
Ex-OpenAI Employees Call for Legal Block on Company’s Shift to For-Profit Model
A group of former OpenAI employees and prominent experts is calling on state authorities in California and Delaware to block OpenAI’s ongoing effort to restructure itself as a for-profit entity. The move, they argue, could jeopardize the company’s founding mission to ensure that artificial intelligence (AI) serves the public good.
In a letter signed by ten former OpenAI staff members and supported by Nobel laureates and AI thought leaders, the coalition urged California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings to use their oversight powers to prevent the transition.
Their appeal emphasizes the risks of allowing a company developing artificial general intelligence (AGI)—a type of AI that could surpass human capabilities—to prioritize profits over safety and accountability.
“Ultimately, I’m worried about who owns and controls this technology once it’s created,” said Page Hedley, a former policy and ethics adviser at OpenAI.
Public Benefit vs. Private Incentives
OpenAI, originally founded as a nonprofit to safely develop AGI, has since grown into a tech powerhouse. With ChatGPT now boasting 400 million weekly users and the company’s market value reported at $300 billion, OpenAI has launched a for-profit arm and now seeks to formalize this structure under a public benefit corporation—similar to models used by rivals like Anthropic.
In a statement, OpenAI defended its restructuring plan, saying it would allow both the for-profit and nonprofit arms to thrive and jointly advance its humanitarian goals. “This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission,” the company said.
Yet critics are unconvinced. Hedley and others argue that OpenAI has been accelerating product rollouts at the expense of safety, reducing the checks that once ensured responsible innovation.
“The costs of those decisions will continue to go up as the technology becomes more powerful,” Hedley warned.
Former technical team member Anish Tondwalkar raised alarm over the potential loss of safeguards like the “stop-and-assist clause,” which requires OpenAI to assist rather than compete if another organization comes close to achieving human-level AI.
Broader Backlash and Internal Divisions
This isn’t the first challenge to OpenAI’s restructuring. Earlier this month, labor leaders and nonprofits also petitioned to protect the company’s billions of dollars in charitable assets.
Meanwhile, a lawsuit from co-founder Elon Musk accuses OpenAI of abandoning its founding principles, though some former employees question Musk’s motives given his own commercial AI ventures.
Nobel laureates Oliver Hart and Joseph Stiglitz, along with AI scientists Geoffrey Hinton and Stuart Russell, endorsed the letter. Hinton, in particular, called for OpenAI to return to its core mission rather than “enriching their investors.”
“OpenAI may one day build technology that could get us all killed,” warned former employee Nisan Stiennon. “This duty precludes giving up that control.”
As OpenAI moves closer to redefining its governance, the debate over how—and by whom—advanced AI should be controlled continues to intensify.