On his return to the White House in January, Donald Trump swiftly dismantled the regulatory framework his predecessor Joe Biden had put in place to manage artificial intelligence risks.
The US president’s actions included reversing a 2023 executive order that required AI developers to submit safety test results to federal authorities when systems posed a “serious risk” to the nation’s security, economy or public health and safety. Trump’s order characterised these guardrails as “barriers to American AI innovation”.
This back and forth on AI regulation reflects a tension between public safety and economic growth also seen in debates over regulation in areas such as workplace safety, financial sector stability and environmental protection. When regulations prioritise growth, should companies continue to align their governance with the public interest — and what are the pros and cons of doing so?
At OpenAI, founded in 2015 by Sam Altman as a non-profit organisation, this has been a subject of significant debate among investors and co-founders, including Elon Musk, particularly since ensuring that AI operates safely, ethically and for the benefit of humanity has been a concern since the technology’s earliest days.
As a result, many companies have adopted novel corporate structures that aim to balance their financial interests with broader societal concerns. For example, in 2021, seven former OpenAI employees founded Anthropic and incorporated it as a benefit corporation, a structure through which a company legally commits to delivering societal benefit alongside profit. In its incorporation documents, Anthropic states its purpose is to responsibly develop and maintain advanced AI for the long-term benefit of humanity.
Test yourself
This is part of a series of regular business school teaching case studies devoted to business dilemmas. Read the text and the articles from the FT and elsewhere suggested at the end (and linked to within the piece) before considering the questions raised. The series forms part of a wide-ranging collection of FT ‘instant teaching case studies’ that explore business challenges.
First introduced by Maryland in 2010, benefit corporation structures have been adopted by more than 40 US states, Washington DC, Puerto Rico and countries including Italy, Colombia, Ecuador, France, Peru, Rwanda and Uruguay, as well as the Canadian province of British Columbia.
However, they have also been adopted by AI companies whose goals are not specifically aligned with environmental and societal impact. Musk’s xAI, incorporated as a benefit corporation in Nevada, has a stated corporate purpose to create “a material positive impact on society and the environment, taken as a whole”.

Critics argue that the benefit corporation model lacks teeth. While most include transparency provisions, the associated reporting requirements can fall short of providing meaningful accountability on whether the company is achieving its legal purpose.
All this raises the risk that the model opens the door to “governance washing”. Following the wave of lawsuits against opioid maker Purdue Pharma, its owner the Sackler family proposed turning the company into a benefit corporation, which would focus on making drugs to tackle the opioid crisis. Final disposition of the multitude of cases against the company is ongoing.
The case of OpenAI illustrates the issues surrounding governance in the AI sector. In 2019, the company started a for-profit entity to take on billions of dollars in funding from Microsoft and others. A number of early employees left, reportedly over safety concerns.
Musk sued OpenAI and Sam Altman in 2024, alleging they had compromised the start-up’s mission of building AI systems for the benefit of humanity.
In December 2024, OpenAI announced plans to restructure as a public benefit corporation and, in early 2025, the company’s non-profit board was reportedly working to split OpenAI into two entities: a public benefit corporation and a charitable arm valued at roughly $30bn. Musk has opposed the move and this month made an unsolicited bid of more than $97bn for OpenAI.
The trajectory of OpenAI’s funding supports the argument put forth by Musk and others that OpenAI prioritises profit over public benefit. In October 2024, the company secured a landmark funding round at a $157bn valuation. But it had not yet formalised its ownership structure and governance framework, giving investors significant influence over the company’s mission and execution.
As the company finalises its structure, should it embrace the vision of the industry articulated in Trump’s executive order and drop its focus on safety and humanity? Or should it maintain that focus, given that other parts of the world, or future US presidents, may take a different view of the responsibility of AI companies?
And are voluntary mechanisms such as corporate structure and governance sufficient to create accountability while maintaining the agility needed for innovation? According to some legal experts, such structures are not necessary, as traditional corporate forms of incorporation allow companies to set sustainability goals if they are in the long-term interests of shareholders.
To increase accountability, some benefit corporations have created multi-stakeholder oversight councils with representatives from affected sectors such as technology and civil society. In May 2024, OpenAI did establish a safety and security committee, led by Altman (he later stepped down), although critics have pointed out that such voluntary structures could be subordinated to profit goals.
Other options include adopting the EU’s Corporate Sustainability Reporting Directive, which will govern companies such as OpenAI in the coming years, or linking compensation and stock options to safety-related goals.
Other accountability mechanisms may yet emerge. In the meantime, governance at AI companies such as OpenAI raises important questions about integrating ethical and safety considerations into a largely untested technology.
Questions for discussion
How can benefit corporations in the AI sector ensure accountability for their social and environmental commitments?
How could voluntary corporate governance safeguards provide public trust in an industry often criticised for opacity and potential harm?
What specific metrics and reporting requirements would make benefit corporation status meaningful for AI companies?
What mechanisms could policymakers introduce to strengthen the efficacy of the benefit corporation model in high-stakes industries?
Can these models lead to systemic change in corporate accountability, or will they remain niche solutions?
How can benefit corporations address their global impact when operating under different national legal frameworks?
Christopher Marquis is Sinyi Professor of Chinese Management at Cambridge Judge Business School