The charter and the territory: what the Forrest/Meta case adds to the AI governance story

Eleventh in the New AI Charter series

Earlier this year I published a series of articles examining what the East India Company tells us about the governance of AI. The argument was structural: the same mechanisms of power, control, and accountability that characterised the era of chartered companies are present in the AI industry today, moving faster and operating more invisibly than any colonial enterprise ever did.

That series focused primarily on the relationship between AI companies and state power. What happens when a government tries to compel an AI company to remove its ethical limits as a condition of monopoly access. What proactive resistance looks like when it is genuinely structural rather than merely stated.

The Forrest litigation against Meta extends the analysis to a different question. The Charter series examined what the chartered company negotiates with the Crown. The Forrest case examines what the chartered company does in its territory once the charter is signed.

Those are not the same governance problem.

What the territory looks like from the inside

Saanvi is an engineer in Bunbury, Western Australia. She had never invested in shares. In 2019, a Facebook advertisement appeared showing Andrew Forrest, one of Australia’s most recognisable businesspeople, apparently endorsing a cryptocurrency scheme. The advertisement was entirely fabricated. Saanvi lost $670,000, the lion’s share of her life savings. She says there is no retirement now for her and her husband.

Saanvi was not a party to any arrangement between Meta and its advertisers. She was not a party to any arrangement between Meta and its regulators. She was a subject of the system. She experienced the consequences of an architecture she had no visibility of and no voice in.

This is the governance territory the Charter metaphor has always implied but the series has not yet fully examined. The East India Company governed enormous populations who had no part in the negotiations that produced their governance conditions. The question of what obligations flow from that relationship is precisely the question the Forrest case is forcing into the open.

Two systems, one territory, no connection

What the litigation has made visible, through years of discovery, is an architectural fact that is not unique to Meta.

Meta had a policy explicitly banning scams. Meta had a content moderation system that knew what a scam was. Meta also had an AI advertising system called Advantage+, designed to maximise advertising performance.

Advantage+ found the scam advertisements using Forrest’s face. It did not remove them. It made them more effective. It added music, rewrote the language to increase persuasiveness, turned still images into video, and identified the people most likely to respond. It delivered the finished product at the moment of maximum persuasive impact.

One system held the policy. The other system expressed the priorities. The two were not connected. Nobody decided this. The architecture decided it.

The East India Company’s governance of its territories depended on exactly this kind of separation. The institutions that existed in principle to protect people (Parliament in Westminster, the Board of Control established in 1784 to provide government oversight of the Company, the English courts) operated at such a distance that the gap between what they could observe and what the Company’s systems were doing in India was structurally impossible to close. The result was that harm could accumulate on a vast scale before anyone with authority over the system was required to confront it.

The mechanism that made it visible

What it took to surface this architecture was Andrew Forrest: his resources, his persistence, his personal motivation, and seven years of litigation in the Northern District of California.

This is not the governance lesson. The governance lesson is that the hearing pathway, the structural connection between the experience of the people a system touches and the people with authority over it, was absent by design. Not by malice. By the ordinary architecture of large-scale AI deployment, in which multiple systems optimise for separate objectives with no requirement that the consequences of one system reach the governance of another.

Most of the people inside the territory of this system do not have access to the mechanism Forrest has used. Saanvi’s account is in the public record because Forrest funded the litigation that put it there. The 230,000 fraudulent advertisements, the $16 billion in revenue that internal Meta documents attributed to scams and banned goods in a single year, the evidence of how Advantage+ operated: all of this became visible because one person had the resources and the will to make it so.

We do not need more Forrests. We need systems designed so that what Saanvi experienced reaches the board before it requires seven years and $60 million to surface.

The charter question for every board

The New AI Charter series has argued that AI companies are following the East India Company trajectory of becoming infrastructure. The governance question that generates is: on what terms was that infrastructure granted, and what obligations flow from those terms?

The Forrest case adds a second question that boards deploying AI systems must now also answer: what is happening in the territory your systems govern, and how does that reach you?

Forrest put it directly in the AFR: “They will say, well, our algorithm isn’t us. And I’ll say, yes, baby, it is.”

He is right. AI systems do not make decisions in isolation from the organisations that deploy them. They express what those organisations are actually optimising for, at a scale and speed no human decision-maker could match. The policy states the intention. The objective function reveals the priorities. The gap between them is what produces Saanvi’s experience.

The transparency inversion is directly relevant here. AI systems create conditions under which a board can know more about what its organisation is doing, and to whom, than was ever possible in the human-only world. The algorithm cannot hide its behaviour. The organisation chooses whether to look.

The chartered company that governed its territory well did not wait for a crisis to find out what its systems were doing to the people inside it. The boards doing this well are not waiting either.

The governance question is not whether your organisation has a policy against harm. The question is whether the system that knows what harm looks like is connected to the system producing the outcomes, and whether what Saanvi is experiencing has a structural path to the people with the authority and the will to act on it.

Not through litigation. By design.

The account of Saanvi, the details of Advantage+, and Andrew Forrest’s direct quotations in this article draw on reporting by Michaela Pollock and Janek Drevikovsky in the Australian Financial Review: ‘Dad, it’s a fraud: Call that sparked Forrest’s $60m war on Facebook’, 14 April 2026.

This article is the eleventh in the New AI Charter series. Earlier instalments examined the historical parallel between AI companies and the East India Company, the Anthropic/Pentagon standoff, and the proactive resistance framework. The Architecture of Algorithmic Harm, a companion framework note for boards and executives, is available on request.
