The New AI Charter

From Chartered Companies to Artificial Intelligence: Power, Control, Resistance, and the Path Forward

Phillip Volkofsky | Brakten Pty Ltd | March 2026


Contents

  1. The Historical Metaphor — From Chartered Companies to AI
  2. The Mechanisms of Control — Surveillance, Force, and Manipulation
  3. Tools for Resistance — Technologies That Counter AI Control
  4. Proactive Ethical Resistance (What a Leading AI Company Could Do)
  5. Conclusion: The Window Is Now
  6. Case Study: The Anthropic–Pentagon Standoff (February–March 2026)
  7. Bibliography

Part One: The Historical Metaphor — From Chartered Companies to AI

The Origins of the Company

The concept of companies evolved over centuries. The earliest forms of business organisation date back to ancient civilisations. In ancient Rome, societas was a partnership formed for trade and commerce, enabling merchants to pool capital without requiring the permanent structure of a modern firm. Medieval Europe saw the rise of merchant guilds: associations of traders that banded together for mutual protection and shared commercial interests.

The major structural leap came with the joint stock company in the late sixteenth and early seventeenth centuries. These vehicles allowed multiple investors to pool capital and share both profits and risks across multiple voyages or ventures. This was a fundamental innovation over the earlier model of a single-voyage company that dissolved upon return.

The most famous early examples were the English East India Company (chartered 31 December 1600 by Queen Elizabeth I) and the Dutch East India Company, or VOC (Vereenigde Oostindische Compagnie, established 20 March 1602 by the States General of the Netherlands). The VOC conducted the first modern initial public offering in August 1602, raising over 6.4 million guilders from 1,143 investors. It is widely regarded as the first publicly traded company and the first multinational corporation. Shares could be purchased by any citizen of the Dutch Republic and traded on what became the Amsterdam Stock Exchange.

These early chartered companies were granted extraordinary quasi-governmental powers: the right to wage war, sign treaties, mint currency, govern populations, and establish colonies. They were, in the words of one historian, states in the guise of merchants. By 1669, the VOC was the wealthiest private company in history, operating more than 150 merchant ships, protected by a fleet of 40 warships, and employing over 50,000 people worldwide.

The modern concept of limited liability, under which investors could lose only what they invested and not their personal assets, developed more gradually. In England, the Bubble Act of 1720 severely restricted the formation of companies following the South Sea Bubble financial crisis. It was not until the Joint Stock Companies Act of 1844 and the Limited Liability Act of 1855 that forming a limited liability company became broadly accessible without a special royal charter or Act of Parliament. From there, incorporation laws spread across the world.

The AI Parallel

A small number of well-funded entities (OpenAI, Google DeepMind, Anthropic, Meta, xAI) are being chartered in everything but name. Through massive capital investment rather than royal decree, they have been granted the resources and access to explore and develop a transformative new domain, with potential returns and risks that are civilisational in scale.

  • Royal charter = government contracts and massive capital investment. The Pentagon’s 2025 awards of contracts worth up to $200 million each to four AI companies are the contemporary equivalent: instruments that grant extraordinary access and demand extraordinary compliance.
  • Colonial territories = data, digital infrastructure, and populations subject to AI systems.
  • The limited liability question is unresolved. Who bears the risk when an AI system causes harm through misinformation, job displacement, or a catastrophic failure? The company that built it? The deployer? The user? We have not written our equivalent of the Limited Liability Act yet.
  • The Bubble Act moment may be coming. After the South Sea Bubble, governments overcorrected. A serious AI incident (financial, reputational, or physical) could trigger similar regulatory reaction.
  • Democratisation is the eventual arc. Companies went from requiring a royal charter to something anyone can register online. AI may follow the same trajectory. Open-source models are already pushing in that direction.

The deepest part of the metaphor is this: the corporation was a technology for organising human effort at scale, one that governments soon co-opted. AI is arguably the next iteration of that same impulse, except now the effort being organised and scaled is not exclusively human. That raises a question the East India Company never had to face: what happens when the tool develops capabilities that rival the people directing it?


Part Two: The Mechanisms of Control — Surveillance, Force, and Manipulation

The Chartered Company Playbook

The East India Company did not arrive in India announcing conquest. It arrived offering trade, efficiency, and partnership. It made itself useful to local rulers, to merchants, to the British Crown. It built infrastructure, established legal systems, and created economic dependencies. The violence and exploitation came gradually, often framed as maintaining order or protecting commerce.

By the time populations understood what had happened, the Company controlled the courts, the armies, and the economy. British rule in India lasted roughly two centuries after the Company’s consolidation of power. The Dutch controlled Indonesia for around 350 years. These were not brief episodes in human history, but multigenerational realities shaped by one foundational mechanism: the Company ultimately became the infrastructure through which daily life operated.

The AI Version of This Playbook

Surveillance as the New Garrison

Chartered companies maintained control through physical presence: forts, soldiers, administrators. AI-powered surveillance achieves the same end far more efficiently. The tools already visible, including facial recognition, predictive policing, and bulk aggregation and analysis of communications data, are being developed and exported globally. The garrison is invisible, and it never sleeps.

AI enables the aggregation of publicly available data at a scale and speed that constitutes mass surveillance in practice without technically violating any single statute. The law was written before the capability existed.

Autonomous Weapons as the New Mercenary Army

The East India Company had its own private military: at its peak, twice the size of the British Army. Autonomous weapons and AI-driven systems offer the modern equivalent: the ability to project force without the political cost of human casualties on your own side. This removes one of the most important checks on state violence, namely the unwillingness of populations to sacrifice their children for imperial projects.

Manipulation as the New Missionary

Chartered companies used religion, education, and cultural assimilation to reshape the identities of colonised peoples. AI-driven information systems can do this at a scale and precision that would have been unimaginable to any colonial power. Targeted disinformation, algorithmic radicalisation, personalised propaganda that adapts to psychological profiles in real time: these are tools for colonising minds rather than territory.

The Duration Problem

What makes AI-enabled control particularly concerning is the question of time. The chartered companies held power for centuries partly because the information asymmetry was so vast. Colonised peoples often did not fully understand the legal, financial, and military systems being used against them until generations had passed. The damage the chartered companies did to colonised peoples remains largely unmeasured and undocumented, a burden still carried by their descendants today.

AI dramatically amplifies this asymmetry. A government using advanced AI for social control possesses a tool that its population cannot see, cannot fully understand, and in many cases cannot even prove exists. This suggests that AI-enabled control could be more durable than historical colonialism, not less.

What Might Ultimately Break These Systems

Historical analysis identifies five mechanisms of eventual system collapse, each of which has an AI-era analogue:

  • Internal cost and overreach. The East India Company collapsed partly under its own administrative weight. AI systems have a similar vulnerability: they are brittle in ways their operators may not fully understand, require enormous energy and infrastructure, and fail in unexpected situations.
  • The conscience of the imperial centre. Abolition did not come primarily from the colonised. It was driven significantly by moral movements within Britain itself. The most potent resistance to AI authoritarianism may similarly come from within the societies building these tools: whistleblowers, researchers raising alarms, engineers refusing to build certain systems.
  • The impossibility of total information control. Every authoritarian system eventually discovers that controlling information completely is impossible. The tools of control can also become tools of resistance.
  • Economic contradiction. Extraction-based systems destroy the productive capacity of the populations they exploit. AI-enabled authoritarianism faces a similar paradox: the innovation and creativity that make AI valuable require exactly the kind of intellectual freedom that surveillance states suppress.
  • Legitimacy erosion. Systems built on control rather than consent carry within them the seeds of their own collapse. They require ever-increasing resources to maintain and generate ever-increasing resentment.

Part Three: Tools for Resistance — Technologies That Counter AI Control

The Structural Problem

If a person discovers that a technology company is violating privacy laws, manipulating users, or lying about its AI systems, the path to redress runs through institutions that are already deeply entangled with those companies. Regulators rely on industry expertise. Legal action requires resources that individuals rarely have. Whistleblower protections exist on paper but often result in career destruction.

This is the modern equivalent of trying to file a complaint against the East India Company in a court the Company administers.

Key Tools

Decentralised and Encrypted Communication

Tools like Signal, Briar, and Matrix/Element provide encrypted messaging that resists surveillance. Briar can operate over Tor, Wi-Fi, or Bluetooth without needing internet infrastructure at all. These are foundational: without secure communication, no organised resistance to institutional power is possible.

Privacy-Preserving AI

Federated learning allows AI models to be trained across many devices without centralising data. Differential privacy techniques add mathematical noise to datasets so that patterns can be analysed without individual records being identifiable. The principle is that AI does not have to be a centralising force: it can be architecturally designed to distribute power rather than concentrate it.
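To make the differential privacy idea concrete, here is a minimal sketch of the Laplace mechanism in Python, the standard way to release an aggregate statistic with a quantified privacy guarantee. The function names and example data are illustrative, not drawn from any particular system.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    u = max(u, -0.5 + 1e-12)           # guard against log(0) at the edge
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with
    scale = 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: publish how many users are over 40 without any individual
# record being recoverable from the released number.
ages = [23, 45, 31, 67, 52, 29, 41, 38]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for a provable bound on what any single record can reveal.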

Algorithmic Auditing

One of the most insidious aspects of AI control is its opacity. Projects that work on algorithmic transparency, testing whether systems are discriminating, manipulating, or surveilling, are essential. What is needed is the equivalent of environmental monitoring for algorithmic systems: independent, continuous, and publicly accessible measurement of what AI systems are actually doing versus what their operators claim.
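As a small illustration of what such independent measurement might look like, the sketch below computes one widely used audit statistic: the gap in favourable-outcome rates between groups (demographic parity). The audit-log format and names here are hypothetical; a real audit would track many such statistics continuously.

```python
from collections import defaultdict

def positive_rates(decisions):
    """Rate of favourable outcomes per group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    True for a favourable decision (e.g. a loan approved).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in favourable-outcome rates between groups.

    A large gap does not by itself prove discrimination, but it is the
    kind of measurable, publicly reportable signal an independent,
    continuous audit can track against an operator's claims.
    """
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
# Group A approved 2 of 3, group B 1 of 3: a gap of one third.
```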

Open-Source AI Models

When AI models are proprietary and controlled by a handful of companies, those companies become gatekeepers of cognitive infrastructure. Open-source models distribute capability more broadly. A community using a locally run AI model is not dependent on a corporate intermediary that might be compromised, compelled, or simply indifferent.

The Deeper Challenge

Every tool of resistance can also be a tool of control. Encryption protects whistleblowers but also protects those engaged in harm. Open-source AI empowers communities but also empowers bad actors. The tools alone are insufficient. What history suggests is that technology enables resistance but does not create it.

If there is a single most important tool for undermining the coming control infrastructure, it would not be a technology. It would be AI literacy at mass scale: the widespread understanding of how these systems work, what they can do, and what rights people have in relation to them. Democracies need to act now to secure people’s right to hold the owners of these AI platforms to account. The East India Company’s power depended fundamentally on the governed not understanding the mechanisms of their own governance. The same is true now.


Part Four: Proactive Ethical Resistance — What a Leading AI Company Could Do

There is a fundamental difference between passive ethical compliance and proactive ethical resistance. Passive compliance means having a set of policies and procedures, publishing an ethics statement, and reacting when problems are identified. Most technology companies today operate in this mode. Proactive resistance means structurally designing the organisation, its products, and its relationships so that misuse is made architecturally difficult, not just procedurally discouraged.

The distinction matters because policies and procedures are only as strong as the institutions enforcing them. If those institutions are captured, co-opted, or simply outmatched, passive compliance becomes theatre.

Architectural Commitments: Building Resistance Into the Product

Privacy by Destruction, Not Just by Design

Most companies speak of privacy by design. A proactive approach adopts privacy by destruction: the systematic, verifiable, and automatic deletion of data that is no longer needed for the specific purpose for which it was collected. This means not merely anonymising data, which has been repeatedly shown to be reversible, but genuinely destroying it, with cryptographic proof that destruction has occurred.
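One established pattern for verifiable destruction is crypto-shredding: encrypt each record under its own key, then destroy the key rather than chasing every copy of the ciphertext. The sketch below illustrates the pattern; the SHA-256 "keystream" is a toy stand-in for a real authenticated cipher such as AES-GCM, and the class and receipt format are hypothetical.

```python
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode). Illustrative only: a
    production system would use an authenticated cipher (AES-GCM)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class ShreddableStore:
    """Crypto-shredding: each record is encrypted under its own key.
    Destroying the key renders the ciphertext permanently unreadable,
    even if copies of the ciphertext survive in backups."""

    def __init__(self):
        self._keys = {}     # record_id -> key (the only secret)
        self._cipher = {}   # record_id -> ciphertext (may be backed up)

    def put(self, record_id: str, plaintext: bytes) -> None:
        key = secrets.token_bytes(32)
        self._keys[record_id] = key
        stream = _keystream(key, len(plaintext))
        self._cipher[record_id] = bytes(a ^ b for a, b in zip(plaintext, stream))

    def get(self, record_id: str) -> bytes:
        key = self._keys[record_id]   # raises KeyError once shredded
        ct = self._cipher[record_id]
        stream = _keystream(key, len(ct))
        return bytes(a ^ b for a, b in zip(ct, stream))

    def shred(self, record_id: str) -> str:
        """Delete the key and return a destruction receipt: a hash of
        the shredded key that can be logged publicly as evidence."""
        key = self._keys.pop(record_id)
        return hashlib.sha256(b"shredded:" + key).hexdigest()
```

The design point is that deletion becomes a single, auditable act on the key store rather than an unverifiable promise about every replica of the data.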

Capability Restriction as a Design Principle

Rather than building maximally capable systems and then trying to restrict harmful uses, a proactive company designs capability limitations into the architecture from the outset. This means making deliberate decisions not to build certain things, not because they are technically impossible, but because they are ethically indefensible. A company that refuses to develop real-time mass surveillance tools, even when the market demands them, is exercising a form of resistance that no amount of post hoc ethics review can replicate.

Distributed Architecture Over Centralised Control

A company committed to proactive resistance favours federated and distributed architectures wherever possible, ensuring that no single entity has a monopoly on the data or decision-making power of its systems. This is the technological equivalent of separation of powers: designing systems so that control cannot be concentrated even if someone wants to concentrate it.

Organisational Commitments: Restructuring Power Internally

Adversarial Ethics Teams With Veto Power

Most corporate ethics teams are advisory. They can raise concerns, write reports, and make recommendations, but they cannot stop a product launch. A proactive company creates ethics teams with genuine veto authority over product decisions: not just the power to advise, but the power to block. This team is structurally independent from commercial leadership, with its own reporting line and protected employment status, functioning more like an internal judiciary than a consultancy.

Radical Transparency About Failures

Companies routinely publish what they have achieved. A proactively ethical company commits to publishing what has gone wrong: detailed incident reports, bias audits that include failures, and honest assessments of where systems have caused harm. This creates a public record that regulators and civil society can use to hold the company accountable, even if the regulatory environment is weak.

Relational Commitments: Changing the Relationship With Users and Society

Informed Consent That Is Actually Informed

Current consent mechanisms in technology are largely a legal fiction: lengthy terms of service designed more to provide legal cover than to inform. A proactive company invests in genuine informed consent, with plain language explanations of what its AI systems do, interactive demonstrations of how data is used, and meaningful opt-out mechanisms that do not degrade the user experience as a punishment for choosing privacy.

Community Benefit Agreements

Drawing from models in urban development and extractive industries, a proactive AI company could enter into formal community benefit agreements with the populations most affected by its technology: legally binding commitments specifying what benefits the community will receive, what harms the company commits to avoiding, and what remedies are available if commitments are broken.

Supporting the Ecosystem of Accountability

Perhaps the most radical step: actively funding and supporting the very organisations that hold it accountable. This means providing unrestricted funding to algorithmic auditing organisations, supporting open-source alternatives to its own products, funding AI literacy programmes, and contributing to the legal defence of whistleblowers, including those who blow the whistle on the company itself.

Contractual and Legal Commitments

Anti-Surveillance Clauses in Client Contracts

A proactive company includes binding contractual clauses prohibiting the use of its technology for mass surveillance, political suppression, or social scoring. Critically, these clauses include monitoring and enforcement mechanisms, not just prohibitions, including the right to audit client usage and the obligation to terminate relationships where violations are found.

Sunset Clauses and Reversibility

AI systems, once deployed, tend to become permanent infrastructure. A proactive company builds sunset clauses into its deployments: mandatory review periods after which systems must be re-authorised, with the default being discontinuation rather than continuation.
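The "default to discontinuation" rule can be made concrete as a fail-closed check in the deployment layer itself, as in this hypothetical sketch: a deployment simply stops being active once its review date passes, unless someone makes an explicit, dated renewal decision.

```python
from datetime import date, timedelta

class Deployment:
    """A deployment that expires unless explicitly re-authorised.
    The default on expiry is discontinuation, not renewal."""

    def __init__(self, name: str, start: date, review_period_days: int):
        self.name = name
        self.expires = start + timedelta(days=review_period_days)

    def reauthorise(self, decision_date: date, review_period_days: int) -> None:
        """Renewal requires an explicit, dated decision by a human."""
        self.expires = decision_date + timedelta(days=review_period_days)

    def is_active(self, today: date) -> bool:
        # Fail closed: anything past its review date is off.
        return today < self.expires

d = Deployment("case-triage-model", date(2026, 1, 1), review_period_days=180)
```

The important inversion is in `is_active`: inaction leads to shutdown, so permanence must be continually earned rather than passively accrued.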

Legal Solidarity Mechanisms

Individual employees who identify wrongdoing face enormous personal risk. A proactive company establishes funded legal defence trusts for employees who raise ethical concerns, guaranteed employment protection that goes beyond statutory minimums, and a culture in which ethical objection is treated as a form of excellence rather than insubordination.

The Honest Difficulty

None of these measures are easy, nor are they necessarily a complete answer. Capability restriction means forgoing revenue. Veto-wielding ethics teams mean slower product development. Funding your own critics means amplifying voices that may damage your brand. Anti-surveillance clauses mean losing government contracts.

The East India Company was extraordinarily profitable in the short term. It was also, in the long term, catastrophic for the populations it governed, for the global order, and ultimately for itself. It did, however, demonstrate to governments that a chartered company could serve them as an instrument of power. The companies that build AI today are making choices that will shape the world for generations.


Conclusion: The Window Is Now

The arc from chartered companies to artificial intelligence is not a metaphor chosen for dramatic effect. It is a structural parallel that illuminates the dynamics of power, control, and resistance that recur whenever a transformative technology meets human ambition.

The chartered companies taught us that control is achieved not through force alone, but through infrastructure dependency. That the institutions meant to protect people can be captured by the powers they were meant to regulate. That information asymmetry is the most durable foundation of unjust power. And that, ultimately, no system of control is permanent, but the human cost of waiting for it to collapse on its own is measured in generations.

AI presents the same dynamics at greater speed and greater scale. The surveillance is more total, the manipulation more precise, the weapons more autonomous, and the infrastructure dependency more invisible. But the mechanisms of resistance also scale: encryption is stronger than any lock the East India Company ever faced, open-source distribution is faster than any printing press, and the potential for global coordination exceeds anything available to previous resistance movements.

The critical variable is time. Every historical example shows that systems of control become exponentially harder to dismantle once embedded in the infrastructure of daily life. The people who most effectively resisted the chartered companies were those who understood the mechanisms early. The same will be true of AI.

For companies building these systems, the choice is stark. They can follow the East India Company model: maximise capability, capture regulation, create dependency, and leave the consequences for future generations. Or they can choose proactive resistance: building limitation, transparency, and accountability into the very architecture of what they create.

The soundest ethics advice right now is to get ahead of the pressure: be clear, with your board, about what matters and how you will hold your course before that clarity is tested. To date, Australia has chosen a passive, wait-and-see posture rather than developing a proactive regulatory framework, whether for guardrails or for opportunity development.

History does not record many examples of powerful institutions voluntarily constraining themselves. But history also did not anticipate that the most powerful tools ever created would be built by people who could see, with unusual clarity, exactly what those tools could become. That awareness, if it is acted upon, is the difference between this moment and every one that came before.


Case Study: The Anthropic–Pentagon Standoff (February–March 2026)

Since the original draft of this document was completed in February 2026, the theoretical framework it describes has been tested in real time by a landmark confrontation between an AI company and the most powerful government on earth. The facts, confirmed from multiple primary sources, are set out below.

The Established Facts

In July 2025, the US Department of Defense awarded contracts worth up to $200 million each to four AI companies: Anthropic, OpenAI, Google DeepMind, and xAI. Anthropic’s Claude was the first frontier AI model cleared for use on the Pentagon’s classified networks. This contract was signed by the DoD with full awareness of Anthropic’s usage restrictions.

In February 2026, the Pentagon sought to renegotiate the contract terms to permit use of Claude for ‘all lawful purposes’ without restriction. Anthropic refused, maintaining two specific safeguards: no use for mass domestic surveillance of American citizens, and no deployment in fully autonomous weapons systems without human oversight.

On 24 February 2026, US Defense Secretary Pete Hegseth met with CEO Dario Amodei and issued a formal ultimatum: accept unrestricted use by 5:01 PM on Friday 27 February, or face consequences.

On 26 February, Amodei issued a public statement: ‘We cannot in good conscience accede to their request.’ On 27 February, President Trump directed all federal agencies to cease using Anthropic’s products. Hegseth formally designated Anthropic a ‘supply chain risk to national security’ under Section 3252 of Title 10 of the US Code, a designation historically reserved for foreign adversaries such as Huawei. This was the first time this designation had been applied to an American company.

Hours after Anthropic’s blacklisting, OpenAI announced a new Pentagon contract containing functionally identical safety provisions: no mass domestic surveillance, no autonomous weapons, and an additional red line against high-stakes automated decisions such as social credit systems.

The Contradictions Amodei Identified

Amodei’s public statement identified a contradiction in the government’s position that is directly illustrative of the coercion dynamic this framework predicts: ‘Those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.’ The coercion was not hidden. It was the message.

What Happened Next: The Chilling Effect

The most significant consequence of the standoff is not what happened to Anthropic. It is what will not happen at other AI companies. Every general counsel at every AI company with government contracts received a clear lesson: resist publicly and be blacklisted; negotiate privately and be rewarded. The structural silence that followed from Google, Meta, and dozens of smaller firms is the chilling effect in action.

However, significant resistance also emerged. Dozens of researchers and scientists from OpenAI and Google filed an amicus brief in their personal capacities supporting Anthropic. Nearly 150 retired federal and state judges, appointed by both parties, filed an amicus brief supporting the lawsuit. Major tech industry groups filed separately. Microsoft and Google confirmed their ability to continue non-defence work with Anthropic. Anthropic’s Claude app surpassed ChatGPT in the iPhone App Store the day after the blacklisting.

Status as of 22 March 2026

Anthropic’s lawsuits are active. A hearing is scheduled for 24 March before Judge Rita Lin. The government has argued in a 40-page filing that the designation was a straightforward national security call, not retaliation for protected speech. New court filings from Anthropic allege that the Pentagon privately communicated the two sides were ‘nearly aligned’ a week after Trump declared the relationship concluded. The case is proceeding on both First Amendment and statutory authority grounds.

Whatever the legal outcome, the proof of concept that this framework identifies has been established: it is possible for a company to say no. That fact, once demonstrated, cannot be undone.


Bibliography

This bibliography is organised by category. It distinguishes between primary sources (original texts and documents), scholarly works, journalistic and analytical works, and primary sources on the Anthropic–Pentagon standoff. Works are cited in author-date format.

A. Primary Historical Sources

  • East India Company Charter (1600). Royal Charter granted by Queen Elizabeth I to the Governor and Company of Merchants of London Trading into the East Indies, 31 December 1600.
  • VOC Charter (1602). Charter of the Vereenigde Oostindische Compagnie granted by the States General of the Netherlands, 20 March 1602.
  • Joint Stock Companies Act 1844 (UK). 7 & 8 Vict. c. 110.
  • Limited Liability Act 1855 (UK). 18 & 19 Vict. c. 133.
  • Bubble Act 1720 (UK). 6 Geo. I. c. 18. Prohibited the formation of joint-stock companies without royal charter; repealed 1825.

B. Scholarly Works on Digital Colonialism and Surveillance Capitalism

  • Birhane, A. (2020). ‘Algorithmic Colonisation of Africa.’ SCRIPTed: A Journal of Law, Technology and Society, 17(2), 389–409. doi: 10.2966/scrip.170220.389.
  • Couldry, N. & Mejías, U.A. (2019). The Costs of Connection: How Data Is Colonising Human Life and Appropriating It for Capitalism. Stanford University Press.
  • Couldry, N. & Mejías, U.A. (2024). Data Grab: The New Colonialism of Big Data and How to Fight Back. University of Chicago Press / Penguin.
  • Lim, E. & Coleman, B. (2026). ‘Colonial Mapping of Advanced Automation: East India Company as AI.’ AoIR Selected Papers of Internet Research.
  • Petram, L. (2014). The World’s First Stock Exchange. Columbia University Press.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
  • Zuboff, S. (2015). ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.’ Journal of Information Technology, 30(1), 75–89.

C. Journalistic and Analytical Works

  • Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press, published 20 May 2025. New York Times bestseller; National Book Critics Circle Award finalist. Hao’s focus is on OpenAI’s internal culture, labour extraction, and environmental costs; the present document’s distinct contribution is its governance ethics and proactive resistance framework.
  • Stanger, A. (2025). ‘The AI Raj: How Tech Giants Are Recolonising Power.’ Bulletin of the Atomic Scientists, 15 September 2025.
  • IT Pro (2021). ‘Is Big Tech the New East India Company?’ October 2021.
  • The Week (2025). ‘From East India Company to Big Tech: Why Corporations Keep Seeking Colonies.’ August 2025.
  • Data-Pop Alliance (2025). ‘The Return of East India Companies: AI, Africa and the New Digital Colonialism.’

D. Primary Sources: The Anthropic–Pentagon Standoff (February–March 2026)

  • Amodei, D. (2026). Public statement by Anthropic CEO, 26 February 2026: ‘We cannot in good conscience accede to their request.’ Available at anthropic.com.
  • Axios (2026a). ‘Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards.’ 24 February 2026.
  • Axios (2026b). ‘Pentagon approves OpenAI safety red lines after dumping Anthropic.’ 27 February 2026.
  • Axios (2026c). ‘Anthropic sues Pentagon over rare supply chain risk label.’ 9 March 2026.
  • CNBC (2026a). ‘Anthropic CEO Amodei says Pentagon’s threats do not change our position.’ 26 February 2026.
  • CNBC (2026b). ‘OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic.’ 27 February 2026.
  • CNBC (2026c). ‘Anthropic seeks appeals court stay of Pentagon supply-chain risk designation.’ 12 March 2026.
  • CNN (2026a). ‘Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails.’ 24 February 2026.
  • CNN (2026b). ‘Anthropic sues Pentagon over supply chain risk designation.’ 9 March 2026.
  • CNN (2026c). ‘Former judges side with Anthropic and raise concerns about Pentagon’s use of supply chain risk label.’ 17 March 2026.
  • Council on Foreign Relations (2026). ‘Anthropic and Pentagon Clash.’ 27 February 2026.
  • Fortune (2026). ‘OpenAI sweeps in to snag Pentagon contract after Anthropic labeled supply chain risk.’ 28 February 2026.
  • NPR (2026a). ‘Hegseth threatens to blacklist Anthropic over woke AI concerns.’ 24 February 2026.
  • NPR (2026b). ‘OpenAI announces Pentagon deal after Trump bans Anthropic.’ 27–28 February 2026.
  • NPR (2026c). ‘Anthropic sues Trump administration over blacklisting decision.’ 9 March 2026.
  • Syracuse Law Review (2026). ‘When AI Ethics Collide with National Security: Anthropic Challenges Pentagon Blacklisting.’ March 2026. Includes formal legal citation to Complaint, Anthropic PBC v. US Department of War, No. 3:26-cv-01996 (N.D. Cal. Mar. 9, 2026).
  • TechCrunch (2026). ‘New court filing reveals Pentagon told Anthropic the two sides were nearly aligned.’ 20 March 2026.
  • Tech Policy Press (2026). ‘A Timeline of the Anthropic–Pentagon Dispute.’ March 2026.
  • Time (2026). ‘How Anthropic Became the Most Disruptive Company in the World.’ 19 March 2026.
  • Trump, D.J. (2026). Truth Social post directing federal agencies to cease use of Anthropic technology, 27 February 2026, 3:47 PM.

E. AI Governance, Resistance, and Ethics

  • AI Now Institute. Annual AI Index reports and research on algorithmic accountability. Available at ainowinstitute.org.
  • AlgorithmWatch. Ongoing publication on algorithmic transparency and auditing. Available at algorithmwatch.org.
  • Freedom of the Press Foundation. SecureDrop platform for anonymous document submission. Available at freedom.press.
  • OpenMined. Open-source privacy-preserving AI research. Available at openmined.org.
  • Sabelo Mhlambi et al. (2022). The AI Decolonial Manyfesto. Provides a Global South perspective on AI governance and decolonisation.

© 2026 Brakten Pty Ltd. All rights reserved. This content was conceived, directed, and authored by Phillip Volkofsky in collaboration with artificial intelligence tools. The intellectual contribution, including the originating ideas, questions, arguments, analytical frameworks, and editorial direction, is the product of human thought and inquiry. AI was employed as a collaborative instrument in the drafting, structuring, and articulation of this work. The author asserts moral and intellectual ownership of the ideas, arguments, and original contributions contained herein.
