On February 28, 2026, your favourite AI assistant became a defence contractor. Here's what nobody's reading.

On February 28, 2026, OpenAI CEO Sam Altman announced that his company had reached an agreement with the U.S. Department of Defense to deploy its AI models on the Pentagon's classified network. The deal came hours after President Trump ordered all federal agencies to stop using Anthropic's technology, and Defence Secretary Pete Hegseth designated Anthropic, the company that had actually been running AI on the classified network, a "supply chain risk to national security."
Anthropic's offence was wanting contractual assurance its models wouldn't be used for autonomous weapons or mass surveillance of Americans. The Pentagon called that stance "philosophical" and "woke." Then OpenAI stepped in, claiming it had the same red lines, and struck a deal within hours.
Altman admitted the deal was rushed, telling followers it had generated significant backlash against OpenAI (enough that Anthropic's Claude briefly overtook ChatGPT on the Apple App Store the next day). His justification: OpenAI wanted to de-escalate the standoff between the Pentagon and the AI industry. Whether that reads as principled bridge-building or strategic opportunism depends on which sentence you stop at.
Six weeks. Ads in your conversations. "Safely" removed from the mission. $110 billion raised. Pentagon classified network. That is one company's February.
The contract includes three explicit red lines: no mass domestic surveillance, no autonomous weapons, no high-stakes automated decisions like social credit systems. Deployment is cloud-only, meaning the models aren't installed on drones or edge hardware. OpenAI retains full control of its safety stack, and cleared safety researchers remain in the loop for classified workflows.
These are real protections. Cloud-only deployment is meaningful. In-the-loop safety researchers provide oversight that previous AI defence contracts didn't have. OpenAI publicly opposing Anthropic's supply-chain risk designation was worth noting. They didn't have to say that.
The asterisk: these are contractual protections, not statutory ones. They exist because OpenAI negotiated them, not because a law requires them. If a future administration wants to renegotiate, or if operational pressures make the safety stack inconvenient, the enforcement mechanism is a contract dispute, not a criminal offence. As critics have noted, the legal scaffolding still leaves room for broad data collection.
Or, to put it the way the carousel would: the Pentagon says it won't spy on you, build weapons with your data, or deny you social services. But they're super trustworthy, right?
Three weeks before the Pentagon deal, OpenAI launched advertising in ChatGPT. Ads appear at the bottom of responses for free and Go tier users. They're contextually matched to your conversation. Ad personalisation is turned on by default. OpenAI says conversations aren't shared with advertisers and ads don't influence responses.
But here's what sits uncomfortably next to that: the same conversations that are now generating ad revenue also share infrastructure with a classified military network. Not the same servers, but the same company, the same model architecture, the same safety stack. Your recipe query and a military intelligence triage run through different instances of the same system built by the same engineers under the same corporate umbrella. Altman said the ads were necessary because "a lot of people want to use a lot of AI and don't want to pay." What he didn't say: the same month the ads launched, the company's infrastructure entered a classified military environment.
OpenAI's mission has been rewritten six times in nine years. The original, filed in 2016: "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The current version: "to ensure that artificial general intelligence benefits all of humanity."
Each revision removed a constraint. "Unconstrained by financial return" went first. "Openly share our plans" went next. In 2024, "safely" was removed. The same year the company reversed its ban on military applications. The same year it began pursuing the Pentagon contract it signed on a Friday night in February 2026.
OpenAI has also reportedly disbanded its mission alignment team. The word "safely" is gone from the statement; the team responsible for holding the company to the mission is gone too. What's left: an $840 billion for-profit public benefit corporation with ads, a Pentagon deal, a planned IPO, and no published environmental data.
OpenAI has never published a sustainability report. No Scope 1, 2, or 3 emissions data (Scope 1 covers what a company produces directly, like its own factories; Scope 2 covers the energy it buys; Scope 3 covers everything else in the supply chain, and is usually the biggest and hardest to measure). No net-zero target. No total energy consumption data. The company is one of the most opaque major technology companies in the world on environmental metrics.
What we know from external research: ChatGPT's estimated water consumption, using the lifecycle methodology from UC Riverside researchers, is approximately 7.5 million litres per day. That's enough to fill three Olympic swimming pools. Every day. AI systems collectively were estimated to produce 32.6 to 79.7 million tonnes of CO₂ in 2025, roughly comparable to New York City's annual emissions. Microsoft, which provides OpenAI's primary compute infrastructure, has seen its emissions rise 29% since 2020.
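Those figures survive a quick back-of-envelope check. Here's a minimal sketch in Python, assuming the standard ~2.5-million-litre Olympic pool volume; the daily water figure and the CO₂ range are the external estimates cited above, not OpenAI disclosures.

```python
# Back-of-envelope check of the external estimates above.
# Assumption: an Olympic pool holds ~2.5 million litres (50 m x 25 m x 2 m).

DAILY_WATER_LITRES = 7_500_000    # UC Riverside-derived estimate for ChatGPT
OLYMPIC_POOL_LITRES = 2_500_000   # assumed standard pool volume

pools_per_day = DAILY_WATER_LITRES / OLYMPIC_POOL_LITRES
litres_per_year = DAILY_WATER_LITRES * 365

co2_low_mt, co2_high_mt = 32.6, 79.7            # 2025 estimate range, all AI systems
co2_midpoint_mt = (co2_low_mt + co2_high_mt) / 2

print(f"{pools_per_day:.0f} Olympic pools per day")            # -> 3
print(f"{litres_per_year / 1e9:.2f} billion litres per year")  # -> 2.74
print(f"~{co2_midpoint_mt:.0f} Mt CO2 midpoint of range")      # -> ~56
```

The midpoint of that CO₂ range lands around 56 million tonnes, which is why the New York City comparison holds.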
Now add classified military compute to that footprint. Compute that, by definition, cannot be publicly audited.
Amazon invested $50 billion. OpenAI committed to spend $100 billion back on Amazon Web Services over eight years. Nvidia invested $30 billion. OpenAI will deploy 5 gigawatts of Nvidia's Vera Rubin hardware. The investors are also the suppliers. The money goes out one door and comes back through another. This isn't illegal or unusual in tech, but it does mean the "value" of OpenAI is partly a function of its own spending commitments to the people valuing it.
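To make the round trip concrete, here's a minimal sketch using only the Amazon figures above; the even eight-year split is an illustrative assumption, since the actual payment schedule hasn't been disclosed.

```python
# Sketch of the Amazon round trip described above.
# Assumption: the $100B AWS commitment is spread evenly over eight years;
# the real payment schedule is not public.

amazon_investment_bn = 50    # cash in: Amazon's equity investment
aws_commitment_bn = 100      # cash out: OpenAI's committed AWS spend
years = 8

annual_spend_bn = aws_commitment_bn / years
committed_minus_invested_bn = aws_commitment_bn - amazon_investment_bn

print(f"~${annual_spend_bn:.1f}B/year flowing back to AWS")  # -> $12.5B/year
print(f"${committed_minus_invested_bn}B more committed spend than capital invested")
```

On those assumptions, every dollar Amazon put in comes back as two dollars of committed AWS revenue, which is exactly the circularity the valuation question turns on.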
ChatGPT is still the app you used this morning. You asked it about your weird dream. It showed you an ad afterwards. The prompts still work the same way.
But the company behind it has changed. A nonprofit research lab became an $840 billion for-profit corporation with a classified military contract, ads in your conversations, a planned IPO, and no published environmental data. In eleven years. The word "safely" was in the mission statement. Then it wasn't. The company banned military applications. Then it didn't. The CEO said the optics don't look good. Then he signed anyway. On a Friday night.
If they don't care about you or the planet, maybe you should read the fine print.
Ready to put this into practice?
Impact investing for $10/month. Five themes. One app. ASIC registered.