
the fine print

chatgpt*
the clearance

On February 28, 2026, your favourite AI assistant became a defence contractor. Here's what nobody's reading.

series: the fine print · published:
ChatGPT editorial illustration

what happened on friday night

On February 28, 2026, OpenAI CEO Sam Altman announced that his company had reached an agreement with the U.S. Department of Defense to deploy its AI models on the Pentagon's classified network. The deal came hours after President Trump ordered all federal agencies to stop using Anthropic's technology and Defence Secretary Pete Hegseth designated Anthropic, the company that had actually been running AI on the classified network, a "supply chain risk to national security."

Anthropic's offence was wanting contractual assurance its models wouldn't be used for autonomous weapons or mass surveillance of Americans. The Pentagon called that stance "philosophical" and "woke." Then OpenAI stepped in, claiming it had the same red lines, and struck a deal within hours.

Altman admitted the deal was rushed, telling followers it had generated significant backlash against OpenAI (enough that Anthropic's Claude briefly overtook ChatGPT on the Apple App Store the next day). His justification: OpenAI wanted to de-escalate the standoff between the Pentagon and the AI industry. Whether that reads as principled bridge-building or strategic opportunism depends on which sentence you stop at.

the timeline nobody asked for

jan 16 2026
OpenAI announces it will start testing ads in ChatGPT for free and Go tier users in the US
feb 9 2026
Ads officially launch in ChatGPT. ad personalization turned on by default. ~$60 CPM. the day after Anthropic ran a Super Bowl ad mocking AI companies for putting ads in chatbots
feb 13 2026
A scholar reports that "safely" was removed from OpenAI's mission statement in its 2024 IRS filing. mission has been rewritten 6 times in 9 years
feb 27 2026
OpenAI closes $110B funding round. $840B post-money. Amazon $50B, Nvidia $30B, SoftBank $30B. largest private raise in history
feb 27 2026
Trump orders all agencies to stop using Anthropic. Hegseth designates Anthropic a supply-chain risk. same day as the funding round
feb 28 2026
Altman announces Pentagon classified network deal. friday night. same night U.S. and Israeli strikes on Iran begin
mar 1 2026
Altman admits deal was "definitely rushed" and "the optics don't look good." Claude overtakes ChatGPT on Apple App Store

Six weeks. Ads on your conversations. "Safely" removed from the mission. $110 billion raised. Pentagon classified network. That is one company's February.

what the deal actually says

The contract includes three explicit red lines: no mass domestic surveillance, no autonomous weapons, no high-stakes automated decisions like social credit systems. Deployment is cloud-only, meaning the models aren't installed on drones or edge hardware. OpenAI retains full control of its safety stack, and cleared safety researchers remain in the loop for classified workflows.

These are real protections. Cloud-only deployment is meaningful. In-the-loop safety researchers provide oversight that previous AI defence contracts didn't have. OpenAI publicly opposing Anthropic's supply-chain risk designation is worth noting. They didn't have to say that.

The asterisk: these are contractual protections, not statutory ones. They exist because OpenAI negotiated them, not because a law requires them. If a future administration wants to renegotiate, or if operational pressures make the safety stack inconvenient, the enforcement mechanism is a contract dispute, not a criminal offence. As critics have noted, the legal scaffolding still leaves room for broad data collection.

Or, to put it in language the carousel would: the Pentagon says it won't spy on you, build weapons with your data, or deny you social services. And they're super trustworthy, right?

the ads in your conversations

Three weeks before the Pentagon deal, OpenAI launched advertising in ChatGPT. Ads appear at the bottom of responses for free and Go tier users. They're contextually matched to your conversation. Ad personalization is turned on by default. OpenAI says conversations aren't shared with advertisers and ads don't influence responses.

But here's what sits uncomfortably next to that: the same conversations that are now generating ad revenue also share infrastructure with a classified military network. Not the same servers, but the same company, the same model architecture, the same safety stack. Your recipe query and a military intelligence triage run through different instances of the same system built by the same engineers under the same corporate umbrella. Altman said the ads were necessary because "a lot of people want to use a lot of AI and don't want to pay." What he didn't say: the same month the ads launched, the company's infrastructure entered a classified military environment.

the mission statement that keeps changing

OpenAI's mission has been rewritten six times in nine years. The original, filed in 2016: "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The current version: "to ensure that artificial general intelligence benefits all of humanity."

Each revision removed a constraint. "Unconstrained by financial return" went first. "Openly share our plans" went next. In 2024, "safely" was removed. The same year the company reversed its ban on military applications. The same year it began pursuing the Pentagon contract it signed on a Friday night in February 2026.

OpenAI has also reportedly disbanded its mission alignment team. The word is gone. The team responsible for ensuring alignment with the mission is gone. What's left: an $840 billion for-profit public benefit corporation with ads, a Pentagon deal, a planned IPO, and no published environmental data.

the water and the silence

OpenAI has never published a sustainability report. No Scope 1, 2, or 3 emissions data (the three categories of carbon accounting: Scope 1 is what a company produces directly, like its own facilities; Scope 2 is from the energy it buys; Scope 3 is everything else in the supply chain, usually the biggest and hardest to measure). No net-zero target. No total energy consumption data. The company is one of the most opaque major technology companies in the world on environmental metrics.

What we know from external research: ChatGPT's estimated daily water consumption, using the lifecycle methodology from UC Riverside researchers, is approximately 7.5 million litres per day. That's enough to fill three Olympic swimming pools. Every day. AI systems collectively were estimated to produce 32.6 to 79.7 million tonnes of CO₂ in 2025, roughly comparable to New York City's annual emissions. Microsoft, which provides OpenAI's primary compute infrastructure, has seen its emissions rise 29% since 2020.
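The pool comparison above is straightforward to check. A minimal sketch, assuming the standard Olympic pool volume (50 m × 25 m × 2 m = 2,500 m³, i.e. 2.5 million litres); the daily figure is the UC Riverside-based estimate cited above:

```python
# Sanity check on the article's water figures.
# DAILY_WATER_LITRES is the cited external estimate, not OpenAI disclosure.
DAILY_WATER_LITRES = 7_500_000    # ~7.5 million litres/day
OLYMPIC_POOL_LITRES = 2_500_000   # 50 m x 25 m x 2 m = 2,500 m^3

pools_per_day = DAILY_WATER_LITRES / OLYMPIC_POOL_LITRES
annual_litres = DAILY_WATER_LITRES * 365

print(pools_per_day)        # 3.0 pools per day
print(annual_litres / 1e9)  # 2.7375 -> roughly 2.7 billion litres per year
```

Three pools a day compounds to well over two billion litres a year, none of it reported by the company itself.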

Now add classified military compute to that footprint. Compute that, by definition, cannot be publicly audited.

the circular money

Amazon invested $50 billion. OpenAI committed to spend $100 billion back on Amazon Web Services over eight years. Nvidia invested $30 billion. OpenAI will deploy 5 gigawatts of Nvidia's Vera Rubin hardware. The investors are also the suppliers. The money goes out one door and comes back through another. This isn't illegal or unusual in tech, but it does mean the "value" of OpenAI is partly a function of its own spending commitments to the people valuing it.
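The round-trip in the Amazon leg is easy to make concrete. A rough sketch using only the figures in the paragraph above; the even per-year split is simple division for illustration, not a claim about the actual contract schedule:

```python
# Net-flow sketch of the circular financing described above.
# All dollar figures are the article's; per-year split is illustrative.
amazon_investment_in = 50e9   # Amazon -> OpenAI equity investment
aws_commitment_out = 100e9    # OpenAI -> AWS spend commitment
years = 8                     # commitment period

net_back_to_amazon = aws_commitment_out - amazon_investment_in
per_year_spend = aws_commitment_out / years

print(net_back_to_amazon / 1e9)  # 50.0  (billion USD net flowing back to Amazon)
print(per_year_spend / 1e9)      # 12.5  (billion USD/year committed to AWS)
```

On these numbers, Amazon's $50 billion "investment" is matched by a $100 billion spend commitment flowing back: the investor doubles its money back in guaranteed revenue before any equity return.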

what this means

ChatGPT is still the app you used this morning. You asked it about your weird dream. It showed you an ad afterwards. The prompts still work the same way.

But the company behind it has changed. A nonprofit research lab became an $840 billion for-profit corporation with a classified military contract, ads in your conversations, a planned IPO, and no published environmental data. In eleven years. The word "safely" was in the mission statement. Then it wasn't. The company banned military applications. Then it didn't. The CEO said the optics don't look good. Then he signed anyway. On a Friday night.

If they don't care about you or the planet, maybe you should read the fine print.

disclaimer: this content is produced by inaam Impact Investments Pty Ltd. inaam is a Corporate Authorised Representative (CAR No. 1318254) of Non Correlated Advisors Pty Ltd (ABN 61 158 314 982, AFSL 430126) (NCA). Primary Securities Ltd (ABN 96 089 812 635, AFSL 224107) (Primary) is the Responsible Entity of the inaam Impact Investments Fund (Fund). this article provides general information only. it does not constitute financial advice, a recommendation to buy, sell, or hold any financial product, or an endorsement of any investment strategy. past performance is not indicative of future results. all claims are sourced and hyperlinked. where data is estimated or ranges are cited, the methodology and source are disclosed. readers should conduct their own research before making investment decisions. inaam is not affiliated with OpenAI, Microsoft, Anthropic, Amazon, Nvidia, SoftBank, or any entity discussed in this report.

