First Cyber, Then Chips: The OpenAI vs Anthropic Rivalry Has Moved to the Hardware Level
A quick look at OpenAI’s 18‑Billion‑Dollar Chip Hangover and Anthropic’s UK Side Quest: Fractile and the Cheap‑Silicon Gambit
As much as I’d like to focus on other corners of the AI world, the past few weeks keep circling back to the same two labs, no matter which stack you care about. It’s understandable there’d be parallels, but this year OpenAI and Anthropic feel like they’re almost shadowing each other. One week Anthropic is in the headlines with a scary‑good cyber model; a few weeks later, OpenAI shows up with its own cyber‑branded model. Then OpenAI announces a giant custom‑chip deal, and soon after, Anthropic is suddenly talking to a UK chip startup. It’s the same rivalry, but the battleground has shifted all the way down to the hardware now.
OpenAI and Anthropic keep stepping on the same stages: first cyber models, now custom chips.
Let’s start with cyber, because that’s where this round really kicked off.
Anthropic introduced a model called Claude Mythos – basically, an AI that’s extremely good at hacking‑style tasks: finding software bugs, turning them into exploits, chaining them into real attacks. It was strong enough that Anthropic said, “we’re not putting this on the public internet.” Instead, they created Project Glasswing, a small, invite‑only programme where big tech firms, banks, and critical‑infrastructure players can test Mythos under tight controls. The message was, “we can do very powerful offensive security stuff, but we’re going to keep it behind glass for now.”
Now that got a lot of attention from governments and CISOs. What happened not long after, though, was that OpenAI rolled out its own cyber‑focused model, GPT‑5.4‑Cyber, along with an updated “AI for security teams” story and its own restricted frontier. GPT‑5.4‑Cyber covers the same basic capability class – using AI to spot vulnerabilities faster, simulate attackers, and help defenders harden systems – but with a different vibe. Anthropic is running a tiny, heavily locked‑down pilot.
OpenAI is saying “we’ve turned the risk dial down enough that lots of vetted security teams can use this at scale.”
Anthropic went loud on “too powerful to release” cyber with Mythos; OpenAI replied with GPT‑5.4‑Cyber for thousands of vetted defenders.
So on the cyber side you’ve already got a clear contrast: Anthropic as the cautious lab that keeps its scariest model in a glass box, and OpenAI as the lab that wants to get similar power into more hands – with rules, but still broadly deployed.
Now take that same dynamic and drop it one layer deeper, into chips.
Last year OpenAI announced a big partnership with Broadcom to build custom AI chips.
Instead of just renting whatever Nvidia will sell them, they want their own hardware, tuned to their workloads, at massive scale. The long‑term plan is eye‑watering:
up to 10 gigawatts of these chips in data centres by the end of the decade.
For rough intuition, that’s like wiring several nuclear‑reactor‑equivalents of power into OpenAI‑branded compute. If it works, they get a dedicated “compute rail” and more control over cost, instead of paying a permanent Nvidia tax.
The problem is the bill.
The first phase of this custom‑chip push is about 18 billion dollars, and recent reporting says the financing isn’t fully sorted. Broadcom has apparently been asked to put up most of the manufacturing money, but it only wants to do that if Microsoft or other big buyers promise to take roughly 40% of the chips once they exist. Meanwhile, Microsoft has adjusted its OpenAI deal to stop some of the revenue sharing, which investors read as:
“we still like you, but we’re not bankrolling every part of your hardware habit.”
OpenAI tried to build its own private chip highway with Broadcom – and ran into an 18‑billion‑dollar funding wall.
So OpenAI now has a bit of a hangover moment. They’ve told the world they’re building this giant custom chip rail, but they still need to plug a double‑digit‑billion gap to actually get the first wave of silicon out of the fab.
Right as that story breaks, Anthropic’s name shows up in chip coverage too.
Anthropic is reportedly in early talks with Fractile, a London‑based startup building new AI inference chips.
AI inference chips are specialised processors designed to run already‑trained AI models quickly and cheaply, especially in data centres. They’re optimized for serving lots of user requests rather than training the model in the first place.
Fractile’s whole pitch is “run big models faster and cheaper than today’s GPUs” by changing how memory and compute are laid out on the chip. If Anthropic signs on, Fractile becomes their fourth hardware lane:
they already use Nvidia, Google TPUs and Amazon’s custom accelerators.
And again, the styles are very different. OpenAI is trying to run one giant, vertically integrated programme with Broadcom and is having to wrestle with the financing in public. Anthropic isn’t trying to buy Fractile: VCs fund most of the chip effort, and Anthropic would line up as an anchor customer once the hardware is ready. It’s a much lighter way to get access to specialised silicon.
Risk looks different too.
OpenAI wants its own private motorway, and if they pull it off, they get a ton of dedicated compute and more predictable costs. But if demand wobbles or the economics change, they’re tied to a very specific, very expensive hardware bet. Anthropic, on the other hand, seems to be collecting train tickets instead of building a motorway of its own: it can move workloads between Nvidia, Google, Amazon and maybe Fractile, depending on who’s cheaper or less constrained that quarter.
All of this plays out against a backdrop of real tension between the two labs. Anthropic’s founders came out of OpenAI. The two sides have argued about safety, speed of deployment, and honesty with the public. The rivalry now shows up everywhere: research releases, Super Bowl ads, government deals and, increasingly, who can lock down the next big chunk of compute.
If you zoom out, the pattern is pretty clear: different stories, same goal.
[ In this newsletter you get sharp, unfiltered short essays; for full‑length, deep‑dive analysis on AI, subscribe to our companion publication, Intelligent Founder AI. ]
Both labs are scrambling to secure enough affordable compute so they’re not forever paying the Nvidia tax. If the model wars were just the trailer, the real movie is about the metal underneath. I’ll do a longer deep dive on AI chips soon – something I haven’t really unpacked yet – looking at the technical choices, the strategy, and Fractile’s angle in particular. It’s worth noting there’s effectively no product in the wild yet, and Fractile is already being talked about like a billion‑dollar company. And interestingly, they’re not alone.



