I recorded myself here so you can listen to it or read below:
I want to talk a little bit about Anthropic, because I think there is a lot to dig into right now.
So many news items are popping up. People are talking about Anthropic cracking down on third-party usage around Claude subscriptions, people are talking about lower effective limits during busy hours, people are frustrated online, and on the other side you can see OpenAI pushing Codex harder and harder for developers. Also, Claude Code source code leaked online, which gave people even more reason to look under the hood and talk about how Anthropic is building things.
There are a lot of things I want to cover here, so I’ll just lay out my thoughts as clearly as I can.
Start with the rate limits
I think this one is very interesting.
Bottom line, what I think is happening is simple: they do not have enough compute.
This we can all understand, especially during peak hours, when businesses need it, when users need it, when everyone wants a response right now. Claude became an operating system for many enterprises. So many professionals rely on Claude on a daily basis. You can see it online very easily, and you can see it inside companies as well. When Claude is down, everybody complains. And Claude has been down or degraded enough times lately that people are noticing it. Anthropic’s own docs also make clear that usage limits and capacity management are a real part of the product, not some side issue.
Why can their servers not handle it?
In my opinion, the reason is simple. Anthropic does not own the data center stack the way Google does. They rely on infrastructure from companies like Google and Amazon. That means every unit of compute has a real cost structure under it, and that matters a lot when demand explodes.
So this is the chain of reasons as I see it. Anthropic cannot fully handle the amount of demand they created. And why is demand so big? Because they are creating great models.
They are not only building good models. They are wrapping them very well
Anthropic is doing a very good job with the product. Not only the model itself. The product around the model.
And I think many people miss this.
One of the reasons people love Claude so much is not only raw intelligence. It is the way the model is wrapped. It feels good to use. It feels coherent. It feels like you are talking to something more intelligent, more whole, more human, more grounded. And after the Claude Code source leaked, even more people started talking about how much orchestration and product thinking sits around the model itself. (Axios)
They are combining things in a smart way. They are doing a very good job with product design.
Their real product is the model, yes. But the model is wrapped inside other products. The Claude subscription. The Claude web app. Claude Code. The whole environment.
When a person pays Anthropic every month, they are not paying only for the model in the abstract. They are paying to use the model inside the Anthropic environment.
That is important distinction.
The subscription was too good
And this is where things started breaking.
For a long time, the Claude subscription felt like an amazing deal.
Their models are expensive. Very expensive. But for a fixed monthly price, people felt they were getting a huge amount of value. Much more bang for the buck than expected.
And then the community did what communities always do.
It pushed this to the edge.
Anthropic probably wanted people to use the subscription for Anthropic products. The Claude web app. Claude Code. Their own environment. Their own package. Their own UX.
But once you give people terminal access, people start using it however they can.
That is natural.
You give us a terminal package, and we will try to turn it into whatever we want. We will route workflows through it. We will stretch it. We will reuse its authentication. We will treat it almost like personal API access if we can.
And now it is apparent that Anthropic does not want that anymore.
To me, that makes sense.
The problem is not that they are evil or stupid. The problem is economics.
I think they are trying to solve a problem that may not be solvable
Anthropic wants the subscription to be for Anthropic products.
Of course that makes sense.
But Claude Code is a terminal package. So people will always try to turn that terminal surface into something broader than intended.
Eventually, maybe they come up with a way to lock it down harder. Maybe they make it more web-based. Maybe they change how authentication works. Maybe they change how the local client interacts with their systems.
But then they run into another issue.
Claude Code also relies on the user's own computer for actual local execution. It is not like everything happens on Anthropic's side. Some of the value comes from the fact that it runs on the client side and only talks back to the model when needed.
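That split can be sketched as a generic agent loop. This is a hypothetical stand-in, not Claude Code's actual implementation, and every name in it is invented. The point it shows: only the model call touches provider compute, while command execution stays on the user's machine.

```python
# Hypothetical sketch of the client/server split in an agentic coding tool.
# Nothing here is Anthropic's real code; all names are invented.
import subprocess

def remote_model(transcript):
    """Stand-in for the server-side model call (the expensive hop)."""
    if not transcript:  # first turn: model asks the client to run a command
        return {"action": "run", "cmd": ["echo", "hello from the local shell"]}
    return {"action": "done", "answer": "finished"}

def agent_loop():
    transcript = []
    while True:
        step = remote_model(transcript)  # only this hop uses provider compute
        if step["action"] == "done":
            return step["answer"]
        result = subprocess.run(step["cmd"], capture_output=True, text=True)
        transcript.append(result.stdout)  # local execution, local filesystem

print(agent_loop())
```

Because the execution half lives on the client, the provider cannot simply move everything server-side without changing what the product is.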
So they have a real problem here. They want to stop people from turning the subscription into a non-subscription, into something closer to cheap API usage. But because of the way the product works, they cannot fully stop that without changing the nature of the product itself.
That is why I think they are trying to solve the subscription problem in a way that is fundamentally hard.
My conclusion: Anthropic is probably going down
I’ll say it clearly.
I think Anthropic is going down.
Not tomorrow. Not because the models are bad. Not because the researchers are weak. The opposite.
Their next models will probably be amazing. Anthropic has great researchers. They consistently build some of the best models at communication, the most human-like, the most grounded in common sense. Even now they are still releasing stronger flagship models like Claude Opus 4.6.
So no, this is not me saying they are not talented.
I am saying I do not see a sustainable business model there.
That is different.
In one year, two years, five years down the road, I do not see how this company becomes truly durable in the way people imagine.
Why?
Because they are not profitable, and I do not see how they become profitable.
What are they counting on?
Mostly API usage from other companies.
But their API is expensive, and they are competing against OpenAI and Google at the same time. OpenAI is getting more aggressive in coding. Google owns its compute. Anthropic does not. OpenAI is pushing Codex as a command center for agentic coding. Google, unlike Anthropic, does not have to pay a third-party markup on its own core compute stack.
So why would the market stay loyal forever at high prices?
Some people will pay for the best model, yes.
But the best model also costs the most compute.
That is the problem.
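The economics above can be put in one toy calculation. Every number below is invented purely for illustration, not Anthropic's actual prices or serving costs: a flat fee is profitable on light users, a loss on heavy ones, while the same heavy usage at metered API prices would cost far more. That gap is exactly the arbitrage a subscription invites.

```python
# Back-of-envelope sketch of the flat-fee problem. All numbers are
# hypothetical; real subscription prices and serving costs differ.
SUBSCRIPTION_PRICE = 20.00      # hypothetical flat monthly fee, USD
API_PRICE_PER_MTOK = 30.00      # hypothetical API price per million tokens
COMPUTE_COST_PER_MTOK = 10.00   # hypothetical serving cost per million tokens

def monthly_margin(tokens_millions: float) -> float:
    """Provider's profit on one flat-fee subscriber at a given usage level."""
    return SUBSCRIPTION_PRICE - tokens_millions * COMPUTE_COST_PER_MTOK

print(monthly_margin(0.5))        # light user: 15.0, profitable
print(monthly_margin(10.0))       # heavy user: -80.0, a monthly loss
print(10.0 * API_PRICE_PER_MTOK)  # same heavy usage via the API: 300.0
```

With numbers shaped like these, every heavy user who routes API-style workloads through the subscription turns a profitable customer into a subsidized one.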
Large language models, no business model
This is the way I see it right now.
Anthropic has large language models.
But I do not think they have a business model.
Because they cannot offer this value in a way that looks durable to me.
They are trying to fix the subscription problem, but I think it is unsolvable in a clean way. And once they fix it, or partially fix it, they may free up some compute, yes. They may even build better models because of that. But they may also push many users away at the same time.
And once users go back to OpenAI, or to Google, or to cheaper alternatives, it is very hard to pull them back.
So yes, Anthropic will continue to release great models.
But I think there is a real chance they go back to being more like a small lab with great research and less like a dominant product company.
What are they really counting on?
In my opinion, they are counting on non-technical people inside enterprises.
Product managers. Salespeople. Corporate users. Teams that love Claude and do not want to switch because it feels better, sounds better, helps them more, writes better, thinks better.
That is real advantage. I do not dismiss it.
But every time Anthropic releases an amazing model, everyone else gets more pressure to catch up.
And eventually, someone catches up.
OpenAI catches up.
Google catches up.
Chinese labs catch up.
Maybe not the same week. Maybe not the same month. But eventually, they do.
And once that happens, economics start mattering more than elegance.
This is why I agree more with compute than with hype
There is so much buzz around AI right now. AGI, revolution, all these things.
Maybe a lot of it happens.
Fine.
But even if we get AGI, even if we get robots helping with everything, even if digital labor explodes, physical materials still matter.
Actually, they matter more.
That is why I keep believing in atoms over bytes.
Data centers matter.
Energy matters.
Chips matter.
Land matters.
Infrastructure matters.
Because if you add more labor to the planet, whether through AI or robots or software agents, physical bottlenecks become more important, not less.
And this is why, when I look at Anthropic, I see a company with beautiful intellectual property that still depends on a physical world it does not own.
That is the key point.
Many companies can be very profitable without owning much physical stuff. But Anthropic is not selling a book. Not selling a course. Not selling software with a tiny marginal cost.
Their intellectual property requires heavy compute.
And if your intellectual property depends on hardware you do not control, your business is weaker than it looks.
Final point
The reason I am writing all this is not to dunk on Anthropic.
I think they are doing great work.
I am writing this because I think many people are getting distracted by the magic and forgetting the economics. Anthropic may keep shipping incredible models. But I do not think incredible models alone are enough here. To me, Anthropic looks more and more like a vehicle that transfers money from investors, subscribers, and enterprises to Google and Amazon.
That is a harsh way to say it.
But that is honestly how I see it.
And that is why my conclusion, at least right now, is very simple:
Anthropic has great large language models.
I do not think they have a business model.