The current state of AI - why OpenAI is losing its normie base to Anthropic, what Google might be quietly cooking up, and why the smartphone might become obsolete sooner than we think.
I have a few thoughts I'd love to share with you about the current state of AI.
In the corporate world, it's been all about Anthropic for the last couple of months. Claude has finally clicked. People are adopting it at a crazy rate, moving from the OpenAI app to the Anthropic app.
OpenAI is Losing Ground
First, OpenAI's models have become less and less large language models and more and more large coding models. When you talk with ChatGPT now, it no longer feels like talking with something that is actually chat-like. You're basically talking with something that writes super long responses and articles.
There's something about the Claude app - not Claude Code, but the Claude app itself - that just lets people see what the model is doing. The model operates inside a sandbox of sorts. They give their models access to tools within the app, and the models can use them in their own sequences. They're more generous about this, I'd say.
We had a great transition from OpenAI to Gemini in late 2025, and now we have a great transition from OpenAI to Anthropic. OpenAI is losing its normie base more and more to Claude.
This is fascinating to me. Why would OpenAI allow this? They know how to make those models. They had GPT-4o, which was a very good conversational model. They had GPT-4.5, which was a real-deal large language model.
The Cost Question
My question is: why do they let this happen? I believe it's because of cost. The costs have skyrocketed, and the return on those costs wasn't significant enough. At the end of the day, you're just talking to people about their day-to-day, helping them search the web in an easier, nicer way. There's just no real financial value in those models.
That's why OpenAI has transitioned into large coding models, which can help in science - and it seems like OpenAI really cares about science right now. They can help coders and builders (we should say builders because it's no longer about writing code, it's about building stuff).
Anthropic, on the other hand, has been able to create great conversational models and great coding models within the same model - something OpenAI doesn't seem able to do right now.
If you give ChatGPT 5.4 the task of creating a front end using some sort of agentic harness on your computer, you'll see that it basically doesn't know how to write copy the way a person would. That tells you this model is not human-centered. It's not human-like. It has only a little human language in it - just enough to explain what it's doing - but it's more like a brain than a writer.
Opus and Sonnet (which for me are basically the same model - I don't see much difference between them) are amazing at copywriting and a little bit less amazing at writing code.
I find it fascinating. Why is OpenAI doing that? I believe it's about financial value. Otherwise, if they know how to make those models, why would they let their competitors do it?
What Is OpenAI Really Working On?
This leads me to another question: what are they working on?
You see the rise of Claude Code - okay, they made Codex. You see the rise of Claude's computer use. You see the rise of Peter Steinberger's OpenClaw project - okay, they bought Peter. But they haven't shipped anything yet about using your computer. They are so quiet about it.
Anthropic, on the other hand, ships something around agent harnesses almost every single day. I find it fascinating because I believe OpenAI is working on something bigger.
Let me talk about the third player in this field - the one everybody seems to forget about: Google.
Every dollar Anthropic makes most likely flows to the amazing hardware Google has. This is why I believe Google is the strongest player - and I say this holding no position in Google.
Recently, I saw a tweet from Elon Musk saying that Google will win the AI race in the West, China will win the AI race in the world, and SpaceX will win the AI race in space. It was an interesting take, and I completely agree.
Google has been tremendously quiet. Yes, they want to be number one on the leaderboard - every time someone else jumps ahead and takes first place, they push out a better model. But beyond that, nothing. No new products. No new harnesses. Their models don't seem to care about agentic work.
It seems like Google is working on something entirely different. What? That's a very interesting question. They could probably ship something very good, but their focus is elsewhere. Both of these companies are working on things we, the public, don't really know about yet - probably AGI, but we don't actually know what they have inside their walls.
Anthropic Owns Something We Don't Have Access to Yet
Back to Anthropic. Another interesting thought: when Opus 4.6 came out, Boris was tweeting about it in the past tense: "When we played with it internally, we really liked it." Meaning they played with it a long time ago. Right now they're playing with different models internally.
That means Anthropic has models better than Opus 4.6 - which is already very good - and they're not releasing them to the public.
You can see how terrified their CEO is about the current state of AI. He talks about it in every interview. He seems very concerned about whether the AI is conscious or not, whether it's safe or not. It seems like he knows things that we as the public haven't yet encountered. Something is going on over there, and I don't know what it is.
My Takes on Each Player
Here's my take on each of the top companies:
Google is working on something far greater than all its competitors and will soon ship something that will probably change the world. They have the best TPUs in the world, and they're selling those TPUs to Meta and to Anthropic.
OpenAI is working towards AGI, like they say. They've concluded that models that just talk to you don't benefit society on their terms, so they stopped working on them - which I feel is a shame. I really want large language models as well. I really want to be able to talk, or to copywrite, with my model. They've put that aside, and that's why you see all the hate online on Twitter: people really don't want them to take 4o away. It seems the cost-to-value just isn't there for those models, at least not for OpenAI anymore. That brings a lot of money to Anthropic. And just like everybody buys Nvidia because they have the best GPUs, everybody should buy Google because they have the best TPUs.
Meta - I believe they'll ship something within a month or two. I really like the direction they're going with the glasses. I think this is super important.
The Death of the Smartphone
You can already feel how outdated the smartphone in your pocket is - how almost useless. The entire way we're going to interact with software is through the agentic layer. We're not going to tap through apps and push buttons anymore.
It's going to happen gradually over two years or whatever, but at the end of the day, you don't need this entire real estate of screen in your pocket. You just need something to talk to. That something will project things for you, so you'll be able to see and interact with the data and push buttons if you need to. The agentic layer will create the apps for you, built for your own needs. You'll just talk with the agent.
This is very, very clear to me. In the near future, you won't need a smartphone. Why would you? It's a waste of time.
I don't believe Apple will stay relevant - I'm talking over a five-year period. They will try to answer what's happening in the market, but it seems like... I don't know. I hope they ship a great product. I hope they go the route of light sunglasses like Meta is doing, or headphones you can talk to, or whatever. I think sunglasses are the best way - the best UI for the agentic world.
Final Thoughts
I hope you liked this one, because these are my thoughts about the current state of AI. I like the race. I enjoy it. I really do believe Google will come out on top - as will China, of course. They have the funding, the income, the data centers, the data itself - everything they need.
But they seem like they don't want to ship. The question is why. Google has always been slower in this race - much slower. They know about agentic harnesses, but they don't seem to care about building their own. Why?
Someone told me that Google might be more considerate, more thoughtful, toward creators right now because creators make them a lot of money - NotebookLM, Veo, Flow, all those kinds of apps. Yeah, maybe. But Google is a web infrastructure company and should be building infrastructure for the web.
I believe they will ship something great, probably within a few weeks. Let's see. Notice that every model they put out is a preview model - they don't want to commit to calling it the model. They are very slow, very cautious. This is Google.
Anthropic - as much as I like them, I don't think they will win the AI race. They're shipping at a very high pace, but at the end of the day, their models seem to hit a wall. I'm sure they have better models than the ones they offer us, and they don't want to put them out there - they want to use them themselves because they don't trust them yet. But something tells me their approach to large language models is limited.
OpenAI is losing its space. They have the best coding model in the world by far - better than Opus. If you're writing code or building anything, use ChatGPT 5.4. It's amazing. But what about copywriting? Maybe we can delegate that to Gemini. Maybe I will.
Anyway, I hope you enjoyed this one. If you have any thoughts, feel free to let me know what you think.