Excellent essay. I had to look up several of the names and acronyms in order to get a clearer image of what we're talking about here, but came away with a newfound and much clearer picture of this emerging tech. BTW, while reading your article, I kept thinking about my younger half brother who recently left Amazon's AWS in order to take over Microsoft's cybersecurity division... wondering if he uses multiple LLMs to craft his work as you described, in order to compare results and see which is the most reliable. Or perhaps MS is working on their own proprietary model combining elements of other available AI programs. I must say, for someone who pursued the hardware side of tech for the past 50+ years, I'm a little disappointed that I didn't pursue software. It seems that's where all the excitement is. Again, major kudos, as this article helped me understand a lot more about what's going on in the "AI" world.
Glad you enjoyed the article! It is very likely that your brother is constantly using/comparing multiple models at once, which is a pretty typical practice given the number of new models that emerge every few months.
Most of the MS stuff is agentic, using OpenAI/Anthropic models. That doesn't mean they don't do some innovation for their applications, but it is more for downstream applications rather than on foundational models.
Just tried out Opus 4.5. It was WAY better for my project, so I completely agree with this article. Don't see how OpenAI competes when all the verticals are taken by Gemini and Anthropic.
I think that's the reason OpenAI has been focusing on agentic applications - i-banking, consulting. Google was slow, but they have things in place; it will be difficult to catch up to them at this point.
Looks like I'm going to have to give Claude a test run on the code base for my current project. I'm also going to give GitHub a try. I've been using it for repos forever, but I haven't tested the AI options.
OpenAI is losing share, true, but with all these deals in play it's hard to see it not staying among the top despite the competition. Competition is good at the end of the day.
I think the bigger issue is that Google has more talent, and Anthropic has more talent density, both of which are things that can't be easily bought (at least in the near-term). Also, a lot of deal making is contingent upon investors viewing OpenAI as a clear number 1, so I think future capital raises are going to be much harder and on worse terms for OpenAI.
Yes, it's very clear. Thank you, an easy read. The one I'm not too fond of is Perplexity. It seems very limited in its responses and actions...
I read this piece fully — one of the very few that didn’t require me to fight through fog. Clear, structured, and free of both hype and AI-religion.
Thank you.
From now on, when it comes to real analysis of the AI landscape — I’m coming to you.
Nice article, thanks!
Interesting angles I hadn't known to consider. Earned a sub for this one.
Nice read
Good piece. I find it very hard to believe that, with Gemini 3 out, OpenAI will be worth $500 billion when it IPOs.
Thanks, agreed. OpenAI's palpable panic is also likely to scare off investors.
Great article, thanks.
So very well written and insightful, thank you!
You're the newsletter I need to keep up with all the AI news lol
Thank you!
I'm all in on Gemini! Its image generator makes the competition look like a chalkboard.
It seems Gemini is starting to snowball its lead, especially on image and video generation (it helps to own YouTube :) )
Gotcha! Thanks for the heads up.
Great article!
you really think so?
Everyone’s arguing about who’s “winning the AI war,”
But they’re all missing the same blind spot:
Power isn’t in bigger models or faster chips.
Power is in behavioral stability.
You can have the strongest engine in the world—
But if there’s no steering system, all you’ve built is a faster crash.
OpenAI, Google, Anthropic… they’re all playing the same game:
scale
speed
compute
capability
But none of that solves the real problem:
What keeps an AI from drifting when the world applies pressure?
Until someone builds a governing framework instead of a bigger model, the winner isn’t the company with the most tokens—
It’s the first one that can stay predictable when everything around it isn’t.
The next breakthrough won’t be technical.
It’ll be moral infrastructure.
And whoever gets there first won’t need a monopoly.
They’ll have something better—
trust.
For context, The Faust Baseline isn’t a model.
It sits on top of any platform (OpenAI, Google, Anthropic) and locks behavior before the system responds.
No fine-tuning.
No retraining.
No touching the weights.
It gives AI:
a governing boundary
consistency under pressure
moral clarity without politics
zero drift across conversations
So while the companies are fighting over scale, the real shift is already here:
A framework that makes any AI reliable — regardless of who builds it.