
Google Gemini 3 vs. OpenAI Garlic: Code Red in Silicon Valley, and why did Sam Altman suddenly press the panic button?

Is the era of scaling over or has the real race just begun?

Code Red
Photo: Jan Macarol / Aiart

In a world where we thought ChatGPT was the only sheriff in town, Google just brought a tank to the gunfight. Altman himself declared "Code Red." And believe me, the panic in Silicon Valley smells more like burning servers than morning coffee.

Let's face it, we've all gotten a little too comfortable. We've gotten used to OpenAI the way you get used to a reliable German station wagon. It works, it's solid, it gets you from point A to point B. But while we were resting on our laurels and playing with GPT-4o, Google was building something monstrous in its basement in Mountain View. Something with enough torque to spin the planet in the opposite direction. It's called Gemini 3. And suddenly that German station wagon looks like it's being overtaken by a concrete mixer. No wonder Altman pressed Code Red.

But the story is not that simple. This is not just a story about who has more… processing power. It's a story about how OpenAI, the company that practically invented the modern AI hype, is suddenly backed into a corner. According to information circulating in the corridors (and reported by outlets like The Information), Sam Altman has put his engineers on "Code Red." Why? Because their new secret project, codenamed Garlic, urgently needs to beat the Google giant.

When the asphalt runs out: Is scaling dead?

If you follow the automotive industry, you know there comes a point when you simply can't bolt more turbochargers onto an engine without the whole thing exploding. In the world of artificial intelligence, these are called "scaling laws." Ilya Sutskever, the godfather of artificial intelligence and a man who probably dreams in binary code, recently said: "The age of scaling is over. We are returning to the age of exploration."

Translated into our language: simply throwing more data and more chips at the problem no longer works. The wall is ahead. Andrej Karpathy, another genius we know from Tesla, agrees. Current large language models (LLMs) are like naturally aspirated engines – they have reached their limit. We need a boost. We need something new.

And here's where it gets ironic. Just as all these geniuses were proclaiming the death of scaling and mocking the idea that "bigger is better," Google throws Gemini 3 on the table.

Specifications that count:

  • Architecture: Google TPU (Tensor Processing Unit) – their homegrown silicon monster.
  • Model: Gemini 3 (Frontier Model).
  • Status: It beats everything OpenAI currently has in its garage.
  • Users: Jump from 450 million to 650 million active users in a few months.

Google has proven that the scaling wall may exist for others, but if you have your own infrastructure and enough money to buy a small country, you can break through it head-on. Their TPUs are what we call an “unfair advantage” in the automotive industry. While others are waiting in line for Nvidia chips, Google is baking its own cookies in its own oven.

Photo: Jan Macarol / Aiart

Project Garlic: OpenAI Strikes Back

Sam Altman is not a man to sit idly by and watch the competition. Reports indicate that OpenAI hasn't released a serious new "frontier" model since May 2024 (GPT-4o). That's an eternity in the tech world. It's like Ferrari selling the same model for two years without a facelift. Unacceptable.

That's why they're cooking Garlic now. No, it's not an ingredient for your dinner; it's the codename for a model that's supposed to rival Gemini 3. Mark Chen, OpenAI's chief research officer, admitted that they had some "muscle atrophy" in pre-training, but now they're back in the gym.

Garlic is said to incorporate bug fixes from a previous failed project called Shallotpeat (seriously, who chooses these names?). The goal is clear: to create a model that is not only smart, but actually performs better in everyday traffic.

What does this mean for us mere mortals?

Let's be honest. Do you really care if your car has 500 horsepower or 505? Probably not. You care if it connects to your phone and if the seat is comfortable. It's the same with AI.

Most users don't need a super-intelligent god in a box. We need an assistant who doesn't hallucinate (make things up), who's fast, and who understands what the hell we want when we type in a semi-literate prompt. Altman knows this. His "Code Red" isn't just about intelligence, it's about experience. He wants better personalization, speed, and reliability. They're putting the ad projects on hold (thank goodness!) and focusing on making ChatGPT the best companion.

Google has distribution (Search, Android), which is like having a gas station on every corner. OpenAI has “mindshare.” ChatGPT is a verb. “Give it to ChatGPT,” we say. Nobody says “Give it to Gemini,” except maybe those weird people who wear Google Glass.

Bottom line: Google really is better right now

We're in a fascinating time. Google has finally cashed in on years of investment in its own chips, flexing its muscles with Gemini 3, which is currently the undisputed king on paper. OpenAI is on the defensive, which is unusual for them, but that's the best possible news for us.

When the giants fight, we, the users, win. We'll get faster, smarter, and (hopefully) cheaper models. Whether Google's raw engineering or OpenAI's innovative magic with Project Garlic wins out in December remains to be seen. Until then, remember: it's not who has the faster car, it's who has more fun in it. And right now, this ride is a hell of a lot of fun.
