GPT-4.5, one of the latest artificial intelligence models from OpenAI, appears to be the first to pass the famous Turing Test. In a recent scientific study, 73% of participants believed they were communicating with a real person rather than an algorithm. It is an impressive achievement, but it also raises some pretty unsettling questions.
In 1950, Alan Turing posed a simple question: can a machine think? And if it can't, can it at least convince a human that it can? Thus was born the famous Turing Test – the ultimate test of artificial intelligence. Well, GPT-4.5 now appears to have truly passed it for the first time in history. But before we start applauding the machine, let's take a look at what that actually means – and why we might be concerned.
How did GPT-4.5 deceive humanity (or did it)?
In a study published on arXiv (Source: arxiv.org), researchers Cameron Jones and Benjamin Bergen of the University of California, San Diego, ran an experiment in which participants tried to work out who was human and who was artificial intelligence across 500 text conversations. In each conversation, participants chatted with two interlocutors at once – one human and one AI model – and had exactly five minutes to decide.
The results? GPT-4.5 convinced 73% of test subjects that it was the real person – more often than the actual human interlocutors managed! If that's not a digital miracle, we don't know what is.
The real people, who took part as a control group, were correctly identified as human in only 67% of cases.
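To make the numbers concrete, here is a minimal sketch of how such a "judged human" rate is tallied from interrogator verdicts. The records and field names below are hypothetical illustrations of the setup, not the study's actual data pipeline.

```python
# A small sketch of how a Turing-test "pass rate" is tallied from
# interrogator verdicts. The records and field names are hypothetical
# illustrations, not the study's actual data.
from collections import defaultdict

# Each record: after a five-minute, three-way chat, the interrogator
# states whether they believed this witness was the human.
verdicts = [
    {"witness": "GPT-4.5", "judged_human": True},
    {"witness": "GPT-4.5", "judged_human": True},
    {"witness": "GPT-4.5", "judged_human": False},
    {"witness": "human",   "judged_human": True},
    {"witness": "human",   "judged_human": False},
]

totals = defaultdict(lambda: [0, 0])  # witness -> [judged_human, total]
for v in verdicts:
    totals[v["witness"]][0] += v["judged_human"]
    totals[v["witness"]][1] += 1

for witness, (hits, n) in totals.items():
    print(f"{witness}: judged human in {hits / n:.0%} of conversations")
```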
What is the secret to success?
It wasn’t just about raw processing power or comprehension. The key was the “persona prompt” – an instruction telling GPT-4.5 to behave like an introverted, slightly cynical young person who uses current internet slang. And it turned out to be quite believable. Maybe even too believable.
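For the curious, here is a minimal sketch of what a persona prompt looks like in practice, using the OpenAI Python SDK. The model identifier and the persona wording are our own illustrative assumptions – the study's actual prompt was more elaborate and is documented in the paper itself.

```python
# A minimal sketch of a "persona prompt" via the OpenAI Python SDK.
# The model name and persona text are illustrative assumptions, not
# the exact prompt used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are an introverted, slightly cynical young person chatting online. "
    "Use current internet slang, type casually in lowercase, and keep your "
    "replies short."
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "hey, what did you get up to this weekend?"},
    ],
)
print(response.choices[0].message.content)
```

The whole trick lives in that system message: the same model, handed a different persona, scores very differently as a conversational partner.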
Other models tested fared much worse:
- LLaMa-3.1: judged human in 56% of conversations
- GPT-4o: judged human in 21% of conversations
- ELIZA (the legendary chatbot from the 1960s): judged human in 23% of conversations
Does this mean that GPT-4.5 is truly intelligent?
Not exactly. The Turing test doesn’t measure consciousness, understanding, or deep intelligence – it measures the ability to imitate. So GPT-4.5 has learned how to look like a human, not how to become one. Or, as the researchers put it, “The model doesn’t know it knows.” This distinction matters. We could say that GPT-4.5 is a master of the bluff, an illusionist in the world of algorithms. But in the wrong hands, such an illusionist can quickly become a fraud.
Why should we be concerned about this?
If AI can imitate people better than humans themselves, what does this mean for online identity verification, for relationships, for manipulating public opinion? Could AI in the future write columns, conduct interviews, persuade voters?
The researchers emphasize that society needs to think seriously about how to regulate such powerful models. As things stand, anyone can generate convincing fake conversations, comments, and opinions with just a few clicks... and who knows what will follow.
Conclusion: Do we already have AGI?
GPT-4.5 is not just another smart chatbot. It is a milestone. It is proof that we have entered a new era – an era where machines not only understand language, but use it with such subtlety and context that they fool even us.
This is fascinating. This is scary. This is the future.
And the future, it seems, can type pretty well.