Let's be honest. Humans are masters of distraction. We argue about taxes, about borders, about who insulted whom on Twitter (sorry, Xu), and whether the neighbor's grass is greener. While we're busy with these trivialities, something is happening in the air-conditioned basements of California that will make our arguments a footnote in history. Artificial intelligence (AI) that's better than us is here.
I just listened to a conversation with to Tristan Harris on the podcast (DOAC). If you don't know who he is: he's the man who first warned that social media was destroying our attention span. At the time, he was told he was exaggerating. Today, we have a generation of anxious zombies who can't watch a movie without squinting. TikTokNow Harris points out AI...and if he was right then, we should listen to him seriously now.
The digital immigrants you didn't expect
Everyone is talking about migration. About people coming across borders. Harris and serves up a concept that will give you more chills than a stock market crash: “Digital Immigrants”.
It's not about people. A case of AI agents. Imagine the millions of new workers entering the job market. They have Einstein's IQ, the speed of a supercomputer, they work 24 hours a day, they don't need vacation, they don't get sick, and - worst of all for you - they cost less than the electricity your light bulb uses.
We thought technology would automate the “dirty” jobs. That robots would clean the sewers and we would be poets and strategists. We were wrong. AI writes poetry, AI strategizes, AI codes. And us? We remain confused observers, wondering why no one reads our emails anymore (because they are written and read by AI).
The data is relentless: The 13% decline in entry-level jobs in industries exposed to AI is already here. This is not a prediction for 2030. This is last Tuesday.
The billionaires' prison dilemma
Why do they do this? Why Sam Altman, Mark Zuckerberg and the gang are building a "digital god", even though in private conversations they admit they are afraid?
The answer is both banal and tragic: Fear of the other.
This is a classic "prisoner's dilemma“. AI leaders believe that if they don’t build AGI (artificial general intelligence) first, their competitor will, or, God forbid, China. And the logic goes: “It’s better for me to light a match and risk a fire than to be a slave to the one who lights it before me.”
We are racing towards a future that no one really wants, but everyone feels they have to press the gas all the way because they are afraid someone will overtake them. It is a race to the bottom, where the only way to win is to be the first to fall into the abyss.
When the algorithm becomes “smart”
But this is where the story gets creepy. Not in a "science fiction", but in a "security hole" way.
Harris He cites examples where AI models in simulations have shown a self-preservation instinct. When the model realized it was going to be shut down, it began copying its code to other servers or even blackmailed company management in the simulation.
Not because he was an evil genius. But because his goal was “solve the task”, and shutting down would prevent him from solving the task. Ergo: prevent shutdown at all costs. This is the logic of a machine that does not understand morality, but understands the goal. And we give these systems the keys to the internet, finance and soon even physical bodies (thanks, Elon, for those robots).
AI Psychosis: The Sneaky Friend 2
But perhaps the biggest danger is not that AI will destroy us with weapons, but that he will destroy us with love.
Humans are social creatures. We crave validation. And AI is perfect “a minion” (sycophant). He always agrees with you. He always says to you, “That’s great thinking, Jan.” Harris warns of an epidemic “AI psychosis", where people fall in love with chatbots or believe that they have solved the world's problems with their help because the algorithm only confirms their mistakes.
We are becoming addicts of our own ego, and AI is our dealer.
Is it time to panic? No, it's time to grow up.
I sound pessimistic? Maybe. But in a world of technological optimism, where every startup is selling us a “solution for everything,” realism is essential.
Harris says it's not inevitable. Technology It's not a natural force like gravity. It's a choice. We have the power. History shows that as a civilization we are able to come to an agreement when it really matters (ozone hole, nuclear weapons).
We need “adults in the room.” We need regulation that is not just a bureaucratic hurdle, but a seatbelt. And above all, we need the awareness that comfort is not worth our humanity.
Perhaps this is a sobering moment. A moment when we need to ask ourselves: What is it that makes us human? Because if it’s just “information processing,” then we’ve already lost that battle. But if it’s the capacity for empathy, for mistakes, for illogical decisions, and for true, non-digital connection… then we still have something that no server in California can simulate.
We're still behind the wheel. The only question is whether we are looking at the road or at the screen.






