In the previous article, we established that humanity is facing a technological asteroid: an Artificial Superintelligence that could spiral out of control and bring about our end.

​The initial, logical reaction of any sane person is simple. If the danger is real, then stop. Shut down the servers, impose a global moratorium, and wait until we understand how to build this safely.

It sounds reasonable. It is what humanity did with human cloning. It is what we did with biological weapons research. When we realized the risk outweighed the benefit, we hit the brakes.

​But with Artificial Intelligence, that option simply does not exist.

​There is no off switch, and there are no brakes. The reason is not technological. It is political, economic, and historical. We are trapped in what game theorists call an Arms Race Trap.

​In this article, we look behind the curtain of the tech giants and world powers to understand why the system is rigged to careen toward the cliff, even if the drivers want to stop.

​The Geopolitical Trap

​Why doesn't the US government simply halt AI development through legislation? The answer boils down to one word: China.

​This is a classic Prisoner's Dilemma. Imagine a scenario where the US government decides to act responsibly. It imposes strict regulations and pauses the training of advanced models to ensure safety. At that exact moment, on the other side of the world, the Chinese Communist Party sees a golden opportunity. Labs in Beijing continue to develop their models at full speed, without the ethical constraints of the West.

​The first nation to achieve Superintelligence wins everything. This is a "Winner Takes All" scenario. It is not just a technological victory. It implies absolute military supremacy, infinite economic dominance, and control over global information.

​No American president will agree to take the risk of China getting there first. And no Chinese leader will agree to let the US get there first. The result is a shared suicide race. Everyone presses the gas pedal to the floor, not because they want to, but because they are terrified the other side will win.
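
To make that logic concrete, here is a minimal sketch of the dilemma in Python. The payoff numbers are invented purely for illustration, not estimates from this article or any study; they simply encode the claims above: "winning" the race beats everything, being left behind is the worst outcome, and a mutual pause is safer than a mutual race.

```python
# Illustrative sketch of the arms-race Prisoner's Dilemma described above.
# Payoff numbers are made up for illustration only; higher = better for that player.
# Each entry maps (choice_of_A, choice_of_B) -> (payoff_to_A, payoff_to_B).
payoffs = {
    ("pause", "pause"): (3, 3),   # both pause: the safest shared outcome
    ("pause", "race"):  (0, 4),   # A pauses, B races: B "wins everything"
    ("race",  "pause"): (4, 0),   # A races, B pauses: A "wins everything"
    ("race",  "race"):  (1, 1),   # both race: the shared suicide race
}

OPTIONS = ("pause", "race")

def best_response(their_choice: str, player: int) -> str:
    """Return the strategy that maximizes this player's payoff, given the rival's choice."""
    def my_payoff(my_choice: str) -> int:
        key = (my_choice, their_choice) if player == 0 else (their_choice, my_choice)
        return payoffs[key][player]
    return max(OPTIONS, key=my_payoff)

for their_choice in OPTIONS:
    print(f"If the rival chooses '{their_choice}', "
          f"the best response is '{best_response(their_choice, player=0)}'")

# Racing is the best response no matter what the rival does, so both sides race,
# even though (pause, pause) leaves both better off than (race, race).
```

With these made-up numbers, "race" is the better move for each side regardless of what the other does, which is exactly why both governments end up pressing the gas pedal even though both would prefer the world where everyone paused.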

​The Graveyard of Good Intentions

​But the race isn't just between nations. It is happening first and foremost within the West, among the tech giants. This is where we see the greatest tragedy, where the market proves to be stronger than the people.

​The history of AI is filled with brilliant, idealistic people who tried to build safe AI, protected from commercial pressures. One by one, they were all crushed under the steamroller of capital and competition.

​The Lost Vision of DeepMind

Demis Hassabis, a genius and a co-founder of DeepMind, understood the danger from day one. He built the company in London, far from the noise and greed of Silicon Valley, with a clear vision: to develop AGI as a controlled, responsible, and safe scientific project. He even set up an ethics board to oversee the technology.

But training advanced models requires expensive computing power. Eventually the money ran out, and in 2014 Google acquired DeepMind.

For years, Hassabis managed to maintain a bubble of independence within Google. But then ChatGPT was released. Google entered a business panic known internally as "Code Red." They realized they could lose their search monopoly. In an instant, all scientific pretensions were discarded. Google merged DeepMind with its in-house Brain team, stripped away its independence, and turned the scientific lab into a factory for commercial products designed to fight Microsoft. The original vision died.

​The Tragedy of OpenAI

​OpenAI, the creator of ChatGPT, was originally founded as a non-profit. Its stated mission was clear: to ensure that artificial general intelligence benefits all of humanity, without being constrained by the need to generate profit for shareholders.

​They had a unique defense mechanism, an independent Board of Directors whose sole job was to stop the CEO if development became too dangerous.

​In November 2023, the mechanism was tested. The Board decided that the CEO, Sam Altman, was pushing too fast at the expense of safety and was not being candid with them. They did exactly what they were designed to do, and they fired him.

But then, market forces intervened violently. The investors, led by Microsoft, were furious. The employees, holding stock options worth millions, threatened to quit. In less than five days, the ethical coup collapsed. Altman was reinstated as the victor, and the Board members who had tried to prioritize safety were pushed out and replaced by business figures.

​Today, OpenAI is on a path to becoming a for-profit entity with a valuation exceeding $150 billion. Money defeated the safety mechanism in a knockout.

​Microsoft and Meta: Scorched Earth

The other giants are no different. When Microsoft integrated AI into Bing, it didn't just rush; it cut the brakes. The company fired its entire "Ethics and Society" team to speed up the market release. CEO Satya Nadella openly stated that the goal was purely competitive, admitting he wanted people to know that Microsoft had made Google dance.

​Meanwhile, Mark Zuckerberg chose a "scorched earth" strategy. While other companies keep their most powerful models closed for fear of misuse, Meta released its powerful LLaMA models as open source to the whole world. This means the power to build cyber weapons or biological agents is now in the hands of any hacker in a basement, with zero oversight. The business consideration of commoditizing the market to hurt Google and OpenAI outweighed any consideration of global security.

​We Are Terrible at Predicting the Unforeseen

​There is a deeper, almost philosophical reason why we cannot stop. Human history teaches us that we never understand the technology we create in real-time. We are terrible at predicting the unintended consequences.

​Consider the Industrial Revolution. The goal was wonderful: mass production, abundance, and steam engines. We knew there would be some smoke. We predicted that. But what did we not predict? We did not predict that burning coal would change the chemical composition of the atmosphere and cause a Climate Crisis that would threaten to drown entire cities 200 years later.

​Consider the Social Media Revolution. The goal was wonderful: to connect the world. We knew it might waste some of our time. But what did we not predict? We did not predict that algorithms designed for engagement would cause destructive political polarization, the collapse of shared truth, a depression epidemic among teenagers, and the destabilization of democracy.

​Now, think about AI. We are playing with a technology a billion times more powerful than a steam engine or TikTok.

If the risk that experts can already foresee and quantify is a 10% to 20% chance of extinction, ask yourself what the unforeseen consequences might be.

What is the "Global Warming" of Artificial Intelligence? What is the phenomenon we don't even have a name for yet, which will hit us when we are completely dependent on the system? The assumption that we can control the consequences of an entity smarter than us, when we can barely control the consequences of Facebook, is historical hubris with no basis in reality.

​A Train Without a Driver

​So where do we stand?

​We have a technology with existential destructive potential. We have a geopolitical Arms Race dynamic that prevents nations from stopping. And we have predatory market forces that have dismantled every safety mechanism that tried to stand in their way.

​The train is barreling at full speed toward the cliff, and the throttle is locked on maximum. There is no driver who can hit the brakes, because the driver has been replaced by an algorithm of profit and competition.

But fear of the enemy and greed for profit are not the only things fueling this engine. There is a more seductive, and perhaps more dangerous, force at play. We aren't just building this machine because we have to. We are building it because we want to.

​In the next article, we will uncover the ultimate trap: The Utopian Temptation. We will explore why we are willing to risk extinction for the promise of a perfect world, and why the promise of salvation might be the very thing that dooms us.