It’s clear we won’t regulate AI for safety’s sake

As usual, governments will barely affect the trends that most affect our lives

This article is republished from The Financial Times

Sam Altman, chief executive of OpenAI, is probably the best-known figure in AI.

He’s been in the news lately: The New Yorker magazine published a long profile, someone threw a Molotov cocktail at his house, passers-by fired a gun, and OpenAI changed strategy to focus on the enterprise market.

Yet last month Altman inspired only one-third as many Google searches worldwide as the Liverpool footballer Mo Salah (who’s a 33-year-old reserve, for crying out loud). Public engagement with AI remains, shall we say, limited.

Altman blogged last weekend: “The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.” That could be corporate hype, or true, or both. He called for a collective debate on safety, saying: “I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.”

He might mean that sincerely, or not. Either way, we already know what is going to happen. A few AI labs will make the most consequential decisions about our future. We won’t regulate AI for safety’s sake. That’s not how modern societies handle innovation.

Each time a technology changes the world, it takes ages for the world to spot the downsides, and longer before cautious regulation is attempted. Think of CO₂ emissions. In 1856, the American scientist Eunice Newton Foote demonstrated that they could warm the planet, and here we are today.

Similarly, since British scientists showed in the 1950s that cigarettes cause cancer, smoking has legally killed hundreds of millions of people. Today, two decades after social media and smartphones rewired our brains, we’re starting to try to regulate the tech, at least for children. The whole thing is a libertarian’s dream.

Whenever regulation is attempted, companies fight it. Shareholders don’t always defeat democracy, but that’s the way to bet. The country where most AI is being developed hasn’t passed a federal law regulating the technology. Instead, Donald Trump ordered government agencies to eliminate policies that could “hinder American AI dominance”.

Aside from the EU’s AI Act, hardly any jurisdiction has even attempted comprehensive regulation. As usual, governments barely affect the trends that most affect our lives.

Voters aren’t pushing politicians to take on AI. Most people engage with the technology only as consumers. They struggle to think about its societal implications. They’re more comfortable arguing about identity, or the perceived personalities of politicians, though they often misjudge those too.

Most media also neglect the topic. The day Anthropic launched Mythos, a model that seems able to hack almost any operating system, the journalist Shakeel Hashim noted: “The release does not appear near the top of the homepage on any major news site.”

No wonder, because AI is even harder to understand than previous innovations. It’s a closed-source technology, advancing at unprecedented speed, and developed by a tiny group of experts who themselves don’t fully understand it.

Anthropic’s chief executive, Dario Amodei, writes: “People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work . . . this lack of understanding is essentially unprecedented in the history of technology.”

But as outsiders understand even less, the loudest warnings against AI come from insiders like Amodei. In 2023, more than a thousand industry executives and experts called for a pause in AI’s development so as to prevent “the loss of control of our civilisation”. That now-forgotten appeal never became an electoral issue anywhere.

One group is opposing AI: academics, especially in the humanities. However, many of them suffer from a bias that prevents them from taking the technology seriously. The humanities study human creations. If you have dedicated your life to that, and you’re fed up with your students outsourcing their minds to Claude, and you aren’t schooled in tech, then you might still be dismissing AI as “glorified autocomplete”.

In short, we don’t know enough even to push for regulating AI. Even if we did, there’s no hope of global regulation of anything any more. Tech companies still act globally; states don’t. The effect of AI will be determined by a few AI labs, or perhaps by the products themselves. We’re essentially speeding along the highway at 300 miles an hour in an autonomous vehicle without seatbelts or headlights, and we’ll see what happens.

© The Financial Times Limited 2026. All rights reserved.
