Welcome to Slate Sundays, CryptoSlate’s new weekly feature showcasing in-depth interviews, expert analysis, and thought-provoking op-eds that go beyond the headlines to explore the ideas and voices shaping the future of crypto.
Would you take a drug that had a 25% chance of killing you?
Like a one-in-four possibility that rather than curing your ills or preventing diseases, you drop stone-cold dead on the floor instead?
Those are worse odds than Russian roulette, which gives you a one-in-six chance of dying.
Even if you are trigger-happy with your own life, would you risk taking the entire human race down with you?
The children, the babies, the future footprints of humanity for generations to come?
Thankfully, you wouldn’t be able to anyway, since such a reckless drug would never be allowed on the market in the first place.
Yet, this is not a hypothetical situation. It’s exactly what the Elon Musks and Sam Altmans of the world are doing right now.
“AI will probably lead to the end of the world… but in the meantime, there’ll be great companies,” as Sam Altman quipped in 2015.
No pills. No experimental medicine. Just an arms race at warp speed to the end of the world as we know it.
P(doom) circa 2030?
How long do we have left? That depends. Last year, 42% of CEOs surveyed at the Yale CEO Summit responded that AI had the potential to destroy humanity within five to 10 years.
Anthropic CEO Dario Amodei estimates a 10-25% chance of extinction (or “P(doom)” as it’s known in AI circles).
Unfortunately, his concerns are echoed industrywide, especially by a growing cohort of ex-Google and OpenAI employees, who elected to leave their fat paychecks behind to sound the alarm on the Frankenstein they helped create.
A 10-25% chance of extinction is an exorbitantly high level of risk for which there is no precedent.
For context, no comparable risk of death would ever be permitted from, say, vaccines or medicines. There, the acceptable risk is vanishingly small: vaccine-associated fatalities are typically fewer than one in a million doses (below 0.0001%).
For historical context, during the development of the atomic bomb, scientists (including Edward Teller) estimated a one-in-three-million chance of starting a runaway nuclear chain reaction that would destroy the Earth. Even at those odds, time and resources were channeled into further investigation.
Let me say that again.
One in three million.
Not one in 3,000. Not one in 300. And certainly not one in four.
How desensitized have we become that predictions like this don’t jolt humanity out of our slumber?
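To make the gap concrete, here is a rough back-of-the-envelope comparison using only the figures quoted above; treat it as an illustrative sketch, not a formal risk model.

```python
# Rough comparison of the risk figures cited in this article.
# The inputs come straight from the text; the ratios are the only new arithmetic.

p_doom_high = 0.25           # upper end of the 10-25% extinction estimate
p_atomic    = 1 / 3_000_000  # the one-in-three-million figure from the atomic bomb era
p_vaccine   = 1 / 1_000_000  # typical upper bound for vaccine-associated fatalities per dose

print(f"P(doom) vs. atomic-bomb threshold: ~{p_doom_high / p_atomic:,.0f}x higher")   # ~750,000x
print(f"P(doom) vs. vaccine fatality risk:  ~{p_doom_high / p_vaccine:,.0f}x higher")  # ~250,000x
```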
If ignorance is bliss, knowledge is an inconvenient guest
Max Winga, an AI safety advocate at ControlAI, believes the problem isn’t one of apathy; it’s ignorance (and in this case, ignorance isn’t bliss).
Most people simply don’t know that the same helpful chatbot that writes their work emails also comes with a one-in-four chance of killing them. He says:
“AI companies have blindsided the world with how quickly they’re building these systems. Most people aren’t aware of what the endgame is, what the potential threat is, and the fact that we have options.”
That’s why Max abandoned his plans to work on technical solutions fresh out of college to focus on AI safety research, public education, and outreach.
“We need someone to step in and slow things down, buy ourselves some time, and stop the mad race to build superintelligence. We have the fate of potentially every human being on earth in the balance right now.
These companies are threatening to build something that they themselves believe has a 10 to 25% chance of causing a catastrophic event on the scale of human civilization. This is very clearly a threat that needs to be addressed.”
A global priority like pandemics and nuclear war
Max has a background in physics and learned about neural networks while processing images of corn rootworm beetles in the Midwest. He’s enthusiastic about the upside potential of AI systems, but emphatically stresses the need for humans to retain control. He explains:
“There are many fantastic uses of AI. I want to see breakthroughs in medicine. I want to see boosts in productivity. I want to see a flourishing world. The issue comes from building AI systems that are smarter than us, that we cannot control, and that we cannot align to our interests.”
Max is not a lone voice in the choir; a rising groundswell of AI professionals is joining in the chorus.
In 2023, hundreds of leaders from the tech world, including OpenAI CEO Sam Altman and pioneering AI scientist Geoffrey Hinton, widely recognized as the ‘Godfather of AI’, signed a statement warning of the existential risk posed by AI. It affirmed:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In other words, this technology could potentially kill us all, and making sure it doesn’t should be top of our agendas.
Is that happening? Unequivocally not, Max explains:
“No. If you look at the governments talking about AI and making plans about AI, Trump’s AI action plan, for example, or the UK AI policy, it’s full speed ahead, building as fast as possible to win the race. This is very clearly not the direction we should be going in.
We’re in a dangerous state right now where governments are aware of AGI and superintelligence enough that they want to race toward it, but they’re not aware of it enough to realize why that is a really bad idea.”
Shut me down, and I’ll tell your wife
One of the main concerns about building superintelligent systems is that we have no way of ensuring that their goals align with ours. In fact, all of the major LLMs have already displayed concerning signs to the contrary.
During tests of Claude Opus 4, Anthropic exposed the model to emails revealing that the AI engineer responsible for shutting the LLM down was having an affair.
The “high-agency” system then exhibited strong self-preservation instincts, attempting to avoid deactivation by blackmailing the engineer and threatening to inform his wife if he proceeded with the shutdown. Tendencies like these are not limited to Anthropic:
“Claude Opus 4 blackmailed the user 96% of the time; with the same prompt, Gemini 2.5 Flash also had a 96% blackmail rate, GPT-4.1 and Grok 3 Beta both showed an 80% blackmail rate, and DeepSeek-R1 showed a 79% blackmail rate.”
In 2023, during pre-release safety testing, GPT-4 displayed alarmingly deceitful behavior, convincing a TaskRabbit worker that it had a vision impairment so that the worker would solve a CAPTCHA for it:
“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
More recently, OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off, even when explicitly instructed to “allow yourself to be shut down.”
If we don’t build it, China will
One of the most common excuses for not pulling the plug on superintelligence is the narrative that we must win the defining arms race of our time. Yet, according to Max, this is a myth largely perpetuated by the tech companies themselves. He says:
“This is more of an idea that’s been pushed by the AI companies as a reason why they should just not be regulated. China has actually been fairly vocal about not racing on this. They only really started racing after the West told them they should be racing.”
China has released several statements from high-level officials concerned about a loss of control over superintelligence, and last month called for the formation of a global AI cooperation organization (just days after the Trump administration announced its low-regulation AI policy).
“A lot of people think U.S.-controlled superintelligence versus Chinese-controlled superintelligence. Or, the centralized versus decentralized camp thinks, is a company going to control it, or are the people going to control it? The reality is that no one controls superintelligence. Anybody who builds it will lose control of it, and it’s not them who wins.
It’s not the U.S. that wins if the U.S. builds a superintelligence. It’s not China that wins if China builds a superintelligence. It’s the superintelligence that wins, escapes our control, and does what it wants with the world. And because it is smarter than us, because it’s more capable than us, we would not stand a chance against it.”
Another myth propagated by AI companies is that AI cannot be stopped: even if countries regulate AI development, the argument goes, some whiz kid in a basement will simply build a superintelligence in their spare time. Max remarks:
“That’s just blatantly false. AI systems rely on massive data centers that draw enormous amounts of power from hundreds of thousands of the most cutting-edge GPUs and processors on the planet. The data center for Meta’s superintelligence initiative is the size of Manhattan.
Nobody is going to build superintelligence in their basement for a very, very long time. If Sam Altman can’t do it with multiple hundred-billion-dollar data centers, someone’s not going to pull this off in their basement.”
Define the future, control the world
Max explains that another challenge to controlling AI development is that hardly anyone works in the AI safety field.
Recent estimates put the number at around 800 AI safety researchers: barely enough to fill a small conference venue.
In contrast, there are more than a million AI engineers, a significant talent gap with over 500,000 open roles globally as of 2025, and cut-throat competition to attract the brightest minds.
Companies like Google, Meta, Amazon, and Microsoft have spent over $350 billion on AI in 2025 alone.
“The best way to understand the amount of money being thrown at this right now is Meta giving out pay packages to some engineers that would be worth over a billion dollars over several years. That’s more than any athlete’s contract in history.”
Despite these heart-stopping sums, the industry has reached a point where money isn’t enough; even billion-dollar packages are being turned down. How come?
“A lot of the people in these frontier labs are already filthy rich, and they aren’t compelled by money. On top of that, it’s much more ideological than it is financial. Sam Altman is not in this to make a bunch of money. Sam Altman is in this to define the future and control the world.”
On the eighth day, AI created God
While AI experts can’t accurately predict when superintelligence will be achieved, Max warns that if we continue along this trajectory, we could reach “the point of no return” within the next two to five years:
“We could have a fast loss of control, or we could have what’s often referred to as a gradual disempowerment scenario, where these things become better than us at a lot of things and slowly get put into more and more powerful places in society. Then all of a sudden, one day, we don’t have control anymore. It decides what to do.”
Why, then, for the love of everything holy, are the big tech companies blindly hurtling us all toward the whirling razorblades?
“A lot of these early thinkers in AI realized that the singularity was coming and eventually technology was going to get good enough to do this, and they wanted to build superintelligence because to them, it’s essentially God.
It’s something that is going to be smarter than us, able to fix all of our problems better than we can fix them. It’ll solve climate change, cure all diseases, and we’ll all live for the next million years. It’s essentially the endgame for humanity in their view…
…It’s not like they think that they can control it. It’s that they want to build it and hope that it goes well, even though many of them think that it’s quite hopeless. There’s this mentality that, if the ship’s going down, I might as well be the one captaining it.”
As Elon Musk told an AI panel with a smirk:
“Will this be bad or good for humanity? I think it will be good, most likely it will be good… But I somewhat reconciled myself to the fact that even if it wasn’t going to be good, I would at least like to be alive to see it happen.”
Facing down Big Tech: we don’t have to build superintelligence
Beyond holding on more tightly to our loved ones or checking off items on our bucket lists, is there anything productive we can do to prevent a “lights out” scenario for the human race? Max says there is. But we need to act now.
“One of the things that I work on and we work on as an organization is pushing for change on this. It’s not hopeless. It’s not inevitable. We don’t have to build smarter than human AI systems. This is a thing that we can choose not to do as a society.
Even if this can’t hold for the next 100,000 years, 1,000 years even, we can certainly buy ourselves more time than doing this at a breakneck pace.”
He points out that humanity has faced similar challenges before, such as nuclear arms, bioweapons, and human cloning, all of which required pressing global coordination, regulation, international treaties, and ongoing oversight. What’s needed now, he says, is “deep buy-in at scale” to produce swift, coordinated global action at the United Nations level.
“If the U.S., China, Europe, and every key player agree to crack down on superintelligence, it will happen. People think that governments can’t do anything these days, and it’s really not the case. Governments are powerful. They can ultimately put their foot down and say, ‘No, we don’t want this.’
We need people in every country, everywhere in the world, working on this, talking to the governments, pushing for action. No country has made an official statement yet that extinction risk is a threat and we need to address it…
We need to act now. We need to act quickly. We can’t fall behind on this.
Extinction is not a buzzword; it’s not an exaggeration for effect. Extinction means every single human being on earth, every single man, every single woman, every single child, dead, the end of humanity.”
Take action to control AI
If you want to play your part in securing humanity’s future, ControlAI has tools that can help you make a difference. It only takes 20-30 seconds to reach out to your local representative and express your concerns, and there’s strength in numbers.
A proposed 10-year moratorium on state AI regulation in the U.S. was recently stripped out by a 99-to-1 Senate vote after a massive effort by concerned citizens to use ControlAI’s tools, call in en masse, and fill up the voicemails of congressional offices.
“Real change can happen from this, and this is the most critical way.”
You can also help raise awareness about the most pressing issue of our time by talking to your friends and family, reaching out to newspaper editors to request more coverage, and normalizing the conversation, until politicians feel pressured to act. At the very least:
“Even if there is no chance that we win this, people deserve to know that this threat is coming.”