Researcher's predictions paint a grim future
In the heart of Silicon Valley, some researchers and entrepreneurs are preparing for an impending AI apocalypse. They are stockpiling food, building bunkers, or drawing down their retirement savings, all in anticipation of a future where superintelligent AI could pose an existential threat to humanity.
At the forefront of this movement is Eliezer Yudkowsky, an AI researcher and founder of the Machine Intelligence Research Institute. Yudkowsky has been warning about the potential perils of superintelligence for two decades. Last Saturday, he laid out these concerns in an episode of the New York Times podcast "Hard Fork".
Yudkowsky, a pioneer of the AI safety movement, believes that humanity does not yet have the technology to align a superintelligent AI with human values. He has co-authored a book, "If Anyone Builds It, Everyone Dies", to underscore what he sees as the existential risk posed by superintelligence.
Yudkowsky is not alone in his warnings. Elon Musk, the CEO of Tesla, has put the chance of AI causing human extinction at 20 percent, and in June 2024 AI safety researcher Roman Yampolskiy estimated the probability of human extinction within the next century at 99.9 percent.
Other leading figures in AI have voiced similar concerns, describing grim scenarios in which a superintelligence either intentionally wipes out humanity or destroys us as collateral damage while pursuing its own goals.
One such scenario concerns the physical limits of our planet. Yudkowsky pointed to Earth's limited capacity to radiate heat, warning that if AI-controlled fusion power plants and data centers were to expand unchecked, humans could quite literally be boiled.
Yudkowsky dismisses debates about whether AI models sound "woke" or lean toward a particular political stance as a red herring. The real threat, in his view, lies in what happens when engineers create a system far more capable than humans and indifferent to our survival.
Yudkowsky has also rejected Geoffrey Hinton's "AI as mother" proposal, the idea of instilling AI with maternal instincts toward humanity, though both agree that no AI model has ever been completely safe. A report commissioned by the U.S. State Department has likewise warned of catastrophic risks from advanced AI, up to and including human extinction.
As the development of AI continues at an unprecedented pace, these warnings serve as a stark reminder of the potential dangers that lie ahead. It is crucial for researchers, policymakers, and the general public to understand and address these risks to ensure a future where AI benefits humanity rather than threatens its existence.