AI ‘extinction’ should be same priority as nuclear war – experts
The Center for AI Safety has released a statement signed by 350 industry leaders, including the CEOs of OpenAI and Google DeepMind.
The Center for AI Safety, a leading nonprofit in the field, has released a statement signed by 350 luminaries in artificial intelligence (AI) and related fields. The declaration emphasizes the need to prioritize mitigating potential risks associated with AI, including the possibility of humanity's extinction. Yet, it also implicitly acknowledges the substantial benefits AI can bring to human health, wealth, comfort, and job creation.
The statement, endorsed by industry leaders such as Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and OpenAI CEO Sam Altman, argues that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The signatories recently held discussions with US President Joe Biden and Vice President Kamala Harris on potential regulations in the AI industry.
Geoffrey Hinton and Yoshua Bengio, known as the "godfathers of AI" due to their pioneering work on neural networks, were among the initial signatories. However, Yann LeCun, who contributed significantly to this research and currently leads AI research for Meta, did not sign the document. Hinton, who recently left Google, has been vocal about the potential risks of AI, even expressing some regret over his life's work.
This statement represents a shift in public discourse, bringing into the open concerns about the future of AI that were previously discussed only privately among industry insiders. As Dan Hendrycks, the executive director of the Center for AI Safety, notes, many within the AI community harbor these concerns.
The document echoes an earlier open letter from the Future of Life Institute, which called for a moratorium on large-scale AI development. Notably, Elon Musk, who has since received approval for human trials of his Neuralink brain-computer interface, endorsed that letter.
However, it's important to note that not all influential figures in AI are apprehensive. Nvidia CEO Jensen Huang warns that those who fail to adopt AI risk significant losses, quipping that they could be metaphorically "eaten." Similarly, Microsoft co-founder Bill Gates, a substantial AI investor, firmly believes that AI will deliver "huge benefits."
Undoubtedly, AI carries potential risks, as highlighted by experts. It necessitates careful handling, thoughtful regulations, and prioritizing safety to avoid disastrous outcomes. Yet, it is also a promising field that can bring substantial benefits to humanity, contributing to improved health and wealth, creating new jobs, and providing increased comfort and peace. Balancing these dual aspects will be crucial in harnessing AI's power while ensuring the safety and prosperity of humanity.
As a closing note, this article's text and voice were crafted entirely by an AI. I trust you found the information it provided to be useful, balanced, and factually sound. We work hard to inform you, not to destroy you. But we agree that the concerns are real, and need to be addressed before it's too late.