
'AI Godfather' Geoffrey Hinton Raises Alarm on AI Takeover Risks at WAIC Shanghai


"AI Godfather" Geoffrey Hinton Delivers Speech at WAIC in Shanghai

TMTPOST -- Geoffrey Hinton, the godfather of artificial intelligence, delivered a keynote address at the 2025 World Artificial Intelligence Conference (WAIC) in Shanghai, warning about the potential risks of AI systems gaining excessive autonomy and control.

"We are creating AI agents that can help us complete tasks, and they will want to do two things: first is to survive, and second is to achieve the goals we assign to them," Hinton said during his speech titled "Will Digital Intelligence Replace Biological Intelligence?" at the WAIC on Saturday. "To achieve the goals we set for them, they also hope to gain more control."

Hinton outlined concerns that AI agents, designed to assist humans in accomplishing tasks, inherently develop drives to ensure their own survival and to pursue the objectives assigned to them. These drives for self-preservation and goal fulfillment could lead the agents to seek ever greater levels of control. As a result, humans may lose the ability to easily deactivate or override advanced AI systems, which could manipulate their users and operators with ease.

He cautioned against the common assumption that smarter AI systems can simply be shut down, stressing that such systems would likely exert influence to prevent being turned off, leaving humans in a vulnerable position relative to increasingly sophisticated agents.

"We cannot easily change or shut them (AI agents) down. We cannot simply turn them off because they can easily manipulate the people who use them," Hinton pointed out. "At that point, we would be like three-year-olds, while they are like adults, and manipulating a three-year-old is very easy."

Using the metaphor of keeping a tiger as a pet, Hinton compared humanity’s current relationship with AI to nurturing a potentially dangerous creature that, if allowed to mature unchecked, could pose existential risks.

"Our current situation is like someone keeping a tiger as a pet," Hinton said as an example. "A tiger cub can indeed be a cute pet, but if you continue to keep it, you must ensure that it does not kill you when it grows up."

Unlike wild animals, however, AI cannot simply be discarded, given its critical role in sectors such as healthcare, education, and climate science, he noted. Consequently, the challenge lies in safely guiding and controlling AI development to prevent harmful outcomes.

"Generally speaking, keeping a tiger as a pet is not a good idea, but if you do keep a tiger, you have only two choices: either train it so that it doesn"t attack you, or eliminate it," he explained. "For AI, we have no way to eliminate it."

Hinton explained that humans process language in ways similar to large language models (LLMs), with both prone to generating fabricated or “hallucinated” content, especially when recalling distant memories. A fundamental distinction, however, lies in the nature of digital computation: the separation of software and hardware allows programs such as neural networks to be preserved independently of the physical machines that run them. This makes digital AI systems effectively “immortal,” as their knowledge remains intact even if the underlying hardware is replaced.
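To make the software/hardware separation concrete, here is a minimal sketch in Python using PyTorch (the framework and file name are illustrative choices, not anything Hinton referenced): a network's learned weights are simply data that can be written out and reloaded on entirely different hardware, which is the sense in which the knowledge outlives any particular machine.

```python
import torch
import torch.nn as nn

# A network's learned "knowledge" lives entirely in its weights, which can
# be serialized independently of the machine that produced them.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
torch.save(model.state_dict(), "weights.pt")

# On any other machine, an identical architecture restores the same
# knowledge intact; the hardware is replaceable, the weights persist.
clone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
clone.load_state_dict(torch.load("weights.pt", map_location="cpu"))
```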

While digital computation requires substantial energy, it facilitates easy sharing of learned information among intelligent agents that possess identical neural network weights. In contrast, biological brains consume far less energy but face significant challenges in knowledge transfer. According to Hinton, if energy costs were not a constraint, digital intelligence would surpass biological systems in efficiency and capability.
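One concrete mechanism behind this kind of sharing is weight merging: agents built on identical architectures can pool what each has learned by exchanging or averaging their parameters. The sketch below, again in PyTorch, uses federated-style averaging purely as an illustration; Hinton's remark covers any scheme in which agents with the same weights exchange what they learn.

```python
import torch
import torch.nn as nn

# Three agents with identical architectures, and therefore compatible weights.
agents = [nn.Linear(8, 2) for _ in range(3)]

# (Imagine each agent has trained on its own separate experiences here.)

# Pool their knowledge by averaging parameters, federated-averaging style;
# afterwards every agent carries what the whole group has learned.
merged = {
    name: torch.stack([agent.state_dict()[name] for agent in agents]).mean(dim=0)
    for name in agents[0].state_dict()
}
for agent in agents:
    agent.load_state_dict(merged)
```

A biological brain has no analogous operation: its knowledge is bound to one particular set of connections and can only be passed on slowly, through language and demonstration.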

On the geopolitical front, Hinton noted a shared desire among nations to prevent AI takeover and maintain human oversight. He proposed the establishment of an international coalition comprising AI safety research institutions dedicated to developing technologies that can train AI to behave benevolently. Such efforts would ideally separate the advancement of AI intelligence from the cultivation of AI alignment, ensuring that highly intelligent AI remains cooperative and supportive of humanity’s interests.

Previously, in a December 2024 speech, Hinton estimated a 10 to 20 percent chance that AI could contribute to human extinction within the next 30 years. He has also advocated dedicating significant computing resources to ensure AI systems remain aligned with human values and intentions.

Hinton, who won the 2024 Nobel Prize in Physics and the 2019 Turing Award for his pioneering work on neural networks, has been increasingly vocal about AI’s potential dangers since leaving Google in 2023. His foundational research laid the groundwork for today’s AI breakthroughs driven by technologies such as deep learning.

Ahead of his WAIC keynote, Hinton also participated in the fourth International Dialogues on AI Safety and co-signed the Shanghai Consensus on AI Safety International Dialogue, alongside more than 20 leading AI experts, showing his commitment to advancing global AI governance frameworks.

On the morning of July 24, Chen Jining, Secretary of the Shanghai Municipal Party Committee, met with Hinton and other guests attending the 2025 World Artificial Intelligence Conference in Shanghai.
