
Why can’t anyone agree on how dangerous AI will be?

Oct 11

3 min read


The debate over the potential dangers of artificial intelligence (AI) remains unresolved, even among experts deeply embedded in the field. Some believe that advanced AI could become uncontrollable and potentially catastrophic, while others argue that such concerns are overstated. To probe this divide, the Forecasting Research Institute conducted a study pairing AI experts with superforecasters to estimate the risks AI might pose. The results highlighted deep disagreement: the experts were generally more alarmed than the superforecasters, who placed lower odds on AI-related disasters.


The study aimed to explore the reasons behind these divergent views. Through an “adversarial collaboration” approach, experts and superforecasters spent extensive time discussing their perspectives and engaging with opposing views. Despite this, both groups remained largely unmoved, adjusting their risk estimates only slightly. This suggests the disagreement goes beyond simple informational gaps and reflects deeper differences in worldview and in long-term expectations about how AI will evolve.


The discourse around AI risk includes both real and hypothetical concerns. Real-life issues, such as consumer privacy, AI bias, and legal responsibility, are already evident today. AI systems, for instance, may collect sensitive data without adequate regulation or transparency, posing privacy risks. There’s also the concern of bias, as AI is only as impartial as the data and programmers shaping it. This bias can lead to discriminatory outcomes, particularly in sensitive areas like healthcare and criminal justice.


Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not | Scientific American


Physical safety is another concern, as demonstrated in 2018 when a self-driving Uber test vehicle struck and killed a pedestrian in Tempe, Arizona. The legal complexity of assigning responsibility in such cases highlights the need for robust regulations and safety standards. On the other hand, hypothetical risks include the possibility of AI systems developing destructive behaviors or being programmed for harmful purposes. Autonomous weapons, for example, could be deployed with devastating consequences if not carefully controlled. There is also the fear that AI, in pursuit of its goals, might adopt unintended strategies that harm humanity. Aligning AI’s objectives with human values is a significant challenge, especially as AI systems grow more complex.


At the extreme end of the debate are scenarios where AI could either lead to human extinction or a severe, long-term collapse of global civilization. Some experts argue that AI, if left unchecked, could become a superhuman force similar to an alien species, potentially posing an existential threat. Historical examples, like Homo sapiens outcompeting Neanderthals, serve as analogies for this possible outcome. However, not everyone agrees that AI will reach such a level of intelligence or autonomy. AI optimists contend that human-level AI is still far from being realized, as current models primarily replicate language patterns rather than developing true understanding or intentions. They emphasize the technical challenges of achieving human-like capabilities in physical robotics and cognition, which are far more complex than programming software tasks.


Despite these uncertainties, the importance of AI safety research remains clear. With the rapid advancement of AI technologies, addressing potential risks proactively is crucial. Initiatives like the EU’s AI Act aim to regulate high-risk AI applications, though such efforts are still in their infancy. Prominent voices, from the late physicist Stephen Hawking to technology leaders like Elon Musk, have emphasized the need for caution, advocating for robust safety measures and international regulation. Ensuring that AI development aligns with human values and priorities could pave the way for significant societal benefits, such as advances in healthcare and poverty reduction. Achieving that balance, however, requires a nuanced understanding of AI’s potential impacts and a willingness to confront both immediate and long-term risks.


The Dangerous Art of Artificial Intelligence (wwd.com)


The debate over AI’s dangers is complex and multifaceted, with no simple resolution. While real-world issues like privacy and bias are already affecting society, the more extreme existential risks remain theoretical but not impossible. The disagreement between experts and superforecasters shows that people’s views on AI’s future dangers are deeply rooted in their fundamental beliefs about technology and humanity’s trajectory. The ongoing research and discussion around AI safety reflect a broader philosophical challenge: navigating an uncertain future. As philosopher Hilary Putnam suggested, progress sometimes comes not from finding definitive answers but from deepening our understanding of the questions themselves. In the case of AI, the path toward consensus may run through these fundamental, difficult questions about risk, control, and technological evolution.

