Artificial intelligence is widely represented in science fiction as a threat to human quality of life or survival. As investment in AI research and development has intensified in recent years, many of these threats are transitioning from fiction to reality. The following are risks that are commonly associated with artificial intelligence.
Existential Risk
The potential for certain types of AI, such as systems capable of recursive self-improvement, to develop malicious, unpredictable or superintelligent behavior that represents a large-scale risk.
Privacy
The potential for technology such as sentiment analysis to monitor human communication has broad implications for privacy rights. For example, a government could monitor practically all electronic communications for attitudes, emotions and opinions using AI techniques.
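To illustrate why this is technically easy at scale, the following is a minimal sketch of lexicon-based sentiment analysis. Real monitoring systems would use trained models rather than word lists; the lexicon, function name and messages here are invented for illustration only.

```python
# Hypothetical sketch: crude lexicon-based sentiment scoring.
# The word lists and messages are invented examples, not a real system.

POSITIVE = {"good", "great", "support", "happy", "agree"}
NEGATIVE = {"bad", "angry", "oppose", "unfair", "protest"}

def sentiment_score(text: str) -> int:
    """Return a crude score: positive word count minus negative word count."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

messages = [
    "I agree, the new policy is good",
    "This is unfair and people will protest",
]
for msg in messages:
    label = "positive" if sentiment_score(msg) > 0 else "negative"
    print(f"{label}: {msg}")
```

Even this toy scorer can classify millions of messages per minute on commodity hardware, which is what makes blanket surveillance of electronic communication a realistic concern rather than a hypothetical one.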
Quality Of Life
The potential for artificial intelligence to be used in ways that decrease quality of life. For example, a nursing home that cares for elderly patients using robot nurses that provide no human contact.
Single Point Of Failure
Replacing diverse human decisions with a handful of algorithms may represent a single point of failure. For example, if 40% of self-driving cars ran a common operating system, a bug in a software update could cause mass accidents.
Weaponization
The potential for weaponization of artificial intelligence, such as swarm robots, whereby machines make decisions to harm humans.
© 2010-2023 Simplicable. All Rights Reserved. Reproduction of materials found on this site, in any form, without explicit permission is prohibited.