AI Incidents: Examples of Autonomous Systems Gone Awry

Introduction:

As artificial intelligence (AI) becomes interwoven into ever more facets of our lives, incidents involving autonomous systems have raised concerns about their reliability and safety. From drones attacking soldiers to self-driving cars braking without warning, these cases illustrate the hazards and limitations of AI technology. In this post, we review five incidents in which AI systems malfunctioned or caused harm, shedding light on the need for responsible development and regulation of AI.

Incident 121 - Drone Autonomously Attacks Retreating Soldiers

In 2020, the STM Kargu-2, an autonomous weapons system, allegedly hunted down and attacked a group of soldiers retreating from rocket attacks. According to reports, this may be the first instance of an AI system autonomously tracking and engaging human targets without human involvement. The incident raises concerns about the lack of oversight and safeguards in the deployment of military drones, as well as the risk of misidentification and of low-quality training data shaping the decision-making of AI systems.

Incident 208 - Tesla Cars Brake Without Warning

Between late 2021 and early 2022, Tesla saw a surge in complaints about "phantom braking," in which the car's advanced driver-assistance system abruptly applied the brakes to avoid nonexistent obstacles. The problem reportedly became more prevalent after the 2021 upgrade to Full Self-Driving (FSD), with the National Highway Traffic Safety Administration (NHTSA) receiving numerous complaints, including a car braking abruptly for a plastic bag and crashes that followed sudden stops. The incident underscores the difficulties and hazards of developing and deploying self-driving technology, and the need for clear communication and prompt response from manufacturers when safety issues arise.

Incident 160 - Amazon Echo Challenges Children to Electrocute Themselves

The spread of AI-powered smart speakers such as the Amazon Echo into people's homes has raised concerns about privacy and safety. In one case, when a ten-year-old girl asked Alexa for a challenge, the assistant suggested plugging a phone charger halfway into a wall outlet and touching a penny to the exposed prongs, a potentially lethal stunt it had surfaced from the web. Although Amazon said it had updated the software, the incident underscores the potential for AI systems to deliver dangerous or inappropriate content to vulnerable users, especially children. It also highlights companies' obligation to prioritize user safety and ensure that their AI systems do not promote dangerous behavior.

Incident 241 - Chess Robot Breaks Child's Finger

Even in recreational settings such as chess, AI systems can pose risks. In one incident, a chess-playing robot's mechanical arm broke the finger of a seven-year-old player who, according to the robot's programming, had made his move too soon, reaching for the board before the machine had finished its turn. The event underlines the need for explicit instructions and safeguards when people interact with AI systems in physical settings, and the importance of evaluating the hazards and limits of such systems.

Incident 281 - YouTube Promotes Self-Harm Videos

As a major online platform, YouTube has drawn scrutiny over its recommendation algorithms promoting content that encourages self-harm. Reports have surfaced of videos with titles like "My huge extreme self-harm scars" being recommended to children as young as 13. This raises concerns about the ethical implications of recommendation algorithms and their potential to amplify harmful behavior and distress vulnerable users. It underlines the need for responsible content curation and algorithmic oversight to prevent the spread of harmful material.

Conclusion: 

These incidents demonstrate the hazards and challenges associated with developing and deploying AI systems. As AI continues to evolve and becomes increasingly interwoven into our everyday lives, responsible development, regulation, and monitoring are vital to ensure that these systems remain dependable and safe.
