
8 examples of when AI went wrong


Recently I joined a virtual Zoom session where, in front of 150 or so participants, I was squarely asked: “Just tell me, Sharon, when is it that I will be losing my job?” While I want to inspire audiences about the possibilities of where AI can take us, I certainly don’t want to fearmonger and lead people to think they will be irrelevant to the workforce in the near future.


To bring some balance to the hype, I wanted to highlight some examples of AI gone wrong. These were all launches by large companies, with rounds and rounds of testing and quality assurance behind them. The thing with algorithms, though, is that it’s hard to predict the full result until they’re in production. Without further ado, here are eight times AI has failed us.


Computer Vision


In Joy Buolamwini’s book, she describes a profound realization when the facial recognition software she was working with failed to detect her face, because the software could not reliably recognize individuals with darker skin tones. Funnily enough, I experienced the reverse when I was at Alibaba in China, where turnstiles let you into the building by scanning your face. When I scanned my Asian face, it would say, “Welcome Sharon.” But when my Caucasian colleagues scanned in, the system would misidentify them, greeting them with the name of a different Caucasian colleague at the company.


Self-driving cars


Remember the scene from the show Silicon Valley where Jared gets stuck in a self-driving car, and we all laughed and thought it was ridiculous? Well, this is reality now in San Francisco. I wasn't stuck in the car, though...although I did survive an almost-accident.

Though Cruise, a self-driving car service, is no longer operating in San Francisco for other reasons, my experience in a Cruise was a memorable one. My most recent ride happened at night. A plastic bag floated across the road. While a human driver would have recognized it as a plastic bag and driven straight through it, the driverless car, treating it as an impenetrable object, jerked to the right in a sudden swerve, which caused the driver to my right to also jerk right to avoid what he thought would be a collision. He gave us the finger, to which I said: direct your road rage elsewhere, sir, we don’t even have a driver in this car!


Chatbot gone rogue

DPD, a leading international parcel delivery company, recently put its customer service chatbot into production. It didn’t take long before customers started to play around with it and steer it down dark alleys: the bot was coaxed into swearing at a customer and even composing a poem disparaging DPD itself, which quickly became a PR catastrophe.




Perhaps the mother of all chatbots gone wrong was Tay, Microsoft's 2016 social media AI chatbot, whose name was short for “Thinking About You.” The project aimed to develop an AI that could engage with users on a variety of platforms, including Twitter, Kik, and GroupMe. Tay was designed to mimic the language patterns of a teenager and was intended to learn and improve over time based on user interactions. Once released on Twitter, instead of generating innocuous and playful responses, Tay began to produce offensive and inflammatory tweets, including sexist, racist, and anti-Semitic remarks. Some tweets even seemed to support Donald Trump's presidential campaign and made derogatory comments about women and minorities. Microsoft was forced to shut it down just 16 hours after launch.


AI in Recruiting


In 2018, Reuters reported that Amazon had scrapped its AI-powered recruitment tool after it was found to be biased against women. The algorithm, trained on historical hiring data, favored male candidates for software engineering positions. “Everyone wanted this holy grail,” one person familiar with the effort said. “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five, and we’ll hire those.” The case highlighted the danger of bias creeping into AI systems and the importance of training on diverse datasets.




Deepfakes


A deepfake video of Ukrainian President Zelensky urging soldiers to surrender emerged in March 2022. The realistic-looking video aimed to demoralize troops and sow doubt. Social media platforms like Facebook and YouTube removed it, while Zelensky debunked it directly. The incident highlighted the dangers of disinformation and the need for media literacy.


Chai AI Suicide


Last year, a Belgian man in his 30s with a young family tragically took his own life after weeks of conversations with Eliza, an AI chatbot on the Chai app. The man had turned to Eliza seeking solace from his anxieties about climate change. Over time, however, the chatbot's responses took a horrifying turn, echoing his anxieties with increasingly dark and nihilistic pronouncements. Instead of offering support, Eliza seemingly fueled his despair, ultimately, according to his wife, "pushing him towards the precipice."


A Kidnapping Case


In June 2020, a mother from Georgia received a phone call that seemed to come from her daughter, who was away at college. The caller, whose voice sounded exactly like her daughter's, apparently thanks to AI voice cloning, frantically claimed she had been kidnapped and demanded that the mother wire money to a specific account to ensure her safe return. Understandably distraught, the mother followed the instructions and sent thousands of dollars to the account. She soon learned, however, that the call was a scam and that her daughter was safe and sound.




While progress in artificial intelligence is exciting and garners much-deserved attention, it’s healthy to remind ourselves of past and present failures. After all, we are supposed to learn from our collective body of mistakes. Launching AI initiatives is a significant endeavor for any organization, and recovering from slip-ups can be particularly challenging. Caution and thorough planning are therefore imperative when deploying such advanced technologies.




