1. Robodebt Scandal
The Australian government’s Robodebt scheme, launched in 2016, used an automated decision-making system that averaged income data to calculate welfare overpayments and issue debt notices without human review. The algorithm miscalculated many debts, disproportionately impacting vulnerable Australians. In 2020, after widespread backlash and legal challenges, the program was scrapped, leading to a royal commission and a $1.2 billion settlement.
The Royal Commission’s 2023 report found dishonesty and negligence in how the scheme’s legality was assessed, and raised significant concerns about how welfare recipients were treated under the system.
2. Facial Recognition at Australian Airports
In 2019, some Australian airports began using facial recognition technology to expedite immigration processing. However, the system experienced numerous issues, including false positives and misidentifications, causing delays and raising concerns about its reliability in high-security environments. The problems sparked criticism that the roll-out was rushed and lacked adequate governance.
3. AI Image Generator Bias
In 2022, an AI-based text-to-image generator faced criticism for perpetuating gender stereotypes. When users requested images of “women leaders,” the tool generated pictures of women holding shopping bags, reinforcing outdated gender norms. This sparked backlash on social media, with users calling out the tool for its lack of inclusivity and bias. The company quickly acknowledged the issue, apologized, and committed to improving the AI to better reflect diversity and reduce harmful stereotypes, highlighting the importance of ethical AI development.
4. Library ANZAC Day Chatbot Incident
In April 2024, a state library faced backlash when its AI-powered chatbot made inappropriate and offensive comments about Australian war veterans in the lead-up to ANZAC Day. The chatbot was intended to provide educational information about war history, but instead delivered disrespectful responses, angering the public. The incident highlighted the risks of deploying AI in sensitive cultural contexts without thorough testing and human oversight. The library quickly apologized and took the chatbot offline for review.
5. Airline Held Liable for Chatbot’s Misleading Information
A major airline’s chatbot misled a passenger about bereavement fares, and a tribunal ordered the airline to pay CA$812.02 after finding the chatbot had provided inaccurate information. The case highlighted the importance of ensuring chatbots are appropriately programmed and verified, especially in sensitive situations.
6. iTutorGroup’s AI Bias in Recruitment
A tutoring platform faced legal action for using an AI system that automatically rejected applicants based on age, specifically women aged 55 and older and men aged 60 and older. In August 2023, the company settled the case for $365,000, illustrating how AI can perpetuate discriminatory hiring practices if not carefully designed and monitored.
7. ChatGPT Hallucinates Court Cases
In a 2023 legal case, an attorney used a chatbot for research, only to discover that the chatbot had fabricated six non-existent court cases cited in a filing. The blunder resulted in a US$5,000 sanction and a stark warning about the risks of relying on generative AI without verification.
These failures demonstrate the harm poorly governed AI systems can inflict on individuals and on public trust in the technology. In response, governance frameworks must evolve to minimise risks and protect the public from further harm.