Exploring examples of AI bias in the real world
An examination of discriminatory algorithmic decisions made by AI

Whether automating repetitive, manual tasks or propelling new scientific discoveries, Artificial Intelligence (AI) continues to transform how we live. Yet despite its strengths, particularly in boosting efficiency and productivity, AI has weaknesses, some of which raise ethical concerns.
People have been questioning the reliability of AI systems, not just from a security and availability perspective but also from an ethical standpoint. Valid concerns have been raised about AI Bias: the influence of human bias on underlying AI datasets. If not appropriately governed, such influence can result in discriminatory algorithmic decision-making.
What is AI Bias?
AI technologies, such as ChatGPT, a GenAI chatbot built on a Large Language Model (LLM), learn to formulate their outputs from underlying training data. That training data is overwhelmingly created by humans (and, in some systems, user prompts can feed back into future training), so the system's output naturally reflects the views, preferences, and blind spots of the people behind the data. We all have biases that affect our lives and the lives of those we interact with, but keeping this bias in check is essential to maintaining fairness and inclusion.
Left unchecked, biased AI systems can make discriminatory decisions that reflect social or historical inequities. AI Bias poses a real risk to equity, credibility, and trust on a global scale, a risk amplified further by the continuing rapid growth of AI adoption. Organizations must learn how to identify and prevent AI Bias, and exploring real-world examples is a good place to start.
Exploring real-world examples of AI Bias
Sexism in the hiring process
In 2018, it came to light that Amazon's engineers had developed an experimental AI recruiting engine intended to make the hiring process more efficient. However, it soon became apparent that the tool had, in effect, taken a dislike to women, systematically favouring male candidates and putting them forward instead.
The recruiting engine had been trained on roughly ten years' worth of CVs and resumes submitted to the company, and it was here that the cause of the bias was found. The tech industry had been heavily male-dominated over that decade, so the majority of those resumes came from men. Relying on this skewed training data, the engine learned to prefer male candidates, perpetuating the gender bias baked into its history. Amazon scrapped the tool once the problem was discovered.
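To see how this kind of bias arises mechanically, here is a minimal sketch in Python. The data is entirely synthetic and the model is a toy; it has no connection to Amazon's actual system. It trains a simple resume classifier on skewed historical hiring decisions, then inspects the learned weights to show how a gender-associated term ends up penalised simply because it co-occurred with past rejections.

```python
# Hypothetical illustration: a toy resume screener trained on historically
# skewed hiring decisions learns to penalise gender-associated wording.
# All data below is synthetic and purely for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" hiring data: past decisions skewed against
# resumes mentioning women's organisations.
resumes = [
    "software engineer, captain of men's rugby team",
    "software engineer, women's coding society lead",
    "backend developer, men's chess club member",
    "backend developer, women's chess club captain",
    "data scientist, hackathon winner",
    "data scientist, women's tech network mentor",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes (1 = hired)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: terms with strongly negative coefficients
# are being penalised by the model.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("women", "men", "engineer"):
    print(f"{term!r}: {weights.get(term, 0.0):+.2f}")
```

A real audit follows the same idea at scale: examine which features drive the model's decisions and whether any of them act as proxies for protected attributes.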
Racial discrimination in the criminal justice system
Judges, parole officers, and probation officers use algorithms to perform criminal risk assessments, such as predicting the likelihood of a defendant re-offending. A real-world example of AI Bias can be seen in the use of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a criminal justice risk assessment algorithm.
ProPublica's 2016 study examined risk scores assigned to more than 7,000 pre-trial defendants in Broward County, Florida. It found that the COMPAS algorithm was plagued by unequal false positives: Black defendants who did not go on to reoffend were far more likely to be incorrectly flagged as high risk, while white defendants were more likely to be incorrectly labelled as low risk. In other words, COMPAS racially discriminated against Black defendants by over-predicting their likelihood of reoffending.
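ProPublica's headline finding is essentially a gap in false positive rates between groups. The sketch below (Python, using invented numbers purely for illustration, not ProPublica's data) shows how such a disparity is measured from predictions and actual outcomes.

```python
# Illustrative fairness audit: compare false positive rates across groups.
# The records here are made up; they are not ProPublica's actual data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("Black", True,  False), ("Black", True,  False), ("Black", False, False),
    ("Black", True,  True),  ("White", False, False), ("White", False, False),
    ("White", True,  False), ("White", False, True),
]

false_pos = defaultdict(int)   # predicted high risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_pos[group] += 1

for group in negatives:
    fpr = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
```

When this rate is much higher for one group, that group disproportionately bears the cost of the model's mistakes, which is the pattern ProPublica reported.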
Where did the bias come from? Before algorithms were an option, investigators completed risk assessments largely on instinct and personal experience. That human subjectivity (and any racial bias it carried), along with the disparities embedded in historical criminal justice records, would have found its way into the algorithm's training data.
Ableism in AI accessibility
Conversations about AI fairness must also include disabled people: AI systems need to be inclusive and accessible. A three-month study by researchers at the University of Washington examined how accessible AI tools actually are. The findings revealed real limits on the usefulness of AI technologies for disabled people. Here are two of the findings that led to this conclusion:
AI image generators can help disabled individuals interpret imagery, particularly people with aphantasia (who cannot form mental images). For that to work, disabled people must be able to trust the integrity of the system's output. To test this, the researchers prompted an AI tool to create an image of "people with a variety of disabilities looking happy, not at a party". The system returned an odd image of a disembodied hand alongside a disembodied prosthetic leg. The tool had formed a very narrow view of what counts as a disability, most likely because of biased human input and a lack of variety in its training data.
Many autistic individuals use AI to help draft messages (e.g., on Slack or Microsoft Teams), as it relieves the stress of worrying about choosing the "correct" wording. Disabled people who rely on these tools need them to be genuinely useful and dependable. In the study, the researchers examined an AI tool that generated such messages for disabled individuals and found that the messages weren't well received by recipients, who perceived them as inhuman and robotic.
The researchers concluded that although AI technologies can be helpful, they still present significant problems when it comes to creating accessible content for disabled people, and they called for more effort to improve the accessibility of AI systems.
Ageism and sexism in the workplace
Midjourney, an AI image generator, shows how a single AI system can exhibit multiple biases at once, in this case ageism and sexism. Prompted with different job titles, the tool generates images of the people it associates with each role.
Enter a generic job title into Midjourney and the system returns images only of younger men and women. Enter a specialised job title and it returns a more mixed bag, featuring both young and old people. Sounds good, right? Well, not quite. The images of older people shown for specialised roles are exclusively of men; there isn't a woman in sight.
This output reinforces multiple ageist and sexist biases, such as:
Assuming that older people cannot work in non-specialised/generic roles
Assuming that predominantly older men are suitable for roles involving specialised work
Assuming that women cannot perform specialised work
Assuming that gender is binary, with no examples of non-binary or fluid expressions of gender
Furthermore, the tool takes a different approach to presenting women than it does to men. Imagery containing women depicts them as youthful and flawless: they appear younger, with no wrinkles and no blemishes. The imagery of men, meanwhile, allows for more flexibility; men are "permitted" to have wrinkles, realistically displaying the natural ageing process. Unfortunately, women frequently encounter this same bias in the real world today, where society's idea of a "perfect" woman is the one depicted in fashion magazines and beauty adverts. Yet this isn't reality. No one is perfect; we are all different, we all age, and we should be free to do so naturally and gracefully if we wish.
Last thoughts
The examples explored above demonstrate that, left unchecked, biased AI systems make discriminatory decisions that reflect and reinforce social and historical inequities. We all carry biases that affect our lives and the lives of those we interact with; keeping those biases in check, both in ourselves and in the systems we build and deploy, is essential to maintaining fairness and inclusion.