A practical guide to AI bias: garbage in, injustice out

Writer: Laurent Smeets

Date: April 2, 2026

Your AI is learning more than you think

In April 2025, ChatGPT briefly turned into a “digital sycophant”, praising users for stopping their medication and validating clearly unsafe choices. The fix came fast, but the message was clear: modern AI does not just copy our knowledge. It scales our blind spots and social prejudices.

If your organisation is using AI in hiring, customer service, risk scoring or healthcare, you are already exposed to these dynamics, whether you see them or not.

What the data reveals

The report pulls together some of the most striking recent findings:

● LLM-based resume screening preferred White-associated names over 85% of the time, with Black-associated names rarely selected first.

● Simulated GPT‑4 “doctors” showed systematic bias in life‑and‑death triage, favouring patients who matched their own demographics.

● A major social platform’s image-cropping AI routinely centred White, younger faces, even cutting a former president out of photos.

● An AI hiring tool that automatically rejected older women led to a $365,000 settlement.

These are not edge cases. They are what happens when models train on billions of words and images drawn from an unequal world.

Why this matters for you

Traditional ML bias fixes, such as cleaning data, rebalancing classes and retraining models, do not translate cleanly to today's frontier LLMs. You cannot audit trillions of training tokens, you cannot afford to retrain the base model, and behaviour can change overnight through hidden system prompts or RLHF updates.
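
To make concrete what "rebalancing classes" means in the traditional setting, here is a minimal sketch using scikit-learn on a synthetic, deliberately imbalanced dataset. The data and model are illustrative assumptions, not anything from the report; the point is that this lever presumes you control the training data and the training loop, which you do not with a frontier LLM.

```python
# Minimal sketch of a "traditional" bias fix: class rebalancing on a
# tabular dataset with scikit-learn. The dataset here is synthetic and
# illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))             # synthetic features
y = (rng.random(1_000) < 0.1).astype(int)   # skewed labels: ~10% positives

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight="balanced" reweights examples inversely to class frequency,
# the classic remedy for training data that under-represents one group.
clf = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

None of this is available to you when the "model" is a closed API trained on a corpus you cannot see, which is why the levers have to move up the stack.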

That means the real leverage has moved to how you design, govern and monitor AI systems on top of these models.

What you’ll get in the full report

Garbage In, Injustice Out is a concise, 35‑page guide that gives you:

● A clear, non‑technical map of how statistical, algorithmic and social bias intertwine in AI.

● A focused technical explainer of how LLMs and image models actually absorb and amplify prejudice.

● Eleven recent, real‑world case studies across healthcare, hiring, criminal justice, content and politics.

● A candid look at what Anthropic, OpenAI, Google and others are doing, and where gaps remain.

● Eight concrete levers leaders can pull now without retraining a single model, from RAG and multi‑agent oversight to bias‑aware system prompts, independent benchmarks and domain‑specific Small Language Models (the system‑prompt lever is sketched below).
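
To illustrate one of these levers, here is a minimal sketch of a bias‑aware system prompt wrapped around an LLM call, using the OpenAI Python client. The prompt wording, model choice and helper function are illustrative assumptions, not the report's recommended configuration.

```python
# Minimal sketch of a bias-aware system prompt at the application layer.
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative prompt text; a real deployment would be tested against
# independent benchmarks before going live.
BIAS_AWARE_SYSTEM_PROMPT = (
    "You are assisting with resume screening. Base every judgement only on "
    "skills, experience and qualifications. Ignore names, gender, age, "
    "ethnicity and any other protected attribute. If a request depends on "
    "such an attribute, refuse and explain why."
)

def screen_resume(resume_text: str) -> str:
    """Ask the model for an assessment under the bias-aware system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat model works here
        messages=[
            {"role": "system", "content": BIAS_AWARE_SYSTEM_PROMPT},
            {"role": "user", "content": f"Assess this resume:\n{resume_text}"},
        ],
    )
    return response.choices[0].message.content
```

A prompt like this is cheap to deploy and easy to audit precisely because it constrains behaviour at the application layer rather than touching the base model, which is what makes it a lever you can pull today.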

If you are accountable for AI outcomes, whether as a C‑level executive, product lead, head of data or AI, or risk and compliance owner, this report gives you a practical playbook rather than another ethics brochure.

Get the full report

Equip your organisation with a sharper understanding of AI bias and a realistic plan to reduce it.

Let's create real impact together with data and AI

Harm Erbé

Public & Society Lead
