The Dark Side of AI in UX: Bias, Hallucination & the Ethical Questions Designers Must Ask
How AI’s confidence can mislead designers, distort research, and quietly introduce ethical risks into our products.

TL;DR
AI isn’t dangerous because it’s biased or hallucinates - it’s dangerous because designers often trust those outputs too quickly. In a recent usability test, one of the lead designers accused our instructions of being “leading” based on a ChatGPT response, without asking for context.
This moment revealed a bigger truth: AI can create false confidence, bias our judgment, and weaken our critical thinking. Ethical UX in the era of AI requires slowing down, asking the right questions, and interrogating AI’s assumptions, not outsourcing our own thinking.
Want more like this? Subscribe to get my free UX prompt guide for designing with AI and join the exploration.
I didn’t plan to open this story with an arrogant lead designer.
But here we are.
A week ago, during a usability test recap, one of my senior designer colleagues pasted our findings into ChatGPT and immediately concluded:
“This question is leading. This test is invalid. nuff said”
Just like that.
One sentence.
Full stop.
Case closed.
What he didn’t know, or didn’t bother to ask, was this:
The question wasn’t leading.
It was instructional.
We were testing what happens after the task.
We wanted the behaviour that followed, not the phrasing of the instruction.
But because an AI model said it looked leading, he concluded it was leading.
He trusted the model more than the humans running the study, and definitely more than the context.
This is the new danger of AI in UX:
false confidence wrapped in fluent sentences.
AI speaks smoothly.
Humans trust smoothness.
And smoothness becomes “truth” - even when it’s wrong.
The Day I Realised AI Isn’t the Problem — People Are
The problem isn’t that ChatGPT said the question looked leading.
The problem is that someone with authority believed it without thinking.
No asking:
“Why does it look leading to you?”
“What context is ChatGPT missing?”
“What behaviour did we actually want to observe?”
“What assumptions are we making here?”
No 7 Whys.
No curiosity.
Just judgment.
And this is exactly how bias, hallucination, and ethical blind spots enter a product team.
Not because AI is malicious —
but because humans get lazy when a machine sounds smart.
AI Creates Problems Designers Aren’t Trained For
UX has always been about understanding humans.
Now UX also has to understand machines that think like humans… but not really.
AI has a few behaviours that every designer urgently needs to recognise:
1. The Bias Illusion
AI mirrors whatever dataset it was trained on.
If it saw more Western examples, male personas, English-speaking flows, or common patterns, it will assume that’s the world.
Bias becomes invisible, because it feels “normal.”
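The mechanics are easy to see in a toy sketch (the data and labels below are entirely hypothetical, invented for illustration): whatever pattern dominates the training sample quietly becomes the model’s unstated default.

```python
from collections import Counter

# Hypothetical, deliberately skewed "training data": one persona pattern
# dominates the sample, the way Western, English-speaking examples often do.
training_personas = (
    ["western_english_male"] * 70
    + ["western_english_female"] * 20
    + ["non_western_multilingual"] * 10
)

def model_default(samples):
    """The most frequent pattern becomes the model's invisible 'normal'."""
    return Counter(samples).most_common(1)[0][0]

print(model_default(training_personas))  # → 'western_english_male'
```

The model isn’t “choosing” that default; it’s just reflecting the skew of its sample, which is exactly why the bias feels like neutrality.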
2. The Hallucination Problem
AI generates confident answers even when it has no basis for them.
It fills the gaps with guesses.
Guesses made with style ≠ truth made with evidence.
3. The Authority Trap
Humans give AI “expert status” because it sounds certain.
Designers stop asking questions.
They stop challenging assumptions.
They stop being curious.
And slowly, the designer becomes the assistant, not the machine.
The Real Ethical Danger: When Designers Stop Thinking
AI doesn’t create unethical products.
Designers do — when they blindly accept what AI gives them.
The moment a designer thinks:
“AI already analysed this, so I don’t need to think deeply…”
That’s when the damage begins.
It can be small:
a mislabelled persona, a wrong assumption, a biased research summary.
Or huge:
a broken flow rolled out at scale,
a misinterpreted user behaviour,
a design that misleads instead of guides.
All because no one asked,
“Why does the AI think this way?”
“What’s missing?”
“What don’t we know?”
Ethical UX isn’t a policy.
It’s a muscle.
And AI is quietly making that muscle weaker - unless we intentionally fight it.
The 7 Whys: Your New Defence Against AI Overconfidence
If there’s one habit every designer needs in the era of AI, it’s this:
**Never accept an AI answer at face value. Interrogate it. Always.**
Use the 7 Whys method:
Why did it generate that assumption?
Why does this sound correct?
Why might this be wrong?
Why is context missing?
Why would a human think differently?
Why would a user behave differently?
Why does this matter for the product?
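If it helps to make the habit concrete, here is a minimal sketch (the function and names are my own, not an established tool or API): a review gate that refuses to treat an AI-generated claim as vetted until every one of the 7 Whys has a written answer.

```python
# The 7 Whys, verbatim from the method above.
SEVEN_WHYS = [
    "Why did it generate that assumption?",
    "Why does this sound correct?",
    "Why might this be wrong?",
    "Why is context missing?",
    "Why would a human think differently?",
    "Why would a user behave differently?",
    "Why does this matter for the product?",
]

def interrogate(ai_claim, answers):
    """Accept an AI claim only when every Why has a non-empty answer.

    `answers` maps each question to the team's written response; any
    skipped or blank answer keeps the claim marked as an unvetted draft.
    """
    missing = [q for q in SEVEN_WHYS if not answers.get(q, "").strip()]
    if missing:
        raise ValueError(
            f"'{ai_claim}' is still a draft: {len(missing)} Whys unanswered"
        )
    return {"claim": ai_claim, "review": answers}
```

The point of the sketch is the friction itself: the claim from my story (“this question is leading”) would have been rejected at the very first Why, because no one had written down why the model generated that assumption.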
AI is not a critical thinker.
It cannot reason with lived experience, culture, behaviour, or nuance.
It can only remix patterns and probabilities.
Designers, on the other hand —
we work with consequences.
We make decisions that affect real people.
Critical thinking is not optional anymore.
It’s ethical.
The Truth We Don’t Want to Admit
AI makes everything feel faster.
But it also makes everything feel easier - too easy.
And easy thinking is dangerous thinking.
The arrogant lead designer from my story didn’t realise this:
He wasn’t using AI wrong.
He was using himself wrong.
He stopped thinking.
He outsourced judgment.
He traded curiosity for confidence.
And that, right there,
is the dark side of AI in UX.
Not the technology.
Not the hallucinations.
Not the bias.
But the moment a designer assumes:
“The machine has already thought for me.”
If AI is the machine… then designers must become the mind.
We’re not here to accept what AI gives us.
We’re here to question it, interpret it, reshape it, and align it with human reality.
In the era of AI, the most powerful skill a designer can have isn’t speed.
Or aesthetics.
Or prompt-engineering.
It’s ethical critical thinking.
The courage to ask:
“Is this really true?”
“Who might this harm?”
“What bias is hiding here?”
“What context is missing?”
“What would a real user actually do?”
The Designer’s Mind Is the Last Line of Defence
AI will keep getting smarter, faster, smoother, and more convincing.
But that doesn’t mean our thinking should get softer.
The truth is this:
AI won’t replace designers - but it will replace designers who stop thinking.
The future of UX isn’t about who can prompt the best or move the fastest.
It’s about who can pause, question, interpret, and challenge the machine with a human lens.
It’s about designers who understand that every AI answer is a draft, not a decision.
Because products don’t become unethical in a single big moment.
They become unethical in hundreds of small ones -
when assumptions go unchecked,
when context is ignored,
when bias slips through quietly,
and…
when curiosity is replaced by convenience.
If we want to design responsibly in an era of intelligent systems, we need to protect the one skill AI can’t mimic:
our ability to think critically, ethically, and with intention.
AI can give us speed.
But only designers can give meaning.
And that’s the real work now -
not fighting the machine,
but staying awake while using it.
If this kind of exploration resonates with you…
👉 Subscribe here to get my free UX prompt guide for designing with AI - it’s the easiest way to keep exploring with me.