Bias in Design Thinking and AI
Inês Costa - 5 October 2023 - 6 min read
How are creativity and bias affecting designers in the age of AI?
Designers can be described as the perfect fusion of problem-solving and human creativity, shaping the intangible into the visible. With the increased interest in AI, organisations are rapidly integrating cutting-edge tools for automating tasks, deep learning, and data-driven ideation. But does this symbiotic relationship inadvertently perpetuate bias within the creative process?
Creative Bias and Design Thinking
First, we have to understand Design Thinking, the experiential journey of any creative human. Design Thinking is not linear; it is full of iterations, challenges, and user "understanding".
It focuses on these five fundamental stages:
Empathise > Define > Ideate > Prototype > Test
Of these stages, Empathise stands as a cornerstone. Let’s look at this example:
"If you are reading this, you’re part of the cohort that comprehends the most universally embraced languages: English. Ironically, you’re also connected to the internet, the gateway to this very article. Your choice to engage with this article suggests higher education."
These assumptions form easily during what we call the Empathise phase.
When empathy towards the subject is scarce, it is easy to detach from any emotional connection. This is where designers become agents of influence, often failing to untangle their designs from their assumptions and ideologies.
Amid the rapid evolution of visual communication, from archaic scripts to the endless tapestry of modern TikTok scrolls, misinterpretations and conflicts arise.
Misinterpretations are hard to avoid, but at their nucleus lies the adoption of a universal mindset during the design process: a challenge that often leads us back to assumption-led pathways.
As we operate within Western principles, we tend to carry latent biases. The influence of Modernism is more present in design than ever, with the continuous abandonment of ornamentation in favour of clean, legible compositions: a standardisation that flattens cultural nuance.
Bias and Artificial Intelligence
AI bias occurs in the earliest stages of data collection and processing. Algorithms are designed, and their data supplied, by humans. Much like the Empathise phase in Design Thinking, an assumption-led process, the question persists: are we empathising with the data itself?
AI systems make decisions based on training data that is tainted by human biases, often reflecting social inequities and skewed historical patterns even when sensitive variables such as race, gender, and sexual orientation are removed.
One notable example is Amazon's recruiting engine, an AI-powered tool built to review job applications and identify top candidates. It was later found to be biased against female applicants: the system had been trained on ten years of résumés, the majority of which came from men, and so it learned to favour male candidates over women.
This example showcases the importance of carefully auditing and curating data. Otherwise, we risk reducing the ability of certain groups to participate fully in society and the economy.
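A first auditing step can be as simple as measuring how groups are represented in the training data before any model sees it. The sketch below is a minimal illustration of that idea; the records, field names, and values are entirely hypothetical, not Amazon's actual data or pipeline.

```python
from collections import Counter

# Hypothetical training records: each résumé is tagged with the
# applicant's self-reported gender (fields are illustrative only).
resumes = [
    {"text": "résumé A", "gender": "male"},
    {"text": "résumé B", "gender": "male"},
    {"text": "résumé C", "gender": "female"},
    {"text": "résumé D", "gender": "male"},
]

def audit_representation(records, attribute):
    """Return each group's share of the dataset for a sensitive attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

shares = audit_representation(resumes, "gender")
print(shares)  # → {'male': 0.75, 'female': 0.25}
```

A skew like this one (75% vs 25%) is exactly the kind of imbalance that, left unexamined, lets a model learn the majority group as the "default" candidate.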
Understanding and measuring fairness is where the real responsibility lies. "Counterfactual Fairness" is a promising technique that ensures a model's decisions remain the same in a counterfactual world where attributes deemed sensitive, such as race, sexual orientation, or gender, are changed.
These technical approaches are essential to determine when a system is fair enough to be released and in which situations fully automated decision-making should be allowed.
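The core intuition behind a counterfactual check can be sketched in a few lines: flip the sensitive attribute, re-run the model, and see whether the decision changes. This is a deliberately simplified illustration (the function names and the toy "model" are my own, and real counterfactual fairness also reasons about causal effects, not just attribute swaps).

```python
def counterfactual_check(model, applicant, sensitive_key, alternatives):
    """Return True if the model's decision is unchanged when the
    sensitive attribute is swapped for each alternative value."""
    baseline = model(applicant)
    for value in alternatives:
        counterfactual = {**applicant, sensitive_key: value}
        if model(counterfactual) != baseline:
            return False
    return True

# Toy "model": a biased rule that (wrongly) conditions on gender.
biased_model = lambda a: a["years_experience"] >= 3 and a["gender"] == "male"

applicant = {"years_experience": 5, "gender": "male"}
print(counterfactual_check(biased_model, applicant, "gender", ["female"]))
# → False: the decision flips with gender, so the model fails the check
```

A model that ignored gender entirely would pass this check for the same applicant, which is the behaviour the technique is meant to certify.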
An Unbiased Path with Artificial Intelligence:
1. Acknowledge Your Privilege
Designers (partly) "control" the message. We occupy a position of privilege in which we tend to shape the narrative to align with the "norm". However, this norm doesn't reflect the broader population. For instance, my personal experience compels me to mark "Black, of African descent" as my racial identity, a stark juxtaposition with the connotations the word "black" carries: black box, black sheep, black area. But for you it could be "just" a colour or a popular saying. Reflect on and test your own implicit assumptions; Project Implicit by Harvard University is a good place to start.
2. Educate all stakeholders and let them educate you
You have to stay up to date! AI and design are fast-paced fields, full of trends and cutting-edge technologies. Be eager to learn and apply, but always remember to have the uncomfortable conversations that question what you have just learned. Is it objective, or biased? Engage with your client, your colleagues, and your CEO, and allow yourself to be educated from a different viewpoint. Google AI regularly shares interesting articles, but remember to diversify your perspectives and consider global sentiments.
3. Prioritise Transparency and Diversity
You will make mistakes; you will fail. Own up to it, assume responsibility, and re-learn. Learning from failure makes you a good designer, a good colleague, a good leader, and ultimately a good person. Always be on the lookout for a diverse and inclusive pipeline for your creative and AI teams, as diverse teams are better at anticipating, spotting, and reviewing biases. For your next job opening, check Diverse, who are fighting the homogeneity of the AI field and the wider tech sector. An overlooked view can give you a fresh perspective.
AI has the potential to benefit many businesses and improve creative workflows, but only if we train it not to produce biased results. AI can help humans, but only if humans work together to tackle bias. Reflecting on the initial phase of Design Thinking, ask yourself whether you are truly empathising with marginalised groups and whether biases cloud your approach.