The Inclusive Innovation Playbook
Countering Bias

Understand how human and algorithmic bias can impact outcomes for certain groups
Bias can be a significant challenge in the creation of inclusive and accessible digital products and services. Bias refers to a tendency to think or act in a certain way based on preconceived beliefs, values, and experiences.
When creating digital products and services, biases can impact every aspect of the design process, from user research and feedback to product decisions and marketing. The result can be a product that only caters to a narrow group of users, neglecting the needs and preferences of other user groups.
To overcome this challenge and create truly inclusive products and services, it's important to be aware of biases and make an effort to minimize their influence on the design process. This includes being mindful of the diverse perspectives and needs of all users and ensuring that the product is accessible and usable for everyone.
A good place to start is to understand your own biases. You can take an implicit bias test and encourage your team members to do the same.
Conduct a bias review to understand where bias may be present in your current people, practices and products, and to identify ways to counter it.

Human bias

Here is a list of some common cognitive biases with a definition and an example of how each one could affect the creation of digital products and services:
Confirmation bias: The tendency to favour information that supports one's preexisting beliefs or values, disregarding information that contradicts them.
For example: A product manager may only seek out feedback and user data that confirms their initial product design decisions, disregarding data that suggests a different approach would be more effective.
Availability heuristic: The tendency to rely too heavily on information that is easily accessible or comes to mind quickly.
For example: A product team may make decisions based on the user feedback that comes to hand most easily. This may represent the perspective of only a small, homogeneous group of users, excluding groups whose needs and preferences take more effort to understand.
Anchoring bias: The tendency to rely too heavily on the first piece of information encountered when making decisions.
For example: A product team may conduct initial user research and gather feedback that suggests a certain feature is critical to the success of the product. As a result, the final product may fail to be inclusive for a diverse range of users because it is anchored to the initial feedback from a narrow sample of users.
Framing effect: The way a problem or situation is presented can affect the decision made about it.
For example: A marketing team may present a new feature in a way that emphasises its potential benefits for a specific user group, leading to the exclusion of other user groups who may not find it relevant or useful.
Affinity bias: The tendency to favour individuals who are similar to oneself, including similarities in background, ethnicity, religion, nationality, etc.
For example: A product manager may prioritise features and designs that cater to a specific group of users who share the same background or interests as themself, neglecting the needs and preferences of user groups with different backgrounds or interests. This can lead to a lack of diversity and inclusiveness in the product. Affinity bias can also play out when hiring new team members, where candidates who are more similar to the interviewers are favoured.
Hierarchy or authority bias: The tendency to favour individuals in higher positions of power or authority, often at the expense of individuals in lower positions or those with less power.
For example: A product manager may prioritise features and designs that cater to the needs and preferences of senior executives or stakeholders, neglecting the needs and preferences of entry-level employees, lower-income or marginalised users. This can lead to a lack of diversity and inclusiveness in the product as diversity tends to decrease as you rise up the ranks!
Stereotype bias: The tendency to make assumptions or generalisations about individuals based on preconceived notions or stereotypes associated with their group identity, such as race, gender, age, or disability. Stereotype bias can lead to exclusion and a lack of accessibility for certain groups, which can ultimately limit the success and impact of a digital product or service.
For example: A designer might assume that older users are not technologically savvy and do not require or desire complex features.
Selection bias: The tendency to only gather or select data that supports one's own beliefs or preferences, while disregarding (intentionally or unintentionally) data that contradicts them, often leading to a lack of representation from certain groups.
For example: A data scientist may only use data from a narrow group of users to train a machine learning model, neglecting to include data from diverse user groups. This can result in a model that has a bias towards the narrow group of users and fails to accurately predict outcomes for other user groups, leading to a lack of representation and exclusion of these groups in the product or service.
When airbags were first introduced, fatality rates were higher for women and children because all of the crash test dummies were modelled on tall males.
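To make the training-data example above concrete, here is a minimal sketch (not taken from any specific project) of checking whether the user groups in a training set appear in roughly the proportions of the population the product is meant to serve. The group labels and proportions are hypothetical.

```python
from collections import Counter

# Hypothetical demographic labels attached to each record in a training set.
training_records = ["18-34", "18-34", "18-34", "35-54", "18-34", "35-54", "18-34", "55+"]

# Hypothetical share of each group in the population the product is meant to serve.
population_share = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}

counts = Counter(training_records)
total = len(training_records)

print(f"{'group':8} {'train %':>8} {'target %':>9}")
for group, target in population_share.items():
    train_share = counts.get(group, 0) / total
    flag = "  <-- under-represented" if train_share < 0.5 * target else ""
    print(f"{group:8} {train_share:8.0%} {target:9.0%}{flag}")
```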

These are just a few examples of how human biases can lead to exclusion in the creation of digital products and services.
It's important to be aware of these biases and to make an effort to consider the perspectives and needs of diverse groups of users in order to create products and services that are inclusive and accessible to everyone.

Countering bias during user research

When engaging participants in user research throughout the product lifecycle, it is vital to consider where bias may creep in and how it can be countered.
Here are some suggestions to get you started.
Build Rapport: Being aware of affinity bias, look for ways to help people who are very different from each other find common ground.
Clear Framing: Consider different ways to position your goals and request feedback in order to reduce the framing effect. Ensure you are providing enough guidance and structure so participants know what is expected of them without influencing how they respond.
Hypothesis Testing: Ask your team to forecast what they think they'll see or hear in a user test or interview, before the event. It can then be helpful to refer back to this afterwards to see which insights surprised them and which assumptions were wrong. This can create a concrete discussion about confirmation bias or anchoring bias and help people stop making assumptions.

Countering bias during recruitment

Building diverse and inclusive teams is key for inclusive innovation. Countering bias during the recruitment process will inevitably diversify your talent pool. Consider where bias may creep in and how this can be countered.
Here are some suggestions to get you started.
Actively seek out diverse talent: Don't just 'see who applies'.
Use structure and criteria: Don't just rely on gut instinct.
Diversify the interview panel: Don't let all decisions be made exclusively by white, middle-class, straight, able-bodied men.
Anonymise the first stage: Consider removing information that can imply gender or ethnicity, such as names, at the initial screening stage (a minimal sketch of this follows below).
Offer adaptations: Create an experience that helps people shine, rather than putting them through their paces.
Redefine success: Review what is a critical requirement for the job versus a nice-to-have, and what could be developed on the job versus required on day one.
These steps will help you create a more equitable experience, where those who lack privilege will be less likely to be further penalised. A more diverse candidate pool will be more likely to emerge as a result.
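As a minimal sketch of the 'anonymise the first stage' step above, the snippet below strips fields that can imply gender, ethnicity or age before applications reach reviewers. The field names and records are hypothetical, and a real process would also consider indirect identifiers such as graduation years or club memberships.

```python
# Fields that can imply gender, ethnicity or age and are not needed for screening.
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "date_of_birth"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

applications = [
    {"name": "A. Candidate", "email": "a@example.com",
     "skills": ["Python", "SQL"], "years_experience": 4},
]

screening_pool = [anonymise(app) for app in applications]
print(screening_pool)  # [{'skills': ['Python', 'SQL'], 'years_experience': 4}]
```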

Algorithmic bias

As well as human bias, it is important to combat bias that can be introduced into machine learning and AI models.
Research indicates that “model bias could be more prevalent than many organisations are aware and that it can do much more damage than we may assume, eroding the trust of employees, customers, and the public.”
Algorithms used in digital products can reflect and amplify existing biases in society, leading to discrimination against certain groups of people.
For example, even the most advanced artificial intelligence is hindered by the inherently racist data it’s trained on.
For example, voice recognition technology can be biased against women, people of colour, and non-native speakers, because it is often trained on a narrow dataset of voices that may not represent the diversity of the population.

According to a survey by the Society for Human Resource Management, roughly 79% of employers use some form of AI in the recruitment and hiring process. However, there are also countless examples of this introducing bias into the process. Even when technology automates the discrimination, the employer is still responsible.
For example: On Aug. 9, 2023, a tutoring company agreed to pay $365,000 to settle an artificial intelligence (AI) lawsuit with the Equal Employment Opportunity Commission (EEOC) after the company's recruitment AI was found to discriminate against people on the basis of age.

Here are a number of examples of different types of algorithmic bias.

Measurement bias: When training data is different from real-world data, or when the measurement device is faulty.
For example: Using a camera with a bright lens to take pictures of different people will not reflect their real skin tones. When an algorithm is trained on these pictures, the model will fail to appropriately recognise differences in real skin tones and results will be skewed.
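One way to catch this kind of measurement mismatch is to compare a simple statistic of the measurements between the training set and data captured in the field before training. The sketch below uses hypothetical mean image brightness values purely for illustration.

```python
# Hypothetical mean pixel brightness (0-255) per image, from two sources.
training_brightness = [212, 205, 219, 208, 215]   # captured with a bright studio setup
field_brightness = [142, 137, 150, 145, 139]      # captured on users' own devices

def mean(values):
    return sum(values) / len(values)

gap = mean(training_brightness) - mean(field_brightness)
print(f"training mean: {mean(training_brightness):.0f}, field mean: {mean(field_brightness):.0f}")
if abs(gap) > 20:  # arbitrary threshold for this sketch
    print("Warning: training images are measured very differently from real-world images.")
```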
Sample bias: When the dataset is too small or too narrow to represent the population.
For example: Facial recognition software trained only on white faces fails for non-white faces.
Studies find that even top-performing facial recognition systems misidentify people who are black at rates five to ten times higher than they do people who are white.
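A minimal sketch of auditing for sample bias: report the error rate separately for each demographic group in an evaluation set rather than a single overall figure. The results and group labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical evaluation results: (group label, was the prediction correct?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += 0 if correct else 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} samples")
# A large gap between groups suggests the training sample under-represents one of them.
```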
Historic prejudice bias: Training AI using datasets that reflect prejudice, unfairness or injustice can perpetuate that injustice.
For example: If historic pay data is used to forecast future salaries, historic ethnicity and gender pay gaps will prevail.
Amazon used an AI recruiting tool to search for top talent. However, it penalised female candidates because the system was rating candidates based on the patterns in resumes submitted to the company over 10 years. As this was a tech company, a sector historically dominated by men, the program taught itself that male candidates were preferable.
A study found that “broadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, while jobs with taxi companies went to an audience that was approximately 75% black. This is a quintessential case of an algorithm reproducing bias from the real world, without human intervention. This underscores the need for policymakers and platforms to carefully consider the role of the ad delivery optimisation run by ad platforms themselves---and not just the targeting choices of advertisers---in preventing discrimination in digital advertising.”
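A tiny sketch of the pay-data example above, with invented figures: if a naive forecast simply learns the average historic salary per group, the historic pay gap is carried straight into every future offer.

```python
from statistics import mean

# Hypothetical historic salaries, reflecting a historic pay gap between two groups.
historic_salaries = {
    "group_a": [52000, 54000, 51000, 55000],
    "group_b": [44000, 45000, 43000, 46000],
}

# A naive model: predict the historic group average as the "expected" future salary.
predicted_offer = {group: mean(vals) for group, vals in historic_salaries.items()}
print(predicted_offer)
# {'group_a': 53000, 'group_b': 44500} -- the historic gap prevails in every future offer.
```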
Proxy bias: Using inappropriate personal characteristics or associations as a predictor of future behaviour.
For example: Using postcode as a predictor of an individual's likelihood of criminal offending (or of their exam results).
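A minimal sketch, with hypothetical data, of checking whether a seemingly neutral feature such as postcode acts as a proxy: if knowing the postcode largely tells you the protected attribute, a model that uses postcode can reproduce discrimination even though the protected attribute was never included.

```python
from collections import Counter, defaultdict

# Hypothetical records of (postcode, ethnicity) drawn from an applicant pool.
records = [
    ("AB1", "group_x"), ("AB1", "group_x"), ("AB1", "group_x"), ("AB1", "group_y"),
    ("CD2", "group_y"), ("CD2", "group_y"), ("CD2", "group_y"), ("CD2", "group_x"),
]

by_postcode = defaultdict(Counter)
for postcode, ethnicity in records:
    by_postcode[postcode][ethnicity] += 1

for postcode, counts in by_postcode.items():
    majority_share = counts.most_common(1)[0][1] / sum(counts.values())
    print(f"{postcode}: majority group makes up {majority_share:.0%} of the sample")
# If each postcode is dominated by one group, postcode is effectively a proxy for ethnicity.
```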
Exclusion bias: Missing or disregarding important attributes in your model.
For example: An employee well-being survey shows a good average score, but when filtered to look at just black employees it tells a very different story. If ethnicity is excluded as a data point for analysis, this insight could be missed entirely.
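A small sketch of the survey example above, using invented scores: the overall average looks healthy, but disaggregating the same responses by ethnicity reveals the disparity that would be missed if ethnicity were excluded from the analysis.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical well-being scores out of 10, with a self-reported ethnicity label.
responses = [
    ("white", 8), ("white", 9), ("white", 8), ("white", 9),
    ("black", 4), ("black", 5),
]

print(f"Overall average: {mean(score for _, score in responses):.1f}")

by_group = defaultdict(list)
for group, score in responses:
    by_group[group].append(score)
for group, scores in by_group.items():
    print(f"{group}: average {mean(scores):.1f} ({len(scores)} responses)")
```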
Categorisation bias: Discretisation of variables, or similarly the selection of categories, can induce biases.
For example: The categories used in surveys often enforce historical categorisations that are false and exclude those who don't fit them, such as gender options that only include male and female.
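A small sketch of how a fixed category list can induce this bias: when answers are forced into a historical male/female coding, anyone who does not fit is dropped from (or miscounted in) the downstream analysis. The labels are illustrative.

```python
# A historical coding scheme that only recognises two categories.
LEGACY_CODES = {"male": 0, "female": 1}

responses = ["female", "male", "non-binary", "prefer not to say", "female"]

coded, dropped = [], []
for answer in responses:
    if answer in LEGACY_CODES:
        coded.append(LEGACY_CODES[answer])
    else:
        dropped.append(answer)  # these respondents vanish from any downstream analysis

print(f"coded: {coded}")
print(f"dropped: {dropped}  ({len(dropped)} of {len(responses)} responses excluded)")
```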
Interaction bias: When the AI system learns biased behaviours through its interactions with biased humans.
For example: Tay was an artificial intelligence chatbot originally released by Microsoft Corporation via Twitter on March 23, 2016. It caused controversy when the bot began to post inflammatory and offensive tweets it had learned from interactions with other users, and Microsoft shut down the service only 16 hours after its launch.

Mitigate bias in AI: If your solutions leverage AI, prevent bias before launch by educating everyone about the risks of AI model bias and providing a common language to discuss model risk and how to mitigate it.
Review your processes, your data and your technology, and ensure that the humans who are most impacted by the model are “in the loop” when the model is developed.
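As one concrete pre-launch check (a common fairness heuristic, not something prescribed by this playbook), the sketch below compares the rate of favourable model decisions across groups using the 'four-fifths' rule of thumb. The decisions and threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical model decisions: (group, was the outcome favourable?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

favourable = defaultdict(int)
totals = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    favourable[group] += int(ok)

rates = {g: favourable[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' rule of thumb
    print("Potential adverse impact: review the model and its data before launch.")
```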

Resources


Resources from Google on how to mitigate the risk of bias when using machine learning.
