UX Design

This section explains how the design of your product can amplify or prevent racial bias in schools. You might think you’re done at this point, but we’re just getting started! Even the most thoughtfully developed edtech product can unintentionally exacerbate bias through the design of its user experience and through the user feedback channels you use to improve your product.
After this section, you'll review your plans.

🎯 Goals

Consider what it means to design specifically for Black and Brown students
Design for transparency, explainability, and action
Ensure your users understand how your algorithms work and how and when they can disagree
Design ways to give users a voice within your product and improve your model(s) over time

🚧 Caution!

Make sure your insights and recommendations are understandable, explainable, and actionable. How your insights are presented is often more important than what your insights are. If you categorize, score, or assess students in any way, make sure you always explain why. And, make sure the explanation is actionable if the outcome is not optimal.
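One way to bake this into your product's data model is to never ship a label without a reason and a suggested next step. Here's a minimal TypeScript sketch (all names hypothetical) of what such an insight payload might look like:

```ts
// A minimal sketch (all names hypothetical) of an insight payload that
// always pairs the "what" with the "why" and an actionable next step.
interface StudentInsight {
  studentId: string;
  label: string;           // what the product concluded
  reason: string;          // plain-language explanation of why
  suggestedAction: string; // what the teacher can do about it
}

function renderInsight(insight: StudentInsight): string {
  // Never display the label without its reason and a suggested action.
  return `${insight.label}\nWhy: ${insight.reason}\nTry: ${insight.suggestedAction}`;
}

// Usage example
console.log(
  renderInsight({
    studentId: "s-123",
    label: "Needs support with fractions",
    reason: "Scored below 60% on the last three fractions quizzes",
    suggestedAction: "Assign the fractions review set or schedule small-group time",
  })
);
```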
Make sure your product is designed for students with fewer resources. Many Black and Brown students do not have access to reliable wifi at home or personal devices on which to complete their homework. Schools can’t always equalize home environments by providing hotspots or devices for home use. Make sure your product makes it clear to teachers and students what functionality will be available offline, and that it communicates when information may not be up to date for those using offline features. For example, make sure students aren’t penalized for turning work in late simply because they didn’t have access to wifi during the submission window.
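One way to honor this in code is to judge lateness by when work was completed on the device, not when it synced. A minimal sketch, assuming a hypothetical offline client that records both timestamps:

```ts
// A minimal sketch, assuming your offline client records when work was
// completed on-device (completedOfflineAt) and when it later synced
// (syncedAt). Both field names are hypothetical.
interface Submission {
  completedOfflineAt: Date; // when the student finished the work offline
  syncedAt: Date;           // when the device next reached the server
}

function isOnTime(submission: Submission, dueDate: Date): boolean {
  // Judge lateness by when the work was completed on the device,
  // not by when the device finally regained connectivity.
  return submission.completedOfflineAt.getTime() <= dueDate.getTime();
}

// A student who finished before the deadline but synced the next morning
// is not marked late.
const due = new Date("2024-03-01T23:59:00Z");
console.log(
  isOnTime(
    {
      completedOfflineAt: new Date("2024-03-01T20:00:00Z"),
      syncedAt: new Date("2024-03-02T08:00:00Z"),
    },
    due
  )
); // true
```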
Create opportunities to learn and continue teaching, even when your algorithm is wrong. Make sure teachers and students can easily tell you when something looks fishy. This will help you find ongoing problems in your algorithms that are unlikely to surface in testing scenarios. Encourage feedback by making it as easy as possible. You may have noticed that you can opt out of an auto-correct recommendation, or let Instagram know that you no longer want to view a particular type of ad. Other companies provide options like "Don't like what you see?" or "I want something else," which are intuitive ways for customers to signal that something needs to change without filing a feedback form. Include a “disagree” or "other" button rather than asking users to submit feedback via unstructured text. You should also make sure that teachers can always take alternate actions outside of your recommendations rather than feeling stuck when they believe your algorithm is wrong. For example, with a product like Lexia's, even though the product recommends the "best" next activity for students, teachers can always assign a student something else.
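A structured "disagree" control might produce an event like the following. This is a hedged sketch with hypothetical names (recordFeedback stands in for whatever your real feedback endpoint is); the point is that a single tap yields a clean, aggregatable signal:

```ts
// A hedged sketch with hypothetical names; recordFeedback stands in for
// your real feedback endpoint. Structured choices are far easier to
// aggregate and act on than free-text forms.
type FeedbackChoice = "agree" | "disagree" | "want-something-else";

interface RecommendationFeedback {
  recommendationId: string;
  teacherId: string;
  choice: FeedbackChoice;
  timestamp: Date;
}

function recordFeedback(feedback: RecommendationFeedback): void {
  // In a real product this would POST to your feedback service.
  console.log(JSON.stringify(feedback));
}

// One tap on a "Don't like what you see?" button produces a clean signal.
recordFeedback({
  recommendationId: "rec-42",
  teacherId: "t-7",
  choice: "want-something-else",
  timestamp: new Date(),
});
```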
Engagement is not a great metric to optimize for in machine learning algorithms for education. It is very hard to assess the engagement of a student purely from their usage of an edtech product. Prioritize student voice in feedback by giving students a chance to rate content or activities, choose content, or provide feedback on how much they think they’ve mastered.
Don’t rely solely on feedback from within your product to determine if your ML model needs an update. You should do your best to ensure teachers and students know when and how they can “disagree,” but in many cases they may simply ignore your recommendations or operate outside of the product when your algorithm doesn’t meet their needs. They may not even be aware that they can provide feedback. During product testing, make sure one of your test cases evaluates whether teachers and students who are new to your product can find your feedback mechanisms intuitively enough that those mechanisms actually get used.
Check for design bugs or issues that can "bias" your feedback. Implicit feedback (learning from actions users take in your product) is both useful and dangerous. If you incorporate implicit feedback to improve your algorithm, make sure you and your users understand the product in the same way. For example, does your icon for flagging a student who needs support look similar to a reward or bookmark symbol for teachers who want to save examples of good student work? In another example, is the scrolling interface confusing when you get to students at the end of a list? That might lead to teachers assigning one of the first available titles for these students, but for the wrong reasons. Testing with diverse populations can help you uncover some of these challenges.
Let users know why giving feedback is important. Make sure teachers and students understand what happens when they explicitly provide feedback in your app. If a teacher disagrees with a recommendation for a student, help the teacher understand how this feedback will impact the algorithm in the future. This will ensure you interpret their feedback correctly. In a scenario where assigning a book title to a student automatically adjusts their reading level, let the teacher know that. Always attempt to differentiate between a "one time" action and an "every time" action. This is common behavior in apps that ask you if you just want to skip once or if you never want to see something again. Be as clear as possible in the way you interpret and incorporate feedback from the teacher.
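To differentiate "one time" from "every time" actions in practice, your feedback events can carry an explicit scope, so your model only generalizes when the user meant it to. A minimal sketch with hypothetical names:

```ts
// A minimal sketch (hypothetical names) of carrying an explicit scope on
// skip events, so the model only generalizes when the user meant it to.
type FeedbackScope = "once" | "always";

interface SkipEvent {
  studentId: string;
  itemId: string;
  scope: FeedbackScope;
}

function applySkip(event: SkipEvent): string {
  if (event.scope === "once") {
    // One-time skip: hide the item now, but leave the student's profile alone.
    return `Skipping ${event.itemId} this time only.`;
  }
  // Every-time skip: safe to treat as a lasting preference in your model.
  return `Never showing ${event.itemId} again; updating preferences.`;
}

console.log(applySkip({ studentId: "s-123", itemId: "story-9", scope: "once" }));
console.log(applySkip({ studentId: "s-123", itemId: "story-9", scope: "always" }));
```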

✅ Activities for UX Design

Activity 1: Design With All Users in Mind
At the most basic level, your product should be equally well designed for students and teachers of all races and backgrounds. Prioritize user testing with diverse student and teacher groups, and ensure that your testing populations are representative of the schools you hope to serve. This is especially important when the demographics of your design and development teams don’t match those of your students. Remember that over half of students in K-12 public schools in America are Black or Brown. What percentage of your team is Black or Brown? What percentage of students in your testing groups are Black and Brown? Note that socioeconomic and accessibility challenges disproportionately impact students of color, so there may be many other reasons that a Black or Brown student experiences your product differently than another; all are equally important to account for.
Questions to investigate:
Consider a product you used that was not designed for you: a bike that’s too big or too small, for example. Would you enjoy riding bicycles? Or take an emotional reflection app designed for women, when you identify as a man. Would you feel motivated to reflect on your emotions? Or an ad for a restaurant featuring only Black families, when you identify as white. Would you feel excited to eat there? Products that are not designed for a population of students can quickly lead to disengagement and lower academic outcomes.
What does our current population of students and teachers look like? How will this change if we reach our target population?
How might Black and Brown students with a variety of different perspectives experience our product differently than other students? What do Black and Brown students of diverse backgrounds, including language and urbanicity, think of our product?

Activity 2: Consider the Educational Context
When designing the user experience, think about the ways in which your AI or ML algorithms influence the context in which your product is used. For example, if your algorithm uses student performance to flag students in red whom it determines need additional support, consider who will see this and how that information will be used. Will it be shown publicly in a class dashboard? Do teachers have the necessary context to understand why the student was flagged, or do they now label these students as “under-performers" without understanding why? Similarly, students can feel self-conscious about appearing “smart," which can lead to bullying in various cultures. Your team should understand the socio-behavioral dynamics at play across racial and socioeconomic groups, and ensure that the design of your product doesn’t amplify existing biases by stigmatizing students or exacerbating these dynamics.
This is the right time to revisit the bias you documented earlier. Any bias you couldn’t mitigate within your dataset should be considered as you design your product. For example, if you uncovered a pattern in your data that leads Black and Brown students to receive lower scores, consider alerting the teacher that students might need help before assigning a student to a remedial track.
Questions to investigate:
What will teachers do next after using our product? How will our product influence a teacher’s perspective or actions today? Next week? In one year?
What will students do next after using our product? How will our product influence a student’s perspective or actions today? Next week? In one year?
Does our product attach values or stigma to certain groups of students or student behaviors? Do students and teachers share these values?

Activity 3: Transparent and Actionable Decisions
If your goal is to use AI to help teachers teach and students learn, make sure your product is “helping” rather than “replacing” humans in the educational context. This starts with transparent decisions. As algorithms increasingly help teachers make decisions, it is even more important for everyday teachers, as well as students and families themselves, to understand the logic behind these algorithms. If your algorithm recommends which book a student should read, whether they need support on fractions, or whether a student is likely to exhibit behavioral issues next week—help teachers, students, and families understand why. As an aid to teachers or parents who deeply understand a child, technology should be as transparent as possible about the rationale behind a decision or recommendation.
For example, if your product puts students on different tracks, it should make it clear what inputs (e.g., quiz scores on particular topics) were used to make the decision and what technology was used to make that decision (e.g., machine learning). This will also be necessary to help schools feel comfortable using and trusting your product.
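One way to structure this transparency is a decision record that names its inputs and method alongside a plain-language explanation. A minimal sketch, with all field names hypothetical:

```ts
// A minimal sketch (all field names hypothetical) of a decision record
// that names its inputs and method, so a teacher, family, or auditor can
// see exactly why a student was placed on a track.
interface DecisionRecord {
  studentId: string;
  decision: string;               // what the product decided
  inputs: Record<string, number>; // the data points actually used
  method: string;                 // the technology behind the decision
  explanation: string;            // plain-language rationale for teachers
}

const record: DecisionRecord = {
  studentId: "s-123",
  decision: "Placed in reading group B",
  inputs: { quiz1: 62, quiz2: 58, quiz3: 65 },
  method: "machine-learning classifier (reading-level model)",
  explanation: "The last three reading quiz scores were below the group A threshold of 70.",
};

// Surface the same record everywhere the decision is shown.
console.log(JSON.stringify(record, null, 2));
```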
Furthermore, to avoid bias, it’s important that the rationale behind your decisions is always actionable. You may need to investigate your logic at multiple levels to do this properly. For example, even if you have data to back it up, you would never explain that a group of students is more likely to drop out because of their zipcode. This is not something the teacher or school can change. Instead, you should look deeper into the data and your logic to understand that students in this zipcode typically have two working parents and are more likely to drop out because of inadequate academic support at home (whether they live in that neighborhood or not), and ultimately flag this group as students in need of individual academic support: a factor that the teacher or school can act on. We touch more on this in a later phase.
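To illustrate, here's a small sketch (hypothetical names and mapping) of refusing to surface a non-actionable correlate and flagging the underlying, actionable need instead:

```ts
// A small sketch (hypothetical names and mapping) of refusing to surface
// a non-actionable correlate and flagging the underlying, actionable need
// instead.
interface RiskSignal {
  feature: string;     // the raw correlate the model found
  actionable: boolean; // can a teacher or school act on this directly?
}

function toActionableFlag(signal: RiskSignal): string {
  if (signal.actionable) {
    return `Flag: ${signal.feature}`;
  }
  // Never show the raw correlate (e.g., a zipcode); dig one level deeper
  // in your logic and surface a need the teacher can actually address.
  return "Flag: student may need individual academic support";
}

console.log(toActionableFlag({ feature: "zipcode=94601", actionable: false }));
console.log(toActionableFlag({ feature: "missing fraction prerequisites", actionable: true }));
```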
Questions to investigate:
Are teachers and students (over the age of 12) aware that algorithms are being used to determine a student’s experience in your product? For example, are they aware that your product may show different content to one student than another, or that recommendations may be personalized to a particular student?
Can teachers and students (over the age of 12) understand why a decision or recommendation was made? For example, do they know that one student was recommended an easier set of stories than another as a result of their last three reading comprehension quiz scores?

Tip: Tooltips can help users understand the logic behind your algorithms. A tooltip can serve as a real-time guide within your product. Teachers, students, and families should be aware when decisions have been made by algorithms. A simple tooltip with a question-mark icon can provide more details when a user taps or hovers. You may be familiar with this type of behavior in Apple’s products, or on Facebook and Instagram when they explain why you were shown a type of ad. Personal finance or banking apps might also use tooltips to explain how the company determined a certain calculation.
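A tooltip's copy can be generated directly from the decision's recorded rationale, so the "why" is never out of sync with the decision itself. A minimal sketch, assuming a hypothetical Recommendation shape:

```ts
// A minimal sketch, assuming a hypothetical Recommendation shape: the
// tooltip's copy comes straight from the decision's recorded rationale.
interface Recommendation {
  title: string;
  madeByAlgorithm: boolean;
  reason: string;
}

function tooltipText(rec: Recommendation): string {
  if (!rec.madeByAlgorithm) {
    return "This item was chosen by your teacher.";
  }
  return `Why am I seeing this? ${rec.reason}`;
}

console.log(
  tooltipText({
    title: "Easier story set",
    madeByAlgorithm: true,
    reason: "Your last three reading comprehension quiz scores suggested a gentler level.",
  })
);
```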
Activity 4: Give Users a Voice in Your ML Model
Algorithms can be really helpful for busy teachers and students, but teachers and students always know best. It is critical that you get consistent feedback from users to validate that your ML model is doing a good job. This gives you constant, real-time insight into your product and gives users a voice in their own experience.
Tip: Allow teachers, students, and families to disagree in order to improve your algorithm at important points in the workflow. Feedback can be provided explicitly (when a user is asked “do you disagree?”) or implicitly (when a user behaves in a different way than the product recommended). For example, if your algorithm flags students who need extra support on decimals, your product can periodically ask the teacher to explicitly confirm or reject this insight. Or, if your product recommends specific types of books for a student, allow the teacher, student, or parent to implicitly disagree by choosing a different book outside of the recommendations (and to inform you of this so that you can record the discrepancy). Even if this behavior doesn’t automatically feed into your algorithm, you can learn from this behavior over time and adjust your algorithms accordingly.
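Implicit disagreement can be captured by logging whenever a user's choice falls outside your recommendations. A minimal sketch with hypothetical names:

```ts
// A minimal sketch (hypothetical names) of logging an implicit
// disagreement: the teacher chose a book outside the recommended list.
// Even if this never retrains the model automatically, the stored
// discrepancies reveal patterns of disagreement over time.
interface BookChoice {
  studentId: string;
  recommendedBookIds: string[];
  chosenBookId: string;
}

function logIfDiscrepancy(choice: BookChoice): void {
  if (!choice.recommendedBookIds.includes(choice.chosenBookId)) {
    // Store the mismatch for later review; don't block the teacher's choice.
    console.log(
      `Implicit disagreement: ${choice.chosenBookId} chosen over ${choice.recommendedBookIds.join(", ")}`
    );
  }
}

logIfDiscrepancy({
  studentId: "s-123",
  recommendedBookIds: ["b-1", "b-2", "b-3"],
  chosenBookId: "b-9", // the teacher went outside the recommendations
});
```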

🎯Conclusion: Experience is Everything

Even the most rigorously tested algorithms can amplify bias through the way users experience your product. Partner closely with your designers to share insights into how your algorithm works and how you expect users, particularly Black and Brown users, to experience it. This will be a learning process, and you'll need to work with real students and teachers to develop a sufficient understanding. A transparent design approach ensures that students and teachers understand how your product works, so you're all on the same page. Furthermore, because machine learning algorithms improve over time, use design to create opportunities for users to give meaningful feedback to your model. The section above shares some particular examples where this comes in handy.

Next, review your plans for your product.




