Logical Assumptions

This section explains how to meticulously investigate your assumptions to catch places where racial bias might unconsciously enter your product. Every idea is predicated on a set of assumptions. Even when you feel absolutely sure that your product idea is grounded in fact, writing out the team's assumptions forces you to find holes in your logic and ensures your team is on the same page. It will also help you share your logic with schools, a necessary part of becoming a transparent edtech product. We encourage you to be as thorough as possible during this step, because your assumptions will find their way into your data and algorithms as bias in many different ways. After listing your assumptions, use research and interviews to confirm or modify your approach. After this section, you'll review your .

🎯 Goals

Check for holes in your logic that could lead to racial bias
Identify and investigate assumptions through research and interviews

🚧 Caution!

This is not a vanity exercise. When checking assumptions, get curious about what alternative narratives might actually be true. Your goal is to learn more so that you can strengthen your future product, not simply to find evidence supporting your current assumptions.

✅ Activities for Logical Assumptions

These activities are designed to help you better understand the gap between your own educational experiences and those of your users before you list out your assumptions. There are many versions of similar human-centered design exercises from other sources, and we strongly encourage you to allocate time in your development cycles for these activities.
Activity 1: Step Into My Shoes
Step 1: Ask your product manager or business team to develop a few personas for the team. Make sure your personas reflect a diversity of learning experiences. Remember that 51% of public school students are students of color, and a high percentage of students have other "high needs," including specific learning disabilities (~34%), speech or language impairment (~19%), and other intellectual, behavioral, and emotional disabilities (~40%).
Step 2: Make sure your personas reflect the population of students you hope will use your product. If you already have customers, co-create personas with actual teachers and students. Take 20 minutes to describe a student's school day from arrival to exit. Define multiple student and teacher personas so that you don't hide important needs by focusing on the average. How might they experience your future product? What barriers might they face? You can record your notes in a table like the one below. Note that this is different from the exercise of creating personas: it is the process of assessing what experiences might be missing from the personas you use.
Example: User Personas

User: Students
Characteristics: 40% live in unstable housing; 80% qualify for free and reduced lunch; 15% have frequent interactions with law enforcement; 50% identify as Black or Brown; 25% do not have a computer or device at home. (Define 3 diverse student personas.)

User: Teachers
Characteristics: 50% teach 6 classes per day and 100+ students per semester; 25% have been teaching 10+ years; 35% have been teaching fewer than 3 years; 15% identify as Black or Brown.

User: School Administrators
Characteristics: Fill in.

Activity 2: Check Your Blind Spots
We all have blind spots. The best way to become aware of them is to have your team visit schools often, and not just your product manager: the whole team. The more time your team spends in the education context in which you'd expect your product to be used, the more aware your team members will become. There is no substitute for school visits!
To Do: Visit a school and, once back in your office, bring your product development team into a single (physical or virtual) room. Encourage the team to share their personal school and learning experiences, as well as the insights they gained from interacting with children during the visit. These can be shared openly or anonymously, depending on the level of comfort.
Gallery walk exercise: In person, ask team members to write their answers to a series of questions (suggestions below) about their learning experience as a child (and that of their children, if they have any) on post-it notes or index cards. Collect the notes and paste them in random order on a wall. Have your team take 10 to 15 minutes for a "gallery walk," in which they individually read and reflect on the notes. Then bring the group back together for a discussion. What was surprising? Compare the room's experience with that of children in public schools today. What is missing, and how do today's experiences differ? How might our own experiences bias our perception of what schools or students need? Where are our blind spots?
Suggested questions:
What are your best memories of school?
What challenges did you face?
Describe a classroom you remember. How many students were there? Where did the teacher sit? How did you feel about school? About yourself?
How do you think your teachers felt about you?
In what ways did you feel different from your peers?
What is something you wish was different in your school?
How much time have you spent in a school in the last year? How many teachers or students have you spoken with?

Recognize that the less time you spend in schools with your users, the wider the gap will be between your understanding of their needs and theirs.

Activity 3: Creating an Assumptions Log
Step 1: Start with your product's goal. Inherent in your goal is the assumption that others see its value. What is the impact you hope to have? Describe in detail what success will look like before you begin development. Do the schools and students who will use your product share your goal?
Step 2: Make a list of all the assumptions that must be true for your idea to be successful. How might we investigate each assumption? This is the process of identifying what you need to learn rather than searching for evidence to validate your assumptions. For example, if your product assumes a linear learning path, where students must master a specific concept before moving on to the next one, you will be able to find ample evidence to prove your assumption true. However, there is also ample evidence that students can learn related concepts at the same time or in a different sequence, making connections that solidify their understanding. Unless you are genuinely open to investigating your assumptions, you may not find evidence that adequately challenges them until it is too late.
Some tips when working on your assumptions:
Tip #1: It is easy to fall into a "research" hole during this stage. On the one hand, it can be very helpful to assess the risk of each assumption being wrong so that you can prioritize the most critical investigations first. If your assumptions about the way students learn are wrong, your product might be ineffective; this warrants a discussion with learning scientists and teachers, in addition to a thorough review of the latest research. On the other hand, not every assumption needs extensive research. For instance, suppose you are wrong in assuming that teenagers prefer colorful content to a classic or modern design. You could either quickly test this with a group of teenagers, or adopt the design anyway and easily change the UI later if you discover you were wrong.
Tip #2: Your assumptions log should be a living document, visible to all members of your team. The more perspectives you include, the stronger your logic will be. As you train your algorithm, make sure to stop and record all new assumptions in this log (see the sketch after these tips).
Tip #3: Before you begin, diagram your assumptions on a whiteboard to map your logic. For example, a personalized learning product might assume that students learn better when they move at the pace that is right for them. Underlying this assumption is the idea that the pace of learning might differ from student to student, but that concepts should be presented to every student in the same order. This in turn assumes both that the product orders concepts effectively for student mastery and that the methods used to assess student mastery are effective, and so on.
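If your team keeps the log in a repository alongside your code, a machine-readable format makes it easy to review in stand-ups and pull requests. Below is a minimal sketch in Python; the fields mirror the columns of the log that follows, and every name and value is illustrative rather than a prescribed schema.

```python
# A minimal, machine-readable assumptions-log entry, assuming the team wants
# to version it alongside the code (Tip #2). All field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Assumption:
    statement: str           # the assumption, stated plainly
    hypothesis: str          # the same assumption, rephrased as a testable question
    alternatives: list[str]  # alternative theories that might also be true
    investigation: str       # how we plan to find out whether it holds
    risk_if_wrong: str       # what happens to students if we are wrong
    status: str = "open"     # open / supported / refuted
    recorded_on: date = field(default_factory=date.today)

log = [
    Assumption(
        statement="Learning outcomes will improve if students can learn at a pace that's right for them.",
        hypothesis="Will learning outcomes improve if students can learn at a pace that's right for them?",
        alternatives=["Even with endless time, some students may never master certain concepts."],
        investigation="Talk to learning scientists and a diverse group of teachers, school leaders, and students.",
        risk_if_wrong="The product may have no impact on student outcomes.",
    ),
]
```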
Assumptions log

Assumption 1: Learning outcomes will improve if students can learn at a pace that's right for them.
As a hypothesis: Will learning outcomes improve if students can learn at a pace that's right for them?
Alternative theories: Even if certain students are given an endless amount of time, they may never master certain concepts. Certain concepts are less engaging for students.
How can we find out if this is true? {Talk to learning scientists and a diverse group of teachers, school leaders, and students}
What if we are wrong? The product may have no impact on student outcomes.

Assumption 2: Students should progress to the next educational concept only after they have fully mastered all previous concepts.
As a hypothesis: Can students only progress to the next concept after they have mastered all previous concepts?
Alternative theories: Students can learn related concepts simultaneously. Students benefit from exposure, space, and then re-exposure to certain concepts.
How can we find out if this is true? {Talk to learning scientists, teachers, and school leaders}
What if we are wrong? Some students may get stuck if they are unable to master a single topic. Can we give the teacher or student the ability to skip when appropriate? Can we flag this to the teacher in a timely manner?

Assumption 3: We have correctly organized the progression of concepts in the order students should learn them.
As a hypothesis: Do we have the correct order of concepts in our learning pathways?
Alternative theories: The progression of concepts differs for every student. Concepts are interrelated and can be learned independently of one another.
How can we find out if this is true? {Test our order with students and teachers; hire an experienced curriculum designer who has worked with our target student population}
What if we are wrong? The product may not improve student outcomes.

Assumption 4: We can effectively assess whether or not students have mastered a given concept.
As a hypothesis: Does our assessment effectively evaluate whether a student has mastered the concept, and how well?
Alternative theories: Students may answer our questions incorrectly but actually understand the concept. Students may answer correctly by chance without understanding the concept. Students may not know how to use a computer and so may be unable to demonstrate their mastery. The content of the questions confuses students. The font is hard to read. Our reporting has a bug and records student answers incorrectly.
How can we find out if this is true? {Test our assessment methods against other forms of human assessment}
What if we are wrong? The product may not improve student outcomes because it will misdiagnose what content the student should be working on. Some students may fall behind or become disengaged.

Assumption 5: Our set of pathways to concept mastery works equally well for all students, not just average students.
As a hypothesis: Do students of all races, abilities, and cultural, national, or linguistic backgrounds have the same mastery outcomes using our pathways?
Alternative theories: Students of different backgrounds might master some concepts in a different order than others, especially if certain examples are inaccessible to them.
How can we find out if this is true? {Test our content across multiple pathways with students of different races and check for different outcomes}
What if we are wrong? Some students may learn less well if we limit pathway possibilities.

Assumption 6: Our assessment methods work equally well for all students, not just average students.
As a hypothesis: Do students of all races, abilities, and cultural, national, or linguistic backgrounds have the same mastery outcomes using our assessment methods?
Alternative theories: Some students may not be familiar with the terminology in our assessments and therefore miss questions even when they understand the concept.
How can we find out if this is true? {Test assessment methods with students of different races and check for different outcomes}
What if we are wrong? Some students may advance more slowly than others, or too quickly, if our assessment methods work less well for certain groups of students.
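Assumptions 5 and 6 above call for testing pathways and assessments with students of different backgrounds and checking for different outcomes. Here is a minimal sketch of what that check might look like once you have results data; the column names (student_group, mastered) and the threshold are assumptions for illustration, not a standard.

```python
# Compare mastery rates across student groups and flag large gaps.
import pandas as pd

results = pd.DataFrame({
    "student_group": ["A", "A", "A", "B", "B", "B", "C", "C"],
    "mastered":      [1,   1,   0,   1,   0,   0,   1,   1],
})

rates = results.groupby("student_group")["mastered"].mean()
print(rates)

gap = rates.max() - rates.min()
if gap > 0.10:  # set your own threshold with educators, not in a vacuum
    print(f"Mastery-rate gap of {gap:.0%} across groups -- investigate before shipping.")
```

A gap on its own does not tell you why outcomes differ; it tells you where to start interviewing teachers and students.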

📖 Story Time: Data May Not Lie, but It Can Be Misunderstood

Data can easily mislead you into a faulty conclusion if you don't consider the full range of possibilities. Consider this example:
A school looks at its behavioral data records over the past 10 years. It notices that Black students account for a disproportionately large share of suspensions, although only 35% of its students are Black. At first glance, an easy explanation might be that Black students are simply more likely to have behavioral issues, resulting in a higher rate of suspensions. It's important to investigate alternative explanations for all of your assumptions. In this case, the school should dig into the data to understand why Black students are suspended more often. Several possibilities could account for this outcome. Research shows that teachers often suspend Black students for the same behavior for which white students are put in a "time out" or gently reprimanded. In some schools, students are suspended for missing homework, something more common for students without a safe and stable home environment. This is an example where correlation does not imply causation: Black students are suspended more frequently due to factors outside of their control, and it's important to understand this larger context, which may not be represented in your dataset.
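If you have access to the underlying records, "digging into the data" can begin with a simple disaggregation, for example comparing how often the same recorded behavior leads to a suspension for each group. This is a sketch under assumed, hypothetical file and column names (behavior, race, action), not a ready-made analysis:

```python
import pandas as pd

# One row per disciplinary incident; file and column names are hypothetical.
records = pd.read_csv("behavior_records.csv")

# Headline view: share of suspensions by race.
suspensions = records[records["action"] == "suspension"]
print(suspensions["race"].value_counts(normalize=True))

# One level deeper: for the same recorded behavior, how often does each
# group receive a suspension rather than a lighter consequence?
by_behavior = (
    records.groupby(["behavior", "race"])["action"]
    .apply(lambda actions: (actions == "suspension").mean())
    .unstack("race")
)
print(by_behavior)  # large row-wise gaps point to unequal treatment, not unequal behavior
```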
Beware that this can also happen the other way around: you can always find a dataset that confirms your hypothesis. For example, take the hypothesis that students with accents have lower English language proficiency. It would be easy to acquire a test-score dataset showing students with accents performing more poorly on standardized tests. However, that dataset might fail to account for the level of familial support students did or did not receive, or for the cultural relevance of the test content (e.g., passages that reference distinctly American activities to which some students do not relate).
Here are a few tips to avoid this:
Instead of asking, "Why do Black students have behavioral issues?", ask open-ended questions, such as "Why might Black students have received more suspensions than students of other races?" It is your responsibility to ask enough questions to get beyond the data to which you have access. Share your assumptions about the data with your users to make sure they agree. It is easy for faulty assumptions to influence your product when schools and companies don't work closely together.
Be careful about looking for data to validate your assumption. This is confirmation bias at its best. You can always find a dataset to confirm your hypothesis. Be critical and open-minded.
Consult others and dig into the meaning, and especially the history, of the dataset. Where did the data come from? What other factors affected how the data was recorded? What was the educational context in which the recorded events occurred? For example: for what reasons did teachers suspend students? What racial dynamics existed in the classroom? What were the school's policies around suspension? Sometimes these answers will differ from school to school, which is why representative data and research are so important. See the section for more information.
Finally, always remember: correlation does not imply causation! Especially with education data, the fact that two data points are correlated does not explain why something happened. Decades of racism in the education system cause race to be correlated with lower academic outcomes; however, there is no evidence that students of one race are more intellectually capable than those of another.
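The point above is easy to demonstrate: when a shared upstream factor drives two outcomes, the outcomes correlate even though neither causes the other. A small simulation, with illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A confounder (e.g., unequal access to resources) drives both outcomes.
resource_access = rng.normal(size=n)
outcome_a = 0.8 * resource_access + rng.normal(size=n)
outcome_b = 0.8 * resource_access + rng.normal(size=n)

# The outcomes are clearly correlated (~0.4) despite having no causal link.
print(np.corrcoef(outcome_a, outcome_b)[0, 1])
```

Holding the confounder fixed would make the correlation disappear, which is exactly why the history and context of a dataset matter so much.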

🎯Conclusion: Investigate Relentlessly!

There are many ways for both existing and new bias to creep into your logic. While you may be familiar with common forms of data bias, racial and socioeconomic bias can be so deeply embedded in our perspectives that they are very difficult to spot. Commit as a team to identifying and investigating assumptions, and keep peeking around the corners at every stage.

Before we move into the data, let's look at how bias can enter through your .



