Start playing with our research archive template in . In early 2018, we (Cordelia, Erik, and Jordan) joined Thumbtack. Together, we formed the experience research team. Thumbtack, an online marketplace for local services, was going through a period of considerable change. The core product that we built to match customers with local professionals for pretty much anything — home repairs, wedding photography, you name it — was evolving, which meant our metrics were changing and we were gaining many more new users every week. A lot of the existing research no longer applied, so people turned to our team for help.
From day one, we were swamped by research requests. Clearly, there was much more demand for research than our tiny team could meet. Since then, we’ve used three primary strategies to scale up the impact of our tiny research team without increasing our headcount.
We originally shared this framework in a UXPA 2019 presentation titled , and in a three-part blog series on . In these you'll see how we improved our output by building repeatable research processes, how we increased the value of our research by making insights easier to discover, and how we empowered our colleagues to perform excellent research in their own right.
Today we’re following up with an update to Part 2: Scaling Existing Research by taking you behind the scenes and showing you the Coda doc that made it happen: our Research Archive!
The best part? You can copy and customize this doc for your own team.
Using past insights to answer today’s questions
In 2018, our tiny research team put in place multiple strategies to scale our operations. As we scaled ourselves, we were able to do more research. We collected more data and produced more knowledge to support our colleagues. But we weren’t prepared for quite how fast our output grew and we didn’t have a structured way to organize our findings.
Over time it became harder and harder for our tiny team to keep track of all the research we'd produced. That led to a load of inefficiencies, like duplicated research and slow information retrieval. It also put a huge strain on our tiny team as we became the gatekeepers of our knowledge. Each day, we'd receive a stream of Slack messages asking where we kept different reports, data, and so on.
So we asked ourselves, “How do we make past insights discoverable and relevant today?” Or, “How do we squeeze more juice out of the lemons we already harvested?”
We first considered creating one central database where we could store individual research observations. Curious about the concept of atomic research (popularized by on the Polaris database and the Research team at Uber's impressive solution), we explored creating a similar system in which we would meticulously store these research "nuggets." If we labeled everything with themes and project descriptions, we'd have the perfect, searchable research database.
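To make the atomic research idea concrete, here is a minimal sketch in Python of what such a nugget database could look like: each observation carries free-form tags, and a search returns every nugget matching all requested tags. The `Nugget` fields, the sample observations, and the tag names are all illustrative assumptions, not Thumbtack's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Nugget:
    """A single research observation ("nugget") with free-form tags."""
    text: str
    study: str
    tags: set[str] = field(default_factory=set)

def search(nuggets, *required_tags):
    """Return the nuggets that carry every requested tag (case-insensitive)."""
    wanted = {t.lower() for t in required_tags}
    return [n for n in nuggets if wanted <= {t.lower() for t in n.tags}]

# Hypothetical observations from two hypothetical studies.
nuggets = [
    Nugget("Customers skim reviews before booking.", "Reviews study", {"reviews", "trust"}),
    Nugget("Pros want faster payouts.", "Payments study", {"payments"}),
    Nugget("Star ratings alone feel unreliable.", "Reviews study", {"reviews"}),
]

print(len(search(nuggets, "reviews")))           # 2
print(len(search(nuggets, "reviews", "trust")))  # 1
```

Even this toy version hints at the roadblocks we describe next: someone has to enter every nugget, and the whole system is only as good as the consistency of its tags.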
As we went down this path we hit some serious roadblocks. Who was going to populate the database? How would we develop a consistent yet flexible tagging taxonomy? How would we determine what qualifies as an insight and what doesn’t? Would our research partners be able to find the relevant nuggets? What about participant privacy?
We loved the idea of atomic research but decided it wasn’t the right approach for our tiny team. We knew we had to organize our assets but we couldn’t handle that level of granularity nor did we think it would serve our stakeholders well.
So we took a step back and reassessed the problem. And when we did, we realized we had two distinct issues. On the one hand, we needed to make our insights discoverable for us, the researchers. On the other hand we needed to make our insights discoverable for our teams. We had two problems, which meant we could explore two solutions.
Our first solution was a repository for our raw qualitative data, which usually takes the form of an interview recording. A prior researcher at Thumbtack had discovered , a product that automatically recorded, transcribed, and stored every research session. It was a great fit for our first problem. Having all our raw data in one place gave us a quick way to answer some of the strategic questions that our stakeholders ask over and over again. For example, when someone asked, "What do our customers think about reviews?", we could fire up our repository and find an answer in minutes.

Our second solution was a . We implemented to catalog and organize all our research reports and literature reviews. Whenever a research project concludes, the final report is submitted via and a member of our team reviews the submission before we publish it in the . Our colleagues now have a bird's-eye view of all our research and can consult our past reports in to support their design decisions.
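The submit-then-review flow for reports can be sketched as a tiny state machine: a submitted report stays pending until a researcher publishes it, and only published reports appear in the catalog the wider team sees. This is a simplified illustration of the workflow, not our actual tooling; the class names, statuses, and sample report are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Report:
    title: str
    author: str
    submitted: date
    status: str = "pending"  # pending -> published, once reviewed

class Archive:
    """Minimal submit -> review -> publish flow."""

    def __init__(self):
        self.reports = []

    def submit(self, title, author):
        report = Report(title, author, date.today())
        self.reports.append(report)
        return report

    def publish(self, report):
        # In practice, a researcher reviews the submission before this step.
        report.status = "published"

    def catalog(self):
        # Only published reports are visible to the wider team.
        return [r.title for r in self.reports if r.status == "published"]

archive = Archive()
draft = archive.submit("What customers think about reviews", "Jordan")
assert archive.catalog() == []  # still awaiting review
archive.publish(draft)
assert archive.catalog() == ["What customers think about reviews"]
```

The review gate is the important design choice: it keeps the archive trustworthy without requiring researchers to author every entry themselves.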
We considered a number of different tools for the archive – Google Sheets, Airtable, Dovetail – but ultimately settled on Coda for a few reasons:
- Coda allowed us to maintain one detailed "source of truth" () while also being able to create customized views (e.g. ) for the main audience of the archive. That way we as a research team could track more detailed information without cluttering the main view that our colleagues were using. Additionally, we hid the page (including the and subpages) so they were tucked out of sight. These pages are only relevant to the research team (not the rest of the company), so hiding them focuses the attention on .
- Coda's publishing features allowed us to create an even better-looking view for our internal audience that looks and feels like a regular website, with editing privileges set so that no one accidentally deletes or alters information. (In fact, this feature was so important to us that we almost had an engineer build a simple website for us that pulled data from our Coda archive, but the Coda team launched internal publishing just before we were set to begin coding!) Enabling this feature is simple enough: we just published the doc, but left discoverability off.
- Coda was already being used elsewhere in our organization. We knew that creating as low a barrier to entry as possible for our co-workers to access the Archive was important, so we selected a solution that was already integrated into our workflows.
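The "one source of truth with customized views" pattern is easy to sketch: keep every field in a single table, then derive a read-only view that filters rows and hides internal columns. The field names and sample rows below are illustrative assumptions, not our actual archive schema.

```python
# One "source of truth" table; each row is a report with both public
# and internal fields (field names are hypothetical).
reports = [
    {"title": "Reviews deep dive", "team": "Marketplace",
     "status": "published", "raw_notes_link": "internal-only"},
    {"title": "Onboarding study", "team": "Growth",
     "status": "draft", "raw_notes_link": "internal-only"},
]

PUBLIC_FIELDS = ("title", "team")

def public_view(rows):
    """A derived 'view': published rows only, internal columns hidden."""
    return [{k: r[k] for k in PUBLIC_FIELDS}
            for r in rows if r["status"] == "published"]

view = public_view(reports)
print(view)  # [{'title': 'Reviews deep dive', 'team': 'Marketplace'}]
```

Because the view is derived rather than duplicated, updating the underlying table is the only maintenance task; every audience-facing surface stays in sync automatically.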
It took a long time and a lot of work to put our two new systems in place. But by slowing down, thinking about our needs, and building a system tailored to Thumbtack, we are now able to move much faster.
Make this doc your own
Now that you know how our tiny team scaled our efforts, we hope you can better leverage the work your team has produced in the past.
When you’re ready, head to to begin.
Here’s what you’ll find in the rest of our doc.
: Drill down on past reports and find studies that match your criteria. This page is visible to everyone at Thumbtack.
: Submit a new report to the archive.
: Dashboards and master data tables to help admins manage this doc. Since these pages are intended just for admins, Erik hides these pages in Thumbtack’s version of this doc.
This doc was co-authored by Cordelia Hyland, Erik Olesund, and Jordan Berry — researchers at Thumbtack. Special thanks to Cory Weaver for the illustrations and to Maya Geary for making sure the Research Archive lives up to its potential!
Want to join the Thumbtack design team? We’re currently hiring for multiple roles.