In this report (along with the attached Figma files) I will walk through my UX Audit for the Plandek App.
When we began this project, we identified one core problem: customers were struggling to understand the Plandek interface.
Not only did this prevent them from taking advantage of Plandek’s full functionality, but it also led to repetitive questions for the CX team.
Therefore, this audit had two goals: provide an outsider’s perspective on Plandek’s “ease-of-use” and suggest ways to improve the user experience. Before starting the project, we decided I would spend most of my time on four major areas in the app: Onboarding Flow, Navigation, Datasets + Dashboards, and the Explore View.
Areas of Focus
I’d like to point out two things before we get started. First, although some of my designs are in higher fidelity than a typical wireframe, they should not be taken as recommendations about visual style. Instead, they should be interpreted as recommendations about the flow of the application and the positioning of elements. In other words, feel free to adapt the visual style to suit your design system.
Second, I haven’t covered the entire app. Instead, I’ve focused on the four areas we identified at the beginning of the project. That means some sections and edge cases will be missing, but all of the major problem areas in the app have been addressed.
For this UX audit I’ve provided three deliverables.
01. Final Report
The “final report” is the document you’re currently reading. In this report I will explain my high-level approach for each section, not the implementation. For example, I might explain why I used “progressive disclosure” in the onboarding flow, but I won’t show the actual designs. The implementation details will be covered in the Figma Flows.
I recommend reviewing the Figma Flows for the implementation details, and then using the sections of this report as a reference to explore the rationale behind key decisions.
Okay, with that out of the way — let’s get into it.
An onboarding experience should serve two purposes:
Collect user information
Teach users about the platform
Plandek has room for improvement in both areas. First, I ran into hiccups when entering my information (e.g. not knowing the format of my Jira username, or spending time searching for my API key).
Second, the onboarding didn’t teach me about the interface. Once I entered my information, I was thrown into a dashboard filled with menus and metrics. I felt lost.
My goal in redesigning the onboarding was two-fold: Make it easier to enter information, and teach users about the app.
I did this through “progressive disclosure”. Users will land on an empty dashboard, and as they enter their information it will appear in the dashboard. This will help them slowly orient themselves. By the time they’ve finished entering their information they’ll have a solid understanding of their location in the app, along with the location of the main functionality.
Once users have finished onboarding, they will land on a dashboard within a dataset. At this stage, the structure of the navigation should serve as a guide that helps users get their bearings (similar to the signs at the top of a ski hill, telling skiers which way to descend).
A user’s ability to determine what to do next will depend on three factors: naming, positioning, and hierarchy.
There are three major issues in the app’s navigation. These issues make it difficult for someone to grasp where they are in the application.
Mixing of Metaphors
As we know, the app has four levels of hierarchy: Company → Dataset → Dashboard → Metric. However, the interface instantly begins to mix these metaphors.
For example, when a user finishes the onboarding flow, they get thrown into a dataset with the same name as the company.
If I create a company called “Albert’s Second Company”, I will automatically receive a dataset also named “Albert’s Second Company”. This blurs the distinction between our four levels of hierarchy (am I inside a company or a dataset?).
Naming of Hierarchy
Second, the naming of elements inside Plandek is unclear. We currently use “dataset” to describe something that isn’t quite a dataset. The screen that holds multiple dashboards is more accurately described as a “workspace” (a space where users can create dashboards and collaborate with team members).
Laws of Locality
The final source of confusion stems from violating the two UX “laws of locality”. The laws of locality state the following:
Put a control where it effects change.
If a control effects change over an area, put it above that area.
The current interface violates these two principles on multiple occasions, making it difficult for users to get their bearings. Here’s a simple example:
As you look through the redesigned screens, keep these three mental models in mind: Distinct metaphors, clear naming, and obeying the two laws of locality.
At some point, users will want to add new workspaces (formerly known as “datasets”). The current interface has a few non-standard interactions that are easy to fix (e.g. improving two-step dropdowns, converting radio buttons into tabs, etc.).
However, my most important recommendation is to notify users when data is missing and guide them towards adding that data.
For example, when users attempt to add data from a “data source”, they will only see the data they’ve already hooked up. In other words, if they don’t have GitHub hooked up, they won’t even know that “repositories” exist.
To resolve this, I’ve created “actionable inactive states” which guide users to hook up any data sources that are unavailable. I’ve used this approach a few times throughout the app: if something is unavailable, we shouldn’t hide it. Instead, we should guide users towards hooking up the correct data.
I made two major changes to the “explore view”. The first was splitting “base metrics” from “presets”. In the current interface, the mixing of these concepts leads to a lot of confusion.
For example, in the image below you can see a metric named “New bugs within a sprint”, but when I click on the question mark I land on the documentation for the base metric “Created Tickets”.
Splitting the metric into base metric + preset solves this problem (along with many others). It also allows users to rename their metrics without altering the base metric.
My second big decision was to improve the visibility of controls. Instead of leaving them hidden behind the secondary menu, I brought them into the forefront.
In design, there’s a saying: “obvious always wins”. The more obvious you can make something, the more likely your users are to use it. If we want users to discover a piece of functionality, we should put it front and center.
I made three important changes to the interface for “adding a metric”.
First, I made the categories more descriptive. When working with data, users tend to think of “nouns” that they want to manipulate — not abstract categories. So I’ve replaced the categories with clear nouns (e.g. “tickets”, “stories”, “deployment”, “builds”, etc.).
Second, I split “base metrics” from presets, and gave users the option to select from a list of presets (e.g. “New bugs within sprints”). When users consciously select a preset, they will start to understand the difference between “base metrics” and “presets”.
Finally, I changed the way we display metrics that have no data. Instead of hiding these metrics, we should guide the users towards connecting the proper data.
Lastly, I made a few changes to the dashboard screen. The main consideration on this screen is “scannability”. In other words, we want to let users extract the information they need as quickly as possible.
Thus, all my suggestions for this section focused on making the previews as scannable and digestible as possible.