Our dev team decided they wanted to upgrade and modernize their web stack, starting with one of our web apps. With the rewrite came the opportunity for a redesign!
The move to Angular meant I needed to grow the small improvements from this project into a full component library. At the time, none of our web apps shared a similar style, so this was my opportunity to begin my long-held goal of unifying all our web interfaces into a cohesive family of apps.
Even beyond cohesive interfaces, I saw this as a chance to move our company from a disconnected set of apps toward my vision of a single, seamless web experience for our users.
The Problems
This was also a chance to fix the problems in the old interface:
Analysis results appeared below the settings, so you had to scroll up and down repeatedly to change a setting and then view its effect on the results.
Simulation results required clicking through four separate pages just to compare across simulations.
There was no data visualization. And because results were sorted by item number rather than by score, you had to hunt for the item with the highest score; there was no emotionally satisfying moment of immediately seeing the winning item.
Some results reported item numbers rather than item labels (i.e., the words), which made the results difficult to interpret.
Research
Survey blueprint written in the software's text-to-survey syntax.
Design
Based on the requirements, I put together this flow to communicate to developers how data and usage would flow from our flagship app, into and through the new app we were creating, and (when needed) out again as exports. This also helped me organize the screens and experiences that I would need to design.
Since we were building this new app from scratch, I knew this was the right time to introduce a new, official, more modern design system and UI style for our web experiences, with this app as the first place it would be applied. That alone took an immense amount of thought and experimentation: the design decisions had to work not only for this one app but also for our other web experiences that had yet to be rebuilt with this new technology (and therefore with this new design system). All in all, I found myself more or less redesigning a half dozen web apps at the same time.
That's the architect in me. Looking back, and now that my company is trying to be more agile, I may have done better by biting off smaller chunks at a time. Then again, a high-level vision and architectural plan (albeit a flexible one that can be adjusted or even abandoned as new information and requirements arrive) has continually proven invaluable. Balancing the holistic and the immediately practical is a constant struggle, and I'm endlessly passionate about identifying the principles of process strategy.
In the end, I had a high-resolution prototype complete with a Figma component library, which would be the seat of the new design system. From it, developers made an Angular component library.
Some Solutions
I moved the settings into a settings panel that sat to the right of the results so that users could make tweaks to the settings and see the results all in the same view.
I added charts to the design as the default view setting. The results were sorted by score so that the winning item was the most prominent one on the page.
I made all settings and results textual rather than just numerical.
I was told we wouldn't be integrating this app into our other survey-builder app, even though integration was the obvious preference over opening it in a new tab. I prototyped what the integrated experience would look like anyway. After seeing it, the developers said it could be done, and the product manager agreed it was worth the extra time to make it so.
Designers don't have a compile button to test whether something works, so all throughout the project, I shopped my designs around to teammates, stakeholders, and users to get specific feedback as well as a general confirmation that I was designing in the right direction.
Testing
With each major feature set, we tested the usability of our interface on existing users. For the first set, I used the asynchronous usability testing app Maze to create tasks (called "missions") for users to perform on our prototype and then used our own survey software to ask them follow-up questions about how they felt about their experience using the new features.
I color-coded their follow-up survey responses to identify problems, successes, and additional requests and ideas users brought up.
For a later usability test, we met with users over video chat and watched as they tried to perform various tasks. (For this round, my fellow designer took the lead while I recorded success rates and answers to a few follow-up questions.)
The Results
Users were enthusiastic about the redesign! They said it was much easier and more convenient to use and much better looking. One wrote, "I went through the update when it was announced and, honestly, I was blown away. The whole look and feel of the simulator is totally different and you did a great job anticipating users’ needs." Another wrote, "[The new UI] has gotten quite a positive response from our project managers."
What story did the usage metrics tell? Unfortunately, despite my efforts, product management decided not to spend the budget on setting up metrics collection in this new app. As of this writing, I am still working to change that.