Deciphera

Optimizing a pharma-CRM

Project overview
Deciphera is a small pharmaceutical company whose sales representatives were growing frustrated with a CRM that was hard to navigate and challenging to understand. The stakeholders wanted us to focus on 3 prominent flows that they knew their reps frequently navigated through and had feedback on. 
The deliverable
We were to conduct user research and select 6 screens to redesign based on the data. The wireframes were expected to be mid-fidelity and would help guide the CRM team with any enhancements or redesigns.
Project details
Team (5): UX designer, UI designer, Project manager, Tech lead, Design lead
Role: UX designer (and researcher)
Timeline: 3 months
Main tools: Airtable, Figma, FigJam
Creating a purpose
Before getting started, we wanted to spend time with the stakeholders aligning on intentions, ideas and hopes for the project to get a better idea of expectations.
After we collected these perspectives, we spun them into a purpose statement to help create alignment and focus towards a common goal.

The purpose statement

The goal for discovery
To better empathize with sales reps, we needed to understand the experience, uncover pain points and discover needs that were either being satisfied or unmet. The plan for discovery was holistic and comprehensive. We were going to conduct a heuristic evaluation, hold in-depth interviews and run focus groups to analyze the product and its relationship to the user.

Heuristic evaluation
To evaluate this CRM, we measured the interface against Nielsen Norman's 10 heuristics and rated any apparent issues on a range from mild to severe. I analyzed multiple variables for each violation and tracked where it fell in a flow to see how violations related to and/or impacted the experience. Cross-analyzing violations against flow locations also allowed us to pinpoint areas of interest for interviews and focus groups.
We visualized the evaluation in Figma as well to complement the documentation in Airtable and aid in understanding. 
Assumptions: UX/UI violations would be heavily present, and "visibility of system status" was likely to be a frequently violated heuristic

Heuristic evaluation completed in Airtable

The results
Organizing the data by heuristic showed the following results: Consistency & Standards was mentioned 38 times, while Aesthetic & Minimalist Design and Visibility of System Status were each mentioned 16 times. These 3 common violations meant that the product tended to feel inconsistent, unintuitive, overly complex and information-heavy. In addition, the lack of system feedback created a sense of mistrust, as it was hard to tell what the system was interpreting in the backend and how it was processing actions.
These violations were placed in context within their flows, described and labeled to stakeholders in the following format to help communicate and build trust in our analyses.

The presentation of our synthesis

Interviews 
Participants 
5 Sales reps and 1 Medical science liaison were to be interviewed for 1 hour each over Zoom.
The goal
The primary goal was to lead an open-ended discussion to uncover behaviors, needs and emotional responses across the holistic experience. From this, we expected to receive data that encompassed the whole of their experience, with areas of detail that were important to each participant. The discussion guide focused on context setting, opinions on the experience and a walkthrough activity.
Some assumptions
From our heuristic evaluation, we thought that participants would likely talk about the challenges of navigating the interface and mention the use of workarounds to compensate for the interface's shortcomings.
Focus Groups 
Participants
3 focus groups, divided up by company role (Director, Territory manager and Sales rep). Each session was to last 1 hour over Zoom.
The goal
We wanted focus groups to complement the data gained from the interviews by generating group discussion around likes, dislikes and thoughts specifically within the 3 primary flows. Since we wanted to compare across groups, we designed the same focus group activity for each group.
Some assumptions
Mostly, we were expecting the focus groups to validate what we had already heard. We were hoping the Directors and Territory managers would have more to say since we hadn't interviewed them.
The goal for defining 
After completing all of this research, we needed to spend some time sifting through and synthesizing what we had learned. We wanted to be intentional about tying each research effort together, so we frequently discussed how to integrate intersecting insights.
Interview synthesis 
After all of the interviews were complete, we spent 2 weeks synthesizing what we had learned using affinity maps and revisiting the notes. We wanted to make sure we took the time for proper analysis and caught any gaps in our data, since this was to be the foundation for our upcoming wireframes.
The process
We started the process by deciding how we wanted to set up synthesis. We decided to designate stickies to each participant and tag them with each user's role on one map. This gave us the ability to cross-analyze between roles and themes, differentiate participants and draw further relationships between roles and experiences.
After deciding on our structure for synthesis, we began by taking highlights from the notes and organizing them on stickies. We then sifted through these and arranged them into large groupings like "positives" and "negatives". After the groupings were more digestible, we reorganized the large groups into smaller themes and gave them over-arching labels like "pain-points" or "requests". 
The end result was a board that achieved high-level analyses while providing the freedom of a detailed view.
Focus group synthesis 
While we had significantly fewer stickies from the focus group activities, we chose to repeat the affinity mapping technique. We had to be much more cognizant of time spent on synthesis because we were starting to encroach on the time allotted for design. Our goal was to complete synthesis within 2 days, which we were a bit nervous about.
The process
Since we had to be much more efficient this time around without sacrificing data integrity, we created an affinity map for each focus group and color coded the stickies by role instead of by participant. This gave us the same cross-analyzing ability that we had during the interview synthesis which maintained our ability to dive deeper into the data. 
After organizing the boards, we optimized time by setting a 10-minute timer for each map. If we weren't satisfied with the groupings after the 10 minutes, we spent an extra 7 minutes reviewing and reorganizing. At the end, we analyzed each map and cross-analyzed between roles. We pulled out initial themes and revisited our synthesis the next day to ensure we maintained data integrity while remaining comprehensive in our analyses, despite the shorter timeline.

Part of an affinity map made after interviews

The problem
Interpreting the results
After we were satisfied with the synthesis, we highlighted conclusions from the interviews and focus groups that we felt we had enough data to support.
From the interviews, we had 12 major themes and 3 prominent sentiments. In this context, we defined themes as any obvious behavior or preference that was prevalent in the synthesis and sentiments as any underlying emotions or thoughts people indicated they felt during the interview process. Since we only had 1 Medical Science Liaison, we felt like we couldn't use the data from that role in an individual context. 
To organize our findings from the focus group analyses, we matched each like, dislike or thought that the group had consensus on to its respective location within the 3 flows. We then intersected this data with the heuristic evaluation by labeling each like, dislike or thought with the heuristic it either violated or solved.
To ensure that our data produced valid results, we matched related themes and cross-analyzed findings between the focus group and interview data sets. We found that the likes and dislikes from the focus groups aligned quite well with the themes found in the interview synthesis, meaning our data sets complemented and supported one another.
Key problem
Our biggest takeaway from the interviews was that outdated, challenging UI created inefficient flows. The lack of system notifications made it difficult for users to tell whether or not an action had been completed. In addition, the data constraints and limitations in data analysis led users to rely heavily on workarounds from other tools and to mistrust the data present in the system.
Adding to these findings, our focus group and heuristic evaluation cross-analyses highlighted the inefficiencies in the UI, the unintuitive design and the general inflexibility present in all of the major flows.
In short, data from the Sales reps showed that they were spending so much time inputting data and trying to figure out what the CRM was doing that it reduced the time they spent actually selling. Their general mistrust of the data in the system further impeded them by making it harder to find trustworthy leads.
Key focus areas
There were 3 qualities we wanted to prioritize in our upcoming designs to alleviate some of these challenges: efficiency, time and trust.
We wanted to increase efficiency and reduce time spent in the interface by allowing for the easy input of data. We were going to simplify navigability to strengthen the mental map of the interface and make the experience feel more intuitive. Most importantly though, we were going to strengthen trust by adding in more organization, analytical tools and more transparency with system status and data updates.  
The goal for developing 
The goal here was pretty simple: wireframes needed to increase efficiency, reduce time spent in the interface and strengthen trust.
However, the challenge was that we were only scoped to deliver 6 screens, so choosing which parts of the CRM to redesign was going to take some thinking. Thankfully, we had also planned on conducting design validation sessions, so if we misjudged anything, we were hopeful that it would be caught.
We ended up deciding to redesign the home page and the navigation, create a template for a table, and then redesign the most challenging screen from each of the 3 flows.
Drafting designs 
Competitive research
When we started brainstorming, we looked at CRM competitors, examining how their designs looked, how their tables behaved and any comparable flows. This gave us an idea of what layouts felt intuitive, which features were standard and what enhancements we could add to increase the product's competitive advantage.
Brainstorming
Afterwards, we spent some time whiteboarding and iterating. We then documented our ideas in Figma and revisited them later, asking ourselves questions like "Would this feature increase speed?", "Does this help with extra data analysis?" and "Do I feel like users would be able to interpret what is happening here?"

Dashboard and table brainstorm

Selecting designs to test
The designs we chose were the wires that we felt most increased speed, reduced time and built trust. They had competitive features built into them and aligned with requests from the data like "I'm tired of switching multiple tabs on my computer to see what my day looks like". They also contained UX/UI enhancements such as a general search bar, a notifications panel and breadcrumbs.

Dashboard and table template

Design validation
Participants
4 Sales reps and 1 Medical Science Liaison reviewed 6 screens over Zoom for 1 hour.
The goal
Our overall priority was to validate our synthesis of the original research and the translation of that data into designs. We wanted to make sure it felt intuitive, efficient and analytical and that any enhancements felt necessary to the flow. It needed to feel easily navigable and trustworthy.
The process
To prepare for design validation, we cleaned up the mid-fidelity UI and prototyped key behavior for each design (like kebab menus). During the sessions, we asked a range of questions that prompted feedback on anticipated behavior, layout and whether or not elements felt intuitive and/or important.
Design validation synthesis
Similar to the in-depth interviews and the focus groups, we relied on affinity maps for proper analysis. We created a board for each screen and then grouped highlights from the notes into categories. Notes that prompted iteration were transferred onto the designs so that feedback could be seen in context.
Results
Overall, our initial designs were a great success! Participants had the most to say about the home screen, plus a couple of language tweaks, but loved how the designs aligned with their needs. They noted that the behavior, navigation and flow felt like a much-needed improvement to efficiency and data analysis.

Stickies of potential home screen improvements

The goal for delivering 
Since we were delivering 6 mid-fidelity wireframes, we wanted to make it clear that these designs were not finished. While parts were backed by research, there was still more research and design work to do. To support this balance, we delivered visually simple wireframes, taking care to document only the key behaviors and attributes supported by the research we completed, and provided a document of questions we still had.
Along with our wireframes, we provided prioritized recommendations to help progress the designs. We noted that creating a design library would help maintain consistency and improve system understanding, and that research into tech-integration possibilities should be done to streamline the experience, further decrease the need for workarounds and reduce the time spent switching between interfaces. In addition, we proposed further research to explore what types of content sales representatives would want displayed and what their expectations were for information architecture.
Our designs
Reflections & Challenges 
Align research with design
The timeframe for this project was tight, scoping only 3 months for research and design. Research was meant to be all-encompassing, with interviews and focus groups spanning roles, titles and departments, while design was meant to target a specific user group.
Normally, I am all for holistic research, especially when there is potential for future work and a lot of unknowns. However, the time spent researching those who fell outside the ultimate end-user group meant that we could have either prioritized more time on the end user or increased time spent designing. There was an opportunity cost, since the scope didn't plan research around the end user.
If I were to re-scope this project, I would only interview and host focus groups with the sales reps, since the designs were going to be geared towards them. This probably would have saved about 2 weeks that we could have spent on brainstorming or design validation sessions.
Delivering wireframes without a design system
Something that was top of mind throughout the whole project was that our 6 wireframes, spanning 3 different flows, were going to be passed along to a 3rd-party development team that had no design system. We were told that the development team was going to try to use these mid-fidelity wireframes to code from, which is an idea we challenged but tried to account for.
While we ended up delivering wireframes that noted key data-backed features with some behavioral documentation, I would have preferred to make 2 individualized wireframes instead, redesigning the navigation and the homepage, and to spend the rest of the time creating a basic design system with colors, buttons, icons and key UI elements like modals or dropdowns. I think this would have been a good compromise between idealistic and practical designs.
