Competitive Review & Heuristic Analysis

Project Description
The problem was: we didn’t know what the problems were. At least, we did not have a holistic view of the usability problems within GN ReSound’s fitting software, Aventa. We also had minimal information on how we compared to our competitors in terms of usability and feature availability. The last usability analysis of the fitting software had been conducted in 2007, on what was, by the time of this review, a legacy platform. A lot had changed. I was tasked with conducting a study to identify trends in usability problems and benchmark our software against the competition.
Project Details
Client: GN ReSound
Date: November 2014
Skills: Heuristic Analysis, Qualitative Data Analysis, Quantitative Data Analysis

The Approach
The project called for three distinct phases. First, a heuristic analysis to determine what was problematic in the current platform. Second, a comparative analysis of competitors built on the trends and problems discovered in the first phase, quantifying the data where possible. Third, a qualitative analysis of the data, with the results compiled for presentation.
Phase 1: Heuristic Analysis / Expert Review
Using Jakob Nielsen’s usability heuristics as a framework, and essentially turning myself into a user through immersion, I began an in-depth evaluation of the problems within the fitting software platform. This involved running through a number of connection scenarios and user tasks, and assigning a severity value to each problem as a measure of its importance. There was, however, a weakness to address: heuristic evaluations and expert reviews are traditionally conducted by three to five usability experts, and severity ratings can be extremely unreliable when they come from a single reviewer. I was a team of one, and resources were limited. To address this weakness, I drew on the 2007 usability report for Aventa 2.X (which also used Nielsen’s heuristic framework), previous research our team had conducted with our call centers, and data derived from more than 100 hours of direct user observation and semi-structured interviews. This gave me solid correlations between what I was finding in the expert review and what we had previously learned from users and internal research.
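
By way of illustration, the shape of such a finding log can be sketched in a few lines of Python. The findings, the corroborated flag, and the down-weighting rule below are hypothetical stand-ins, not the study’s actual data or scoring method; only the 0–4 severity scale is standard to Nielsen’s approach.

```python
from dataclasses import dataclass

# Nielsen's severity scale: 0 = not a problem ... 4 = usability catastrophe.
@dataclass
class Finding:
    heuristic: str       # which of Nielsen's ten heuristics is violated
    description: str     # what was observed during the walkthrough
    severity: int        # 0-4, single-reviewer rating
    corroborated: bool   # backed by the 2007 report, call-center data, or observation?

# Hypothetical example findings, not the actual study data.
findings = [
    Finding("Visibility of system status", "No feedback while instruments connect", 3, True),
    Finding("User control and freedom", "No undo after applying a fitting preset", 4, True),
    Finding("Consistency and standards", "Inconsistent labels across fitting screens", 2, False),
]

# Illustrative rule: down-weight uncorroborated ratings to reflect
# the unreliability of a single reviewer's severity judgments.
def weighted_severity(f: Finding) -> float:
    return f.severity if f.corroborated else f.severity * 0.5

for f in findings:
    print(f"{f.heuristic}: severity {weighted_severity(f):.1f}")
```

Keeping corroboration as an explicit field makes it easy to separate single-reviewer hunches from problems triangulated against prior research.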
Phase 2: Competitive Benchmarking
With a solid set of trends identified within our own platform, I turned to our competitors’ software to understand 1) how they handled the issues I had identified and 2) what features they had incorporated that we did not currently have in our system. This involved connecting competitors’ hearing instruments and running the same scenarios and user tasks I had scripted for the first phase. I was then able to compile a series of problem and feature comparisons between our platform and our competitors’.
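
A minimal sketch of the kind of feature matrix this comparison produces; the platform names other than Aventa, the feature names, and the support values are invented for illustration.

```python
# Hypothetical feature matrix: which platforms support which capabilities.
platforms = ["Aventa", "Competitor A", "Competitor B"]
features = {
    "Wireless programming": {"Aventa", "Competitor A"},
    "In-situ audiometry": {"Competitor A", "Competitor B"},
    "Guided first fit": {"Aventa", "Competitor A", "Competitor B"},
}

# Print a simple presence/absence table.
print(f"{'Feature':<24}" + "".join(f"{p:<16}" for p in platforms))
for feature, supported in features.items():
    row = "".join(f"{('yes' if p in supported else 'no'):<16}" for p in platforms)
    print(f"{feature:<24}" + row)

# Flag features competitors have that our platform lacks.
gaps = [f for f, s in features.items() if "Aventa" not in s]
print("Feature gaps:", gaps)
```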
Phase 3: Qualitative Analysis of Results
This phase involved coding more than 200 distinct problems identified across the various platforms. Those codes were then used to quantify the prevalence of each type of problem and to pull specific examples for presentation. The quantitative analysis was divided into primary and secondary issues. The majority of problems concerned navigation, visual design, user control, labeling, and system communication (including user help).
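
A rough sketch of how coded problems can be tallied into prevalence counts; the codes mirror the categories above, but the data points are hypothetical.

```python
from collections import Counter

# Hypothetical coded findings: (platform, code) pairs produced during coding.
coded_problems = [
    ("Aventa", "navigation"), ("Aventa", "visual design"),
    ("Aventa", "navigation"), ("Competitor A", "labeling"),
    ("Competitor A", "system communication"), ("Aventa", "user control"),
]

# Overall prevalence of each problem code across all platforms.
overall = Counter(code for _, code in coded_problems)
print(overall.most_common())

# Prevalence per platform, useful for side-by-side comparison.
per_platform = Counter(coded_problems)
for (platform, code), n in sorted(per_platform.items()):
    print(f"{platform}: {code} x{n}")
```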
Results
