Differentiating a Usable Design from an Ineffective One

by admin

In a less-than-ideal chapter of my working life as an online course developer, I stumbled upon a lasting lesson that has sharpened my user experience (UX) design skills: the importance of learning objectives. A learning objective states the knowledge or skill a learner is expected to acquire by the end of a module, and it must align with any accompanying assessment; without that connection, the validity of the evaluation is questionable. Similarly, when appraising a design’s usability, you must first set the goalposts that define success. Remember, it is the design, not the participants, that is being tested. What must users be able to do for you to conclude that the design meets its objectives? Log specific hours, generate a client invoice from those hours, dispatch that invoice correctly? These tasks constitute your assessment framework.

Naturally, usability tests revolve around observing task completion, but it is critical to pin down the concrete actions you require from users. Specific success criteria steer you away from ambiguous targets such as “grasping the time tracking concept.” To know conclusively, users would have to articulate their comprehension; only when they convey an accurate understanding can you confidently credit the design. Such criteria not only affirm the success of your design but also make the results far easier to communicate.

Powerful Verb Selection

For learning objectives, George Piskurich’s Rapid Instructional Design offers a list of verbs that can shape your criteria. Verbs such as “describe” or “demonstrate” suit comprehension goals; avoid ambiguous terms like “understand,” and look instead for explicit statements or actions that bear witness to comprehension. As tasks grow more complex, the verbs escalate to “explain” or “organize,” and then to “create” or “evaluate” at the apex of task difficulty. Verbs chosen this carefully make success at task completion observable and easy to monitor.
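If it helps to keep those levels straight while drafting criteria, here is a minimal sketch in Python. The tier names and the verb-to-tier mapping are my own rough reading of the progression above, not Piskurich’s exact taxonomy; the point is simply that a vague verb should get flagged before it reaches your test plan:

```python
# Illustrative only: a rough mapping from observable verbs to complexity
# tiers, loosely based on the progression above (not Piskurich's exact list).
VERB_TIERS = {
    "describe": "comprehension",
    "demonstrate": "comprehension",
    "explain": "intermediate",
    "organize": "intermediate",
    "create": "advanced",
    "evaluate": "advanced",
}
VAGUE_VERBS = {"understand", "grasp", "know", "appreciate"}

def check_criterion(criterion: str) -> str:
    """Classify a draft criterion by its leading verb."""
    verb = criterion.split()[0].lower()
    if verb in VAGUE_VERBS:
        return f"'{verb}' is not observable; choose a testable verb instead"
    return VERB_TIERS.get(verb, "unknown verb; ask whether it is observable")

print(check_criterion("Create an invoice for time logged on a project"))  # advanced
print(check_criterion("Understand the time tracking concept"))            # warning
```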

Setting Clear Objectives

As you prepare a usability test, ask what the user should be able to do with, or say about, the design. For instance:

  • Logging a set number of hours dedicated to a particular project,
  • Composing an invoice tied to those logged hours,
  • Discerning and articulating the difference between time tracking and time logging.

Armed with definitive success criteria, you now have a clearer idea of the tasks to assign during the test. It’s crucial to note that while success criteria are the foundation of your evaluation, tasks provide a narrative for the participant, adding task-specific context as necessary. For example:

Success criterion: Create an invoice for time logged on a project
Task instruction: “After spending three hours on the Atlas project, demonstrate how you would bill Acme Products for your efforts.”

While the two evidently overlap, success criteria are an internal compass for you and your team, whereas the task is the engagement tool for the participant within the usability test. Additionally, some criteria may call for a discussion rather than a task action, which can shed light on whether users align with the design’s intended mental model.
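To make that separation concrete, here is a small illustrative sketch of how a test plan might pair each internal criterion with its participant-facing wording. The field names and the discussion flag are mine, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One success criterion paired with the task or prompt that exercises it."""
    success_criterion: str       # internal compass for the team
    task_instruction: str        # the narrative the participant actually hears
    is_discussion: bool = False  # True when the criterion is probed by conversation

test_plan = [
    Criterion(
        success_criterion="Create an invoice for time logged on a project",
        task_instruction=("After spending three hours on the Atlas project, "
                          "demonstrate how you would bill Acme Products for your efforts."),
    ),
    Criterion(
        success_criterion="Articulate the difference between time tracking and time logging",
        task_instruction="In your own words, how do time tracking and time logging differ?",
        is_discussion=True,
    ),
]

for item in test_plan:
    kind = "discussion" if item.is_discussion else "task"
    print(f"[{kind}] {item.success_criterion}")
```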

Stakeholders’ Appreciation for Criteria

Stakeholders care more about findings than methods, and vague outcomes frustrate them. Communicating uncertainties like, “The participant tracked some time, but we’re unsure if the concept was clear to them…” is ineffective. Clarity is part of your responsibility, as is providing tangible solutions for UX issues. Success criteria aid in both clarifying and conveying results. For visualization, we sometimes employ a color-coded chart on our wiki: green for success, red for failure, so the core issues surface at a glance. This intuitive format, coupled with concise result summaries and actionable recommendations, drives our iterative improvement process. Your method may vary (perhaps you present a formal report to a client), but the core advantages remain constant.
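The chart itself can be as simple as you like. As a rough illustration (the criteria and results below are invented; ours lives on a wiki, not in a script), a few lines of Python can print a per-criterion pass/fail grid with ANSI colors:

```python
# Invented results: for each criterion, one True/False per participant.
results = {
    "Log three hours on the Atlas project":  [True, True, False, True, True],
    "Create an invoice for the logged time": [True, False, False, True, False],
    "Articulate tracking vs. logging":       [True, True, True, True, True],
}

GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"  # ANSI color codes

for criterion, outcomes in results.items():
    cells = " ".join(f"{GREEN}P{RESET}" if ok else f"{RED}F{RESET}" for ok in outcomes)
    rate = sum(outcomes) / len(outcomes)
    print(f"{criterion:<42} {cells}  ({rate:.0%} succeeded)")
```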
