A5: User Evaluations#

Overview#

By now you should be closing in on a user interface that is ready for evaluation. A good user evaluation is an invaluable tool for identifying aspects of your product that your users might struggle with. For the user evaluation you will assess the usability of the interface you designed by conducting a study using the Think Aloud protocol discussed in lecture. Your study should be designed so that participants have an opportunity to interact with at least one aspect of your user interface.

Each team member is responsible for arranging at least one participant for the study. A minimum of two team members should be available for all study sessions. All team members should participate in data analysis.

For each participant you will collect the following demographics:

  • Age

  • Experience with software interfaces (novice, intermediate, or expert)

  • Preferred web browser, computer operating system, and mobile operating system

  • Recruiter (team member name)

Task Planning#

You will plan a series of tasks for your user to complete using your program. Each task should have a clear description, goal, and end point. Write as many tasks as you feel are necessary to adequately evaluate the usability of your interface. Since this number will vary greatly depending on your scenario, it is difficult to predict how many tasks each team will need. In general, if you have a sufficiently complex user interface and prepare only one or two tasks, it is safe to say that you have not fully evaluated it.

Data Collection#

With your testing plan in hand, you will start conducting your usability tests. Present each participant with the exact same introduction (scope, purpose, session detail, etc.) and tasks. If you discover an error with your first participant, do not fix it for the remaining participants unless it is critical to the success of your study (more on this below).

For each user evaluation session, you will collect the following data (a simple recording template is sketched after this list):

  • Task completion results:

    • Did the user successfully complete the task? If not, what happened?

  • Errors:

    • Did the user make any mistakes during the task?

    • Did the user encounter errors in the prototype during the task? If so, what happened?

  • Time per task:

    • How long did it take the user to complete the task?

  • Notes from user reflection:

    • Likes

    • Dislikes

    • Suggestions or recommendations for improving the interface

    • Any other subjective data you would like to collect from your user.
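
To keep records consistent across sessions and note-takers, it helps to agree on a recording template before your first participant arrives. Below is a minimal sketch in Python, assuming you log one row per participant-task attempt to a shared CSV file; the file name and column names are illustrative, not required.

```python
import csv
import os

# Illustrative columns: one row per participant/task attempt.
FIELDNAMES = [
    "participant_id",    # deidentified label, e.g. "P1"
    "task_id",
    "completed",         # True/False; if False, note what happened in "notes"
    "user_errors",       # number of mistakes the participant made
    "prototype_errors",  # number of errors in the prototype itself
    "time_seconds",      # time to complete (or abandon) the task
    "notes",             # observations, likes/dislikes, suggestions
]

def log_attempt(path, row):
    """Append one task attempt to the session log, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_attempt("sessions.csv", {
    "participant_id": "P1", "task_id": 2, "completed": True,
    "user_errors": 1, "prototype_errors": 0, "time_seconds": 148,
    "notes": "Hesitated at the filter menu; liked the confirmation dialog.",
})
```

A shared spreadsheet with the same columns works just as well; the point is that every observer records the same fields in the same way.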

Finally, before you conduct your first study, be sure to run at least one pilot evaluation with your group. You want to be absolutely sure that everything will work as intended and consistently across all of your participants. A good approach is to pick a team member to act as the participant and guide them through each task. Try to think about the different ways that someone new to your work might interact with your prototype to ensure that all interactions work properly.

Analysis of Results#

For your analysis, you will divide your results into two categories: quantitative and qualitative. Your quantitative results will include task completion results, errors, and time per task. These should fit nicely within a table or spreadsheet so that you can get a holistic view of your results across all participants. Once your data is aggregated, you will summarize the results using the following metrics (a sketch of this aggregation appears after the list):

  1. Average time per task

  2. Task completion rate per user

  3. Average number of user errors per task
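
If your aggregated data is in a spreadsheet or CSV such as the template sketched earlier, these three summaries are simple group-by averages. The snippet below is a rough sketch using pandas with the illustrative column names from that template.

```python
import pandas as pd

# Assumes the illustrative columns from the recording template above.
df = pd.read_csv("sessions.csv")

# 1. Average time per task (seconds), across all participants
avg_time_per_task = df.groupby("task_id")["time_seconds"].mean()

# 2. Task completion rate per user (fraction of assigned tasks completed)
completion_rate_per_user = df.groupby("participant_id")["completed"].mean()

# 3. Average number of user errors per task
avg_errors_per_task = df.groupby("task_id")["user_errors"].mean()

print(avg_time_per_task, completion_rate_per_user, avg_errors_per_task, sep="\n\n")
```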

Your qualitative results will include your observation notes and reflection with each participant. Since this is qualitative data, you will need to use your expertise to develop insights about your user interface. Were there common points in your interface where participants stumbled? Was there a feature that participants liked or disliked? Did you observe any unexpected participant behavior that could be improved through changes to your UI? If so, summarize your qualitative analysis in one or two paragraphs and list out the changes you have decided to make. NOTE: You do not have to make these changes!

Submission#

Combine all materials developed into a single PDF document divided into the following four sections:

  • Participants: A list of all participants used for the evaluation (deidentified!)

  • Tasks and Intervention: A list of the tasks given to each participant, plus screenshots of the intervention for which the tasks were carried out

  • Data: All of the raw data that you collected across all participants

  • Analysis of Results: A report on your findings

You must also include a cover page with the name of your team, scenario description, and team member names.

You will submit this document to Canvas.

Grading#

The following criteria will be used to grade your final assignment.

  • All materials submitted on time: 0.25 pts

  • Submission format requirements met: 0.25 pts

  • Minimum participant total met: 1 pt

  • Sufficient tasks for designed intervention: 1 pt

  • Analysis of quantitative data: 1 pt

  • Analysis of qualitative data: 1 pt

  • Summary of changes to final product: 0.5 pts

Total: 5 pts