User Experience Testing Methods, Tools and Practices

    Methods used by our experts to evaluate an interface:

    Benchmarking, Heuristic Evaluation, Cognitive Walkthrough

    Tools and practices involving users remotely:

    Online Behavior Statistics, Surveys, A/B Testing

    Practices involving users directly:

    Contextual Analysis, One-on-One Interviews, Task Completion, Listening Lab, Card Sorting (Open and Closed), Tree Testing, Design Testing (Flash Test and Emotional Response Test)

     

    Benchmarking

    Benchmarking is performed using a pre-defined list of criteria, adapted to a particular industry or website type. Metasite has developed the following benchmarks (a scoring sketch follows the list):

    • digital brand presence benchmark contains 100 evaluation criteria for professional brand online presence
    • e-banking benchmark contains more than 500 evaluation criteria for retail banking
    • postpaid telecom selfcare benchmark contains 300 evaluation criteria for postpaid online selfcare in the mobile telecommunications industry
    • prepaid telecom selfcare benchmark contains 250 evaluation criteria for prepaid online selfcare in the mobile telecommunications industry
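
    For illustration, a minimal sketch of how a weighted benchmark score could be computed from such a criteria list; the criterion names and weights below are hypothetical, not taken from the actual Metasite benchmarks.

        from dataclasses import dataclass

        @dataclass
        class Criterion:
            name: str
            weight: float   # relative importance of the criterion
            met: bool       # does the evaluated website satisfy it?

        # Hypothetical excerpt from a digital brand presence benchmark.
        criteria = [
            Criterion("Contact details reachable within one click", 2.0, True),
            Criterion("Consistent logo and brand colours on every page", 1.0, True),
            Criterion("Site search available on every page", 1.5, False),
        ]

        def benchmark_score(criteria):
            """Weighted percentage of criteria the website satisfies."""
            total = sum(c.weight for c in criteria)
            achieved = sum(c.weight for c in criteria if c.met)
            return 100.0 * achieved / total

        print(f"Benchmark score: {benchmark_score(criteria):.1f}%")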

     

    Heuristic Evaluation

    An expert evaluation of the interface. Experts measure the usability, efficiency and effectiveness of the interface against a chosen set of usability criteria, adapted to the needs of a particular client. A small tallying sketch follows.
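
    As an illustration only, a minimal sketch of how the findings of such an evaluation could be tallied by criterion and severity; the criterion names and the 1-4 severity scale are assumptions, not Metasite's actual criteria sets.

        from collections import defaultdict

        # Each finding: (usability criterion violated, severity 1 = cosmetic .. 4 = critical)
        findings = [
            ("Visibility of system status", 3),
            ("Consistency and standards", 2),
            ("Visibility of system status", 1),
            ("Error prevention", 4),
        ]

        by_criterion = defaultdict(list)
        for criterion, severity in findings:
            by_criterion[criterion].append(severity)

        # Report the number of issues and the worst severity per criterion.
        for criterion, severities in sorted(by_criterion.items()):
            print(f"{criterion}: {len(severities)} issue(s), worst severity {max(severities)}")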

    Metasite uses two sets of usability criteria:

     

    Cognitive Walkthrough

    An expert or a team of experts "walks" through a set of predefined (important or typical) user tasks, one step at a time. At each step the expert asks the following questions about the expected user behavior (see the sketch after the list):

    • Will the user try to achieve the right effect?
    • Will the user notice that the correct action is available?
    • Will the user associate the correct action with the effect to be achieved?
    • If the correct action is performed, will the user see that progress is being made?
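
    A minimal sketch of how the answers to these four questions could be recorded for each step of a task so that failure points stand out; the task and step names below are hypothetical.

        # The four questions asked at every step of the walkthrough.
        QUESTIONS = (
            "Will the user try to achieve the right effect?",
            "Will the user notice that the correct action is available?",
            "Will the user associate the correct action with the effect?",
            "If the correct action is performed, will the user see progress?",
        )

        # Hypothetical task "transfer money": each step maps to four yes/no answers.
        walkthrough = {
            "Open the money transfer form": (True, True, True, True),
            "Select the beneficiary account": (True, False, True, True),
            "Confirm the transfer with a one-time code": (True, True, False, True),
        }

        # Any "no" answer marks a likely usability problem at that step.
        for step, answers in walkthrough.items():
            problems = [q for q, ok in zip(QUESTIONS, answers) if not ok]
            print(f"{step}: {'OK' if not problems else '; '.join(problems)}")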

     

    Online Behavior Statistics

    When collected at the beginning of the analysis or before the project, statistics can be used for spotting patterns and raising questions and hypotheses, which can then be explored by more in-depth user analysis.

    When collected at the end of the analysis or after the project, statistics can back up theories and verify whether the initial assumptions were true. A brief pattern-spotting sketch follows the tools list.

    Tools used:

    1. Google Analytics

    2. ClickTale
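
    For illustration, a minimal sketch of the kind of pattern spotting this data enables, run on a hypothetical page-level export; the column layout and figures are made up and do not reflect an actual Google Analytics or ClickTale report format.

        # Hypothetical page-level export: (page, visits, bounces, exits)
        pages = [
            ("/",        12000, 4200,  900),
            ("/pricing",  3000, 2100,  600),
            ("/signup",   1800,  300, 1200),
        ]

        # Flag pages with an unusually high bounce or exit rate; such outliers
        # become hypotheses to explore with more in-depth user analysis.
        for page, visits, bounces, exits in pages:
            bounce_rate = bounces / visits
            exit_rate = exits / visits
            flag = "  <-- investigate" if bounce_rate > 0.5 or exit_rate > 0.5 else ""
            print(f"{page}: bounce {bounce_rate:.0%}, exit {exit_rate:.0%}{flag}")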

     

    Surveys

    Online surveys are used to collect quantitative data about users' opinions, needs and preferences. Users' tasks have to be clearly defined and the questions have to be planned well, because the expert cannot ask follow-up questions.

    Surveys may include open-ended questions, depending on the given time limit (analysis of open-ended questions may delay the project).
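
    A minimal sketch of how answers to a closed, Likert-scale question could be aggregated into the quantitative data described above; the question and the responses are hypothetical.

        from collections import Counter

        # Hypothetical answers to "How easy was it to find your last invoice?"
        # on a 1 (very hard) .. 5 (very easy) scale.
        responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4, 3, 5]

        counts = Counter(responses)
        mean = sum(responses) / len(responses)

        print(f"n = {len(responses)}, mean = {mean:.2f}")
        for score in range(1, 6):
            share = counts.get(score, 0) / len(responses)
            print(f"  {score}: {counts.get(score, 0):2d} answer(s) ({share:.0%})")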

     

    A/B Testing

    A method of comparing two or more design variations in order to improve response rates.

    A/B Testing can be performed manually or using one of many online tools.

    Metasite uses a Google Website Optimiser account, which allows the expert to set up multiple variations of the web page being tested. When a user visits the website, Google Website Optimiser displays one variation of the web page according to the end user's IP address. As the user navigates the website, Google Website Optimiser tracks the user's clicks to see if one version is more effective than another.
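
    As a hedged illustration of how the tracked clicks might be compared, a minimal sketch of a two-proportion z-test on hypothetical conversion counts; this is a generic statistical check, not the analysis Google Website Optimiser itself performs.

        from math import sqrt

        # Hypothetical results: (visitors shown the variation, conversions)
        variation_a = (5000, 400)   # original page
        variation_b = (5000, 465)   # redesigned page

        def z_score(a, b):
            """Two-proportion z-test for the difference in conversion rates."""
            (na, ca), (nb, cb) = a, b
            pooled = (ca + cb) / (na + nb)
            se = sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))
            return (cb / nb - ca / na) / se

        print(f"A converts at {variation_a[1] / variation_a[0]:.1%}, "
              f"B converts at {variation_b[1] / variation_b[0]:.1%}")
        print(f"z = {z_score(variation_a, variation_b):.2f} (|z| > 1.96 is significant at the 95% level)")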


     

    Contextual Analysis

    Understanding users' tasks by observation and interviewing. The expert silently observes the user at work or in his natural environment and, during or after the observation, asks questions in order to understand the procedure from the user's point of view. The expert may also conduct exercises that let users map out the task procedure on paper and indicate their emotional reactions to specific steps.

     

    One-on-One Interviews

    Interviews help to learn about users' attitudes, beliefs and specific tasks. In order to prevent the expert from introducing bias and to ensure that every participant is interviewed using the same set of questions, the expert prepares a discussion guide, which is a list of questions that will be asked in a particular order. An expert may decide to ask follow-up questions to gain more clarity.

    Users are likely to answer questions based on the way a task "should" be completed. Therefore, interviews should not be used alone; they work best when combined with Contextual Analysis.

     

    Task Completion

    A method of evaluating a product by testing it on users. An irreplaceable usability practice, as it provides direct input on how real users use the system.

    The expert prepares a list of 5 to 10 tasks, which users perform within an hour or less. The tasks should represent the most common user goals and/or the most important conversion goals from the system owner's perspective.

    It is crucial to establish very clear success criteria for each task, to clarify where the participant should begin each task, and to consider how the starting points may affect the expert's ability to counterbalance task order.

    During the testing session the expert reads one task at a time to the participant and allows the participant to complete the task without any guidance. To prevent bias, the researcher follows the same "script" when explaining the task to each participant.

    The expert analyzes users' facial expressions, the number of mouse clicks made, and the navigation path used to complete a task. After the study the expert compiles the data to determine the severity of each issue.
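
    A minimal sketch of how task success rates and click counts could be compiled after the sessions; the tasks and the per-participant numbers are hypothetical.

        # Hypothetical session records: task -> list of (completed, mouse clicks) per participant.
        sessions = {
            "Find the monthly invoice":    [(True, 4), (True, 6), (False, 11), (True, 5)],
            "Change the delivery address": [(True, 3), (False, 9), (False, 14), (True, 4)],
        }

        for task, results in sessions.items():
            success_rate = sum(1 for ok, _ in results if ok) / len(results)
            avg_clicks = sum(clicks for _, clicks in results) / len(results)
            # A rough severity cue: the lower the success rate, the more severe the issue.
            severity = "high" if success_rate < 0.5 else "medium" if success_rate < 0.8 else "low"
            print(f"{task}: {success_rate:.0%} success, {avg_clicks:.1f} clicks on average, severity {severity}")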

     

    Listening Lab

    A method of evaluating a product by testing it on users. Unlike Task Completion, this method does not define tasks for the users; instead, it allows them to choose their own path through the system while the expert both observes and listens to them intently.

    The expert creates a relaxed and friendly style of interaction. Questions are framed in an open-ended way to avoid tilting user behavior or responses in a certain direction. Questions focus on users' thinking process and their expectations, for example:

    • What could this website be used for?
    • Why did the user click that particular button?
    • If the user hesitates before clicking a link, what is he thinking about?
    • Based on what the user saw on the previous screen, is the next screen what he expected?

     

    Card Sorting

    Card Sorting generates an overall structure for the information, as well as suggestions for navigation, menus and possible taxonomies. If combined with a discussion, Card Sorting helps to gain insights into users' mental models.

    Two types of Card Sorting can be applied:

    1. Open Card Sort

    Participants are given a stack of cards and are asked to group them in whatever way makes sense to them (there are no right or wrong answers). After they have grouped the cards, they are asked to name each group.

    2. Closed Card Sort

    Participants are provided group names, and are asked to place each of the cards into one of the pre-established groups.

    A variation of the closed card sort is the semi-open/closed card sort: participants begin with a closed card sort, but are allowed to change the groups along the way, i.e. rename groups, add new groups, and remove groups. A small analysis sketch for open card sort results follows.
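
    For illustration, a minimal sketch that summarises open card sort results into a co-occurrence count (how often two cards ended up in the same group), which is one common input for the overall structure; the card names and groupings are hypothetical.

        from collections import Counter
        from itertools import combinations

        # Hypothetical open card sort results: one dict of named groups per participant.
        participants = [
            {"Billing": ["Invoice", "Payment history"], "Account": ["Change password", "Profile"]},
            {"My money": ["Invoice", "Payment history", "Profile"], "Settings": ["Change password"]},
            {"Documents": ["Invoice"], "Me": ["Profile", "Change password", "Payment history"]},
        ]

        cooccurrence = Counter()
        for groups in participants:
            for cards in groups.values():
                for a, b in combinations(sorted(cards), 2):
                    cooccurrence[(a, b)] += 1

        # Card pairs grouped together most often are candidates for the same menu section.
        for (a, b), count in cooccurrence.most_common():
            print(f"{a} + {b}: grouped together by {count} of {len(participants)} participants")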

     

    Tree Testing

    A technique for evaluating the findability of topics in a website. Tree testing is done using a simplified text version of the site structure to ensure that the structure is evaluated in isolation, nullifying the effects of navigational aids, visual design and other factors.

    Tree Testing is done as follows: the user is given a task (e.g., "to find a cabbage") and shown a text list of the top-level topics (on paper or online). The user chooses a heading and then a list of subtopics is shown. The user continues choosing (moving down through the tree) until he finds the topic or until he gives up.

    One participant does several tasks in this manner. Once several participants have completed the test, the expert analyzes the results by answering these questions (a findability calculation sketch follows the list):

    • what is the findability of a particular item in the tree (how many users found it and how many clicks they needed on average)
    • did the users have to backtrack to find a particular item and, if so, what items did they choose instead
    • did the users find the topics quickly or did they have to think
    • overall, which parts of the tree worked well and which did not
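
    A minimal sketch of how findability could be computed for one item from such results; the attempt data below (found, clicks, backtracked) is hypothetical.

        # Hypothetical attempts for the task "find a cabbage":
        # (found the item?, clicks taken, backtracked at least once?)
        attempts = [
            (True, 3, False),
            (True, 5, True),
            (False, 7, True),
            (True, 3, False),
        ]

        MINIMUM_CLICKS = 3   # depth of the correct path in the tree

        found = [a for a in attempts if a[0]]
        findability = len(found) / len(attempts)
        avg_clicks = sum(clicks for _, clicks, _ in found) / len(found)
        direct = sum(1 for ok, _, back in attempts if ok and not back) / len(attempts)

        print(f"Findability: {findability:.0%} of participants found the item")
        print(f"Average clicks when found: {avg_clicks:.1f} (minimum possible: {MINIMUM_CLICKS})")
        print(f"Direct success (no backtracking): {direct:.0%}")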

     

    Design Testing

    Design Testing has two major benefits:

    1. helps to focus on the user rather than personal likes or dislikes
    2. helps to avoid designer-client confrontation by offering objective user feedback

    Metasite uses two approaches to Design Testing:

    1. Emotional Response Test

    A method of measuring users' emotional responses. The idea is to use a set of Product Reaction Cards, each containing a descriptive word, to help the user articulate his feelings and to allow statistics to be collected, a useful quantification of an otherwise subjective field.

    The participant is asked to describe the product or how using the product makes him feel by selecting relevant cards and then narrowing the selection to a smaller number of cards (e.g., five).
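
    For illustration, a minimal sketch of how the selected cards could be tallied across participants; the words below are merely examples in the spirit of Product Reaction Cards, not the official card set.

        from collections import Counter

        # Hypothetical final selections (five cards each) from four participants.
        selections = [
            ["Easy to use", "Trustworthy", "Slow", "Clean", "Predictable"],
            ["Easy to use", "Clean", "Boring", "Trustworthy", "Efficient"],
            ["Confusing", "Slow", "Trustworthy", "Clean", "Familiar"],
            ["Easy to use", "Efficient", "Clean", "Predictable", "Trustworthy"],
        ]

        tally = Counter(card for cards in selections for card in cards)

        # The most frequently chosen words summarise the emotional response.
        for card, count in tally.most_common(5):
            print(f"{card}: chosen by {count} of {len(selections)} participants")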

    2. Flash Test

    A method of judging the weighting and emphasis of the design elements. The Flash Test reveals whether the correct elements are highlighted, whether users notice key information, or whether they are distracted by something less important.

    The expert shows the design to the user for a few seconds (online or on paper), removes it, and asks the user to recall as many items as they can. The expert notes which items were recalled and in what order.
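
    As an illustration, a minimal sketch comparing one participant's recall order with the emphasis the design intends; the element names and both orders are hypothetical.

        # Elements in the order the design intends to emphasise them.
        intended = ["Call-to-action button", "Product name", "Price", "Navigation", "Footer links"]

        # The order in which one participant recalled elements after the flash.
        recalled = ["Product name", "Navigation", "Call-to-action button"]

        # Elements recalled out of their intended order, or not at all, suggest that
        # the visual weighting does not match the intended emphasis.
        for rank, element in enumerate(intended, start=1):
            if element in recalled:
                note = f"recalled in position {recalled.index(element) + 1}"
            else:
                note = "not recalled"
            print(f"{rank}. {element}: {note}")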

     

    Aug 10, 2010 08:54

    Author
    Viktorija T.

    Tags
    usability, testing, benchmark