Methods used by our experts to evaluate the interface:
Tools and practices involving users remotely:
Practices involving users directly:
Benchmarking is performed using a pre-defined list of criteria, adapted to a particular industry or website type. Metasite has developed the following benchmarks:
Evaluation of the interface. Experts measure the usability, efficiency, and effectiveness of the interface against a chosen set of usability criteria, adapted to the needs of a particular client.
Metasite uses two sets of usability criteria:
An expert or a team of experts "walks" through a set of predefined (important or typical) user tasks, one step at a time. At each step the expert asks the following questions about the users' expected behavior:
When collected at the beginning of the analysis or before the project, statistics can be used to spot patterns and to raise questions and hypotheses, which can then be explored by more in-depth user analysis.
When collected at the end of the analysis or after the project, statistics can back up theories and verify whether the initial assumptions were correct.
Online surveys are used to collect quantitative data about users' opinions, needs and preferences. Users' tasks have to be clearly defined and the questions have to be planned well, because the expert has no opportunity to ask follow-up questions.
Surveys may include open-ended questions, depending on the given time limit (analysis of open-ended questions may delay the project).
A method of comparing two or more different designs in order to improve response rates.
A/B Testing can be performed manually or using one of many online tools.
Metasite uses a Google Website Optimiser account, which allows the expert to set up multiple variations of the web page being tested. When a user visits the website, Google Website Optimiser displays one variation of the web page according to the end user's IP address. As the user navigates the website, Google Website Optimiser tracks the user's clicks to see if one version is more effective than another.
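The IP-based variant assignment described above can be sketched as deterministic hashing: the same visitor always sees the same variation. This is an illustrative stand-in, not Google Website Optimiser's actual implementation, and the function name and sample address are invented for the example:

```python
import hashlib

def assign_variant(user_ip: str, variants: list) -> str:
    """Deterministically map a visitor to one variant.

    Hashing the visitor's address means a returning visitor is
    always shown the same variation, so click data stays consistent.
    """
    digest = hashlib.sha256(user_ip.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same address always lands in the same bucket:
variant = assign_variant("203.0.113.7", ["A", "B"])
```

In a real test, assignments and subsequent clicks would be logged so the two versions' conversion rates can be compared.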
A method of understanding users' tasks through observation and interviewing. The expert silently observes the user at work or in his natural environment and, during or after the observation, asks questions in order to understand the procedure from the user's point of view. The expert may also conduct exercises that let users map out the task procedure on paper and indicate their emotional reactions to specific steps.
Interviews help to learn about users' attitudes, beliefs and specific tasks. In order to prevent the expert from introducing bias and to ensure that every participant is interviewed using the same set of questions, the expert prepares a discussion guide, which is a list of questions that will be asked in a particular order. An expert may decide to ask follow-up questions to gain more clarity.
Users are likely to answer questions based on the way a task "should" be completed. Therefore, interviews should not be used alone; they work best when combined with Contextual Analysis.
A method of evaluating a product by testing it on users. An irreplaceable usability practice, as it provides direct input on how real users use the system.
The expert prepares a list of 5 to 10 tasks which users perform within an hour or less. The tasks should represent the most common user goals and/or the most important conversion goals from the system owner's perspective.
It is crucial to establish very clear success criteria for each task, to clarify where the participant should begin each task, and to consider how task completion and starting points may affect the expert's ability to counterbalance task order.
During the testing session the expert reads one task at a time to the participant and allows the participant to complete it without any guidance. To prevent bias, the expert follows the same "script" when explaining the task to each participant.
The expert analyzes users' facial expressions, the number of mouse clicks made, and the navigation path used to complete a task. After the study the expert compiles the data to determine the severity of each issue.
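Compiling the task data can be as simple as tallying success rates and click counts per task. A minimal sketch, with hypothetical session records invented for the example:

```python
from statistics import mean

# Hypothetical records: one entry per participant per task.
sessions = [
    {"task": "find pricing", "completed": True,  "clicks": 4},
    {"task": "find pricing", "completed": False, "clicks": 11},
    {"task": "find pricing", "completed": True,  "clicks": 6},
]

def summarize(task, records):
    """Success rate and average click count for one task."""
    rows = [r for r in records if r["task"] == task]
    return {
        "success_rate": sum(r["completed"] for r in rows) / len(rows),
        "mean_clicks": mean(r["clicks"] for r in rows),
    }

summary = summarize("find pricing", sessions)
```

Tasks with low success rates or unusually high click counts are candidates for the most severe issues.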
A method of evaluating a product by testing it on users. Unlike Task Completion, however, this method lets users choose their own path through the system: instead of defining tasks for them, the expert observes and listens to the users intently.
The expert creates a relaxed and friendly style of interaction. Questions are framed in an open-ended way to avoid tilting user behavior or responses in a certain direction. Questions focus on users' thinking process and their expectations, for example:
Card Sorting generates an overall structure for the information, as well as suggestions for navigation, menus and possible taxonomies. If combined with a discussion, Card Sorting helps to gain insights into the user's mental model.
Two types of Card Sorting can be applied:
Participants are given a stack of cards and are asked to group them in whatever way makes sense to them (there are no right or wrong answers). After they have grouped the cards, they are asked to name each group.
Participants are provided with group names and are asked to place each card into one of the pre-established groups.
A variation of the closed card sort is the semi-open/closed card sort. Participants begin with a closed card sort, except that they are allowed to change the pre-established structure: they may add new groups, rename groups, and remove groups.
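Open-sort results are commonly analyzed by counting how often participants placed each pair of cards in the same group; frequently co-occurring pairs suggest categories for the final structure. A minimal sketch, with hypothetical sort results invented for the example:

```python
from collections import Counter
from itertools import combinations

# Hypothetical results: each participant's grouping of the cards.
sorts = [
    [["cabbage", "carrot"], ["apple", "pear"]],
    [["cabbage", "carrot", "apple"], ["pear"]],
]

def cooccurrence(results):
    """Count how often each pair of cards lands in the same group."""
    pairs = Counter()
    for groups in results:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

pairs = cooccurrence(sorts)
# ("cabbage", "carrot") was grouped together by both participants.
```

The resulting matrix can feed a cluster analysis or simply be eyeballed for the strongest groupings.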
A technique for evaluating the findability of topics in a website. Tree testing is done using a simplified text version of the site structure to ensure that the structure is evaluated in isolation, nullifying the effects of navigational aids, visual design and other factors.
Tree Testing is done as follows: the user is given a task (e.g., "find a cabbage") and shown a text list of the top-level topics (on paper or online). The user chooses a heading and then a list of subtopics is shown. The user continues choosing (moving down through the tree) until he finds the topic or until he gives up.
One participant does several tasks in this manner. Once several participants have completed the test, the expert analyzes the results by answering these questions:
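Tree-test results are typically scored on questions such as: did participants succeed, did they take the direct path, and was their first click correct? A minimal sketch of that analysis, with hypothetical click paths invented for the example:

```python
# Hypothetical logs for the task "find a cabbage": the path each
# participant clicked through the tree, plus the correct path.
CORRECT = ("Groceries", "Vegetables", "Cabbage")
paths = [
    ("Groceries", "Vegetables", "Cabbage"),               # direct success
    ("Household", "Groceries", "Vegetables", "Cabbage"),  # success after backtracking
    ("Household", "Cleaning"),                            # gave up
]

def score(paths, correct):
    """Success rate, directness of successes, and first-click accuracy."""
    successes = [p for p in paths if p[-1] == correct[-1]]
    direct = [p for p in successes if p == correct]
    return {
        "success_rate": len(successes) / len(paths),
        "directness": len(direct) / len(successes) if successes else 0.0,
        "first_click_ok": sum(p[0] == correct[0] for p in paths) / len(paths),
    }

results = score(paths, CORRECT)
```

A low first-click score is a strong signal that the top-level labels, not the deeper structure, are the problem.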
Design Testing has two major benefits:
Metasite uses two approaches to Design Testing:
A method of measuring users' emotional responses. The idea is to use a set of Product Reaction Cards, each containing an adjective, to help the user articulate his feelings and to collect statistics: a useful quantification of an otherwise subjective field.
The participant is asked to describe the product or how using the product makes him feel by selecting relevant cards and then narrowing the selection to a smaller number of cards (e.g., five).
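The quantification mentioned above amounts to tallying how many participants picked each card. A minimal sketch, with hypothetical card selections invented for the example:

```python
from collections import Counter

# Hypothetical final five-card selections from three participants.
selections = [
    ["fast", "reliable", "busy", "clean", "fresh"],
    ["reliable", "confusing", "fast", "clean", "dated"],
    ["fast", "clean", "fun", "reliable", "simple"],
]

def tally(selections):
    """Aggregate card picks so a subjective exercise yields counts."""
    return Counter(card for chosen in selections for card in chosen)

counts = tally(selections)
top_cards = counts.most_common(3)
```

Cards chosen by most participants indicate the dominant emotional response; one-off picks can still surface in the follow-up discussion.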
A method of judging the weighting and emphasis of the design elements. The Flash Test reveals whether the correct elements are highlighted, whether users notice key information, and whether they are distracted by something less important.
The expert shows the design (online or paper version) to the user for a few seconds, removes it, and asks the user to recall as many items as possible. The expert notes which items were recalled and in what order.
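The recall notes can be summarized as how often each element was recalled and how early it tended to come up, which directly answers whether the right elements carry the emphasis. A minimal sketch, with hypothetical recall logs invented for the example:

```python
# Hypothetical logs: the order in which each participant named the
# design elements after the design was removed.
recalls = [
    ["logo", "hero photo", "search box"],
    ["hero photo", "logo"],
    ["logo", "search box", "footer links"],
]

def recall_stats(recalls):
    """How often each element was recalled, and its mean recall position."""
    stats = {}
    for order in recalls:
        for pos, item in enumerate(order, start=1):
            hits, total = stats.get(item, (0, 0))
            stats[item] = (hits + 1, total + pos)
    return {item: {"recalled": h, "mean_position": t / h}
            for item, (h, t) in stats.items()}

stats = recall_stats(recalls)
```

Elements that are rarely recalled, or recalled late, are likely under-emphasized in the design.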