Validating UX without users
1 July 2014
A well-known rule of thumb in the world of UX is that it is important to validate and review interfaces (both planned and already built) in cooperation with users. Various approaches, methodologies, and even technologies assist in this mission, all aimed at the same goal: to capture and analyze the response, both conscious and subliminal, that the user provides as a reaction to the screen presented to them.
Despite the valuable information gained from validating with users, there are cases in which an interface is validated without contacting them at all. Users' time is valuable and limited, and as such is not available whenever we need it. Usually, users can invest only very limited time in the project we are working on, so we choose carefully at which stage, and on which aspect, to make the most of that scarce resource.
At other stages we can validate using a resource that is valuable yet less rare: the evaluators' own time and expertise. The law of large numbers is another reason that may lead us to validate without users: a limited set of users will not expose all the problems and possible failure points, which vary with different usage habits, knowledge levels, and tasks.
In this article, we will review three central alternatives for validating UX without users:
Cognitive walkthrough: a focused method that builds a structured simulation of how people think and act when they use the planned interface. Validation begins with choosing a representative user profile and a central task the interface is meant to support. Next, a walkthrough is performed over the detailed design document or a demo of the interface, spelling out every action the user would perform, and every question the user might ask, while completing the task. For each such description, we ask the following questions:
Is this indeed a task the user would want to perform?
Will the user understand the components (buttons, menus, links, etc.) involved in completing the task?
Will the user understand how these components help complete the task?
After completing the task, will the user understand the indication they receive about what happened?
When one of these questions can only be answered with a story that is incoherent or implausible, that is a clear indication of a UX problem. Besides locating weak spots in the design, the walkthrough may also expose flaws in the requirements document (not in the interface itself, but in the manner in which it is described). The cognitive walkthrough especially suits task-oriented projects, and projects with a relatively limited number of tasks, so that all of them can be analyzed in this manner.
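As a minimal sketch of how a walkthrough might be recorded (the task, persona, and step names here are hypothetical examples, not part of the method itself), each step of the task is checked against the four questions above, and any negative answer flags a potential UX problem:

```python
# Hypothetical sketch of recording a cognitive walkthrough as structured data.
# The four questions below paraphrase the ones asked for every step of the task.
QUESTIONS = [
    "Would the user want to perform this step?",
    "Would the user understand the components involved (buttons, menus, links)?",
    "Would the user understand how these components complete the step?",
    "Would the user understand the indication received afterwards?",
]

def walkthrough(task_name, steps):
    """steps: list of (step description, [four True/False answers])."""
    problems = []
    for description, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((description, question))
    return problems

# Example: a hypothetical 'reset password' task for a new-user persona.
issues = walkthrough("reset password", [
    ("click the 'Forgot password' link", [True, True, True, True]),
    ("find the reset email in the spam folder", [True, False, True, False]),
])
for step, question in issues:
    print(f"Problem at '{step}': {question}")
```

Each flagged pair is a weak spot to bring back to the design discussion, exactly the incoherent-story signal described above.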
Action analysis: a method that focuses on analyzing the sequences of actions users perform in the interface. It examines the time a skilled user would require (as opposed to the cognitive walkthrough, which simulates a new user) to perform each sequence, and thereby identifies the design's weak spots, i.e. "time-consuming" spots.
The main problems this method may expose are:
The user is required to perform too many actions in order to complete certain tasks.
Performing the required actions takes too much time.
Performing the required actions requires too much learning.
Action analysis may also identify holes in the specification: things the interface should do but does not. It likewise helps expose problems that might escape the designers' eyes due to the overload of details, requirements, and subtleties they must deal with.
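One established way to estimate a skilled user's time, in the spirit of the action analysis described above, is the Keystroke-Level Model (KLM), which assigns a standard average time to each primitive operator. The sketch below uses the commonly cited KLM operator averages; the two compared flows are hypothetical examples:

```python
# Sketch of a Keystroke-Level Model (KLM) estimate for a skilled user.
# Operator times are the commonly cited averages from Card, Moran & Newell.
OPERATOR_SECONDS = {
    "K": 0.2,   # keystroke (average typist)
    "P": 1.1,   # point with a mouse at a target
    "B": 0.1,   # mouse button press or release
    "H": 0.4,   # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def estimate_seconds(sequence):
    """sequence: string of operator codes, e.g. 'MPBB' = think, point, click."""
    return sum(OPERATOR_SECONDS[op] for op in sequence)

# Hypothetical comparison: filling a field via mouse vs. a keyboard shortcut.
mouse_flow    = estimate_seconds("MPBB" + "K" * 8)  # think, point, click, type 8 chars
shortcut_flow = estimate_seconds("M" + "K" * 10)    # think, shortcut + type 8 chars
print(f"mouse: {mouse_flow:.2f}s, shortcut: {shortcut_flow:.2f}s")
```

Summing operator times over each action sequence makes the "time-consuming" spots directly comparable, which is exactly what this method is after.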
Heuristic evaluation: evaluation against known potential points of failure or weakness. This method involves preparing a checklist of all the UX rules that may be relevant to the interface's areas and components, and analyzing the interface against that checklist. Concentrating all the relevant rules of thumb makes it possible to review every aspect of the written plan, and serves as an opportunity to refresh the project's point of view. Sometimes, due to constraints or the client's wishes, we choose not to follow a certain rule of thumb or popular guideline; the checklist and the heuristic evaluation help us reevaluate that choice and make it conscious and reasoned.
Besides rules of thumb, it is recommended to include in the checklist our list of common mistakes and verify that none of them were made in the planned interface. To profit as much as possible from this method, the review should be performed by several evaluators. Unlike other approaches (especially validation with users), these evaluators need prior knowledge of the UX field. Another difference is that this approach especially suits sites and interfaces that are not process-oriented.
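Because several evaluators review the same checklist, their findings need to be merged. A minimal sketch of that aggregation step (the checklist items and evaluator names are hypothetical examples):

```python
# Sketch of aggregating heuristic-evaluation findings across several evaluators.
# The checklist items below are hypothetical examples of UX rules of thumb.
CHECKLIST = [
    "Every action gives visible feedback",
    "Destructive actions ask for confirmation",
    "Error messages suggest a recovery step",
]

def aggregate(reports):
    """reports: dict of evaluator name -> set of checklist items flagged as violated."""
    flagged = {}
    for evaluator, violations in reports.items():
        for item in violations:
            flagged.setdefault(item, []).append(evaluator)
    # Items flagged by more evaluators come first: likely the most severe problems.
    return sorted(flagged.items(), key=lambda kv: -len(kv[1]))

results = aggregate({
    "evaluator A": {"Error messages suggest a recovery step"},
    "evaluator B": {"Error messages suggest a recovery step",
                    "Destructive actions ask for confirmation"},
})
for item, evaluators in results:
    print(f"{item}: flagged by {len(evaluators)} evaluator(s)")
```

Ranking items by how many evaluators flagged them gives a rough severity ordering to discuss before deciding which rules of thumb, if any, the project will consciously deviate from.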
Since these methods emphasize different aspects of using the interface, it is recommended to combine all three at different stages of planning and building the interface. The cognitive walkthrough provides a better understanding of the problems it exposes, so it is best integrated throughout the development process. When development of a distinct part of the interface is completed, perform a heuristic evaluation, which also serves as a final checklist for the entire project; apply action analysis once the interface has fully taken shape and its complexity is visible. Of course, none of the above should be interpreted as a recommendation to stop validating with users; rather, it points out the additional contribution these methods make alongside working with users.