
In every knowledge management project, there comes a moment when we're so close to implementing the solution that we can already see it materializing before our eyes. This is exactly when we need to take a deep breath and, before diving into deep waters, carefully dip our fingertips in to test the temperature. In practical terms, this means that before launching we must verify we are releasing a worthy solution, that is, conduct a pilot. A common mistake is thinking that a pilot means only technological testing to verify that the system functions. A successful pilot will always address three main components:
Content
Structure
Functionality
Content
Here we can address two main dimensions. The first is the extent to which the content truly represents the users' content world, meets their needs, and serves as a working tool. Pilot questions examining this dimension ask the subject to evaluate how satisfied they are with the available content, to what extent they expect to use it, how attractive it is to them, whether any content is missing, whether there is content they would like to add, and so on. The second dimension concerns how the content is written: whether it is well written and marketable, whether it is understandable to the user, whether its name reflects its content, whether it is up to date, etc.
Structure
The structure issue relates to how content is organized in our navigation tree and laid out on each page. Our basic assumption is that the structure should be simple and user-friendly, so that users can reach the content they need in a minimum of clicks. Questions relevant to this stage ask the subject for their opinion on the logic of dividing content into different content areas or between navigation bars (if there is more than one main bar). Another type of test that can be incorporated here asks the subject to locate knowledge items on the site and report how easy and intuitive they were to find. This is the stage at which we should examine items whose location we discussed in earlier stages of the specification.
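The "minimum clicks" assumption above can be made measurable. As an illustrative sketch (the tree, page names, and function are hypothetical, not part of any particular system), a breadth-first search over the navigation tree counts how many clicks separate the home page from a target item, which can then be compared against the times and paths pilot subjects actually report:

```python
from collections import deque

# Hypothetical navigation tree: each key is a page, its value the child pages.
nav_tree = {
    "Home": ["Procedures", "Forms"],
    "Procedures": ["Purchasing Procedure"],
    "Forms": ["Travel Form", "Expense Form"],
}

def clicks_to(tree, root, target):
    """Breadth-first search: minimum number of clicks from root to target."""
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return depth
        for child in tree.get(node, []):
            queue.append((child, depth + 1))
    return None  # target is not reachable from the root

print(clicks_to(nav_tree, "Home", "Expense Form"))  # → 2
```

A simple rule of thumb derived from such a count (for example, flagging any item more than three clicks deep) can focus the pilot's location-finding tasks on the items most at risk of being hard to find.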
Functionality
Although system testing is usually done earlier, a few more pairs of examining eyes can always help. Questions testing functionality always pair a task the subject must perform with feedback on whether it was properly executed, meaning whether the system responded as expected. Examples could be "write a message in the discussions component," "filter the information according to certain criteria," etc.
Of course, these are just the basics; additional questions relating to the user interface, design aesthetics, system performance under different conditions, and so on can be added and varied.
So Where Do We Start?
Deciding on Pilot Duration
We must balance a long-term pilot against a short, focused one. A longer pilot lets us collect more complete feedback about the system, but it also tires users and can even cause their response patterns to become routine. A short pilot, on the other hand, may not cover every aspect of the system that could affect the user, but it is easier to run. One technique we use to resolve this tension is a modular pilot: we divide the tasks into short modules, and each participant can either complete all the modules in one sitting, which takes several hours, or complete one module per day and spread out the work at their convenience.
Pilot Infrastructure
When possible, we recommend piloting the knowledge system infrastructure, allowing users to fully experience the system's capabilities. An additional advantage is that the pilot can be conducted using an appropriate component (such as surveys) in many systems, so result analysis is performed automatically, at least partially. Of course, if the infrastructure doesn't support this, it can be done using simpler tools, such as a printed form distributed to users or a file sent to them by email.
Pilot Users
Selecting pilot users isn't trivial. We must ensure proper representation of all users—management and employees, new and veteran staff, and all participating departments. When possible, it's recommended to take a sample that reflects the proportions of the user population.
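The proportional sampling described above can be sketched in a few lines. This is a minimal illustration with made-up department names and sizes, not a prescribed method: pilot seats are allocated to each department in proportion to its share of the user population, with at least one seat per department so no group goes unrepresented:

```python
# Hypothetical user population, broken down by department.
population = {"Sales": 120, "Operations": 60, "Finance": 20}

def proportional_sample(population, pilot_size):
    """Allocate pilot seats to each department in proportion to its size,
    guaranteeing every department at least one seat."""
    total = sum(population.values())
    allocation = {}
    for dept, size in population.items():
        seats = max(1, round(pilot_size * size / total))
        allocation[dept] = seats
    return allocation

print(proportional_sample(population, pilot_size=20))
# → {'Sales': 12, 'Operations': 6, 'Finance': 2}
```

The actual participants within each department would then be drawn to cover the other axes mentioned above (management versus employees, new versus veteran staff).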
Supporting Tools
In the pilot toolbox, it's recommended to maintain a template for each of the following documents:
A letter informing the user of the intention to conduct a pilot, their role in it, its importance, and their ability to influence the knowledge system's structure so it better suits their needs.
An opening letter for the pilot itself, including information about the pilot, timeframes, explanation of tasks, and pilot managers who can be contacted in case of questions, issues, etc.
A summary letter thanking the user for participating in the pilot. This letter should mention that an analysis of the pilot findings will be shared with the user later. Of course, we don't commit to implementing every request, but we do want to make clear that each one was considered and may be implemented at different stages of the project.
Pilot Timing
Pilot timing is a particularly sensitive issue. On one hand, we want the pilot to simulate the full system as closely as possible, so we wouldn't want to run it before most of the content has been loaded. We should also remember that conducting the pilot creates an expectation among users that they will soon have access to the full system. These two points push us toward running the pilot as close to the launch date as possible. On the other hand, what will we do if the pilot findings require changes, minor or major? We need to leave ourselves enough time to handle them. We should try to strike a balance between these two pressures. And for those who still want a benchmark: it's recommended not to start a pilot with less than 70% of the required content in place.
Pilot Findings
We must remember that the pilot is our opportunity to receive feedback from future users. Even feedback that doesn't meet our expectations is therefore important feedback we should act on, examining what can be improved, at what stage of the process, and so forth. Pilot findings enable two main uses. The first is improving the system and adapting it to user needs as far as possible. The second is using the findings as a marketing tool, both toward those doing the work, who are laboring over content entry (this is a good opportunity to expose them to feedback on their efforts), and, no less importantly, toward future users. Imagine that the system's marketing campaign announces that "70% of pilot users believe the knowledge system fully meets their needs." Such a teaser already creates positive expectations among users, because those vouching for the system's quality are their own colleagues, and what's better than a friend's recommendation?
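A headline figure like the "70%" above falls straight out of the pilot questionnaire data. As a minimal sketch (the scores and threshold are invented for illustration, assuming a 1–5 satisfaction scale), the marketing claim is simply the share of respondents at or above an agreed threshold:

```python
# Hypothetical pilot responses: each value is a 1-5 satisfaction score.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

def satisfaction_rate(scores, threshold=4):
    """Percentage of respondents who rated the system at or above threshold."""
    satisfied = sum(1 for s in scores if s >= threshold)
    return round(100 * satisfied / len(scores))

print(f"{satisfaction_rate(responses)}% of pilot users rated the system 4 or higher")
# prints "70% of pilot users rated the system 4 or higher"
```

If the pilot is run through a survey component, this kind of aggregation is typically produced automatically; the sketch shows what the calculation amounts to when done by hand, and the same per-question breakdown feeds the improvement work as well as the marketing message.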
Most importantly, the pilot indicates that you already see the finish line. With just a little more effort, you'll be there.