Web and information technology (IT) accessibility evaluations involve a combination of both automated and manual accessibility testing. Whereas automated accessibility tests focus on code accuracy and technical conformance, manual accessibility tests focus on functionality and the degree to which an individual with a disability can perceive, operate and understand a website or IT product.
Manual testing must be a core component of any accessibility testing and evaluation process to ensure full access for individuals with disabilities.
Manual accessibility testing emphasizes how a product or web page functions for an individual with a disability. For example, any image on a web page must have a text equivalent that describes the purpose and/or function of that image. Such a text equivalent is usually provided by the alt attribute. While automated tests can verify that every image element has a corresponding alt attribute, they cannot determine whether that text equivalent is appropriate for the image. Evaluating whether a text equivalent describes the purpose or function of an image requires manual review to determine if there is an actual accessibility barrier.
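The boundary between automated and manual testing can be sketched in code. The following is a minimal, illustrative check (not a production tool) that flags img elements missing an alt attribute entirely. Note what it cannot do: it has no way to judge whether an existing alt text actually describes the image, which is precisely the part that requires a human reviewer.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that are missing an alt attribute entirely."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An automated test can only detect absence of the attribute;
            # judging the quality of alt text is a manual task.
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "(no src)"))

def find_images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

sample = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_images_missing_alt(sample))  # ['chart.png']
```

A check like this would pass an image whose alt text is, say, "image123.jpg" — syntactically present but useless to the user — which is why the manual review step remains necessary.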
Manual accessibility testing can be as simple or as complex as necessary to evaluate the full functionality of a website or IT product. One simple test for evaluating web content and software applications is to remove the mouse and try to navigate the user interface. The No Mouse Challenge is a global effort to raise awareness of accessible web design and can also be used to review how software applications function. It asks the user to unplug or disconnect the mouse and attempt to navigate and activate elements using only the keyboard. Features that require a mouse will prevent keyboard-only users from interacting with or navigating content and must be addressed.
Simple manual accessibility tests do not require any knowledge or use of assistive technologies and most can be performed using a standard keyboard. Simple manual tests can include the following:
- Can you use the Tab key to navigate to interactive elements, such as hyperlinks, form fields and buttons?
- Can you activate hyperlinks with the Enter/Return key?
- Can you activate buttons with the spacebar or Enter/Return key?
- If you click on a form field label, does the cursor become focused in that form field?
- Do videos have captions?
If any of the simple manual tests receives a “no” answer, then there are accessibility barriers that must be addressed.
Detailed Manual Testing
Websites and IT applications that are more complex may require more detailed and extensive testing. In order to track and evaluate the various aspects of a user interface, it may be helpful to separate content and interaction issues and focus on each separately. For example, manual testing of content issues can include:
- The existence of unique page titles, appropriate to the page content and/or task
- The presence of a “skip navigation” solution for pages with repeated navigational elements
- The use of appropriate semantic structure (headings, lists, paragraphs, tables, etc.), with data tables containing appropriate row and column structural markup
- The presence of an explicit label for each form input field, with any instructions and/or formatting details programmatically associated with the field
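The form-label item above is one content check that lends itself to partial automation. Below is an illustrative sketch (the function names and heuristics are my own, not a standard tool) that collects the for targets of label elements and reports form fields whose id is never referenced; whether a label's wording is actually meaningful would still need a manual pass.

```python
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    """Record label 'for' targets and form-field ids so they can be compared."""
    def __init__(self):
        super().__init__()
        self.label_targets = set()
        self.field_ids = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.label_targets.add(attrs["for"])
        elif tag in ("input", "select", "textarea") and attrs.get("type") != "hidden":
            self.field_ids.append(attrs.get("id"))

def unlabeled_fields(html):
    """Return ids of form fields with no explicitly associated <label>."""
    checker = LabelChecker()
    checker.feed(html)
    return [fid for fid in checker.field_ids if fid not in checker.label_targets]

form = ('<label for="email">Email</label><input id="email" type="text">'
        '<input id="phone" type="text">')
print(unlabeled_fields(form))  # ['phone']
```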
Manual accessibility tests that focus on interaction-type issues include keyboard operability, form and input validation, and dynamic page updates. For instance, as part of an online transaction, a web page may indicate the success or failure of the transaction as a temporary visual message that disappears after a few moments. This information must also be communicated in a manner that is supported by assistive technologies, such as screen-reader applications for blind and visually impaired individuals. Manual testing of interaction-type issues can include:
- The existence of a visible focus indicator when navigating interactive elements
- Modal dialog windows that trap keyboard navigation and can be dismissed using the Escape key
- Time-out interactions that allow the user at least 60 seconds to modify or extend the interaction time period using a simple key-press
- Form validation that directs focus back to the invalid form field when an error is identified
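For the temporary status message described above, a static check can at least confirm that the page declares a live region (an aria-live attribute, or a role of "status" or "alert") where such messages appear. This is only a rough heuristic of my own devising: whether a screen reader actually announces the update at the right moment can only be verified through manual testing.

```python
from html.parser import HTMLParser

class LiveRegionChecker(HTMLParser):
    """Flag whether the page declares any ARIA live region for status updates."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # aria-live, role="status", and role="alert" all create live regions
        # that screen readers announce when their content changes.
        if "aria-live" in attrs or attrs.get("role") in ("status", "alert"):
            self.found = True

def has_live_region(html):
    checker = LiveRegionChecker()
    checker.feed(html)
    return checker.found

print(has_live_region('<div role="status">Transaction complete</div>'))  # True
print(has_live_region('<div class="toast">Transaction complete</div>'))  # False
```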
Limitations Of Manual Testing
Manual testing does require additional time and effort to examine and assess websites and IT applications for potential accessibility issues. Because such testing requires human intervention, it can also restrict the number of pages or products that can be reviewed in a given time period. As a result, some vendors and developers use automated accessibility tests exclusively, and do not pursue manual accessibility tests. A lack of familiarity with accessibility topics, incomplete user stories and product acceptance criteria, and demands to release new versions of a website or IT product quickly can result in quality assurance reviews that lack manual accessibility tests.
While manual testing is a necessary part of any accessibility evaluation protocol, manual testing alone may not be the best overall strategy. For evaluating websites and web-based applications, it can be more effective to use a combination of automated and manual testing to pinpoint potential accessibility issues. Automated accessibility tests can scan code and identify areas of concern that can then be investigated further through manual testing. Furthermore, if a website or web-based application is built from a template, performing manual tests on the template can be more efficient than testing every single page. Manual tests, in conjunction with automated tests, can streamline the overall review process and mitigate the limitations of manual testing alone.
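The hybrid workflow described above can be sketched as a simple triage step: an automated pass separates machine-decidable failures from findings that must be queued for human review. The rule names and categories below are invented purely for illustration.

```python
def triage(findings):
    """Split automated findings into hard failures and manual-review items."""
    AUTO_FAIL = {"missing-alt", "missing-label"}    # machine-decidable failures
    NEEDS_HUMAN = {"alt-quality", "focus-order"}    # require human judgment
    report = {"fail": [], "manual_review": []}
    for rule, location in findings:
        if rule in AUTO_FAIL:
            report["fail"].append((rule, location))
        elif rule in NEEDS_HUMAN:
            report["manual_review"].append((rule, location))
    return report

findings = [("missing-alt", "index.html:12"),
            ("alt-quality", "index.html:30"),
            ("missing-label", "signup.html:8")]
print(triage(findings))
```

The point of the sketch is the division of labor: the automated scan narrows thousands of pages down to a short manual-review queue, rather than replacing the manual review itself.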
The focus of manual accessibility testing is to assess the functionality and usability of a website or IT product for an individual with a disability. Such manual tests can be simple in nature, using a combination of keyboard navigation and interaction tests to assess a website’s capabilities. More extensive testing can be performed depending on the complexity of the user interface or the nature of the application. Accessibility evaluation protocols must include manual testing as part of the overall review process to ensure support for a diverse audience.
Sean Keegan is Director of the California Community Colleges Accessibility Center