Conformance

Conformance Reporting

This topic introduces the concepts related to conformance reporting, that is, reporting the results of your accessibility testing to show the level of conformance to accessibility requirements.

Report Scope and Audience

Your role as a Trusted Tester is to determine what does and does not meet the test requirements as presented in the DHS Trusted Tester Section 508 Conformance Test Process for Web. Although you may also be called upon to recommend solutions or temporary workarounds, assess the impact of accessibility failures, and so on, it is outside of the scope of this training to provide guidance in any of those areas.

You might report results informally during iterative testing in agile development prior to release. In these cases, you may communicate results by email, through a defect tracking system, or by any other method that meets your project's needs. However, the final report should be clearly written and easy to understand for a wide audience. Because the accessibility report might be read by technology-savvy individuals or by individuals with little technical expertise, you should document failures in a way that can be easily understood. It is recommended that you use an existing, industry-accepted accessibility report template to document your test results.

Report Mechanisms

Report mechanisms are the processes and techniques for reporting test results, such as report tools and formats. Several accessibility report templates are publicly available, as are report formats associated with automated commercial testing products. There are also reporting tools that extract results from formats such as spreadsheets to create more readable reports. Two publicly available reporting mechanisms are discussed below.

Accessibility Conformance Reporting Tool (ACRT) — This reporting tool was developed by DHS specifically for use with the Trusted Tester test process. It has the advantage of merging results from different test conditions to determine if specific WCAG Success Criteria are met. ACRT also provides some conditional logic and quality assurance checks.

ACRT generates reports in HTML format. It also stores its report data in JSON format, which can be used to generate a report in your own preferred format. ACRT is available at GitHub (https://github.com/Section508Coordinators/ACRT).
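
Because ACRT stores its report data as JSON, you can post-process that file to produce a report in whatever format you prefer. Below is a minimal sketch in Python; the field names ("result" and the file name "acrt-results.json") are assumptions for illustration, so check the actual ACRT export for its real schema.

    import json

    # Load the raw report data exported by ACRT.
    with open("acrt-results.json") as f:
        results = json.load(f)

    # Tally outcomes per result type for a quick summary.
    # "result" is an assumed field name, not the documented ACRT schema.
    summary = {}
    for entry in results:
        outcome = entry["result"]  # e.g., "PASS", "FAIL", "DNA"
        summary[outcome] = summary.get(outcome, 0) + 1

    for outcome, count in sorted(summary.items()):
        print(f"{outcome}: {count}")

From here the tallied data could be rendered into HTML, a spreadsheet, or another report template.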

Voluntary Product Accessibility Template (VPAT) — The Information Technology Industry Council (ITI) provides the global standard for the Voluntary Product Accessibility Template (VPAT). There are versions for international reporting (INT version) and for European standards (EU version). The version most commonly used in the United States is the Web Content Accessibility Guidelines (WCAG) VPAT. You should use VPAT version 2.1 (or later) for the report. More information is available at https://www.itic.org/policy/accessibility/vpat.

While no specific reporting mechanism is required by the Trusted Tester process, it is highly recommended that the ACRT be used to report accessibility results, as this tool was created specifically for Trusted Tester reporting. If you plan to report results using a VPAT template, you can also use ACRT to collect your results and transfer them to the VPAT.

If other report mechanisms are used, they should provide the same report details discussed below.

Report Components

It is crucial that the test report include the following components:

  • What was tested — The report must clearly identify:
    • The version number of the site (or, if this is not available, the date of the test)
    • The URL of the website being tested
    • The scope of testing, for example:
      • Full evaluation of a website
      • Testing limited to a new feature or newly added page or section
  • Tester Information — Once you are certified as a Trusted Tester, you are issued a certificate with your Trusted Tester ID number. In your report, include your name and ID number to authenticate the test results. This also informs the client whom to contact for any questions about the test report.
  • Operating System/Browser — Test results may be affected by the operating system and browser used for testing. Therefore, identify in your report which browser and operating system were used. If testing on multiple browsers, include details of any test result that was specific to a particular browser.
  • Identified Failures — All failures should be identified in the report. Whether to provide all test results will depend on project scope and client needs. The next lesson will cover what information should be included when reporting each failure.

Results

Follow the guidance provided in the Trusted Tester Process for recording the test results. A formal test report should include a result for every Test ID (each of which refers to a Test Condition) in the test process. In Trusted Tester, the test results are as follows:

  • DOES NOT APPLY (DNA) — The Test Condition does not apply to what was tested. For example, if a website does not have any time limits, 8.A Timing Adjustable does not apply and is marked DNA.
  • PASS — All of the content tested meets the Test Condition requirements throughout the scope of the testing. For example, if there are 20 form elements on a website and all 20 meet the requirement for 5.A Label Provided, then 5.A Label Provided is marked as PASS.
  • FAIL — The Test Condition applies, but in one or more instances the content does not meet the Test Condition requirements. Using the preceding example, if there are 20 form elements on a website but only 19 meet the requirements for 5.A Label Provided, then 5.A Label Provided is marked as FAIL. Comments should be included to locate and identify any form elements that failed.
  • NOT TESTED — The Test Condition applies but was not tested. At present, the Trusted Tester Process does not provide a way to evaluate some Test Conditions, due to constraints such as the lack of a suitable testing tool. In such cases, the test report should clearly indicate what was excluded from the test and why. Currently this result is used only for Test IDs 20.A Parsing and 3.A Flashing; a formal Trusted Tester report should use the NOT TESTED result for these two instances only.

Although the VPAT templates provided by the ITI Council allow the result SUPPORTS WITH EXCEPTIONS, this is not an acceptable test result in the Trusted Tester Process. If any instance of a Test Condition fails, the Test ID should be marked as FAIL, as the roll-up sketch below illustrates.
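
The roll-up from individual instances to a single Test ID result can be expressed as simple logic. The following sketch in Python assumes each tested instance is recorded as True (meets the Test Condition) or False (does not); it is an illustration of the rules above, not part of the Trusted Tester tooling.

    def test_id_result(instances, testable=True):
        """Roll up per-instance outcomes into one Test ID result."""
        if not testable:      # e.g., 20.A Parsing, 3.A Flashing
            return "NOT TESTED"
        if not instances:     # the Test Condition never applied
            return "DOES NOT APPLY"
        if all(instances):    # every instance meets the condition
            return "PASS"
        return "FAIL"         # one or more instances failed

    # 19 of 20 labeled form elements -> FAIL, per the 5.A example above.
    print(test_id_result([True] * 19 + [False]))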

Evaluating Content

For a formal test report, you should evaluate all content within the scope of the test and record a result for every Test ID in the test process. If you are unable to test all content, the test report should clearly explain why. All the content on a web page or application should be tested. This could include content such as:

  • commercial off-the-shelf (COTS) products
  • widgets
  • plug-ins.

For example, a site containing synchronized media must also provide an accessible media player that includes controls for captions and audio descriptions. If the media player used is a COTS plug-in, it must also be tested to determine whether it meets all applicable test requirements. Failures are not omitted from the test report based on a developer's ability to fix them.

If constraints of time and resources prevent testing of all content throughout a site or application, the test report must indicate that a sampling approach was used. A sampling plan should be designed to ensure that the content tested is representative of the entire site or application; for example, the sample should cover all types of user controls, page layouts, processes, plug-ins, and media in the site. Similarly, when testing a page with a large number of synchronized media files, testers may identify a few representative samples of those media files to test.
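
One simple way to build such a sample is to group the site's pages by content type and select representatives from each group. A minimal sketch in Python, with a hypothetical inventory (in practice this would come from a site crawl or the project's page list):

    import random

    # Hypothetical page inventory grouped by content type.
    inventory = {
        "forms": ["login", "search", "contact"],
        "media": ["intro-video", "tutorial-1", "tutorial-2", "webinar"],
        "tables": ["budget", "schedule"],
    }

    # Pick up to two representatives per content type so that every
    # kind of control, layout, and media appears in the sample.
    sample = {
        kind: random.sample(pages, min(2, len(pages)))
        for kind, pages in inventory.items()
    }
    print(sample)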

For results of DNA, PASS, or NOT TESTED, no further documentation is needed.

Documenting Failures

When a Test ID FAILS, you must document the failure clearly, keeping in mind that the documentation may be reviewed by a varied audience. For example:

  • developers may try to recreate the results to fix failures
  • business owners or Section 508 Program Managers may use the results to determine the criticality of failures for remediation and for delivery acceptance
  • acquisition staff may use results to make purchase decisions when comparing products.

Generally, failures should be reported individually. For example, if a website fails to provide alternate text for a “Home” button icon and also FAILS for a flow chart image that is missing alternate text, note these as two separate failures under 7.A Meaningful Images.

However, testers are not required to exhaustively identify every instance of a particular failure. The test report might cite one or two examples and then note the problem as a global issue. For example, if a question mark icon identifying a “Help” button appears throughout the site and fails because it is missing alternate text, it should be documented once or twice and noted as a global issue. Similarly, navigational links that fail a specific test and are found throughout the site can be covered by reporting one representative link and documenting it as a global issue; other failing links elsewhere on the page can be reported and documented in a separate line item.

The explanation and documentation of each failure should be clear, easily understandable, and identify the specific Test Condition that FAILS. Descriptions of failures should not contain extraneous information or combine multiple issues. All failures should include the following (a sample record sketch follows this list):

  • The exact location of the failure (URL, page name, and section of page)
  • Whether the issue is a global issue (occurring throughout the website or application)
  • A complete description of the failure
  • Any specific steps needed to create the scenario in which the failure occurred. Another user should be able to recreate the failure based on these steps.
  • A screenshot of the failure, if possible. Crop the picture and mark up or highlight the location of the failure.
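
One way to keep failure write-ups consistent is to capture each one in a fixed structure mirroring the list above. Below is a sketch in Python using a dataclass; the field names and example values are illustrative, not a format mandated by the Trusted Tester process.

    from dataclasses import dataclass, field

    @dataclass
    class FailureRecord:
        test_id: str           # e.g., "7.A Meaningful Images"
        url: str               # exact page where the failure occurs
        page_section: str      # section of the page
        is_global: bool        # occurs throughout the site/application?
        description: str       # complete description of the failure
        steps: list = field(default_factory=list)  # steps to recreate
        screenshot: str = ""   # path to a cropped, marked-up screenshot

    failure = FailureRecord(
        test_id="7.A Meaningful Images",
        url="https://www.example.com/home",
        page_section="Header navigation",
        is_global=True,
        description='The "Help" question-mark icon has no alternate text.',
        steps=["Open any page", "Inspect the Help icon in the header"],
    )

A record like this could map onto a line item in an ACRT or VPAT-based report.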