Guideline 1:
Support input and output device-independence
Ensure that the user can interact with the user agent (and the content it renders) through different input and output devices.
Since people use a variety of devices for input and output, user agent developers need to ensure redundancy in the user interface. The user may have to operate the user interface with a variety of input devices (e.g., keyboard, pointing device, and voice input) and output modalities (e.g., graphical, speech, or braille rendering).
Though it may seem contradictory, enabling full user agent operation through the keyboard is an important part of promoting device-independence in target user agents. Most operating environments already include support for some form of keyboard; beyond that, there are several reasons for this requirement:
- For some users (e.g., users with blindness or physical disabilities), operating a user agent with a pointing device may be difficult or impossible, since it requires tracking the pointer's position in a two-dimensional visual space. Keyboard operation generally makes fewer of these perceptual and motor demands than moving a pointing device to a visual target.
- Some assistive technologies that support a diversity of input and output mechanisms use keyboard APIs for communication with some user agents; see checkpoint 6.7. People who cannot or do not use a pointing device may interact with the user interface with the keyboard, through voice input, a head wand, touch screen, or other device.
While this document requires only keyboard operation for conformance, it promotes input device independence by also allowing people to claim conformance for full pointing device support or full voice support.
As a way to promote output device independence, this guideline requires support for text messages in the user interface because text may be rendered visually, as synthesized speech, or as braille.
The API requirements of guideline 6 also promote device independence by ensuring communication with other software, including assistive technologies.
Checkpoints
1.1 [P1]
- Ensure that the user can operate, through keyboard input alone, any user agent functionality available through the user interface.
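Checkpoint 1.1 can be read as an invariant over the user agent's command set: every function reachable through the user interface must also be reachable through keyboard input. The following sketch models that invariant with hypothetical names (`Command`, `keyboardGaps`); none of these identifiers come from the guideline itself.

```typescript
// Hypothetical model of a user agent command: each UI function may be exposed
// through menus, toolbars, and (per checkpoint 1.1) a keyboard binding.
interface Command {
  id: string;
  inMenu: boolean;
  keyBinding?: string; // e.g. "Ctrl+R"; undefined means keyboard-inaccessible
}

// Checkpoint 1.1 as a check: return the ids of commands that violate it.
function keyboardGaps(commands: Command[]): string[] {
  return commands
    .filter((c) => c.keyBinding === undefined)
    .map((c) => c.id);
}

const commands: Command[] = [
  { id: "reload", inMenu: true, keyBinding: "Ctrl+R" },
  { id: "zoom-in", inMenu: true, keyBinding: "Ctrl+Plus" },
  { id: "open-link-in-new-view", inMenu: true }, // violates checkpoint 1.1
];

console.log(keyboardGaps(commands)); // lists the one keyboard-inaccessible command
```

A conformance reviewer would expect `keyboardGaps` to return an empty list for a user agent that satisfies this checkpoint.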
1.2 [P1]
- Allow the user to activate, through keyboard input alone, all input device event handlers that are explicitly associated with the element designated by the content focus.
- In order to satisfy provision one of this checkpoint, the user must be able to activate as a group all event handlers of the same input device event type. For example, if there are 10 handlers associated with the onmousedown event type, the user must be able to activate the entire group of 10 through keyboard input alone, and must not be required to activate each handler separately.
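The group-activation provision of checkpoint 1.2 can be sketched as a handler registry keyed by event type, where a single keyboard action fires every handler registered for that type at once. The registry and function names below are illustrative, not part of the guideline:

```typescript
// Hypothetical registry mapping an input device event type (e.g. "mousedown")
// to all handlers explicitly associated with the focused element.
type Handler = () => void;

const handlers = new Map<string, Handler[]>();

function addHandler(eventType: string, h: Handler): void {
  const list = handlers.get(eventType) || [];
  list.push(h);
  handlers.set(eventType, list);
}

// Keyboard activation per checkpoint 1.2: one keyboard action activates the
// whole group of handlers for the chosen event type, never one at a time.
function activateFromKeyboard(eventType: string): number {
  const list = handlers.get(eventType) || [];
  list.forEach((h) => h());
  return list.length; // number of handlers activated as a group
}

let fired = 0;
for (let i = 0; i < 10; i++) {
  addHandler("mousedown", () => { fired++; });
}

activateFromKeyboard("mousedown");
console.log(fired); // all 10 handlers ran from a single keyboard action
```

The key design point is that the keyboard path dispatches to the same handler list as the pointing device path, so content authors need not register anything extra for keyboard users.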
1.3 [P1]
- Ensure that every message (e.g., prompt, alert, or notification) that is a non-text element and is part of the user agent user interface has a text equivalent.
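Checkpoint 1.3 amounts to a constraint on user interface messages: any non-text message (an icon, a sound) must carry a text equivalent that any output modality, whether graphical, synthesized speech, or braille, can consume. A minimal sketch, with hypothetical type and field names chosen for illustration:

```typescript
// Hypothetical UI message: an optional non-text cue plus its required
// text equivalent, so the message is never icon-only or sound-only.
interface UiMessage {
  kind: "prompt" | "alert" | "notification";
  icon?: string; // non-text element, e.g. a warning glyph
  textEquivalent: string; // required by checkpoint 1.3
}

// Any renderer (graphical display, speech synthesizer, braille device) can
// fall back to the text equivalent when it cannot present the non-text part.
function renderAsText(m: UiMessage): string {
  return `${m.kind}: ${m.textEquivalent}`;
}

const warning: UiMessage = {
  kind: "alert",
  icon: "⚠",
  textEquivalent: "Unsaved changes will be lost.",
};

console.log(renderAsText(warning));
```

Making `textEquivalent` a required field pushes the checkpoint into the type system: a message without one simply cannot be constructed.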