Because using the web is largely a visual and physical activity, it is commonly assumed that people with hearing impairments are less limited on the web than people with other disabilities. This assumption is incorrect: although they can perceive the visual content of the web and have the physical capabilities to navigate it, media such as audio and video pose real challenges for people with hearing impairments.
Before we explore the different ways we can accommodate people with hearing impairments on the web, we must understand that, like all disabilities, hearing impairments vary widely. Some people are totally deaf, while others have minor hearing loss. Here are some examples of hearing impairments:
- Tinnitus - a persistent ringing or buzzing sound, caused by age or by damage from loud noises.
- Conductive hearing loss - damage to the outer or middle ear that prevents sound from reaching the inner ear.
- Sensorineural hearing loss - caused by damage to the inner ear, cochlea, or auditory nerve.
- Auditory processing disorders - a variety of disorders affecting how the central nervous system interprets auditory stimuli. These can include poor language processing, difficulty remembering what was heard, and trouble distinguishing important sounds from background noise.
Here are the common tools and techniques used to support people with hearing impairments on the web:
Captions and Subtitles
Captions are text displayed on a video in the same language as the spoken audio. They are shown within the video and synchronized with the audio, allowing viewers to follow the flow of conversation and helping people who lip read to understand the video better. Captions must be clear and visible, with good contrast against the changing images and a legible font size. One way to ensure that captions are legible is to set them against a dark-colored background so the text does not blend into the images.
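As a minimal sketch, browsers let you style native WebVTT caption cues with the `::cue` pseudo-element; the values below are illustrative choices for a dark backing and legible text:

```css
/* Style WebVTT caption cues rendered by the browser's native player.
   The specific colors and size here are example values, not requirements. */
video::cue {
  background-color: rgba(0, 0, 0, 0.8); /* dark backing so text doesn't blend into the image */
  color: #ffffff;                       /* high-contrast white text */
  font-size: 1.2rem;                    /* comfortably legible size */
}
```

Note that custom video players that draw their own caption layer will need equivalent styling applied to that layer instead.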
Many video platforms, such as YouTube, can automatically generate captions, which is useful for those watching a video without built-in captions. However, auto-generated captions are not entirely reliable: speech recognition can misdetect words when the speaker lacks clarity or has an accent.
Subtitles are text for the video's audio translated into another language. They work the same way captions do and are likewise synchronized with the video. Note that some countries, such as the UK, use the term 'subtitles' to refer to both subtitles and captions.
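In HTML, both captions and subtitles are attached to a video with `<track>` elements, and the standard `kind` attribute distinguishes the two; the file names below are hypothetical:

```html
<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- Captions: same language as the spoken audio -->
  <track kind="captions" src="captions-en.vtt" srclang="en" label="English" default>
  <!-- Subtitles: spoken audio translated into another language -->
  <track kind="subtitles" src="subtitles-es.vtt" srclang="es" label="Español">
</video>
```

The `default` attribute asks the browser to enable that track when the viewer has not expressed a preference, and `label` is what appears in the player's caption menu.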
Media Player Controls
Videos also need to be controllable by the person watching. For people with limited hearing, a volume control can help them hear the audio of the video better. Clear buttons to turn captions on and off and to control the size of the captions also help people who have visual impairments alongside their hearing impairment.
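The volume and caption controls described above can be sketched as small helpers. The function names here are illustrative, but `volume` (a 0–1 value) and `TextTrack.mode` are the standard HTMLMediaElement and WebVTT properties they would drive:

```javascript
// Step the player volume up or down, clamped to the 0..1 range
// that HTMLMediaElement.volume requires.
function stepVolume(current, delta) {
  return Math.min(1, Math.max(0, current + delta));
}

// Toggle a caption track between visible and hidden.
// TextTrack.mode is 'showing' when cues are rendered.
function toggleCaptions(track) {
  track.mode = track.mode === 'showing' ? 'hidden' : 'showing';
  return track.mode;
}
```

In a real player these would be wired to clearly labeled buttons, e.g. `video.volume = stepVolume(video.volume, 0.1)` and `toggleCaptions(video.textTracks[0])`.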
Transcripts
Transcripts are detailed text versions of both speech and non-speech audio information (such as background sounds or physical reactions like laughter) that is needed to understand the content. Transcripts also indicate details such as who is speaking. Unlike captions, which are used only for video content, transcripts are used for both video and audio content, such as podcasts.
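As a hypothetical sketch, a transcript renderer can label who is speaking and bracket non-speech sounds so the two are easy to distinguish; the entry shape and function name are assumptions for illustration:

```javascript
// Illustrative sketch: entries with a `speaker` field are spoken lines;
// entries without one are non-speech sounds, shown in brackets.
function renderTranscript(entries) {
  return entries
    .map(e => (e.speaker ? `${e.speaker}: ${e.text}` : `[${e.text}]`))
    .join('\n');
}
```

For example, a spoken line followed by applause would render as two lines, with the applause bracketed so readers know it was a sound rather than speech.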
Sign Language Translations
Sign language translations are included in some videos to support people with hearing impairments. For some, especially deaf people, sign language is their first language and is therefore much easier to understand and follow. It can be included either by having a sign language interpreter sign within the scene to translate what is being said, or by embedding a video of an interpreter presenting the content in a corner of the screen that does not obstruct the main video. Sign language is also a good way to convey the audio content of live streams, where captions may be difficult to provide.
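As an illustrative sketch (the file names and sizes are hypothetical), the corner-embed approach can be as simple as absolutely positioning a second video over the main one:

```html
<!-- Main video with an interpreter video overlaid in one corner,
     sized and placed so it does not obstruct the content. -->
<div style="position: relative; width: 640px;">
  <video src="talk.mp4" controls style="width: 100%;"></video>
  <video src="talk-interpreter.mp4" muted autoplay
         style="position: absolute; right: 8px; bottom: 48px; width: 25%;"></video>
</div>
```

Keeping two separate streams in sync is not trivial, which is one reason many producers instead record the interpreter directly into the main video frame.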