Why I Prioritize Web Accessibility

What motivated me to start prioritizing web accessibility.

This afternoon, I was privileged enough to view a very interesting video. The talk detailed the new technologies that have come out for blind people on the web in the last few decades.

TED: Ron McCallum - How Technology Allowed Me to Read

The talk got me thinking about how, if we start our designs with accessibility in mind, we can alleviate many of the problems that come with large-scale application development.

In this article, I hope to show you some of the coolest new things we can do to make our webpages as accessible as possible. In doing so, I hope you can gain insight into improving your application development workflow.

Blind Is the Future

Although it may not seem like it, addressing blind users first in our designs allows those designs to reach a larger number of people without our having to come back and retrofit support later. The easiest example for most people to understand is Google Glass. Google Glass is a cool new technology which operates almost solely on voice recognition. We've already seen voice recognition in other Google products, so it is no surprise that Google came up with Google Glass. With Google Glass, all we need to do is speak the command "okay glass", and the device activates and prompts us to give our next verbal cue. The reason Google Glass looks like the coming technology is that people are starting to shift their focus away from their phones and back towards their own physical tasks.

The Singularity Doesn't Need Screens

Ray Kurzweil proposed the idea of the singularity in the late 1990s. Since then, he has garnered a large number of supporters by writing several books, participating in many interviews, and even making a movie. The singularity holds that humans will eventually merge with technology. This idea has since been advocated by other well-known artists and designers. For example, technologies such as Google Glass focus on interacting with technology through our daily physical activities and let us concentrate on real tasks through voice recognition. Everybody is familiar with the scene of a restaurant full of people reading their phones. Ray Kurzweil, along with several others, thinks we will soon move away from feeling trapped in a world of phones to a state in which we can focus on other people while, at the same time, keeping access to vital technologies such as Google search, Google voice recognition, and others.

Speaking Is Faster Than Typing

The problem with today's technical society is that most of us were not raised on dictation software, and we have not learned on technology geared towards users with visual impairments. Most of us grew up learning how to use a computer and how to type on lab keyboards -- some even on typewriters. It's only recently that we've been able to accurately transcribe voice recordings.

Many writers are finding that speaking actually helps to increase creativity, especially when it comes to writer's block. More and more studies today suggest that speech is a better way of conveying ideas than traditional writing or typing. Some programmers are also beginning to speculate on whether we should be coding by speech. This should come as no surprise, as the keyboard has been around for decades. Has the keyboard been superseded by speech recognition? I would argue that it hasn't, but I am optimistic about that change...

This entire article was written using Dragon NaturallySpeaking, a commercially available program that transcribes audio commands in real time. Of course, there were some issues with the transcription, and I had to come back a number of times to correct the transcripts of my audio recordings. Nonetheless, I would argue that the biggest problem with the transcription isn't the program; the problem is me. It takes some time to get used to voice transcription services. For example, when you're using Dragon NaturallySpeaking, you need to dictate exactly which punctuation marks you want, indicating where periods, semicolons, and other punctuation marks occur.

So voice transcription services have issues, but there are still many applications and ways we can use them. I believe that if we learn how to use these new transcription services well, we can better focus on our daily lives and tasks. Furthermore, by focusing on accurate voice transcription technologies, we will reach a larger diversity of readers (or listeners) who wouldn't otherwise have access to our content or media.

Where to Start

There are many places where you can start using voice recognition services. In this article, I would like to focus on the free alternatives rather than the commercial ones. I know, this article was written with Dragon NaturallySpeaking; however, I also believe that the newest services offered by Google present great opportunities for daily use.

The first service I will mention is Google's voice recognition software. To access it on your Android phone, all you need to do is open the Google Now application. Most people may not be familiar with the Google Search or Google Now applications; the service allows users to speak commands to Google and have them interpreted by Google's servers. Some people may be familiar with the microphone icon beside the search box in Google. When we click this button, we are prompted to announce our next command, and within a matter of seconds our command is interpreted by Google's servers and Google understands what we're looking for.

Another service most people will be familiar with, particularly Apple users, is Siri. Siri is the counterpart to Google's voice recognition software and does much the same things; however, it is only available on Apple products. This means that all of your voice commands are sent directly to Apple for processing, and the results are sent back to your device; Apple has the right to log everything you say. Although this article is not focused on which service is better, it should be noted that Siri is available only to Apple users on Apple products. If you are building something on top of Siri, you can only do so on an Apple product. For this reason, it may be more desirable to use free alternatives like Google's or any of the other upcoming options.

Just as Google has its own voice recognition software, there are other alternatives. Other search engines, like Bing, also have voice recognition capabilities.

It is my advice that users check out all of this new voice recognition software. By doing so, we can see that voice recognition is drastically improving, and hopefully we will soon be able to use it in our daily lives.

Making Websites Accessible

Understanding the ARIA Role

As you may note in table 1, ARIA is defined by roles. We need roles to indicate to the blind user how the elements of a page relate to one another. Imagine a DOM without a clear visual indication of how elements are combined or used together. Looking at the DOM tree may provide some indication of how elements interact, but a lot of times it is unclear whether a given section is tangential (side information) or part of the main content itself.
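As an illustrative sketch (the class name and text here are placeholders of my own), a role attribute is what turns an anonymous container into something a screen reader can announce meaningfully:

```html
<!-- Without a role, a screen reader sees only a generic container -->
<div class="sidebar">Related links and side notes.</div>

<!-- With a role, the same container is announced as complementary
     (side) content, distinct from the main content -->
<div class="sidebar" role="complementary">Related links and side notes.</div>
```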

Landmark Elements

Roles alone are not enough to define how elements interact with the user. For this reason, we can define landmark elements. Landmark elements assist the blind user in identifying which content areas of the document are most significant. By identifying and sectioning out the more important parts of an article or document, we spare the user from "tabbing" through every element in the DOM tree, even when the markup is full of unnecessary <div>s (though this is not a good reason to nest them). Table 1 identifies the sectioning landmark roles which WAI-ARIA provides:

| Document Element | ARIA Role Attribute | Description |
| --- | --- | --- |
| <body> | document, application | (document) A region containing related information that is declared as document content, or (application) a region declared as a web application. |
| <article>, <main> | main | The main content of a document. |
| <header> | banner | A region that contains mostly site-oriented content, rather than page-specific content. |
| <button>, <a class="btn"> (Bootstrap) | button | An input that allows for user-triggered actions when clicked or pressed. |
| <aside> | complementary | A supporting section of the document, designed to be complementary to the main content at a similar level in the DOM hierarchy, but which remains meaningful when separated from the main content. |
| <aside> (with HTML5 metadata), <footer> | contentinfo | A large, perceivable region that contains information about the parent document. |
| <form> | form | A landmark region that contains a collection of items and objects that, as a whole, combine to create a form. |
| <nav> | navigation | A collection of navigational elements (usually links) for navigating the document or related documents. |
| <form> | search | A form or region containing inputs to perform searching on a given page. |
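Putting the landmark roles from table 1 together, a minimal page skeleton might look like the following sketch (the text and link targets are placeholders of my own):

```html
<body role="document">
  <header role="banner">Site title and logo</header>
  <nav role="navigation">
    <a href="#">Home</a>
    <a href="#">Articles</a>
  </nav>
  <article role="main">The main content of the page.</article>
  <aside role="complementary">Related reading.</aside>
  <form role="search">
    <label for="q">Search</label>
    <input type="search" id="q" name="q">
  </form>
  <footer role="contentinfo">Copyright and author information.</footer>
</body>
```

A screen reader can then jump directly between these landmarks instead of walking the whole DOM tree.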

Remembering to Do ARIA

On first encountering the WAI-ARIA specification, you may find it difficult to remember all of the distinctions and variations between the attributes (roles) and their usual corresponding HTML elements. However, with a modern IDE, it's very easy to ease the development and implementation of WAI-ARIA.

The easiest way to remember these differentiations is by using tools in your IDE such as live templates, snippets, and so forth. Figures 2, 3, and 4 outline examples of Sublime Text snippets:

Figure 2: Sublime Text Snippet for document headers

<header role = "banner">${1:content}</header>

Figure 3: Sublime Text Snippet for document main article

<article role = "main">${1:content}</article>

Figure 4: Sublime Text Snippet for document main navigation

<nav role = "navigation">${1:content}</nav>
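For context, the snippet bodies in figures 2 through 4 live inside Sublime Text's snippet file format. A complete file for the banner snippet might look like this sketch (the tabTrigger, scope, and description values are my own choices):

```xml
<!-- banner.sublime-snippet -->
<snippet>
  <!-- The body inserted when the snippet fires; $1 is the first tab stop -->
  <content><![CDATA[
<header role="banner">${1:content}</header>
]]></content>
  <!-- Type "banner" and press Tab to expand the snippet -->
  <tabTrigger>banner</tabTrigger>
  <!-- Only expand inside HTML files -->
  <scope>text.html</scope>
  <description>Header with ARIA banner role</description>
</snippet>
```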

For the more visually inclined, a better way to understand ARIA roles is by looking at the W3C candidate recommendation for ARIA. Figure 5 below describes some of the relationships between the ARIA roles:

Figure 5: ARIA Roles Visual Representation

Right away, we can see that the role type is one of the primary ways of indicating to the user how child elements interact with parent elements. One of the most important distinctions we can make about an element is how, or whether, we can provide input to it. Those of us familiar with JavaScript-heavy applications know that it is very common to use <div> elements as substitutes for native widgets (whose internals live in the shadow DOM) in non-supporting browsers. For example, we may use a <div> in place of the input slider (<input type="range">), which is not natively available in some browsers. In that case, it is not clear at all to the browser that this element is indeed a slider being emulated for a non-supporting browser.

(Non-accessible slider)

<input type = "range"
    min = "18"
    max = "100"
    placeholder = "18"
    title = "Please specify your age"
    required>

Compared with this:

(Accessible Input Slider)

<label id = "age-label" for = "age-chooser">Choose an age</label>
<input type = "range"
    id = "age-chooser"
    name = "user-age"
    min = "18"
    max = "100"
    placeholder = "18"
    title = "Please specify your age"
    required
    role = "slider"
    aria-valuemin = "18"
    aria-valuemax = "100"
    aria-valuenow = "42"
    aria-valuetext = "42 years"
    aria-labelledby = "age-label">

The first thing to notice is the presence of the <label>. Labels are a crucial part of making your interfaces accessible, and they are not just pieces of text: by clicking on a label, the user focuses the input element that the label refers to. In some cases, it may be tempting to remove the label for visual simplicity; in those cases, you can instead hide the label from sighted users with a helper class (for example, .visuallyhidden, as in the HTML5 Boilerplate).
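For reference, a helper class along those lines keeps the label available to screen readers while removing it from the visual layout. This sketch is modeled on the HTML5 Boilerplate's .visuallyhidden rule:

```html
<style>
  /* Visually hidden, but still announced by screen readers:
     the element stays in the accessibility tree because it is
     not display:none, merely clipped down to a 1px box. */
  .visuallyhidden {
    position: absolute;
    width: 1px;
    height: 1px;
    margin: -1px;
    padding: 0;
    border: 0;
    overflow: hidden;
    clip: rect(0 0 0 0);
  }
</style>

<label for="age-chooser" class="visuallyhidden">Choose an age</label>
<input type="range" id="age-chooser" min="18" max="100">
```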

Another thing to note is the difference between HTML5 inputs and ARIA roles: notice the difference between <input type="range"> and role = "slider".

By using ARIA roles, we allow the browser and assistive technologies to interpret and understand that this element is indeed interactive.
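As a sketch of that idea, here is what a <div> standing in for a range input on a non-supporting browser might look like. The role and aria-value* attributes tell assistive technology what the element is; the script that moves the handle and updates aria-valuenow is omitted:

```html
<!-- Emulated slider: without role="slider", this is just an empty box
     to a screen reader. tabindex="0" makes it keyboard-focusable. -->
<div role = "slider"
    tabindex = "0"
    aria-valuemin = "18"
    aria-valuemax = "100"
    aria-valuenow = "42"
    aria-label = "Please specify your age">
</div>
```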

The above were just examples.

More information can be found in Mozilla's ARIA documentation and tutorials.

Screen Readers Want Less, and That Is More

Imagine having to view an entire website or article based solely on the DOM structure: the classes, the IDs, and the attributes. Just as you can "tab" through the elements with the tab key, imagine having to sort through all the elements one by one to determine where the information you wanted was. Depending on the situation, you might find yourself very frustrated, wondering why the developer didn't just put the content front and centre, without nesting so many elements in the markup.
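One common pattern that addresses exactly this frustration (a general technique, not something from the talk above) is a "skip to main content" link placed as the first focusable element in the page:

```html
<body>
  <!-- First tab stop: lets keyboard and screen-reader users
       jump straight past the navigation to the content -->
  <a href="#main-content">Skip to main content</a>
  <nav role="navigation">
    <!-- dozens of site-wide links a user would otherwise tab through -->
  </nav>
  <main id="main-content" role="main">
    The content the reader actually came for.
  </main>
</body>
```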

Accessible websites rely on a simple DOM structure. Though it may not seem like it, given the tree depicted above in figure 5, keeping your document structure simple is akin to using headings in the right places in a write-up. Give the write-up too many subheaders with content beneath them, and your reader loses track of where he or she was. A great example of an inaccessible website may well have been Facebook: in the early days of Facebook (and still now), inspecting the DOM tree reveals many unnecessary `<div>` elements.