New software enables blind and low-vision users to create interactive, accessible charts


A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

This creates barriers that prevent blind and low-vision users from building their own customized data representations, and it can limit their ability to explore and analyze important information.

A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.

The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between modalities to interact with the data in different ways.

The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations (something they said was sorely lacking), the users said Umwelt could facilitate communication between people who rely on different senses.

“We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I’m hopeful that Umwelt can help shift the way researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”

Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT, who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.

De-centering visualization

The researchers previously developed interactive interfaces that provide a richer experience for screen-reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

“We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.

To build Umwelt, they first considered what is unique about the way people use each sense.

For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear, since data are converted into tones that must be played back one at a time.

“If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.

They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
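The stock-price example above can be sketched in code. This is a hypothetical illustration of such default heuristics, not Umwelt's actual API: the field names (`symbol`, `date`, `price`) and the shape of each modality's spec are assumptions made for the sake of the example.

```python
# Illustrative sketch: derive default specs for three modalities from a
# tabular dataset of stock prices. Field names and spec shapes are
# hypothetical, chosen only to mirror the example in the text.

def default_specs(rows):
    """rows: list of dicts like {"symbol": "AAPL", "date": "2024-01-02", "price": 185.6}."""
    symbols = sorted({r["symbol"] for r in rows})
    prices = [r["price"] for r in rows]
    lo, hi = min(prices), max(prices)

    # Visual default: a multiseries line chart.
    visual = {"mark": "line", "x": "date", "y": "price", "series": "symbol"}

    # Textual default: a structure grouping data by ticker symbol, then date.
    textual = {
        sym: {r["date"]: r["price"] for r in rows if r["symbol"] == sym}
        for sym in symbols
    }

    # Sonification default: tone length encodes price, one tone per date,
    # sequenced by ticker symbol (sonification is inherently linear).
    def tone_ms(price, shortest=100, longest=1000):
        if hi == lo:
            return shortest
        return shortest + (price - lo) / (hi - lo) * (longest - shortest)

    audio = [
        (r["symbol"], r["date"], round(tone_ms(r["price"])))
        for r in sorted(rows, key=lambda r: (r["symbol"], r["date"]))
    ]
    return visual, textual, audio
```

The key design point the sketch captures is that all three defaults are derived from the same dataset, rather than the audio and text being translated from a finished chart.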

The default heuristics are meant to help the user get started.

“In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.

The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could use the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.
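One way to link edits across modalities is to have every view derive from a single shared specification, so that a change made in any one editor is automatically reflected in the others. The sketch below illustrates that idea under stated assumptions; the class and field names are invented for this example and do not reflect Umwelt's internals.

```python
# Hypothetical sketch of linked multimodal editing: the chart, text, and
# sonification views all read from one shared spec, so an edit made in
# any one of them updates the rest. Names here are illustrative only.

class MultimodalSpec:
    def __init__(self, group_field, value_field):
        self.group_field = group_field   # e.g. "symbol"
        self.value_field = value_field   # e.g. "price"

    def update(self, **changes):
        # Single point of edit: every modality's editor calls this method.
        for key, val in changes.items():
            setattr(self, key, val)

    # Each modality re-derives its description from the shared state.
    def text_outline(self):
        return f"rows grouped by {self.group_field}, listing {self.value_field}"

    def sonification_plan(self):
        return f"tone length encodes {self.value_field}, sequenced by {self.group_field}"


spec = MultimodalSpec("symbol", "price")
spec.update(group_field="date")  # an edit made in, say, the textual editor...
# ...is reflected in the sonification without any separate synchronization step.
```

Centralizing state this way avoids the alternative of pairwise synchronization between modalities, which would require a translation rule for every pair of views.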

Helping users communicate about data

To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen-reader users.

Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to incorporate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

“In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.

This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.
