Accepted Posters and Demos

 

Poster Session I: Monday October 28, 2019 at 10:00 and 15:05

The following posters will be presented during Poster Session I on both the morning (10:00) and afternoon (15:05) of Monday October 28, 2019. In addition, all Student Research Competition posters will be on display during these poster sessions.

 

ACM DL

Transcript of Audio File

Escape rooms are popular recreational activities, but there is little awareness of the accessibility of these experiences. We have explored this area by means of an online questionnaire. We have found that technology is implemented throughout the experience but that there is little motivation on behalf of escape room designers to implement this in an accessible way.

Alt Text of Poster Images

1. Pie chart for the question “Is technology important in Escape Rooms?”. 76% of respondents answered yes and 24% answered no.

ACM DL

Transcript of Audio File

Dueto: Accessible, Gaze-Operated Musical Expression

Using gaze as input poses challenges for interactions that require visual planning, like playing a digital instrument.

We explore how multimodality can support eye-controlled musical expression by designing different multi-modal gaze interactions around a digital instrument we call Dueto.

Dueto can be played with eye-gaze alone, gaze and switch, or gaze and multi-touch, where multi-touch input is carried out by a partner.

Alt Text of Poster Images

Alt text description Dueto Poster

The poster shows Dueto’s user interface. On the upper part, Dueto has the following buttons in order from left to right: settings for modality selection, scroll to navigate the piano keyboard, and two buttons to control the type of gaze point visualization called “show cursor” and “show vector button”. Below this row of buttons, we find Dueto’s scrollable piano keyboard. At the bottom of the interface we find the harmony ladder as well as the record, play, and repeat buttons that enable the looper, and the Chords and key selection buttons that enable turning chords on and changing the root key for the harmony ladder.

In the middle of the poster, there are 3 figures, each representing one of the 3 modalities that Dueto allows. For the gaze-only modality there is an image of the harmony ladder. For gaze + switch there is an image of the Xbox adaptive controller. Lastly, for the gaze + partner mode there is an image showing a drawing of two individuals facing each other and a musical note between them.

Eye Gesture Shapes Diagram shows the shape of the eye gestures needed to create minor and major triad chords. The eye gestures follow a triangular pattern with 3 fixation points. The first step is to look at the root key and then look up and down to complete a mountain shape gesture to play a minor chord or look down and then up to complete a V-shape gesture that will play a major chord. The major chord example is labelled with the notes C-E-G.
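As an illustration of the mapping described above, here is a small, hypothetical Python sketch (not part of Dueto itself) that classifies a three-fixation gesture as a mountain (minor) or V (major) shape and returns the corresponding triad. The screen-coordinate convention and MIDI encoding are assumptions.

```python
# Hypothetical sketch: classify a three-fixation eye gesture as a major or
# minor chord, following the pattern described in the poster (not Dueto's
# actual implementation). Screen coordinates assume y grows downward.

def classify_gesture(fixations):
    """fixations: list of three (x, y) points, starting at the root key."""
    if len(fixations) != 3:
        raise ValueError("expected exactly three fixation points")
    (_, y_root), (_, y_mid), _ = fixations
    # Middle fixation above the root -> "mountain" shape -> minor chord.
    # Middle fixation below the root -> "V" shape -> major chord.
    return "minor" if y_mid < y_root else "major"


def triad(root_midi, quality):
    """Return the three MIDI notes of a triad, e.g. C major = C-E-G."""
    third = 3 if quality == "minor" else 4   # minor vs. major third
    return [root_midi, root_midi + third, root_midi + 7]


if __name__ == "__main__":
    gesture = [(100, 300), (160, 220), (220, 300)]  # up then down: mountain
    quality = classify_gesture(gesture)
    print(quality, triad(60, quality))  # 60 = middle C
```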

ACM DL

Transcript of Audio File

Title:

Teacher Perspectives on Math E-Learning Tools for Students with Specific Learning Disabilities

Authors and Affiliations:

Zikai Alex Wen, Anjelika Lynne S. Amog, and Shiri Azenkot, from Cornell Tech and Cornell University.

Katherine Garnett from Hunter College, CUNY

Content:

Many students with specific learning disabilities, SLDs, struggle in learning math. Students with SLDs need special pedagogies to gain math proficiency, but few math e-learning tools are designed for them. Therefore, we interviewed fifth-to-eighth grade math teachers for students with SLDs to study two research questions: (1) what are the challenges faced by students with SLDs in a math class and (2) whether existing math e-learning tools help them overcome the challenges. We found that existing e-learning tools do not work for these students and we present our findings with the help of figures.

Alt Text of Poster Images

This poster has six figures. Three figures illustrate what math work feels like for students with SLDs. Students with dyslexia can see digits upside-down. Students with dyscalculia are frequently uncertain about the meaning of math symbols. Students with dysgraphia might forget how to write math symbols. We used a screenshot of Khan Academy to highlight math e-learning tool features that are reported to be difficult for students with SLDs. Khan Academy is an example of the online math e-learning tools used by our participants. Our participants reported that (1) these tools can’t help students visualize math concepts because they are all text-based; (2) students have trouble typing math work; and (3) these tools do not report to our participants when and why students get stuck. We showed two pictures of manipulatives that our participants preferred to use for students with SLDs: (1) a number line that can visualize whole-number addition and (2) different sizes of fraction tiles to help students compare fractions.

Poster-23 Computational Thinking as Play: Experiences of Children who are Blind or Low Vision in India (ACM DL)
Gesu India, Geetha Ramakrishna, Jyoti Bisht and Manohar Swaminathan

ACM DL

Transcript of Audio File

Addressing the Stigma of Epilepsy in Saudi Arabia for Co-Design

People with epilepsy in Saudi Arabia confront prejudice against their disease, which results in secrecy, misunderstandings, and social exclusion. While there is significant merit in adopting current technologies for individuals with epilepsy and their caregivers to monitor seizure patterns and notify caregivers of epileptic episodes, little effort has been made to address the user requirements of such technologies in relation to stigma-related concerns.

An explorative study was carried out with 10 participants: 5 participants with epilepsy and their primary family caregivers. The aim of this study was to investigate the potential use of co-design to address the design implications of the social stigma of epilepsy. This is particularly of interest when designing wearables, where the interaction between culture, religion, and fashion is often overlooked.

The preliminary findings from the data generated from the focus groups include the following. Participants and their caregivers all encountered issues with the stigma of epilepsy. Caregivers reported on a situation where they were advised to consider spiritual rituals and religious healing. Participants also shared that discretion is often key and at times necessary for their social survival. Despite the issues reported under stigma and discretion, all participants reported a rapid pace of change in the community regarding epilepsy. The majority of participants and their caregivers monitored seizures by utilising pen and paper or note applications on their smart devices. Two participants used the Embrace watch to detect and monitor seizures for a short period of time. Both complained about its visibility and would often avoid wearing it, as it was seen as a label of illness.

The focus group findings highlighted several considerations that need to be undertaken to garner maximum benefit from co-design sessions within the conservative community. This includes gender considerations, timing, incentives, and sensitivity training.

ACM DL

Transcript of Audio File

At the University of Dundee, we are addressing the challenge of educating developers who will go on to create accessible software. We have developed a four-year degree that engages our students on a supported pathway of exploration, empathy and understanding through engagement with difficult-to-reach groups.

ACM DL

Transcript of Audio File

Title. Tactiled: Towards More and Better Tactile Graphics Using Machine Learning

Authors: Ricardo Gonzalez Penuela, Carlos Gonzalez and John Guerra-Gomez from Universidad de los Andes and Northeastern University.

In this poster we present Tactiled, a tool designed to identify high-quality images that can be transformed into tactile graphics.

Tactile graphics are the main way for people with visual disabilities to access visual concepts.

Through our literature review, we found that one of the biggest barriers to using more tactile graphics is that they are time-consuming to make and thus not widely available.

In order to make tactile graphics more available, we designed a system that lets users evaluate standard images to check if they can be transformed, through standard algorithms, into a reliable version of a tactile graphic.

Tactiled is a system formed by:

(1) A machine learning model trained with 800 images collected from the American Printing House Tactile Library and the researchers; and

(2) a web application that lets teachers of the visually impaired retrain the model by feeding it new images and helping with the classification.

The ML model determines whether or not an image can be transformed into a reliable version of a tactile graphic; this is why we use the APH tactile library.

In the web app we leverage user-provided information by retraining the ML model in a collaborative manner: teachers of the visually impaired identify and provide images that make the ML model’s classification more reliable.

In the future, we will work on improving our ML model by feeding it a bigger database of images and improving the platform usability.

Currently, the web platform only provides images of simple concepts (animals).

We plan on creating multiple models to simplify and improve the classification accuracy, by making models specialized on certain concepts (e.g., animals, buildings, maps, objects).
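As a rough sketch of the kind of pipeline described above (start from a pre-existing model and retrain it as teachers contribute labelled images), the following hypothetical Python/Keras snippet fine-tunes an ImageNet model as a binary “suitable for tactile graphics” classifier. The dataset layout, model choice, and hyperparameters are assumptions, not Tactiled’s actual implementation.

```python
# Hypothetical sketch of a binary "suitable for tactile graphics" classifier
# built by fine-tuning a pre-existing ImageNet model (not Tactiled's actual code).
# Assumes a directory `data/` with subfolders `suitable/` and `unsuitable/`.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained features, train only the head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # suitability score in [0, 1]
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# When teachers of the visually impaired contribute newly labelled images,
# the same fit() call can be repeated on the enlarged dataset to retrain.
```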

Alt Text of Poster Images

Figure 1

Comparative grid with the title "Example of scores for different images".

On the left side there are four images showing simple pencil drawings with few details and well-defined lines. Two of them are white with black borders, the other two are entirely black.

On the right side there are four images showing either colored drawings or drawings with confusing angles and separated lines.

Figure 2

Graph composed of 6 bullet points showing the 6 main steps of the Tactiled process, each one with an image and an explanation of the step:

1) Image: Logo of the TensorFlow ML library. Text: "Use a pre-existing Machine Learning model to classify images"

2) Image: Magnifying glass with a question mark in it. Text: "Find good and bad images for Tactile Graphics transformation."

3) Image: a red dot cornered by arrows pointing at it. Text: "Insert the new categories in the Machine Learning model and re-train it."

4) Image: a colored bar chart. Text: "Load model on browser."

5) Image: a text bubble. Text: "Show Tactile Graph score of any image the user inputs"

6) Image: three bullet points with different tags, implying categorization. Text: "Provide a proper image of the desired class if the user is looking for TG's"

ACM DL

Transcript of Audio File

Title: Design and Analysis of Interoperable Data Logs for Augmentative Communication Practice

Clinicians use a variety of data collection methods to capture the language use and communication performance of clients with complex communication needs. One solution for collecting and analyzing augmentative and alternative communication (AAC) data is automatic data logging from high-tech AAC devices. However, there is no interoperable method to analyze data logs across various speech-generating devices (SGDs). To address this, our work presents an interoperable data log format and a parser as a prototype solution. The prototype was used successfully to analyze two common AAC data log formats, and can easily be extended to other formats. This approach has significant potential to improve AAC outcome measurement for individual users, as well as comparison of outcomes across multiple users and devices.
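The poster does not spell out the schema, but the variables summarized in Table 1 (described below under the poster images) suggest a simple common record. The following hypothetical Python sketch shows one way such an interoperable entry and a minimal parser could look; the field and column names are assumptions, not the authors’ format.

```python
# Hypothetical sketch of an interoperable AAC data-log record and a minimal
# parser, based on the variables listed in Table 1 (not the authors' format).
import csv
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class LogEntry:
    time: datetime                # timestamp of the entry
    text_output: str              # text produced by the entry, if any
    access: str                   # access method, e.g. touchscreen, eye tracker
    source: str                   # source, e.g. word list, page, command
    keyboard_function: str        # action performed, e.g. enter, select, delete
    coordinates: Optional[tuple]  # (x, y) position of the entry, if logged
    position_label: str           # label of the position
    page_label: str               # name of the page


def parse_log(path):
    """Read a CSV export with one column per Table 1 variable (assumed layout)."""
    entries = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            xy = row.get("x"), row.get("y")
            entries.append(LogEntry(
                time=datetime.fromisoformat(row["time"]),
                text_output=row.get("text_output", ""),
                access=row.get("access", ""),
                source=row.get("source", ""),
                keyboard_function=row.get("keyboard_function", ""),
                coordinates=xy if all(xy) else None,
                position_label=row.get("position_label", ""),
                page_label=row.get("page_label", ""),
            ))
    return entries
```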

Alt Text of Poster Images

Table 1 summarizes the data log formats included in the review. The variables include 1) Time: timestamp with support for varying resolution. A timestamp is a digital record of the time of occurrence of an entry. 2) Text output: any text output associated with an entry. 3) Access: type of access device used to produce entries. For example, touchscreen, mouse, joystick, eye tracker, etc. 4) Source: type of source that produces an entry. For example, word lists, pages, commands, language representation methods, etc. 5) Keyboard functions: type of actions that users performed, such as enter, select, and delete. 6) Coordinates: coordinate position of an entry. 7) Position label: label of the position. 8) Page label: name of the page.

Figure 1 shows log file examples of the two formats we worked on in this project. The examples illustrate how the two formats differ.

Figure 2 shows the proposed analysis interface with the option to enter inputs for different fields (such as File ID, User, Date and Task) if statistics pertaining to specific conditions need to be returned. By default, no filters are used on the data, and results are shown for all records in the database.

Figure 3 shows an example of the displayed analysis results which are returned as statistics (number of words, number of unique words, communication rate etc.) both at an aggregated level and per file, given a certain set of filters.

ACM DL

Transcript of Audio File

Our project involves the development of a tactile code skimming tool designed to help blind and visually impaired programmers skim code faster. While sighted programmers can skim new code by observing indentation, the same task can be time-consuming for blind and visually impaired programmers using a screen reader.

Based on demo feedback on our prototype, we found that such a device should provide more context about control structure, should support scaling and shifting for deeply indented code, should be editor agnostic, and could also function as an indentation editor.

Alt Text of Poster Images

Figure 1: Top Left. Tactile Code Skimming Device hardware with labels pointing out navigation buttons and sliding tactors.

Figure 2: Top Right. Sample code snippet with labels pointing out the cursor, the first line displayed on the tactile skimming device, and the last line displayed.

Figure 3: Middle Right. A participant interacting with the tactile code skimming device.

ACM DL

Transcript of Audio File

Towards a Standardized Grammar for Navigation Systems for Persons with Visual Impairments

We address the problem that pedestrian navigation systems are rarely accessible or suit the needs of persons with visual impairments. They usually lack a standardized grammar for their speech instructions, forcing users to learn new types of instructions for each new system.

Thus, our goal is to create a standardized, localized (German) grammar for a GPS-based navigation system for people with visual impairments that conveys all necessary information about, e.g., turns, crossings, and objects of interest in an appropriate manner.

Our system consists of a smartphone app, a backpack with a high-performance laptop, a ZED Mini camera, and bone conduction headphones. The system recognizes paths, obstacles, traffic lights, road crossings, interesting landmarks, buildings and objects.

We evaluated our grammar in a qualitative user study with 15 participants with visual impairments, who walked three city routes with our grammar implemented in a mobile navigation system, with and without the CV system. The overall usefulness of the system was rated 1.93 on a scale from 1 (very useful) to 5 (very useless). Participants responded positively to the compact and concise grammar, the few error sources, and the clear and loud instructions. They also liked the customization options, which are necessary because participants’ needs differ.

Alt Text of Poster Images

Figure 1: An overview of the complete system: a smartphone, backpack with the laptop inside, ZED mini camera and bone conduction headphones

Figure 2: A participant wearing the test system and a screenshot from the routing screen of the app.

Figure 3: Two images (cars and bicycles outdoors) showing the depth determination of the camera. Lighter tones (yellow) are closer, darker tones (dark blue) are further away.

Figure 4: Bar chart showing the results of the NASA-TLX score for the navigation task only, with the computer vision system, and the overall score for both. The mental demand was especially high for both systems, around 7.5 and 8. The other scores are around 3-5 for both systems; no significant differences were found. The overall score for both systems is around 4.8.

Figure 5: QR Code from website

ACM DL

Transcript of Audio File

There are currently few options for seamless indoor to outdoor navigation and vice-versa for people who are blind or visually impaired. Thus, there is a great need to provide a low-cost, easy to use, and reliable wayfinding system to serve them.

This work presents the CityGuide wayfinding system and smartphone application that can be used by blind and low vision individuals to navigate their surroundings beyond what is possible with just a GPS-based system. CityGuide enables an individual to query and get turn-by-turn shortest route directions from an indoor location to an outdoor location (and vice-versa).

CityGuide utilizes Bluetooth Low Energy beacons in indoor environments to understand a user's location and how they should navigate. Similarly, it uses GPS in outdoor locations. The CityGuide application integrates both technologies to provide a seamless handover between them, allowing an individual to navigate without having to switch applications.
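A minimal, hypothetical sketch of the handover logic this implies (not CityGuide’s actual code): prefer beacon-based positioning whenever a known beacon is heard strongly enough, and otherwise fall back to GPS. The beacon map, threshold, and coordinates below are illustrative assumptions.

```python
# Hypothetical sketch of indoor/outdoor handover (not CityGuide's actual code).
# Assumes beacon scans give (beacon_id, rssi) pairs and a GPS fix is available.

RSSI_THRESHOLD = -80  # dBm; assumed cut-off for "we are near a known beacon"

KNOWN_BEACONS = {
    "jabara-2f-stairs": (37.7190, -97.2930),   # example coordinates, assumed
    "jabara-1f-exit": (37.7188, -97.2931),
}


def current_position(beacon_scan, gps_fix):
    """Return (mode, position) using beacons indoors and GPS outdoors."""
    nearby = [(rssi, bid) for bid, rssi in beacon_scan
              if bid in KNOWN_BEACONS and rssi >= RSSI_THRESHOLD]
    if nearby:
        _, best = max(nearby)              # strongest known beacon wins
        return "indoor", KNOWN_BEACONS[best]
    return "outdoor", gps_fix              # seamless fallback to GPS


# Example: a strong reading from a mapped beacon keeps the user in indoor mode.
print(current_position([("jabara-2f-stairs", -65)], (37.7185, -97.2940)))
```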

Preliminary test results with six visually impaired individuals show CityGuide to be reasonably effective and convenient with most users finding it much more useful and effective than commodity applications built for people with vision impairments.

Alt Text of Poster Images

Figure 1a shows the graph representation of two floors of Jabara Hall. It shows the location of the stairs and the elevator as well as the exits with ramps. It also presents the shortest path within the building, drawn as a green line, and illustrates the shortest path from the exit of Jabara Hall to the destination building (Wallace Hall), as well as the locations of beacons installed outside around other buildings.

Figures 2a and 2b show navigation time and distance from the starting location inside a building to the destination in the outdoor environment using CityGuide and other apps. These figures clearly illustrate that almost all 6 subjects reached their destination in the shortest amount of time with CityGuide. They also show that all users took fewer steps to reach their destination when they used CityGuide versus other apps. Figure 2a also shows that it took about the same amount of time or more to exit the building when they did not use CityGuide. According to Figure 2a, outdoor navigation times for users A and B were more than 20 minutes when they used the other applications, while these decreased to almost 7 minutes for user A and 5 minutes for user B when they used CityGuide. It took almost 15 minutes for user D, more than 7 minutes for user E, and more than 12 minutes for user F to reach the destination using other applications. Navigation times for users C, D, E, and F using CityGuide were around 8, 6, 9, and 6 minutes. Setting aside user A, who could not reach the destination, and user C, who did not do outdoor navigation without CityGuide, navigation distances without CityGuide for users B, D, E, and F were 700, 750, 500, and 750 steps. These numbers decreased to around 500 steps for users A and C, around 400 steps for users B, E, and F, and around 350 steps for user D when they used CityGuide.

Table 1 provides information about participants’ level of visual impairments.

Poster-51 Simulation of Motor Impairment with "Reverse Angle Mouse" in Head-Controlled Pointer Fitts’ Law Task (ACM DL)
Mariah Papy, Duncan Calder, Ngu Dang, Adian McLaughlin, Breanna Desrochers and John Magee

ACM DL

Transcript of Audio File

Title: Dynamic Sensor Orientation Unit for the Intelligent Mobility Cane

Authors: Manu Suresh, Jagannadh Pariti, and Tae Oh

Affiliation: Golisano College of Computing and Information Sciences at Rochester Institute of Technology

Smart cane prototypes use fixed sensors and tactile vibration feedback to inform cane users of upcoming obstacles. However, users are forced to change their preferred grasping techniques. Any change in the grasping technique affects the angle of the cane and the obstacle detection accuracy.

The study introduces a Dynamic Sensor Orientation Unit (DSOU) that is mounted on a walking cane prototype called the Intelligent Mobility Cane (IMC). The DSOU maintains an extended ground-level obstacle detection range irrespective of the user’s grasping technique and the angle the cane makes with the ground.

11 participants were recruited from the Association for the Blind and Visually Impaired (ABVI) in Rochester, NY to provide their feedback on the DSOU. 3 of the 11 participants were people who were visually impaired. The remaining 8 participants were people who were blind.

The study discovered that a single user will not only hold the cane differently but will also hold it at different locations on the handle. However, the DSOU maintains the obstacle detection range from the cane to notify users of upcoming obstacles regardless of their grasping technique. Future studies include evaluating the DSOU for overhead and side drop-off detection and improving its performance.

Alt Text of Poster Images

Image 1: The Dynamic Sensor Orientation Unit mounted on the Intelligent Mobility Cane (IMC) which consists of an ultrasonic sensor, servo motor, microcontrollers and 9-Degrees of Freedom (9DOF) sensor. The microcontroller reads the angle of the cane from the 9DOF sensor and determines the adjusted angle for the ultrasonic sensor. The microcontroller instructs the servo motor to rotate the ultrasonic sensor to the adjusted angle.

Image 2: The Intelligent Mobility Cane with an ultrasonic sensor mounted 12 inches from the tip of the cane. The initial angle calculated from the 9DOF sensor and the adjusted angle determined by the microcontroller are displayed. The adjusted angle ensures a 36-inch detection distance from the tip of the cane is maintained.
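The poster does not give the exact geometry, but as an illustration, the following hypothetical Python sketch computes one plausible adjusted beam angle from the cane angle so that the beam keeps reaching the ground 36 inches ahead of the tip, with the sensor mounted 12 inches up the shaft. It assumes the tip rests on the ground and is not the authors’ firmware.

```python
# Hypothetical geometry sketch (not the authors' firmware): keep the ultrasonic
# beam aimed at a point on the ground 36 inches ahead of the cane tip, with the
# sensor mounted 12 inches up the shaft and the tip assumed to rest on the ground.
import math

SENSOR_OFFSET_IN = 12.0   # sensor distance from the tip, along the shaft
TARGET_RANGE_IN = 36.0    # desired detection distance ahead of the tip


def adjusted_servo_angle(cane_angle_deg):
    """Return the beam angle relative to the shaft, given the cane-ground angle."""
    theta = math.radians(cane_angle_deg)
    # Sensor position relative to the tip (forward = +x, up = +y).
    sensor_x = -SENSOR_OFFSET_IN * math.cos(theta)
    sensor_y = SENSOR_OFFSET_IN * math.sin(theta)
    # Depression angle from the sensor down to the ground point 36 in ahead of the tip.
    beam_down = math.atan2(sensor_y, TARGET_RANGE_IN - sensor_x)
    # The shaft itself points down toward the tip at the cane angle; the servo
    # rotates the sensor by the difference.
    return math.degrees(theta - beam_down)


for angle in (40, 50, 60):  # example grasp-dependent cane angles (assumed)
    print(angle, round(adjusted_servo_angle(angle), 1))
```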

Graph 1: Shows the variation in the cane angle for each participant that was recorded while the participants were navigating the obstacle path.

Table 1: Shows the mean cane angle recorded in degrees for each participant. The mean is the sum of all cane angles recorded for 1 participant divided by the total number of cane angles recorded for the same participant. The standard deviation in degrees for each participant is the minimum and maximum variations from the mean. Each participant’s height and grasping technique is also mentioned.

ACM DL

Transcript of Audio File

Twitter A11y is a browser extension that makes images on Twitter accessible.

Currently only 0.1% of images on Twitter have alt text.

Twitter A11y adds alt text to images on Twitter using two components: the extension and the server.

As the user browses their Twitter timeline, the Twitter A11y browser extension scrapes URLs of images without alt text and sends them to the server.

Once the server receives a tweet image, the server classifies what type of image it is in order to choose the best way to create alt text.

If the image came from a news article, we use a technique called URL following, which searches the original news page for visually similar images with alt text.

If the image is classified as text, then we put the image through Google Cloud Vision’s optical character recognition.

Finally, if the image is not a news article or primarily text, then we obtain alt text using Amazon Mechanical Turk.

After the server receives alt text, the server sends the new alt text to the extension and Twitter A11y updates the image tag HTML with the generated alt text.
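A simplified, hypothetical sketch of the server-side dispatch described above (and in the system diagram below); the helper functions are placeholders rather than the actual Twitter A11y code.

```python
# Hypothetical sketch of the alt-text dispatch (placeholder helpers, not the
# actual Twitter A11y server code).

TEXT_CONFIDENCE_THRESHOLD = 0.8  # threshold from the system diagram


def generate_alt_text(image_url, article_url=None):
    if article_url is not None:
        # Image came from a news article: URL following searches the article
        # page for a visually similar image that already has alt text.
        return follow_article_url(article_url, image_url)
    if text_likelihood(image_url) > TEXT_CONFIDENCE_THRESHOLD:
        return run_ocr(image_url)               # mostly-text image -> OCR
    return crowdsource_description(image_url)   # otherwise ask crowd workers


# Placeholder helpers (assumptions): in the real system these would call the
# URL-following routine, Google Cloud Vision OCR, and Amazon Mechanical Turk.
def follow_article_url(article_url, image_url): return "Alt text from article"
def text_likelihood(image_url): return 0.0
def run_ocr(image_url): return "OCR result"
def crowdsource_description(image_url): return "Crowd-written description"
```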

Alt Text of Poster Images

System diagram: The Twitter A11y system contains two components: the Chrome extension and the backend server. Two Chrome extension diagram blocks are connected by an arrow from the first to the second. The first block reads: “Scrolling triggers image loading” and the second reads: “Send POST request with image URLs for classification”. There is an arrow from the second extension block to the next backend server block that reads: “Decide which classification method to use”. From this block, the diagram branches into three options based on if statements. The first statement reads: “IF external URL article”. If this is true, the next block in the diagram is a backend block that reads: “Follow article with URL preview”. The next if statement reads: “IF text labeling > 0.8 confidence”. If this is true, the next block in the diagram is a backend block that reads: “OCR- Google Cloud Vision”. The last statement reads “ELSE”; if this is true, the next block in the diagram is a backend block that reads: “Crowdsourcing on Amazon Mechanical Turk”. Following any of these three options, the next block in the diagram is an extension block that says “Update frontend page HTML with generated alt text”.

Evaluation: The evaluation graph is titled “Alt Text Quality by Method Type”. On the x-axis are the three method types: URL following, OCR, and Crowdsourcing. The y-axis is labeled “Images with new Alt Text”. The legend at the bottom of the graph has two labels, high quality and low quality. For each method type, we evaluated the number of images with quality alt text. URL following produced 12 high-quality and 70 low-quality alt texts. OCR produced 156 high-quality and 37 low-quality alt texts. Crowdsourcing produced 69 high-quality and 31 low-quality alt texts.

Right figure: The figure on the right is a screenshot of a Twitter timeline with four tweets on it. The first tweet image has the caption “this is my favorite page” and the tweet image shows a screenshot of an e-book. Overlaid on the image is a translucent black box that says “Image”. The next tweet on the timeline says, “why is it raining so much?? :(“. The next tweet image has the caption, “Me on the way to free lunch” and the image on the tweet is a chubby white cat running in a green field. The image also has a translucent black box that says “Image”. The last tweet is a link to a news article from the New Yorker about the U.S. Women’s soccer team winning the 2019 World Cup. The news article preview has a picture of the women’s soccer team excitedly holding a trophy. This image also has a translucent black box that says “Image”.

ACM DL

Transcript of Audio File

In this poster, we describe the approach used at the University of Maryland College Park to introduce accessibility content into the undergraduate “User-Centered Design” (Fall 2018) course by including accessibility concepts into the four currently taught modules i.e., 1) web design, 2) understanding user needs, 3) prototyping, and 4) evaluation.

We added accessible web design, i.e., the Web Content Accessibility Guidelines (WCAG), to the web design module. In the understanding user needs module, we described contextual inquiry, disability, and assumptions, using the example of a design that failed due to mistaken assumptions. In the designing and prototyping module, we taught about personas and implicit bias, and how different disabilities and accessibility considerations can be incorporated into personas. In the evaluation module, we stressed accessibility evaluations by simulating disability in class and using W3C tools to evaluate the accessibility of websites. After piloting the above modules in Fall 2018, we updated the modules by merging content and adding more content on WCAG, accessibility guidelines for non-web content, algorithmic bias, and organizational inclusion of people with disabilities in design. This was rolled out in Spring 2019 and continues to be taught in all sections of this course.

To assess how the course modules affected students’ understanding of accessibility, we administered a survey including Likert scale questions related to current knowledge of accessibility and interest in learning about developing technology for people with disabilities. Results indicate that the averaged scores of each student’s responses for the post-survey were significantly higher than the pre-survey scores. Future work needs to assess the perceptions and changes in student understanding at a larger scale.

ACM DL

Transcript of Audio File

Gauging Interest in Digital Personalized Simulations of Hearing Loss For Parents of DHH Children

Dar'ya Heyko and David R. Flatla

University of Guelph, Ontario, Canada

Digital personalized simulations of hearing loss might improve understanding for hearing parents of d/Deaf and hard of hearing (DHH) children, but only if there is demand for it. We surveyed six hearing parents of DHH children online to assess factors that might influence this demand, as well as the base demand itself.

A DHH diagnosis can change parents’ expectations for how their child will develop throughout life — questions arise about what it means to have a DHH child, and for some parents, this questioning can lead to despair. We found that three parents (B,D,E) do not hope for ‘restored hearing’, while two parents (A, F) still strongly cling to it.

We then explored the level of understanding that parents have about their DHH children, in terms of adaptation, selection of schools, language learning, and Deaf community inclusion. We found that three parents (B,D,E) have a high level of understanding of their children's DHH experiences.

Perhaps the combination of parents with high hope in restored hearing (A,F) and parents with a high level of understanding (B,D,E) would mean that few were interested in hearing loss simulations. However, we found that 4 out of 6 parents (A,B,C,D) expressed interest, including one with high hope (A) and two with high understanding (B, D). Based on this, we plan to develop the technology moving forward.

Alt Text of Poster Images

There are three sections that have six color-coded participants per section on the left side of the poster. In the first section, for the question "Do parents hold on to hope for 'restored' hearing?", participants A and F are coded as "Yes", C is coded as "Some", and B, D, and E are coded as "No".

In the second section, the question is "Do parents have an understanding of hearing loss?". Participants B, D, and E are coded as "High", C is coded as "Some", A is coded as "Low", and F is coded as "None".

In the third section, the question is "Is there interest in the personalized simulation?". Participants A, B, C, and D are coded as "Yes" and E and F are coded as "No".

On the right side of the poster, there are seven demographics of the participants. The first contains a timeline of the participants' children's births and ages of diagnosis. The child of A was born in 1992 and diagnosed at 1, the child of B was born in 1986 and diagnosed at 1, the child of C was born in 1985 and diagnosed at 8, the child of D was born in 1989 and diagnosed at 1, the child of E was born in 1995 and diagnosed at 1, and the child of F was born in 1997 and diagnosed at 2.

The second graph shows whether participants know the cause of deafness (A, B, C, and D know, while E and F do not). The third shows a pie chart of participants' children's hearing levels (the children of B, D, E, and F are profound, C is mild, and A is moderately severe). The fourth graph shows preference for signing or oral speech (B, D, and E prefer signing, A and C prefer oral, and F was unknown). The fifth shows usage of assistive technology (the child of B uses hearing aids, F uses cochlear implants, and A, C, D, and E do not use technology). The sixth graph shows six parent-child pairs (A, B, D, E, and F are mother-son pairs, and C is a father-daughter pair). Finally, the seventh graph shows a world map of countries of residence (B, D, E, and F are from Canada, and A and C are from the UK).

ACM DL

Transcript of Audio File

Title: Strength-Based ICT Design Supporting Individuals with Autism.

Authors: Jessica Navedo, Amelia Espiritu-Santo, and Dr. Shameem Ahmed from Western Washington University in Bellingham, WA, USA.

While sociocommunicative behaviors of the autistic population are frequently pathologized, the researchers find evidence via an exploratory thematic analysis of 21 essays supporting strength-based approaches which utilize the natural talents, strengths, interests and communication styles of individuals with autism, resulting in higher degrees of wellbeing.

A strength-based perspective assumes that communities and individuals are resilient, creative, and possess a deep self-knowing which informs solutions, emphasizing the necessity of incorporation of and collaboration with the autistic community these technologies seek to support.

ICT designs supporting strengths common among individuals with autism use simple interfaces, user-focused complexity in functionality, sensory-based design supporting enhanced sensory processing, and highlight customization and personalization as well as predictability and consistency to provide fluid structure.

Thank you!

To contact, email navedoj@wwu.edu, that’s n-a-v-e-d-o-j @wwu.edu.

Alt Text of Poster Images

Image 1, Theme 1: Validate Autistic Intelligence. Presents a Linux penguin icon vs. a Windows OS icon.

Image 2, (Theme 1) ECHOES, a technology-enhanced learning (TEL) environment. Image shows a boy interacting with a screen in an office setting while being observed by an adult.

Image 3, Theme 2: Autism-specific Measures of Ability. Image shows a man in a suit sitting at a desk with a line of animals (bird, chimp, penguin, elephant, fish, seal, dog) facing him. There is a tree behind the animals. The man says, “For a fair selection everybody has to take the same exam… Please climb that tree.” The illustrator is unknown.

Image 4, (Theme 2) A flowchart of the Design 4 Diversity (D4D) framework from Benton et al. The primary category is “D4D Framework for ASD”. This splits into two subcategories, “Structuring Environment” and “Additional Supports”. “Structuring Environment” splits into two sub-subcategories: “Understanding Culture”, which lists the following topics: (1) Quiet/familiar environment, (2) Start with visual recap/intro. End with summary/intro to next session, (3) Consistent session structure, (4) Weekly sessions same time/place, (5) Visual schedule to engage/organize, (6) Routine to tick off activities, and (7) Start with design task fine details; and “Tailor to individual”, which lists the following topics: (8) Themed to hobbies/interests, (9) Appropriate content for ability, and (10) Multiple modes of expression. “Additional Supports” splits into two sub-subcategories: “Understanding Culture”, which lists the following topics: (11) Staged idea generation, (12) Team building activities, (13) Demonstrate existing tech, (14) Visual activities, (15) Familiar activity structure, (16) Visual design templates, and (17) Transfer ideas quickly to computer-based prototype; and “Tailor to individual”, which lists the following topics: (18) Adult support: engagement, idea generation, (19) Adult support: sensitivities, collaboration, and (20) Link to existing knowledge/skills.

Image 5, Theme 3: Wellbeing is the Outcome Goal. A version of the neurodiversity pride icon: a rainbow-colored infinity symbol with a heart worked into the line.

Image 6 (Theme 3) A chart of the framework used by Lanou et al. titled “Planning Strategies That Incorporate Strengths & Interests”. The following text is below the title: “To plan a motivating strategy, consider following this structure: (1) List the strengths, interests, and talents of the student. Challenge yourself to write as many as possible! (2) Identify the specific areas of need of the student. Is the student’s need behavioral, academic, social, or emotional? (3) Consider which research-supported strategies could be used to address the need. Consult recent literature or strategies found in journals like Intervention in School and Clinic. (4) Pair the strategy with a strength, interest, or talent creatively. Ensure that the interest is an inherent part of the strategy itself to increase the student’s motivation. Here are some of the strengths and interests our students have shared with us. It was our goal to teach with, through, and about these areas.” There are three categories listed below with the headings Strengths, Interests, Talents. Under Strengths is the following list: Reading stamina, Hyperlexia, Attention to detail, Computation, Ability to focus on areas of interest, Using the computer, Creativity. Under Interests is the following list: Titanic, Sharks, Transportation, Godzilla, Riddles, Waste Management, Elevators, Anime. Under Talents is the following list: Conceiving of imaginary words, Map making, Creating silly poems, Vocabulary, Creating collections, 3-D design, creating comics.

Poster-90 CBConv: Service for Automatic Conversion of Chinese Characters into Braille with High Accuracy (ACM DL)
Xiangdong Wang, Jinghua Zhong, Jia Cai, Hong Liu and Yueliang Qian

(ACM DL)

Transcript of Audio File

ASSETS Poster. A Classroom Accessibility Analysis App for Deaf Students. Author: Raja Kushalnagar (Gallaudet University).

Educational Disparity

Prior to visual accommodation laws in the 1970s, 7 of 3000+ institutions had accommodations for DHH students.

After visual accommodation laws passed, nearly all educational institutions had accommodations, and DHH college graduation greatly increased. Yet, 16% of DHH students graduate, compared to 30% of hearing students.

Classroom Design

Prioritize aural over visual access: In most classrooms, students sit in rows, to maximize use of space. While this preserves aural access, it does not preserve visual access.

Classroom Accessibility for Visual Learning

Visual access: directed view of 2 degrees width

Aural access: global view with nearly 360 degrees width

Recommended Classroom Layout for Visual Learning

Rearrange seating to establish 360 degree radial spatial distribution.

Challenge

How to measure and document 360 degree radial spatial distribution?

Classroom Accessibility Analysis App for DHH

App analyzes and reports percentage of faces visible from center of room

App: 360 degree camera and Video Analysis script

Participant Ratings

Participants rated the circular layout as being more accessible than either the row or hybrid layout. For the circular layout, the participants rated it as being very accessible: 5.0 (SD=0), and for the row layout, the participants rated it as being somewhat not accessible: 1.86 (SD=0.74). Finally, for the hybrid layout, the participants rated it as being somewhat accessible: 3.2 (SD=0.88).

App Scores

The script analysis of classroom layouts indicated that more participants and their faces were visible for the circular layout than for the hybrid layout, and more than for the row layout. For the circular layout, the script returned 15 bodies and 15 faces (100%). For the hybrid layout, the script reported 13 bodies (87%) and 6 faces (40%). For the row layout, the script reported 6 bodies (40%) and 2 faces (13%).
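As a rough illustration of how a score like the ones above could be computed, the following hypothetical Python sketch counts bodies and faces in a single frame with off-the-shelf OpenCV detectors and reports the percentage of faces visible; it is not the authors’ analysis script.

```python
# Hypothetical sketch: count visible bodies and faces in one frame and report
# the percentage of faces visible (off-the-shelf OpenCV detectors, not the
# authors' analysis script).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())


def accessibility_score(image_path):
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bodies, _ = hog.detectMultiScale(gray)       # rough person detection
    faces = face_cascade.detectMultiScale(gray)  # frontal faces only
    n_bodies, n_faces = len(bodies), len(faces)
    pct_faces = 100.0 * n_faces / n_bodies if n_bodies else 0.0
    return n_bodies, n_faces, pct_faces


# e.g. a circular layout would ideally report 100% of faces visible.
```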

Conclusion

The Classroom Accessibility Analysis app correlates well with self-reported accessibility ratings. All participants noted in their open-ended responses that it was important to see the faces and body language of other participants during discussion, and their ratings were consistent with their comments. The Classroom Accessibility App provides DHH students, faculty or staff a quick way to assess, document and report classroom accessibility.

Alt Text of Poster Images

Figure 1

A view of a classroom with multiple visuals from a deaf student's viewpoint -- screen, laptop, blackboard, etc.

Figure 2

A view of classroom with wooden tables circularly arranged around the room

Figure 3

Field of view for sighted people — 2 degrees sharp details, and 130 degrees peripheral vision

Figure 4

A group of students sitting in rows in a classroom

Figure 5

A group of people sitting in a circle around a table

(ACM DL)

Transcript of Audio File

This poster is titled "Motor accessibility of smartwatch touch and bezel input". The authors are Meethu Malu (now at Google), Pramod Chundury from the University of Maryland, and Leah Findlater from the University of Washington.

The research question was: Can input on the smartwatch bezel provide accessible control compared to the touchscreen for people with upper body motor impairments?

We had two hypotheses: first, that bezel input is faster than touchscreen input, and second, that bezel input is more accurate than touchscreen input. These hypotheses are based on past work that shows that hard edges can be useful for helping to stabilize touch input by people with upper body motor disabilities.

We created two custom smartwatch apps, one for the touchscreen and one that took input from conductive fabric affixed to the smartwatch bezel. For each trial, the apps displayed a yellow target and played an audio cue. A trial was successfully completed if the user tapped the given target, or timed out as an error after 10 seconds.

The study used a 2x2 within subjects design that included two factors: interaction technique, which was the bezel or touchscreen, and layout, which meant four larger targets or 8 smaller targets. We recruited 14 participants with upper body motor impairments, who completed 48 test trials in each of the four conditions.

In terms of input performance, findings revealed a speed-accuracy tradeoff. Counter to our first hypothesis, the bezel input was significantly slower than the touchscreen input. But we found support for our second hypothesis, in that the bezel input was more accurate than the touchscreen input.

For overall preference, the touchscreen was preferred. However, the bezel was preferred for specific tasks like when needing to limit visual occlusion or for shortcut gestures.

Alt Text of Poster Images

Figure 1. Two screenshots of the smartwatch touchscreen app, one showing a rectangular yellow target in the bottom right and the second showing a smaller rectangular target in the middle-left of the screen.

Figure 2. Two screenshots of the smartwatch bezel input app, one showing a yellow bar at the top of the screen to indicate input on the top edge, and the second showing a yellow target in the top-left corner of the screen to indicate input on the top-left bezel corner of the watch. There are also two close-up images of participants’ hands and the watch, one pressing the lower large bezel target and one pressing the bottom right corner bezel target.

Figure 3: Boxplot of trial completion times. The graph shows higher completion times for trials in the bezel conditions and lower times for touchscreen conditions. The touchscreen conditions took on average 1.2s per trial, whereas the bezel conditions took on average 1.7s per trial.

Figure 4: Boxplot of error rates. The graph shows higher error rates for the touchscreen with 8-target layout and similar but low error rates for the other three conditions. The touchscreen trials resulted on average in a 10% error rate, whereas the bezel trials yielded on average a 3.5% error rate.

SRC Students

 


 

Poster Session II: Tuesday October 29, 2019 at 10:20 and 15:15

The following posters will be presented during Poster Session II on both the morning (10:20) and afternoon (15:15) of Tuesday October 29, 2019. This poster session also includes all Doctoral Consortium Student posters and Travel Scholarship Winner posters (poster titles for our scholarship winners are forthcoming).

(ACM DL)

Transcript of Audio File

ASSETS Poster. Titled DanceCraft: A Whole-body Interactive System for Children with Autism.

Authors: Kathryn E. Ringland of Northwestern University, Christine T. Wolf of IBM, Almaden, LouAnne Boyd of Chapman University, Jamie K. Brown of University of California Irvine, Andrew Palermo of University of California Irvine, Kimberley Lakes of University of California Riverside, and Gillian R. Hayes of University of California Irvine.

Using natural user interfaces such as the Microsoft Kinect, we can augment therapeutic dance programs for autistic children and children with sensory processing challenges. The goal of this research project was to evaluate a system, DanceCraft, for feasibility in augmenting dance therapy for these children.

The study used a 1-week at-home deployment with 9 families. A total of 10 children, aged 7 to 12, participated in the study. Each used the DanceCraft program at home up to 3 times during the week. The system allowed for three different dance themes: birds, cars, and snow, pictured on the poster.

Implications for design from this study are as follows. Simplicity: reduce sensory overload for users. Configurability: allow for variation in the program to accommodate a child's individual needs and goals. Inclusive support: include accessibility for other disabilities of the users, including the parent and child using the system.

Implications for study design are as follows. Feasibility of the system: it needs to match other at-home systems, like video gaming systems. Understanding HCI measures of engagement: this includes how and when children used the system and their own personal work-arounds. Messiness of at-home deployment: home deployment is better for understanding real-world use, but it is hard to anticipate and react to problems.

The future goals of this work are to create a more robust system and test it at a longer, larger scale.

Alt Text of Poster Images

Image 1 (center left): DanceCraft menu, blue background, white buttons that read, "Day 1", "Day 2", "Day 3".

Image 2 (center right): screenshot from DanceCraft. Black image of doll figure posing in front of a background with simple blue sky and green grass. White outlines of clouds in sky and gray silhouettes of birds against sky.

Image 3 (below image 2, left): screenshot from DanceCraft. Black image of doll figure posing in front of background of light blue sky, snow or white ground, large white snowflakes overlaid on the sky. To the far right in the background is a white snowman.

Image 4 (below image 2, right, next to image 3): screenshot from DanceCraft. Black image of doll figure posing in between two small cars (one yellow car, one red van). Gray ground and light gray sky.

For more details: bit.ly/dancecraft

(ACM DL)

Transcript of Audio File

Tetraplegia is a condition in which people have limited motor ability in their legs and arms. Voice assistants are currently designed as a general purpose tool, with limited attention to people with disabilities.

We performed contextual need-finding activities with participants with tetraplegia, mostly in their own homes. We conducted semi-structured interviews along with wizard-of-Oz prototyping or a contextual inquiry.

People with tetraplegia use voice activation when lying down on their bed, when on the floor, or while in transit.

We suggest a physical convenience approach, distributing microphones on the bed frame, on the floor, and even in the bathroom (with caution about privacy, and with permission) to accommodate the locations where people will want to use the voice assistant technology. In future work, we plan to examine novel uses of the wizarded voice-activated drone or chair (i.e., scratching the nose, checking who is at the door, picking up mail). Additionally, we plan to extend this work to include more participants.

Alt Text of Poster Images

There are 3 figures in the poster. From top to bottom:

Figure 1: A person sitting at a computer with a respirator in her mouth, using the Dragon speech-to-text software.

Figure 2: A person sitting in a power wheelchair that is being operated by a young girl, his granddaughter. He asked his granddaughter to take him to the sunshine, which was in the kitchen.

Figure 3: A person in a wheelchair, next to his van which has a wheelchair lift. He has equipped his van to be accessible. The researcher is standing on the wheelchair lift.

ACM DL

Transcript of Audio File

Child speech therapy games would benefit from including speech recognition technology to automatically process utterances. However, few studies have examined applying ASR to disordered speech from children, partially due to very limited available data from this population. In our study, we examined two low-resource approaches for domain-specific speech recognition: template matching and adapting existing acoustic models using two standard methods. We found that template matching worked well, but applying maximum a posteriori adaptation to an existing model resulted in the best recognition overall. These results suggest that ASR performance can be improved to a level where it could be used in speech therapy games for children.

Alt Text of Poster Images

Figure 1

Two screenshots from the Apraxia World game. One shows the monkey avatar in a jungle level and the other shows a speech exercise popup prompting the player to say "Jar" by displaying the word and a picture.

Figure 2

Diagram for the template matching process. In the first part, the feature vector is created by trimming leading and trailing silence from a recording, applying pre-emphasis, extracting MFCC features, and applying cepstral mean normalization. In the second part, a test feature vector is time aligned with a template feature vector and the frame-wise distance is computed between the two.
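To make the pipeline in Figure 2 concrete, here is a small, hypothetical Python sketch using librosa features and a hand-rolled dynamic time warping distance; it follows the steps described above but is not the authors’ implementation, and the sample rate and feature settings are assumptions.

```python
# Hypothetical sketch of the template-matching pipeline in Figure 2
# (librosa features + hand-rolled DTW; not the authors' implementation).
import librosa
import numpy as np


def features(path):
    y, sr = librosa.load(path, sr=16000)
    y, _ = librosa.effects.trim(y)                 # trim leading/trailing silence
    y = librosa.effects.preemphasis(y)             # pre-emphasis filter
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13
    return mfcc - mfcc.mean(axis=0)                # cepstral mean normalization


def dtw_distance(a, b):
    """Frame-wise Euclidean distance after dynamic time warping alignment."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)                    # length-normalized


def recognize(test_wav, templates):
    """templates: dict mapping each word to a list of template wav paths."""
    test = features(test_wav)
    scores = {word: min(dtw_distance(test, features(t)) for t in paths)
              for word, paths in templates.items()}
    return min(scores, key=scores.get)             # closest template wins
```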

Figure 3

Boxplots for word-level accuracy of the four speech recognition methods. PocketSphinx with the default acoustic model has the worst accuracy, followed by MLLR-adapted models, then template matching, and MAP-adapted models have the best accuracy.

Table 1

Shows average word recognition accuracy for each speech recognition method per speaker. MAP-adapted models and template matching work best, although which of the two works better is speaker-dependent.

ACM DL

Transcript of Audio File

Tactile Schematics: Circuit Accessibility in a Physical Computing Class

Authors and Affiliations:

* Lauren Race, New York University

* Chancey Fleet, New York Public Library

* Joshua A. Miele, Blind Arduino Project

* Tom Igoe, New York University

* Amy Hurst, New York University

Problem: Schematics are a visual language, describing the relationships between components in an electronic circuit. They present accessibility challenges for tactile learners.

Obstacle: Circuit diagrams contain small elements, complex relationships, and must follow industry standards.

Solution: An improved set of tactile schematic symbols and nine guidelines to create readable tactile graphics for schematics.

Method: Iterative design activities with tactile graphic experts, blind and low vision students, graphic designers, and physical computing instructors.

[Image: Tactile graphic of Analog In before the schematic redesign.]

Analog In Original: 

* Components that are too small and close together

* There’s no braille labels

* Some lines are gray and will not puff up in the fuser

[Image: Tactile graphic of Analog In after the schematic redesign.]

Analog In 11th Version: 

* Optimal symbol sizing

* Braille labeling

* 2-point dotted leader lines

* 2-point connection lines

* 8-point stroke around Integrated Circuits

Tactileschematics.com

Acknowledgements: Our Participants, Andrew Heiskell Braille & Talking Book Library, NYU Ability Project, NYU ITP [Image: NYU Ability Project logo and NYU ITP logo]

Alt Text of Poster Images

[Image: Tactile graphic of Analog In before the schematic redesign.]

Analog In Original: 

* Components that are too small and close together

* There’s no braille labels

* Some lines are gray and will not puff up in the fuser

[Image: Tactile graphic of Analog In after the schematic redesign.]

Analog In 11th Version: 

* Optimal symbol sizing

* Braille labeling

* 2-point dotted leader lines

* 2-point connection lines

* 8-point stroke around Integrated Circuits

[Image: NYU Ability Project logo and NYU ITP logo]

Poster-28 30 Years Later: Has CVD Research Changed the World? (ACM DL)
Wanda Li and David Flatla

ACM DL

Transcript of Audio File

Performance-based tests are frequently used for determining optimal settings but can lead to fatigue. We investigated whether people can identify their optimal touchscreen target sizes by asking 7 older adults, with a mean age of 67.4, and 12 younger adults, with a mean age of 39.2, to identify optimal target sizes on a questionnaire. We then compared these chosen sizes to performance on a target acquisition task.

We found that older individuals (60+) were better than younger adults at choosing their optimal target sizes. In fact, younger adults underestimated the smallest target size they could accurately touch by almost 6mm. Older adults might not need performance assessments for determining their optimal target sizes, while younger adults might.

Alt Text of Poster Images

1) Upper left corner: A smartphone screenshot of the questionnaire used in the study. At the top of the questionnaire is the participant number (P55) and the question (Which is the most comfortable target size for you to tap with your index finger?). Below the question is a submit button and a 3x4 table with the targets decreasing in size from top left to bottom right.

2) To the right of (1): A smartphone screenshot showing the performance test used in the study. There is one target displayed against a white background.

3) Bottom left corner: Two boxplots depicting the level of accuracy older and younger adults achieved on their chosen "smallest" target size. Older adults had a median accuracy of 100%, while younger adults had a median accuracy of around 85%.

4) To the right of (3): Two boxplots depicting the level of accuracy older and younger adults achieved on their chosen "most comfortable" target size. Older adults had a median accuracy of 100%, while younger adults also had a median accuracy of 100%.

ACM DL

Transcript of Audio File

This poster is titled “A Closer Look: Multi-Sensory Accessible Art Translations”. It presents a collaboration between researchers from Monash University and the Bendigo Art Gallery in Australia, exploring the use of new technologies such as laser scanning and 3D printing to create multi-sensory accessible versions of gallery artworks.

The poster is made up of 3 main panels. The first panel is named “Background”. It briefly describes the Bendigo Art Gallery, and shows a picture of two visitors at the gallery, viewing a large sculpture titled “Conjurer 3”.

The second panel is the largest of the three and is titled “Artworks and their Translations”. It shows 6 artworks from the permanent collection of the Bendigo Art Gallery that were chosen to explore new accessible translations. The artworks shown are “The Drover” by Walter Withers, “i ate the rainbow up... ... ...” by Del Kathryn Barton, “Happy Ending?” by Michael Doolan, “Circe” by Bertram Mackennal, “The Young Family” by Patricia Piccinini, and “Conjurer III” by Benjamin Armstrong. These works are described in the accompanying image description text file.

This panel also shows some of the alternate translations that were made of these artworks. They include 3D printed and laser-cut translations.

The final panel is named “Evaluation” and briefly describes the evaluation that took place. It also has a photo of a participant handling a small 3D printed version of “Conjurer 3”.

The poster concludes with the copyright information for the artworks as well as logos for the principal research partners: Bendigo Art Gallery, Monash University, and the SensiLab research lab.

Alt Text of Poster Images

1. This image shows two visitors at the Bendigo Art Gallery. They are in one of the main gallery spaces, viewing a large sculpture titled “Conjurer 3”. This sculpture is a carrot-like creature standing upright, with long arms and legs, made of wood. It is approximately 2.5 metres tall.

2. This image is of “The Drover” by Walter Withers. This is a realistic painting of a man on a horse, herding sheep along a dusty road.

3. This image is of “i ate the rainbow up... ... ...” by Del Kathryn Barton. This is a stylised painting of two women in bright vivid colours. One woman takes most of the frame and is looking at the viewer, while the second woman’s head is in front of the first woman’s chest.

4. This image is of “Happy Ending?” by Michael Doolan. This is a large stylised outdoor sculpture of a bear next to a tree, with a bird lying on the ground. It is black and made of metal.

5. This image is of “Circe” by Bertram Mackennal. This is a bronze statuette of a naked woman, arms outstretched, standing on a plinth.

6. This image is of “The Young Family” by Patricia Piccinini. This is a latex sculpture of a female pig with some human-like features in the face and limbs. There are a number of piglets surrounding the mother pig.

7. This image is of “Conjurer III” by Benjamin Armstrong. This is a large indoor sculpture made of wood and shows a carrot-like creature standing upright, with long arms and legs.

8. This image is of a 3D printed version of “The Drover” by Walter Withers. It is a bas-relief of the painting, printed in green 3D filament.

9. This image is of two alternate versions of “i ate the rainbow up... ... ...” by Del Kathryn Barton. The first is a version of the image printed on swell paper, while the second is of a laser cut bas-relief version.

10. This image is of a 3D printed version of “Happy Ending?” by Michael Doolan. It shows the 3D printed bear, tree and bird, all printed in black filament.

11. This image is of two alternate versions of “Circe” by Bertram Mackennal. They are two bas-relief versions, the first 3D printed and the second laser cut.

12. This image is of one piglet from “The Young Family” by Patricia Piccinini. It shows a 3D resin printed version of a piglet.

13. This image is of a 3D printed version of “Conjurer III” by Benjamin Armstrong. It shows a version printed with clear plastic filament, against a black background.

14. This image shows a person handling a small 3D printed version of “Conjurer 3”. One hand holds the body, while the other explores one of the long legs.

ACM DL

Transcript of Audio File

Building Capacity: eTextile Tactile StoryBook Workshops

By Leona Holloway, Kirsten Ellis and Louise Curtin (Monash University)

Tactile literacy is a key to success, and the first step is active touch stimulated by engaging tactile diagrams.

eTextiles offer an easy and affordable means of enhancing tactile story book pages. We used Lilypad components, conductive thread, LED lights and vibration motors.

We ran three public workshops, each with up to 23 adults and children. After a brief explanation and exposure to sample materials, participants were supported to create their own story book page with a sewn circuit. Everyone was able to complete the task within the 2-3 hour workshop, and a surprising number incorporated switches for greater interactivity.

Based on our experiences, we offer seven guiding principles for running Tactile eTextile Workshops:

1. Get people making as quickly as possible;

2. Provide inspirational materials such as rhymes and picture story books;

3. Provide examples of the finished product;

4. Encourage people to check their work by touch;

5. Keep it simple for success, starting with a very simple circuit;

6. Intervene early with a higher ratio of helpers; and

7. Use collaborative learning.

Alt Text of Poster Images

The poster border is a dashed sewing line with needle at one end and LED light at the other.

Image 1: Toddler touching a simple tactile story book. Copyright Feelix Library, Vision Australia.

Image 2: Diagram illustrating conductive thread sewing lines between Lilypad battery holder and vibration motor.

Image 3: Participants sewing at an outdoor workshop.

Image 4: Craft equipment including fabric, scissors, pompoms and googly eyes.

Image 5: Workshop participants sharing their tactile story book pages.

Image 6: Fuzzy collage bee with LED lights on its feelers.

Image 7: Furry collage cat.

Reproductions of the circuit and tactile diagram pages will be available to touch alongside the poster.

ACM DL

Transcript of Audio File

People with visual impairments are interested in artworks as much as their sighted peers. However, their experience is limited because most artworks are visual.

To enable people with visual impairments to explore and understand artworks independently, we propose a touchscreen-based mobile application which provides object-level verbal descriptions of a painting upon the user's touch. For example, if a user touches a certain object in a painting displayed on the touchscreen, the app provides the label, color and location of the object, such as "A cypress tree, painted black, located on the left side of the painting".
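
A minimal sketch of this interaction, using hypothetical data structures rather than the authors' implementation, would look roughly like this:

```python
# Minimal sketch of touch-to-description lookup; the object list, field names,
# and position thresholds below are illustrative assumptions, not the app's code.
from dataclasses import dataclass

@dataclass
class PaintingObject:
    label: str            # e.g. "cypress tree"
    color: str            # e.g. "black"
    bbox: tuple           # (x, y, width, height) in screen coordinates

def horizontal_location(bbox, screen_width):
    """Rough left/center/right wording based on the object's horizontal center."""
    x, _, w, _ = bbox
    center = x + w / 2
    if center < screen_width / 3:
        return "left side"
    if center > 2 * screen_width / 3:
        return "right side"
    return "center"

def describe_touch(objects, touch_x, touch_y, screen_width):
    """Return a verbal description for the object under the touch point, if any."""
    for obj in objects:
        x, y, w, h = obj.bbox
        if x <= touch_x <= x + w and y <= touch_y <= y + h:
            side = horizontal_location(obj.bbox, screen_width)
            return f"A {obj.label}, painted {obj.color}, located on the {side} of the painting"
    return "Background"

# Example with a made-up segmentation of the painting:
objects = [PaintingObject("cypress tree", "black", (20, 80, 120, 500))]
print(describe_touch(objects, 60, 200, 1024))
```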

We conducted a semi-structured interview study using our application with 8 participants with visual impairments where they were asked to explore 4 different paintings and then provide their description and opinion about each painting.

We found that our application enables them to understand the shape and location of several objects in the painting, and to gain knowledge of the painting as if reading an encyclopedia.

They also valued that they could access paintings at any time they want, at their own pace, without a sighted person's help, and save time and money because they do not need to visit a museum.

In conclusion, our application can help people with visual impairments to freely explore and learn about various paintings in more detail, with object-level descriptions as well as spatial information such as position and size.

Alt Text of Poster Images

There are 2 figures in the poster.

Figure 1: The original painting and a visualization of its segmentation, using ‘The Starry Night’ by Vincent van Gogh, which was used in our study.

Figure 2: Four paintings used in our study with varying genres which are landscape('The Starry Night' by Vincent van Gogh), portrait('Girl with a Pearl Earring' by Johannes Vermeer), abstract('Composition II in Red, Blue and Yellow' by Piet Mondrian) and still life('The Basket of Apples' by Paul Cézanne).

ACM DL

Transcript of Audio File

This work is Titled “Evaluation of Why Individuals with ADHD Struggle to Find Effective Digital Time Management Tools” by Breanna Desrochers, Ella Tuson, and John Magee from Clark University in the United States.

Our work investigated the use of time management tools among adults with ADHD with the goal of determining where there may be room for improvement in current digital tools. We present findings from a survey of adults with and without ADHD. Our findings indicate dissatisfaction among adults with ADHD with the tools currently available to them and highlight key areas for potential development. Future work in this area includes expanding digital tools, and conducting more research about balancing multiple time management strategies and the use of timers among those with ADHD.

Alt Text of Poster Images

Alt text for submission 50

Survey: A bar graph depicting participants who reported having an ADHD diagnosis, participants who reported not having an ADHD diagnosis, and participants who reported they were unsure. Thirty participants reported having ADHD, twenty-two reported not having ADHD, and three were unsure.

Perceived Effectiveness: A bar graph depicting the perceived effectiveness of current time management tools. No participant chose “Extremely Effective.” One participant with ADHD and eleven participants without ADHD chose “Very Effective.” Thirteen participants with ADHD and eight participants without ADHD chose “Moderately Effective.” Fourteen participants with ADHD and one participant without ADHD chose “Slightly Effective.” Two participants with ADHD and two participants without ADHD chose “Not Effective at all.”

Physical versus digital tools: A bar graph depicting which participants reported using digital tools, physical tools, or both kinds of tools. Four participants with ADHD and eight participants without ADHD reported using only physical tools. Four participants with ADHD and zero participants without ADHD reported using only digital tools. Twenty-two participants with ADHD and fourteen participants without ADHD reported using both digital and physical tools.

Specific strategies: A bar graph depicting the percentages of participants with and without ADHD who use specific time management tools. Of those that reported using a calendar 61% were participants with ADHD and 39% were participants without ADHD. Of those that reported using a planner 45% were participants with ADHD and 55% were participants without ADHD. Of those that reported using a to-do list 59% were participants with ADHD and 41% were participants without ADHD. Of those that reported using a timer 82% were participants with ADHD and 18% were participants without ADHD. Of those that reported using a reward system 57% were participants with ADHD and 43% were participants without ADHD. Of those that reported using a strategy not listed 62% were participants with ADHD and 38% were participants without ADHD.

ACM DL

Transcript of Audio File

Title: Syncing Pre-Recorded Audio Description to a Live Musical Theater Performance using a Reference Audio Recording

Authors: Dirk Vander Wilt and Mary Farbood

Affiliation: New York University

Audio description (AD) is an accessibility service that provides blind or visually impaired individuals with an alternative means of obtaining visual information. AD for scripted live performances, such as musical theater, provides spoken, real-time visual information for blind or visually impaired theatergoers.

Unlike fixed media, live, repeatable performances cannot have a single, fixed AD track aligned in advance, since repeated live performances are by design never identical. In these cases, AD tracks are pre-recorded and then triggered at the correct moment using some automated process.

AD for live events is expensive and time-consuming to produce, and is rarely available. When it is available, it is often limited to certain performances. Our solution allows AD to be deployed automatically at every performance of a repeated live show using a reference audio recording and online time warping.

We first obtain a complete recording of the show to be described. Concurrently, a live describer describes the show, noting the sample number where each described event begins. On subsequent performances, the pre-recorded AD is triggered based on its location in the reference recording.
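
To make the triggering idea concrete, here is a minimal, hypothetical sketch in Python: it uses plain offline dynamic time warping over precomputed feature frames, whereas the actual system performs this alignment online, during the performance, and the array and variable names here are assumptions.

```python
# Minimal sketch, not the authors' implementation. Assumes `ref` and `live`
# are 2-D NumPy arrays of audio feature frames (e.g. chroma or MFCC vectors)
# and `ad_marks` is a list of reference frame indices where descriptions start.
import numpy as np

def dtw_path(ref, live):
    """Classic DTW: accumulate cost from (0, 0), then backtrack the best path."""
    n, m = len(ref), len(live)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - live[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def trigger_frames(ref, live, ad_marks):
    """Map each described event's reference frame to a frame in the live recording."""
    ref_to_live = {}
    for i, j in dtw_path(ref, live):
        ref_to_live.setdefault(i, j)  # keep the earliest live frame per reference frame
    return [ref_to_live[m] for m in ad_marks if m in ref_to_live]
```

In the live setting the warping would be computed incrementally, and an AD clip would be played as soon as the alignment front passes its marked reference frame.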

Alt Text of Poster Images

The first image is an 8-by-8 grid that visualizes how Dynamic Time Warping works. Both axes of the grid represent one of the two audio signals to be aligned, and each cell in the grid is the cumulative path cost from the lower-left (the first frame of each time series) to the location of that cell. A line weaves through the grid to show an example of how Dynamic Time Warping might find the best match between two signals.

The second image shows two line graphs that compare how accurately our system can find the correct AD trigger marks in our two experiments. The results show that the H.M.S. Pinafore experiment was more accurate in finding the AD triggers at the correct time than the Legally Blonde experiment.

ACM DL

Transcript of Audio File

Sidewalk, A Wayfinding Message Syntax for People with a Visual Impairment

by Joey van der Bie, Christina Jaschinski and Somaya Ben Allouch of the Amsterdam University of Applied Sciences and the Saxion University of Applied Sciences.

Traditional turn-by-turn navigation approaches often do not provide sufficiently detailed information to help people with a visual impairment (PVI) to successfully navigate through an urban environment.

To provide PVI with clear and supportive navigation information we created Sidewalk, a new wayfinding message syntax for mobile applications.

Sidewalk proposes a consistent structure for detailed wayfinding instructions, short instructions and alerts.

We tested Sidewalk with six PVI in the urban center of Amsterdam, the Netherlands. Results show that our approach to wayfinding was positively valued by the participants and preferred over traditional navigational messages.

Alt Text of Poster Images

Image 1. Photo Joey van der Bie

A photo of the author Joey van der Bie and the logo of the Amsterdam University of Applied Sciences.

Image 2. EyeBeacons wayfinding system

The EyeBeacons wayfinding system is represented by a bone-conduction headset, an Apple Watch Series 3 and an iPhone 7.

The EyeBeacons wayfinding app interface is visible on the screens of the iPhone and the watch.

Image 3. Sidewalk navigation syntax explained

The Sidewalk navigation syntax consists of 3 parts with several components:

1. Attention Indicator (optional) and Attention Message (optional).

2. Current Orientation (optional), Distance (optional), Action, Action Indicator (optional), Destination

3. Orientation to Action (optional)

Each component is accompanied by an example, together forming the example message:

"Warning, obstruction on the sidewalk.

You are at the Wibautstraat with traffic at your right. After 60 meters turn left at the trashcan onto Mauritskade.

On your left you pass a bakery."

More details of the different components can be found in the paper in the ASSETS proceedings.
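
Purely as an illustration (the field names and connecting words such as "onto" are assumptions, not the published syntax), a message built from these components could be assembled like this:

```python
# Hypothetical sketch of assembling a Sidewalk-style wayfinding message;
# optional components are simply skipped when absent.
def sidewalk_message(action, destination, attention=None, orientation=None,
                     distance=None, action_indicator=None,
                     orientation_to_action=None):
    parts = []
    if attention:
        parts.append(f"{attention}.")
    core = []
    if orientation:
        core.append(orientation)
    if distance:
        core.append(f"After {distance}")
    core.append(action)
    if action_indicator:
        core.append(f"at the {action_indicator}")
    core.append(f"onto {destination}.")
    parts.append(" ".join(core))
    if orientation_to_action:
        parts.append(f"{orientation_to_action}.")
    return " ".join(parts)

# Roughly reproduces the example message from the poster:
msg = sidewalk_message(
    action="turn left",
    destination="Mauritskade",
    attention="Warning, obstruction on the sidewalk",
    orientation="You are at the Wibautstraat with traffic at your right.",
    distance="60 meters",
    action_indicator="trashcan",
    orientation_to_action="On your left you pass a bakery",
)
print(msg)
```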

Image 4. SUS scores

A graph showing the individual SUS scores of the participants in green bars.

A red line indicates the average score of 84.

Image 5. RTLX scores

A graph showing the individual RTLX scores of the participants in purple bars.

A red line indicates the average score of 17.3.

Image 6. Busy crossing

A photo of a busy crossing is presented with the navigational message describing the situation:

"After 5 meters turn left at the tactile pavement and cross the road to Tweede Boerhaavestraat.

You cross a bicycle lane, two roads with traffic lights and audio indicator, and a bicycle lane.”

ACM DL

Transcript of Audio File

A multi-modal approach for blind and visually impaired developers to edit webpage designs by Venkatesh Potluri, Liang He, Christine Chen, Jon E. Froehlich and Jennifer Mankoff from Paul G. Allen School of Computer Science and Engineering, University of Washington

Content creators who are blind and visually impaired are actively creating interfaces meant for visual consumption. While these interfaces are expected to have good visual design, the necessary tools and information to build visually pleasing interfaces are not accessible to developers who are blind and visually impaired.

To address this gap, we propose a multi-modal approach, using an Integrated Development Environment and touch gestures to enable BVI developers to edit web page designs without breaking visual aesthetics. We demonstrate our approach through a multi-modal system and present preliminary findings from a pilot. Future work will enhance validation through formal verification and machine learning techniques, investigate learnability and discoverability of our current gesture set and explore new interaction techniques.

The picture shows a user interacting with the webpage representation on the accessible canvas, an iPad. The code editor (with the underlying CSS) is displayed on a laptop.

The system consists of three modules: the accessible canvas on the left, the code editor on the right, and the controller in the middle.
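
A minimal sketch of what the controller module might do, with made-up guideline values (the actual rules and data structures are not spelled out on the poster):

```python
# Hypothetical sketch of a controller that accepts a proposed CSS property
# change only if it keeps the page consistent with simple design guidelines.
ALLOWED_FONTS = {"Helvetica", "Arial"}          # typeface consistency (assumed)
SPACING_SCALE_PX = {4, 8, 16, 24, 32}           # spacing consistency (assumed)
PALETTE = {"#1a1a1a", "#ffffff", "#0066cc"}     # color consistency (assumed)

def validate(prop: str, value: str) -> bool:
    """Return True if the proposed update respects the design guidelines."""
    if prop == "font-family":
        return value in ALLOWED_FONTS
    if prop in {"margin", "padding"} and value.endswith("px"):
        return int(value[:-2]) in SPACING_SCALE_PX
    if prop in {"color", "background-color"}:
        return value.lower() in PALETTE
    return True   # properties without a guideline pass through unchanged

def apply_update(stylesheet: dict, selector: str, prop: str, value: str) -> bool:
    """Apply an update proposed by the canvas or the editor, or reject it."""
    if not validate(prop, value):
        return False                                   # violates a guideline
    stylesheet.setdefault(selector, {})[prop] = value  # accepted
    return True
```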

The poster has logos of the Paul G. Allen School of Computer Science and Engineering, Design Use Build, Makeability Lab and Make4All

Alt Text of Poster Images

The poster has two sections.

In the top section, the background shows a blind user's finger touching the accessible canvas (i.e., the tablet). The title, "A Multi-Modal Approach for Blind and Visually Impaired Developers to Edit Webpage Designs", is at the top-left corner, and the authors' round headshots and names are listed under the title from left to right. At the top-right corner, a QR code linking to the project webpage is shown. Under the QR code, the logos are stacked from top to bottom in the following order: Paul G. Allen School, DUB, Makeability Lab, and Make4All.

In the bottom section, there are two columns. In the left column, the section at the top is Research Problem. It has a bold heading called “Research Problem” and a paragraph: Content creators who are blind and visually impaired (BVI) are actively creating interfaces with visual elements, meant for visual consumption. While these interfaces are expected to have good visual design, the necessary tools and information to build visually pleasing interfaces are not accessible to BVI developers. Under the paragraph, there is a system overview section. In this section, there is a heading called “System Overview”, followed by two vertically stacked figures. The figure at the top is the picture of the system, including a hand touching the tablet on the left and a laptop with the code editor on the right side. Two annotations point to the tablet and the laptop, respectively. The figure at the bottom is the system diagram that we used in the paper. Under the system diagram figure, the preliminary study section has a heading and a grey person’s headshot icon with a quote on the right saying “Need to have more control (e.g. have access to the history of changes)”. Under the quote, the demographic information of the participant is described: a 24-year-old congenitally blind software developer.

Still in the bottom section, in the right column, the first section at the top has a heading called “Our Approach”. The first paragraph in this section says: To address this gap, we propose a multi-modal approach, using an Integrated Development Environment (IDE) and touch gestures to enable BVI developers to edit web page designs without breaking visual aesthetics. Then three components are vertically stacked under the paragraph. Each component has the following visual layout: there is a round icon indicating what the component is, a bold component name on the right side of the icon, and a brief description of the component. The components are (1) Accessible Canvas: a touch screen tablet interface that allows a BVI developer to modify visual attributes of web pages; (2) Code Editor: an IDE modified with a plugin that supports direct CSS code edits; (3) Controller: this processes proposed updates from the canvas or the code editor and rejects them if they violate design guidelines. Under the three components, three design guidelines are stacked horizontally. From left to right, they are spacing consistency, typeface consistency, and color consistency. Each design guideline has an icon above the design guideline name. Under the three design guidelines, there is the final section called “Future Work”. It has three bullet points: (1) Improved design guideline validation; (2) Machine learning-based design guideline validation and recommendation; and (3) Non-visual interaction paradigms.

ACM DL

Transcript of Audio File

Poster Title: Supporting Older Adults in the Use of Smart Devices for Personal Health Management

Authors: Collin Wang, Carolyn Pang, Karyn Moffatt, School of Information Studies, McGill University. Rock Leung, Samsung Research & Development. Joanna McGrenere, Department of Computer Science, University of British Columbia

Personal wearables can help older adults in managing health problems by monitoring information such as one’s heart rate, but they are challenging to learn and adopt due to the overwhelming amount of information and features they contain. Our research goal is to build an accessible technology for older adults that supports the independent and collaborative learning of smart devices for personal health management. We developed Help Kiosk 2.0, which integrates instructions, videos, and feedback into a single 40” tabletop display on which users are able to place their smart devices. From prior work, we derived three design requirements for our system: known guidelines for senior-friendly instructions; onboarding and personal health management; and collaborative learning and social connectedness. The core features of our system are a remote video support feature (Help) integrated with Zoom; a navigation menu for eight key learning topics; and the central area where the devices (smartwatch and tablet) are placed and instructional information (text, images, and videos) is displayed. Our next step is to conduct video prototype interviews and observational studies to evaluate the system with older adults so that we can iterate on the design and make modifications to improve their learning experiences with smart devices.

Alt Text of Poster Images

Figure 1:

A remote video support feature is available with a camera and screensharing;

Help launches remote video support;

Devices are placed in the central area of the tabletop;

And a navigation menu for eight key learning topics.

Figure 2:

Instructional information (text, images, and videos) is displayed around the devices.

ACM DL

Transcript of Audio File

Typically, individuals who are blind or visually impaired perceive graphical information on touchscreen devices using a single exploring finger, with tactile and/or auditory feedback. The main question asked in this study is whether the use of feedback for multiple exploring fingers can improve performance. For tactile feedback, a vibrator was placed on the distal part of one or more fingers to provide feedback for that particular finger. For auditory feedback, grouping principles of timbre and/or spatial location were used to signal information about one or more exploring fingers through binaural headphones. Seven different methods in total were compared through a within-subjects design. Blind or visually impaired participants were asked to answer questions about maps of a botanical garden. Performance was determined in terms of the number of correct answers and response time. Preliminary results suggest that the use of audio cues, particularly with two fingers, is more effective than using vibrotactile feedback, whether with one or two fingers.

Alt Text of Poster Images

Diagram/Map Representation. An example of each type of map that was used in testing is given. Both map examples are square. The first map, an overview map of the garden areas without inner details plus buildings, contains garden areas indicated in different colors and represented by simple shapes, such as rectangles, an L-shape and an X-shape, and two buildings indicated by triangles. The second map, an individual garden map, contains greenery (indicated by green areas), pathways (indicated by grey), stairs (indicated in one place by a red rectangle on a path), benches indicated by yellow ovals and points of interest indicated by ovals that are a shade of blue. The garden consists of a square pathway surrounding a small oval green in the middle. Three pathways go straight from the middle to the top, left and bottom edges, respectively. A fourth pathway goes to the right to a large blue oval, which indicates a water feature. On the pathway to the left, there is a red rectangle indicating stairs. Four yellow ovals are placed around the square path in the middle, one beside the fountain and two on the path heading to the left. There are also pairs of points of interest (indicated by shades of blue) on the top, left and bottom paths.

Feedback Methods.

A tablet is shown in portrait mode with the screen divided into a 3x2 grid of solid blocks of color: white, grey, red, green, blue and yellow. The different colors map onto the notes (for audio feedback) and vibrations (for tactile feedback) used to indicate the features in the map (which are rendered by the different colors).

There are four methods that use audio cues:

1. One finger exploring: The color under the index finger of the right hand is indicated by notes of the clarinet played to one ear.

2. Two fingers exploring: The color under the index finger of the right hand is indicated by notes of the clarinet. The color under the middle finger of the right hand is indicated by notes of the guitar. The notes at a given instant in time are combined and played to one ear.

3. Two fingers exploring: The color under the index finger of the right hand is indicated by notes of the clarinet played to the left ear. The color under the middle finger of the right hand is indicated by notes of the clarinet to the right ear.

4. Two fingers exploring: The color under the index finger of the right hand is indicated by notes of the clarinet played to the left ear. The color under the middle finger of the right hand is indicated by notes of the guitar to the right ear.

There are three methods that use tactile cues:

5. One finger exploring. The color under the index finger of the right hand is indicated by the corresponding vibration pattern (on/off) provided by a ring-like device placed on the proximal half of the fingertip of the same finger.

6. Two fingers exploring. The color under the index finger of the right hand is indicated by the corresponding vibration pattern (on/off) provided by a ring-like device placed on the proximal half of the fingertip of the same finger. The color under the middle finger of the right hand is indicated by the corresponding vibration pattern (on/off) provided by a ring-like device placed on the proximal half of the middle fingertip.

7. Two fingers exploring. The color under the index finger of the right hand is indicated by the corresponding vibration pattern (on/off) provided by a ring-like device placed on the proximal half of the fingertip of the same finger. The color under the index finger of the left hand is indicated by the corresponding vibration pattern (on/off) provided by a ring-like device placed on the proximal half of its fingertip.

ACM DL

Transcript of Audio File

There have been many recent developments in navigational aids for people with disabilities that complement GPS-based applications. These include applications such as BlindSquare and SoundScape. Moreover, there have been systems developed utilizing a combination of Bluetooth-Low Energy beacons, Wi-Fi and or Computer Vision such as NavCog, GuideBeacon, ASSIST, and PerCept. These auxiliary location-based services promise to provide wayfinding capabilities in GPS-limited areas.

While most of the initial research has, expectedly, focused on creating prototypes and evaluating them, it is not clear how these systems can be sustainably deployed on a large scale, because we do not know what two key stakeholders in the process think about them and their future. These stakeholders are city planners, who will likely deploy and manage such systems in communities, and non-profit agency personnel, who serve people with disabilities and typically advocate for such aids for their constituents.

This work presents the results from a survey of 45 city planners and 30 non-profit agency personnel from the city of Wichita and surrounding towns on their thoughts about auxiliary location-based services.

The key conclusions from this study are the following:

City planners and non-profit personnel expressed a need for adequately funding auxiliary location-based services with a recommendation for funding through private-public partnerships.

Auxiliary location-based services were identified to be desirable for safety (emergency and evacuation) needs along with navigational needs of people with disabilities, who are currently helped by family and friends.

City planners and non-profit personnel differ in their perception of potential impacts of such location-based services and there is a need for education of all stakeholders.

Alt Text of Poster Images

Results for Question 2: How are assessments of wayfinding needs currently done?

This figure shows the various mechanisms used by city planners and non-profits to assess wayfinding needs in their communities. The mechanisms shown are the following: do not assess, field studies, research studies, surveys, stakeholder meetings, elected officials, public meetings, online comments, telephone. The major differences between the two groups are that city planners do not assess or use field studies, while non-profits use research studies, with all other mechanisms used somewhat.

Results of Question 5: Percentage of population positively impacted by ALBSs

This figure shows the percentage of population that is likely to be positively impacted (as perceived by both groups) if ALBSs are deployed. Most city planners did not know, while over 50% of non-profit personnel felt this would impact between 15-30% of population.

Results for Question 6: Ranking of various applications by priority

This figure shows how city planners and non-profit personnel ranked various ALBS applications in order of priority from 1 through 4. The application categories were: emergency, general wayfinding, wayfinding for PWD, smart city. City planners prioritized emergency applications followed by wayfinding for PWD, while non-profits prioritized both emergency and PWD wayfinding applications equally.

ACM DL

Transcript of Audio File

Poster title: Exploring Invisible Disability Disclosure in the Sharing Economy Platforms

Authors and Affiliation: Zhengyan Dai and Erin Brady, Department of Human-Centered Computing, Indiana University School of Informatics and Computing.

In this poster, we try to answer whether people with different invisible disabilities would encounter discrimination when working on different sharing economy platforms. We used a mixed-methods approach to answer the research question. We designed a three-by-two factorial between-groups experiment to measure the effect of disability type (physical, psychiatric, or no disability) and sharing economy platform (offline or online work) on perceptions of service providers’ credibility and employment suitability. We distributed surveys on Amazon Mechanical Turk in February 2019 and received 97 valid responses. In the statistical analysis, we did not find a significant effect of disability status or type of sharing economy work on Credibility and Employment Suitability. We used an affinity diagram to categorize the primary themes emerging from the open-ended responses about employment suitability, which asked what additional information clients would want before making a hiring decision and their reasons for selecting specific service providers. We summarized three main themes from participants’ responses, namely Work-Related Attributes, Platform, and Personal Traits.

Alt Text of Poster Images

The first figure displays one sample of the experimental materials we used in the experiment. It is mocked up from profile images on Rover. The left side of the figure shows a photo of a woman with her dog. Under the photo, it shows what kind of dog she can take care of, from small (0-15 lbs) to large (101+). The right side displays basic information about the sitter: her name is Annie, she lives in Indianapolis, she has a response rate of 100%, and her response time is under half an hour. The main part of the picture is Annie’s self-introduction: I have been dog sitting on Rover since 2016 and have been caring for my own dogs for 20 years. I have experience with house sitting, dog boarding, drop-in visits, and walking. I’ve sat puppies, giant dogs, and everything in between. I also have experience in in giving medicine. I look forward to meeting you and you’re puppies! I started working from home last year after I was diagnosed with traumatic brain injury. This condition does not impact the quality of my work, though it may take me more time to respond to your messages (up to 24 hours). Get free consultations! If you are interested, please feel free to contact me.

Poster-81 Thermo-haptic earable display for the people with hearing and visual impairment (ACM DL)
Arshad Nasser, Kening Zhu and Sarah Wiseman

ACM DL

Transcript of Audio File

Developmental personal health libraries: supporting independence through design, by Dr. Amelia N. Gibson and doctoral student Kristen L. Bowen, Master of Science in Library Science, at the University of North Carolina at Chapel Hill School of Information and Library Science.

This poster presents partial findings from an ongoing three year study on the information needs and information practices of people with autism in local communities.

These findings include themes related to health information seeking, and include considerations for design of personal health libraries that support independent management of health information for autistic adults.

Themes include providing indicators for trustworthiness and safety, supporting patient-provider communications and patient rights, multimedia and multimodal communication, formattable text and layout, clear descriptions of costs, capitalizing on special interests, and English as a second language.

Dr. Gibson is collaborating with other researchers at the Carolina Health Informatics Program and the UNC School of Medicine Division of Occupational Science and Occupational Therapy to apply these findings to the design of a mobile personal health application to support independence for people on the autism spectrum.

Alt Text of Poster Images

Trustworthiness and Safety: Flat line icon of a lock

Supporting Patient-Provider Communications & Patient Rights: Flat line icon of a megaphone

Formattable Text and Layout: Flat line icon of a document with a pencil hovering over it

Costs: Time and Money: Flat line icon of a hand holding a coin

Capitalizing on Special Interests: Flat line icon of a lightbulb, with a person inside

Language: Flat line icon of two speech bubbles. One has a question mark inside.

Logos at bottom right: US Institute of Museum and Library Services, Community-Engaged Disability informatics: Connecting Carolina's Communities with Information;

UNC School of Information and Library Science

ACM DL

Transcript of Audio File

Title: "Connection: Assisting Neurodiverse Individuals in Forming Lasting Relationships Through a Digital Meduim". Authors: Elliot Fox, Shane Baden, Nick Ziegler, Justin Greene, and Dr. Moushumi Sharmin.

A significantly high number of adults on the neurodiverse spectrum have been found to experience social isolation. This appears to stem from their common difficulty forming and maintaining relationships through traditional face-to-face methods.

Our goal is to create an online socialization platform that can aid neurodiverse individuals in forming and maintaining long-term relationships, both platonic and romantic, by including features specifically tailored to their needs. After creating an initial prototype informed by a secondary study of related work, we interviewed 10 individuals, 5 neurotypical and 5 neurodiverse, to find out which features they found most useful.

Feedback on the prototype was very positive and included suggestions to improve existing features such as our emotion-aware messaging system, the appearance, and our matching algorithm.

In the future, we will continue implementing our prototype and begin user testing on a physical device.

Alt Text of Poster Images

image 1: 3 different mobile screenshots depicting original prototype.

a. Interest based matching page.

b. Profile page depicting percentages of matched interests as colored circles.

c. Emotion aware messaging page.

image 2: Mobile screenshot of updated prototype of emotion aware messaging page.

image 3: Mobile screenshot of updated profile settings page.

image 4: Mobile screenshot of updated matching interface.

image 5: Mobile screenshot of updated profile page.

image 6: Blown up visual of matched interest circles. red=0%-33% match, yellow=34%-63% match, green=64%-100% match.


 


 

Demos: Wednesday October 30, 2019 at 10:10

ACM DL

Transcript of Audio File

Development of a real time Bionic Voice system for people without a larynx

By Farzaneh Ahmadi and Tomoki Toda (Nagoya University)

People who lose their larynx due to cancer lose their ability to generate a natural voice forever. In this specific problem, "voice" is different from speech.

Voice is the sound that human vocal folds generate. We can then shape this sound into speech by moving our facial and lip muscles.

People who lose their larynx can still move their facial and lip muscles, but they no longer have a voice source that sounds like a natural human voice.

We have introduced Bionic Voice, the first and only wearable electronic voice prosthesis that generates a high-quality voice for these people using respiration. Inside the device, there is an Artificial Intelligence (AI) algorithm that can be trained to generate a voice similar to the natural sound of the vocal folds. We have previously developed and trained this algorithm in our offline Bionic Voice method.

In this paper, we implement the AI algorithm online, in real time, and demonstrate its performance compared to the offline system. You are welcome to listen to audio samples of this real-time Bionic Voice.

Alt Text of Poster Images

Table 1. Introduction to an old-school mechanical voice prosthesis called the Pneumatic Artificial Larynx, or PAL. This table shows that although the PAL is not widely used, it outperforms any other voice prosthesis on the market in terms of ease of control, intelligibility, wearability and voice quality.

Figure 1: A larynx amputee using the PAL to speak. Larynx amputees breathe through an opening in their neck called a stoma. The PAL is like a whistle with two ends: one end is placed on the stoma, and the other end is placed inside the mouth. Inside the PAL, there is a mechanical membrane that vibrates in response to breathing.

Figure 2. A futuristic vision of how a respiration-driven Bionic Voice, a modern adaptation of the PAL, will look. Bionic Voice looks somewhat like a headset connected to a thin tube. The headset is wirelessly connected to a pressure sensor on the stoma and generates voice in response to variations of respiration monitored by this pressure sensor. The generated voice is transferred to the mouth via a thin tube.

Figure 3: Since we want to mimic the PAL design in Bionic Voice, this figure takes a closer look at the mechanics of voice generation by the PAL. The PAL is driven by variations of the pressure at the neck stoma and inside the mouth. The PAL generates a voice signal called "e" which is the excitation source of speech. The patient shapes this voice into speech by moving their facial and lip muscles. We want to develop a computer algorithm to estimate the PAL's voice "e" from the underlying respiration signals.

Figure 4: The structure of a statistical engine to estimate the PAL voice, i.e. "e", from respiration input. We do that by breaking "e" into three components: f0, spectral coefficients and aperiodicity. Then we use three GMMs that look at the trajectory of respiration signal features and estimate these three components. We then combine the three estimated parameters using a vocoder to build e-hat, which is our estimate of e.
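
As a rough, hypothetical illustration of one such GMM mapping (per-frame conditional-mean regression only; the trajectory modelling and the vocoder synthesis step described above are omitted, and the array names are assumptions):

```python
# Hypothetical sketch: joint GMM over [respiration features, target stream],
# then per-frame conditional-mean regression. Arrays `resp_feats` (N x Dx) and
# `targets` (N x Dy, e.g. f0 as a single column) are assumed to exist.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(x, y, n_components=8):
    """Fit a GMM on the joint [input, target] space."""
    z = np.column_stack([x, y])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(z)

def gmm_regress(gmm, x, dx):
    """E[y | x] under the joint GMM, frame by frame (no trajectory smoothing)."""
    preds = []
    for xi in x:
        weights, cond_means = [], []
        for k in range(gmm.n_components):
            mu, cov = gmm.means_[k], gmm.covariances_[k]
            mu_x, mu_y = mu[:dx], mu[dx:]
            cov_xx, cov_yx = cov[:dx, :dx], cov[dx:, :dx]
            # responsibility of component k given this input frame
            weights.append(gmm.weights_[k] *
                           multivariate_normal.pdf(xi, mean=mu_x, cov=cov_xx))
            # conditional mean of the target for component k
            cond_means.append(mu_y + cov_yx @ np.linalg.solve(cov_xx, xi - mu_x))
        w = np.asarray(weights)
        w /= w.sum()
        preds.append((w[:, None] * np.asarray(cond_means)).sum(axis=0))
    return np.asarray(preds)

# One model per stream (f0, spectral coefficients, aperiodicity); the three
# estimated streams would then be fed to a vocoder to synthesize e-hat.
```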

Figure 5. The performance of the Bionic Voice statistical engine in estimating the f0 of e-hat from respiration input. The Bionic Voice f0 calculated for e-hat has 92% accuracy compared to the PAL f0 calculated from the PAL voice "e" (which has been generated by the PAL).

Figure 6. The performance of Bionic Voice statistical engine in generating the excitation waveform "e-hat" as a whole compared to the original PAL waveform, "e". We compare these using their spectrogram which is the frequency response of each frame plotted over time. The original "e" and estimated "e-hat" waveforms match closely.

ACM DL

Transcript of Audio File

Still Not Readable? An Interactive Tool for Recommending Color Pairs with Sufficient Contrast based on Existing Visual Designs

Fredrik Hansen, Josef Jan Krivan and Frode Eika Sandnes, Oslo Metropolitan University, Oslo, Norway

What is the problem?

Too little contrast: Visual stimuli are based on differences in light. Many websites have insufficient contrast between the text and its background. This makes text difficult to read under unfavorable lighting conditions, with dim displays or glossy displays with glare, with low visual acuity or with color vision deficiencies. More contrast is also needed for smaller fonts than for larger fonts.

Current tools provide no solutions: Current tools tell designers whether or not their color choices provide enough contrast. If there is not enough contrast, the tools do not give any clues as to how to fix the problem. Contrast requirements are mathematical, and designers must resort to trial and error until they achieve sufficient contrast.

What is our solution?

We developed a browser-based tool to help designers correct their color choices based on existing designs. Users simply make an initial mockup of their design, then upload a snippet of the design to the tool. The tool analyses the color profile and automatically detects if there is too little contrast between a pair of colors. Moreover, the tool suggests how these colors can be corrected to achieve enough contrast while maintaining the visual profile of the design.

How did we do it?

We assume that the hue choices are what is most significant for the perception of a visual design. The hues are therefore not altered. Instead, we search the color space for the closest points with intensities and saturation levels that give the resulting color pair sufficient contrast.
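
As a concrete illustration, the sketch below computes the WCAG 2.x contrast ratio and then nudges the darker colour's lightness, leaving the hue untouched, until a target ratio (4.5:1 for normal text) is reached; the published tool's search over both intensity and saturation is more sophisticated than this toy version.

```python
# Minimal sketch of the idea: WCAG contrast plus a hue-preserving adjustment.
import colorsys

def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour given as three 0-1 floats."""
    def linearize(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    lo, hi = sorted([relative_luminance(c1), relative_luminance(c2)])
    return (hi + 0.05) / (lo + 0.05)

def fix_contrast(fg, bg, target=4.5, step=0.02):
    """Darken whichever colour is darker, in HLS space, keeping its hue fixed."""
    while contrast_ratio(fg, bg) < target:
        darken_fg = relative_luminance(fg) < relative_luminance(bg)
        h, l, s = colorsys.rgb_to_hls(*(fg if darken_fg else bg))
        if l <= 0.0:                      # cannot darken further in this toy version
            break
        adjusted = colorsys.hls_to_rgb(h, max(0.0, l - step), s)
        if darken_fg:
            fg = adjusted
        else:
            bg = adjusted
    return fg, bg

# Example: a mid-grey on white fails 4.5:1 and gets darkened until it passes.
print(fix_contrast((0.6, 0.6, 0.6), (1.0, 1.0, 1.0)))
```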

Try it!

Alt Text of Poster Images

The poster has a rainbow-like background to attract attention, along with logos for Oslo Metropolitan University and the ACM, and a QR code with a link to the online tool (also available from the paper).

The poster has one illustration showing a colour wheel with two complementary colours (cyan and red) and shows that colour contrast adjustments are performed by adjusting the intensity and not the hue of the colour, which is kept fixed.

ACM DL

Transcript of Audio File

Title: A Demonstration of Molder: An Accessible Design Tool for Tactile Maps

Authors: Lei Shi, Yuhang Zhao, Elizabeth Kupferstein, Shiri Azenkot from Cornell University and Cornell Tech.

Printed tactile materials like tactile maps are important learning materials for people with visual impairments. However, designing tactile materials relies on highly complex, vision-reliant 3D modeling software that is difficult to learn, so the task is often inaccessible to inexperienced users and even more so to people with visual impairments. Molder presents a design paradigm that can be used both by inexperienced designers and by people with visual impairments.

This poster has seven figures describing the Molder software. Four of the figures show the process of how to use Molder to design a tactile map. First, a user needs to generate a digital draft model from a website, which is accessible to screen readers. Then, he must print the draft model and accessories. To modify the draft model, he uses his finger to touch different parts of the model. The Molder application senses his gesture and performs modifications accordingly. Finally, he prints the modified model. The final three figures on the poster show some sample functions of Molder. For example, a user can add different tactile patterns, braille labels, and interactive labels on a draft model. In addition, using Molder the user can also resize the model.

Alt Text of Poster Images

Figure 1 - 4 show a sample design process using Molder.

Figure 1. This figure shows the user interface of the map creation website. The website has a large yellow banner that says “Map Creation” followed by instructions to enter nominatim of a desired area to generate a draft map.

Figure 2. This figure shows a 3D printed shallow draft model of a map with six buildings that is placed in printed accessories.

Figure 3. This figure shows a designer using Molder. She is selecting the Dot function by touching the corresponding piece on the Tangible Widget with her finger. An iPad on a stand tracks the designer’s index finger as she chooses the function.

Figure 4. This figure shows a final model of the map with six buildings, which has a variety of tactile patterns (a braille label, lined groove patterns, and random dot patterns) and a bigger size, based on the designer's editing with Molder.

Figure 5 - 7 show some sample functions.

Figure 5. This figure demonstrates the function of adding tactile patterns. There are three representations of the same building shown in yellow on a dark grey background. Each building has a different tactile pattern represented in light grey. The tactile patterns from left to right are: Icon, Line, and Dot.

Figure 6. This figure presents the function of adding a braille label. An iPad screen running Molder displays a draft model with eight buildings labeled in different colors. One of the buildings has a braille label saying “Hotel.”

Figure 7. This figure shows the function of resizing a model. The figure shows a white 3D printed shallow draft model of a map with 8 buildings that is placed in the Physical Rulers. The Indicator is positioned above and to the left of the top ruler. A small blue square is overlaid on top of the draft model with its right side aligned with the right edge of the Indicator, indicating the target size the user is adjusting to. A red line labels the right end of the indicator. The target size is smaller than the current model size.

ACM DL

Transcript of Audio File

We created iCETA, an inclusive interactive system for math learning suitable for young children. iCETA was co-designed with young children with visual impairments and their educators. It combines tangible interaction with haptic and auditory feedback.

It consists of a set of blocks detected by a camera pointed towards the working area. Blocks vary in size, texture, braille and color. Children use blocks representing numbers from 1 to 5 to solve additive composition tasks as part of the game Logarin.

iCETA has been piloted in schools and was welcomed by both children and educators. It provides a playful and rich multi-sensorial environment for children with different visual abilities to learn math.

Alt Text of Poster Images

Figure 1 represents our device, iCETA. This figure shows a PC with a mirror over the camera to detect the blocks when they are located above the keyboard, in the "working area". Blocks numbered from 1 to 5 are located in a "storage box" next to the PC, and headphones are connected to the PC so the student can receive auditory feedback. Children used the blocks to solve additive composition tasks.

Figure 2 shows a boy with low vision sitting on a chair while playing with the blocks to solve the additive composition tasks by using the system iCETA. The iCETA setup is located on the table.

Figure 3 shows a blind girl sitting on a chair while playing with the blocks to solve the additive composition tasks by using the system iCETA. The iCETA setup is located on the table.

ACM DL

Transcript of Audio File

Non-Visual Beats: Redesigning the Groove Pizza is a prototype web application designed to make the creation of drum loops easier and more fun for blind and low-vision individuals. It was developed by Willie Payne and Alex Xu, advised by Professor Amy Hurst and Professor Alex Ruthmann, all from New York University.

Modern music technology has drastically expanded opportunities for people to compose, improvise, and perform music on their own terms from songwriters to producers to live coders. As music software/hardware becomes increasingly widespread and sophisticated, designers, engineers, researchers, and educators must ensure it is accessible to a wide range of users with varying abilities. Everyone should have the opportunity to express themselves in a way that is personally meaningful.

The Groove Pizza is a free, web-based, drum-sequencer developed by the NYU MusEDLab. Thousands of daily users reflecting a wide range of age and musical training use it to program simple beats. Originally released as part of an educational initiative called Math Science Music, the Groove Pizza is called a “pizza” because its circular representation of time visualizes rhythm patterns with geometric shapes. The Groove Pizza was not developed with accessibility as a consideration and presents a significant challenge to blind and visually impaired users. It only supports mouse input, uses low-contrast color palettes, and its GUI-driven implementation is a black box to screen readers.

We rebuilt the Groove Pizza from scratch prioritizing the abilities of people with low vision or no vision. Our prototype uses Tone.js for audio output including beats and sonification, p5.js for the user interface and animations, and p5Speech for text-to-speech. This demo presents only our first steps in this work. Moving forward, we formed a partnership with the Filomen M. D'Agostino Greenberg (FMDG) Music School, a storied community music school based in Manhattan made up of blind and visually impaired musicians, assuring that stakeholders will directly take part in future iterations.

Alt Text of Poster Images

The poster accompanying our demo is titled Non-Visual Beats: Redesigning the Groove Pizza. The authors are William Payne, Alex Yixuan Xu, Amy Hurst, and S. Alex Ruthmann. All authors are from NYU and logos for the following NYU organizations are included at the bottom: Ability Project, Music Experience Design Lab (MusEDLab), Music and Audio Research Laboratory (MARL), and Integrated Digital Media (IDM). A URL to access the demo is included: http://nyumusedlab.github.io/Accessible-Groove-Pizza/

The poster contains three main sections described here: an overview of the work, an image of the Groove Pizza, and two examples of the Groove Pizza with rhythmic patterns programmed on top of it.

The overview section holds three headers, "Background", "Problem", and "Implementation" included below:

1. Background: Music technology has opened up countless avenues for people to express themselves on their own terms. Designers, engineers, and educators must ensure software and hardware is accessible to users who range in ability.

2: Problem: The Groove Pizza is a widely used, free drum app developed by the NYU MusEDLab. It is inaccessible to blind and visually impaired users supporting only mouse input, using low-contrast colors, and presenting a black box to screen readers.

3. Implementation: We rebuilt the Groove Pizza to support non-visual use. While this is an early prototype, we have partnered with a Community Music School for people with vision loss ensuring stakeholders are involved in future iterations.

The center of the poster is a large image of the Groove Pizza. It consists of an inner circle containing the text "16 Bit" indicating the drum audio files the app is currently using. Three outer rings around the circle refer to individual drums (e.g. hi-hat, snare drum, kick drum) and contain sixteen nodes that can be toggled on or off. The Groove Pizza functions like a clock such that as a cursor repeatedly circles around the Groove Pizza, it passes by nodes that have been toggled on causing drums to play. Outside of the Groove Pizza image are text bubbles containing instructions that come directly from the app itself.

Near the bottom of the poster are two example drum patterns. The left-most example, titled "Lab Groove", is asymmetrical, containing an inner triangle and an outer line. The right-most example, titled "Billie Jean", is a recreation of the drum beat found in that song. It holds a symmetrical groove pattern containing a cross, which indicates an alternating kick drum and snare pattern, overlaid by an octagon indicating repeating hi-hat eighth notes.
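
The clock-like behaviour described above is essentially a step sequencer; a minimal Python stand-in (not the app's Tone.js code, and with a made-up pattern) looks like this:

```python
# Toy 16-step sequencer: a cursor steps around the rings and any node that has
# been toggled on triggers its drum as the cursor passes.
import time

STEPS = 16
rings = {                      # True = node toggled on (hypothetical pattern)
    "kick":  [i % 4 == 0 for i in range(STEPS)],
    "snare": [i % 8 == 4 for i in range(STEPS)],
    "hihat": [i % 2 == 0 for i in range(STEPS)],
}

def play(drum):
    """Stand-in for audio playback / sonification."""
    print(drum)

def run(bpm=100, bars=1):
    step_seconds = 60.0 / bpm / 4          # 16 steps per 4/4 bar
    for step in range(STEPS * bars):
        for drum, nodes in rings.items():
            if nodes[step % STEPS]:
                play(drum)
        time.sleep(step_seconds)

run()
```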

ACM DL

Transcript of Audio File

GestureCalc is an eyes-free, target-free calculator app for touch screens.

A traditional calculator app makes itself accessible to blind and low vision users by employing a screen reader to help the user find targets in the app's spatial layout. GestureCalc has a target-free interface: the gestures can be performed at any location on the screen.

Each character code consists of 1 to 2 simple gestures. The app reads out the input as it is entered.

We conducted a study with 8 screen reader users, and found that they were 40% faster and made 52% fewer erroneous calculations with GestureCalc than with a traditional touch screen calculator.

We recommend more exploration of target-free interfaces in the future.

Please visit demo #65 to try GestureCalc for yourself! Instructions for the character codes are available in Braille and in print.

Alt Text of Poster Images

Figure: GestureCalc

Closeup of an iPhone running the GestureCalc app and a user's hands. The right hand is touching the screen with index, middle, and ring fingers in a row. White circles appear on the screen where the fingers touch. The phone is in portrait orientation and the top of the screen displays in small white text "Input: 12+3" and "Result: 15.0". The left hand holds the phone.

Figure: ClassicCalc

Closeup of an iPhone running the baseline ClassicCalc app and a user's hands. The bottom three quarters of the screen shows a keypad with digits and operators. The right hand index finger is extended and touching the 5 button while the left hand holds the phone. The top of the screen shows "12+3" in the first line and "15.0" in the second line.

Figure: Speed Graph

Line graph showing that GestureCalc had a higher average Characters Per Second in every session than ClassicCalc had in any session and the effect of Session on Characters Per Second was different for ClassicCalc versus GestureCalc. Graph has Session (1, 2, 3) on the x-axis and Characters Per Second (0.3 to 1.0) on the y-axis. There is a line for ClassicCalc sloping slightly upward and a line for GestureCalc that is higher and sloping upward more steeply. The lines do not cross. Both lines slope upward more steeply between sessions 1 and 2 than between sessions 2 and 3.

Figure: Error Graph

Line graph showing that ClassicCalc had a higher average number of erroneous calculations in every session than GestureCalc had in any session. Graph has Session (1, 2, 3) on the x-axis and Number of Erroneous Calculations (0 to 35) on the y-axis. There is a line for ClassicCalc and a line for GestureCalc that is higher. The lines do not cross. Both lines are lower in the middle than on either end.

ACM DL

Transcript of Audio File

Demo title: RoboGraphics: Using Mobile Robots to Create Dynamic Tactile Graphics

Presented by: Darren Guinness, Annika Muelhbradt, Daniel Szafir, and Shaun Kane from the University of Colorado Boulder

In this demo, we present a new low-cost method for representing dynamic tactile graphics using off-the-shelf Ozobots, fabricated cardboard tactile overlays, and a touch screen. We present several tactile applications including tangible bar charts, stories, interactive diagrams and more. Our user study demonstrated that RoboGraphics can help users examine accessible data graphics, stories, and interactive applications. Please come visit our demo to find out more, or access the demo submission at bit.ly/robographicsdemo or the full paper at bit.ly/robographics

Alt Text of Poster Images

Figure 1 A:

Two robots with their cardboard tactile silhouettes in front of each bot. (left) is the hare, and (right) is the tortoise.

Figure 1 B:

The touch display used in the study, and the half inch cardboard border around the display which contains the tactile overlays.

Figure 1 C:

The tactile overlay of the charts application. The overlay features 6 buttons along the left side of the display used to toggle between datasets. In the center, there is a large graph window which features 4 semi-circular cut-outs at the top and bottom of the display, used to indicate where the robots are initially placed. On both sides of the graph window there is a y-axis featuring 1-10 markers. Each unit marker is represented by a rectangular cut in the cardboard; odd markers are slightly larger than even ones, and the fifth marker is even larger to denote the halfway point on the chart.

Figure 1 D:

Top down view of a tactile chart using 4 robots as tactile landmarks representing the height of the bar graph presented.

Figure 2 A:

Top down view of the cow digestive system application (the cow is pictured from the side). The image shows the cow rechewing its food during rumination. In this application the robot acts as food going through the cow’s body. The tactile overlay features the outline of a cow from the side, and contains a button which moves the food from one digestive organ to the next.

Figure 2 B:

Top down view of the tortoise and the hare story. The hare is currently sleeping under a tree, while the tortoise passes. The tactile overlay features one horizontal track for the tortoise and one for the hare, and features small landmarks along the course at the top of the tactile overlay. In addition there are three buttons on the lower left side to control the story playback. In this application one robot is equipped with a tactile hat in the shape of the hare, while the other has a tactile hat in the shape of a tortoise. The character’s motion is synced to the audio narration, and a user can touch the characters to feel the story play out.

Figure 2 C:

Top down view of the haptic clock using the inner robot to represent the minute hand and the outer robot to represent the hour hand. The robots are currently displaying the time 9:27. The tactile overlay features a circular window surrounded by the 12 hour rectangular markings along the outer edge of the circle.

Figure 2 D:

Top-down view of the robots forming the jumbo braille letter "Y" on the display. The braille assistant application displays a 6-dot jumbo braille cell whose dots are represented by the robots. The tactile overlay features a large window with 3 semicircular cutouts on the left and right sides, and 3 buttons on the bottom left side to move through different braille characters. The robots move from the edge of the display to the center to indicate an active braille dot.

ACM DL

Transcript of Audio File

We present the demo "GoDonnie: A robot programming language to improve orientation and mobility skills in people who are visually impaired". The authors are Juliana Damasio, Márcia Campos, Alexandre Amory, and Rafael Bordini. Márcia Campos works at Inedi College, while the other authors are from PUC-RS, both in Brazil.

Many robot programming environments are inaccessible to students who are visually impaired because they are based on graphical interfaces and do not respect accessibility criteria. Thus, we have developed GoDonnie, a Logo-based language used to simulate a robot's behavior in a virtual environment. This simulation is described to the user through audible messages. GoDonnie has commands for the robot to move (FW and BW) and rotate (TR and TL), and specific commands for end-users to explore the virtual environment such as SCAN, COLOR, POSITION, and STATE. It also has selection, repeat, procedure, and assignment commands that are common to general-purpose programming languages.
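To give a feel for the command set described above, here is a minimal Python sketch that simulates a robot responding to GoDonnie-style movement and rotation commands and reporting its position; the grid model, class names, and spoken-style messages are assumptions for illustration and do not reflect GoDonnie's actual syntax or implementation.

```python
# Illustrative simulation of GoDonnie-style commands (FW, BW, TR, TL, POSITION).
# The grid world and messages are assumptions, not the GoDonnie interpreter.

HEADINGS = ["north", "east", "south", "west"]
MOVES = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

class SimulatedRobot:
    def __init__(self):
        self.x, self.y = 0, 0
        self.heading = 0  # index into HEADINGS; starts facing north

    def fw(self, steps):
        """FW: move forward a number of steps and announce the new position."""
        dx, dy = MOVES[HEADINGS[self.heading]]
        self.x += dx * steps
        self.y += dy * steps
        print(f"Moved forward {steps}; now at ({self.x}, {self.y}).")

    def bw(self, steps):
        """BW: move backward a number of steps."""
        dx, dy = MOVES[HEADINGS[self.heading]]
        self.x -= dx * steps
        self.y -= dy * steps
        print(f"Moved backward {steps}; now at ({self.x}, {self.y}).")

    def tr(self, degrees=90):
        """TR: turn right (clockwise) in 90-degree increments."""
        self.heading = (self.heading + degrees // 90) % 4
        print(f"Turned right; now facing {HEADINGS[self.heading]}.")

    def tl(self, degrees=90):
        """TL: turn left (counter-clockwise) in 90-degree increments."""
        self.heading = (self.heading - degrees // 90) % 4
        print(f"Turned left; now facing {HEADINGS[self.heading]}.")

    def position(self):
        """POSITION: report the robot's current coordinates and heading."""
        print(f"Position ({self.x}, {self.y}), facing {HEADINGS[self.heading]}.")

# A short "program" in the spirit of the commands listed in the transcript.
robot = SimulatedRobot()
robot.fw(3)
robot.tr()
robot.fw(2)
robot.position()
```

In GoDonnie itself, such messages would be spoken aloud so that a user who is visually impaired can track the robot's movements in the virtual environment.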

We evaluated GoDonnie with two users who are visually impaired. They performed a set of programming activities with GoDonnie and a tactile map produced in EVA material. Both participants correctly reproduced the virtual environment on the tactile map with the help of GoDonnie. The experiments demonstrate that GoDonnie helps users understand the space and the relationships between objects.

Alt Text of Poster Images

The poster has 3 images.

Figure 1: Two screenshots labelled (a) and (b) show the Donnie programming environment, which includes the editing panel where GoDonnie code is typed and a depiction of a virtual environment. Image (a), on the left-hand side, shows GoDonnie's editor with the following commands: FW, TR, and COLOR. Image (b), on the right-hand side, shows a virtual environment representing a living room with the Donnie robot and five objects: a blue sofa, a blue armchair, and a red table with two green chairs. Donnie is in the lower-left corner, the armchair is near the wall to the left of Donnie, the sofa is near the wall in front of the robot, and the table and chairs are near the wall to the right of Donnie. The scanning that Donnie makes to locate the objects is indicated in light blue, starting at Donnie's location and going towards the objects.

Figure 2: Two photos labelled (a) and (b) show the physical environment constructed by participant 1 with the tactile map representing the virtual environment. Image (a), on the left-hand side, is a photo of the yellow checkered EVA tactile map with the objects that stand for Donnie and the armchair, sofa, table, and chairs positioned by participant 1 after interacting with Donnie's virtual environment. Image (b), on the right-hand side, shows the virtual environment with the described objects. A comparison between the figures shows that participant 1 was able to position objects correctly.

Figure 3: Two photos labelled (a) and (b) show the physical environment constructed by participant 2 with the tactile map representing the virtual environment. Image (a), on the left-hand side, is a photo of the yellow checkered EVA tactile map with the objects that simulated Donnie and the armchair, sofa, table, and chairs positioned by participant 2 after interacting with Donnie's virtual environment. Image (b), on the right-hand side, shows the virtual environment with the described objects. A comparison between the figures shows that participant 2 was also able to position objects correctly.

Demo-19 GraVVITAS 2.0: A Framework For Digital Accessible Content Provision (ACM DL)
Cagatay Goncu and Kim Marriott

ACM DL

Transcript of Audio File

Jido: A Conversational Tactile Map for Visually Impaired People. Jido is a tactile map that aims at improving the orientation and mobility of visually impaired people through a touch-aware tactile map and a conversational agent on a mobile device. Users can verbally interact with the agent and request information on points of interest, receive step-by-step instructions to facilitate navigation, and get feedback when using the tactile map.

In a feasibility study, participants stressed the importance of being able to get guidance and confirmation when exploring and looking for points of interest on the map. Some participants expressed that receiving audio feedback and confirmation boosted their confidence in understanding the map layout and reaching points of interest on their own.

This work presents a preliminary study; we will use the feedback received to improve Jido's usability and make it available to the public.

ACM DL

Transcript of Audio File

Block-based programming environments are generally inaccessible to children with visual impairments. We designed two touchscreen interfaces for the Blockly library for use with a screen reader. The first interface is our spatial design. In this design, descriptions of block regions are read aloud by the screen reader when the user selects blocks. The second interface is our hierarchical list design. Here, a navigable list that mirrors the workspace is placed on the side of the screen. This allows the user to fully interact with Blockly-based environments purely through this list. We are still improving these designs and preparing to test them.
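As a rough sketch of the hierarchical list idea, the following Python example flattens a nested block program into a list of entries that a screen reader could announce together with their nesting level; the Block class and labels are generic stand-ins, not Blockly's data model or either of the actual interface implementations.

```python
# Hedged sketch: mirroring a nested block program as a navigable list.
# The Block class is a generic stand-in, not Blockly's own data model.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Block:
    label: str
    children: List["Block"] = field(default_factory=list)

def to_list(block: Block, depth: int = 0) -> List[Tuple[int, str]]:
    """Flatten a block tree into (depth, description) entries for a list view."""
    entries = [(depth, block.label)]
    for child in block.children:
        entries.extend(to_list(child, depth + 1))
    return entries

program = Block("repeat 3 times", [
    Block("move forward"),
    Block("if path is blocked", [Block("turn left")]),
])

for depth, label in to_list(program):
    # A screen reader could announce the nesting level along with the label.
    print(f"Level {depth}: {label}")
```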

ACM DL

Transcript of Audio File

We present the demo of Supporting Older Adults in Using Complex User Interfaces with Augmented Reality, authored by Junhan Kong, Anhong Guo, and Jeffrey P. Bigham from Carnegie Mellon University.

Using complex interfaces has been shown to be challenging for older adults. Existing tutorial systems such as instruction manuals can be cumbersome, and sometimes difficult to use. To solve this problem, we present a system to support older adults in using complex user interfaces by providing step-by-step AR visual guidance and voice feedback.

Instead of having to explore an unfamiliar interface through trial and error, users can simply select the task to perform from pre-generated action sequences for the interface, such as copying a document on a printer, and then follow the visual and voice guidance to complete it. Using Apple ARKit, our system detects the interface state in the user's camera view, infers the user's progress in the selected action sequence, displays corresponding visual indicators on the phone screen, and provides voice feedback to guide the user in performing the actions. In the demo, we present an example of how the system guides a user to complete a copying task on a printer through AR visual indicators and voice feedback.
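The following Python sketch illustrates, in simplified form, the detect-update-guide loop described above, stepping through the copying action sequence from the poster; the function names and detection stub are hypothetical placeholders, not the authors' ARKit-based implementation.

```python
# Hedged sketch of step-by-step guidance over a pre-generated action sequence.
# State names follow the copying example; detect_state() is a stand-in for the
# camera-based interface-state detection.

COPY_SEQUENCE = [
    # (state the camera should see, instruction for the next action)
    ("Home", "Press the OK button"),
    ("Copy Info", "Open the top cover of the printer"),
    ("Cover Opened", "Place the document to copy"),
    ("Document Placed", "Press Start"),
    ("Copying", "Wait for copying to finish"),
    ("Copy Complete", "Press Home"),
    ("Done", None),
]

def detect_state() -> str:
    """Stand-in for detecting the interface state in the camera view."""
    raise NotImplementedError("replace with real state detection")

def guide(sequence) -> None:
    """Walk the user through the action sequence, one detected state at a time."""
    step = 0
    while step < len(sequence):
        expected_state, instruction = sequence[step]
        if detect_state() == expected_state:
            if instruction is None:
                print("Task complete.")   # final state reached
                return
            # In the real system this would draw an AR indicator and speak aloud.
            print(f"Detected '{expected_state}'. {instruction}.")
            step += 1
        # Otherwise keep watching the camera until the expected state appears.
```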

We ran preliminary user testing with the system, and are currently working on making state detection more accurate and improving the visual indicators and voice feedback to better guide the users.

Alt Text of Poster Images

Figure 1: The system repeatedly runs three major steps: detecting the current state of the interface from the phone camera, updating the position in the action sequence, and providing AR visual guidance and voice feedback.

Figure 2: Screenshots of example visual guidance to complete the task of copying a document on a printer. In the first screenshot, the app displays a circle around the OK button to press with text display on the screen saying “Now press the OK button”; in the second screenshot, the app displays an arrow on the opening side of the top cover of the printer, with text display on the screen saying “Wonderful! Now open top cover of printer”; in the third screenshot, the app displays a 3D plane with text on the side facing the scanning area, indicating the document to be copied, with text display on the screen saying “Nice! Now place document to copy”.

Figure 3: Diagram of an example action sequence to complete the task of copying a document on a printer. The states go from "Home", to "Copy Info", then "Cover Opened" and "Document Placed", then "Copying" and "Copy Complete", to finally "Done". The actions that trigger the state transitions are "Press OK", "Open Top Cover", "Place Document", "Press Start", and "Press Home".

ACM DL

Transcript of Audio File

AccessMap is a city-scale, highly interactive web map for pedestrians. AccessMap highlights pedestrian-specific environmental features and automatically computes personalized trip plans, and may therefore be suitable for people with limited mobility who need to avoid potential barriers such as steep paths or raised curbs in unfamiliar places. AccessMap emphasizes pedestrian pathways and their attributes as "first-class" members of the transportation network, allowing for fine-grained decision-making that accounts for a wide diversity of pedestrian needs and preferences. At the time of writing, AccessMap supports pedestrian pathways including sidewalks and street crossings, curb interfaces such as curb ramps, flush curbs, and raised curbs, marked or unmarked street crossings, and pathway steepness. In addition to this applied work, we conducted a preliminary user study in which users consistently rated AccessMap as useful and personally relevant, particularly compared to alternative maps that target pedestrians.
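To make the idea of personalized trip planning concrete, here is a simplified Python sketch of how per-user incline limits and curb-ramp requirements could exclude or penalize pedestrian path segments during routing; the field names, weights, and cost formula are illustrative assumptions, not AccessMap's actual routing model.

```python
# Hedged sketch of personalized pedestrian routing costs.
# Field names, weights, and thresholds are assumptions for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PathSegment:
    length_m: float          # segment length in meters
    incline: float           # signed grade in the direction of travel (+ uphill)
    is_crossing: bool        # True if this segment is a street crossing
    has_curb_ramp: bool      # True if curb ramps are present at the crossing

@dataclass
class PedestrianProfile:
    max_uphill: float = 0.10        # e.g. 10% maximum uphill grade
    max_downhill: float = 0.10      # e.g. 10% maximum downhill grade
    require_curb_ramps: bool = False

def segment_cost(seg: PathSegment, profile: PedestrianProfile) -> Optional[float]:
    """Return a traversal cost for the segment, or None if it is unusable."""
    if seg.incline > profile.max_uphill or -seg.incline > profile.max_downhill:
        return None  # too steep for this user's settings
    if seg.is_crossing and profile.require_curb_ramps and not seg.has_curb_ramp:
        return None  # crossing lacks the curb ramps the user requires
    # Penalize steeper segments so gentler alternatives are preferred.
    steepness_penalty = 1.0 + 5.0 * abs(seg.incline)
    return seg.length_m * steepness_penalty

# Example: a 40 m sidewalk at 9% uphill grade with a profile capped at 8%.
steep = PathSegment(length_m=40, incline=0.09, is_crossing=False, has_curb_ramp=True)
print(segment_cost(steep, PedestrianProfile(max_uphill=0.08)))  # None: excluded
```

A shortest-path search over costs like these would produce the kind of detours shown in the mobile screenshots described below, where tightening the uphill limit reroutes the trip around a steep sidewalk.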

Alt Text of Poster Images

Figure 1 (top right): Three screenshots of the AccessMap website, showing maps of three municipalities: Seattle, Bellingham, and Mount Vernon, all in Washington State. The maps show greens, yellows, and reds where sidewalks exist for each city, corresponding to steepness. Each shows a variety of sidewalk existence and steepness. There is a black line (convex hull) drawn around the regions for which data exists for each municipality.

Figure 2 (middle right): Three screenshots of the AccessMap website from mobile devices. Each screenshot shows the same overhead map view of approximately one block in Seattle, Washington, along with an automatically planned route from a point on the bottom left to a point on the bottom right of the map view. The leftmost screenshot shows personal map settings of 10% uphill and downhill maximum inclines and no requirement for curb ramps, resulting in a route that travels on the North side of a nearby street. The middle screenshot has an uphill setting of 8%, downhill of 10%, and no requirement for curb ramps, resulting in a route that travels far North around the block to avoid a sidewalk that's greater than 8% grade. That sidewalk has changed in appearance and is now a dashed red line rather than filled in with a color gradient. The right screenshot has 10% uphill and downhill settings, but the user has indicated a requirement for curb ramps, with a route that travels along sidewalks on the South side of a nearby street, making use of crossing locations known to have curb ramps.

Table 1 (bottom right): This table reflects a preliminary user study where pedestrians were asked to rank the usefulness of several pedestrian-targeted maps. The mean rating is presented to give a sense of the average score given, while the standard deviation of ratings is presented to give a sense for disagreement between participants. The mean ratings are 2.0 for a barriers-only map that calls out sidewalk problems only, 3.9 for an "assets-only" map that displays municipal assets over the map, 2.2 for Google Maps, 4.6 for AccessMap without automatic route finding, and 4.9 for AccessMap with automatic route finding. The standard deviations were 0.71 for the barriers-only map, 0.55 for the "municipal assets only" map, 0.84 for Google Maps, 0.55 for AccessMap without automatic route finding, and 0.22 for AccessMap with automatic route finding.

ACM DL

Transcript of Audio File

Our poster is titled Expanding Blocks4All with Functions and Variables, and is co-authored by undergraduate student researchers Jacqueline Shao Yi Ong, Nana Adwoa O. Amoah, Alison E. Garrett-Engele, Mariella Irene Page, and Katherine R. McCarthy, and their faculty supervisor Lauren Milne. All authors are affiliated with Macalester College. In our poster, we discuss the changes we have made to Blocks4All, a blocks-based programming environment originally created by Milne for the Apple iPad. It is designed for children with visual impairments, who can use it to build programs that control the Wonder Workshop robots Dash and Dot. The changes made to Blocks4All include improving the accessibility of the user interface, allowing users to modify robot actions and to create functions that can be called in their program, and adding more robot commands. In the future, we hope to thoroughly evaluate Blocks4All’s visual and motor accessibility with the aid of experts, and to continue expanding upon the application. In this poster, we included six figures. The first four figures compare the appearance of Blocks4All’s main menu and main workspace before and after our changes to the application, and the last two demonstrate how users can create and add variables to their program by tapping on the block that represents a variable, setting its value on a separate screen, and then in the main workspace, tapping again to place it in their program.

ACM DL

Transcript of Audio File

The problem we are addressing is that of individuals who have difficulty using computers, or who cannot use them at all, due to disability, literacy, digital literacy, age, or many other causes.

We created an open-source tool that extends the operating system and makes it possible for a person to automatically take their settings to any other computer. Thus, once they had set up one computer, all of the other computers they use could automatically be set up the same way.

However, this was not sufficient. Many of the other computers did not have the assistive technologies (AT) the user needed. So, we developed Installation on Demand, which causes the assistive technologies they need to be downloaded and installed whenever their needs and preferences call for it.

This also did not meet the need, in that many individuals did not know what settings were even possible, nor how to set them up. So we created the QuickStrip, which allows individuals to easily discover new features and also to easily apply them on new computers.

We added a Save and a Capture button that automatically capture all of the settings of the computer, the operating system, and the user's AT.
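As a rough illustration of the capture-once, apply-anywhere idea, the Python sketch below stores a user's settings as a portable profile and applies it on another machine, noting any assistive technologies that would need to be installed on demand; the profile format, setting names, and helper functions are hypothetical, not the tool's actual data model or APIs.

```python
# Hedged sketch of portable needs-and-preferences: capture settings on one
# computer, then apply them (installing missing AT on demand) on another.
# The profile keys and install/apply helpers are illustrative assumptions.

import json

def capture_profile() -> dict:
    """Capture OS and AT settings into a portable profile (illustrative values)."""
    return {
        "os_settings": {"text_scale": 1.5, "high_contrast": True},
        "assistive_technologies": {
            "screen_reader_x": {"speech_rate": 320, "verbosity": "high"},
        },
    }

def apply_profile(profile: dict, installed_at: set) -> None:
    """Apply a profile on the current computer, noting any AT to install first."""
    for name, settings in profile["assistive_technologies"].items():
        if name not in installed_at:
            print(f"Installation on demand: would download and install {name}")
        print(f"Configuring {name} with {settings}")
    print(f"Applying OS settings: {profile['os_settings']}")

profile = capture_profile()
print(json.dumps(profile, indent=2))          # what "Save"/"Capture" would store
apply_profile(profile, installed_at=set())    # e.g. on a library or work computer
```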

Now an individual can sit down to a computer at a library, or school, or work, and have that computer set up for them. And when they start a new job, or an internship, their computer can be set up the first hour of the first day, rather than weeks or sometimes months later.

ACM DL

Transcript of Audio File

Primary school educational tools, including educational applications used for improving children's literacy skills, often use graphical content as a means to engage students, which makes them inaccessible to children with visual impairments or blindness.

To address this issue, we present WordMelodies, an inclusive, cross-platform mobile application to support children in the acquisition of basic literacy skills.

App analysis, design and evaluation were guided by three domain experts with a participatory approach, resulting in an accessible and entertaining system.

Limits in the cross-platform engine, which cause inconsistencies in the order in which UI elements are encountered during sequential exploration with the screen reader, will be addressed in future work by implementing native components.

Alt Text of Poster Images
The poster contains 7 areas:

1:

Word Melodies

- Didactic game supporting acquisition of basic literacy skills

- Accessible to children with visual impairments

- Prototype implementing 8 types of exercises

2:

Contributions

- Participatory approach involving three domain experts

- Definition of design criteria

- Implementation with cross-platform developing technique

3:

The Design Challenge

- Inclusiveness: The app should be easy to learn and use for children with and without visual impairments.

- Entertaining: Besides allowing the user to practice literacy skills, the app should also be entertaining.

- Independence: The app should be usable by all users without requiring support from other people.

- Consistency: Key interaction elements should always be in the same area, close to the screen borders.

- Beyond tap: The app should use and teach common interaction gestures to children (Drag & Drop).

- Scalable: It should be possible to add new exercises and content with limited development effort.

IMG1: "Complete the word" game screen with lion audio-icon on the left, the word to complete "l_on" is above, while three draggable letters "i", "e", "o" are under. Done, back and menu buttons are on the bottom.

IMG2: "Select the rhyme" game screen with the word "sing" above, and selectable words "beam", "dish", "king" are under. Done, back and menu buttons are on the bottom.

IMG3: "Reorder the days of the week" game screen with three empty fields above, and three draggable days "friday", "wednesday", "thursday" are under. Done, back and menu buttons are on the bottom.

IMG4: "Complete the word" game screen with dog audio-icon on the left, the word to complete "d_g" is above, while three draggable letters "o", "a", "e" are under. Done, back and menu buttons are on the bottom.

4:

The technical challenge

- App developed with ReactNative, a cross-platform developing tool

- WordMelodies runs:

-- on iOS and Android

-- on smartphones and tablets

- Additional issues need to be addressed to make the app accessible on both platforms:

-- one accessibility problem still remains: tab order on iOS

5 (Huge):

Try the WordMelodies Demo With Us

6:

Future work

- Include children and parents in the design process

- Develop a native component to fix the tab order problem

- Develop new exercises and a tool to allow teachers to create new content

- Publish the app

7 (bottom right corner):

QR code to access the paper

Demo-47 AfricaSign - A Crowd-sourcing Platform for the documentation of STEM Vocabulary in African Sign Languages (ACM DL)
Abdelhadi Soudi, Kristof Van Laerhoven and Elmostafa Bou-Souf

 
