Accepted Submissions for Student Research Competition

Graduate Level

ACM DL

Transcript of Audio File

Title: BrailleBlocks: Smart Toys for Cross-Ability Collaboration

Authors: Vinitha Gadiraju and Shaun K. Kane

Institution: University of Colorado, Boulder

Contact: vinitha.gadiraju@colorado.edu

Braille literacy has been found to increase chances of employment and improve literacy skills, such as reading comprehension and proficiency. However, the complexity and quantity of content can make learning Braille tedious and discouraging. Current Braille education tools lack interactivity, creating a potentially isolating learning experience. To help create an engaging and collaborative experience while learning Braille, we developed BrailleBlocks.

BrailleBlocks is a smart, tangible block set that supports interactive games by tracking blocks and providing feedback. Each block in the set represents a single Braille cell. Users can place pegs in the block's holes to create Braille letters. The system also includes a web application with educational games that a sighted collaborator can play to support a visually impaired child and learn Braille alongside them. The system uses computer vision to identify the locations of the pegs and determine the Braille letter each block represents. Once the system has determined which letters the blocks spell, it displays those letters on the interface.
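
As a rough illustration of that last step (this is not code from the BrailleBlocks system), the Python sketch below maps a detected set of raised pegs in one block to a letter using the standard Grade 1 Braille dot numbering, with dots 1 to 3 down the left column and 4 to 6 down the right; the function and variable names are hypothetical.

    # Standard Grade 1 Braille: map the set of raised dots (1-6) to a letter.
    # Only the letters needed to spell "love" are listed here.
    BRAILLE_LETTERS = {
        frozenset({1, 2, 3}): "l",
        frozenset({1, 3, 5}): "o",
        frozenset({1, 2, 3, 6}): "v",
        frozenset({1, 5}): "e",
    }

    def decode_block(detected_pegs):
        """detected_pegs: set of dot numbers (1-6) where the vision step found a peg."""
        return BRAILLE_LETTERS.get(frozenset(detected_pegs), "?")

    # Example: pegs detected in holes 1, 3 and 5 decode to the letter "o".
    print(decode_block({1, 3, 5}))  # -> o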

We conducted user studies to test BrailleBlocks with sighted parents and visually impaired children. Parents were excited to interactively guide their child through hand-over-hand touch, often used the system as a spelling practice tool, and lauded it as a tool to help their child work on reasoning and deduction skills. Children also used the system creatively for building and storytelling. In the future, I will develop BrailleBlocks into a permanent installation in specialized classrooms by building computation into the blocks to reduce the physical setup and by moving the games to a voice interface so that visually impaired students can play together without relying on a sighted person.

Thank you!

Alt Text of Poster Images

Figure 1.

An overhead view of the BrailleBlocks system. There are eight green blocks in a white cardboard frame. Each block has six holes in it, so that each block represents a single Braille cell. There is a cup filled with red pegs to the left. The blocks have pegs in them that spell out "lovelace" in Braille.

Figure 2.

A screenshot of the BrailleBlocks interface. This is the animal selection chart from the Animal Name game. There are 4 animals side by side to choose from: a dog, elephant, duck, and sheep. There are cartoons of each animal in the chart. Above the chart are instructions on what to do on this page of the interface. Below the chart is a button that takes you to the home page.

Figure 3.

A close-up of a dialog box from the BrailleBlocks interface. The dialog box says "ELEPHANT". Below the dialog box is an enlarged visual representation of the word "elephant" in Grade 1 Braille.

Figure 4.

This is an image of a participant family during the user study. The mother sits to the right with the laptop interface in front of her. She is watching her son (to the left) and daughter (in the middle) place red pegs into the green BrailleBlocks.

SRC-102 Designing a low-cost finger wearable audio tactile device (ACM DL)
Arshad Nasser, City University of Hong Kong

ACM DL

Transcript of Audio File

Intelligent Decision Support for Stroke Rehabilitation Assessment

Rehabilitation assessment is important to determine personalized intervention for post-stroke survivors.

However, assessment relies on therapists' subjective knowledge and is infrequently executed due to the limited availability of therapists.

This research presents an interactive multimodal approach that augments a data-driven model with therapists' knowledge for personalized rehabilitation assessment.

This approach first automatically identifies salient features to predict the quality of motion and generates user-specific analyses to provide insights into a patient's performance.

After reviewing the user-specific analysis, therapists can provide feature-based feedback on each patient to iteratively update an assessment model.

For the evaluation, a dataset of three upper-limb exercises was collected from 15 post-stroke and 11 healthy subjects and annotated by two therapists.

Five therapists participated and provided nine feature-based feedback items on each patient.

By accommodating therapists' feedback, a generic model is tuned into a personalized model, improving agreement with therapists' evaluations from 0.83 to 0.91 average F1-score.

Specifically, the interactive model achieves significantly better agreement with therapists' evaluations than the average agreement level among therapists and than non-interactive models.

This result demonstrates the importance of an interactive approach that incorporates therapists' feedback for more accurate and personalized rehabilitation assessment.

In the future, we plan to evaluate the effectiveness of user-specific analyses in providing insights into patients' performance for therapists and corrective feedback for post-stroke subjects.
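
The poster's flow diagram (described in the alt text below) integrates the updated rule-based knowledge model with the data-driven model using a weighted average. The following minimal Python sketch shows one plausible form of such a combination; the normalized score ranges, the weight value, and all names are assumptions for illustration, not the authors' implementation.

    def combined_assessment(data_driven_score, rule_based_score, weight=0.5):
        """Blend the data-driven prediction with the therapist-derived rule score.

        Both scores are assumed to be normalized quality-of-motion estimates in [0, 1];
        `weight` controls how much the rule-based knowledge model contributes.
        """
        return weight * rule_based_score + (1 - weight) * data_driven_score

    # Hypothetical example: the data-driven model predicts 0.8 and the updated
    # rule-based knowledge model scores 0.6.
    print(combined_assessment(0.8, 0.6, weight=0.4))  # -> 0.72 (up to rounding)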

Alt Text of Poster Images

The figure in the middle section of the poster shows the flow diagram and the visualization interface of the presented approach.

The presented approach first automatically identifies the salient features to derive a data-driven prediction model for assessment.

The visualization interface presents the user-specific analysis with the predicted quality of motion, three salient feature values on the unaffected and affected sides, and buttons to collect the therapist's feature-based feedback.

After reviewing the user-specific analysis, a therapist can provide feature-based feedback to update the rule-based Knowledge Model.

This updated rule-based Knowledge Model and the data-driven model are integrated into an interactive multimodal model using a weighted average.

The figure at the bottom left shows an example of a feature-based rule from a therapist, which checks whether the wrist joint is placed higher than the subject's top-of-spine joint.

Three figures at the bottom right describe the results of the presented approach.

The first figure describes the validation of feature selection: the presented approach can increase correct predictions while reducing the number of features used for assessment.

The second figure shows that the presented interactive approach increases its performance while accommodating the therapists' feature-based feedback over 10 iterations.

The third figure summarizes how the presented interactive approach outperformed the non-interactive approach and the therapists' agreement level.

ACM DL

Transcript of Audio File

This research addresses the social participation and independent skills development challenges of young adults with intellectual disability by supporting them in learning life skills of interest using videos.

The research explored co-designing an accessible app that could be used by many young adults with intellectual disability in a support centre, to enable them to access videos about their life skills interests. As all participants used social media, we investigated their participation competencies (i.e., shared proficiencies at performing specific activities) through workshops and leveraged them in designing the app.

The findings show that participants associated functional familiarity with the app's social media-inspired design icons and features, which fostered usability and mediated engagement between participants and their networks.

Based on these findings, this research reflects on a competency-based design approach that leverages the existing technology competencies in designing with people with intellectual disabilities.

Alt Text of Poster Images

Alternate text of figures in the findings section.

Fig 1: Competency-based Approach

The first figure of the findings section shows a flow diagram of the competency-based design approach the study proposes for designing with people with intellectual disability. It shows the three stages of the approach. Stage one reveals the competencies of people with intellectual disability, stage two designs the technology leveraging these competencies, and stage three enhances competencies by developing abilities further. These new competencies can then be reflected upon for designing other technologies.

Fig 2: Search and Sharing Interface of the HowToApp

The first figure shows the search interface of the app. It shows the search bar, where participants can type in their search queries, and a microphone, where those who have challenges with spelling can record their search query. An example of a search query, "make pan", is displayed in the search bar, along with alternate search suggestions including make pancakes, make pancake mix, make pancakes without egg, make paneer and make panna cotta. The top of the figure shows the social media icons (magnifying glass, two heads, a star and a bell) depicting searching, friends, favourites/playlist and notifications respectively.

The second figure shows a video still with a pancake and displays the share and save buttons at the bottom, so that users can either save the video if they like it or share it with their friends. The top of the figure shows the social media icons (magnifying glass, two heads, a star and a bell) depicting searching, friends, favourites/playlist and notifications respectively.

Fig 3: TechShops sessions

This figure shows two separate pictures, each with about nine participants engaged on tablets around a table. The figure illustrates the context of the workshop-based approach, referred to as TechShops, that the study employed to engage participants on Facebook and YouTube to identify their competencies to be leveraged in the app's design.

ACM DL

Transcript of Audio File

User Perspectives on Robotics for Post-stroke Hand Rehabilitation

Besides clinical effectiveness, user perception of rehabilitative devices is an important criterion to consider when designing such devices, especially when rehabilitation is performed at home by patients without the monitoring of physiotherapists.

There is little research on the user experience of patients and caregivers, who are the primary users of home-based devices. One such device for post-stroke hand rehabilitation is evaluated in this study.

This is a preliminary study aiming to elicit user perceptions of a post-stroke rehabilitation device. We specifically studied the perceptions of the primary user (patient) and the secondary user (carer), as well as a design expert in rehabilitative devices, to give an overall picture of how users perceive the device.

We combined in-depth interviews with participants and quantitative measures from the Credibility and Expectancy Questionnaire and the Intrinsic Motivation Inventory to assess users' subjective opinions. The results help us understand which factors each stakeholder prioritizes in rehabilitation robotics and revealed split attitudes across stakeholders.

Overall, the results show that including stroke patients and caregivers in subjective evaluation is crucial, as it might yield views that differ from those of domain experts. This is especially important for portable or home-based devices, in which user motivation and perceived effectiveness play a large role in the rehabilitation journey. In addition, given the limited research on user evaluation of rehabilitative devices, especially in the hand motor function domain, this study provides a reference point for future research in the subjective evaluation of rehabilitative devices.

Alt Text of Poster Images

The first two figures show the exoskeleton robotic device evaluated in this study.

Figure 1 shows HandyRehab, the exoskeleton robotic device used in this study.

Figure 2 shows a Myo armband for EMG detection.

The third figure shows the four passive movements we asked participants to perform with the robotic device. The movements are: ‘open and close’, ‘cylindrical grip and close’, ‘three-finger grip and close’ and ‘two-finger grip and close’.

Figures 4 to 6 are located under the thematic analysis highlights section.

Figure 4 shows a stroke patient’s open hand without assistance: the patient would feel embarrassed in social situations whenever a handshake was initiated due to his right hand “always clenching into a fist”.

Figure 5 shows the stroke patient's open hand assisted by an object, that is a table.

Figure 6 shows the stroke patient's open hand assisted by the exoskeleton.

Figure 7 shows Credibility/Expectancy Questionnaire Results.

Figure 8 shows one of the movements with the robotic device, the cylindrical grip, in use.

Figure 9 shows one of the movements with the robotic device, the three-finger grip, in use.

Figure 10 shows the results of the IMI completed by the stroke patient after using the device.

Undergraduate Level

ACM DL

Transcript of Audio File

A Wearable Input Mechanism for Blind Users of Computers Based on Mental Mapping of Thumb-to-Phalanx Distances

The problem:

Accuracy of typing has a major impact on the quality of work produced in work environments. Past studies of accessible typing mechanisms have focused on typing speed at the expense of accuracy and production cost. We propose an affordable mechanism that uses memory of thumb-to-phalanx distances to provide accurate input.

The solution we propose:

Our solution consists of a glove to be worn by the user, which has buttons over every phalanx bone of the wearer's hand. The uncovered thumb on each hand would be used to tap each button, and the corresponding letter would appear on the screen.

Our study involved giving 5 participants our mechanism for 5 sessions of up to 20 minutes each. In each session, participants had to type longer words, and we measured Character Error Rate (CER), Backspaces per Tap (BPT), and Entry Rate. We found that our mechanism achieves rates comparable with existing systems, and results improved with each session.
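
For reference, these metrics are commonly computed as in the Python sketch below, which uses the standard text-entry definitions (character error rate as edit distance over reference length, entry rate in words per minute with five characters per word); the exact formulas used in this study may differ.

    def edit_distance(a, b):
        """Minimum number of insertions, deletions and substitutions turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def character_error_rate(typed, reference):
        """CER% = edit distance between typed and reference text over reference length."""
        return 100.0 * edit_distance(typed, reference) / len(reference)

    def entry_rate_wpm(typed, seconds):
        """Entry rate in words per minute, counting one word as five characters."""
        return (len(typed) / 5.0) / (seconds / 60.0)

    # Hypothetical example: the reference word is "together", the participant typed "togehter".
    print(character_error_rate("togehter", "together"))  # 2 errors over 8 characters -> 25.0
    print(entry_rate_wpm("togehter", 26))                # 8 characters in 26 seconds -> about 3.7 WPM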

Alt Text of Poster Images

Image showing backspace button.

Image showing person using their thumb.

Figure 1: Image showing phalanx bones on the human hand

Figure 2: Glove mechanism layout with key values of each button labelled

Figure 3: Images of blind participants of different age groups using the mechanism

Table 1: Session numbers and example of corresponding reference text used as test cases in each session.

Figure 4: Comparison of our mechanism's CER%, BPT and Entry Rate with those of the Braillesketch device, the device described in Gaines' paper and the BrailleType device.

Figure 5: Speedometer showing the average entry rate of our mechanism at 3.7 WPM and the maximum at 6.0 WPM.

Figure 6: Line Graph showing values of CER% at each session of the participant, indicating a downward trend overall.

Figure 7: Line Graph showing values of BPT at each session of the participant, indicating a downward trend overall.

ACM DL

Transcript of Audio File

Our research poster is called "Insights for More Usable VR for People with Amblyopia".

This poster is by Ocean Hurd, an undergraduate computer engineering student, and Sri Kurniawan, a professor of computational media, both from UC Santa Cruz.

We created a virtual reality (VR) video game and tested how comfortable it was for people with Amblyopia, a neurological eye disorder, to use.

Our research found some recurring themes from our user base of people with Amblyopia that suggest VR games should be created with certain design features in mind to keep them usable, particularly when it comes to using VR video games as therapy for Amblyopia, which has become so popular.

Alt Text of Poster Images

Figure 1 is composed of two screenshots of the virtual reality video game we created.

The two images are shown one on top of the other to compare a more heavily visually crowded area with a less visually crowded one.

The images both showcase gems and take place inside a cave.

However, the lower image has around 50 small, semi-transparent, white floating particles.

The upper image contains around 90 of these puff-like particles.

Figure 2 is composed of two models of the setup of the virtual reality play space.

The first model shows the sizes of the play space, virtual axe and spawning range for targets to hit with the axe.

The play space, being the space in which users can move around in the real world, is 3 meters by 2.5 meters.

The virtual axe is around 1 meter long, and the gems' spawning range (the gems being the targets to hit with the axe) is in front of the virtual play space.

The spawning range is 3 meters long.

The second model shows the location of the speed up, slow down and exit game options with relation to the play space.

The exit option is dead center on the opposite side of the play space from the gem spawning range.

The speed up and slow down options are also on this side of the play space, but in the left and right corners of the rectangle rather than the center.

Figure 3 is a screenshot of the virtual reality video game we created.

It is the instructional screen, and one can see gems shooting out of the ground with the instructions corresponding to each gem hovering above it.

The overall virtual setting is a cave.

ACM DL

Transcript of Audio File

Exploring Haptic Colour Identification Aids by Richard Nguyen and Connor Geddes, from the University of Guelph.

People with Colour Vision Deficiency (CVD) have difficulty with day-to-day colour identification tasks, such as determining the ripeness of fruit or cooking meat, while also facing long-term challenges such as career restrictions and work impediments.

The current gold standard of haptic colour identification aids is a wrist-based aid, HaptiColor. HaptiColor and other haptic colour identification aids exist but are slow to learn. To address this, we developed two new colour identification aids: ColourWrist and ColourVest.

ColourWrist was developed to improve upon the shortcomings of HaptiColor: training time and colour selection diversity. To achieve this, we used four solenoids to create sixteen unique patterns, compared to the twelve HaptiColor can convey.
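
As a small aside on why four solenoids yield sixteen patterns, the Python sketch below simply enumerates every on/off combination; the example colour assignments are purely hypothetical and are not ColourWrist's actual mapping.

    from itertools import product

    # Each pattern is a tuple of four on/off states, one per solenoid.
    patterns = list(product((0, 1), repeat=4))
    print(len(patterns))  # -> 16 unique patterns from four solenoids

    # Hypothetical assignment of a few patterns to colour labels, for illustration only.
    example_colours = {
        (1, 0, 0, 0): "red",
        (0, 1, 0, 0): "yellow",
        (0, 0, 1, 0): "green",
        (0, 0, 0, 1): "blue",
    }
    print(example_colours[(0, 1, 0, 0)])  # -> yellow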

ColourVest was developed around the idea of conveying colour information about an area rather than from a single point, while also seeking to improve upon the same shortcomings. ColourVest is equipped with a ten-by-eight array of vibrotactile motors to convey spatial colour information. Users select a colour from a labelled keypad, and that colour is then "highlighted".

Our participants were pleased with both ColourWrist's and ColourVest's assistance in identifying colours. They reported that, while they could see themselves using the aids in the future, there was still room for improvement. Based on participants' feedback, we will improve both devices and study the real-world effectiveness of our colour identification aids.

Alt Text of Poster Images

Figure 1: An image showing what bananas may look like to a person with unimpaired vision

Figure 2: An image showing what bananas may look like to a person with protanopia

Figure 3: An image showing what bananas may look like to a person with deuteranopia

Figure 4: A visualization of HaptiColor

Figure 5: A visualization describing the vibration patterns of HaptiColor

Figure 6: An image of various unique pencil crayons

Figure 7: An image of various unique pencil crayons when given the task to find the yellow pencil, using ColourPopper

Figure 8: An image of ColourWrist with some solenoids labelled

Figure 9: An image of a person wearing ColourVest

Figure 10: An image of the colour labelled keypad used by ColourVest

Figure 11: A screenshot of the colour identification task used for evaluating the devices

Figure 12: A table containing the participants' colour vision deficiency type and severity. Participant 1, inconclusive, none-mild. Participant 2, deutan, strong.

Figure 13: A table of the results of both participants using three methods: unaided, ColourWrist, and ColourVest. Participant 1, unaided 85%, ColourWrist 100%, ColourVest 98.8%. Participant 2, unaided 61.3%, ColourWrist 98.8%, ColourVest 96.0%.