Accepted Doctoral Consortium Participants

Transcript of Audio File

Poster title: Our perspectives matter. Using Universal Design Goals to Guide Technology Design in the Global South.

My name is Lynn Kirabo. My research focuses on using Universal Design goals to design technologies that improve the mobility of persons with disabilities in the Global South. Using Universal Design goals like Cultural Appropriateness, Social Integration, Personalization, and Understanding, I hope to understand the needs that exist among this user group and implement mobile solutions that address those needs.

I have completed the first exploratory phase of this work using ethnographic methods and interviews in Kampala, Uganda and Kigali, Rwanda. The next two phases of this work are: 1) A collaboration phase, in which we hope to partner with two stakeholders, a disability advocacy group and a smart transportation agency, in order to implement findings from our exploratory phase. 2) A recommendation phase, in which we will document specific design strategies and changes in interaction techniques and behaviors observed among users.

This work will contribute to bridging the gap between Universal Design Goals and interaction design for the Global South. This is especially important today due to increasing smartphone proliferation and the push towards the development of smart city policies in various cities across the Global South.

Alt Text of Poster Images

Poster title: Our perspectives matter. Using Universal Design Goals to Guide Technology Design in the Global South.

Beneath the title is a row of three images:

The first image, on the left, shows pedestrians walking on a dusty road with 2 motorists riding alongside them.

The second image, in the middle, shows riders wearing orange helmets using the motorcycle form of transportation. This form of transportation is popular across cities in East Africa.

The third image is an aerial view of the taxi park in Kampala, Uganda. It depicts taxis (14-seater minivans) lining up in multiple directions.

The third section, titled Motivation, contains 3 images of transportation options available in Kampala and Kigali.

The first is an image of a bicycle with 2 riders; the second rider is seated on a perch welded on top of the wheel.

The second image is a motorcycle.

The third image depicts a ride-share application: a hand holding a cell phone with an image of a car on it.

Transcript of Audio File

Hi, my name is Akashdeep Bansal. I am a PhD student at IIT Delhi. My dissertation topic is "Comprehensive Accessibility of Equations by Visually Impaired".

As we know, persons with visual impairment can access digital content using screen reading software. Still, accessing equations is challenging due to their non-linear visual representation. We proposed to improve the accessibility of equations by associating a cognitive complexity metric with each equation and then using this metric to modify their audio rendering.

We plan to start by finding the relationship between various cognitive complexity parameters (such as number of attempts, mistakes, and time taken) and structural complexity parameters (such as height, weight, and total number of nodes) through a series of user studies. This will give us a way to compute the cognitive complexity of an equation using structural complexity parameters.

Apart from the structural complexity parameters and their associated weights, the complexity of an equation will also depend on various user characteristics (such as IQ, education, etc.) and the semantics of the equation based on its context in the document. Further, we propose to improve the delivery of complex equations using appropriate variable substitution.
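To make the research paradigm concrete, here is a minimal sketch, in Python, of how a cognitive complexity score might be computed from an equation's expression tree and then thresholded into simple versus complex (as in the methodology figures below). The features, weights, and threshold here are hypothetical placeholders; determining the real weights is precisely what the planned user studies are for.

```python
# Hypothetical sketch: weighted structural complexity of an equation tree.
# Feature set, weights, and threshold are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in an equation's expression tree, e.g. for (a + b) / c."""
    label: str
    children: list = field(default_factory=list)

def height(n):
    return 1 + max((height(c) for c in n.children), default=0)

def count_nodes(n):
    return 1 + sum(count_nodes(c) for c in n.children)

def max_children(n):
    return max([len(n.children)] + [max_children(c) for c in n.children])

def structural_complexity(root, weights):
    """Weighted sum of structural parameters (stand-in for the learned model)."""
    features = {"height": height(root),
                "num_nodes": count_nodes(root),
                "max_children": max_children(root)}
    return sum(weights[k] * v for k, v in features.items())

# Example: (a + b) / c
tree = Node("/", [Node("+", [Node("a"), Node("b")]), Node("c")])
score = structural_complexity(tree, {"height": 1.0, "num_nodes": 0.5, "max_children": 0.8})
THRESHOLD = 6.0  # placeholder; to be calibrated against user-study data
print("complex" if score > THRESHOLD else "simple", round(score, 2))
```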

Alt Text of Poster Images

Aim and Background

1. A boy looking at an equation and thinking "How will the screen reader read it?" and "How complicated will it be for a person with dyslexia to comprehend?"

Research Paradigm

1. A block diagram showing Complexity parameters and associated weights, User characteristics, and Contextual semantics as input to the Equation cognitive complexity system. The output of this system is Delivery mechanism.

Methodology

1. A diagram showing various cognitive complexity parameters (such as time taken, thinking time, number of attempts, types of mistakes, and number of mistakes) and structural complexity parameters (such as height, weight, total number of nodes, max number of children of a node, and types of operators in the equation). An arrow connecting the two types of parameters shows that we are working on finding the relationship between them.

2. A 1-D graph with cognitive complexity on the axis. A threshold level is marked to represent classification of equations into simple and complex.

Transcript of Audio File

Privacy Concerns of the Visually Impaired with Camera-based Assistive Applications

Taslima Akter

The goal of this work is to understand the privacy concerns of people with visual impairments with camera-based assistive technologies. Nowadays they use both artificial and human intelligence-based assistive technologies in their daily lives to identify objects, read text, etc. With such camera-based assistive technologies, they usually capture and share a photo or video with either a human assistant or an AI. While getting information, it is possible to share sensitive content with the assistive technology. They may share the sensitive information either unintentionally (a background object) or intentionally (a foreground object). In this work, we focus on the unintentional sharing behavior of people with visual impairments with human-assisted technologies.

We conducted an online survey with 155 people with visual impairments and asked about their comfort level in sharing different information with different human assistants or audiences (friends, family members, volunteers). We observed selective disclosure practices based on the information and the audience. For example, people expressed higher concern about maintaining their impression with friends than with volunteers and family members. They were more concerned about capturing bystanders in the images than about capturing themselves. People also reported higher trust in Aira than in Be My Eyes because of its professionalism. However, despite having privacy concerns, people with visual impairments still prefer human assistants because of the reliability and accuracy of the information. Future technologies can consider making systems more humanized to make them more accessible for people with visual impairments.
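For readers curious how the poster's comfort-level graphs are typically produced, here is a minimal sketch, with entirely invented ratings, of computing a mean comfort score and a normal-approximation 95% confidence interval per audience group; it is not the study's data or analysis code.

```python
# Hypothetical sketch: per-audience mean comfort (1-5 Likert scale) with a
# normal-approximation 95% confidence interval. All numbers are invented.
from math import sqrt
from statistics import mean, stdev

ratings = {
    "friends":    [2, 3, 1, 2, 4, 2, 3, 1, 2, 2],
    "family":     [4, 3, 5, 4, 4, 3, 5, 4, 3, 4],
    "volunteers": [3, 4, 3, 5, 4, 4, 3, 4, 5, 4],
}

for audience, xs in ratings.items():
    m, half_width = mean(xs), 1.96 * stdev(xs) / sqrt(len(xs))
    print(f"{audience:>10}: {m:.2f} +/- {half_width:.2f} (95% CI)")
```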

Alt Text of Poster Images

Privacy Concerns of the Visually Impaired with Camera-based Assistive Applications

Taslima Akter

Image 1: Indiana University logo.

Cluster of images (2-4): Image 2: Screenshot of the app BeSpecular identifying objects.

Image 3: Screenshot of the app Aira recognizing people.

Image 4: Screenshot of the app Be My Eyes identifying objects.

Cluster of images (5-10): Top row: Images of a credit card, a medicine bottle, and a person wearing a scarf.

Bottom row: Images of popcorn with a picture of a baby in the background, a laptop screen with a reflection of a person, and a soda can with a person in the background.

Cluster of images (11-14): Top row: Graphs representing comfort level for different human assistants (friends, family, volunteers) with 95 percent confidence intervals, and comfort level for different information groups (personally identifiable information, impression management, laptop screen, general objects) with 95 percent confidence intervals.

Bottom row: Graphs representing comfort level for sharing information about people (bystander, self) with 95 percent confidence intervals, and the interaction between different objects and human assistants with 95 percent confidence intervals.

Images 15, 16: Icon of a man, icon of a woman.

Image 17: NSF logo.

Images 18, 19: Screenshot of inaccurate information given by Seeing AI, and screenshot of a lack of description given by Seeing AI.

Transcript of Audio File

Doctoral consortium poster.

Empowering people with dementia to share and socialize.

Author: Jia Min Dai.

Supervisor: Professor Karyn Moffatt.

School of Information Studies, McGill University.

My overarching research goal is to explore technologies to empower people with dementia in social sharing.

To date, I have completed the fieldwork and investigated the challenges people with dementia encounter when sharing stories and socializing in a small group, as well as the materials and prompts that are effective in supporting their storytelling.

My fieldwork is situated at Tales & Travels, a storytelling and social program in a local public library.

It invites people with dementia to explore a country every week.

Each session consists of a 1-hour story time, a 20-minute coffee break with featured snacks, and a 20-minute video time.

I conducted dyad interviews with people with dementia and their family caregivers, as well as individual interviews with caregivers and facilitators.

I also observed the sessions and took field notes.

I performed thematic analysis on interview transcripts and observation notes.

The fieldwork findings show that people with dementia are able to enjoy social sharing and contribute to the storytelling and discussions.

But a successful social sharing program requires proper framing and careful crafting.

I have identified the following best practices:

Exploring community settings,

Encouraging peer collaboration,

Choosing mature and intellectual materials and activities,

Furthering tangible interactions and multisensory experiences,

Adopting technologies carefully,

Facilitating properly and professionally.

My next steps are the design and evaluation of a new social sharing tool for people with dementia.

Alt Text of Poster Images

Two images on top are about the fieldwork at Tales & Travels.

The one on the left shows three participants with dementia sitting with one facilitator and one caregiver at a table. Each with a cup of coffee or tea. On the table, there are several books, printed maps, and printed pictures with descriptions at the back.

The one on the right shows a roomful of participants, facilitators, and caregivers watching videos projected onto the big screen at the front of the room.

Six images at the bottom right are about potential roles for technology.

These images show possible avenues to explore in design, including websites, mobile applications, augmented reality applications and glasses, smartphone holograms, 3D printing, and touch projectors.

Transcript of Audio File

This is a description for an ASSETS 2019 Doctoral Consortium poster for Investigating Accessibility in the Writing Process with Dyslexic Adults by Emily Q. Wang at Northwestern University.

In this poster, I'm presenting ideas for my ongoing work on designing assistive writing tools for people with dyslexia. Dyslexia is a learning disability or learning difference that impacts how the brain processes language. Dyslexic people are known for being great at brainstorming and analysis, but they have a tendency to have a lot of ideas going at once, wander on tangents, and experience working memory overload while they're trying to write. This means it can be an additional challenge to get their papers to be organized and flow well. Unfortunately, help with structuring arguments and improving the flow of a paper is not within the scope of status-quo assistive technology and accommodations, and the blank-slate designs of mainstream word processors like Microsoft Word and Google Docs lack features for helping users self-assess written argument structures. In order to address some of these challenges, I'm prototyping writing tools for visualizing & interacting with a document at different levels of granularity. Different levels of granularity here could refer to sentences, paragraphs, or the big-picture thesis concepts in a document. I'm currently developing these tools, engaging in co-design activities with dyslexic undergraduate and graduate students at my research institution, and figuring out how to evaluate my prototypes either in lab settings or in longer-term naturalistic deployment.
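As a toy illustration of the granularity idea (and only that; the actual prototypes are under development, and this is not their code), a big-picture view could be approximated by reducing a draft to one candidate topic sentence per paragraph:

```python
# Hypothetical sketch: collapse a draft into a "big-picture view" by taking
# each paragraph's first sentence as its candidate topic sentence. A real
# tool would let the writer highlight and label these components themselves.
import re

def big_picture_view(draft: str) -> list[str]:
    paragraphs = [p.strip() for p in draft.split("\n\n") if p.strip()]
    # Naive sentence split on ., !, ? followed by whitespace.
    return [re.split(r"(?<=[.!?])\s+", p)[0] for p in paragraphs]

draft = """Dyslexic writers are often strong brainstormers. They generate many ideas at once.

Mainstream word processors offer little structural support. Their pages are blank slates.

Visualizing a draft at several granularities could help. It keeps the thesis in view."""

for sentence in big_picture_view(draft):
    print("-", sentence)
```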

I'd love to hear your thoughts if you're interested or have any questions! My email is eqwang@u.northwestern.edu if you'd like to chat.

Alt Text of Poster Images

Poster title

Investigating Accessibility in the Writing Process with Dyslexic Adults

Author information

Emily Q. Wang

Northwestern University Inclusive Technology Lab

ASSETS 2019 Doctoral Consortium

Background - About Dyslexia and Writing Section

Image 1. "Dyslexia" in overlapping red and blue capital letters. This graphic is meant to give the impression of how some dyslexic users may see overlapping or moving letters when they read.

Image 2. Stock photo of a brain instead of a human head silhouette. This graphic is meant to convey that dyslexia is a learning disability based on neurological differences in how the brain processes language.

Image 3. Diagram with the letters "b" and "d" in boxes and bi-directional arrows in between them. This graphic is meant to convey the idea that dyslexic users may have a tendency to switch letters when they try to spell.

Image 4. Document with a speech bubble containing a graphic of many arrows. This graphic is meant to convey how the papers that dyslexic users write may be unorganized or have a messy writing style due to working memory overload when they are trying to write.

Background - Research Approach Section

Image 5. Flowchart that is meant to convey the ecosystem lens or approach. The flowchart contains three boxes, "Assistive technology" and "Office tools" and "Individual & group writing strategies."

Image 6. Circular diagram of the writing process that includes boxes for "Composing," "Information gathering," "Structuring arguments," and "Collaboration & feedback exchange."

Proposed Design

Image 7. First screen of proposed design. Consider an example user who already has a rough draft with all of their ideas written but wants some help improving its structure and flow. They import their rough draft into the tool and specify what type of paper it is (such as a persuasive essay), the page limit, and other details so the system can personalize the experience to the kind of paper they're trying to write.

Image 8. Second screen of proposed design. Then our example user can work in the paragraph view, which has the document to the left and a paragraph structure checklist to the right. In this view, the user can go through the checklist to highlight and label the topic sentences, evidence, interpretation, or other components that they should have for that type of essay.

Image 9. Third screen of proposed design. Our example user can then switch to the big-picture view. Instead of showing the whole document, this view has only the thesis and topic sentences on the screen. This gives the user a chance to focus on whether their ideas support their thesis and to make sure they're not making a circular or redundant argument. The example user can then iterate between the paragraph and big-picture views as they keep working through their paper.

DC-Bayor Co-designing technology with people with intellectual disabilities
Andrew Bayor, Queensland University of Technology, Australia
Transcript of Audio File

American Sign Language is a primary means of communication for over 500,000 people in the U.S., and many people who are Deaf or Hard of Hearing prefer to receive information in the form of American Sign Language. Unfortunately, few websites display content in American Sign Language; one challenge is that videos of human American Sign Language signers would be difficult to update and maintain when information on a website must change: the human would need to be re-recorded. We therefore investigate technology to automate the creation of animations of American Sign Language based on an easy-to-update script.

We used motion-capture data recorded from humans to train machine learning models to predict realistic timing parameters for American Sign Language animation, based on the sentence syntax and other features. 

I am investigating the following research questions:

1. Would adding pauses during American Sign Language animations improve the understandability of the message?

2. How much pause time do we need to add between signs, due to the syntactic phrase structure or sentence boundaries?

3. How does the speed of signing vary during sentences, and how can we produce animations with realistic timing?

I have built predictive models for determining where to insert prosodic breaks (pauses), adjusting the durations of these pauses, and adjusting the differential signing rate for American Sign Language animations.
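As a rough sketch of what one of these predictive models might look like, here is a minimal pause-duration regression; the features and training data are invented placeholders, standing in for the motion-capture corpus and the actual feature set.

```python
# Hypothetical sketch: regress pause duration (seconds) after a sign from
# simple syntactic features. Features and data are invented placeholders.
from sklearn.linear_model import LinearRegression

# One row per sign boundary:
# [at_phrase_boundary, at_sentence_boundary, signs_since_last_pause]
X = [[0, 0, 1], [1, 0, 4], [0, 0, 2], [1, 1, 6],
     [1, 0, 3], [0, 0, 1], [1, 1, 7], [1, 0, 5]]
y = [0.00, 0.18, 0.02, 0.41, 0.15, 0.01, 0.45, 0.22]  # observed pauses (s)

model = LinearRegression().fit(X, y)
print(model.predict([[1, 1, 5]]))  # predicted pause at a sentence boundary
```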

I am using two evaluation approaches:

1. A cross-validation study using human recordings, in which my model outperformed a state-of-the-art rule-based model.

2. A user study where American Sign Language native signers provided subjective feedback after viewing animations generated by our models.

My ultimate goal is to build software that can generate understandable American Sign Language animations of a virtual human signer automatically from an easy-to-update script.

Alt Text of Poster Images

Figure 1

The image illustrates the annotation software for the motion-capture data. The top-left corner shows three views (front, side, and zoomed) of the ASL signer. The bottom section of the image shows the different annotation tiers used by linguists to annotate the corpus.

Figure 2

Image of the Blender animation software with a skeleton character. This image illustrates the process of extracting the bone coordinates from the animation files of our motion corpus into a textual format that can be used for modeling.

Figure 3

An example of a script for a sign language sentence (labelled "a") with three phases of processing to insert pauses between some words (labelled "b"), to adjust the speed of individual words (labelled "c"), and to adjust the duration of the pauses (labelled "d"). The example transcript included in the image is: they make computer program it name #chess program play game chess use #SUPER computer. For each step of the process, the image displays the same sentence, illustrated with the words as individual rectangles of different width and with some amount of space between them. The width of the rectangle indicates word length, and the space between them indicates if there is a pause during the timeline between those two words.

Figure 4

Graph displaying the accuracy values for Pause Insertion, comparing two items: my model with 75% accuracy and the state-of-the-art rule-based approach with 63% accuracy. My model has a higher value, which indicates a better result.

Figure 5

Graph displaying the Root Mean Squared Error (RMSE) values for the two regression models: Differential Rate and Pause Duration. For Differential Rate, my model is 0.64, and the state-of-the-art rule-based approach is 0.84. For Pause Duration, my model is 5.31, and the rule-based approach is 6.23. In both cases, my model has a lower value, which indicates a better result.

Figure 6

Screenshot of a virtual human character performing ASL, with the following transcript shown: MANY PEOPLE THEY GO CAMPING FOREST VARIOUS STATES FIRST COLORADO SECOND WYOMING THIRD CALIFORNIA FOURTH WASHINGTON THEY SCARED WHY BLACK BEAR BROWN BEAR IF ATTACK DO THEY THINK SHOOT BUT SCIENTISTS UNIVERSITY ALASKA MAKE NEW CHEMICAL DEFENSE SPECIAL RED #PEPPER SPRAY AGAINST BEAR SHOO LAST YEAR RESEARCH EXPERIMENT SPRAY THERE RIFLE THERE COMPARE THERE STOP BEAR ATTACK #60 PERCENT SPRAY BETTER STOP #90 PERCENT ATTACK OTHER SCIENTISTS AFRICA MAKE SPRAY AGAINST INSECT READY WHEN NEXT YEAR

Figure 7

A chart illustrating the five steps of my system: 1) Video Corpus, 2) Data Pre-processing, 3) Building Models, 4) Synthesizing Animations, and 5) ASL Animation.

Transcript of Audio File

Affordable rapid 3D printing technologies have become key enablers of the Maker Movement by giving individuals the ability to create finished physical products. However, existing computer-aided design (CAD) tools that allow authoring and editing of 3D models are mostly visually reliant and limit access for people with blindness and visual impairment (BVI). In my thesis, I outline three areas of research that I'm conducting towards the goal of bridging the gap between blind and sighted makers.

The broad research questions I seek to answer are:

1. How can complex 3D information be effectively encoded through tactile representations?

2. What are the interaction techniques necessary to create and manipulate 3D models on tactile displays with limited resolution? and

3. How does access to 3D design and printing in the wild for BVI people change their self efficacy of making and their attitudes towards STEM?

So far, I have conducted interviews towards understanding accessibility challenges, and preliminary studies to understand how multimodal tactile representations can be leveraged to effectively communicate 3D information. And last, I've co-designed a 3D modelling workflow that allows blind and visually impaired users to ideate and design 3D models by programming and obtaining dynamic tactile feedback through a 2.5D shape display.
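To give a flavor of the "Render & Verify" step in this workflow, here is a minimal sketch of quantizing a model's height map onto a low-resolution pin grid like the 12x24 display described in the hardware section below. Only the grid resolution comes from the poster; the sample shape, pin travel, and sampling scheme are illustrative assumptions.

```python
# Hypothetical sketch: sample a height function onto a 12x24 pin grid,
# clamping to an assumed pin travel. Only the 12x24 resolution comes from
# the poster; everything else here is an illustrative placeholder.
ROWS, COLS = 12, 24
MAX_HEIGHT_MM = 20.0  # assumed maximum pin extension

def render_to_pins(height_fn, rows=ROWS, cols=COLS):
    """Sample height_fn over the unit square at the grid's pin positions."""
    return [[min(height_fn(c / (cols - 1), r / (rows - 1)), MAX_HEIGHT_MM)
             for c in range(cols)]
            for r in range(rows)]

# Example shape: a dome, like P4's "cube with a dome on top"
dome = lambda x, y: max(0.0, 15.0 * (1 - 4 * ((x - 0.5) ** 2 + (y - 0.5) ** 2)))
pins = render_to_pins(dome)
print(len(pins), len(pins[0]))  # 12 24
```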

Alt Text of Poster Images

Motivation:

1. An icon of a design drawing in a notebook and caption that reads "Lack of accessible authoring tools reduces agency, availability, and creativity."

2. An icon of a 3D cube and caption that reads "Increasing access to complex spatial information (e.g. 3D models, maps, etc)".

3. Increasing engagement in STEM.

Research Approach - Proposed Workflow:

1. An image of code written in OpenSCAD with a caption that reads "CODE: OpenSCAD is used to specify 3D model geometry through code."

2. An image of a user's hand touching a mug rendered on a 2.5D shape display with a caption that reads: "Render & Verify: Dynamic feedback on a 2.5D shape display is used for previews."

3. An image of a 3D printed cup and caption that reads: "3D Print: After iterative modifications, the model can be 3D printed."

Research Approach - Current Methods to Access 3D Information:

1. An image of a user exploring a 2D raised line drawing of a human cell and caption that reads: "tactile graphics (require training and not efficient)".

2. An image of a user holding a 3D printed cup and caption that reads: "3D models (slow and domain-knowledge)".

Research Approach - Evaluation:

1. An image showing 3D models that blind users created in an evaluation. From left to right: P1 created a "staircase stand" (15.42 minutes). P2 created a "tall glass cup" (18.78 minutes). P3 created a "truck with carriage" (21.09 minutes). P4 created "a cube with a dome on top" (12.84 minutes). P5 created "a cylinder cup" (8.01 minutes).

Research Approach - Hardware:

1. An image of the designed 2.5D shape display. It consists of a grid of 12x24 pins with motors at the base that actuate each pin. The image has a zoomed-in view of the motors coupled to the pins through a rotating leadscrew.

Transcript of Audio File

Title: Technology mediated nature interaction for blind and partially sighted people.

Author: Maryam Bandukda, PhD student at UCL Interaction Centre, London, UK.

Supervisors: Dr. Catherine Holloway and Professor Nadia Berthouze.

The goal of my research is to explore the use of technology in mediating nature interactions for blind and partially sighted people (BPSP). I have conducted interviews, focus groups, and an online survey to develop an in-depth understanding of how BPSP experience nature and to investigate the barriers that affect their engagement.

The main findings from the qualitative research were:

1. The sense of freedom and connection with nature was an important motivator; the experience of nature involved the sensory and affective impact of the sounds, smells, and touch.

2. As many people visited parks with family or friends, the social interaction and descriptions of the visual aspects of the environment provided an enriched experience.

3. Opportunities for technology are immense, as people were interested in exploring ways to get information about nature both near to them and further away. Technology has the potential to support BPSP in improving planning by learning about a space prior to visiting, and through ubiquitous interactions while in the environment.

I am currently planning dyadic interviews with BPSP and their companions to investigate how they collaboratively explore and engage with nature. Next steps will be the design and evaluation of a mobile app to support collaborative nature exploration.

Alt Text of Poster Images

Image 1: Nature connection - A blue bird sitting on a branch.

Image 2: Social interaction - A blind or partially sighted person being supported by a guide.

Image 3: Mobility - A blind or partially sighted person using a guide dog and a cane.

Image 4: Autonomy - A person standing on the peak of a mountain facing towards the valleys and mountains ahead.

Transcript of Audio File

Title: Exploring Accessible Designs to Support Independent Practice of Skills in Everyday Routines

Author: Varsha Koushik

Affiliation: University of Colorado Boulder

Contact: varsha.koushik@colorado.edu

Performing everyday routines independently can be challenging for people with cognitive disabilities due to memory deficits, attention deficits, challenges performing executive functions, and additional physical disabilities. However, in order to live independently, adults with cognitive disabilities are expected to perform everyday activities without supervision.

The current model of everyday routines for people with cognitive disabilities involves training by caregivers and using supports like digital assistants during routine execution. Caregivers demonstrate routines as part of the training, and they create supports like plans and worksheets to assist users in executing routines. A majority of users are less motivated and confident about performing routines independently, and they often deal with unexpected and exceptional situations while executing routines in the real world.

Practicing the skills involved in everyday routines can build confidence for people with cognitive disabilities in performing those routines more independently. I am exploring designs for creating accessible embodied games to support people with cognitive disabilities in practicing skills present in everyday routines, like decision making, iteration, ordered sequencing, exception handling, and communication, in a different domain. The system will also support increasing confidence and persistence in everyday activities by recording virtual accomplishments and translating them into real-world rewards. I plan to incorporate factors into the games, like designing with characters that can influence and impact users.

The poster includes a storyboard of an example scenario where a user is using an augmented reality game to learn about recycling a bottle. In the game, when the user asks the system where to recycle the bottle, the system matches the color of the bottle and the bin to help the user throw the bottle into the right bin.

Alt Text of Poster Images

Figure 1 displays an example diagram of the current support system for people with cognitive disabilities in performing everyday tasks. The first stage is routine demonstration, done by caregivers as part of the training. The second and final stage is routine execution by users with some assistance from planners, digital assistants, reminders, and schedulers. One stage that's missing and could help is routine practice.

Figure 2 displays a storyboard of an example scenario where a user is using an augmented reality game to learn about recycling a bottle. In the game, when the user asks the system where to recycle the bottle, the system matches the color of the bottle and the bin to help the user throw the bottle into the right bin.

DC-Tomasky Pawsibilities for Assistive Technology
Stephanie Tomasky, University of Texas, Austin
Transcript of Audio File

"Neurodiverse Socio-Technical Collaboration:

Supporting Sensory-Social-Emotional Styles of Autistic Adults"

My name is Annuska Zolyomi, and my research explores the dynamics of neurodiverse collaborators, meaning those who are autistic and non-autistic. This research examines opportunities for technology to increase the agency of autistic adults by taking into account their particular sensory, social, and emotional styles.

To engage with the autism community, I am using a framework called "Community-Based Participatory Research." My methodology is grounded design, which comprises contextual inquiry, co-design, and the appropriation of design artifacts.

An emerging contribution of this research is the theoretical concept that technologies aiming to be emotionally aware (such as artificial intelligence) also need to be sensory aware. My research will also contribute design guidelines and design domains for creating neurodiverse socio-technical communication tools.

Alt Text of Poster Images

Image 1: Information School, University of Washington logo

Image 2: Color wheel representing different aspects of autism: language, motor skills, perception, executive function, and sensory.

Cluster of images (3-7): Icon of two people sitting facing each other, surrounded by images of a bright light, eyes, audio speaker, and a fidget spinner.

Cluster of images (8-10): Icons representing grounded design: discover, brainstorm, and evaluate.

Image 11: Circular model with arrows, representing a conceptual model

Transcript of Audio File

Poster titled “Interactive Computational Tools for Assessing and Understanding Urban Accessibility at Scale”

My name is Manaswi Saha, and I am a PhD student in Computer Science and Engineering at the University of Washington. My research looks into the problem of understanding urban accessibility at scale. Urban accessibility is about the ease with which pedestrians, especially people with mobility disabilities, can walk or roll around cities to get to their destinations. My thesis is around building interactive tools that help develop a deeper understanding of urban accessibility, such that the tools can help (1) increase transparency and accountability from cities, (2) facilitate evidence-based decision making, and (3) foster civic engagement and advocacy. My thesis is divided into three parts: the first looks into the problem of scalably collecting accessibility data for sidewalks; the second looks into developing a deep understanding of the problem domain by talking to several stakeholder groups in addition to people with disabilities and their caregivers, namely elected officials, transportation departments, and accessibility advocates. Finally, I will look into building an interactive visual geo-analytics tool that will help answer questions around the current state of accessibility.
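As an illustration of what an accessibility score behind such a map might involve, here is a minimal sketch that aggregates severity-rated sidewalk labels into a 0-to-1 neighborhood score; the label types, weighting, and normalization are placeholder assumptions, not Project Sidewalk's actual scoring method.

```python
# Hypothetical sketch: aggregate severity-rated sidewalk labels (1 = minor,
# 5 = severe) into a neighborhood score in [0, 1]. The scoring scheme is an
# illustrative assumption, not Project Sidewalk's method.
def accessibility_score(labels, max_severity=5):
    """labels: list of (label_type, severity) tuples."""
    if not labels:
        return 1.0  # no reported problems (or no data yet)
    penalty = sum(sev / max_severity
                  for kind, sev in labels
                  if kind != "curb_ramp")  # curb ramps are assets, not problems
    return max(0.0, 1.0 - penalty / len(labels))

labels = [("curb_ramp", 1), ("surface_problem", 4), ("missing_curb_ramp", 5)]
print(round(accessibility_score(labels), 2))  # 0.4
```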

Alt Text of Poster Images

Figure 1. Headshot of Manaswi Saha

Figure 2. Screenshot of the Project Sidewalk interface. The image shows an intersection marked with a curb ramp label, with an open context menu showing severity rated as 1.

Figure 3. Screenshot of a prototype of an interactive visualization tool. The image shows a map of DC colored by accessibility score. The neighborhoods are colored on a scale from blues, indicating high accessibility, to greens and yellows, indicating poor accessibility.

Figure 4. Bottom banner containing logos of Project Sidewalk, Makeability Lab, DUB, and the Paul Allen School of Computer Science and Engineering.

Transcript of Audio File

My name is Stephanie Valencia, and my research is on agency in Augmentative and Alternative Communication, or AAC. Augmented communicators use AAC devices to speak. My research examines how their partners impact their agency and how we can design tools inspired by these partner interactions.

My work also aims to increase augmented communicators’ agency in the design process through participatory design workshops. These workshops will inform the design of the tools and also generate design guidelines on making participatory design accessible for augmented communicators.

Alt Text of Poster Images

Poster title: Agency in Augmentative and Alternative Communication

A poster for the Doctoral Consortium at Assets 2019 by Stephanie Valencia.

The first image shows a conversation between three people sitting at a table. The first person, located on the left, is the close conversation partner. Next to him we find the augmented communicator, and on the far right is the third party, who in this case is the researcher.

The section titled “Understanding Agency” is accompanied by a diagram showing how the cost of communication between the close conversation partner, the augmented communicator, and the third party is unequal and unbalanced. Arrows connecting the three participants vary in weight, showing that the effort, or cost, required to communicate along the pathways (arrow thickness) differs depending on who addresses whom. For example, if the augmented communicator wants to respond to a question posed by the third party, time-sensitive conversational constraints (e.g., replying fast while it is still relevant) can encourage communication along the lowest-cost paths (thinner arrows), encouraging communication via the close conversation partner rather than direct augmented-communicator-to-third-party communication.

The next section of the poster is titled “Increasing agency in conversation” and is illustrated with three circles with images inside. The first one, located on the left, shows a monitor with the word “device” under the image. The second one shows a person’s silhouette with the word “Partners”. The third circle, located on the right, contains the image of a house and a conversation balloon; this figure is labelled “social environment”.

The last section of the poster is titled “Increasing agency in the design process”. This is illustrated with an image that shows a meeting where four people are watching a woman pointing at some post-its on a wall as if they were in a brainstorming session.

Transcript of Audio File

Hello, this is Jared Duval, a PhD student in the Computational Media department at the University of California, Santa Cruz, advised by Sri Kurniawan and Katherine Isbister. My research is on approaches for creating therapy games. Therapy games have the potential to offer people with disabilities a cost-effective, personalized, data-driven, and connected context for otherwise tedious and repetitive therapy. The challenge is creating a motivating experience that translates into improved health outcomes. I explore creating 3 therapy games for various populations to identify best practices, unique insights, and suggestions for future therapy game creators. The first game is called SpokeIt, a speech therapy game for children with cleft speech.

The second game is called CirKus, a physical rehabilitation game for children with sensory-based motor disorder, and the third game is called Spell Casters, a multiplayer physical rehabilitation game for stroke survivors using virtual reality. All three games are on spectrums to identify the different approaches taken. For example, some approaches are game-first or therapy-first, game or play, single-player or multiplayer, symmetrical or asymmetrical, collaborative or competitive, and simultaneous or sequential.

Alt Text of Poster Images

The first picture: my logo that uses negative space to create a hand holding a game controller in the shape of a heart.

The second picture is a picture of me near my contact information.

In the games columns, the first picture is the SpokeIt logo, followed by in-game screenshots depicting the game art and speech targets.

In the games columns, the second picture is the CirKus logo, followed by in-game screenshots depicting a hybrid creature.

In the games columns, the last picture is the Spell Casters logo, followed by in-game screenshots depicting the game instructions and casting a spell using gestures.

Finally, the last section shows where in the spectrum of approaches each game lies using the game’s logo.