Photography is a visual way to capture a moment in time. Photos can be used for artistic expression and to remember significant events. Because taking, organizing, and sharing photos traditionally require visual information, people with no or limited sight often struggle with these activities. Previous work has made photo capturing without sight easier; however, little work has made photo browsing and sharing blind-accessible. My dissertation research aims to help blind persons independently take, organize, and share photos through user-centered development of a smartphone application that can be used without sight. The work begins with an investigation of blind persons' current practices in these activities, continues with a review of existing applications, and concludes with the design and long-term evaluation of the application.
The overarching needs that this smartphone application aims to meet are the following:
- Photo Taking: aiming, focusing, positioning, and framing; an easy way to get sighted help; an accessible device; improving photo quality.
- Photo Organizing and Editing: identifying what's in the picture; labeling pictures; manipulating pictures.
- Photo Sharing: an easy way to get sighted help; an accessible photo sharing method.
Here is a video of the proposed system, Phodio, a blind accessible iPhone app designed to help blind people take and organize their photos for easier retrieval and sharing:
This project, funded through an NSF CAREER award and the NSF GRFP, aims to facilitate independence for blind authors in producing documents that meet the presentation 'standards' expected by sighted readers. This includes gathering design guidelines for tools that help blind people format their documents independently by: 1) developing an impact-weighted taxonomy of common document presentation errors produced by blind authors; 2) exploring blind persons' mental models and strategies for learning and coping with document formatting, and how these models and strategies contribute to the success of independent document formatting and layout activities; and 3) iterating over the development and evaluation of prototype tools.
Here is a video overview of our project (i.e. our motivation, objectives, and methods):
Researchers: Morales, Kurniawan, Arteaga
Collaborators: Cottrell, Adams
We are working on developing and testing a new method of providing lessons on basic skills (e.g. adding money, reading numbers or letters, identifying US currency) to individuals with developmental disabilities (DD) of all ages. The goal is to make the process of taking the lessons more enjoyable and the process of administering the lessons more efficient.
Our methods include working in collaboration with Imagine!, a Colorado-based not-for-profit organization that provides support services to people with DD and cognitive disabilities, to gather system requirements and to develop and evaluate a prototype: an online application that runs primarily as a web app on the iPad and includes lessons to teach individuals with DD of all ages about numbers, letters, colors, and currency. To check out the prototype, please visit: Imagine! Online Review System.
Here is a video of a user testing the money lesson in our high-fidelity prototype of the online learning system:
Video produced by Imagine!
Researchers: Buschmann, Morales, Kurniawan. In collaboration with Imagine!
Speech Therapy Game for Children with Cleft Palate
We are developing a video game to aid young children undergoing speech therapy after cleft palate surgery. These children must first unlearn the compensations and omissions they use to get around syllables they cannot yet produce. Unlearning requires significant amounts of practice at home, and parents have extreme difficulty motivating children to practice. Working directly with children who are undergoing speech therapy, we are designing a simple game that uses a novel speech recognition engine to help motivate them to perform their therapy exercises. We aim to accelerate the rate of recovery and give therapists more tools to use with children. We also intend to advance research in educational games.
Here is a video overview of our prototype game "Speech Adventure":
Researchers: Rubin, Kurniawan. In collaboration with UC Davis Medical Center.
Virtual Hemiparesis Rehabilitation Game for Stroke Survivors
Stroke can leave survivors with some form of hemiparesis (weakness of one side of the body). Virtual rehabilitation in the past decade has shown higher success than traditional rehabilitation. It improves the patient's experience and can allow for clinical rehab without a clinician present. Research in non-virtual rehabilitation shows that constraint-induced movement therapy is very effective in treating hemiparesis. In this therapy, the patient's strong side is physically constrained using a mitt or hand splint on the non-affected limb, forcing the patient to use the weaker limb for daily activity.
The goal of this research is to evaluate the behavior of stroke survivors with hemiparesis in virtual rehabilitation where a constraint is induced virtually via varying incentives during gameplay.
Examining users' behavior during their virtual rehab sessions, we will look at a variety of elements. We will examine each user's game preference: each of the four mini-games uses a different upper-limb movement, and our hypothesis is that a user's preferred game may relate to their physical condition. We will also examine which side (weak or strong) users prefer to play with; we may be able to find a point at which the incentive to use the weak side is sufficient to reliably motivate users. We can also compare users' compliance in adhering to the constraint with currently published research. Finally, we will compare users' range-of-motion measurements before and after the study to assess the effectiveness of the system.
The next image shows four sample games. In the top-left game, the user moves their hand side to side to control a bucket that catches eggs falling from the sky. In the top-right game, the user moves their hand up and down to catch stars as they fly across the sky. In the bottom-left game, the user moves their forearm to control a bat as balls are thrown toward the screen. In the bottom-right game, the user controls an egg pan with their forearm to catch fried eggs as they fall.
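The side-to-side control scheme of the egg-catching game can be illustrated with a minimal sketch. This is purely illustrative: it assumes a tracker that reports the hand's horizontal position normalized to [0.0, 1.0], and all names and dimensions here are hypothetical, not taken from the actual system.

```python
# Illustrative sketch of the "egg catch" control mapping (not the
# actual game code). Assumes a motion tracker reporting hand x in [0, 1].

SCREEN_WIDTH = 800   # hypothetical screen width in pixels
BUCKET_WIDTH = 120   # hypothetical bucket width in pixels

def bucket_x(hand_x_normalized: float) -> float:
    """Map a normalized hand position to the bucket's left edge."""
    # Clamp so the bucket stays fully on screen.
    h = min(max(hand_x_normalized, 0.0), 1.0)
    return h * (SCREEN_WIDTH - BUCKET_WIDTH)

def caught(egg_x: float, hand_x_normalized: float) -> bool:
    """An egg is caught if it lands within the bucket's horizontal span."""
    left = bucket_x(hand_x_normalized)
    return left <= egg_x <= left + BUCKET_WIDTH
```

The other three mini-games follow the same pattern with a different tracked joint (hand height, forearm angle) driving the on-screen object, which is what lets each game target a different upper-limb movement.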
Researchers: Buschmann, Kurniawan. In collaboration with Cabrillo College's Stroke and Disability Learning Center.
Our project's goal was to test the feasibility of a novel vibrotactile guidance system, in the form of a belt, for helping blind walkers with wayfinding by enabling them to receive haptic directional instructions without negatively impacting their ability to listen and/or perceive the environment. We evaluated the belt interface in a controlled study with 10 blind individuals and compared it to an audio guidance system. The experiments were videotaped and the participants’ behaviors and comments were content analyzed. Completion times and deviations from ideal paths were also collected and statistically analyzed.
Here is a video overview of our project and clips from our user evaluation:
Researchers from UCSC: Kurniawan, Flores, Manduchi, Morales.
Researchers from Toyota ITC: Erich Martinson, Akin Sisbot
Brain-Training Software for Stroke Survivors
We investigated the feasibility of using web-based brain-training software to help stroke survivors and, more generally, individuals with cognitive impairments. For this purpose we observed and interviewed stroke survivors to better understand the technologies they find helpful, as well as to examine the effectiveness and limitations of such technologies. From this, we compiled an informal set of design guidelines for rehabilitation software aimed at helping stroke survivors improve their cognitive skills. To validate the guidelines and see whether new ones emerged, we developed a low-fidelity prototype of web-based brain-training software and tested it with five participants to check its feasibility as a cognitive rehabilitation solution.
Here is a video overview of the prototype for our proposed system:
Researchers: Morales, Smith, Kurniawan. In collaboration with Cabrillo College's Stroke and Disability Learning Center.
LASSIE (Live-in ASSistance for Independence and Eldercare)
The LASSIE robot is designed as a low-cost monitoring assisted-living robot that can be steered over the Internet by a family member to remotely monitor and help assess the well-being of an older relative living alone. A commercially available iRobot Create platform was used as the starting point, though the system could potentially run on another robotic base. Our system (i) takes advantage of commercially available systems to reduce development cost and effort, (ii) acts as a video and audio communication tool between older persons and their family members or caregivers, and (iii) can analyze the video feed to detect heart rate and breathing rate. The proposed system could be integrated with in-home monitoring sensors (Smart Grid, Professor Patrick Mantey), which could trigger an alert to family members that an out-of-the-ordinary event is occurring; our robot could then be woken up and sent out as a watchful eye over the Internet.
Researchers: Cottrell, Hening
Labor and childbirth is a multidimensional experience that is difficult to prepare for without having experienced it before. Emotional, informational, and physical support has been shown to be key in shortening labor, increasing maternal happiness, and decreasing the need for unnecessary interventions during labor. This mobile (iPhone) video game training tool aims to train birth partners about the stages of labor and ways to support a first-time mom.
Here is a video overview of our prototype training tool: