
Planet Arduino

Archive for the ‘Visually Impaired’ Category

Smartphones have become a part of our day-to-day lives, but for those with visual impairments, accessing one can be a challenge. This can be especially difficult if one is using a cane that must be put aside in order to interact with a phone.

The GesturePod offers an alternative interface that attaches to the cane itself. This small unit is controlled by an MKR1000 and uses an IMU to sense hand gestures applied to the cane.

If a user, for instance, taps twice on the ground, a corresponding request is sent to the phone over Bluetooth, causing it to announce the time audibly. Five gestures are currently proposed, which could be expanded upon or modified for different functionality as needed.
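
The actual GesturePod firmware runs a trained classifier on-device, but the basic idea of spotting taps in IMU data can be illustrated with a simple threshold detector. Here is a toy sketch in Python; the threshold and refractory values are made-up placeholders, not numbers from the project.

```python
# Toy tap detector: counts spikes in accelerometer magnitude.
# Illustrates the idea only -- GesturePod itself uses a trained
# machine learning classifier running on the device.
def count_taps(mags, threshold=2.5, refractory=10):
    """Count distinct spikes (in g) in a window of magnitudes."""
    taps, cooldown = 0, 0
    for m in mags:
        if cooldown:
            cooldown -= 1          # ignore ringing right after a hit
        elif m > threshold:
            taps += 1
            cooldown = refractory
    return taps

# Two sharp spikes separated by quiet samples -> a double tap
window = [1.0] * 20 + [3.1] + [1.0] * 30 + [3.4] + [1.0] * 20
if count_taps(window) == 2:
    print("double tap -> ask the phone to speak the time")
```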

People using white canes for navigation find it challenging to concurrently access devices such as smartphones. Building on prior research on abandonment of specialized devices, we explore a new touch-free mode of interaction wherein a person with visual impairment can perform gestures on their existing white cane to trigger tasks on their smartphone. We present GesturePod, an easy-to-integrate device that clips on to any white cane, and detects gestures performed with the cane. With GesturePod, a user can perform common tasks on their smartphone without touch or even removing the phone from their pocket or bag. We discuss the challenges in building the device and our design choices. We propose a novel, efficient machine learning pipeline to train and deploy the gesture recognition model. Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster. Our in-wild study suggests that GesturePod is a promising tool to improve smartphone access for people with VI, especially in constrained outdoor scenarios.

While there are tools that allow the visually impaired to interact with computers, conveying spatial relationships, such as those needed for gaming, is certainly a challenge. To address this, researchers have come up with DualPanto.

As the name implies, the system uses two pantographs for location input and output, and on the end of each is a handle that rotates to indicate direction. One pantograph acts as an output to indicate where an object is located, while the other acts as the player's input interface. One device is positioned above the other, so the relative position of each in a plane can be gleaned.
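
For readers curious how two motor angles turn into a position in the plane: each pantograph is a five-bar linkage, and its tip can be found by intersecting two circles around the elbow joints. Below is a minimal sketch of that forward kinematics in Python; the link lengths and base spacing are invented values, not DualPanto's real geometry.

```python
import math

def panto_forward(theta1, theta2, l1=0.10, l2=0.16, base=0.08):
    """End-effector position of a five-bar pantograph.

    theta1, theta2: motor angles (radians) at the two base joints.
    l1, l2: proximal/distal link lengths (m); base: joint spacing (m).
    All dimensions here are placeholders for illustration.
    """
    # Elbow positions at the end of each proximal link
    p1 = (l1 * math.cos(theta1), l1 * math.sin(theta1))
    p2 = (base + l1 * math.cos(theta2), l1 * math.sin(theta2))
    # The tip lies where two circles of radius l2, centred on the
    # elbows, intersect.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > 2 * l2:
        raise ValueError("pose unreachable for these link lengths")
    h = math.sqrt(l2 * l2 - (d / 2) ** 2)
    mx, my = p1[0] + dx / 2, p1[1] + dy / 2
    # Pick the intersection on the working side of the base line.
    return (mx - h * dy / d, my + h * dx / d)

print(panto_forward(math.radians(120), math.radians(60)))
```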

The game’s software runs on a MacBook Pro, and an Arduino Due is used to interface the physical hardware with this setup. 

DualPanto is a haptic device that enables blind users to track moving objects while acting in a virtual world.

The device features two handles. Users interact with DualPanto by actively moving the ‘me’ handle with one hand and passively holding on to the ‘it’ handle with the other. DualPanto applications generally use the me handle to represent the user’s avatar in the virtual world and the it handle to represent some other moving entity, such as the opponent in a soccer game.

Be sure to check it out in the video below, or read the full research paper here.

In order to help those with visual impairments navigate streets, college student Satinder Singh has come up with an innovative solution that literally pokes the user in the right direction. 

Singh’s system, called DeepWay, uses a chest-mounted camera to take images of the road that a wearer is walking down, then feeds this information to a laptop for processing. 

If the deep learning algorithm determines that the user needs to move left or right to stay on the path, a serial signal is sent to an Arduino Uno, which in turn commands one of two servos mounted to a pair of glasses to tap the person to indicate which way to walk. Additional environmental feedback is provided through a pair of earphones.

This project is an aid to the blind. To date there has been no technological advancement in the way the blind navigate. So I have used deep learning, particularly convolutional neural networks, so that they can navigate through the streets.

My project is an implementation of CNNs, and we all know that they require a large amount of training data. So the first obstruction in my way was a correctly labeled dataset of images. So I went around my college and recorded a lot of videos (of all types of roads and also off-road). Then I wrote a basic Python script to save images from the video (I saved 1 image out of every 5 frames, because consecutive frames are almost identical). I collected almost 10,000 such images, roughly 3,300 for each class (i.e. left, right, and center).
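
Singh's script itself isn't included in the post, but a minimal version of that frame-sampling step might look like this with OpenCV (the file and folder names are placeholders):

```python
import os
import cv2

VIDEO = "walk.mp4"   # hypothetical recording from the chest camera
OUT = "frames"
os.makedirs(OUT, exist_ok=True)

cap = cv2.VideoCapture(VIDEO)
i = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                      # end of video
    if i % 5 == 0:                 # keep 1 frame in 5, as in the post
        cv2.imwrite(os.path.join(OUT, f"img_{saved:05d}.jpg"), frame)
        saved += 1
    i += 1
cap.release()
```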

I made a collection of CNN architectures and trained the models. Then I evaluated the performance of all the models and chose the one with the best accuracy. I got a training accuracy of about 97%. I got roughly the same accuracy for all the trained models, but I realized that the model which implemented regularization performed better on the test set.
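
The exact architectures aren't listed, but a small three-class road classifier with the kind of regularization described might be sketched in Keras like so (the input size and layer sizes are guesses, not DeepWay's actual model):

```python
from tensorflow.keras import layers, models, regularizers

# Hypothetical stand-in for one of the compared architectures.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(100, 100, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),                    # regularization that helped on the test set
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # left / center / right
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```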

The next problem was how to tell the blind person in which direction to move. So I connected my Python program to an Arduino. I connected the servo motors to the Arduino and fixed them to the sides of a pair of spectacles. Using serial communication, I can tell the Arduino which servo motor to move, which then presses against one side of the person's head to indicate which direction to move.
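
On the laptop side, sending that direction over serial takes only a few lines with pyserial. The port name and command bytes below are assumptions, since the post doesn't spell out the protocol:

```python
import time
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port is machine-specific
time.sleep(2)        # give the Uno a moment to reset after the port opens

def steer(direction):
    """Send b'L' or b'R'; the sketch on the Uno sweeps the matching servo."""
    ser.write(direction)

steer(b"L")          # tap the left temple: move left
```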

A demo of DeepWay can be seen in the video below, while code for this open source project is available on GitHub.


Museum for all: a tactile exhibition and project from Minsk



Gleb Kanunnikau is a designer and trainer based in Minsk. He is part of a group of volunteers running a meetup group and an open laboratory that brings together people from the tech, education/media, and experimental hackerspace scenes to solve a few very local and very practical problems that don't seem to be getting much attention from the tech community. Their initiative is focused on providing educational tools for children and adults with vision disabilities and is organized as an open laboratory with contributions from the Minsk hackerspace (the first in Belarus), the Belarusian meetup.by community, and monogroup.by, a community of architects and visual artists.

Gleb wrote me a long email and explained the aims and the context of their amazing work:

The problem is that schools for the visually impaired aren't getting new books with Braille type, and the education system for these kids is stuck in the 1970s, only now it is much worse (at least in the USSR there were factories and employment options for these people, as well as city districts with disability-friendly housing). They are the forgotten, invisible people – no textbooks means there are few people able to read Braille books – and they often can't leave their apartments, get an education, or find a job.

Luckily, Ludmila Skradal, who works with these children on a regular basis as a tour guide and a teacher, had met a few architects, as well as people from the first Belarusian hackerspace, and we organized a hackathon a year ago.

We are building the first tactile museum exhibition for these children (but also for adults) on history/ethnography/architecture.

This is a sound/tactile installation that uses technology but isn't tech-centric, and it solves a practical problem. We are combining hand-built architectural plastic models of buildings with elements printed on a 3D printer (an open source Prusa Mendel, with an Arduino inside) for small-scale columns, ornaments, etc.


The models serve as instructional materials and partly substitute for the missing handbooks on history and culture that children in schools for the visually impaired are currently not receiving.
The kids say that these architecture lessons were the first time they've been able to even imagine what buildings in cities "look like" above ground level. Things that were out of their reach – the clock tower on the city hall building, rooftops, column capitals – were suddenly accessible. They were invited to touch the real city hall walls during a field trip to feel their texture, then they explored the model; hearing the sound of the real city hall clock, they examined it in the model too.

The current goal is to build a museum exhibition unified by narrative and allowing self-exploration within the space, using Arduino for controlling the exhibits.

We hope that 3D printed objects could work as handbooks on history, culture, and art. Maybe we'll even print DNA segments that can be combined like Lego puzzles, so that kids can try to put together a DNA chain out of amino acid plastic blocks to understand what the spiral looks like. There are many possibilities.


If you want to get in touch and know more about their project, visit the website.


