Obstacle Avoidance for the Visually Impaired

[Cover image]

Members - Advait Rane, Katie Foss

In this project, we simulated a device that uses Computer Vision (CV) to help a Visually Impaired Person (VIP) navigate around obstacles on a sidewalk. We further evaluated the simulated system against safety requirements defined using Signal Temporal Logic (STL).

We defined an environment with three types of obstacles: ground level (fire hydrant), head level (stop sign), and large ground level (bicycle). We used Unity to simulate a narrow sidewalk of fixed length with bounds on either side and three randomly placed obstacles. Our device uses an RGB-D camera and CV techniques to direct the VIP with four commands: go straight, go left, go right, or stop. We defined three safety requirements: staying on the sidewalk, not colliding with obstacles, and reaching the end of the sidewalk.
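
As a rough illustration, the setup can be summarized in a small configuration sketch; the names below are ours for exposition, not identifiers from the actual Unity project.

```python
from enum import Enum

# Illustrative summary of the simulated setup (names are ours, not the project's).
class Command(Enum):
    STRAIGHT = "go straight"
    LEFT = "go left"
    RIGHT = "go right"
    STOP = "stop"

OBSTACLE_TYPES = {
    "fire hydrant": "ground level",
    "stop sign": "head level",
    "bicycle": "large ground level",
}

SAFETY_REQUIREMENTS = (
    "stay on the sidewalk",
    "do not collide with any obstacle",
    "reach the end of the sidewalk",
)
```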

[Obstacle types]

Our controller used CV techniques to detect the sidewalk and obstacles:

- Edge detection with OpenCV to find the sidewalk bounds.
- Object detection with a pre-trained YOLOv4 model running in Unity Barracuda to locate obstacles.

We divided the free area between the sidewalk bounds and any detected obstacles into three lanes and directed the user to go straight, right, or left depending on which lane was free. If no lane was free, we asked the user to stop. A sketch of this lane-selection logic follows.
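
The sketch below is a minimal Python approximation of that per-frame logic, assuming obstacle bounding boxes from a YOLO-style detector and a depth frame are already available; all function names, thresholds, and the depth-based occupancy test are illustrative assumptions, and the project's actual controller runs inside Unity.

```python
import cv2
import numpy as np

def detect_sidewalk_edges(rgb_frame):
    """Canny edge detection (OpenCV) to find the sidewalk bounds."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blurred, 50, 150)

def lane_occupancy(depth_frame, detections, near_threshold_m=1.5):
    """Mark a lane occupied if a detected obstacle falls in it and is close.

    detections: list of (x_min, y_min, x_max, y_max) boxes from the detector.
    Returns (left, straight, right) occupancy flags.
    """
    h, w = depth_frame.shape[:2]
    occupied = [False, False, False]          # 0 = left, 1 = straight, 2 = right
    for (x_min, y_min, x_max, y_max) in detections:
        cx = (x_min + x_max) / 2
        lane = min(2, int(3 * cx / w))        # which third of the image the box is in
        roi = depth_frame[int(y_min):int(y_max), int(x_min):int(x_max)]
        if roi.size and np.nanmin(roi) < near_threshold_m:
            occupied[lane] = True
    return occupied[0], occupied[1], occupied[2]

def choose_command(lane_occupied):
    """Map (left, straight, right) occupancy to a user command."""
    left, straight, right = lane_occupied
    if not straight:
        return "STRAIGHT"                     # prefer continuing forward
    if not right:
        return "RIGHT"
    if not left:
        return "LEFT"
    return "STOP"                             # no free lane
```

The preference order in `choose_command` (straight, then right, then left, then stop) mirrors the priority described above.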

To evaluate the safety requirements, we converted the three requirements into STL formulae and then evaluated each simulation run against these formulae using the RTAMT package in Python.
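
A minimal sketch of such a check for the collision requirement is shown below, assuming RTAMT's discrete-time offline API; the signal name, the 0.2 m clearance threshold, and the trace format are illustrative assumptions rather than the project's actual formulae, and class names can differ slightly between RTAMT versions.

```python
import rtamt

def run_satisfies_no_collision(dist_trace):
    """Check 'always keep at least 0.2 m clearance from every obstacle' on one run."""
    spec = rtamt.StlDiscreteTimeSpecification()
    spec.name = "no_collision"
    spec.declare_var("obstacle_dist", "float")
    spec.spec = "always (obstacle_dist >= 0.2)"
    spec.parse()
    # dist_trace: [[time, min distance to any obstacle], ...] logged per simulation step
    robustness = spec.evaluate(["obstacle_dist", dist_trace])
    # Non-negative robustness at the first sample means the whole run satisfies the spec
    return robustness[0][1] >= 0

print(run_satisfies_no_collision([[0, 1.2], [1, 0.8], [2, 0.4], [3, 0.6]]))  # True
```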

The video below shows some of our successful runs.

[Video: successful runs]