Binocolors

A personal device that helps color-blind students distinguish between colors while in class.

What is it?


Binocolors is a personal device that helps color-blind students distinguish between colors while in class. The system consists of an HD camera connected to the user's laptop and software that translates the live video stream into colors the color-blind user can see, based on their specific color deficiency.

Binocolors offers a color blindness test (the Farnsworth-Munsell 100 Hue Color Blindness Test) in case the user is not familiar with their deficiency. Once the test is taken, the system adjusts accordingly. For example, if the color deficiency is Protan, meaning shades of red are problematic for the user, the system will emphasize red over the other colors.

The product design process aimed to achieve both practicality and a feeling of confidence and intimacy for the user.

Project Info


Students: Yahav Izchaki, Maia Hillel, Shani Cohanim, Einat Belardi, Moshe Marcus, Yarden Ashkenazi, Or Barda

Mentors: Dr. Oren Zuckerman, Noa Morag

TA: Neta Tamir, Daniel Shir, Daniel Shein

Designers: Einat Belardi, Shani Cohanim

This project was created in collaboration with Seminar Hakibutzim, mentored by Dori Oryan.

Students' Website

How does it work?

An HD camera (Logitech HD Pro Webcam C920) is connected to the user's laptop and transfers the image of the board to the laptop screen. The image is processed by our algorithm, which changes the colors of the image according to the student's specific color deficiency. The algorithm uses a process called Daltonization: shifting the colors in the image toward colors that the color blind can see and differentiate. Its settings follow the Daltonization research, which distinguishes three color deficiency types: Deutan (green blind), Protan (red blind), and Tritan (blue blind). Red and green deficiencies are the most common among the color-blind population.
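As a rough illustration of the Daltonization idea, here is a minimal NumPy sketch for the Protan case. It uses matrices commonly circulated in the Daltonization literature (simulate the deficiency in LMS cone space, then shift the lost information into the channels the viewer can see); the matrix values are approximations, and the project's own algorithm and parameters may differ.

```python
import numpy as np

# Approximate sRGB -> LMS cone-response matrix (values from the commonly
# cited Daltonization references; treat as an approximation)
RGB2LMS = np.array([[17.8824,   43.5161,  4.11935],
                    [3.45565,   27.1554,  3.86714],
                    [0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Simulate a missing L cone (protanopia) inside LMS space
SIM_PROTAN = np.array([[0.0, 2.02344, -2.52581],
                       [0.0, 1.0,      0.0],
                       [0.0, 0.0,      1.0]])

# Redistribute the "lost" red information into the green and blue channels
ERR_SHIFT = np.array([[0.0, 0.0, 0.0],
                      [0.7, 1.0, 0.0],
                      [0.7, 0.0, 1.0]])

def daltonize_protan(img):
    """img: HxWx3 uint8 RGB image -> Daltonized uint8 RGB image."""
    rgb = img.astype(np.float64)
    # What a protanope would perceive (RGB -> LMS -> simulate -> RGB)
    sim = rgb @ (LMS2RGB @ SIM_PROTAN @ RGB2LMS).T
    # Add the redistributed error back to the original image
    corrected = rgb + (rgb - sim) @ ERR_SHIFT.T
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Applied per frame of the live stream, this makes a pure-red region gain green and blue, so a Protan viewer can tell it apart from colors they would otherwise confuse.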

Image processing runs at 12 frames per second to allow "live translation". We use OpenCV, a Python library for image processing, and to compute the manipulations applied to every pixel in the image we use NumPy, a Python library for very fast mathematical calculations.
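The per-pixel color manipulation is essentially one small matrix multiplication per pixel, which is why NumPy's vectorization matters for hitting 12 fps. A hypothetical comparison (the 3x3 matrix here is an arbitrary example, not the project's actual transform):

```python
import numpy as np

# Arbitrary example 3x3 color transform (hypothetical, for illustration only)
M = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.2, 0.8]])

def transform_loop(img):
    """Naive Python loop: one 3x3 matmul per pixel (far too slow for video)."""
    out = np.empty(img.shape, dtype=np.float64)
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = M @ img[y, x]
    return out

def transform_vectorized(img):
    """Whole-frame version: one batched matmul over all pixels at once."""
    return img.astype(np.float64) @ M.T
```

Both functions produce the same result, but the vectorized version does the work in a single C-level operation instead of one Python-level call per pixel, which is what makes near-real-time processing of webcam frames feasible.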

We designed a graphical user interface for the user to interact with the system, using PyQt, a cross-platform GUI framework.