Abstract

Over the last few years, the automotive industry has shown tremendous interest in Advanced Driver Assistance Systems (ADASs), especially those based on the driver’s bio-features. As most car manufacturers strive to meet the growing demands of high-end and even average consumers, increasingly complex systems are being developed. The current trend is to maximize the synergy between humans and machines by designing better user interfaces (UIs) that can anticipate the driver’s behaviour. Considerable research effort goes into inferring the car’s position, the traffic environment, the driver’s condition (e.g. degree of drowsiness, driving skill, emotional state) and the driver’s intentions (e.g. changing lanes, overtaking other cars).

Since the 1960s, many studies have treated eye states and movements as features that can provide crucial insights for improving car control. Based on Gibson’s theory of visual attention, many researchers believe that they can predict how a subject will react just by analysing the information produced by the subject’s visual system: gaze direction, level of drowsiness and scan paths.

The mobile device industry has experienced a significant boom in the last 3-4 years. The latest generation of smartphones is equipped with powerful sensors, including front and back video cameras, an accelerometer, a gyroscope and a proximity sensor. Over 800,000 mobile applications have already been developed for both the Android and iOS platforms, and over 70 billion have been downloaded by users worldwide.

Our project team plans to use the latest hardware and software advances in mobile phones and navigation systems to create a standalone application that will run on any dual-camera mobile device and will enrich a personal navigation assistant (PNA) previously developed by the industry partner with additional functions: it will perform eye and head tracking and, based on intelligent computation, provide a set of notifications useful for preventing potentially hazardous situations. In essence, the application will detect whether the driver is inattentive to road obstacles or to certain types of traffic signs, such as danger warning, priority, restrictive or mandatory signs. Moreover, the application will calculate the percentage of eye closure (PERCLOS) and alert the driver when his or her drowsiness level is high. The core of the application will be based on established computer vision algorithms (e.g. colour segmentation, SVM, AdaBoost, optical flow, CAMSHIFT) and will contain three main modules: the eye gaze and head orientation estimator, the road obstacle and traffic sign detector, and the application programming interface (API) needed to integrate these modules within the PNA. The application will be tested on a modified vehicle on which a pre-collision system will be implemented.
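PERCLOS is conventionally defined as the fraction of time, within a sliding window, during which the eyes are at least 80% closed. The minimal Python sketch below illustrates this measure; the per-frame eye-state flag is assumed to come from an upstream classifier (e.g. an SVM applied to segmented eye regions), and the window length and alert threshold are illustrative assumptions, not values specified by the project.

```python
from collections import deque

class PerclosMonitor:
    """Sliding-window percentage of eye closure (PERCLOS)."""

    def __init__(self, window_frames=1800, alert_threshold=0.15):
        # 1800 frames ~ 60 s at 30 fps; both values are assumptions.
        self.window = deque(maxlen=window_frames)
        self.alert_threshold = alert_threshold

    def update(self, eye_closed):
        # eye_closed: True if an upstream classifier judges the eye
        # to be at least 80% closed in the current frame.
        self.window.append(1 if eye_closed else 0)

    def perclos(self):
        # Fraction of closed-eye frames in the current window.
        if not self.window:
            return 0.0
        return sum(self.window) / len(self.window)

    def driver_is_drowsy(self):
        return self.perclos() >= self.alert_threshold

# Usage: feed one flag per front-camera frame and poll the alert.
monitor = PerclosMonitor()
for eye_closed in [True, False, True, True, False]:  # stand-in labels
    monitor.update(eye_closed)
print(monitor.perclos(), monitor.driver_is_drowsy())
```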

The concept behind NAVIEYES is that the two cameras of a mobile device docked in a car kit will act as “eyes” that perceive both the car interior and the outside traffic environment. We address the need for improved ADASs among average consumers who own a dual-camera smartphone; thus, one of the real advantages of this approach is the reduced cost of the complete solution. To the best of our knowledge, no other company has integrated the functionalities described in this project into a standalone solution, and no research on PNAs based on eye tracking analysis for standard dual-camera mobile devices has been reported so far. A secondary objective is to integrate this PNA with a test car in order to implement a controlled braking system or an avoidance algorithm similar to those used in home appliances (e.g. the latest Eye 360).
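As a schematic illustration of this dual-camera concept, the sketch below pairs a driver-facing stream with a road-facing stream and raises a notification only when an outside hazard coincides with driver inattention. The camera indices, the placeholder analysers and the alert rule are assumptions made for the example; on a real phone the frames would come from the platform camera API rather than cv2.VideoCapture.

```python
import cv2

def analyse_driver(frame):
    # Placeholder for the eye gaze / head orientation estimator module.
    return {"eyes_closed": False, "looking_at_road": True}

def analyse_road(frame):
    # Placeholder for the road obstacle / traffic sign detector module.
    return {"obstacle_ahead": False, "sign": None}

front = cv2.VideoCapture(0)  # driver-facing ("interior") camera, assumed index
back = cv2.VideoCapture(1)   # road-facing ("exterior") camera, assumed index

while True:
    ok_front, driver_frame = front.read()
    ok_back, road_frame = back.read()
    if not (ok_front and ok_back):
        break  # a camera stream ended or is unavailable
    driver_state = analyse_driver(driver_frame)
    road_state = analyse_road(road_frame)
    # Notify only when an outside hazard coincides with driver inattention.
    if road_state["obstacle_ahead"] and not driver_state["looking_at_road"]:
        print("ALERT: obstacle ahead while driver is distracted")

front.release()
back.release()
```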