Driver State of Mind, Intent and Anomaly Detection using Multi-Modal Data
Project Description
In this project, we plan to use multi-modal data from sensors such as 3D and infrared cameras, wearable smartwatches, and in-vehicle telematics devices, together with new multi-modal data fusion, machine vision, and artificial intelligence techniques, to accurately identify the driver's state of mind (SoM) and state of health (SoH), driving anomalies, and driving intent. A system diagram is shown in Figure 1. A driver's state of mind includes emotions such as anger, anxiety, and fear, as well as indications of distraction and fatigue; a driver's intent includes future driving actions such as lane changing, turning, and overtaking; and driving anomalies include unexpected or undesirable driving behaviors such as sudden acceleration or deceleration, weaving, and wild turns. While cameras are widely used in driver monitoring systems, accurate detection remains challenging under limitations such as poor lighting conditions and varying driver poses. Hence, as shown in Figure 1, we propose to exploit the accuracy and capabilities of newly available sensor technologies, such as in-vehicle telemetry sensors and non-intrusive wearable devices, together with their ability to simultaneously capture live multi-modal information both inside and outside the vehicle, to develop more robust models that achieve higher detection accuracy. Such models can have a significant impact by enabling intelligent assistance and guidance for the driver's vehicle as well as for neighboring vehicles and other road users.
Figure 1. Proposed system to detect driver SoM, intent, and driving anomalies
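To make the fusion step concrete, below is a minimal sketch of one possible late-fusion classifier in which each modality is encoded separately and the embeddings are concatenated before a shared classification head. It is written in PyTorch; the class name LateFusionSoM, all layer sizes, the input dimensions, and the five-class output are illustrative assumptions, not the project's actual architecture.

    # Hypothetical late-fusion model: encode each modality separately, then
    # concatenate the embeddings and classify with a shared head.
    import torch
    import torch.nn as nn

    class LateFusionSoM(nn.Module):
        def __init__(self, img_feat_dim=512, telem_dim=8,
                     embed_dim=64, num_classes=5):
            super().__init__()
            # Camera branch: assumes features from a pretrained image backbone.
            self.img_enc = nn.Sequential(nn.Linear(img_feat_dim, embed_dim), nn.ReLU())
            # Wearable branch: 1-D convolution over the raw PPG waveform.
            self.ppg_enc = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, embed_dim), nn.ReLU())
            # Telematics branch: low-dimensional vehicle signals (speed, steering, ...).
            self.telem_enc = nn.Sequential(nn.Linear(telem_dim, embed_dim), nn.ReLU())
            self.head = nn.Linear(3 * embed_dim, num_classes)

        def forward(self, img_feat, ppg, telem):
            z = torch.cat([self.img_enc(img_feat),
                           self.ppg_enc(ppg),
                           self.telem_enc(telem)], dim=-1)
            return self.head(z)  # per-class logits over SoM labels

    # Smoke test with random inputs (batch of 4, PPG window of 256 samples).
    model = LateFusionSoM()
    logits = model(torch.randn(4, 512), torch.randn(4, 1, 256), torch.randn(4, 8))
    print(logits.shape)  # torch.Size([4, 5])

Late fusion is only one design choice; attention-based or intermediate fusion could instead let the modalities compensate for one another when, for example, the camera view is degraded.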
Project 1: Driver State of Mind Detection
To develop driver SoM detection models, our data collection plan includes both in-laboratory collection under emulated driving conditions and collection in real driving situations with instrumented vehicles. In this project, we are designing laboratory experiments that enable efficient and safe data collection, free of real-world driving risks, while still emulating a comprehensive set of driving situations. The experiments induce different states of mind, such as various emotions, distraction, fatigue, and anxiety, in the subjects participating in our study while they operate a driving simulator platform built from a Logitech G29 gaming steering wheel and the CARLA simulator software, shown in Figure 2(a). A camera captures images of the subject's upper body, including RGB images, near-infrared (NIR) images, and depth maps; examples of the collected images are shown in Figure 2(b). A microphone collects audio. We also plan to use wearable devices, such as a Samsung Galaxy Watch or Fitbit, to capture the subject's physiological signals, such as photoplethysmography (PPG).
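To illustrate how simulator-side driving signals could be recorded for later alignment with the camera and wearable streams, the sketch below logs timestamped vehicle telemetry through the standard CARLA Python client API. The host/port, output file name, tick count, and CSV schema are assumptions made for illustration and do not describe our actual collection pipeline.

    # Hypothetical telemetry logger: timestamps CARLA vehicle state each tick so
    # it can be synchronized offline with camera frames and PPG samples.
    import csv
    import carla

    client = carla.Client('localhost', 2000)  # default CARLA server endpoint
    client.set_timeout(5.0)
    world = client.get_world()

    # Assumes the subject's ego vehicle is the first spawned vehicle actor.
    vehicle = world.get_actors().filter('vehicle.*')[0]

    with open('telemetry.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['sim_time_s', 'speed_mps', 'throttle', 'steer', 'brake'])
        for _ in range(1000):                 # log ~1000 simulator ticks
            snapshot = world.wait_for_tick()  # blocks until the next tick
            v = vehicle.get_velocity()        # m/s, world frame
            speed = (v.x**2 + v.y**2 + v.z**2) ** 0.5
            ctrl = vehicle.get_control()      # driver inputs via the G29 wheel
            writer.writerow([snapshot.timestamp.elapsed_seconds,
                             speed, ctrl.throttle, ctrl.steer, ctrl.brake])

Logging against the simulator clock rather than wall time makes it easier to align the telemetry with the other modalities, since each sensor stream can be mapped onto the same tick timeline.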
We have conducted preliminary data collection using the proposed in-laboratory setup and collected posed facial-expression images from 15 subjects; these images are being used to build a machine learning model that detects the driver's emotions.
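As a sketch of the kind of emotion classifier this data could support, the snippet below fine-tunes a pretrained ResNet-18 on face crops organized one folder per emotion label. The label set, data layout (data/train/<emotion>/*.png), and hyperparameters are illustrative assumptions, not our finalized model.

    # Minimal fine-tuning sketch for an image-based emotion classifier.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    EMOTIONS = ['anger', 'anxiety', 'fear', 'neutral']  # illustrative label set

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # Expects data/train/<emotion_name>/*.png, cropped around the face.
    train_set = datasets.ImageFolder('data/train', transform=tfm)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))  # replace the head

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:   # one pass over the posed-expression images
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

With only 15 subjects, subject-disjoint train/test splits and a pretrained backbone are important to avoid the model memorizing individual faces rather than learning expressions.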
Figure 2. (a) Driving simulator platform; (b) examples of collected images
Associated Publications