Preliminary results obtained with different modalities:

  • Eye-tracking Data
  • EEG Data
  • Physiological Data
  • Vehicle-based Data
  • Face Videos
Preliminary Results Obtained with Eye-tracking Data

    Using the eye-tracking modality of the eDREAM dataset, we explore whether visual attention patterns can be exploited to estimate driver cognitive load under practical settings. Cognitive load detection is framed as a supervised classification problem (depicted below), in which an estimation model is trained with machine learning approaches on the observed measures. The objective task conditions (i.e., no-task, 1-back, or 2-back) are used as the target cognitive load levels.

    Figure 1. Overview of the classification experiment. The performance of the proposed system is evaluated on a testing set held out from the training process.
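    The experimental framing in Figure 1 can be sketched as follows. The features and labels below are synthetic placeholders standing in for the real eye-tracking windows; the feature dimensionality, split ratio, and model settings are illustrative assumptions, not the thesis configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical sketch: each row is a feature vector computed over a time
# window; each label is the task condition active during that window.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))      # 4 eye-related meta-features (assumed)
y = rng.integers(0, 3, size=300)   # 0 = no-task, 1 = 1-back, 2 = 2-back

# Hold out a testing set that the training process never sees.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

    On these random labels the accuracy is near chance; with the real eDREAM features the same pipeline produces the results reported below.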

    For the input observations to the classification model, motivated by domain knowledge connecting eye-related measures with cognitive load, we propose four meta-features that capture variations in eye-blinking and glancing behavior. They are designed to describe the intensity and direction of visual attention.

    Figure 2. Signal example showing the process of extracting eye-closure-related features from the raw eye-tracker outputs.
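    As a rough illustration of the extraction step in Figure 2, the sketch below derives two blink-related measures from a raw eye-opening signal. The sampling rate, closure threshold, and feature definitions are illustrative assumptions, not the thesis' actual meta-features:

```python
import numpy as np

FS = 60.0            # eye-tracker sampling rate in Hz (assumed)
CLOSED_THRESH = 0.2  # eye-opening ratio below which the eye counts as closed

def blink_features(opening, fs=FS, thresh=CLOSED_THRESH):
    """Return (blink rate in blinks/min, mean blink duration in s)."""
    closed = opening < thresh
    # A blink starts where the signal crosses from open to closed.
    starts = np.flatnonzero(~closed[:-1] & closed[1:])
    n_blinks = len(starts)
    duration_s = len(opening) / fs
    rate = 60.0 * n_blinks / duration_s
    mean_dur = closed.sum() / fs / n_blinks if n_blinks else 0.0
    return rate, mean_dur

# Toy signal: 10 s of mostly open eyes (1.0) with two short closures.
sig = np.ones(600)
sig[100:110] = 0.0
sig[400:415] = 0.0
rate, dur = blink_features(sig)  # 2 blinks over 10 s -> 12 blinks/min
```

    Window-level features of this kind are what feed the classifier as input observations.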

    Then, several commonly applied classification algorithms, including k-nearest-neighbor (KNN), support vector machine (SVM), AdaBoost, and Random Forest, are explored and compared for constructing the estimation model. Issues arising from the machine learning workflow, such as the potential evaluation bias embedded within time-series data, are examined (Figure 3). The most promising algorithm (Random Forest) achieves 70.3% accuracy in classifying between the highest and lowest cognitive load levels.

    Figure 3. Illustration of the data partitioning methods applied in cross-validation iterations.
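    One such evaluation bias comes from the partitioning choice: randomly splitting windows lets temporally adjacent, highly correlated samples from the same drive land on both sides of a split, while grouping by recording session keeps each drive entirely on one side. A minimal sketch, assuming scikit-learn and a hypothetical session grouping:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical setup: 120 windows drawn evenly from 6 recording sessions.
n_windows, n_sessions = 120, 6
groups = np.repeat(np.arange(n_sessions), n_windows // n_sessions)
X = np.zeros((n_windows, 1))  # placeholder features

# GroupKFold never assigns windows from one session to both folds,
# which removes the temporal-leakage advantage a random split enjoys.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, groups=groups):
    assert not set(groups[train_idx]) & set(groups[test_idx])
```

    A plain random (or ungrouped k-fold) split would typically report optimistically high accuracy on such data.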

    Figure 4. Classification accuracies with different grouping methods and different machine learning algorithms.
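    The algorithm comparison behind Figure 4 can be sketched with scikit-learn's default implementations; synthetic data stands in for the extracted features, and the hyperparameters are library defaults rather than tuned thesis settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: 4 features echoing the four meta-features (assumed).
X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean accuracy {scores.mean():.3f}")
```

    In the thesis experiments, the same comparison is run with the grouped cross-validation splits described above, where Random Forest performs best.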

    More information can be found in the thesis document and the defense slides.