5 June - Meeting
1) Analyze all ROS bag files:
ROS bag topics and types:
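A minimal sketch for step 1, assuming ROS 1 bags and the rosbag Python API; the bag file names follow the subject/session naming used below and are assumptions:

```python
import rosbag

# Hypothetical bag file names based on the subject/session naming in these notes.
bag_files = ["S1S1.bag", "S2S2.bag", "S4S1.bag"]

for path in bag_files:
    with rosbag.Bag(path) as bag:
        info = bag.get_type_and_topic_info()
        print(f"--- {path} ---")
        for topic, topic_info in info.topics.items():
            # topic_info carries the message type, message count, and average frequency.
            print(f"{topic}: type={topic_info.msg_type}, "
                  f"count={topic_info.message_count}, "
                  f"freq={topic_info.frequency}")
```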
2) Check for dropped messages: ??
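One way to check for dropped messages (a sketch, assuming the messages carry a std_msgs/Header with a monotonically increasing seq field; the /eeg topic name is an assumption):

```python
import rosbag

def count_dropped(bag_path, topic):
    """Estimate dropped messages on a topic by looking for gaps in header.seq."""
    last_seq = None
    dropped = 0
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):
            seq = msg.header.seq
            if last_seq is not None and seq > last_seq + 1:
                dropped += seq - last_seq - 1
            last_seq = seq
    return dropped

# Example with an assumed topic name:
print(count_dropped("S1S1.bag", "/eeg"))
```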
3) Extract EEG data from each ROS bag file:
4) Extract button pressed labels from each ROS bag file:
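A sketch covering steps 3 and 4, assuming the EEG samples arrive on an /eeg topic with a float array in msg.data and the button presses on a sensor_msgs/Joy /joy topic (both topic names and the output file names are assumptions):

```python
import rosbag

def extract_eeg_and_buttons(bag_path, eeg_out, labels_out,
                            eeg_topic="/eeg", joy_topic="/joy"):
    with rosbag.Bag(bag_path) as bag, \
         open(eeg_out, "w") as eeg_f, open(labels_out, "w") as lbl_f:
        for topic, msg, t in bag.read_messages(topics=[eeg_topic, joy_topic]):
            stamp = t.to_sec()
            if topic == eeg_topic:
                # One line per sample: timestamp followed by the channel values.
                eeg_f.write(f"{stamp} " + " ".join(str(v) for v in msg.data) + "\n")
            elif any(msg.buttons):
                # Record the button state whenever any button is pressed.
                lbl_f.write(f"{stamp} " + " ".join(str(b) for b in msg.buttons) + "\n")

extract_eeg_and_buttons("S1S1.bag", "S1S1-eeg_data.txt", "S1S1-button_labels.txt")
```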
5) Extract /processed image from each ROS bag file:
S1S1-PI-Video.avi
S2S2-PI-Video.avi
S4S1-PI-Video.avi
6) Extract /left image from each ROS bag file:
S1S1-LI-Video.avi
S2S2-LI-Video.avi
S4S1-LI-Video.avi
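A sketch covering steps 5 and 6, assuming the image topics are sensor_msgs/Image; the topic names and the 30 fps frame rate are assumptions (uses cv_bridge and OpenCV's VideoWriter):

```python
import cv2
import rosbag
from cv_bridge import CvBridge

def export_topic_to_avi(bag_path, topic, out_path, fps=30.0):
    """Write every frame published on an image topic to an .avi file."""
    bridge = CvBridge()
    writer = None
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):
            frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            if writer is None:
                h, w = frame.shape[:2]
                fourcc = cv2.VideoWriter_fourcc(*"XVID")
                writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
            writer.write(frame)
    if writer is not None:
        writer.release()

# Assumed topic names; replace with the actual /processed and /left image topics.
export_topic_to_avi("S1S1.bag", "/processed_image", "S1S1-PI-Video.avi")
export_topic_to_avi("S1S1.bag", "/left_image", "S1S1-LI-Video.avi")
```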
7) Filter all EEG data files:
S1S1-filtered-eeg_data.txt
3.3 MB
S2S2-filtered-eeg_data.txt
1.6 MB
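A sketch of the filtering step, assuming the space-separated layout written above (timestamp column followed by channel columns), a 250 Hz sampling rate, and a 1-40 Hz band-pass filter; the rate and band are assumptions, not the recorded values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(data, fs=250.0, low=1.0, high=40.0, order=4):
    """Zero-phase band-pass filter applied along the time axis of (samples, channels)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=0)

raw = np.loadtxt("S1S1-eeg_data.txt")           # columns: timestamp, ch1..chN
timestamps, channels = raw[:, 0], raw[:, 1:]
filtered = bandpass_filter(channels)
np.savetxt("S1S1-filtered-eeg_data.txt",
           np.column_stack([timestamps, filtered]))
```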
8) Visualize EEG data files:
S1S1:
S4S1:
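A minimal visualization sketch (assuming the filtered file layout above), stacking the channels with a vertical offset:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("S1S1-filtered-eeg_data.txt")
t, channels = data[:, 0] - data[0, 0], data[:, 1:]

offset = 3 * channels.std()                      # vertical spacing between traces
for i in range(channels.shape[1]):
    plt.plot(t, channels[:, i] + i * offset, linewidth=0.5)
plt.xlabel("Time (s)")
plt.ylabel("Channel (offset for readability)")
plt.title("S1S1 filtered EEG")
plt.show()
```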
9) and 10) Clean EEG data files and merge with labeled joystick data:
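A sketch of steps 9 and 10, assuming a two-column labels file (timestamp, label) and a nearest-timestamp merge; the file names and the 100 ms tolerance are assumptions:

```python
import pandas as pd

eeg = pd.read_csv("S1S1-filtered-eeg_data.txt", sep=r"\s+", header=None)
eeg = eeg.rename(columns={0: "t"}).dropna().sort_values("t")

labels = pd.read_csv("S1S1-button_labels.txt", sep=r"\s+", header=None,
                     names=["t", "label"]).sort_values("t")

# Attach to each EEG sample the nearest button label within 100 ms;
# samples with no nearby press fall back to label 0 (rest class).
merged = pd.merge_asof(eeg, labels, on="t", direction="nearest", tolerance=0.1)
merged["label"] = merged["label"].fillna(0).astype(int)
merged.to_csv("S1S1-merged.csv", index=False)
```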
11) Apply models:
Model 1: EEGNet
Application: General EEG classification and Brain-Computer Interface (BCI) tasks.
Paper/GitHub:
EEGNet: A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces
Model Description:
Initial Convolutional Layer: 1 layer
Depthwise Convolutional Layer: 1 layer
Separable Convolutional Layer: 1 layer
Classification Layer: 1 layer
Justification: Designed to be compact and efficient for EEG data processing, capturing both spatial and temporal features.
Types: Conv2D, DepthwiseConv2D, SeparableConv2D, Linear
Loss Function: Cross-Entropy Loss
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Achieves high accuracy on multiple BCI datasets, often above 90% for binary classification tasks and 60-80% for multiclass classification.
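A minimal PyTorch sketch of the EEGNet-style architecture described above (channel count, window length, and filter sizes are assumptions, not the values used for the results below):

```python
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    """Compact EEGNet-style model: temporal conv -> depthwise spatial conv -> separable conv -> linear."""
    def __init__(self, n_channels=8, n_samples=250, n_classes=2, F1=8, D=2, F2=16):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1),
        )
        self.depthwise = nn.Sequential(
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.separable = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8), groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, 1, bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            dummy = torch.zeros(1, 1, n_channels, n_samples)
            n_features = self.separable(self.depthwise(self.temporal(dummy))).numel()
        self.classify = nn.Linear(n_features, n_classes)

    def forward(self, x):                        # x: (batch, 1, channels, samples)
        x = self.separable(self.depthwise(self.temporal(x)))
        return self.classify(x.flatten(1))

model = EEGNetSketch()
logits = model(torch.randn(4, 1, 8, 250))        # train with nn.CrossEntropyLoss on these logits
```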
S1S1:
Accuracy: 53.51%
Precision: 0.38
Recall: 0.54
F1-score: 0.44
S4S1:
Model 2: DeepConvNet
Application: General EEG tasks, more complex and deeper representations.
Paper/GitHub:
Deep learning with convolutional neural networks for EEG decoding and visualization
Model Description:
Several convolutional layers followed by fully connected layers; typically 4 Conv2D layers and 2 fully connected layers.
Justification: Deeper layers allow for capturing more complex patterns in EEG data.
Types: Conv2D, MaxPooling, Dense (Fully Connected)
Loss Function: Cross-Entropy Loss
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Known for high accuracy and robustness across different EEG tasks.
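A rough PyTorch sketch of a DeepConvNet-style stack as described above: four temporal conv + max-pool blocks, preceded by a spatial convolution across electrodes, followed by two fully connected layers. Filter counts and the input shape are assumptions:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """One block: temporal convolution, batch norm, ELU, temporal max-pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, (1, 10)),
        nn.BatchNorm2d(out_ch),
        nn.ELU(),
        nn.MaxPool2d((1, 3)),
    )

class DeepConvNetSketch(nn.Module):
    def __init__(self, n_channels=8, n_samples=500, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 25, (n_channels, 1)),   # spatial filter across electrodes
            conv_block(25, 25),
            conv_block(25, 50),
            conv_block(50, 100),
            conv_block(100, 200),
        )
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Sequential(
            nn.Linear(n_feat, 128),
            nn.ELU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                        # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = DeepConvNetSketch()
logits = model(torch.randn(4, 1, 8, 500))        # pair with nn.CrossEntropyLoss
```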
S1S1:
Accuracy: 83.27%
Precision: 0.84
Recall: 0.83
F1-score: 0.83
S4S1:
Model 3: EEG-Transformer by eeyhsong
Description: Applies a Transformer (ViT) model to 2-D physiological signal (EEG) classification; can be adapted for various tasks, including driving intention prediction.
Utilizes the attention mechanism to enhance spatial and temporal features. Applies common spatial pattern (CSP) for feature enhancement.
Performance: The repository reports state-of-the-art performance in multi-class classification of EEG signals.
S1S1:
Accuracy: 72.37%
Precision: 0.84
Recall: 0.83
F1-score: 0.83
S4S1:
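Not the eeyhsong implementation itself, but a minimal sketch of the idea behind Model 3: split the EEG window into temporal patches, embed them, and run a standard Transformer encoder with a classification head (all hyperparameters are assumptions):

```python
import torch
import torch.nn as nn

class EEGTransformerSketch(nn.Module):
    """Patch the EEG window along time, embed each patch, and self-attend over the sequence."""
    def __init__(self, n_channels=8, n_samples=250, patch_len=25,
                 d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        assert n_samples % patch_len == 0
        self.patch_len = patch_len
        n_patches = n_samples // patch_len
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, channels, samples)
        b, c, s = x.shape
        # Reshape into non-overlapping temporal patches: (batch, n_patches, channels * patch_len)
        x = x.reshape(b, c, s // self.patch_len, self.patch_len)
        x = x.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)
        x = self.embed(x) + self.pos
        x = self.encoder(x)                      # attention over temporal patches
        return self.head(x.mean(dim=1))          # mean-pool the sequence before classifying

model = EEGTransformerSketch()
logits = model(torch.randn(4, 8, 250))           # train with nn.CrossEntropyLoss
```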