Meetings

5 June Meeting


[Image: output (2).png]

1) Analyze all ROS bag files:

ROS bag info:

| Subject | Session | File Name | Size | Duration | Messages | Compression | Sequence Duplicates | Dropped Frames |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1 | S1 | S1S1_2024-05-17-merged.bag | 154.9 GB | 9:44 (584 s) | 194,119 | None | No | ? |
| S2 | S1 | S02S01S1_2024-05-21-merged.bag | 151.8 GB | 11:45 (705 s) | 192,574 | None | No | ? |
| S2 | S2 | S02S02S1_2024-05-21-merged.bag | 76.1 GB | 3:59 (239 s) | 96,448 | None | No | ? |
| S3 | S1 | S03S01S1_2024-05-21-merged.bag | 192.0 GB | 16:20 (980 s) | 226,185 | None | No | ? |
| S4 | S1 | S04S01S1_2024-06-04-13-06-13.bag | 126.3 GB | 10:35 (635 s) | 36,555 | None | No | ? |
| Average | | | 140.2 GB | 10:28 (628 s) | 149,176 | | | |
ROS bag topics and types:

| Topic | Type |
| --- | --- |
| /cmd_vel/joystick | geometry_msgs/TwistStamped |
| /eeg_data | bcv/EEGData |
| /managed/joy | geometry_msgs/Twist |
| /odom | nav_msgs/Odometry |
| /processed_image | sensor_msgs/Image |
| /robot_status | std_msgs/Float32MultiArray |
| /status | ds4_driver/Status |
| /tf_static | tf2_msgs/TFMessage |
| /yoctopuce/fix | sensor_msgs/NavSatFix |
| /yoctopuce/imu | sensor_msgs/Imu |
| /yoctopuce/light | sensor_msgs/Illuminance |
| /zed/zed_node/left/camera_info | sensor_msgs/CameraInfo |
| /zed/zed_node/left/image_rect_color | sensor_msgs/Image |
| /zed/zed_node/odom | nav_msgs/Odometry |
| /zed/zed_node/path_map | nav_msgs/Path |
| /zed/zed_node/path_odom | nav_msgs/Path |
| /zed/zed_node/pose | geometry_msgs/PoseStamped |
| /zed/zed_node/pose/status | zed_interfaces/PosTrackStatus |
| /zed/zed_node/pose_with_covariance | geometry_msgs/PoseWithCovarianceStamped |
| /zed/zed_node/right/camera_info | sensor_msgs/CameraInfo |
| /zed/zed_node/right/image_rect_color | sensor_msgs/Image |
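
The per-bag numbers above come from inspecting each bag; a minimal sketch of that inspection with the ROS1 rosbag Python API is below (the filename is one of the bags from the table; `rosbag info <file>.bag` on the command line prints the same summary).

```python
# Sketch: summarize one bag with the ROS1 rosbag Python API.
import rosbag

with rosbag.Bag('S1S1_2024-05-17-merged.bag') as bag:
    print('duration (s):', bag.get_end_time() - bag.get_start_time())
    print('messages:', bag.get_message_count())
    # One line per topic: name, message type, message count.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(topic, info.msg_type, info.message_count)
```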

2) Check dropped messages: ??
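Still open. One minimal way to check, assuming the topic's messages carry a std_msgs/Header (true for the sensor_msgs topics above), is to scan header.seq for gaps:

```python
# Sketch: count dropped messages on one topic by looking for header.seq
# gaps. The topic and bag names are examples from the tables above.
import rosbag

topic = '/zed/zed_node/left/image_rect_color'
prev_seq = None
drops = 0
with rosbag.Bag('S1S1_2024-05-17-merged.bag') as bag:
    for _, msg, _ in bag.read_messages(topics=[topic]):
        seq = msg.header.seq
        if prev_seq is not None and seq != prev_seq + 1:
            drops += seq - prev_seq - 1   # missing sequence numbers
        prev_seq = seq
print(f'{topic}: {drops} dropped message(s) by seq gaps')
```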


3) Extract EEG data from each ROS bag file:

S1S1-eeg_data.txt (3.9 MB)
S2S2-eeg_data.txt (1.6 MB)
S4S1-eeg_data.txt (1.8 MB)
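
A minimal extraction sketch, assuming the custom bcv/EEGData message type is importable (str(msg) is written out so the exact field layout does not matter here):

```python
# Sketch: dump /eeg_data to a text file, one timestamped line per message.
import rosbag

with rosbag.Bag('S1S1_2024-05-17-merged.bag') as bag, \
        open('S1S1-eeg_data.txt', 'w') as out:
    for _, msg, t in bag.read_messages(topics=['/eeg_data']):
        # Flatten the multi-line message string onto one line.
        out.write('%.6f %s\n' % (t.to_sec(), str(msg).replace('\n', ' ')))
```

Pointing the same loop at the joystick topic produces the label files in step 4.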

4) Extract button-press labels from each ROS bag file (same loop as in step 3, pointed at the joystick topic):

S1S1-joystick_data.txt (1.8 MB)
S2S2-joystick_data.txt (910.6 kB)
S4S1-joystick_data.txt (243.9 kB)

5) Extract /processed_image from each ROS bag file (see the export sketch after step 6):

S1S1-PI-Video.avi
S2S2-PI-Video.avi
S4S1-PI-Video.avi

6) Extract the left camera image (/zed/zed_node/left/image_rect_color) from each ROS bag file:

S1S1-LI-Video.avi
S2S2-LI-Video.avi
S4S1-LI-Video.avi
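
Both of these exports can use the same topic-to-AVI loop; a minimal sketch, assuming cv_bridge and OpenCV are available (the 15 fps default is an assumption, not the recorded rate):

```python
# Sketch: export an image topic from a bag to an AVI file.
import cv2
import rosbag
from cv_bridge import CvBridge

def export_video(bag_path, topic, out_path, fps=15.0):  # fps is assumed
    bridge = CvBridge()
    writer = None
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):
            frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
            if writer is None:  # open the writer once the frame size is known
                h, w = frame.shape[:2]
                fourcc = cv2.VideoWriter_fourcc(*'XVID')
                writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
            writer.write(frame)
    if writer is not None:
        writer.release()

export_video('S1S1_2024-05-17-merged.bag', '/processed_image', 'S1S1-PI-Video.avi')
```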

7) Filter all EEG data files:

S1S1-filtered-eeg_data.txt (3.3 MB)
S2S2-filtered-eeg_data.txt (1.6 MB)
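
The exact filter settings were not recorded here; a common choice is a zero-phase Butterworth band-pass, sketched below (the 1-40 Hz band and 250 Hz sampling rate are assumptions, not confirmed values):

```python
# Sketch: Butterworth band-pass over all EEG channels with SciPy.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs=250.0, low=1.0, high=40.0, order=4):
    # data: (n_samples, n_channels); filtfilt gives zero-phase filtering.
    b, a = butter(order, [low, high], btype='band', fs=fs)
    return filtfilt(b, a, data, axis=0)

eeg = np.loadtxt('S1S1-eeg_data.txt')  # assumes a plain numeric column layout
np.savetxt('S1S1-filtered-eeg_data.txt', bandpass(eeg))
```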

8) Visualize EEG data files:

S1S1:

[Image: Figure_1.png]

S4S1:


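The per-session figures can be produced along these lines (matplotlib; the one-column-per-channel layout and 250 Hz rate are the same assumptions as in step 7):

```python
# Sketch: quick per-channel plot of a filtered EEG file.
import numpy as np
import matplotlib.pyplot as plt

eeg = np.loadtxt('S1S1-filtered-eeg_data.txt')
t = np.arange(eeg.shape[0]) / 250.0          # assumed 250 Hz sampling rate
for ch in range(eeg.shape[1]):
    plt.plot(t, eeg[:, ch] + ch * 100, lw=0.5)  # offset channels for readability
plt.xlabel('time (s)')
plt.ylabel('channel (offset)')
plt.title('S1S1 filtered EEG')
plt.show()
```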
9) and 10) Clean EEG data files and merge with the labeled joystick data:
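A sketch of the merge step: attach the most recent joystick label to each EEG sample by timestamp (pandas merge_asof; the 'time' and 'label' column names are assumptions about the extracted files' layout):

```python
# Sketch: timestamp-align EEG samples with joystick labels.
import pandas as pd

eeg = pd.read_csv('S1S1-filtered-eeg_data.txt', sep=r'\s+')
joy = pd.read_csv('S1S1-joystick_data.txt', sep=r'\s+')
merged = pd.merge_asof(eeg.sort_values('time'),
                       joy[['time', 'label']].sort_values('time'),
                       on='time', direction='backward')
merged.to_csv('S1S1-labeled-eeg.csv', index=False)
```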


11) Apply models:


Model 1: EEGNet

Application: General EEG classification and Brain-Computer Interface (BCI) tasks.
Paper/GitHub:
EEGNet: A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces (Lawhern et al., 2018)
Model Description:
Layers:
Initial Convolutional Layer: 1 layer
Depthwise Convolutional Layer: 1 layer
Separable Convolutional Layer: 1 layer
Classification Layer: 1 layer

[Image: Screenshot from 2024-06-05 12-25-47.png]


Justification: Designed to be compact and efficient for EEG data processing, capturing both spatial and temporal features.
Types: Conv2D, DepthwiseConv2D, SeparableConv2D, Linear
Loss Function: Cross-Entropy Loss
Epochs: 150
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Achieves high accuracy on multiple BCI datasets, often above 90% for binary classification tasks and 60-80% for multiclass classification
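
A minimal PyTorch rendition of the four-stage layer stack above (a sketch following Lawhern et al.; the channel count, window length, and class count are placeholders for our data, and F1/D/F2 use the paper's defaults):

```python
# Sketch: compact EEGNet-style network in PyTorch.
import torch.nn as nn

class EEGNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=2, F1=8, D=2, F2=16):
        super().__init__()
        self.firstconv = nn.Sequential(               # initial temporal conv
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1))
        self.depthwise = nn.Sequential(               # depthwise spatial conv
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(0.5))
        self.separable = nn.Sequential(               # separable conv block
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, 1, bias=False),
            nn.BatchNorm2d(F2), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5))
        self.classify = nn.LazyLinear(n_classes)      # classification layer

    def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
        x = self.separable(self.depthwise(self.firstconv(x)))
        return self.classify(x.flatten(1))
```

Trained with cross-entropy loss for 150 epochs, per the settings listed above.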

S1S1:

Accuracy: 53.51%, Precision: 0.38, Recall: 0.54, F1-score: 0.44

S4S1:


Model 2: DeepConvNet

[Image: Screenshot from 2024-06-05 13-15-45.png]
Application: General EEG tasks; learns more complex, deeper representations.
Paper/GitHub:
Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization (Schirrmeister et al., 2017)
Model Description:
Layers:
Several convolutional layers followed by fully connected layers.
Typically includes 4 Conv2D layers and 2 fully connected layers.

Justification: Deeper layers allow for capturing more complex patterns in EEG data.
Types: Conv2D, MaxPooling, Dense (Fully Connected)
Loss Function: Cross-Entropy Loss
Epochs: 100
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Known for high accuracy and robustness across different EEG tasks.
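
A sketch of the "4 Conv2D + 2 fully connected" outline above in PyTorch (kernel sizes, channel widths, and class count are placeholders, not tuned values):

```python
# Sketch: DeepConvNet-style stack (after Schirrmeister et al. 2017).
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, (1, 5), padding=(0, 2)),  # temporal conv
        nn.BatchNorm2d(c_out), nn.ELU(),
        nn.MaxPool2d((1, 3)), nn.Dropout(0.5))

n_classes = 2  # placeholder
deepconvnet = nn.Sequential(
    conv_block(1, 25), conv_block(25, 50),
    conv_block(50, 100), conv_block(100, 200),   # 4 Conv2D blocks
    nn.Flatten(),
    nn.LazyLinear(128), nn.ELU(),                # FC 1 (input size inferred)
    nn.Linear(128, n_classes))                   # FC 2
```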

S1S1:

Accuracy: 83.27%, Precision: 0.84, Recall: 0.83, F1-score: 0.83

S4S1:


Model 3: EEG-Transformer by eeyhsong

Description: Applies a Transformer (ViT-style) model to 2-D physiological-signal (EEG) classification, and can be adapted to various tasks, including driving-intention prediction.
Justification:
Utilizes the attention mechanism to enhance spatial and temporal features.
Applies common spatial pattern (CSP) for feature enhancement.
Performance: The repository reports state-of-the-art performance on multiclass EEG classification.
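
The CSP step could look like the following (a sketch using MNE's decoding module as a stand-in; the repository's own CSP implementation may differ, and the data shapes here are placeholders):

```python
# Sketch: CSP feature enhancement ahead of the Transformer.
import numpy as np
from mne.decoding import CSP

X = np.random.randn(40, 8, 256)   # (trials, channels, samples) -- placeholder
y = np.random.randint(0, 2, 40)   # trial labels -- placeholder
csp = CSP(n_components=4)
X_csp = csp.fit_transform(X, y)   # (trials, n_components) log-variance features
print(X_csp.shape)
```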

S1S1:

Accuracy: 72.37%, Precision: 0.84, Recall: 0.83, F1-score: 0.83

S4S1:
