Meetings

Without windowing
Label counts: steer 3393, stop 2350, right 1140, left 1018, reverse 77
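For reference, a minimal sketch of how counts like these can be produced with pandas (the labels list below is a placeholder standing in for the real per-trial labels):

import pandas as pd

# Hypothetical per-trial labels; the real pipeline would supply these
labels = ['steer', 'steer', 'stop', 'right', 'left', 'reverse']
print(pd.Series(labels).value_counts())  # one count per class, sorted descending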

EEGNet Model:
Number of unique labels: 5; Size of training set: 6382; Size of testing set: 1596
Accuracy: 99.94%; Precision: 1.00; Recall: 1.00; F1-score: 1.00

DeepConvNet:
Number of unique labels: 5; Size of training set: 6382; Size of testing set: 1596
Accuracy: 99.87%; Precision: 1.00; Recall: 1.00; F1-score: 1.00

EEGTransformer:
Class distribution in training set (label: count): 3: 2675, 4: 1912, 2: 905, 0: 827, 1: 63
Number of unique labels: 5; Size of training set: 6382; Size of testing set: 1596
Finished training. (sklearn raised UndefinedMetricWarning: precision is ill-defined and set to 0.0 for labels with no predicted samples; the `zero_division` parameter controls this behavior.)
Accuracy: 38.75%; Precision: 0.15; Recall: 0.39; F1-score: 0.22

After applying the weighted random sampler:
Finished training. (Same sklearn UndefinedMetricWarning about precision being ill-defined for labels with no predicted samples.)
Accuracy: 11.97%; Precision: 0.01; Recall: 0.12; F1-score: 0.03
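The warning comes from classes the model never predicts. A small sketch of computing the metrics with that case handled explicitly (y_test, y_pred, and the weighted averaging are assumptions about the evaluation code, not taken from it):

from sklearn.metrics import precision_recall_fscore_support

# zero_division=0 keeps the 0.0 precision for never-predicted classes
# without emitting UndefinedMetricWarning
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average='weighted', zero_division=0)
print(f"Precision: {precision:.2f} Recall: {recall:.2f} F1-score: {f1:.2f}")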

# Create a weighted random sampler to handle class imbalance
import pandas as pd
import torch
from torch.utils.data import WeightedRandomSampler

# Count the samples per class
class_sample_count = pd.Series(y_train).value_counts().sort_index().values
# Compute a weight for each class (inverse of its frequency)
weights = 1. / torch.tensor(class_sample_count, dtype=torch.float32)
# Assign each sample the weight of its class
sample_weights = weights[y_train_tensor]
# Create the weighted random sampler
sampler = WeightedRandomSampler(weights=sample_weights, num_samples=len(sample_weights), replacement=True)
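For completeness, a sketch of how the sampler would typically be plugged into a DataLoader (the dataset and tensor names are assumptions based on the variables above):

from torch.utils.data import DataLoader, TensorDataset

# Assumed: X_train_tensor / y_train_tensor are the training tensors used above
train_dataset = TensorDataset(X_train_tensor, y_train_tensor)
# shuffle must be left off when a sampler is supplied
train_loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)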

After applying the weighted random sampler:
Accuracy: 0.88%; Precision: 0.00; Recall: 0.01; F1-score: 0.00

With Gaussian windowing
Label counts: steer 341, stop 233, right 114, left 100, reverse 9
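The notes do not show how the Gaussian windowing itself was implemented. One common interpretation is applying a Gaussian taper to fixed-length segments, sketched below with placeholder channel count, window length, and standard deviation:

import numpy as np
from scipy.signal.windows import gaussian

# Hypothetical EEG segment of shape (n_channels, n_samples)
segment = np.random.randn(8, 250)
# Gaussian taper over the time axis; 250 samples and std=50 are placeholders
taper = gaussian(250, std=50)
windowed = segment * taper  # broadcasts the taper across channels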

EEGNet Model:
Number of unique labels: 5; Size of training set: 637; Size of testing set: 160
Accuracy: 99.38%; Precision: 0.99; Recall: 0.99; F1-score: 0.99

DeepConvNet:
Number of unique labels: 5; Size of training set: 637; Size of testing set: 160
Accuracy: 96.88%; Precision: 0.97; Recall: 0.97; F1-score: 0.97

EEGTransformer:
Class distribution in training set (label: count): 3: 279, 4: 180, 2: 92, 0: 80, 1: 6
Number of unique labels: 5; Size of training set: 637; Size of testing set: 160
Finished training. Accuracy: 12.50%; Precision: 0.02; Recall: 0.12; F1-score: 0.03

Confusion matrix
Classification report
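A minimal sketch of generating both with scikit-learn (y_test and y_pred are assumed to be the test labels and model predictions from the evaluation above):

from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, zero_division=0))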

After applying the weighted random sampler: Accuracy: 33.12%; Precision: 0.11; Recall: 0.33; F1-score: 0.16


Model 2: DeepConvNet

Model 3: CNN-LSTM Hybrid

Model 4: Spatio-Temporal Convolutional Networks (STCNN)

Model 5: Convolutional Bi-Directional LSTM (Conv-BiLSTM)

Model 6: 3D Convolutional Neural Networks (3D-CNN)

Model 7: Deep Residual Networks (ResNet) for SSVEP

Model 8: Temporal Convolutional Networks (TCN)

Model 9: EEGTransformer


Combination 2: Standard Preprocessing Pipeline with Overlapped Windows

Filtering - Bandpass Filter: 1 Hz to 40 Hz
Re-referencing: Average reference
Artifact Removal: ICA for EOG and EMG artifacts
Normalization: Z-score normalization
Windowing: 1-second windows with 50% overlap (step size of 0.5 seconds)
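A minimal sketch of this pipeline with MNE-Python, assuming a preloaded Raw object named raw that includes an EOG channel (the ICA component count and other parameters below are placeholders, not the project's actual settings):

import mne
from mne.preprocessing import ICA

# raw: mne.io.Raw, loaded and preloaded elsewhere
raw.filter(l_freq=1.0, h_freq=40.0)        # band-pass 1-40 Hz
raw.set_eeg_reference('average')           # average reference
# ICA-based removal of ocular components; EMG components would typically
# be marked manually or with additional criteria
ica = ICA(n_components=15, random_state=42)
ica.fit(raw)
eog_indices, _ = ica.find_bads_eog(raw)    # requires an EOG channel
ica.exclude = eog_indices
ica.apply(raw)
# 1-second windows with 50% overlap (0.5 s step)
epochs = mne.make_fixed_length_epochs(raw, duration=1.0, overlap=0.5, preload=True)
X = epochs.get_data()                      # shape: (n_windows, n_channels, n_samples)
# Z-score normalization per window and channel
X = (X - X.mean(axis=-1, keepdims=True)) / X.std(axis=-1, keepdims=True)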

Model 1: EEGNet

Model 2: DeepConvNet

Model 3: CNN-LSTM Hybrid

Model 4: Spatio-Temporal Convolutional Networks (STCNN)

Model 5: Convolutional Bi-Directional LSTM (Conv-BiLSTM)

Model 6: 3D Convolutional Neural Networks (3D-CNN)

Model 7: Deep Residual Networks (ResNet) for SSVEP

Model 8: Temporal Convolutional Networks (TCN)

Model 9: EEGTransformer


Combination 3: Minimal Preprocessing

Filtering - Bandpass Filter: 1 Hz to 40 Hz
Re-referencing: Single reference electrode (e.g., Cz)
Normalization: Min-max scaling to [0, 1]
Windowing: 2-second non-overlapping windows
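The minimal pipeline differs mainly in the reference, scaling, and window length; a short sketch under the same assumptions as above (raw is an assumed preloaded Raw object, and a channel named 'Cz' must exist in the montage):

import mne

raw.filter(l_freq=1.0, h_freq=40.0)             # band-pass 1-40 Hz
raw.set_eeg_reference(ref_channels=['Cz'])      # single reference electrode
# 2-second non-overlapping windows
epochs = mne.make_fixed_length_epochs(raw, duration=2.0, overlap=0.0, preload=True)
X = epochs.get_data()
# Min-max scaling to [0, 1] per window and channel
# (small epsilon guards against flat segments)
X_min = X.min(axis=-1, keepdims=True)
X_max = X.max(axis=-1, keepdims=True)
X = (X - X_min) / (X_max - X_min + 1e-12)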

Model 1: EEGNet

Model 2: DeepConvNet

Model 3: CNN-LSTM Hybrid

Model 4: Spatio-Temporal Convolutional Networks (STCNN)

Model 5: Convolutional Bi-Directional LSTM (Conv-BiLSTM)

Model 6: 3D Convolutional Neural Networks (3D-CNN)

Model 7: Deep Residual Networks (ResNet) for SSVEP

Model 8: Temporal Convolutional Networks (TCN)

Model 9: EEGTransformer


Combination 4: Extended Artifact Removal

Filtering - Bandpass Filter: 1 Hz to 40 Hz