
DL Models


Model 1: EEGNet

Application: General EEG classification and Brain-Computer Interface (BCI) tasks.
Paper/GitHub:
EEGNet: A Compact Convolutional Neural Network for EEG-based Brain-Computer Interfaces
Model Description:
Layers:
Initial Convolutional Layer: 1 layer
Depthwise Convolutional Layer: 1 layer
Separable Convolutional Layer: 1 layer
Classification Layer: 1 layer

Justification: Designed to be compact and efficient for EEG data processing, capturing both spatial and temporal features.
Types: Conv2D, DepthwiseConv2D, SeparableConv2D, Linear
Loss Function: Cross-Entropy Loss
Epochs: 150
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Achieves high accuracy on multiple BCI datasets, often above 90% for binary classification tasks and 60-80% for multiclass classification.
Customized code on my data:
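A minimal PyTorch sketch of the four-block EEGNet structure described above (temporal conv, depthwise spatial conv, separable conv, linear classifier). This is not the authors' reference code; the defaults (F1=8, D=2, F2=16, kern_length=64) follow the paper's common configuration, and the input shape (batch, 1, n_channels, n_samples) is an assumption to adapt to your data.

```python
import torch
import torch.nn as nn


class EEGNet(nn.Module):
    """EEGNet sketch. Assumed input shape: (batch, 1, n_channels, n_samples)."""

    def __init__(self, n_classes=2, n_channels=64, n_samples=128,
                 F1=8, D=2, F2=16, kern_length=64, dropout=0.5):
        super().__init__()
        self.block1 = nn.Sequential(
            # temporal convolution over the time axis
            nn.Conv2d(1, F1, (1, kern_length), padding="same", bias=False),
            nn.BatchNorm2d(F1),
            # depthwise convolution across electrodes (spatial filters)
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
        )
        self.block2 = nn.Sequential(
            # separable convolution = depthwise temporal conv + pointwise conv
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding="same",
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, 1, bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        # two pooling stages shrink the time axis by 4 * 8 = 32
        self.classify = nn.Linear(F2 * (n_samples // 32), n_classes)

    def forward(self, x):
        x = self.block2(self.block1(x))
        return self.classify(x.flatten(1))
```

Trained with cross-entropy loss as noted above, e.g. `loss = nn.CrossEntropyLoss()(EEGNet()(batch), labels)`.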

Model 2: DeepConvNet

Application: General EEG tasks, more complex and deeper representations.
Paper/GitHub:
Deep learning with convolutional neural networks for EEG decoding and visualization
Model Description:
Layers:
Several convolutional layers followed by fully connected layers.
Typically includes 4 Conv2D layers and 2 fully connected layers.
Justification: Deeper layers allow for capturing more complex patterns in EEG data.
Types: Conv2D, MaxPooling, Dense (Fully Connected)
Loss Function: Cross-Entropy Loss
Epochs: 100
Metrics: Accuracy, Precision, Recall, F1-Score
Performance: Known for high accuracy and robustness across different EEG tasks.
Customized code on my data:
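A simplified PyTorch sketch of the DeepConvNet layout described above (stacked conv-pool blocks followed by two fully connected layers). This is not the authors' code; the 25/50/100/200 filter progression follows the paper, while the global pooling before the classifier and the input shape (batch, 1, n_channels, n_samples, with roughly 500+ samples so the four pooling stages fit) are assumptions for this sketch.

```python
import torch
import torch.nn as nn


class DeepConvNet(nn.Module):
    """DeepConvNet sketch. Assumed input: (batch, 1, n_channels, n_samples)."""

    def __init__(self, n_classes=2, n_channels=64, dropout=0.5):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, (1, 10)),
                nn.BatchNorm2d(c_out),
                nn.ELU(),
                nn.MaxPool2d((1, 3)),
                nn.Dropout(dropout),
            )

        self.features = nn.Sequential(
            nn.Conv2d(1, 25, (1, 10)),           # temporal convolution
            nn.Conv2d(25, 25, (n_channels, 1)),  # spatial convolution
            nn.BatchNorm2d(25),
            nn.ELU(),
            nn.MaxPool2d((1, 3)),
            nn.Dropout(dropout),
            block(25, 50),
            block(50, 100),
            block(100, 200),
        )
        # global pooling keeps the classifier input independent of n_samples
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(200, 100),                 # two fully connected layers
            nn.ELU(),
            nn.Linear(100, n_classes),
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)
```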

Model 3: CNN-LSTM Hybrid

Model 4: Spatio-Temporal Convolutional Networks (STCNN)

Model 5: Convolutional Bi-Directional LSTM (Conv-BiLSTM)

Model 6: 3D Convolutional Neural Networks (3D-CNN)

Model 7: Deep Residual Networks (ResNet) for SSVEP

Model 8: Temporal Convolutional Networks (TCN)

Model 9: EEGTransformer



Dropout Rate (dropoutRate)

Explanation: Dropout is a regularization technique used to prevent overfitting by randomly setting a fraction of input units to 0 during training.
Typical Values: Common values are between 0.2 and 0.5.
Choosing a Value:
High Dropout Rate (e.g., 0.5): Use if you have a small dataset or observe overfitting.
Low Dropout Rate (e.g., 0.2): Use if you have a large dataset or observe underfitting.
Importance: Dropout helps in generalizing the model by preventing it from relying too much on any single neuron.
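The train/eval behavior above can be seen directly in PyTorch: during training, dropout zeroes a random subset of units and rescales the survivors by 1/(1-p); during evaluation it is a no-op (the rate 0.5 here is just an illustrative choice).

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # illustrative rate; tune for your dataset size
x = torch.ones(1, 10)

drop.train()
# training mode: roughly half the units become 0.0,
# the rest are scaled to 2.0 so the expected sum is preserved
print(drop(x))

drop.eval()
# evaluation mode: the input passes through unchanged
print(drop(x))
```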

Temporal Kernel Length (kernLength)

Explanation: This defines the size of the convolutional filter applied over the temporal dimension of the input data.
Choosing a Value:
Rule of Thumb: It can be set to approximately half the sampling rate. For instance, if your sampling rate is 128 Hz, a kernel length of 64 is a reasonable starting point.
Importance: This affects how the model captures temporal features. Longer kernels can capture more extended temporal dependencies.
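The rule of thumb above is a one-liner; the 128 Hz sampling rate is just the example from the text.

```python
fs = 128               # sampling rate in Hz (example value from the notes)
kern_length = fs // 2  # rule of thumb: ~half the sampling rate
# a kernel of fs/2 samples spans 0.5 s of signal, so it can capture
# oscillations down to roughly 2 Hz
print(kern_length)     # 64
```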

Number of Temporal Filters (F1)

Explanation: The number of filters in the first convolutional layer, capturing different temporal features.
Typical Values: Common values are 8 or 16.
Importance: More filters can capture more diverse features but also increase the computational load.

Depth Multiplier (D)

Explanation: This defines the number of spatial filters learned within each temporal convolution.
Typical Values: Common values are 1 or 2.
Importance: Higher values allow capturing more spatial features but increase model complexity.

Number of Pointwise Filters (F2)

Explanation: This is usually set to F1 * D, so the pointwise convolution has the same number of filters as the F1 × D feature maps produced by the depthwise stage.
Importance: Determines the complexity and capacity of the separable convolution layer.
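Putting the three filter hyperparameters together, with the common defaults mentioned above (F1 = 8, D = 2):

```python
F1 = 8        # temporal filters in the first convolution
D = 2         # spatial filters learned per temporal filter
F2 = F1 * D   # pointwise filters match the depthwise stage's output channels
print(F1, D, F2)  # 8 2 16
```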