Meetings

17 Apr

Scale and plot raw EEG data:
plot and visualize both raw and filtered data from both the Ultracortex and the cap
use model detection for left and right
export data from the ROS bag and visualize it
stream and display on the GUI, record the GUI, and visualize the streaming data
convert the EEG data to spectrograms for use in the model
shared forms
systematic review
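The spectrogram conversion mentioned above could be sketched roughly as follows. This is only an illustration, not the project's actual script; the 250 Hz sampling rate is an assumption (a common rate for OpenBCI boards), and the parameter choices are placeholders.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250.0  # assumed sampling rate (placeholder)
x = np.random.randn(int(10 * fs))  # stand-in for one EEG channel, 10 s

# Short-time spectrogram of the channel; the resulting time-frequency
# image is the kind of input a spectrogram-based model would consume.
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=64)
log_spec = 10 * np.log10(Sxx + 1e-12)  # log scale, as commonly fed to a CNN
```

Each column of `log_spec` is one time slice; stacking the per-channel spectrograms would give a multi-channel image per EEG segment.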

viz.py
3.1 kB
viz_16apr.py
1.4 kB
Ultracortex - unfiltered two samples
headset-u-1.png
headset-u-2.png
cap - unfiltered two samples
headset-c-1.png
headset-c-2.png
Ultracortex - filtered two samples using band-pass filter [1-40]Hz
headset-u-filteredRange-1.png
headset-u-filteredRange-2.png
cap - filtered two samples using band-pass filter [1-40]Hz
headset-c-filteredRange-1.png
headset-c-filteredRange-2.png
Ultracortex - filtered two samples using band-pass filter [18-22]Hz
headset-u-filtered-20-1.png
headset-u-filtered-20-2.png
cap - filtered two samples using band-pass filter [18-22]Hz
headset-c-filtered-20-1.png
image.jpg
84.6 kB
The signal seems reasonable apart from these channels. One or two channels going the wrong way is very difficult to avoid; as long as most channels are fine, it should be fine.
If you detect bad, empty, or flat channels, you typically replace them with the average across all channels. So always expect one or two channels to go wrong.
We will have to look at a smaller segment of the recording for a specific activity, to see whether what we want to see actually exists; right now it is very difficult to understand more about the signal.
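The bad-channel replacement described above could be sketched like this. The function name and shapes are illustrative; a real pipeline might instead use a toolbox routine such as MNE's channel interpolation.

```python
import numpy as np

def replace_bad_channels(eeg, bad_idx):
    """Replace bad/empty/flat channels with the average of the good channels.

    eeg:     (n_channels, n_samples) array
    bad_idx: indices of channels flagged as bad
    """
    bad = set(bad_idx)
    good = [i for i in range(eeg.shape[0]) if i not in bad]
    repaired = eeg.copy()
    repaired[list(bad)] = eeg[good].mean(axis=0)  # average across good channels
    return repaired

# Example: channel 3 is flat; fill it with the mean of the other channels.
data = np.vstack([np.random.randn(3, 1000), np.zeros((1, 1000))])
fixed = replace_bad_channels(data, bad_idx=[3])
```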

In any case, the power spectral density makes more sense to compute after we do our basic filtering, so that at least the artifacts are cleaned out; it might look completely different after they are removed.
Filter before computing the power spectral density.
Do at least the basic filtering, unless you want to restrict the power spectral density to specific bands, like alpha, beta, gamma, delta, and so on.
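The filter-then-PSD order above can be sketched with SciPy. This is a minimal illustration on a synthetic signal, not the project's code; the 250 Hz sampling rate and the filter order are assumptions, and the [1-40] Hz range matches the band-pass used in the plots above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250.0  # assumed sampling rate (placeholder)

def bandpass(x, low=1.0, high=40.0, fs=fs, order=4):
    """Zero-phase Butterworth band-pass, matching the [1-40] Hz range above."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Synthetic single-channel signal: 10 Hz alpha rhythm plus noise.
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Filter first, then compute the PSD on the cleaned signal.
freqs, psd = welch(bandpass(x), fs=fs, nperseg=int(2 * fs))
```

On real data the pre-filtered PSD can look completely different, which is exactly why the filtering comes first.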

Now that we have kept frequencies between 1 and 40 Hz, the signal seems reasonable; it looks good and quite clean.
Cap:
The cap will definitely solve a lot of issues with getting a good position: with the Ultracortex, if you don't put it on properly it can sit a little angled, so the cap will solve this.
Another advantage of the cap is that when we move to the car, it is going to be a lot easier to justify for safety. People wear caps in cars; that's an established thing. People don't wear the other headset in cars, and if it falls down in front of the driver's eyes, that can be an issue.

The driver can have a cushion between them and the seat to separate them; we could put something in there. That would be one option.

I have this error: a mismatch between the expected input dimension for the model and the actual data being processed.
Okay, so that means the size of the numpy array you're passing into the model doesn't match the model's expected input.
One thing that happens is that the durations of your activities are not all the same. A neural network usually expects a standard input size, and because the activities don't all have the same duration, some segments will be smaller and some bigger. That is probably why you're getting this problem.
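One common fix for the variable-duration problem described above is to pad or truncate every segment to a fixed length before stacking them into a batch. A minimal sketch (the function name, channel count, and target length are illustrative):

```python
import numpy as np

def fix_length(segment, n_samples):
    """Pad with zeros or truncate a (channels, time) segment to n_samples.

    Gives variable-duration epochs the fixed input size a network expects.
    """
    n_ch, n_t = segment.shape
    if n_t >= n_samples:
        return segment[:, :n_samples]          # truncate long segments
    padded = np.zeros((n_ch, n_samples), dtype=segment.dtype)
    padded[:, :n_t] = segment                  # zero-pad short segments
    return padded

# Three segments of different durations, forced to a common length of 512.
batch = [np.random.randn(8, n) for n in (400, 512, 650)]
fixed = np.stack([fix_length(s, 512) for s in batch])  # shape (3, 8, 512)
```

Resampling every segment to a common length, or cropping to the shortest activity, would be alternative choices.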
So I guess the first task is to decode the stimulus, to know which stimulus was displayed and when; then eventually you want to be able to predict the driving command.
The commands, or the stimuli, are associated with timestamps: you know when a stimulus started and finished, or when a specific action by the driver happened. You have a timestamp at the start and one at the end, and this is recorded in the ROS bag. So you write some Python code that reads these timestamps, goes to the EEG data, and segments the signal at those specific times. You need a script that iterates through all the recorded timestamps and creates the segments.
We have the labels of which stimulus was published and when, so we can use that to automatically label the data. As Stamos says, we have all the EEG signals plus this extra signal indicating which stimulus, so we can just use that to generate the labels.
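The timestamp-based segmentation and labeling described above could look roughly like this. It is a sketch: the function and argument names are made up for illustration, and reading the actual topics out of the bag would use the rosbag API rather than in-memory arrays.

```python
import numpy as np

def segment_by_events(eeg, eeg_times, events):
    """Cut labeled segments out of a continuous recording.

    eeg:       (n_channels, n_samples) array
    eeg_times: (n_samples,) timestamps aligned with the EEG samples
    events:    list of (t_start, t_end, label) tuples from the stimulus topic
    """
    segments, labels = [], []
    for t_start, t_end, label in events:
        # Select the samples that fall inside this stimulus window.
        mask = (eeg_times >= t_start) & (eeg_times < t_end)
        segments.append(eeg[:, mask])
        labels.append(label)
    return segments, labels
```

Iterating the stimulus messages in the bag yields the `events` list, so the EEG data gets labeled automatically with no manual annotation.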

Ground Truth

"Ground truth" refers to the accurate information used as the standard against which the outputs of your model are compared. In machine learning, particularly in supervised learning, ground truth data is essential as it contains the actual labels or results that the model aims to predict.