BCI Paradigms Cheat sheet

This article was initially intended to be part of the previous article, which discussed the analytics layer. However, the latter grew into a beast of its own. I suggest referring to it for a comprehensive overview of the technical aspects of the analytics layer (i.e., pipeline construction, filtering).

In this cheat sheet, I will provide a concise summary of the various BCI paradigms I’m familiar with.

My goal with this article is to document what can be done with BCIs, so that you can build new applications on top of them.

In this review, I focus on paradigms that can be implemented using consumer multimodal BCIs. My primary interest lies in paradigms that are reliable and have achieved a certain level of maturity. I also try to highlight, to the best of my knowledge, the technical restrictions associated with each paradigm.

Please note that the following list is in no particular order:

EEG paradigms:

· Alpha wave paradigm

· Motor imagery

· SSVEP, ASSR

· P300

· Cognitive state monitoring

· Cocktail party problem

Other modalities:

· Facial expressions classification (fEMG)

· Arousal (EDA)

· Heart dynamics

· Electrooculography (EOG)

Alpha wave paradigm. If you are getting started in BCI, this is the first paradigm you should get familiar with. The alpha wave is best measured toward the back of the head (visual system, occipital lobe) but is so strong that it can be measured almost anywhere on the scalp. The paradigm is simple: when you close your eyes, the power in the alpha band goes up; when you open your eyes, it goes down [1]. You can't miss it. Starting with this paradigm will help you develop your first pipeline and get your tools straight. And because it's an easy one, you will be rewarded quickly.

Technical considerations:

Pipeline Overview:
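To make the pipeline concrete, here is a minimal sketch of an eyes-closed detector based on alpha-band power. It assumes a 1-D NumPy array of raw EEG from a single occipital channel; the sampling rate and the detection threshold are placeholders that must be adapted to your headset and calibrated per subject.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (placeholder; depends on your device)

def alpha_power(eeg_window: np.ndarray) -> float:
    """Mean power spectral density in the 8-12 Hz alpha band."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def eyes_closed(eeg_window: np.ndarray, threshold: float) -> bool:
    """Naive detector: alpha power rises when the eyes close.
    The threshold should be calibrated per subject and headset."""
    return alpha_power(eeg_window) > threshold
```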

[1] Barry, R. J., Clarke, A. R., Johnstone, S. J., Magee, C. A., & Rushby, J. A. (2007). EEG differences between eyes-closed and eyes-open resting conditions. Clinical neurophysiology, 118(12), 2765–2773.

Motor Imagery [2]. This is the dream of many BCI developers. Imagine controlling a wheelchair or a mouse using only your thoughts. Motor imagery has been tried again and again. The truth is, it works, but not as well as we would like it to (in-brain implants perform strongly here).

Under good conditions, you can expect to classify between right hand, left hand, and feet, but unless you equip yourself with a high-density, wet-electrode, research-grade system, that's about as far as you'll go.

This is one of the hard problems of BCI. If you want to give it a try, I suggest you start by working on existing datasets [3], to save yourself the logistics of recording your own. Once you achieve satisfying results, you can move on to developing an online acquisition and modeling process.

Technical considerations:

Pipeline Overview:
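If you go the Riemannian route of [4, 5], an offline classification pipeline might look like the following sketch, using the pyriemann and scikit-learn libraries. The data here is synthetic noise standing in for band-pass filtered epochs from a public dataset [3], so the printed score is meaningless; the shapes and labels are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# Synthetic stand-in: replace with band-pass filtered (e.g. 8-30 Hz)
# motor imagery epochs from a public dataset [3].
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8, 512))  # (trials, channels, samples)
y = rng.integers(0, 3, 60)             # 0 = left hand, 1 = right hand, 2 = feet

clf = make_pipeline(
    Covariances(estimator="oas"),    # one spatial covariance matrix per trial
    TangentSpace(metric="riemann"),  # map SPD matrices to a flat tangent space
    LogisticRegression(max_iter=1000),
)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```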

[2] Lotze, M., & Halsband, U. (2006). Motor imagery. Journal of Physiology-paris, 99(4–6), 386–395.

[4] Barachant, A., Bonnet, S., Congedo, M., & Jutten, C. (2011). Multiclass brain–computer interface classification by Riemannian geometry. IEEE Transactions on Biomedical Engineering, 59(4), 920–928.

[5] Barachant, A., Bonnet, S., Congedo, M., & Jutten, C. (2010). Riemannian Geometry Applied to BCI Classification. Lva/Ica, 10, 629–636.

SSVEP, ASSR [8, 10]. These two paradigms are similar: SSVEP applies to visual stimuli, while ASSR applies to auditory stimuli. In my opinion, the best implementation of SSVEP as a user interface belongs to NextMind [9]. SSVEP has a very high SNR, making it easy to detect. ASSR works, but the subject is likely to experience tinnitus, and the stimuli are very annoying to hear. I might be missing something, but it doesn't appear to have much potential beyond medical applications (see the cocktail party problem for a high-potential auditory BCI application).

I have read that SSVEP amplitude is modulated by attention [11]. On average, this is probably true, but in my experience, inter-individual and even inter-trial variations largely dominate the amplitude, making it hard to extract an attentional signal.

Technical considerations:

Pipeline Overview:
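As a sketch, one standard way to detect which flicker frequency the user is attending is canonical correlation analysis (CCA) against sine/cosine templates. The candidate frequencies, sampling rate, and harmonic count below are assumptions to adapt to your stimuli.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 256  # sampling rate in Hz (placeholder)

def ssvep_references(freq: float, n_samples: int, n_harmonics: int = 2) -> np.ndarray:
    """Sine/cosine reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def detect_frequency(eeg: np.ndarray, candidates=(8.0, 10.0, 12.0, 15.0)) -> float:
    """Return the candidate stimulation frequency whose reference templates
    correlate best with the EEG window (shape: samples x channels)."""
    scores = {}
    for f in candidates:
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, ssvep_references(f, len(eeg)))
        scores[f] = np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]
    return max(scores, key=scores.get)
```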

[8] Zhu, D., Bieger, J., Molina, G. G., & Aarts, R. M. (2010). A survey of stimulation methods used in SSVEP-based BCIs. Computational intelligence and neuroscience, 2010, 1–12.

[10] Korczak, P., Smart, J., Delgado, R., Strobel, T. M., & Bradford, C. (2012). Auditory steady-state responses. Journal of the American Academy of Audiology, 23(03), 146–170.

[11] Morgan, S. T., Hansen, J. C., & Hillyard, S. A. (1996). Selective attention to stimulus location modulates the steady-state visual evoked potential. Proceedings of the National Academy of Sciences, 93(10), 4770–4774.

P300 [12]. The P300 speller is a classic application [13], but the backbone is the oddball paradigm [14]. The best consumer-oriented BCI implementation of the paradigm probably belongs to Neurable [15]. While I haven't compared them myself, my knowledge of SSVEP leads me to believe that SSVEP probably offers a higher brain-computer information transfer rate. Still, this is a core BCI paradigm.

Technical considerations:

Pipeline Overview:
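A sketch of the core of an oddball pipeline: epoch the EEG around stimulus onsets and compare the average target response with the average non-target response in the window where the P300 is expected. The sampling rate and window bounds are placeholders.

```python
import numpy as np

FS = 256                 # sampling rate in Hz (placeholder)
PRE = int(0.1 * FS)      # 100 ms baseline before stimulus onset
POST = int(0.6 * FS)     # 600 ms after onset

def epoch(eeg: np.ndarray, onsets: np.ndarray) -> np.ndarray:
    """Cut fixed windows around stimulus onsets (eeg: samples x channels)."""
    return np.stack([eeg[o - PRE:o + POST] for o in onsets])

def p300_evidence(eeg, target_onsets, nontarget_onsets) -> float:
    """Average target-minus-nontarget amplitude 250-500 ms post-onset,
    where the oddball response [14] is expected to peak."""
    diff = (epoch(eeg, target_onsets).mean(axis=0)
            - epoch(eeg, nontarget_onsets).mean(axis=0))
    win = slice(PRE + int(0.25 * FS), PRE + int(0.5 * FS))
    return float(diff[win].mean())
```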

[12] Linden, D. E. (2005). The P300: where in the brain is it produced and what does it tell us?. The Neuroscientist, 11(6), 563–576.

[14] García-Larrea, L., Lukaszewicz, A. C., & Mauguiére, F. (1992). Revisiting the oddball paradigm. Non-target vs neutral stimuli and the evaluation of ERP attentional effects. Neuropsychologia, 30(8), 723–741.

Cognitive state monitoring (stress / relaxation / engagement / motivation / cognitive load). These cognitive states are passive paradigms and can be decoded efficiently using EEG. They are the signals most represented among today's consumer-grade EEG systems (Neurosity, Muse, Neurable) [16, 17, 18]. There is an extensive literature on the topic; here are a few samples [19–22].

Technical considerations:

Pipeline Overview:
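As one concrete example, here is a sketch of the classic beta / (alpha + theta) engagement index, the kind of band-ratio index used in work like [20]. The sampling rate and band edges follow common conventions but are still placeholders.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (placeholder)

def band_power(x: np.ndarray, lo: float, hi: float) -> float:
    """Mean PSD of a 1-D EEG window within [lo, hi) Hz."""
    freqs, psd = welch(x, fs=FS, nperseg=FS * 2)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def engagement_index(eeg_window: np.ndarray) -> float:
    """beta / (alpha + theta), a classic engagement index (cf. [20]).
    Higher values are commonly read as higher task engagement."""
    theta = band_power(eeg_window, 4, 8)
    alpha = band_power(eeg_window, 8, 12)
    beta = band_power(eeg_window, 12, 30)
    return beta / (alpha + theta)
```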

[19] Antonenko, P., Paas, F., Grabner, R., & Van Gog, T. (2010). Using electroencephalography to measure cognitive load. Educational psychology review, 22, 425–438.

[20] Nuamah, J. K., Seong, Y., & Yi, S. (2017, March). Electroencephalography (EEG) classification of cognitive tasks based on task engagement index. In 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA) (pp. 1–6). IEEE.

[21] Kelley, N. J., Hortensius, R., Schutter, D. J., & Harmon-Jones, E. (2017). The relationship of approach/avoidance motivation and asymmetric frontal cortical activity: A review of studies manipulating frontal asymmetry. International Journal of Psychophysiology, 119, 19–30.

[22] Hou, X., Liu, Y., Sourina, O., Tan, Y. R. E., Wang, L., & Mueller-Wittig, W. (2015, October). EEG based stress monitoring. In 2015 IEEE International Conference on Systems, Man, and Cybernetics (pp. 3110–3115). IEEE.

Cocktail party problem. This is, in my opinion, one of the BCI paradigms with the highest underexploited potential (although it appears to be hard to implement). The concept is to develop a neuro-steered hearing aid [23] that selects which sounds in the environment should be amplified, based on attention feedback from the wearer. A very promising application of BCI technologies.

I recently found this demo [24], and at first I thought they had solved the neuro-steering challenge, but then I noticed that eye movements are likely to be the effector, making it, effectively, an EOG paradigm (note that IDUN also showed they could track eye movements, using earbuds in their case [25]). Still, I think this is a valuable approach, and I look forward to seeing how far it can go.

One of the challenges of the cocktail party problem consists in separating the audio sources in a real-world scenario. While not a BCI issue per se, it is still an integral part of the neuro-steered hearing aid challenge.

Technical considerations:

Pipeline Overview:
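To give a flavor of the decoding side, here is a sketch of the backward (stimulus-reconstruction) approach of [26]: a linear model maps time-lagged EEG to the speech envelope, and the attended speaker is the candidate whose envelope correlates best with the reconstruction. The lag count and the commented-out training step are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(eeg: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of the EEG (samples x channels) so a
    linear model can integrate a few hundred ms of neural response."""
    n, c = eeg.shape
    out = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        out[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return out

# Training, on a session where the attended envelope is known:
# decoder = Ridge(alpha=1.0).fit(lag_matrix(train_eeg, 64), train_envelope)

def attended_stream(decoder: Ridge, eeg, env_a, env_b) -> str:
    """Reconstruct the attended envelope from EEG [26] and pick the
    candidate audio stream that correlates best with it."""
    rec = decoder.predict(lag_matrix(eeg, n_lags=64))
    corr_a = np.corrcoef(rec, env_a)[0, 1]
    corr_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if corr_a > corr_b else "B"
```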

[23] Geirnaert, S., Vandecappelle, S., Alickovic, E., de Cheveigné, A., Lalor, E., Meyer, B., … & Bertrand, A. (2021). Electroencephalography-Based Auditory Attention Decoding: Toward Neurosteered Hearing Devices. IEEE Signal Processing Magazine, 38(4), 89–102.

[26] O’Sullivan, J. A., Power, A. J., Mesgarani, N., Rajaram, S., Foxe, J. J., Shinn-Cunningham, B. G., … & Lalor, E. C. (2015). Attentional selection in a cocktail party environment can be decoded from single-trial EEG. Cerebral Cortex, 25(7), 1697–1706.

Facial expressions classification. Facial expression analysis is commonly done using cameras; the best system on the market is likely [27], but more and more options are becoming available. There are various opinions regarding the meaning of facial expressions. Personally, I find them very informative about the emotional reactions of subjects, and they convey pertinent information about their state of mind. The best system I know of is our Nucleus-Hermès [28], an fEMG system that classifies facial expressions using a smart-glasses form factor, but I'm obviously biased in that regard.

Technical considerations:

Pipeline Overview:
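For fEMG specifically, the usual recipe is windowed time-domain features fed to a standard classifier; [29] reviews the feature set. Here is a sketch of the feature-extraction step, assuming a (samples x channels) window of band-pass filtered EMG.

```python
import numpy as np

def emg_features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features per channel (cf. [29]):
    mean absolute value, RMS, waveform length, zero-crossing count.
    `window` has shape (samples, channels)."""
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    zc = (np.diff(np.signbit(window).astype(np.int8), axis=0) != 0).sum(axis=0)
    return np.concatenate([mav, rms, wl, zc])

# These feature vectors can then feed any standard classifier
# (e.g. an SVM or a random forest) trained on labeled expressions.
```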

[28] https://medium.com/@re-ak/what-is-the-nucleus-herm%C3%A8s-eb6d84551659

[29] Phinyomark, A., Phukpattaranont, P., & Limsakul, C. (2012). Feature reduction and selection for EMG signal classification. Expert systems with applications, 39(8), 7420–7431.

Arousal (from EDA). Arousal can be estimated using EEG, but EDA reacts strongly to arousing stimuli. These events can be used to establish precisely which stimuli are most likely responsible for distracting the subject.

Technical considerations:

Pipeline Overview:
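A sketch of SCR event detection in the spirit of [31]: isolate the fast, phasic component of the EDA signal, then locate its peaks. The sampling rate, filter cutoff, and thresholds are placeholders to tune per device.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 32  # EDA sampling rate in Hz (placeholder)

def scr_peaks(eda: np.ndarray) -> np.ndarray:
    """Detect skin conductance responses (cf. [31]): high-pass filter
    to remove the slow tonic level, then find peaks in the phasic part.
    Height/distance thresholds are placeholders."""
    b, a = butter(2, 0.05, btype="highpass", fs=FS)
    phasic = filtfilt(b, a, eda)
    peaks, _ = find_peaks(phasic, height=0.01, distance=FS)  # ~microsiemens
    return peaks  # sample indices of candidate SCR events
```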

[31] Kalinkov, K. (2020, September). Algorithm for peak detection in the Skin Conductance Response component of the EDA signals. In 2020 International Conference on Biomedical Innovations and Applications (BIA) (pp. 89–91). IEEE.

Heart dynamics. Heart dynamics convey a lot of information about metabolism and stress levels. Heart rate has been demonstrated to react to valence, but that signal is hard to use because it gets buried under all the other effects that modulate heart rate.

Technical considerations:

Pipeline Overview:
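As a sketch, here is a standard short-term heart rate variability metric (RMSSD) computed from detected beats, assuming a raw single-channel ECG or PPG array; the sampling rate and peak-detection parameters are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 250  # ECG/PPG sampling rate in Hz (placeholder)

def rmssd(signal: np.ndarray) -> float:
    """Root mean square of successive inter-beat-interval differences,
    a standard short-term HRV metric; lower values are commonly
    associated with higher stress. Peak parameters are placeholders."""
    peaks, _ = find_peaks(signal, distance=int(0.4 * FS),
                          prominence=np.std(signal))
    ibi = np.diff(peaks) / FS * 1000.0  # inter-beat intervals in ms
    return float(np.sqrt(np.mean(np.diff(ibi) ** 2)))
```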

Electrooculography (EOG). I haven't played with EOG much, but I plan to. In the meantime, eye movements are a well-known artifact when sampling EEG. EOG measures eye movement, but I don't know how accurate it can be as an eye-tracker. Beyond eye-tracking, eye movements can reveal behavioral information about the wearer.

Technical considerations:
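A crude sketch of what an EOG event detector might look like: flag samples where the signal slope exceeds a threshold. The threshold and sampling rate are placeholders, and a real system would separate blinks (large, vertical) from saccades (step-like) by waveform shape.

```python
import numpy as np

FS = 256  # sampling rate in Hz (placeholder)

def detect_eye_events(eog: np.ndarray, thresh_uv_per_s: float = 500.0) -> np.ndarray:
    """Return sample indices where the EOG slope exceeds a threshold,
    a crude marker for saccades and blinks. The threshold is a
    placeholder to calibrate per subject and electrode placement."""
    slope = np.abs(np.diff(eog)) * FS  # microvolts per second
    return np.flatnonzero(slope > thresh_uv_per_s)
```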

Other considerations

There are at least two other paradigms that I’m aware of, although I never studied them in detail. As far as I know, they are similar to P300.

· Mismatch Negativity [32]

· Face recognition N170 [33]

Also, LLMs and MRI have been paired as a means to generate a brain-to-text interface. Given how novel and important this advancement would be, I feel a dedicated article is needed, but this is not my area of expertise, so I'll leave it to others. Here's a link if the topic interests you [34].

If you think I left important experiments aside, please reach out. This list represents the metrics and experiments I have been in contact with, but I'm sure there is some work I'm not aware of.

I obviously left aside most of the work done with MRI/MEG and in-brain implants, as these are complex systems which are unlikely to reach the consumer market soon.
