EEG study design and data analysis resources

Analyzing infant EEG data is a challenging task, especially in the case of visual stimulation, because of two main factors: 1) Due to infants’ limited attentional span, the data segments during which infants effectively attend to the stimuli are very short; 2) Since infants are almost unconstrained, the most frequent artifacts are caused by a variety of movements (head, arms, frowning, sucking) that generate non-stereotyped artifacts constantly varying in topography and temporal dynamics. Because of these factors, artifact removal for infant EEG data is a subjective and time-consuming task, and infant EEG data analysis is generally more challenging than adult data analysis.

For this purpose, the Baby Lab provides important resources for its users:

  • To design your study so as to obtain reliable EEG responses within infants’ limited attentional span, you may seek advice from both the lab manager and the scientific advisor.
  • For efficient, semi-automatic EEG artifact removal, check out NEAR, our new pipeline, available in the form of an EEGLAB plugin (details in the next paragraph).
  • For EEG data analysis relative to frequency-tagging designs, read the paragraph on Frequency-tagging analysis describing the main features of the frequency-tagging design and a toolbox for frequency-tagging analysis developed in the lab.
  • For coregistering EEG electrode positions with anatomical MRI images using the 3D camera available in the lab, check out this coregistration pipeline manual, together with its scripts (with and without the FieldTrip software).
  • For any other advice concerning EEG data analysis both at the sensor and source level, contact the lab manager.

NEAR pipeline for newborn and infant EEG artifact removal

Recently, we proposed a pipeline called NEAR (Newborns EEG Artifact Removal) for removing artifacts from short and heavily contaminated newborn EEG data (Kumaravel et al., 2022). NEAR is compatible with the EEGLAB software and can be executed either in a fully automated or in a semi-automated way with custom scripts. Further, NEAR supports both single-subject and group-level analyses. The software can be found in this repository.

For a step-by-step tutorial on sample newborn data collected in our lab/hospital for the study by Buiatti et al. (2019), please visit here.

Frequency-tagging analysis

In general, extracting functional brain responses from EEG signals requires long-lasting, repeated stimulus presentations because of the interference of endogenous EEG activity and artifacts of biological and technical origin. One successful experimental paradigm (hereafter termed “frequency tagging”) that minimizes this constraint is based on strictly periodic stimulation and exploits the brain’s tendency to respond to a sensory stimulus presented periodically at a specific (“tag”) temporal frequency by resonating at that same frequency throughout the stimulation period (Picton et al., 1999; Norcia et al., 2015). This effect is manifested in the EEG recordings as a sharp peak in the signal’s power spectrum at that specific “tag” frequency. Since ongoing EEG activity is broadband in frequency, the stimulus-related response is easily discriminated in the frequency domain from stimulus-unrelated activity, yielding a much higher Signal-to-Noise Ratio (SNR) than the one obtained with the repetitive presentation of single, temporally isolated stimuli (event-related paradigm). Moreover, since most EEG artifacts (eye movements, blinks, motion) are also broadband in frequency, frequency tagging is more robust to artifacts and requires a lighter artifact rejection procedure than event-related designs.
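The spectral logic described above can be sketched numerically: a weak periodic response buried in much larger broadband noise still produces a sharp, high-SNR peak at the tag frequency. The snippet below is an illustrative simulation with hypothetical parameters (sampling rate, tag frequency, amplitudes, and the neighbouring-bin SNR definition are assumptions for demonstration, not the lab’s actual analysis code):

```python
import numpy as np

# Illustrative frequency-tagging simulation (hypothetical parameters).
rng = np.random.default_rng(0)
fs = 250.0            # sampling rate in Hz (assumed)
duration = 40.0       # seconds of periodic stimulation (assumed)
tag_freq = 6.0        # "tag" stimulation frequency in Hz (assumed)
t = np.arange(0, duration, 1 / fs)

# Simulated EEG: small periodic response + much larger broadband noise
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + 5.0 * rng.standard_normal(t.size)

# Power spectrum via FFT
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# SNR at the tag frequency: power in the tag bin divided by the mean power
# of surrounding bins (excluding the bins immediately adjacent to the tag
# bin) -- one common way to quantify frequency-tagged responses.
tag_bin = int(np.argmin(np.abs(freqs - tag_freq)))
neighbours = np.r_[tag_bin - 12 : tag_bin - 2, tag_bin + 3 : tag_bin + 13]
snr = power[tag_bin] / power[neighbours].mean()
print(f"SNR at {tag_freq} Hz: {snr:.1f}")
```

Even though the periodic response here is ten times smaller in amplitude than the noise, its power concentrates in a single frequency bin while the noise spreads across the whole spectrum, so the SNR at the tag frequency is high.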

Frequency tagging is widely used to test the integrity of sensory processing, especially in the visual domain (Steady-State Visually Evoked Potentials—SSVEP, see Norcia et al., 2015) and in the auditory domain (Auditory Steady-State Responses—ASSR, see Picton et al., 1999). Typical presentation rates depend on the frequency ranges in which these sensory systems are most responsive: 8–20 Hz for SSVEP and around 40 Hz for ASSR. In these frequency ranges, it is possible to obtain very high SNR because the amplitude of both ongoing EEG activity and biological EEG artifacts is low.

Recent studies have extended the use of frequency tagging to lower frequencies (0.5–7 Hz, encompassing the classical delta and theta frequency ranges), either because they focused on higher-level neural processing characterized by longer temporal scales (e.g., syllables and words in speech perception in adults (Buiatti et al., 2009) and infants (Kabdebon et al., 2015)), or because the experimental design was based on the infrequent presentation of key stimuli among control ones (e.g., selectivity for faces among other visual objects in adults (Rossion et al., 2015) and infants (De Heering & Rossion, 2015)), or because neural processing is slow due to the immaturity of the visual system (e.g., face perception in newborns (Buiatti et al., 2019)). Obtaining reliable brain responses in this low-frequency range is more challenging: the lower the tag frequency, the longer the presentation needed to capture the related oscillatory response, and the interference of both ongoing EEG activity and artifacts increases as frequency decreases. Still, even at low frequencies, the spectral specificity of the stimulus-related response and the robustness to artifacts make frequency tagging very well suited for neuro-cognitive testing of special populations with a limited attention span, such as infants and patients (Kabdebon et al., 2022).
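The need for longer presentations at low tag frequencies follows directly from spectral resolution: the spacing between frequency bins of a power spectrum equals 1/T, where T is the epoch duration, so isolating a low-frequency tag peak from its neighbours requires proportionally more data. A minimal numerical illustration (the durations and sampling rate below are arbitrary examples, not taken from the cited studies):

```python
import numpy as np

# Frequency resolution of an FFT-based power spectrum is 1/T.
# E.g., resolving a 0.8 Hz tag response to within 0.1 Hz requires at
# least a 10 s epoch, whereas at 8 Hz the same relative precision
# needs only 1 s. (Parameters below are purely illustrative.)
fs = 250.0  # assumed sampling rate in Hz
for duration in (2.0, 10.0, 40.0):
    n = int(duration * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    resolution = freqs[1] - freqs[0]   # bin spacing = 1 / duration
    print(f"{duration:5.1f} s epoch -> {resolution:.3f} Hz resolution")
```

This is why low-frequency tagging designs rely on long continuous stimulation sequences, which in turn makes their robustness to artifacts particularly valuable with infants.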

Because of the advantages described above, the frequency-tagging design is widely used in the Baby Lab and in the Neonatal Neuroimaging Unit for EEG studies on newborns and infants. For this reason, we implemented a toolbox for frequency-tagging analysis extending the analyses used in Buiatti et al., 2019. The Frequency-tagging analysis toolbox is freely available here.



Created by matteo.giovannelli. Last Modification: Thursday 14 of March, 2024 17:08:04 CET by marco.buiatti.