Jackson Cionek

fNIRS and Deep Learning: Automated Quality Control in NIRS Signals

fNIRS is growing because it allows us to study the brain in more natural situations: infants, adults, social interaction, speech, music, learning, resting-state activity, clinical settings, and hyperscanning. But before interpreting oxyhemoglobin, deoxyhemoglobin, cortical activation, or functional connectivity, there is a basic question: is the signal good enough to be analyzed?

The study by Guglielmini, Chen, and Wolf addresses exactly this problem. It presents DL-QC-fNIRS, a deep learning tool for automated quality control of fNIRS signals that classifies each channel as high quality or low quality. The proposal is to replace part of manual inspection and arbitrary thresholds with a more standardized, scalable, and reproducible method.

The starting point of the study is very important: traditional metrics such as CV — coefficient of variation — and SCI — scalp coupling index — are useful, but they depend on user-defined thresholds. This can generate errors: some bad channels may pass as good, while useful channels may be discarded. For a science that wants to be reliable, this is a serious problem.
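To make the threshold problem concrete, here is a minimal Python sketch of a CV-based check. The 7.5% cutoff, the simulated signals, and the helper name are illustrative assumptions, not values from the article:

```python
import numpy as np

def coefficient_of_variation(signal):
    """CV in percent: standard deviation over mean of the raw intensity signal."""
    return 100.0 * np.std(signal) / np.mean(signal)

rng = np.random.default_rng(0)
fs = 10.0                       # Hz, a typical fNIRS sampling rate (assumption)
t = np.arange(0, 60, 1 / fs)
# simulated clean channel: stable baseline plus a small cardiac oscillation
clean = 1.0 + 0.02 * np.sin(2 * np.pi * 1.1 * t) + 0.005 * rng.standard_normal(t.size)
# simulated noisy channel: large slow drift plus heavy random noise
noisy = 1.0 + 0.4 * np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(t.size)

THRESHOLD = 7.5                 # user-defined cutoff in percent -- the arbitrary part
for name, sig in [("clean", clean), ("noisy", noisy)]:
    cv = coefficient_of_variation(sig)
    label = "good" if cv < THRESHOLD else "bad"
    print(f"{name}: CV = {cv:.2f}% -> {label}")
```

The point of the sketch is that the classification flips entirely with the choice of `THRESHOLD`: the metric itself is sound, but the decision boundary is an arbitrary user setting.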

The scientific question of the article can be stated as follows: can a deep learning model identify fNIRS signal quality better than traditional fixed-index metrics? To answer this, the authors used signals from two independent resting-state fNIRS datasets, combining recordings obtained with NIRSport 1 and NIRSport 2, both from NIRx Medical Technologies.

The first dataset included 40 recordings from healthy adults using NIRSport 1, with 20 long-separation channels and 8 short-separation channels over the frontal region. The second came from a hyperscanning study with 92 recordings using NIRSport 2, covering prefrontal and temporo-parietal regions and also including short-separation channels. Combining both datasets yielded 10,660 channel segments, with an almost balanced distribution between high-quality and low-quality samples.

The method is elegant. First, raw signals were converted to optical density and then to oxyhemoglobin. Then the authors used continuous wavelet transform to generate time-frequency images called scalograms. The key idea is to observe whether cardiac pulsation appears continuously and stably in the signal. When the cardiac pulse is clear, this suggests good optode-scalp coupling. When it is absent, fragmented, or masked by artifacts, the channel tends to be low quality.
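The scalogram step can be sketched with a minimal, numpy-only complex-Morlet CWT. This is an illustrative approximation: the `morlet_cwt` helper, the number of cycles `w`, and the frequency grid are assumptions, not the authors' implementation:

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w=6.0):
    """Minimal complex-Morlet CWT.

    Returns |coefficients| of shape (len(freqs), len(signal)); plotting this
    matrix as an image gives a scalogram.
    """
    n = signal.size
    scalogram = np.empty((freqs.size, n))
    for i, f in enumerate(freqs):
        sigma = w / (2 * np.pi * f)              # time width: roughly w cycles at f
        half = int(np.ceil(w * fs / f))          # truncate the wavelet support
        t = np.arange(-half, half + 1) / fs
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))   # unit energy
        scalogram[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return scalogram

# simulated HbO-like trace: cardiac component at 1.1 Hz plus a slow oscillation
fs = 10.0
t = np.arange(0, 30, 1 / fs)
sig = 0.5 * np.sin(2 * np.pi * 1.1 * t) + 0.1 * np.sin(2 * np.pi * 0.1 * t)

freqs = np.linspace(0.5, 2.0, 31)                # band around plausible heart rates
S = morlet_cwt(sig, fs, freqs)
peak_freq = freqs[np.argmax(S.mean(axis=1))]
print(f"dominant frequency in scalogram: {peak_freq:.2f} Hz")
```

In a good channel, the ridge at the cardiac frequency is continuous across the whole recording; in a bad channel it is fragmented or buried under broadband artifact energy, which is exactly the pattern a CNN can learn to recognize from the image.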

A strong point of the article is that the model does not use a fixed cardiac band for everyone. It identifies each subject’s cardiac frequency through a spectral fitting algorithm. This makes the process more physiologically specific, because different people may have different heart rates. Figure 2 of the article shows this process: cardiac peak detection, individual cardiac band definition, and transformation of the signal into a scalogram for input into the neural network.
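A simplified version of individual cardiac-peak detection might look like the following. The `estimate_cardiac_frequency` helper, the 0.8-2.0 Hz search band, and the band-width comment are illustrative assumptions, not the paper's spectral fitting algorithm:

```python
import numpy as np

def estimate_cardiac_frequency(signal, fs, band=(0.8, 2.0)):
    """Locate the power-spectrum peak inside a plausible adult cardiac band (Hz)."""
    sig = signal - signal.mean()
    power = np.abs(np.fft.rfft(sig))**2
    freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(power[mask])]

# simulated subject with a heart rate of 72 bpm (1.2 Hz)
fs = 10.0
t = np.arange(0, 60, 1 / fs)
hr_hz = 72 / 60.0
sig = 0.3 * np.sin(2 * np.pi * hr_hz * t) \
      + 0.05 * np.random.default_rng(1).standard_normal(t.size)

f_card = estimate_cardiac_frequency(sig, fs)
print(f"estimated cardiac frequency: {f_card:.2f} Hz ({f_card * 60:.0f} bpm)")
# a subject-specific band (e.g. f_card +/- some margin) could then delimit
# the scalogram region inspected for pulse continuity
```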

The authors tested four CNN architectures: GoogLeNet, ResNet-50, SqueezeNet, and EfficientNet-B0. The best overall performance came from GoogLeNet, especially on the combined dataset, where cross-validation reached 93.10% accuracy and an F1-score of 92.71%. On the independent combined test set, DL-QC-fNIRS reached 91.89% accuracy, outperforming CV and SCI in the balance between sensitivity and specificity.

The comparison with traditional methods is the heart of the article. CV was too conservative: it preserved many good signals but failed to detect many bad signals. SCI detected poor-quality channels better, but discarded many good channels. Deep learning achieved a better balance: it detected low-quality channels without excessively destroying useful data.
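The trade-off described above can be expressed with standard confusion-matrix arithmetic. The labels below are a toy example and the `binary_metrics` helper is mine, purely for illustration; none of it comes from the paper:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics; label 1 = low-quality channel (the class to catch)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # bad channels correctly flagged
    specificity = tn / (tn + fp)          # good channels correctly kept
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# toy ground truth: 1 = low quality, 0 = high quality
y_true  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
lenient = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # CV-like: keeps everything, misses bad channels
strict  = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]   # SCI-like: catches bad channels, discards good ones

for name, pred in [("lenient", lenient), ("strict", strict)]:
    se, sp, f1 = binary_metrics(y_true, pred)
    print(f"{name}: sensitivity={se:.2f} specificity={sp:.2f} F1={f1:.2f}")
```

The lenient classifier has perfect specificity but poor sensitivity, the strict one the reverse; a good quality-control model is the one that keeps both numbers high at once, which is what the F1-score summarizes.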

This is fundamental for BrainLatam2026. There is no serious decolonial neuroscience without the material quality of the data. We can speak about Damasian Mind, APUS, Jiwasa, Tensional Selves, Zone 2, and belonging, but everything begins earlier: is the optode well coupled? Is the cardiac pulse visible? Is the channel saturated? Did movement contaminate the data? Is the AI classifying with balance? Can the researcher audit the process?

The avatar-lens for this blog can be Brainlly, as a methodological guardian. Brainlly reminds us that scientific creativity needs rigor. A beautiful question with a poor signal becomes a fragile interpretation. A well-designed question with reliable signal quality can become reproducible science.

APUS also enters here, as body-territory. The quality of the fNIRS signal begins in the contact between equipment and body: hair, skin, scalp, sweat, head shape, cap comfort, optode pressure, and movement. The article shows that the data does not begin in the software; it begins in the body. Deep learning helps later, but the materiality of the body-territory comes first.

The generous decolonial critique is that AI models need to be trained with real diversity. A classifier trained with specific skin tones, hair types, ages, equipment, and protocols may not generalize equally to all bodies. Therefore, BrainLatam2026 would ask: with which populations was this model trained? With which hair types? With which skin tones? With which ages? With which equipment? With which tasks?

A future Latin American experimental design could create a regional fNIRS signal-quality database. We could collect data with NIRSport2, EEG, GSR, respiration, PPG, EMG, and motion sensors, including children, adolescents, adults, different skin tones, different hair types, and tasks involving language, music, education, clinical contexts, and social interaction. Specialists would then label good, bad, and uncertain segments to train models better adapted to Latin American realities.

The experimental question would be: does a deep learning model trained with Latin American diversity improve automated quality control in fNIRS signals compared with CV, SCI, and manual inspection? This question is technical, scientific, and political. It is technical because it improves pipelines. It is scientific because it increases reproducibility. It is political because it prevents global neuroimaging from being calibrated only by bodies from the Global North.

For hyperscanning studies, the impact is even greater. When we measure teachers and students, mother and infant, musicians, therapeutic groups, or teams in interaction, there are many channels and a much higher chance of movement, sweat, misalignment, and noise. A system such as DL-QC-fNIRS can help researchers quickly identify which channels are reliable, which segments should be excluded, and which participants need adjustment.

The bridge with DREX Cidadão appears at the level of scientific infrastructure. A public policy based on neuroscience needs reliable data. Poor measurement can generate poor interpretations about attention, learning, social suffering, mental health, and belonging. Measuring well is part of the ethics of evidence. Automated quality control, therefore, is not only a technical step: it is a form of care for the truth of the measured body.

The article also delivers something practical: an open-source MATLAB graphical interface that allows users to apply pretrained models or train custom models. The tool accepts commonly used fNIRS file formats, allows segmentation, signal visualization, manual labeling, model training, and comparison with CV and SCI. Figures 4 to 9 show this operational flow, from data loading to final model evaluation.

Closing
The future of fNIRS will not be only about better equipment. It will be about more reliable signals, transparent pipelines, auditable models, and diverse databases. Deep learning can help transform quality control into a more objective and reproducible step. For BrainLatam2026, this means uniting technical rigor and decolonial awareness: measuring the brain better while respecting the body-territory that makes each signal possible.


Reference
Guglielmini, S., Chen, Z., & Wolf, M. (2026). DL-QC-fNIRS: a deep learning tool for automated quality control in functional near-infrared spectroscopy signals. Neurophotonics, 13(1), 015001. doi:10.1117/1.NPh.13.1.015001.




#eegmicrostates #neurogliainteractions #eegnirsapplications #physiologyandbehavior #neurophilosophy #translationalneuroscience #bienestarwellnessbemestar #neuropolitics #sentienceconsciousness #metacognitionmindsetpremeditation #culturalneuroscience #agingmaturityinnocence #affectivecomputing #languageprocessing #humanking #fruición #wellbeing #neurorights #neuroeconomics #neuromarketing #religare #skill-implicit-learning #semiotics #encodingofwords #meaning #semioticsofaction #mineraçãodedados #soberanianational #mercenáriosdamonetização