The “entry point”: what fNIRS is actually giving you
Someone on the team says it plainly: fNIRS estimates relative changes in oxygenated and deoxygenated hemoglobin (HbO/HbR), leveraging neurovascular coupling — an indirect marker of neural activity, comparable in concept to fMRI’s BOLD, but with its own strengths and constraints.

In BrainLatam2026 terms: the measurement is already “body + brain” from the start. It’s blood dynamics, not a direct neuron counter. That matters because real life changes blood dynamics for many reasons.
The first real-world shock: “rest” is not a control
Now we’re in the design meeting. Someone proposes: “Let’s compare task vs rest.” The paper pushes back hard: rest is usually a poor control condition, because brains don’t rest in the way we pretend — people think different thoughts, with different cognitive and emotional loads. The primer recommends a fine-cuts approach: match experimental and control conditions tightly and use an active control, not “do nothing.”
And it gets concrete: simulations and classic neuroimaging logic show why timing matters; for block designs, 30 s blocks are often optimal for power when convolved with the hemodynamic response function (HRF), and event-related designs can work if timing is irregular enough to avoid collinearity in GLM predictors.
In Jiwasa mode, you feel why this matters: if your control is vague, your conclusion becomes a story you tell yourself — not a mechanism you can defend.
The core deliverable: Seven design principles (and how they feel in your hands)
The paper ends by crystallizing seven principles — not as slogans, but as a survival kit for real-world fNIRS.
1) Timing matters
You design around hemodynamics. You stop treating timestamps like EEG events and start treating them like HRF-shaped predictors.
2) Fine cuts work
You fight for active control and avoid “rest” as a contrast. You match conditions so that what differs is the cognitive process you claim — not luminance, not motor demand, not social salience.
3) Behavior matters
In real-world studies and hyperscanning, behavior isn’t decoration — it’s part of the signal architecture. The primer explicitly recommends recording behavior and incorporating it into analyses when possible.
4) Physiology matters
This is where the paper becomes visceral: even with a stationary participant doing a simple computer math task, the paper shows an example where heart rate ranged from 54 to 86 bpm — big enough to drive false positives/negatives if you don’t measure and model physiology.
The authors discuss strategies like short-separation channels (capturing superficial scalp blood flow to regress it out), global mean removal, and “systemic physiology augmented” approaches (adding concurrent physiological measures).
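A toy sketch of the short-separation idea (illustrative only: the simulated signals and the single-regressor OLS are my assumptions; real pipelines typically fold the short channel into the GLM or use Kalman-filter variants): the short channel, which sees mostly scalp, is scaled to best fit the long channel and subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

scalp = 0.01 * rng.standard_normal(n).cumsum()        # drifting systemic/scalp signal
neural = 0.05 * np.sin(np.linspace(0, 8 * np.pi, n))  # small "true" brain signal

long_ch = neural + scalp + 0.01 * rng.standard_normal(n)  # long channel: brain + scalp
short_ch = scalp + 0.01 * rng.standard_normal(n)          # short channel: mostly scalp

# OLS through the origin: scale the short channel to best fit the long channel
beta = np.dot(short_ch, long_ch) / np.dot(short_ch, short_ch)
cleaned = long_ch - beta * short_ch   # residual: what the scalp cannot explain
```

In this toy case the residual tracks the simulated neural signal far better than the raw long channel does, which is the whole point: the systemic component would otherwise masquerade as activation.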
5) Ecologically valid tasks matter
The paper makes the trade-off explicit: tasks that resemble real life may reduce experimental control, but can evoke more robust, meaningful activation patterns than rigid computerized tasks — especially important in developmental contexts.
6) Statistics matter
Power, test–retest reliability, and multiple-comparisons corrections have to be considered at design stage, not as an afterthought when p-values disappoint.
7) Cognition matters
The paper is blunt: interpretation requires mapping patterns to plausible information-processing mechanisms — otherwise we’re just labeling colors on a brain map.
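Principle 6 can be made concrete before data collection. A minimal Benjamini-Hochberg FDR sketch for channel-wise p-values (my illustration, not the paper's procedure; most toolboxes ship an equivalent):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of tests surviving FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Compare sorted p-values against the BH line q * rank / m
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True   # reject the k smallest p-values
    return reject

# e.g., per-channel p-values from a channel-wise GLM contrast
print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.2, 0.6]))  # only the two smallest survive
```

The design-stage lesson: every extra channel or contrast raises the bar for the rest, so the correction burden should inform how many channels and conditions you commit to, not just how you report them.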
Hyperscanning: Jiwasa becomes methodological
When you scan two (or more) people together, the primer emphasizes the interpretive fork: shared brain patterns can come from mutual prediction within interaction and/or from common environmental inputs. fNIRS hyperscanning often quantifies interpersonal neural synchrony (INS) via correlations or wavelet coherence, but the key is: your design must separate “we are coordinating” from “we are co-experiencing the same stimulus.”
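A correlation-based INS sketch makes the interpretive fork vivid (my illustration: the shared sinusoid stands in for a common stimulus, and real studies often use wavelet coherence rather than windowed correlation): two "participants" who never interact still show high synchrony, purely because they receive the same input.

```python
import numpy as np

def sliding_corr(x, y, win):
    """Non-overlapping windowed Pearson correlation between two time series."""
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(0, len(x) - win + 1, win)])

fs = 10
t = np.arange(0, 120, 1 / fs)
shared = np.sin(2 * np.pi * 0.05 * t)   # slow component driven by a common stimulus

rng = np.random.default_rng(1)
a = shared + 0.3 * rng.standard_normal(len(t))  # "participant" A, HbO-like series
b = shared + 0.3 * rng.standard_normal(len(t))  # "participant" B, never interacts with A

ins = sliding_corr(a, b, win=int(20 * fs))  # high "synchrony" with zero coordination
```

Nothing in this number distinguishes co-experiencing from coordinating; only the design can, e.g. by contrasting genuine interaction against a pseudo-pair condition built from participants who saw the same stimulus separately.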
This is BrainLatam2026 in practice: the “collective” is real, but you still need experimental clarity about which collective mechanism you are measuring.
The anti-trap: Reverse inference
Even with perfect design, the paper warns about reverse inference: you cannot reliably infer a specific mental state from activation in a brain region, because regions support many processes. The primer uses classic examples (e.g., amygdala) and notes that “dlPFC = workload” is not a logically safe shortcut. It suggests mitigation strategies like meta-analytic tools (e.g., Neurosynth) and carefully constructed multi-condition designs.
BrainLatam2026 “incorporation” (explicitly an interpretation)
Mente Damasiana: physiology isn’t “noise”; it’s part of the embodied state that shapes cognition. The paper’s insistence on measuring systemic physiology makes this non-negotiable in real-world fNIRS.
Eus Tensionais: behavior logs are not optional; they are the trace of how the “task-self” actually stabilized in the body moment to moment.
Zona 1–2–3 (methodologically): “rest as control” is a pathway to interpretive capture (a methodological Zone 3): you’ll fill the ambiguity with narrative. Active control (fine cuts) is how you keep cognition accountable.
A ready-to-use mini checklist (what you can apply tomorrow)
Define your cognitive target and the competing explanations (movement, arousal, social salience).
Choose active control; avoid rest contrasts.
Plan timing with HRF/GLM in mind (30 s blocks often strong; events need irregularity).
Record behavior richly (video, logs, event reconstruction).
Measure physiology (at minimum HR/resp) and/or add short-separation channels.
Pre-plan stats: power logic, reliability expectations, multiple-comparisons correction.
Write interpretation rules that resist reverse inference.
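The timing item in the checklist can be checked numerically before any data exist. A sketch (my construction; predictor correlation is a crude stand-in for formal collinearity diagnostics such as VIF): strictly regular alternation makes two condition predictors strongly correlated after HRF convolution, while jittered onsets typically decorrelate them.

```python
import numpy as np
from scipy.stats import gamma

fs = 5.0
n = int(300 * fs)                              # 300 s session
t = np.arange(0, 32, 1 / fs)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # double-gamma HRF

def predictor(onsets):
    """Event sticks at the given onset times (s), convolved with the HRF."""
    stick = np.zeros(n)
    stick[(np.asarray(onsets) * fs).astype(int)] = 1.0
    return np.convolve(stick, hrf)[:n]

# Regular alternation: condition A at 0, 20, 40, ... and B at 10, 30, 50, ...
reg_a = predictor(np.arange(0, 280, 20))
reg_b = predictor(np.arange(10, 290, 20))

# Same number of events per condition, but at jittered (random) onsets
rng = np.random.default_rng(2)
jit_a = predictor(rng.uniform(0, 280, 14))
jit_b = predictor(rng.uniform(0, 280, 14))

r_reg = np.corrcoef(reg_a, reg_b)[0, 1]   # strongly (anti)correlated predictors
r_jit = np.corrcoef(jit_a, jit_b)[0, 1]   # closer to independent
```

Running this kind of simulation on your planned event schedule costs minutes and can save an entire dataset from an unestimable GLM.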