COMMENT PAGE FOR:
40 percent of fMRI signals do not correspond to actual brain activity
bookofjoe wrote 12 hours 53 min ago:
>BOLD signal changes can oppose oxygen metabolism across the human
cortex (no paywall)
[1]: https://www.nature.com/articles/s41593-025-02132-9
physPop wrote 16 hours 7 min ago:
Unfortunately, for experts in the field, this is a "we know" article
that probably shouldn't have been published, and belongs more in a
textbook...
D-Machine wrote 14 hours 26 min ago:
Yes this is ancient news for experts, but, IMO, most fMRI research
outside of methodological research is quite practically useless at
the moment because of deep measurement issues like these.
So if awareness of this increases the skepticism of papers claiming
to have learned things about the brain/mind from fMRI, then I'd say
it is a net plus.
sharts wrote 17 hours 43 min ago:
So all the studies citing fMRI data are probably wrong now? Yikes.
stainablesteel wrote 17 hours 57 min ago:
> Many fMRI studies on psychiatric or neurological diseases – from
depression to Alzheimer’s – interpret changes in blood flow as a
reliable signal of neuronal under- or over-activation. Given the
limited validity of such measurements, this must now be reassessed
idk about that; the brain is complicated, and blood flow itself may
well be a factor worth interpreting in its own right
quasarj wrote 1 day ago:
Uh-oh
instagraham wrote 1 day ago:
I might be oversimplifying, but isn't a lot of our neurological
understanding about ADHD based on "fMRI shows decreased activity in the
frontal cortex"? Or for that matter, our neurological understanding of
a lot of mental health conditions.
I know the actual diagnosis is several times more layered than this
attempt at an explanation, but I always felt that trying to explain the
brain by peering at it from the outside is like trying to debug code by
looking at a motherboard through a bad microscope.
D-Machine wrote 1 day ago:
I do not think there is much neurological understanding of ADHD at
all from current fMRI research. There are far too many quality and
reliability issues here, not just on the fMRI end or the limited
amount of data overall, but in the measurement and diagnosis of ADHD
itself (i.e. ADHD subtypes; ADHD is of course a complicated diagnosis
with many components manifesting to different degrees in different
individuals, which makes it very hard to cleanly link to messy fMRI
signals).
Or, as I have commented elsewhere here, the idea that statements like
"fMRI shows decreased activity" are ever valid is just fundamentally
suspect (lower BOLD response could mean less inhibition or less
excitation, and this is a rather crucial difference that fMRI simply
can't distinguish). EDIT: Or to be more precise: it may well be that
fMRI research suggests less metabolic activity in certain regions,
but this could mean the region is actually firing more than normal,
less than normal, is more efficient than normal, etc., and
interpreting anything about what is functioning differently in ADHD,
given this uncertainty, is what is going to be suspect.
Your analogy is largely correct IMO.
instagraham wrote 22 hours 33 min ago:
Thanks for the excellent explanation, I didn't know it couldn't
distinguish inhibition and excitation.
It seems then that while oxygenation itself may be a good proxy for
brain health, the way we measure it is unreliable
supersour wrote 1 day ago:
Yes this is true, but we actually have a lot more data to back this
up than fMRI analysis alone. For example, the ADHD medication
guanfacine works only because alpha-2 receptors happen to be wired
differently in the prefrontal cortex than in other areas of the brain
(a2 is inhibitory in most brain regions, but in the PFC they're
positioned to amplify connections between neurons), so by stimulating
alpha-2 we allow for more “top down” control from the prefrontal
cortex than we have without it, which improves executive function.
So that is one extremely robust way to understand neurological
conditions like ADHD or Parkinson’s
instagraham wrote 22 hours 34 min ago:
With such medications, besides behavioural changes, how are they
able to measure outcomes without fMRIs? Like knowing whether neuron
connections are amplified or not?
D-Machine wrote 10 hours 0 min ago:
They don't, this is speculative (i.e. a theory) and almost
certainly untrue (or a gross over-simplification), much like the
early and now disproven serotonin theories of depression.
subroutine wrote 1 day ago:
I was a grad student at UCSD when Ed Vul published Voodoo Correlations
in Social Neuroscience [1], which stoked a severe backlash from the
fMRI syndicate resulting in a title change to Puzzlingly High
Correlations in fMRI Studies of Emotion, Personality, and Social
Cognition [2]. There is a lot of interesting commentary around this
article (e.g., “Voodoo” Science in Neuroimaging: How a Controversy
Transformed into a Crisis [3]). To me it was fascinating to watch Vul
(an incredibly rare talent, perhaps a genius), take on an entire field
during his 1st year as assistant professor.
[1]: http://prefrontal.org/blog/2009/01/voodoo-correlations-in-soci...
[2]: https://journals.sagepub.com/doi/10.1111/j.1745-6924.2009.0112...
[3]: https://www.mdpi.com/2076-0760/12/1/15
voxleone wrote 1 day ago:
It is the microbiome, stupid.
[just kidding]
j45 wrote 1 day ago:
What's surprising is the desire to have a silver bullet, one single
solution.
What's still amazing is that fMRI can provide more visual context of
what's happening in the brain, in what region, and what activities
can help that improve.
There are other complementary technologies, like QEEG and SPECT, that
can also shed light here.
It does seem to be the case that fMRI can be more of a snapshot
photo, while technologies like SPECT can provide more of a regional
time lapse of activity.
Olshansky wrote 1 day ago:
I didn't get my name on this, but contributed to it as an undergrad:
[1] We sped up fMRI analysis using distributed computing (MapReduce)
and GPUs back in 2014.
Funny how nothing has changed.
[1]: https://ieeexplore.ieee.org/abstract/document/6903679
eykanal wrote 1 day ago:
Now seems like a good time to remind folks of the Stanford dead fish
fMRI study: [1] fMRI has always had folks highlighting how shaky the
science is. It's not the strongest of experimental techniques.
[1]: https://law.stanford.edu/2009/09/18/what-a-dead-salmon-reminds...
salynchnew wrote 1 day ago:
Why are you calling Bennett et al "the Stanford... study" ? Not one
person on that team went to Stanford.
Direct link to the poster presentation:
[1]: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf
KalMann wrote 17 hours 59 min ago:
Why are you phrasing your correction in the form of a question? I
think it's pretty reasonable to infer that he mistakenly thought it
was a Stanford study because the link was from Stanford.
cafebeen wrote 1 day ago:
This study was really highlighting a statistical issue which would
occur with any imaging technique with noise (which is unavoidable).
If you measure enough things, you'll inevitably find some false
positives. The solution is to use procedures such as Bonferroni and
FDR to correct for the multiple tests, now a standard part of such
imaging experiments. It's a valid critique, but it's worth
highlighting that it's not specific to fMRI or evidence of shaky
science unless you skip those steps (other separate factors may
indicate shakiness though).
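To make that concrete, here is a minimal Python/numpy sketch of both
corrections run on simulated null "voxels"; the voxel count and alpha
are arbitrary, and real pipelines use dedicated tools rather than
hand-rolled code like this.
  import numpy as np

  rng = np.random.default_rng(0)
  n_voxels = 50_000
  alpha = 0.05

  # Null data: every "voxel" is pure noise, so any detection is a
  # false positive.
  p = rng.uniform(size=n_voxels)

  # Uncorrected: expect roughly alpha * n_voxels false positives
  # (~2500 here).
  print("uncorrected hits:", (p < alpha).sum())

  # Bonferroni: control family-wise error by testing each voxel at
  # alpha / n_voxels.
  print("bonferroni hits:", (p < alpha / n_voxels).sum())

  # Benjamini-Hochberg FDR: largest k with p_(k) <= (k / n) * alpha.
  p_sorted = np.sort(p)
  k = np.arange(1, n_voxels + 1)
  passed = p_sorted <= (k / n_voxels) * alpha
  print("BH-FDR hits:", k[passed].max() if passed.any() else 0)
Under the null, both corrected counts come out at or near zero while
the uncorrected count is in the thousands, which is the dead-salmon
failure mode in miniature.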
Terr_ wrote 1 day ago:
> a statistical issue which would occur with any imaging technique
It sounds like it goes beyond that: if a certain mistake ruins
outcomes, and a lot of people are ruining outcomes and not
noticing, then there's some much bigger systematic problem going
on.
prefrontal wrote 1 day ago:
When we published the salmon paper, approximately 25-35% of
published fMRI results used uncorrected statistics. For myself and
my co-authors, this was evidence of shaky science. The reader of a
research paper could not say with certainty which results were
legitimate and which might be false positives.
cafebeen wrote 1 day ago:
Thank you for publishing that paper, which I think greatly helped
address this problem at the time, as you accurately describe.
I guess things have to be taken in their historical context, and
science is a community project which may not uniformly follow
best practices, but work like this can help get everyone in line!
It's unfortunate, and no fault of the authors, that the general
public has run wild with referencing this work to reject fMRI as
an experimental technique. There are plenty of different ways to
criticize it today, for sure.
Balgair wrote 1 day ago:
Hey, I know you got a lot of flak for the article. So, I just
wanted to thank you for having the courage to publish it anyway
and go through all of that for all of us.
I go back to the study frequently when looking at MRI studies,
and it always holds up. It always reminds me to be careful with
these things and to try to have others be careful with their
results too. Though to me it's a bit of a lampooning,
surprisingly it has been the best reminder for me to be more
careful with my own work.
So thank you for putting yourself through all that. To me, it was
worth it.
prefrontal wrote 1 day ago:
Many thanks - appreciate the kind words. Thanks also for always
striving to work with care in your science. It makes all the
difference.
Among other challenges, when we first submitted the poster to
the Human Brain Mapping conference we got kicked out of
consideration because the committee thought we were trolling.
One person on the review committee said we actually had a good
point and brought our poster back in for consideration. The
salmon poster ended up being on a highlight slide at the
closing session of the conference!
dang wrote 1 day ago:
Discussed here. Others?
Risk of false positives in fMRI of post-mortem Atlantic salmon (2010)
[pdf] - [1] - Nov 2017 (41 comments)
Scanning dead salmon in fMRI machine (2009) - [2] - Sept 2009 (1
comment)
[1]: https://news.ycombinator.com/item?id=15598429
[2]: https://news.ycombinator.com/item?id=831454
levocardia wrote 1 day ago:
fMRI methods and statistics have advanced quite a lot since the dead
fish days, that critique does not really hold up today.
prefrontal wrote 1 day ago:
While I would agree that the prevalence of the problem has been
minimized in fMRI during the last 15 years, I disagree that our
critique does not hold up. The root of our concern was that proper
statistical correction(s) need to be completed in order for
research results to be interpretable. I am totally biased, but I
think that remains worthwhile.
zahlman wrote 1 day ago:
I immediately thought of it too. Didn't realize it was that long ago.
Trickery5837 wrote 1 day ago:
let me write the correct title for you: "new evidence that fMRI data
should be processed and interpreted only in the presence of an adult"
isacdaavid wrote 1 day ago:
If you actually read their paper, you will find that it's only the sign
of the correlation that is being questioned. The field has generally
been aware of this interpretational gap, and that's why two-sided
hypothesis tests are important. Cellular neuroscience and
electrophysiology are only starting to face the challenges that fMRI
faced 2 decades ago.
To me this is like shitting on cars in 1925 because they kill people
every now and then. Cars didn't go away, and neither will fMRI, until
someone finds a better way to measure living people's brains.
TUM's press office is being sloppy, from conflating fMRI with MRI to
presuming this is revolutionary, and ignoring earlier empirical work
against this narrative (the Windkessel model, Logothetis's beta/gamma
coupling, etc.)
rcv wrote 1 day ago:
I remember reading a paper back in grad school where the researchers
put a dead salmon in the magnet and got statistically significant brain
activity readings using whatever the analysis method à la mode was. It
felt like a great candidate for the Ig Nobel awards.
prefrontal wrote 1 day ago:
That was our paper! We showed that you can get false positives
(significant brain activity in this case) in fMRI if you don't use
the proper statistical corrections. We did win an Ig Nobel for that
work in 2012 - it was a ton of fun.
dang wrote 1 day ago:
This is one for [1] !
(I mention this so more people can know the list exists, and
hopefully email us more nominations when they see an unusually
great and interesting comment.)
p.s. more on the salmon paper in this thread: [2] [3]
[1]: https://news.ycombinator.com/highlights
[2]: https://news.ycombinator.com/item?id=46291600
[3]: https://news.ycombinator.com/item?id=46288560
[4]: https://news.ycombinator.com/item?id=46288557
jldugger wrote 1 day ago:
Interesting -- I just use [1] for a weekly roundup, but that only
tracks posts. Might need to supplement it with highlights or
similar.
Reviewing the HN docs, [2] might also be a good summary link.
[1]: https://news.ycombinator.com/best?h=168
[2]: https://news.ycombinator.com/bestcomments?h=168
riazrizvi wrote 1 day ago:
The researchers found that “40% of increased fMRI signal correspond
to a decrease in neuronal activity”, so it’s even worse than the
headline.
SubiculumCode wrote 1 day ago:
I'll get raked for this, but as someone in the field, I can say with
high confidence that the majority of comments in this thread are not
from imaging experts, and mostly (mis)informed by popular science
articles. I do not have the time to properly respond to each issue I
see. The literature is out there in any case.
physPop wrote 16 hours 8 min ago:
Agree, especially the comments saying "just address it". It's a lot
of technically complicated interactions between the physics, imaging
parameters, and processing techniques.
Unfortunately the end users (typically neuroscience/psych grad
students in labs with minimal oversight) usually run studies that
just "throw everything at the wall and see what sticks", not
realizing that is the antithesis of the scientific method. No one
goes into a resting state study saying "we're going to test if the
resting state signal in [region] is because of [mechanism]".
They instead measure a bunch of stuff, find some regions that pass
threshold in a group difference, and publish it as "neural correlates
of X". It's not science, and it's why it's not reproducible. People
have built whole research programs on noise.
D-Machine wrote 14 hours 23 min ago:
The meaningless NHST ritual is so harmful here. Imagine what we
might know by now if all those pointless studies had used their
resources to do proper science...
Der_Einzige wrote 1 day ago:
This is also true when HN talks about AI/ML :)
nerdface wrote 1 day ago:
I'm not a specialist by any means, although I have been an fMRI
patient. One thing I will note is that in the eventual, resultant
paperwork from the broad array of tests I had, the fMRI was not noted
whatsoever, nor was it discussed with me by any of the numerous
neurologists or surgeons involved in my case. I was quite curious as
to why it was performed at all, but presumably it was some formality
to check a box.
D-Machine wrote 1 day ago:
It makes sense they wouldn't look at it, there are very few, if
any, well-validated clinical uses for it. However, they might have
taken it as a baseline for later comparison, and it is definitely
plausible when surgery is involved that visible abnormalities could
be seen in fMRI that might not show up in MRI, either now or later.
I don't think there would be much clear guidance for them on how to
interpret any such fMRI abnormality on its own, but it might still
be something useful for further investigations, and this might
especially be the case for surgery. It might also have been done as
part of research, if you consented to anything like that?
I am NOT an expert on fMRI in medical contexts, but you can surely
get a rough idea of the potential value of fMRI with a quick
search:
[1]: https://scholar.google.ca/scholar?q=fMRI+surgery+brain&hl=...
xandrius wrote 1 day ago:
There are literally fewer than 20 top-level comments, and this one is
(at least for me) the 2nd or 3rd.
Instead of a nothingburger, you could have used your academic prowess
to break down the top one or two misconceptions with expertise.
You might not have time to respond to all the comments but a couple
of clarifications could have helped anyone else who doesn't comment
without experience.
Just saying that next time you can be the change you want to see in
HN instead of wasting text telling us how ignorant we are.
Aurornis wrote 1 day ago:
> I do not have the time to properly respond to each issue I see. The
literature is out there in any case.
I think your expertise would be very welcome, but this comment is
entirely unhelpful as-is. Saying there are bad comments in this
thread and also that there is good literature out there without
providing any specifics at all is just noise.
You don't have to respond to every comment you see to contribute to
the discussion. At minimum, could you provide a hint for some
literature you suggest reading?
mattkrause wrote 1 day ago:
I'll co-sign SubiculumCode's comment -- there's a lot of yelling
about how bad fMRI is generally, which is not particularly fair to
the fMRI research (or at least the better parts of it) or related
to the argument.
The BOLD signal, the thing measured by fMRI, is a proxy for actual
brain activity. The logic is that neural firing requires a lot of
energy, so active neurons will be using more oxygen for their
metabolism, and this oxygen comes from the blood. Thus, if you
measure local changes in the oxygenation of blood, you'll know
something about how active nearby neurons are. However, it's an
indirect and complicated relationship. The blood flow to an area
can itself change, or cells could extract more or less oxygen from
the blood--the system itself is usually not running at its limits.
Direct measurements from animals, where you can measure (and
manipulate) brain activity while measuring BOLD, have shown how
complicated this is. Nikos Logothetis's and Ralph Freeman's groups,
among many others, did a lot of work on this, especially c.
2000-2010. If you're interested, you could check out this news and
views on Logothetis's group's 2001 Nature paper [1]. One of the
conclusions of their work is that BOLD is influenced by a lot of
things but largely measures the inputs to an area and the synchrony
within it, rather than just the average firing rate.
In this paper, the researchers adjust the MRI sequences to compare
blood oxygenation, oxygen usage, and blood flow and find that these
are not perfectly related. This is a nice demonstration, but not a
totally unexpected finding either. The argument in the paper is
also not "abandon fMRI" but rather that you need to measure and
interpret these things carefully.
In short, the whole area of neurovascular coupling is hard--it
includes complicated physics (to make measurements), tricky
chemistry, and messy biology, all in a system full of complicated
dynamics and feedback.
[1]: https://www.nature.com/articles/35084300
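A rough Python sketch of the indirectness described above: fast
"neural" events, convolved with a canonical double-gamma HRF, come
out as one slow, delayed blur. The gamma shape parameters are the
conventional SPM-style defaults, an assumption for illustration, not
anything from this thread.
  import numpy as np
  from scipy.stats import gamma

  dt = 0.1                       # simulation resolution, seconds
  t = np.arange(0, 30, dt)

  # Canonical double-gamma HRF: peak near 5 s, undershoot near 15 s.
  hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
  hrf /= hrf.sum()

  # Three "neural" bursts only 0.4 s apart: 10.0, 10.4, 10.8 s.
  neural = np.zeros(600)         # 60 s of activity at dt = 0.1 s
  neural[[100, 104, 108]] = 1.0

  bold = np.convolve(neural, hrf)[: neural.size]
  print(f"bursts at ~10 s; BOLD peak at ~{bold.argmax() * dt:.1f} s")
The three distinct events are unrecoverable from the single smooth
bump that comes out: the hemodynamic response acts as a slow low-pass
filter on whatever the neurons actually did.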
D-Machine wrote 1 day ago:
I have also published and worked for some years in this field, if
that helps.
The literature is huge, and my bias is that I believe the only really
good fMRI research is methodological research (i.e. research about
what fMRI actually means, and how to reliably analyze it). Many of
the links I've provided here speak to this.
I don't think there is much reliable fMRI research that tells us
anything about people, emotions, or cognition, beyond confirming
some likely localization of function to the sensory and motor
cortices, and some stuff about the Default Mode Network(s) that is
of unclear importance.
A lot of the more reliable stuff involves the Human Connectome
Project (HCP) fMRI data, since this was done very carefully with a
lot of participants, if you want a place to start for actual
human-relevant findings. But the field is still really young.
ahtihn wrote 1 day ago:
> Saying there are bad comments in this thread and also that there
is good literature out there without providing any specifics at all
is just noise.
Nah, it's not noise. It's a useful reminder not to take any
comments too seriously and that this topic is far outside the
average commenter's expertise.
throw10920 wrote 7 hours 50 min ago:
> Nah, it's not noise
Yes, it factually is, because...
> It's a useful reminder not to take any comments too seriously
...this is factually incorrect, because the GP comment is literally
not saying that - it's a specific dunk on a specific subset of
critical comments, with zero useful information about which comments
are bad, or why they're bad, or any evidence to back up the
assertion that they're bad, or anything else useful.
(GP did go back and respond to some other comments with specific
technical criticisms - after they made this initial comment. The
initial comment itself is still highly problematic, as is the
fallacious praise of it, like this one.)
pessimizer wrote 1 day ago:
It's definitely noise. Not recognizing it as noise is why phone
and email scams work.
I say this as a psychologist who is advising you to ignore all
claims to the contrary, because they are misinformed. It is clear
from the literature.
strongpigeon wrote 1 day ago:
I’m sure you’re right, but given the spectrum of answers here,
it’d be much more useful to point out which ones you think are
wrong.
DANmode wrote 1 day ago:
Seeing HN take on your speciality or topic can be brutal.
Condolences.
Der_Einzige wrote 1 day ago:
[1] Or worse, your whole field can be institutionally blind to its
own failings, and randoms outside of it actually DO know more than
you! Chiropractors are literally worthless, and being told "oh you
don't get it bro" by them is their cope for being scammers, not an
example of "their Gell-Mann amnesia"
[1]: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
Loughla wrote 1 day ago:
I hide any thread that deals with education, education funding, or
teaching in general for that specific reason. It really saddens me
to see that this place is full of so much misinformation and
anecdotes made into data (and usually with much more
self-confidence than other forums, which is interesting to me).
It's why I generally only ask questions, or ask for clarification
instead of directly challenging something I think might be wrong,
in threads that aren't related to something I have deeeeep
personal knowledge of. I know when I'm out of my area, and don't
want to add to the ignorance.
DANmode wrote 1 day ago:
A great habit - especially when your question is an irresistible,
easily-addressed homerun to a domain expert wandering through the
thread looking for an entry-point.
NemoNobody wrote 1 day ago:
Challenging something with a question about it is not adding to
ignorance - if a statement/study/fact/belief can't hold up to
questions from actual opposing critics, what's the point of that
position existing?
Being all "PC" and "nice" about stuff that is what it is, or
isn't -- THAT adds to ignorance.
Loughla wrote 1 day ago:
I guess maybe I didn't phrase that correctly - I ask
challenging questions, but don't state the things I "know"
without clarification first. I meant that I don't just pop off
with "yeah, but in reality it's x, y, z" because I know that
I'm probably ignorant of facts. I'll ask about x, y, or z
first.
yboris wrote 1 day ago:
In related news: ironically, Psychedelics disrupt normal link between
brain’s neuronal activity and blood flow - thus casting some doubt on
findings that under psychedelics more of the brain is connected (since
fMRI showed elevated blood flow, suggesting higher brain activity).
[1]: https://source.washu.edu/2025/12/psychedelics-disrupt-normal-l...
HocusLocus wrote 1 day ago:
As a caveman pondering "Stoned Ape Theory" during the rise of MRI in
the 80s, having done light reading of Huxley, McKenna et al., the
claim that vascular variations were so tied to thought patterns in a
purely calm and cognitive activity was fascinating. To see the brain
of someone as they went through a deck of cards and paused to look at
each... astounding! But frustrating also. My first question always
was: were the person's hands busy going through the deck and holding
up the cards, focusing on them... or were they merely shown the cards
sitting still? It seemed the popsci articles often glossed over that
information, and any simple "control for coordinated body movement"
played second fiddle to the novelty of it all. Then I worked in a
club where I was often surrounded by tripping people. I'd fetch them
glasses of water and they would always drink. Do you know you can
smell them, they smell like fear? The experience has every sweat
gland working overtime. After I learned that, I greeted the claim
that "tripping people's MRIs light up, indicating enhanced brain
connectivity" with a
grain of salt. I would not be the least bit surprised if the sweat
gland thing also has the brain's vascular system in overdrive.
yboris wrote 6 hours 35 min ago:
My favorite explanation for why LSD and similar psychedelics
generate the visual patterns they do: the mathematics of mapping polar
coordinates of the retina to the rectangular coordinates of the
visual processing system:
[1]: https://www.quantamagazine.org/a-math-theory-for-why-peopl...
antipaul wrote 1 day ago:
Biotech industrial complex
fMRI is a cool, expensive tech, like so many others in genetics and
other diagnostics. These technologies create good jobs ("doing well by
doing good").
But as other comments point out, and practitioners know, their
usefulness for patients is more dubious.
zerof1l wrote 1 day ago:
I wonder how much variation there is between a person who does certain
mental activity regularly vs a person who rarely does it.
If they were to measure a person who performs mental arithmetic on a
daily basis, I'd expect his brain activity and oxygen consumption to be
lower than those of a person who never does it. How much difference
would that make?
subroutine wrote 1 day ago:
I worked in an fMRI lab briefly as a grad student. I suspect you'd be
correct but perhaps not exactly why you'd expect. Studies using fMRI
measure a blood-oxygenation-level-dependent (BOLD) signal in the
brain. This is thought to be an indirect measure of neural activity
because a local increase in neural firing rate produces a local
increase in the need for, and delivery of, oxygenated blood.
The question then is, do you expect a person who is really good at
mental arithmetic to have less neural firing on arithmetic tasks
(e.g., what is 147 x 38) than the average joe. I would hypothesize
yes overall to solve each question; however, I'd also hypothesize the
momentary max intensity of the expert to peak higher. Think of a
bodybuilder vs. a SWE bench-pressing 100 lbs for 50 reps. The
bodybuilder has way more muscle to devote to a single rep, and will
likely finish the set in 20 seconds, while the SWE is going to take
like 30 minutes ;)
cj wrote 1 day ago:
I did a fMRI study as a volunteer in college.
It involved going to the lab and practicing the thing (a puzzle /
maze) I would be shown during the actual MRI. I think I went in to
“practice” a couple times before showing up and doing it in the
machine.
IIRC the purpose of practicing was exactly that: to avoid me trying
to learn something during the scan (since that wasn’t the intention
of the study).
In other words, I think you can control for that variable.
(Side note: I absolutely fell asleep during half the scan. Oops! I
felt bad, but I guess that’s a risk when you recruit sleep deprived
college kids!)
D-Machine wrote 1 day ago:
It's so much worse than this.
For task fMRI, the test-retest reliability is so poor it should
probably be considered useless or bordering on pseudoscience, except
in some very limited cases like activation of the visual and/or
auditory and/or motor cortex with certain kinds of clear stimuli. For
resting-state fMRI (rs-fMRI), the reliabilities are a bit better, but
still generally extremely poor [1-3].
There are also two major and devastating theoretical concerns re fMRI
that IMO make the whole thing border on nonsense. One is the assumed
relation between the BOLD signal and "activation", and the other is
the extremely poor temporal resolution of fMRI.
It is typically assumed that the BOLD response (increased oxygen
uptake) (1) corresponds to greater metabolic activity, and (2)
increased metabolic activity corresponds to "activation" of those
tissues. This trades dubiously on the meaning of "activation", often
assuming "activation = excitatory", when we know in fact much metabolic
activity is inhibitory. fMRI cannot distinguish between these things.
There are other deeper issues, in that it is not even clear to what
extent the BOLD signal is from neurons at all (could be glia), and it
is possible the BOLD signal must be interpreted differently in
different brain regions, and that the usual analyses looking for a
"spike" in BOLD activity are basically nonsense, since BOLD activity
isn't even related to this at all, but rather to the local field
potential instead. All this is reviewed in [4].
Re: temporal resolution, essentially, if you pay attention to what is
going on in your mind, you know that a LOT of thought can happen in
just 0.5 seconds (think of when you have a flash of insight that
unifies a bunch of ideas). Or think of how quickly processing must be
happening in order for us to process a movie or animation sequence
where there are up to e.g. 10 cuts / shots within a single second.
There is also just biological evidence that neurons take only
milliseconds to spike, and that a sequence of spikes (well under 100ms)
can convey meaningful information.
However, typical temporal resolutions (repetition times) in fMRI are
only around 0.7 seconds at best. IMO this means that the ONLY way to analyze
fMRI that makes sense is to see it as an emergent phenomenon that may
be correlated with certain kinds of long-term activity reflecting
cyclical BOLD patterns / low-frequency patterns of the BOLD response.
I.e. rs-fMRI is the only fMRI that has ever made much sense a priori.
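A back-of-envelope Python check of this sampling argument, taking the
~0.7 s repetition time at face value; the example frequencies are
illustrative.
  TR = 0.7              # seconds per volume, fast end of common fMRI
  fs = 1 / TR           # sampling rate, ~1.43 Hz
  nyquist = fs / 2      # ~0.71 Hz: fastest resolvable frequency

  print(f"sampling rate {fs:.2f} Hz, Nyquist limit {nyquist:.2f} Hz")
  for f, label in [(0.05, "slow resting-state fluctuation"),
                   (2.0, "2 Hz dynamics"),
                   (40.0, "gamma-band activity")]:
      verdict = "resolvable" if f < nyquist else "NOT resolvable"
      print(f"{label} ({f} Hz): {verdict}")
Everything faster than ~0.71 Hz is invisible or aliased at that TR,
which is why the slow (<0.1 Hz) fluctuations of rs-fMRI are the
natural target.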
The solution to this is maybe to combine EEG (extremely high temporal
resolution, clear use in monitoring realtime brain changes like
meditative states and in biofeedback training) with fMRI, as in e.g.
[5]. But, it may still well be the case that fMRI remains mostly
useless.
[1] Elliott, M. L., Knodt, A. R., Ireland, D., Morris, M. L.,
Poulton, R., Ramrakha, S., Sison, M. L., Moffitt, T. E., Caspi, A., &
Hariri, A. R. (2020). What Is the Test-Retest Reliability of Common
Task-Functional MRI Measures? New Empirical Evidence and a
Meta-Analysis. Psychological Science, 31(7), 792–806. [1]
[2] Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C.
(2018). Test-retest reliability of longitudinal task-based fMRI:
Implications for developmental studies. Developmental Cognitive
Neuroscience, 33, 17–26. [2]
[3] Termenon, M., Jaillard, A., Delon-Martin, C., & Achard, S.
(2016). Reliability of graph analysis of resting state fMRI using
test-retest dataset from the Human Connectome Project. NeuroImage,
142, 172–187. [3]
[4] Ekstrom, A. (2010). How and when the fMRI BOLD signal relates to
underlying neural activity: The danger in dissociation. Brain
Research Reviews, 62(2), 233–244. [4], [5]
[5] Ahmad, R. F., Malik, A. S., Kamel, N., Reza, F., & Abdullah,
J. M. (2016). Simultaneous EEG-fMRI for working memory of the human
brain. Australasian Physical & Engineering Sciences in Medicine,
39(2), 363–378. [6]
[1]: https://doi.org/10.1177/0956797620916786
[2]: https://doi.org/10.1016/j.dcn.2017.07.001
[3]: https://doi.org/10.1016/j.neuroimage.2016.05.062
[4]: https://doi.org/10.1016/j.brainresrev.2009.12.004
[5]: https://scholar.google.ca/scholar?cluster=6420450573860538418&...
[6]: https://doi.org/10.1007/s13246-016-0438-x
physPop wrote 15 hours 59 min ago:
Re: your last point, that is not true. We can measure arbitrarily
quickly (the Nottingham group does some 3D EVI at ~100 ms TRs). You
can also reduce volumes and just look at single slices etc.; a lot of
the fundamental research did this (Wash U / Minnesota / etc. in the
90s). It's just not all that useful because the SNR tanks and the
underlying neurovascular response is inherently low-pass. There is a
much faster "initial dip" where the BOLD signal swings the other way
and crosses zero (from localized accumulation of deoxy-Hb before the
inrush of oxy-Hb from the vascular response). It's a lot better
correlated with LFP / spiking measures but just very hard to measure
on non-research scanners...
D-Machine wrote 14 hours 45 min ago:
Yes, I didn't mention this because you sacrifice so much spatial
resolution and/or info doing this that it hardly matters, unless
you believe in some very extreme and implausible forms of
localization of function. (EDIT: I mean looking at a single slice
seems to imply some commitment to localization assumptions; this
isn't relevant for reducing spatial resolution.)
For readers who don't know, we can measure at a higher temporal
resolution better if we use some tricks, and also massively
sacrifice spatial resolution ("reduce volumes") and/or how much of
the brain is scanned (look at single slices), but the spatial
resolution in most fMRI given e.g. a 0.5 TR (2 images per second)
is usually already quite poor (generally already getting difficult
to clearly even make out gyri and basic brain anatomy: see for
example Figures 7 and on here, noting the TRs in the captions: [1]
).
Still, it's a good point, and you're right of course newer and
better scanners and techniques might improve things here on both
fronts, but my understanding is that the magnetic field strengths
needed to actually get the right combo of spatial and temporal
resolution are, unfortunately, fatal, so we are really up against a
physical/biological limit here.
And as you said, it isn't that useful anyway, because the BOLD
response is already so slow, and obviously something just emerging
from the sum of a massive amount of far more rapid electrochemical
signaling that the fMRI just can't measure anyway.
[1]: https://www.frontiersin.org/journals/neuroscience/articles...
freehorse wrote 1 day ago:
Re: temporal resolution
Even if neuronal activity is (obviously) faster, the (assumed)
neuro-vascular coupling is slower. Typically there are several
seconds till you get a BOLD response after a stimulus or task, and
this has nothing to do with fMRI sampling rate (fNIRS can have much
faster sampling rate, but the BOLD response it measures is equally
slow, too). Think of it as that neuronal spiking happens in a range
of up to some hundred milliseconds and the body changing the blood
flow happens much slower than that.
The issue is that measuring the BOLD response, even in best case
scenario, is a very very indirect measure of neuronal activity. This
is typically lost when people refer to fMRI studies as
discovering "mental representations" in the brain and other
nonsense, but here we are. Criticising the validity of the BOLD
response itself, though, is certainly interesting.
D-Machine wrote 1 day ago:
Right, my point is sort of that both the BOLD response and fMRI
sampling rates are far too "slow" (not nearly approaching the
Nyquist frequency, I guess) a priori to deeply investigate
something as fast as cognition.
freehorse wrote 16 hours 32 min ago:
Yeah I agree mostly. Cognition happens on multiple timescales; as
such I don't think fMRI's sampling rate is a problem if we
understand which cognitive phenomena it can actually address and
which it cannot. But there is definitely a tendency to not
understand such limits of our tools.
D-Machine wrote 14 hours 38 min ago:
Precisely, if we restrict fMRI to investigating phenomena and
theories of cognition and the mind that are plausibly
measurable at the appropriate temporal resolution, it will
potentially start yielding some fruit.
It will also require fMRI researchers to think more carefully
about their theories as well (e.g. noting the speed of the mind
/ amount and kinds of thinking involved in certain tasks, and
being realistic about whether or not fMRI could actually
capture something meaningful there). Too often there is no
theory, and too many studies are just correlating patterns with
some task without actually carefully thinking about the task
and deconstructing the components, testing activations in those
(e.g. ablation studies in AI research) and etc.
ryandv wrote 1 day ago:
> BOLD response and fMRI sampling rates
Funny, because these exact measures [0] were brought up in
response to a similar claim I made over a year ago [1] about the
resolution of our instrumentation.
There would appear to be a worrying trend of faith in scientism,
or the belief that we already have all the answers squirreled
away in a journal somewhere.
[1]: https://news.ycombinator.com/item?id=41834346
[2]: https://news.ycombinator.com/item?id=41807867
D-Machine wrote 1 day ago:
It's a bit funny, the qualia thing and sampling rates.
Obviously we hope what we learn from e.g. psychology and fMRI
will help us explain more things about the mind, and surely
most researchers in psychology hope their research will help us
get some answers on things related to qualia as well. And
almost certainly most good / consistent reductionist
researchers must believe that qualia arise from the brain, at
least in significant part.
However, precisely by this reductionist logic, and since it is
immediately and phenomenally clear that the rate of change of
qualia in the mind (or the "amount" of different qualia, i.e.
images or sounds that one can process or generate in the mind
in under a second) is incredibly fast, it follows immediately
and logically without any need for an experiment that fMRI
cannot have the temporal resolution needed for a rich
understanding of the mind, simply based on knowing the TR
(temporal sampling resolution) is so poor. And yet, I also find
a lot of people in scientific brain research go oddly silent or
seem to refuse to accept this argument unless some strange sort
of published, quantificationist operationalization can be
pointed to (hence my pre-emptive mention of information
transmission in neurons in under 100 ms).
I'm not sure I'd call this scientism, exactly, I tend to see it
as "selective quantificationism", i.e. that certain truths can
only be proven as true if you introduce some kind of numerical
measurement procedure and metrical abstraction. Like, no one
demands a study with Scoville units to prove that e.g. a ghost
pepper is at least an order of magnitude hotter than candied
ginger, even though this is as blazingly obvious as the fact
that the mind moves too fast for something that can barely
capture images of the brain at a rate of two per second.
throw4847285 wrote 1 day ago:
I'm not a scientist, and I don't even have a very good
statistical background, so correct me if I'm wrong; would it
be far to say that the lack of skepticism about fMRI studies
in the broader public is due to scientism? Because of naive
reductionism and a gut understanding of what is "scientific",
people are far more skeptical of a study that says, "we
surveyed 100,000 people" vs. "we scanned the brains of 10
people." I've noticed a similar phenomenon with psych vs.
evolutionary psych. People have an image in their head of
what is scientific that has nothing to do with statistical
significance and everything to do with vibes.
D-Machine wrote 1 day ago:
It is tempting to speculate on what might cause the
credulousness of the broader public re: fMRI, but I think
there is enough / too much going on here for me to really
be able to say anything with much confidence. Scientism
especially is hard to define.
I think I broadly agree with you though that credulousness
to (statistically and methodologically weak) scientific /
technological claims mostly comes down to vibes and desires
/ needs, and not statistical significance, logical rigor,
evidence, or etc.
Where needs / desires are high, vibes will (often) win over
rationality, and vice-versa. It is easier for people to be
objective about science that doesn't really clearly matter
in any obvious direction, or at all. fMRI is "the mind",
and thus consciousness, and so unfortunately reduces
rational evaluation in much the same way speculation about
AI and "consciousness" and etc does. *Shrug*
kspacewalk2 wrote 1 day ago:
Depends on what you mean by cognition, but as you yourself said,
BOLD may be correlated with certain kinds of long(er)-term
activity, and that in itself is very useful if interpreted
carefully. No one claims to detect single "thoughts" or anything
of the sort, at least I haven't seen anything so shameless.
D-Machine wrote 1 day ago:
Well, a lot of task fMRI designs are pretty shameless and
clearly haven't taken the temporal resolution issues seriously,
at least when it comes to interpreting their findings in
discussions (i.e. claiming that certain regions being involved
must mean certain kind of cognition, e.g. "thoughts" must be
involved too). And there have definitely been a few papers
trying to show they can e.g. reconstruct the image ("thought")
in a person's mind from the fMRI signal.
But I don't think we are really disagreeing on anything major
here. I do think there is likely some useful potential locked
away in carefully designed resting-state fMRI studies, probably
especially for certain chronic and/or persistent systemic
cognitive things like e.g. ADHD, autism, or, perhaps more
fruitfully, it might just help with more basic understanding of
things like sleep. But, I also won't be holding my breath for
anything major coming out of fMRI anytime soon.
pdevr wrote 1 day ago:
>which are known to produce predictable fMRI signal changes in
distributed brain regions.
Wondering how they created that baseline. Was it with fMRI data (which
has deviance from actual data, as pointed out)? Or was it through other
means?
NalNezumi wrote 1 day ago:
My previous job was at a startup doing BMI, for research. For the first
time I had the chance to work with expensive neural signal measurement
tools (mainly EEG for us, but some teams used fMRI), and I quickly
learned how absolutely horrible the signal-to-noise ratio (SNR) was
in this field.
And how it was almost impossible to reproduce many published and
well-cited results. It was both exciting and jarring to talk with the
neuroscientists, because they ofc knew about this and knew how to
read the papers, but the people doing the funding/business side ofc
didn't really spend much time putting emphasis on that.
One of the teams presented an accepted paper that basically used Deep
Learning (Attention) to predict images that a person was thinking of
from the fMRI signals. When I asked "but DL is proven to be able to
find patterns even in random noise, so how can you be sure this is
not just overfitting to artefact?" there wasn't really any answer to
that (or rather, the publication didn't take that into account,
although it can be experimentally determined). Still, a month later I
saw TechXplore or some such tech news site writing an article about
it, something like "AI can now read your brain", with the 1984
implications yada yada.
So this is indeed something most practitioners, master's and PhD
students, probably realize relatively early.
So now when someone says "you know mindfulness is proven to change
your brainwaves?" I always add my story: "yes, but the study was done
with EEG, so I don't trust the scientific backing of it" (but
anecdotally, it helps me).
pedalpete wrote 13 hours 33 min ago:
I think you're throwing the baby out with the bathwater, while also
pointing to the missing pieces in our understanding of the brain and
consciousness.
I also work in the field, specifically with sleep slow-wave
enhancement.
Blood flow as a proxy for brain activity always felt to me like a
weak measure, as brain activity is involved in all manner of
operating our biological systems; so is the increased blood flow
measured in fMRI a response to cognition, or to autonomic activity?
What does that oxygenation mean?
EEG is similarly flawed when we try to equate "brainwaves" to
emotions and consciousness. I think we're almost better off measuring
HRV, a much simpler measure, and more reliable.
I'm fascinated that so many people who discuss brainwaves think of
them as actual "waves", when it is just how we plot electrical
activity that creates a visual wave-like pattern.
However, and this is specifically related to our work in sleep, we
can detect slow-waves (I dislike that term, it's the synchronous
firing of neurons) and we are able to stimulate this restorative
brain function through sensory perception during sleep, and even
create slow-waves in a lab using TMS.
Research is linked on our website [1]. I agree the industry needs to
stop conflating what we hope we're seeing with what is actually being
measured, and we don't understand enough about how the brain works,
but I think completely throwing away any brain related measures we
have is going too far.
1 -
[1]: https://affectablesleep.com/how-it-works#research
aardvark92 wrote 1 day ago:
Saw the same thing first-hand with pathology data. Image analysis is
a far more straightforward problem than fMRI, but sorry, I do not
trust
your AI model that matches our pathologist’s scoring with 98.5%
accuracy. Our pathologists are literally guesstimating these numbers
and can vary by like 10-20% just based on the phase of the moon,
whether the pathologist ate lunch yet, what slides he looked at
earlier that day…that’s not even accounting for inter-pathologist
variation…
Also saw this irl with a particular NGS diagnostic. This model was
initially 99% accurate, P.I. smelled BS, had the grad student crunch
the numbers again, 96% accurate, published it, built a company around
this product -> boom, 2 years later it was retracted because the
data was a lot of amplified noise, spurious hits, overfitting.
I don’t know jack compared to the average HN contributor, but even
I can smell the BS from a mile away in some of these biomedical AI
models. Peer review is broken for highly-interdisciplinary research
like this.
canjobear wrote 1 day ago:
> but DL is proven to be able to find pattern even in random noise,
so how can you be sure this is not just overfitting to artefact?
You test your DL decoder on held-out data. This is the common
practice.
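A minimal sklearn sketch of that practice, showing why held-out
evaluation catches pure overfitting to noise; the shapes are
arbitrary, chosen so features vastly outnumber trials, as in fMRI
decoding.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Pure noise "scans": 100 trials, 5000 voxel-like features,
  # random labels.
  X = rng.standard_normal((100, 5000))
  y = rng.integers(0, 2, size=100)

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                            random_state=0)
  clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
  print("train accuracy:", clf.score(X_tr, y_tr))  # ~1.0: memorized
  print("test accuracy: ", clf.score(X_te, y_te))  # ~0.5: chance
The caveat, per the camera story further down, is that a held-out
split only protects against fitting noise; a confound present in both
splits will still sail through.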
j-krieger wrote 1 day ago:
90% of papers I read in computer science / computer security speak of
software written or AI models trained that are nowhere to be found:
not on any git host, nor via email to the authors.
Plutoberth wrote 1 day ago:
I'm not sure I understand. Wouldn't any prediction result above
chance (in the image mind-reading study) be significant?
If the study was performed correctly I don't really need to know much
about fMRI to tell whether it's an interesting result or not.
ladberg wrote 1 day ago:
The study misleadingly claimed to produce images from brainwaves. In
reality, they effectively built a combination of a classifier from
brainwaves to one of a few predetermined classes of shown images
(still cool, but less impressive) and a neural net that reproduces
images it was trained on given a classification (boring).
ErroneousBosh wrote 1 day ago:
> When I asked "but DL is proven to be able to find pattern even in
random noise, so how can you be sure this is not just overfitting to
artefact?"
So here you say quite a mouthful. If you train it on a pattern it'll
see that pattern everywhere - think about the early "Deep Dream"
trippy-dogs-pictures nonsense that was pervasive about eight or nine
years ago.
I repaired a couple of cameras for someone who was working with a
large university hospital about 15 years ago, where they were using
admittedly 2010s-era "Deep Learning" to analyse biopsy scans for
signs of cancer. It worked brilliantly, at least with the training
materials, incredible hit rate, not too terrible false positive rate
(no biggie, you're just trying to decide if you want to investigate
further), really low false negative rate (if there was cancer it
would spot it, for sure, and you don't want to miss that).
But in real-world patient data it went completely mental. The sample
data was real-world patient data too, but on "uncontrolled"
patients it was detecting cancer all over the place. It also
detected cancer in pictures of the Oncology department lino floor, it
detected cancer in a picture of a guy's ID badge, it detected cancer
in a closeup of my car tyre, and it detected cancer in a photo of a
grey overcast sky.
Aw no. Now what?
Well, that's why I looked at the camera for them. They'd photographed
the biopsies with one camera on site, from "real patients", but a lot
of the "clear" biopsies were from other sites.
You're ahead of me now, aren't you?
The "Deep Learning" system had in fact trained itself on a speck of
shit on the sensor of one of the cameras, the one used for most of
the "has cancer" biopsies and most of the "real patient under test"
biopsies. If that little blob of about a dozen slightly darker pixels
was present, then it must be cancer because that's what the grown-ups
told it. The actual picture content was largely irrelevant because
the blob was consistent across all of them.
I'm not too keen on AI in healthcare, not as a definitive "go/no-go"
test thing.
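A small sklearn simulation of exactly this failure mode, with made-up
shapes: the "cancer" images carry a few darkened pixels standing in
for the speck on the sensor, and a held-out split from the same
camera cannot catch it.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)
  n, pixels = 200, 400

  # Random "biopsy images"; the positive class carries the camera
  # speck: three slightly darker pixels, unrelated to any pathology.
  X = rng.standard_normal((n, pixels))
  y = rng.integers(0, 2, size=n)
  X[y == 1, :3] -= 1.5

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
  clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
  print("held out, same camera:", clf.score(X_te, y_te))
  # well above chance: the speck is in both splits

  # A clean sensor at a new site: no speck, and the "detector"
  # collapses to chance.
  X_new = rng.standard_normal((n, pixels))
  y_new = rng.integers(0, 2, size=n)
  print("held out, clean camera:", clf.score(X_new, y_new))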
caycep wrote 1 day ago:
There's fancier ML studies on EEG signal but probably not consistent
enough for clinical work. For now, the one thing EEG can reliably
tell is if you're having a seizure or not, if you're delirious (or in
a coma) or not, or if you're asleep.
SubiculumCode wrote 1 day ago:
There is lots of reliable science done using EEG and fMRI; I believe
you learned the wrong lesson here. The important thing is to treat
motion and physiological sources of noise as a first-order problem
that must be taken very seriously and requires strict data quality
inclusion criteria. As for deep learning in fMRI/EEG, your
response about overfitting is too sweepingly broad to apply to the
entire field.
To put it succinctly, I think you have overfit your conclusions on
the amount of data you have seen
j45 wrote 1 day ago:
I have heard and seen good things about QEEG and fMRI as well.
D-Machine wrote 1 day ago:
I would argue in fact almost all fMRI research is unreliable, and
formally so (test-retest reliabilities are in fact quite miserable:
see my post below). [1] EDIT: The reason being, with reliabilities
as bad as these, it is obvious almost all fMRI studies are
massively underpowered, and you really need to have hundreds or
even up to a thousand participants to detect effects with any
statistical reliability. Very few fMRI studies ever have even close
to these numbers ( [2] ).
[1]: https://news.ycombinator.com/item?id=46289133
[2]: https://www.nature.com/articles/s42003-018-0073-z
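To put rough numbers on the underpowering claim: a Python sketch of
classical attenuation plus a standard Fisher-z power approximation.
The reliability and effect-size values are illustrative ballparks,
not figures from the papers linked above.
  import numpy as np
  from scipy.stats import norm

  def n_for_correlation(r, alpha=0.05, power=0.80):
      """Approximate N to detect correlation r (two-sided, Fisher z)."""
      z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
      return int(np.ceil(((z_a + z_b) / np.arctanh(r)) ** 2 + 3))

  r_true = 0.30      # hypothetical true brain-behavior correlation
  rel_fmri = 0.40    # ballpark task-fMRI test-retest reliability
  rel_behav = 0.80   # typical for a decent behavioral measure

  # Spearman attenuation: unreliable measures shrink the observed r.
  r_obs = r_true * np.sqrt(rel_fmri * rel_behav)
  print(f"observed r ~ {r_obs:.2f}")                            # ~0.17
  print("N with perfect measures:", n_for_correlation(r_true))  # ~85
  print("N with realistic ones:  ", n_for_correlation(r_obs))   # ~270
A roughly three-fold inflation of the required sample, from a
reliability figure that is not even pessimistic, is how studies of
20-30 subjects end up chasing noise.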
mattkrause wrote 1 day ago:
That depends immensely on the type of effect you're looking for.
Within-subject effects (this happens when one does A, but not
when doing B) can be fine with small sample sizes, especially if
you can repeat variations on A and B many times. This is pretty
common in task-based fMRI. Indeed, I'm not sure why you need >2
participants except to show that the principle is relatively
generalizable.
Between-subject comparisons (type A people have this feature,
type B people don't) are the problem because people differ in
lots of ways and each contributes one measurement, so you have no
real way to control for all that extra variation.
D-Machine wrote 1 day ago:
Precisely, and agreed 100%. We need far more within-subject
designs.
You would still in general need many subjects to show the same
basic within-subject patterns if you want to claim the pattern
is "generalizable", in the sense of "may generalize to most
people", but, precisely depending on what you are looking at
here, and the strength of the effect, of course you may not
need nearly as much participants as in strictly between-subject
designs.
With the low test-retest reliability of task fMRI, in general,
even in adults, this also means that strictly one-off
within-subject designs are also not enough, for certain claims.
One sort of has to demonstrate that even the within-subject
effect is stable too. This may or may not be plausible for
certain things, but it really needs to be considered more
regularly and explicitly.
SubiculumCode wrote 1 day ago:
Between-subject heterogeneity is a major challenge in
neuroimaging. As a developmental researcher, I've found that
in structural volumetrics, even after controlling for total
brain size, individual variance remains so large that
age-brain associations are often difficult to detect and
frequently differ between moderately sized cohorts
(n=150-300). However, with longitudinal data where each
subject serves as their own control, the power to detect
change increases substantially—all that between-subject
variance disappears with random intercept/slope mixed models.
It's striking.
Task-based fMRI has similar individual variability, but with
an added complication: adaptive cognition. Once you've
performed a task, your brain responds differently the second
time. This happens when studies reuse test questions—which
is why psychological research develops parallel forms. But
adaptation occurs even with parallel forms (commonly used in
fMRI for counterbalancing and repeated assessment) because
people learn the task type itself. Adaptation even happens
within a single scanning session, where BOLD signal amplitude
for the same condition typically decreases over time.
These adaptation effects contaminate ICC test-retest
reliability estimates when applied naively, as if the brain
weren't an organ designed to dynamically respond to its
environment. Therefore, some apparent "unreliability" may not
reflect the measurement instrument (fMRI) at all, but rather
highlights the failures in how we analyze and conceptualize
task responses over time.
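A numpy simulation of that contamination: subjects keep a perfectly
stable trait, session two just responds less overall, and the naive
absolute-agreement ICC drops while a consistency ICC does not. All
values are made up for illustration.
  import numpy as np

  rng = np.random.default_rng(0)
  n, k = 50, 2                           # 50 subjects, 2 sessions

  trait = rng.normal(0, 1, size=(n, 1))  # stable individual signal
  adapt = np.array([[0.0, -1.5]])        # adaptation: session 2 drops
  Y = trait + adapt + rng.normal(0, 1, size=(n, k))

  # Two-way ANOVA mean squares (subjects x sessions, no replication).
  grand = Y.mean()
  ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
  ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
  ms_e = (((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
          / ((n - 1) * (k - 1)))

  icc_c = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)       # ICC(3,1)
  icc_a = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e
                           + k * (ms_c - ms_e) / n)     # ICC(2,1)
  print(f"consistency ICC: {icc_c:.2f}")  # ~0.5: ranks are stable
  print(f"agreement ICC:   {icc_a:.2f}")  # lower: the session shift
The subjects' rank order never changed; only the session mean did.
Which ICC is reported, and whether the session effect is modeled,
decides whether that looks like an unreliable instrument or an
adapting brain.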
D-Machine wrote 1 day ago:
Yeah, when you start getting into this stuff and see your
first dataset with over a hundred MRIs, and actually start
manually inspecting things like skull-stripping and stuff,
it is shocking how dramatically and obviously different
people's brains are from each other. The nice clean little
textbook drawings and other things you see in a lot of
education materials really hide just how crazy the
variation is.
And yeah, part of why we need more within-subject and
longitudinal designs is to get at precisely the things you
mention. There is no way to know if the low ICCs we see now
are in fact adaptation to the task or task generalities, if
they reflect learning that isn't necessarily task-relevant
adaptation (e.g. the subject is in a different mood on a
later test, and this just leads to a different strategy),
if the brain just changes far more than we might expect, or
all sorts of other possibilities. I suspect if we ever want
fMRI to yield practical or even just really useful
theoretical insights, we definitely need to suss out
within-subject effects that have high test-retest
reliability, regardless of all these possible confounds.
Likely finding such effects will involve more than just
changes to analysis, but also far more rigorous
experimental designs (both in terms of multi-modal data and
tighter protocols, etc).
FWIW, we've also noticed a lot of magic can happen too when
you suddenly have proper longitudinal data that lets you
control things at the individual level.
caycep wrote 1 day ago:
which is why the good labs follow up fMRI results and then go in
with direct neurophysiological recording...
SubiculumCode wrote 1 day ago:
You got downvoted, but I think you are right in a way. Direct
neurophysiological recording is not a panacea because 1)
you can't implant electrodes in your participants ethically, and 2)
recordings usually are limited in number or brain areas. That
said, I think the key is "convergent evidence" that spans
multiple levels and tools of analysis. That is how most
progress has been made in various areas, like autism research
(my current work) or memory function (dissertation). We try to
bridge evidence spanning human behavior, EEG, fMRI, structural
MRI, post-mortem, electrode, eye-tracking, with primate and
rodent models, along with neuron cultures in a dish type of
research. We integrate it and cross-pollinate.
caycep wrote 14 hours 28 min ago:
There is actually a field where subjects do have electrodes
implanted (see [1]). This is done when patients undergo
pre-operative recordings in preparation for brain surgery for
the treatment of epilepsy: the electrodes are already there for
clinical diagnostic purposes, and patients volunteer a few hours
of their time while they are sitting around in the hospital to
participate in various cognitive tasks/paradigms. The areas you
can reach are limited for sure, but some are near regions of
interest depicted in fMRI scans.
Then there are also papers w/ recordings done in primates...
But overall yes - you integrate many modalities of research
for a more robust theory of cognition
[1]: https://www.humansingleneuron.org/
SubiculumCode wrote 1 day ago:
Yes on many of those fronts, although not all those papers
support your conclusion. The field did/does too often use tasks
with too few trials and too few participants. That always
frustrated me as my advisor rightly insisted we collect hundreds
of participants for each study, while others would collect 20 and
publish 10x faster than us.
parpfish wrote 1 day ago:
Small sample sizes are a rational response from scientists in
the face of a) funding levels and b) unreasonable expectations
from hiring/promotion committees.
cog neuro labs need to start organizing their research programs
more like giant physics projects. Lots of PIs pooling funding
and resources together into one big experiment rather than lots
of little underpowered independent labs. But it’s difficult
to set up a more institutional structure like this unless
there’s a big shift in how we measure career
advancement/success.
D-Machine wrote 1 day ago:
+1 to pooling funding and resources. This is desperately
needed in fMRI (although site and other demographic /
cultural effects make this much harder than in physics, I
suspect).
leoc wrote 1 day ago:
I'm not an expert, but my hunch would be that a similar
Big(ger) Science approach is also needed in areas like
nutrition and (non-neurological) experimental psychology
where (apparently) often group sizes are just too small.
There are obvious drawbacks to having the choice of
experiments controlled by consensus and bureaucracy, but if
the experiments are otherwise not worthwhile what else is
there to do?
D-Machine wrote 1 day ago:
I think the problems in nutrition are far, far deeper (we
cannot properly control diet in most cases, and certainly
not over long timeframes; we cannot track enough people
long enough to measure most effects; we cannot trust the
measurement i.e. self-report of what is consumed;
industry biases are extremely strong; most nutrition
effects are likely small and weak and/or interact
strongly with genetics, making the sample size
requirements larger still).
I'm not sure what you mean by "experimental psychology"
though. There are areas like psychophysics that are
arguably experimental and have robust findings, and there
are some decent-ish studies in clinical psychology too.
Here the group sizes are probably actually mostly not too
bad.
Areas like social psychology have serious sample size
problems, so might benefit, but this field also has
serious measurement and reproducibility problems, weak
experimental designs, and particularly strong ideological
bias among the researchers. I'm not sure larger sample
sizes would fix much of the research here.
leoc wrote 1 day ago:
> Areas like social psychology have serious sample size
problems, so might benefit, but this field also has
serious measurement and reproducibility problems, weak
experimental designs, and particularly strong
ideological bias among the researchers. I'm not sure
larger sample sizes would fix much of the research
here.
I can believe it; but a change doesn't have to be
sufficient to be necessary.
D-Machine wrote 1 day ago:
Agreed, it is needed regardless.
D-Machine wrote 1 day ago:
Yes, well "almost all" is vague and needs to be qualified.
Sample sizes have improved over the past decade for sure. I'm
not sure if they have grown meaningfully at the median, because
there are still way too many low-N studies, but you do see
studies now that are at least plausibly "large enough" more
frequently. More open data has also helped here.
EDIT: And kudos to you and your advisor here.
EDIT2: I will also say that a lot of the research on fMRI
methods is very solid and often quite reproducible. I.e. papers
that pioneer new analytic methods and/or investigate pipelines
and such. There is definitely a lot of fMRI research telling us
a lot of interesting and likely reliable things about fMRI, but
there is very little fMRI research that is telling us anything
reliably generalizable about people or cognition.
SubiculumCode wrote 1 day ago:
I remember when resting-state had its oh shit moment when
Power et al (e.g. [1] ) showed that major findings in the
literature, many of which JD Power himself helped build, were
based on residual motion artifacts. Kudos to JD Power and
others like him.
[1]: https://pubmed.ncbi.nlm.nih.gov/22019881/
D-Machine wrote 1 day ago:
Yes, and a great example of how so much research in fMRI
methodology is just really good science working as it
should.
jtbayly wrote 1 day ago:
But none of this (signal/noise ratio, etc) is related to the topic of
the article, which claims that even with good signal, blood flow is
not useful to determine brain activity.
D-Machine wrote 1 day ago:
The difference is that EEG can be used usefully in e.g. biofeedback
training and the study of sleep phases, so there is in fact enough
signal here for it to be broadly useful in some simple cases. It is
not clear fMRI has enough signal for anything even as simple as these
things though.
j45 wrote 1 day ago:
I have been told QEEG can offer an additional perspective in
neurofeedback, etc as well.
fMRIs are being used in TBI/concussion recovery programs that are
study-backed and seem to be delivering results.
hirvi74 wrote 1 day ago:
> fMRI's are being used in TBI/Concussion recovery
Interesting. Do you happen to have any more information on this
topic? I ask because I was under the impression that concussions
are a functional/metabolic injury and not a structural injury,
therefore, concussions are not visible on any type of fMRI, CT
scan, etc. Though, I haven't looked into this topic for almost
half a decade, so I imagine things have likely progressed.
j45 wrote 1 day ago:
Concussions seem to be pretty physiological: first there's a
brain bleed, and blood doesn't seem to pump the same as it did
before the concussion, resulting in different symptoms.
That might be what you're referring to as functional?
Metabolically or otherwise, if the brain can't oversee and run
things as it normally does, other functions in the body, such as
metabolism, would surely be impacted too.
While I'm not sure whether a concussion itself is directly
visible (some come with brain bleeds sizeable enough to be seen),
concussions, to the extent that they involve changes in blood
circulation, can be visualized on fMRI: where flow isn't regular,
those areas of the brain suffer.
Things luckily have progressed, and it's quite exciting.
Out of convenience, I'll share one I know about (no affiliation)
that lays out their therapies and the science behind them as
well.
Effectively (I hope I'm getting this accurately), it seems the
signalling between the brain's blood vessels, the blood, and the
oxygen it carries gets affected, which affects things downstream
from there.
These guys do an fMRI baseline, have you jump on a bike, fMRI
again, see what's not getting blood, and then give you
exercises and activities for those regions of the brain. It's
pretty interesting. [1] Some reported patient outcomes: [2]
Blog links to research: [3] Independently of this I've heard
QEEGs can do a similar thing of seeing where brain activity
is/isn't baseline.
[1]: https://www.cognitivefxusa.com/treatment
[2]: https://www.cognitivefxusa.com/our-patients
[3]: https://www.cognitivefxusa.com/blog
D-Machine wrote 1 day ago:
Well fMRI (as opposed to MRI) is used precisely because it
measures things directly related to metabolism and function.
Not hard to find info on this stuff:
[1]: https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q...
D-Machine wrote 1 day ago:
Yes, there are a few medical cases where fMRI makes good basic
sense, and TBI/concussion sounds immediately like one of
those to me. I seem also to recall them being useful in some
cases prior to brain surgeries and the like.
This all makes sense because fMRI tracks metabolic activity via
oxygenation changes, which is much more clearly and plausibly
related to tissue health and recovery. In these cases, it is also
most likely being used within-subject (i.e. longitudinally), as a
simple comparison against the patient's own baseline, rather than
as an attempt to make speculative inferences about the mind from
groups of people via bespoke statistical analyses that rely on
questionable assumptions about the BOLD response reflecting
overly specific kinds of neural activity.
j45 wrote 1 day ago:
fMRI can track oxygenation changes and, indirectly, where the
blood flow is or isn't, and perhaps give some ideas on where to
restore it.
All to say, this application might not fall in the 40%.
I just find articles like these can't help but feel like they
have an agenda to undermine something, instead of simply
acknowledging what the technique does and doesn't work for.
There's no doubt these researchers have found something, but
the need for sensationalistic headlines is well known in
academia as well.
Sometimes it's noticeable where the research is specific in
scope, but the findings are more general and broad.
kspacewalk2 wrote 1 day ago:
This study is validating a commonplace fMRI measure (change in
blood-oxygenation-level-dependent, or BOLD, signal) by comparing
it with a different MRI technique: a multiparametric quantitative
BOLD model, derived from two separate MRI scans that measure two
different kinds of signal (transverse relaxation rates) and then
multiplied/divided by a bunch of constants to get at a value.
I'm a software engineer in this field, and this is my
layman-learns-a-bit-of-shop-talk understanding of it. Both of these
techniques involve multiple layers of statistical assumptions, and
multiple steps of "analysing" data, which in itself involves implicit
assumptions, rules of thumb and other steps that have never sat well
with me. A very basic example of this kind of multi-step data massaging
is "does this signal look a bit rough? No worries, let's
Gaussian-filter it".
A lot of my skepticism is due to ignorance, no doubt, and I'd probably
be braver in making general claims from the image I get in the end if I
was more educated in the actual biophysics of it. But my main point is
that it is not at all obvious that you can simply claim "signal B shows
that signal A doesn't correspond to actual brain activity", when it is
quite arguable whether signal B really does measure the ground truth,
or whether it is simply prone to different modelling errors.
In the paper itself, the authors say that it is limited by methodology,
but because they don't have the device to get an independent measure of
brain activation, they use quantitative MRI. They also say it's because
of radiation exposure and blah blah, but the real reason is their uni
can't afford a PET scanner for them to use.
"The gold standard for CBF and CMRO2 measurements is 15O PET; but this
technique requires an on-site cyclotron, a sophisticated imaging setup
and substantial experience in handling three different radiotracers
(CBF, 15O-water; CBV, 15O-CO; OEF, 15O-gas) of short half-lives8,35.
Furthermore, this invasive method poses certain risks to participants
owing to the exposure to radioactivity and arterial sampling."
themulticaster wrote 1 day ago:
> [...] but the real reason is their uni can't afford a PET scanner
for them to use.
This is incorrect, TUM has a PET scanner (site in German): [1] Can't
comment regarding the other observations.
[1]: https://nuklearmedizin.mri.tum.de/de/Patienten-Zuweiser/Pet-...
kortex wrote 1 day ago:
This is why I love this site. You get input from so many specialized
folks! I appreciate you contributing your expertise and I also
appreciate you calling out the limits to that knowledge.
Two points I'm hoping you can help clarify:
> Researchers ... found that an increased fMRI signal is associated
with reduced brain activity in around 40 percent of cases.
So it's not just that they found it was uncorrelated, they found it
was anticorrelated in 40% of cases?
And you are suggesting that conclusion suffers from the same
potential issues as these fMRI studies in general?
Like you mention, it seems to me if we wanted to really validate the
model, we'd have to run the same experiment with two, three, or maybe
even more different modalities (fMRI, PET with different tracers,
etc).
freehorse wrote 1 day ago:
Most studies in non-clinical populations afaik do not use 15O PET
though? Afaik this is mostly used for clinical purposes. Could be
wrong though.
kspacewalk2 wrote 1 day ago:
If you have a PET/MR system [0], you can probably do this "gold
standard" comparison, and I know that one is used for research
studies. I think you can piggy-back off a different study's healthy
controls to write a paper like this, if that study already uses
PET/MR and if adding an oxygen metabolite scan isn't a big problem.
But that's speaking as someone who does not design experiments.
[0]
[1]: https://www.siemens-healthineers.com/en-us/magnetic-resona...
physPop wrote 16 hours 17 min ago:
No, you generally can't irradiate healthy volunteers for studies.
Aurornis wrote 1 day ago:
This isn’t entirely news to people in the field doing research, but
it’s important information to keep in mind when anyone starts pushing
fMRI (or SPECT) scans into popular media discussions about neurology or
psychiatry.
There have been some high profile influencer doctors pushing brain
imaging scans as diagnostic tools for years. Dr. Amen is one of the
worst offenders with his clinics that charge thousands of dollars for
SPECT scans (not the same as the fMRI in this paper but with similar
interpretation issues) on patients. Insurance won’t cover them
because there’s no scientific basis for using them in diagnosing or
treating ADHD or chronic pain, but his clinics will push them on
patients. Seeing an image of their brain with some colors overlayed and
having someone confidently read it like tea leaves is highly convincing
to people who want answers. Dr. Amen has made the rounds on Dr. Phil
and other outlets, as well as amassing millions of followers on social
media.
mNovak wrote 1 day ago:
>> Seeing an image of their brain with some colors overlayed ... is
highly convincing
Indeed, there's been quite a few studies [1] that find just including
any old image of a brain with stuff highlighted will cause a paper to
be perceived as more scientifically credible.
[1]: https://pubmed.ncbi.nlm.nih.gov/17803985/
saidnooneever wrote 1 day ago:
Thanks for this comment. It was really insightful, thank you.
caycep wrote 1 day ago:
I saw a clinical report of his on a patient, he puts a graphic in
their report of their "brain scan" but it's basically a vector
graphic of the brain w/ a multicolor MS Paint gradient...
api wrote 1 day ago:
Pop science guru-ing is a giant flashing red sign for me. I am never
even a little surprised when the latest “sense maker” or pop
science guru comes out as a complete loon or is consumed by some kind
of scandal.
Influencers in general are always suspect. The things that get you an
audience fast are trolling or tabloid-ish tactics like conspiracism.
There are good ones but you have to be discerning.
ashleyn wrote 1 day ago:
Back in 2009 I remember reading about how a dead salmon
apparently showed brain activity in fMRI without proper
statistical methods.
fMRI studies are something frequently invoked unscientifically and
out of context.
[1]: https://www.wired.com/2009/09/fmrisalmon/
caycep wrote 1 day ago:
I think technically there's some statistical correction you apply
to the voxels to avoid this. But yeah... most hypotheses from
fMRI are considered hypotheses until some other modality (e.g.
electrical recordings) confirms them.
E.g. the well-regarded studies, like Kanwisher and the visual
processing areas, have follow-up studies in primates and surgical
volunteers with actual electrical activity correlating with
visual stimuli, etc.
suyash wrote 1 day ago:
Dr. Amen is more of a marketing/sales guy than a medical expert.
badlibrarian wrote 1 day ago:
I thought I was being clever by coining the term "non-invasive
phrenology" but it appears people are already using it
non-ironically.
caycep wrote 1 day ago:
I saw Parvizi say this in a talk back in 2019!
fluidcruft wrote 1 day ago:
("wallet biopsy" is another fun term if you haven't encountered it)
kridsdale3 wrote 1 day ago:
Cashectomy.
telotortium wrote 1 day ago:
In many ways old-school bump measurement is actually less invasive
kspacewalk2 wrote 1 day ago:
Dr. Mike, a rare YouTube doctor who is not peddling supplements and
wares, and thus seems to be at the forefront of medical critical
thinking on the platform, interviewed Dr. Amen recently[0]. I haven't
finished the interview yet, but having watched some others, generally
the approach is to let the interviewee make their grandiose claims,
agree with whatever vague generalities and truisms they use in their
rhetoric (yes it's true, doctors don't spend enough time explaining
things to patients!), and then lay into them on the actual science
and evidence.
[0]
[1]: https://www.youtube.com/watch?v=J-SHgZ1XPXs
patmorgan23 wrote 1 day ago:
Dr. Mike did an incredible job in that interview. He gave Dr. Amen
all the rope to hang himself with his own words. When you're
hawking a diagnostic method but you're not interested in building
up the foundation of evidence for it with a double-blinded,
randomized controlled study whose results would change how you're
treating patients, it's pretty clear who the snake oil salesman
is.
rkagerer wrote 1 day ago:
I'm no expert in medicine, but I watched that entire video and
your analogy about performance and rope doesn't fit well with how
it came across to me.
I actually thought the interviewer was a little disingenuous. He
said things like "We're on the same team" and "I'm not trying to
trap you", then proceeded to lob his guest with criticisms from
the other team and questions aimed to maneuver him into a
contradiction. There's nothing inherently wrong with that, but
if you're going to do it, be forthright you're engaging in a
debate.
Earlier in the interview he could have put his cards on the table
and plainly stated "Myself and others in the medical community
are skeptical of the efficacy of imaging on outcomes, and a
rigorous, double-blind study would lend dramatic support for us
to adopt what you're touting."
Then they could have had the conversation he was clearly after,
focused on that issue.
Instead it felt like I was watching for ages as he took a winding
route to get there, then the interview cut off abruptly when they
finally really did.
The overlays applied in editing, while helpful and fair in some
cases, at other times came across as one-sided. It's a shame we
can't see a follow-up where the interviewee has an opportunity to
respond (or squirm) in light of them.
For the record I would very much love to see additional research
and gold-standard, double-blind studies. In the meantime I'll
treat this as "Hey, we've got this interesting thing we can
measure, we're seeing some good results in our practice" without
over-emphasizing the confidence in this one diagnostic.
I did find the bit interesting about how having a gauge you can
viscerally see impacted patients' engagement in care. Both
agreed on the potential usefulness of that aspect, and conceded
the difference in profiles between patients coming to Dr. Amen
vs. ordinary front-line family physicians.
westmeal wrote 16 hours 0 min ago:
In my opinion it's pretty clear Dr. Amen was really only there
to push a book. He was never really interested in having a real
discussion anyway; he's just shilling. If you're going to push a
diagnostic method and supplements to solve issues without any
proof whatsoever, that's a problem. No one should be making
statements about the efficacy of a technique without evidence.
The fact that he got defensive about it speaks volumes about his
character and what he hopes to get away with.
jama211 wrote 1 day ago:
Unfortunately I worry about the rebound effect, where even though
the entire interview was debunking his claims, this could still
on average increase Amen’s popularity.
Aurornis wrote 15 hours 55 min ago:
I worry about the same effect. Debunking style conversations
produce the opposite effect in viewers who instinctively take
the side of anyone who appears to be trying to help and
reactively take the opposite position of anyone who appears to
be attacking.
So many will watch this video and come away siding with Dr.
Amen, feeling like they're doing the right thing to disregard
the mean man on the other side who is questioning everything.
The alternative medicine and pseudoscience communities thrive
on "but what if it works" or "they're just trying to help"
attitudes, which snake oil sellers capitalize on.
flatline wrote 1 day ago:
I also thought the rest of the interview was really worthwhile -
they talked a lot about real problems in the medical industry
from different perspectives. What a great and critical discussion
from Dr. Mike. If Amen had conceded the point they could have
moved on. There could be real findings to be had there, and some
may even match his conclusions, but many likely will not, and the
whole thing could also be pure fiction. We should want better
answers to these questions. It's unfortunate to watch someone as
seemingly intelligent and well-informed as Amen come across as
shilling snake oil, and/or just being hung up on his ego, at the
end of it all. Scientific literacy is so critical, because it's
easy to cloak pseudoscience behind high-tech smokescreens.
georgeecollins wrote 1 day ago:
As someone who used to work at the Cognitive Neurophysiology Lab
at the Scripps Institute, doing some work on functional brain
imaging, I can confirm this was not news even thirty years ago. I
guess this is trying to make some point to lay people?
tlb wrote 1 day ago:
Are there proposed reasons for increased blood flow to brain regions
other than neural activity? Are neurons flushing waste products or
something when less active?
D-Machine wrote 1 day ago:
Many reasons, and yes, basically, that is one of them.
Ekstrom, A. (2010). How and when the fMRI BOLD signal relates to
underlying neural activity: The danger in dissociation. Brain
Research Reviews, 62(2), 233–244. [1]
[1]: https://doi.org/10.1016/j.brainresrev.2009.12.004
[2]: https://scholar.google.ca/scholar?cluster=6420450573860538...
DANmode wrote 1 day ago:
The glymphatic system, sure.
freehorse wrote 1 day ago:
The BOLD response (oxygen-neuronal activity coupling) is pretty
much accepted in neuroscience. There have been criticisms of it
(non-neuronal contributions, the mystery of negative
responses/correlations), but in general the coupling is accepted.
D-Machine wrote 1 day ago:
The measurement of the BOLD response is well accepted, but its
interpretation with respect to cognition is still mostly unclear.
Most papers assuming the BOLD response can uniformly be
interpreted as "activation" are quite dubious.
georgeecollins wrote 1 day ago:
Yes, I stupidly read the headline and said "no duh", but they are
making a point about our understanding of brain activity. I was
thinking about the part of the signal that is reliably filtered
out; they are talking about something else. Sorry, I was wrong.
sgt101 wrote 1 day ago:
Good for you George E Collins.
jtbayly wrote 1 day ago:
Really? This was known: "there is no generally valid coupling between
the oxygen content measured by MRI and neuronal activity"?
mattkrause wrote 1 day ago:
"generally valid" is a bit of a loaded phrase.
They are indeed coupled, but the coupling is complicated and may be
situationally dependent.
Honestly, it's hard to imagine many aggregate measurements that
aren't. For example, suppose you learn that the average worker's
pay increased. Is it because a) the economy is booming or b) the
economy crashed and lower-paid workers have all been laid off (and
are no longer counted).
georgeecollins wrote 1 day ago:
The coupling was always debated, but you are right, that wasn't
known or at least decided. I made a mistake and you are right.
Hasty post. I apologize.
Aurornis wrote 1 day ago:
fMRI has been abused by a lot of researchers, doctors, and authors
over the years even though experts in the field knew the reality.
It’s worth repeating the challenges of interpreting fMRI data to a
wider audience.
sigmoid10 wrote 1 day ago:
The way I understood it is that while individual fMRI studies can
be amazing, it is borderline impossible to compare them when made
using different people or even different MRI machines. So
reproducibility is a big issue, even though the tech itself is
extremely promising.
tsimionescu wrote 1 day ago:
The article is pointing out that one of the base assumptions
behind fMRI, that increased blood flow (which is what the machine
can image) is strongly correlated to increased brain activity
(which is what you want to measure) is not true in many
situations. This means that the whole approach is suspect if you
can't tell which situation you're in.
mattkrause wrote 1 day ago:
fMRI usually measures BOLD, changes in blood oxygenation
(well, deoxygenation). The point of the paper is that you can
get relative changes like that in lots of ways: you could have
more or less blood, or take out more/less oxygen from the same
blood.
These can be measured themselves separately (that's exactly
what they did here!) and if there's a spatial component, which
the figures sort of suggest, you can also look at what a
particular spot tends to do. It may also be
interesting/important to understand why different parts of the
brain seem to use different strategies to meet that demand.
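The standard calibrated-fMRI (Davis) model makes the decoupling
concrete: BOLD depends on both the flow ratio f = CBF/CBF0 and
the oxygen-metabolism ratio r = CMRO2/CMRO2_0, so the same BOLD
change can come from many (f, r) combinations. A minimal Python
sketch with typical literature parameter values (illustrative,
not taken from this paper):

    # Davis model: dS/S = M * (1 - f**(alpha - beta) * r**beta)
    M, alpha, beta = 0.08, 0.38, 1.5

    def bold_change(f, r):
        return M * (1.0 - f ** (alpha - beta) * r ** beta)

    print(bold_change(1.2, 0.9))  # ~ +2.4%: BOLD up, CMRO2 down 10%
    print(bold_change(1.0, 1.1))  # negative BOLD from metabolism alone

In other words, a modest flow increase can produce a positive
BOLD signal even while oxygen metabolism falls, which is exactly
the kind of dissociation at issue here.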
D-Machine wrote 1 day ago:
It is in fact even difficult to compare the same person on the
same fMRI machine (and especially in developmental contexts).
Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C.
(2018). Test-retest reliability of longitudinal task-based fMRI:
Implications for developmental studies. Developmental Cognitive
Neuroscience, 33, 17–26.
[1]: https://doi.org/10.1016/j.dcn.2017.07.001
mattkrause wrote 1 day ago:
I read that paper as suggesting that development, behavior, and
fMRI are all hard.
It's not at all clear to me that teenagers' brains OR
behaviours should be stable across years, especially when it
involves decision-making or emotions. Their Figure 3 shows that
sensory experiments are a lot more consistent, which seems
reasonable.
The technical challenges (registration, motion, etc.) seem like
things that will improve, and there are some practical
suggestions as well (counterbalancing items, etc.).
D-Machine wrote 1 day ago:
While I agree I wouldn't expect too much stability in
developing brains, unfortunately there are pretty serious
stability issues even in non-developing adult brains (quote
below from the paper, for anyone who doesn't want to click
through).
I agree it makes a lot of sense that the sensory experiments are
more consistent; somatosensory and sensorimotor localization
results generally seem to be the most consistent fMRI findings. I
am not sure registration or
motion correction is really going to help much here, I
suspect the reality is just that the BOLD response is a lot
less longitudinally stable than we thought (brain is changing
more often and more quickly than we expected).
Or if we do get better at this, it will be through more
sophisticated "correction" methods (e.g. deep learners that can
predict typical longitudinal BOLD changes, which would better
allow such changes to be "subtracted out", or something like
that).
But I am skeptical about progress here given the amount of
data needed to develop any kind of corrective improvements in
cases where there are such low longitudinal reliabilities.
===
> Using ICCs [intraclass correlation coefficients], recent
efforts have examined test-retest reliability of task-based
fMRI BOLD signal in adults. Bennett and Miller performed a
meta-analysis of 13 fMRI studies between 2001 and 2009 that
reported ICCs. ICC values ranged from 0.16 to 0.88, with the
average reliability being 0.50 across all studies. Others
have also suggested a minimal acceptable threshold of
task-based fMRI ICC values of 0.4–0.5 to be considered
reliable [...] Moreover, Bennett and Miller, as well as a
more recent review, highlight that reliability can change on
a study-by-study basis depending on several methodical
considerations.
SubiculumCode wrote 1 day ago:
This isn't really true. The issue is that when you combine data
across multiple MRI scanners (sites), you need to account for
random effects (e.g. site specific means and variances)...see
solutions like COMBAT. Also if they have different equipment
versions/manufacturers those scanners can have different SNR
profiles. The other issue is that there are many processing
steps, with many ways to perform each. In general, researchers don't
process in multiple ways and choose the way that gives them the
result they want or anything nefarious like that, but it does
make comparisons difficult since the effects of different
preprocessing variations can be significant. To defend against
this, many peer reviewers, like myself, request researchers
perform the preprocessing multiple ways to assess how robust the
results are to those choices. Another way the field has combatted
this issue has been software like fMRIprep.
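A crude sketch of that site-level correction in Python (real
ComBat adds empirical-Bayes shrinkage across features and
protects covariates of interest; this only aligns per-site means
and variances with the pooled ones):

    import numpy as np

    def harmonize(values, sites):
        # rescale each site's values to the pooled mean/variance
        values = np.asarray(values, dtype=float)
        sites = np.asarray(sites)
        out = values.copy()
        gm, gs = values.mean(), values.std()
        for s in np.unique(sites):
            m = sites == s
            z = (values[m] - values[m].mean()) / values[m].std()
            out[m] = z * gs + gm
        return out

Proper harmonization would also regress out covariates such as
age or diagnosis first, so the correction doesn't eat the effects
you actually care about.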
Aurornis wrote 1 day ago:
Individual fMRI is not a useful diagnostic tool for general
conditions. There have been some clinics trying to push it (or
SPECT) as a tool for diagnosing things like ADHD or chronic pain,
but there is no scientific basis for this. The operator can
basically crank up the noise and get some activity to show up,
then tell the patient it’s a sign they have “ring of fire
type ADHD” because they set the color pattern to reds and a
circular pattern showed up at some point.
darfo wrote 1 day ago:
Can the OP change the HN item title so scrollers don't think there is a
problem with MRI? Isn't fMRI being questioned?
dang wrote 1 day ago:
Looks like a mod got to it!
belter wrote 1 day ago:
Why did TUM let this misleading headline front the news release?
Don't we have enough issues with academia? The result just means
BOLD is an imperfect proxy.
kspacewalk2 wrote 1 day ago:
It is especially unforgivable that the title of the news release
itself is about "40 percent of MRI signals". What, as in all MRI,
not just fMRI? Hopefully an honest typo and not just resulting
from ignorance.
mrcrm9494 wrote 1 day ago:
This headline is a bit misleading on first read, since the
finding only affects functional MRI (fMRI), which has been
controversial for a long time. A prominent example is the
activity that was detected in a dead salmon.
ErroneousBosh wrote 1 day ago:
If you apply enough gain and filtering to an unknown signal,
eventually you'll pull something out of it that you can convince
yourself is what you're looking for.
SubiculumCode wrote 1 day ago:
The dead salmon was just a lesson in failing to correct for multiple
comparisons.
prefrontal wrote 1 day ago:
As the first author of the salmon paper, yes, this was exactly our
point. fMRI can be an amazing tool, but if you are going to trust
the results you need to have proper statistical corrections along
the way.
SubiculumCode wrote 1 day ago:
Cheers!
giancarlostoro wrote 1 day ago:
So, is fMRI like "fast" MRI? Can someone fill the rest of us mortals
in on this? :)
ErroneousBosh wrote 1 day ago:
I'm going to follow on a bit from what jawilson said. The idea is
this - you can measure blood oxygenation by sticking your head in a
big magnet that makes atoms spin really fast and measuring the
radio waves that come off. This is imprecise, but reasonably
repeatable.
So if I show you a picture of a cat, and you like cats, then a bit
of your brain might start using more oxygen because you're thinking
about cute furry things, and if I show you a picture of a car, and
you like cars, a different bit of your brain lights up showing more
oxygen use because you're thinking about fast shiny things.
But really we've only got the barest idea of what bits of the brain
do what, and maybe it's a bit of brain that goes "hey I'm happy"
that lights up in both cases because you like both cats and cars.
We can kind of see bits we think are associated with muscle
movement coming to life if I show you a picture of a bike, and you
like cycling, and if I show you a really cool mountain track you
imagine belting down it flat out. That lights up differently if I
show you something else.
However, we do not really know except in very broad terms what bits
of the brain actually do what. We can't "see thoughts", we just
know that some bits of brain seem to use more oxygen than others,
and from that we guess "this bit of brain is for thinking about
sitting in a nice cafe with a cup of coffee and a newspaper" versus
"this bit of brain is for being frightened of lions".
At least when phrenology was a thing, the ceramic heads with lines
painted on were inexpensive and didn't require three-phase power
and huge barrels of liquid helium.
jawilson2 wrote 1 day ago:
f is functional.
MRIs are basically huge magnets used for imaging. When you apply a
strong magnetic field, different tissue types and densities will
react differently, and the MRI is basically measuring how those
tissues react to the magnet. It is very good for imaging soft
tissues, but not so much bone. Someone figured out that you can
measure blood flow using the MRI, because blood cells react in a
magnetic field, then "relax" at a known rate. Since we can measure
blood flow, that is correlated with increased brain activity, i.e.
since more neurons are firing, they require more energy, and
therefore more blood. So, fMRI is using blood flow as a proxy for
brain activity.
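In practice that proxy is made quantitative with a general linear
model: the stimulus train is convolved with a canonical
hemodynamic response function, and each voxel is regressed on the
prediction. A minimal Python sketch with simulated data and
roughly SPM-like double-gamma HRF parameters (illustrative values
only):

    import numpy as np
    from scipy.stats import gamma

    tr, n_vol = 2.0, 150
    t = np.arange(0, 30, tr)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # double gamma
    hrf /= hrf.sum()

    onsets = np.zeros(n_vol)
    onsets[::20] = 1                            # a stimulus every 40 s
    design = np.convolve(onsets, hrf)[:n_vol]   # predicted BOLD

    rng = np.random.default_rng(1)
    voxel = 2.0 * design + rng.normal(0, 0.5, n_vol)  # fake voxel

    X = np.column_stack([np.ones(n_vol), design])
    beta = np.linalg.lstsq(X, voxel, rcond=None)[0]
    print(beta[1])                              # estimated activation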
parpfish wrote 1 day ago:
Fmri doesn’t measure blood flow, it measures the oxygen level
in the blood. Hemoglobin molecules change shape when they carry
oxygen and the different shapes react differently to magnets,
which is a real stroke of luck
physPop wrote 16 hours 12 min ago:
It doesn't measure the oxygen level directly either. The BOLD
signal is correlated with dephasing induced by the
oxy/deoxyhemoglobin ratio, which isn't even necessarily localized
to the voxel (flow, or long-range magnetic susceptibility
perturbations from nearby accumulated deoxyhemoglobin, i.e.
veins).
jawilson2 wrote 1 day ago:
Yep, this is why it's also called BOLD imaging, for
blood-oxygenation-level-dependent fMRI. I did my PhD in BME and
brain-computer interfaces, but it has been a while since I
worked in the field.
freehorse wrote 1 day ago:
Structural MRI does not record brain activity, because it is, like,
structural, not functional.
Structural MRI is even more abused, where people find "differences"
between 2 groups with ridiculously small sample sizes.
kgarten wrote 1 day ago:
Wondering why you are downvoted. You are right, though it's kind
of implied that the author means fMRI, as the title focuses on
brain activity only.
kspacewalk2 wrote 1 day ago:
It's not that fMRI itself is controversial, it's that it is prone to
statistical abuse unless you're careful in how you analyse the data.
That's what the dead salmon study showed - some voxels will appear
"active" purely by statistical chance, so without correction you will
get spurious activations.
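The scale of the problem is easy to simulate: with enough null
voxels, plenty "light up" at an uncorrected threshold, and
family-wise correction removes them. A minimal Python sketch
(pure noise, no salmon required):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels, n_scans = 100_000, 30
    data = rng.normal(size=(n_voxels, n_scans))  # no signal at all
    t = data.mean(1) / (data.std(1, ddof=1) / np.sqrt(n_scans))
    p = 2 * stats.t.sf(np.abs(t), df=n_scans - 1)

    print((p < 0.001).sum())            # ~100 "active" voxels by chance
    print((p < 0.05 / n_voxels).sum())  # Bonferroni-corrected: ~0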
tsimionescu wrote 1 day ago:
This study questions the fMRI method itself, not the statistical
analysis (you're right that the dead salmon study was challenging
the way statistical analysis is done). Basically, this study claims
that the association between the BOLD signal measured by fMRI and
actual brain activity is quite weak, and they are even
anti-correlated in 40% of cases.
There is no statistical analysis that can save you if your
interpretation of a signal is wrong (for example, you can't get
information about personality from phrenology, regardless of what
statistical analysis you try to apply to the data). That's not to
say that we need to just trust this study implicitly - I'm just
trying to describe how serious of a problem to the field their
claim is.
bschne wrote 1 day ago:
you're telling me the results of this paper were likely bs? ---
[1]: https://www.sciencedirect.com/science/article/abs/pii/S1053811...
kspacewalk2 wrote 1 day ago:
Curious what you find to be "bs" about the results of this paper?
That statistical corrections are necessary when analysing fMRI scans
to prevent spurious "activations" that are only there by chance?
koolala wrote 1 day ago:
They were being sarcastic.
parpfish wrote 1 day ago:
The point of the salmon paper is to demonstrate to people “if you
do your stats wrong, you’re going to think noise is real” and not
“fmri is bs”
prefrontal wrote 1 day ago:
As the first author on the salmon paper, yes, that was exactly our
point. Researchers were capitalizing on chance in many cases as
they failed to do effective corrections to the multiple comparisons
problem. We argued with the dead fish that they should.
chuckadams wrote 1 day ago:
> We argued with the dead fish that they should.
Arguing with a dead fish may be a sign you're working too hard :)
prefrontal wrote 1 day ago:
Yeah, it did prove to be a rather one-sided conversation... ;)
chuckadams wrote 1 day ago:
Did you try tuning it?
[1]: https://youtu.be/F2y92obnsc0
rdgthree wrote 1 day ago:
Nothing to add to this conversation in particular, but just
wanted to say - truly amazing paper. Well done!
prefrontal wrote 1 day ago:
Many thanks! It was a ton of fun. Hard to believe that we are
coming up on 20 years since the data for the salmon was first
collected...
fishnchips wrote 1 day ago:
Oh man you stole my thunder. I hoped to be the first to bring up the
dead salmon.