Friday, July 8, 2016

Free Will, Presentiment, and an fMRI Bug

Over the course of the last decade, a number of neuroscientists have come to the conclusion that "free will" is in fact an illusion, and that all human behavior is fundamentally deterministic. As evidence, they cite studies showing that brain scans can predict the decisions individuals make moments before those individuals become aware of their decisions. They have even developed a model built around this notion, called passive frame theory. According to this model, consciousness does not make decisions, but simply observes the results of autonomous processes.

But there's a big problem with this model and the studies on which it is based. Other studies have identified a phenomenon called presentiment, which appears to show that the brains of subjects who are randomly shown photographs (either neutral or designed to provoke strong emotional responses) react to each photograph a moment before it is displayed. This may not be evidence of some sort of psychic awareness, but it does call into question the studies on which passive frame theory is built.

Put simply, you can't conclude that a free choice has not occurred if the brain can somehow perceive future information. Likewise, if a problem with the scanner makes it look like the brain can react to the future when it really cannot, the same problem is probably affecting the free will studies as well. When confronted with the presentiment results, most skeptics immediately jump to the conclusion that something must be wrong with the scanner, but the free will studies have not received the same scrutiny.

Normally I find skeptics far too dismissive of possible paranormal results, but based on some recent findings I'm going to agree with them this time around, believe it or not. A recent review of fMRI data from the last fifteen years has turned up a bug in the software that runs the machines, and that bug could easily have produced data collection errors in both the free will and the presentiment studies.

This is especially true because both sets of studies rely on tracking neural activity at very high temporal resolution, where any inaccuracy could significantly skew the results. Generally speaking, all it would take is a shift of a tenth of a second or so in the scans to produce results for both studies that conform to common-sense intuition. That is, they would show that the conscious mind is making decisions as they happen, and that it is not seeing into the future.
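To see how little it would take, here is a toy calculation with entirely made-up numbers. If the scanner's timestamps lag the stimulus log by a tenth of a second, a response that actually follows a photograph can appear to precede it:

```python
# Toy illustration with made-up numbers: a small clock offset between the
# stimulus log and the scan data can make an ordinary response look precognitive.
stimulus_time = 0.00        # photograph displayed at t = 0 seconds
recorded_response = -0.05   # response logged 50 ms BEFORE the stimulus (apparent presentiment)
clock_offset = 0.10         # hypothetical tenth-of-a-second lag in the scanner's timestamps

corrected_response = recorded_response + clock_offset
print(corrected_response)   # 0.05 -> the response actually came 50 ms AFTER the stimulus
```

Obviously the real correction would have to come from re-analyzing the raw data, but the arithmetic shows why a systematic offset of that size is enough to flip the interpretation.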


It’s fascinating stuff, but the fact is that when scientists are interpreting data from an fMRI machine, they’re not looking at the actual brain. As Richard Chirgwin reports for The Register, what they're looking at is an image of the brain divided into tiny 'voxels', then interpreted by a computer program.

"Software, rather than humans ... scans the voxels looking for clusters," says Chirgwin. "When you see a claim that ‘Scientists know when you're about to move an arm: these images prove it,' they're interpreting what they're told by the statistical software."

To test how good this software actually is, Anders Eklund and his team gathered resting-state fMRI data from 499 healthy people sourced from databases around the world, split them up into groups of 20, and measured them against each other to get 3 million random comparisons.

They tested the three most popular software packages for fMRI analysis - SPM, FSL, and AFNI - and while they shouldn't have found much difference across the groups, the software produced false-positive rates of up to 70 percent.

And that's a problem because, as Kate Lunau at Motherboard points out, the team expected to see a false-positive rate of just 5 percent. Rates that much higher suggest that some results were so inaccurate they could be indicating brain activity where there was none.
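The specific bug involved cluster-level inference, but the general principle is easy to demonstrate. Here is a quick sketch, with a made-up voxel count, of why testing many voxels at an uncorrected 5 percent threshold yields far more than a 5 percent chance of a spurious finding:

```python
import random

# Under the null hypothesis, p-values are uniformly distributed on [0, 1].
# If each of many voxels is tested at p < 0.05 with no correction for
# multiple comparisons, the chance that SOME voxel comes up "significant"
# is far higher than 5 percent. The voxel count here is made up for illustration.
random.seed(1)
n_experiments = 2000
n_voxels = 50            # hypothetical number of independent voxel-wise tests

false_alarms = 0
for _ in range(n_experiments):
    p_values = [random.random() for _ in range(n_voxels)]
    if min(p_values) < 0.05:   # any voxel crosses the uncorrected threshold
        false_alarms += 1

rate = false_alarms / n_experiments
print(f"familywise false-positive rate: {rate:.0%}")  # in theory about 1 - 0.95**50, roughly 92%
```

Eklund's team was testing something subtler, namely whether the packages' cluster-size corrections actually deliver the promised 5 percent familywise rate, but the simulation shows why a flaw in that correction step matters so much.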

"These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results," the team writes in PNAS.

According to the article, the bug was corrected a little over a year ago, after affecting at least the last fifteen years of results. What that means is that the free will and presentiment studies should be re-run with the new, corrected software to see if they produce the same results. I think it's likely that they won't, but if the data holds up, it would suggest that there might be a paranormal effect going on. The only way the free will studies can stand is if they produce the same results with the corrected software while the presentiment studies do not.

My main problem with passive frame theory is that it denies the existence of any sort of "active frame" processing. When the word "only" is inserted into a prescriptive model, it usually means that the model is wrong. As a magical practitioner, I can sense passive frame processing in my mind, and it differs from the active frame processing that I use in magical rituals and when making important decisions. I will say, though, that even I run on passive frame a lot because it's extremely inefficient to deliberately dictate every aspect of things like my daily routines.

So it isn't that the passive framers are wrong per se - they are describing a mode of consciousness that most people use most of the time. I would argue, though, that one of the key goals of spiritual practice should be developing the ability to shift into active frame mode when it is appropriate to do so. The notion of attachment and aversion in both Thelema and Buddhism is all about transcending passive frames that become maladaptive as the circumstances of your life change. Most of the time you don't need to do it, but having the ability can make a huge difference.


2 comments:

Ivy Bromius said...

I work in the software field, and I have to say that this is extremely likely. It would actually only take a hundredth-of-a-second rounding error -- one that gets compounded through internal software cycles -- to create a noticeable measurement error.
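To put hypothetical numbers on that: a hundredth-of-a-second error repeated over just ten internal cycles adds up to the tenth-of-a-second shift discussed in the post.

```python
# Hypothetical numbers: how a small per-cycle rounding error compounds.
error_per_cycle = 0.01   # a hundredth of a second lost to rounding each cycle
cycles = 10              # compounded over ten internal software cycles

drift = error_per_cycle * cycles
print(f"accumulated drift: {drift:.2f} s")  # 0.10 s, a tenth-of-a-second shift
```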

Scott Stenwick said...

Something along those lines is my thought as well. Seeing as the two sets of studies rely so heavily on exact timing, they strike me as particularly vulnerable to inaccuracies.