I want to take a look at an article published in the Atlantic a few years ago, called “The Brain on Trial” by David Eagleman. (The original article can be found by clicking on this sentence.) I will not critique the general legal conclusion that Eagleman pushes for, because I largely agree with him, i.e., the conclusion that neuroscience can be used to determine whether some temporary abnormality can and should exculpate an alleged criminal offender.[1] What I will address is the sloppy philosophy in which Eagleman engages. I do appreciate that Eagleman is well aware of the intellectual domains of which he speaks, but his craft in each varies widely; his philosophy, in particular, needs critiquing.
I am going to pick out certain critical statements and points of reasoning to demonstrate why Eagleman is wrong to think that brain abnormalities necessarily imply that there is either no free will or only limited free will. I genuinely hate this area of the philosophical literature, because I do not think many philosophers and outside writers really know what they mean when they use the term “free will,” as I rarely encounter its being defined or described explicitly. For present purposes, I want to define freedom of will as: “the lack of unique determinacy established by prior events, such that a general agency is afforded to a unity.” I think this is general enough that most would agree to it, so let me explain why I choose these words in particular.
“Unique” is important because any outcome that is not the singular possible outcome, no matter how many inputs there are, cannot be said to have been necessitated by the input(s). If it isn’t necessary, then it isn’t determined, so the lack of unique determinacy is the item of essential importance for what free will is. As long as there is a multiplicity (two or more) of non-uniquely established possibilities, there is free will, no matter how limited it is. The establishing of consequents by antecedents is a direct result of what has just been said: for determination to hold, such that no free will is admitted, all antecedent events, by virtue of their priority and causal efficacy, must uniquely determine an outcome. Finally, a “unity” is, in some way, distinct from its environment via the agency it has as an autonomous unit. This may seem like a rehashing of what has already been said, and when the definition is unpacked in the fashion I have just done, it is a slight rehashing; yet there is some importance in noting that a unity (e.g., a human) is somehow not entirely, and in all respects, a part of its environment, and that, in some sense, the boundary of an entity constitutes a layer of distinction from the world. The capacity for decision-making among two or more undetermined options makes the unity autonomous and distinct through the agency afforded it by the lack of unique determinacy. With that, we may begin looking at Eagleman’s article.
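To put the definition in a slightly more formal dress (the notation below is my own shorthand, not anything Eagleman or the literature insists on), the contrast I have in mind can be sketched as:
\[
\textbf{Determinism: } \forall s\, \exists!\, o\; \mathrm{Law}(s,o)
\qquad\qquad
\textbf{Free will: } \exists s\, \exists o_1\, \exists o_2\; \big( o_1 \neq o_2 \wedge \mathrm{Poss}(s,o_1) \wedge \mathrm{Poss}(s,o_2) \big)
\]
Here $s$ ranges over total antecedent states (all prior events and inputs bearing on the unity), $\mathrm{Law}(s,o)$ says that outcome $o$ is what the causal laws yield from $s$, $\exists!$ means “there exists exactly one,” and $\mathrm{Poss}(s,o)$ says that $o$ remains causally possible given $s$. On this rendering, free will requires only that, for some antecedent state, at least two outcomes remain open to the unity; it does not require that choices be unconstrained or uncaused.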
We are, intellectually, far enough beyond Cartesian “cognitive science” that we should be able to entertain free will without wrongly thinking that free will implicates mind-body dualism; the unfortunate reality, though, is that most people I talk to immediately say, “so, if you posit the freedom of will, how do the mind and body interact?!” Let me make it clear that there are all sorts of alternatives wherein freedom of will can be so, yet there is no need for dualism. Such alternatives include panpsychism, theories of quantum consciousness, and so forth. I am going to be a little crude and brutal in my approach, and simply show how Eagleman’s points of contention don’t even preclude good old-fashioned Cartesian dualism, even though I don’t necessarily subscribe to that view. The choice is meant to show just how poor the argumentation is on his part: it doesn’t even eliminate the most infamous non-deterministic approaches in the free will debate.[2] A knockdown argument that dualism provides, and which I will use here, is that the immaterial substance, however it is supposed to interact with the material substance, requires a particular functionality out of all the components in the material substance. That is, the immaterial substance can operate within “the shell” adequately only if all of the parts are working as their natural degrees of freedom would usually permit. On the first page of the article, Eagleman tells a story about Whitman, whose behavior is fundamentally altered by a tumor that, as it grows, presses ever more against the amygdala, an important structure in the brain responsible for things like aggression control, general emotional response, and decision-making. The kind of proposition that Eagleman sets up throughout his piece is one that says:
P1) If a brain part is not functioning properly, then decisions will be improper (i.e., not of the otherwise ordinary sort).
However, for necessity, the relation in the proposition cannot be merely of the “if-then” variety; it must be of the “iff” variety, that is, the “if and only if” variety, and the proposition must be altered slightly as well. A counterexample using dualism shows that there is at least one hypothetical case in which the above proposition, P1, may be true even though a strict deterministic materialism does not hold. For instance, if a car is considered analogous to the body, a driver in the car analogous to the immaterial substance, and the steering wheel (the functional component of the car) analogous to the amygdala, we can see how improper function of a part alters decisions, yet free will is preserved. Suppose the car suddenly begins making left turns when the wheel is turned clockwise: although the driver is choosing properly and freely, he consistently runs into walls. What one is left with is a case of free will and an improperly functioning part of the bodily apparatus (the car in the analogy, the amygdala in the actual case); so, even though the complete freedom of will remains intact, the malfunction of a localized constituent in the unity causes a problematic outcome. In fact, this coincides very well with the cases in which subjects explicitly say they don’t want to do something, but then go ahead and do that something; the intention and the consequent are disconnected by some intermediary component that is not functioning as it usually would.
More completely, Eagleman is advancing a line of reasoning that employs P1 and supplements it with:
P2) It is the case that a part of the brain is not functioning properly.
P3) If alterations in the brain result in alterations of behavior, then there is a necessary causal connection between brains and behaviors.
P4) If there is a necessary causal connection between brains and behavior, then the brain is the behavior, in that the brain plus inputs uniquely determines behavior (outputs).
P5) If the brain plus inputs uniquely determines behavior (outputs), then there is no free will.
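For readers who prefer the skeleton of the argument laid bare, here is a schematic rendering of the chain as I have reconstructed it; the letters are my own shorthand, not Eagleman’s:
\[
\begin{aligned}
&M: \text{a brain component malfunctions} \qquad D: \text{behavior (decision-making) is altered}\\
&C: \text{there is a necessary causal connection between brain and behavior}\\
&U: \text{the brain plus inputs uniquely determines behavior} \qquad F: \text{there is free will}\\[4pt]
&\text{P1: } M \rightarrow D \qquad \text{P2: } M \qquad \text{P3: } (M \rightarrow D) \rightarrow C \qquad \text{P4: } C \rightarrow U \qquad \text{P5: } U \rightarrow \neg F
\end{aligned}
\]
Laid out this way, the weight falls on P3: an observed regularity of the “if-then” sort does not, by itself, license a necessary connection, much less unique determination; at best something like $M \leftrightarrow D$, together with the uniqueness of the outcome, would. The car-and-driver counterexample above is precisely a scenario in which P1 and P2 hold while $U$ fails, so the chain does not go through.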
It should be clear from this that Eagleman, in his article, has neither determined that these malfunctions only produce particular outcomes (they may only tend to, which is what we typically get in science anyway, since science is a probabilistic affair), nor has he demonstrated that the brain’s local function is not an intermediary of global function, or an intermediary between a level of agency and global bodily behavior. The argument is flawed on a number of levels. Eagleman goes from a proposition of the sort “there is a set of brain component malfunctions that occasionally produce drastic alterations in behavior” to “all brain component malfunctions necessarily result in drastic alterations in behavior.” The fact of the matter is that, on the legal end of things, the jury will still have to judge whether someone’s actions were legitimately the result of some malfunction in the brain, or whether the person has some defect, yet intended to commit the criminal act, and is simply using the fact of the brain defect to hose the court. There is always the danger of the appeal to authority, and this is one reason why, while siding with Eagleman’s judicial suggestion, I am inclined to think that the jury should be fully informed. (The somnambulist in the article may have been faking, and it is the jury’s responsibility to decide on the basis of context, not on some universal law to the effect that a particular biological defect necessarily means the somnambulist was not at fault; though the jury should have this neuroscientific information before their minds, and have legal precedent to act on it, as Eagleman and I agree.) The relevant example here is the difference between whether motives existed or not. Eagleman presents fine instances in which the perpetrator had absolutely no motive; but in instances where there may be (or is) a motive, the jury should be aware of the probabilistic nature of the brain defect. I think Eagleman is intellectually irresponsible for not pushing this probabilistic aspect, choosing, rather, to push his philosophical agenda.
I have to point out, just so it isn’t lost in this discussion, that there is a disconnect between Eagleman’s philosophical assertions and his conclusion that there is legal import to the neuroscientific findings about malfunctioning brain components and subcomponents. What I have just demonstrated is that there is a way to stitch in a free-will narrative of what is going on, while preserving the fact that the individual is not at fault, by virtue of the separation of intention and consequent (and the systematically malfunctioning component). On this point, Eagleman had no right to bring in philosophy as though the science said something of necessity about the free will debate; it simply doesn’t. Whether the subject has free will or not, the neuroscientific argument stands as is, and so the philosophical content is underdetermined by the science. I think it is a fundamentally misguided venture to suppose that science will answer philosophical questions. I do believe that the sciences can inform philosophical discussion, but the fact of the matter is that philosophy is the foundation of all thought (all thought), and so it is philosophy that can play a role in nuancing science, guiding methodology, adding distinctions to concepts, etc.
On the second page of the article, Eagleman says, ‘When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault? I submit that this is the wrong question to be asking.’ This is where he tries to conflate agency and biological impetus. However, there are two identities here, even though he tries to compress them into one biological identity. First, there is actor A, who is the healthy and properly functioning person; second, there is A*, the unhealthy version of A who now has a malfunctioning brain part. Eagleman acknowledges that A is not A*, entirely accepting that the behavioral output of the two is distinct, yet his philosophical interest is in eliminating free will, and he therefore wishes to say that they are the same, i.e., that it is the wrong question to ask whether biology is at fault or the individual. Having acknowledged the difference between the behaviors of A and A*, he cannot legitimately do that. More or less, this is simply rhetoric on Eagleman’s part, the agenda being to say that A is A* by virtue of the fact that they are the same biological entity, the difference being that of a non-essential predicate, namely, ‘healthy’ (i.e., ‘proper brain function’) versus ‘not-healthy’ (i.e., ‘not-proper brain function’). By saying that the non-essential (that is, not essential to the identity) biological feature is what makes the difference, Eagleman wants to say that A is A*, and so there is no room for free will. Effectively, to argue for the existence of free will, in his opinion, is to argue for a “free will of the gaps.” That simply isn’t necessarily the case. The errors in thinking, as I see it, are numerous. One is the assumption that there is one simple explanation that induces us to reject all other explanations. This need not be the case; just take a look at my post on ontological pluralism for a discussion of this. Another error is in thinking that scientific explanation is married to data, which is not the case: theory is always underdetermined by the data, and, therefore, ever asunder from it. Finally, the major error, I think, in the typical materialistic kind of reasoning, as is the case with Eagleman, is the thought that a material scientific account eliminates something like free will from the picture; but this is never necessarily so, simply by virtue of underdetermination. The power of science is pretty awesome, but it does have bounds, and we need to understand these limits.
[1] I am not going to go into my opinion on the moral and legal philosophical aspects of this point, but I will say that the lines of reasoning behind my position and Eagleman’s differ quite a bit, while arriving at the same conclusion.
[2] Also, one may translate these argumentative strands into any other non-deterministic framework that affords free will.
I think the major issue is that what the neurosciences are describing in their findings about decisional processes and the functionalism of the self are two different things, and most of the time, when they begin describing them in these articles, books, etc., they get entrapped in outdated vocabularies that can no longer afford the meanings they ascribe to their findings. It’s as if neuroscience is discovering all these processes but doesn’t have a natural-language vocabulary, in the sense of the neopragmatism of Rorty, McDowell, and Brandom, to actually describe these brain functions in a clear and precise form that doesn’t fall back on outmoded philosophical notions.
Even your notion that the sciences are bound by an outmoded materialist perspective is a misjudgment on your part, yet it was formed by this author’s inability to describe these findings in more contemporary terms. That’s the shame of it all: we have no common ground and vocabulary, no specific framework within the sciences and philosophy to cross these new thresholds. Without it we will continue to debate in outmoded modes of thought that are no longer pertinent to the actual workings of the sciences and the philosophical perspectives needed to bridge these disciplines. Bummer that it is…
Your first sentence really hits the mark. Language issues abound.
You said, ‘Even your notion that the sciences are bound by an outmoded materialist perspective is a misjudgment on your part…’ I have to make a correction here. The only time I mentioned boundaries, bounds, or limits was with regard to science. My view is that philosophy undergirds all intellectual thought, science among others. My point in saying that was to subtly reproach scientistic attitudes. It had nothing to do specifically with any sort of outmoded materialist perspective, as the point extends to pretty much any view of (or mentality within) the sciences. I simply want to disabuse people of the fallacious line of thinking that science will resolve philosophical problems. If one thinks that a scientific finding has answered some philosophical question, that individual fundamentally misunderstands science, and, further, misunderstands the relationship between science and philosophy. In effect, that was the point of the entire post, namely, to show that a neuroscientific finding, while serving the pragmatic function of informing the law, does not act decisively in answering philosophical questions.
Thanks for your thoughts, as always.
True… many scientists themselves still revert to physicalism and older notions of naturalism. Scientism does need to be rooted out wherever it is discovered, whether in discursive practices or in lectures by academics, etc.
David…I am still reading this…just wanted you to know that…I like it so far!