Three scholars, Ezequiel A. Di Paolo, Jason Noble, and Seth Bullock, have a brilliant little paper floating around. I say “floating” because I don’t believe they have published it formally, so I append it here. The paper is called “Simulation Models as Opaque Thought Experiments.” I think it could be the first move in a very interesting direction, but there is a problem: I don’t think the authors concluded the paper in the most natural and potent way; the presentation made me realize something very different from what they concluded. Let me give you a quick rundown of the paper.
The paper tries to develop an understanding of the role of simulation models. The authors briefly survey the extreme views, note a number of fundamental differences that appear to exist between models and simulations, and discuss the philosophy and theory of their employment. If you want more details, the paper can quickly fill you in; nothing more than this is needed for our purposes. The interesting point comes when the authors mention Thomas Kuhn (Di Paolo et al. 2000, pg. 5). Honestly, they say a number of other interesting things too, but I have a narrow focus in this post. It is somewhat brash of me, but I must say that I think the authors missed the nature of Kuhn’s project, for had they not, I believe they would have concluded differently. They refer to an essay of Kuhn’s, found in The Essential Tension, called “A Function for Thought Experiments” (1977), and quote him as follows: “If the two [laboratory experiments and thought experiments] can have such similar roles, that must be because, on occasions, thought experiments give the scientist access to information which is simultaneously at hand and yet somehow inaccessible to him” (The Essential Tension 261). I wish the authors had asked themselves why: why would Kuhn say such a thing, and what in the world could he possibly mean? Kuhn frustrates me for reasons like this; he very clearly knows what he is saying, yet rarely gives his meta-commentary, when he could easily tell everyone, explicitly, what he is getting at. At any rate, I will now attempt to compel you, rationally, toward the conclusion I came to, using a deeper understanding of Kuhn with respect to the paper of interest.
What does Kuhn mean by “accessible” yet “inaccessible”? I argue that he is pointing at that obscure line between consciousness and unconsciousness. That is, I contend that he is extending the consequences of the thinking inferred from the Postman-Bruner experiment and projecting it onto normal science. In a word, he is psychologizing normal science in the way he had already psychologized paradigm shift in revolution. Consider the Postman-Bruner experiment briefly (The Structure of Scientific Revolutions, 3rd edition, pg. 62-64). What does one infer from it? A subject is shown cards from a deck; the deck is loaded, and, among the traditional playing cards, there is an unusual card, such as a black heart. The subject, not having been sufficiently psychologically primed, will say that he or she sees a black spade or a red heart. What happens, in terms of psychology and cognitive function, when the subject finally realizes that something is amiss? Does an idea, a set of ideas, or some subconsciously packaged bit of knowledge buoy up to the level of consciousness? If you do not think this is what Kuhn gathered from the experiment, then consider this further: “More often no such structure is consciously seen in advance. Instead, the new paradigm, or a sufficient hint to permit later articulation, emerges all at once, sometimes in the middle of the night, in the mind of the man [or woman] deeply immersed in crisis” (The Structure of Scientific Revolutions, pg. 89-90, my emphasis). Not formerly in the consciousness? Emerges all at once? From where? At this point, please revert to the original passage in The Essential Tension and re-ask the above question about what is meant by the dual accessibility and inaccessibility.
To me, at least, it is very clear that Kuhn is talking about this gray area in the mind qua complex psychological entity, the gray area being the zone of interchange between consciousness and subconsciousness. In fact, I would go so far as to bet my golden goose that, if Kuhn were alive today (and moderately less cryptic), he would employ modern terms from cognitive science, philosophy of mind, and neuroscience, like A-consciousness. What is important is that the authors of the opaque-thought-experiment paper, in my opinion, probably missed out on cashing in, so to speak, on the appropriate conclusion. This sort of thing happens all the time, simply because the scientists had their minds on one thing (the simulations, models, and computation) rather than on the most important issue, the crux of the whole discussion (psychology and cognitive processes). However, I think their paper and their general use of philosophy are to be applauded, and show exactly what philosophy can do for science when properly employed and integrated. Now, to brass tacks.
The conclusion I think the authors should have arrived at (in little detail): something more oriented toward the notion that simulation models can act as an instrument in the psychology of the scientist, one which aids the scientist in the discovery process. In other words, however oxymoronic it may sound, instrumentation in thought experiments may be a reality. I don’t want to go so far as inserting “extended mind” into this discussion, because I haven’t fully formulated my opinion on Clark and Chalmers’s ingenious and creative idea, but I do, by way of apophasis, throw it in there, just because it is worth a moment’s consideration. I think an approach to Di Paolo et al.’s paper from the perspective I have supplied would be far more interesting, useful, and simply more accurate. I humbly apply “more accurate” on the basis that I haven’t the slightest clue how a computer program can both have and not have access to something at the same time, whereas it is much more natural and obviously coherent (through psychological modes) for such to be the case in the human brain.
(Closing remark: I am also uncertain about whether computers could possibly supply something that humans earnestly could not figure out from the data and rules that are put in. I haven’t done enough exploration into Wolfram’s claim that higher-order structures appear in certain kinds of algorithms, supposing that this is actually his claim. As you might imagine, this is relevant far beyond uses in computer science. For example, this sort of claim, as I understand it, would suggest that the natural sciences aren’t reductive, and that there may be different physical laws at different scales, from small to large. I have some intuition about this, but it is at far too inchoate a stage to supplement the above discussion.)
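For readers unfamiliar with the Wolfram claim I gestured at above, here is a minimal sketch of my own (it is not from Di Paolo et al.’s paper, and I make no claim that it settles anything philosophically): an elementary cellular automaton such as Rule 30 has an update rule simple enough to state in one line, yet the pattern it generates from a single live cell is famously irregular, which is the sort of “higher-order structure from simple rules” that is usually meant.

```python
# A minimal sketch of an elementary cellular automaton (Wolfram's Rule 30).
# The rule number's binary digits give the next state for each of the eight
# possible (left, center, right) neighborhoods. The rule is trivial to state,
# but the evolved pattern from a single live cell is strikingly irregular.

def step(cells, rule=30):
    """Apply an elementary CA rule to one row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value in 0..7
        out.append((rule >> neighborhood) & 1)  # look up that bit of the rule
    return out

def run(width=31, steps=15, rule=30):
    """Evolve a single live cell for a number of steps; return all rows."""
    row = [0] * width
    row[width // 2] = 1  # one live cell in the middle
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Whether the resulting triangle-flecked pattern counts as the computer “supplying something” the human could not have worked out from the rule alone is, of course, exactly the question left open in the closing remark above.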