I have recently been overwhelmed by a large number of scientific topics that bear one very important relation to one another. The relationship is the theme of holism; or, more accurately, the debate between reductive and holistic science. Plato’s notion of carving nature at its joints is one that scientific ventures from the early modern period through the present embrace and take for granted. The point of the following post is not to rehash any of the points in the reduction/anti-reduction debate, but to present some perspective, without actually going into the debate. More or less, I want to touch on some of the philosophical features that have jumped out at me, as of late.
The major feature that has jumped out at me is the alienation of common human experience from the sciences. Quantum mechanics is an exemplar of a science that upsets and rejects common experience. This doesn’t bother me so much: even if small scales behave incredibly differently than what is experienced on large scales —i.e., even if there are ontological and metaphysical differences between the scales—, I am not bothered, because there is some logos that will likely connect them. I am really not at all concerned or threatened by coherence and reduction within a sufficiently uniform science. Biology is a different story, as far as I am concerned. No, what concerns me is the alienation instituted by philosophers and scientists who seek to negate some part of experience to make their respective philosophy of science and science sufficiently comprehensive. Let me give an example. I just read “Life after Kant: Natural Purposes and the Autopoietic Foundations of Biological Individuality” by Weber and Varela, in Phenomenology and the Cognitive Sciences (2002). In it, the central point is the naturalization of teleology. If you don’t know, Varela was coauthor (with Maturana) of the original work on “autopoiesis”, which sought to establish a holistic philosophical basis for organismic biology, wherein the organism is viewed as a self-sustaining and comprehensive whole in which the parts cannot be viewed as non-purposive to the whole. Quoting Webster and Goodwin (1982), they say, “The organism as a real entity, existing in its own right, has virtually no place in contemporary biological theory.” This should be particularly bothersome to one who is not trained in the sciences. What is it that biology intends to study? I would say life, but, by the time you have been inculcated and systematically acclimatized to the discipline of biology, there is no such thing as a living thing —unless you are a closet holdout.
I am being a little cynical; there are plenty of biologists who think that living things are somehow different from inanimate things. However, the literature, beyond the first-year biology texts, rarely ever talks about life; and it is most definitely the case that the scientists, themselves, seek to dismantle organisms into functional parts throughout the literature. The debate between strong adaptationists and bauplan adherents rages on. A philosopher who surreptitiously infiltrates a biology lecture hall will almost assuredly find herself overcome by this-is-nothing-but-that talk, in which the concept of life is pitched out, part and parcel. At a fundamental level, my concern isn’t even initially about bridge laws or anything like that. Here’s my concern, in brief: life is common to human experience, so what is it that we are doing by stripping the science of these natural conceptions of human experience? I would claim that it seems as though we are unnecessarily turning science and its philosophy into an occult practice, alienating humanity from understanding of the natural world. Not to mention that I think scientists and philosophers want to annihilate all such conceptions that they can’t explain. My question is, is that what we are doing —annihilating concepts natural to human experience, because they fall outside the scope of our best efforts to explain them within the reductionist model? I think it is quite possibly so.
The point that Varela and Weber essentially skirt in their article is the historical and social forces that pushed for a naturalized teleology, without the teleology —namely, the post-Enlightenment, Victorian scientific impulse, driven by mechanistic visions of Nature. They do well to point out that Darwin, for the most part, was the Newton of the blade of grass, contra Kant’s opinion that organisms could not be understood as mechanized. Their view is that Hans Jonas’s phenomenologizing of Kant’s teleology of natural purpose was the progenitor of the science of autopoiesis, which readmits talk of “purpose” into the science, and does so in a non-theologically charged fashion. For my part, it is that theological charge which served as a strong compulsion for social forces to naturalize teleology without admitting the teleological part —which is traditionally done in just the way that Dawkins has explained, by inserting teleonomic language while not taking seriously the metaphysical equivalent of the phenomenon, that of teleology. With ideas arising in physics (see “criticality”) that may end up supporting autopoiesis, it makes me wonder whether science should ever (I can’t think of a good hypothetical case for the contrary) deny a concept of common human experience. After all, autopoiesis, if it holds up as a well-founded scientific idea, presents a de facto demarcation between animate and inanimate objects. Why explain away the idea of life, if there are conceivable avenues for explaining such phenomena? I think there is a strong push in academia to develop frameworks of thought that sufficiently adhere to fundamentalist monism, the implicit thinking, to my mind, being that there can be no such thing as a top-down, transcending (or even transcendental) materialism.
It’s understandable, to a degree: science is so ideologically charged, with virtually everyone wanting to ensure there is no room for “theological powers.” Autopoiesis, and other brands of thought, sufficiently naturalize teleology to allay this fear, though.
This sort of explaining away of common human experience can be seen elsewhere. In probably my all-time favorite article —an article which is as brilliantly argued as it is creative—, “Quining Qualia,” Dennett has oft taken my mind to the point where I all but accept his conclusion. It really is a compelling argument that he presents. It only takes me to the brink of acceptance, because, perhaps, pragmatism is too strong in my blood (and mind) to accept that qualia can be explained away. Once one does this, accepting the illusory nature of qualia, the individual has been completely alienated from his or her most fundamental level of experience, the phenomenal level. Compare this to the kind of thinking seen in Autopoiesis and Cognition: The Realization of the Living, and one sees the phenomenal world put first. For instance, the treatise essentially begins and ends with the “observer”. van Fraassen, too, places a considerable amount of weight on the phenomenal level, in his The Scientific Image, advancing such ideas as the pragmatics of explanation; which brings one to the question: what is the real difference? There is slightly more to it than the simple statement I made earlier, that it is an issue of adhering to fundamentalist monism. No, it goes well beyond that. The heart of the matter, I contend, is physics chauvinism, the thinking that physics, being most closely related to the fundamental behaviors of matter, must dictate the terms of scientific play. It is this physics chauvinism that serves as the metaphysical and methodological underpinning of science as it is known today. Carving nature at its joints is all the rage, trust me. Never mind that there is no “objective” way to establish where these joints are —or, more importantly, that even if we can see them (and know where they are), we can’t tell how we know that they are, in fact, there. This kind of thinking doesn’t even stand up to the bullshit-at-the-bar test, yet it pervades academic thought.
I will close by stating the worst nightmare of physics chauvinists, which is that, if physics is the heart of all the sciences, then there is the problem that Nature cannot be a formalized axiomatic system that is also complete —and, thus, physics isn’t the whole story, anyway. Gödel told us as much. However, you don’t need mathematical logic to begin to think we live in a world that is ontologically incomplete (or has a non-static ontology). There have been a number of rather creative ways in which the incompleteness of the World’s ontology has been suggested. Amidst the literature, one of the fascinating examples is “The Genesis of the Transcendent: Kant, Schelling, and the Ground of Experience” by Adrian Johnston, where Johnston proposes that the problem of transcendental priority versus genetic priority in Kant, which Schelling tried to resolve, yields an incompleteness. If, between the incompleteness arising from the physics-chauvinistic view and the alienation of common human experience, you are not brought to question the current views about how Nature or the sciences are structured, then there may be nothing that will.
In mentioning Bas C. van Fraassen, I am glossing over another animus, that of anti-realism, which, to my mind, seems to be a contributing factor to ideological adherence to fundamentalist monism. I don’t think the realism/anti-realism debate needs to be involved in this. If you have read my paper on the onto-epistemic stance, you will see why I think this. There are realist positions that also place the phenomenal level at their foundation.
14 responses to “Assessing the Explaining Away of Elements of the Human World Qua Experience”
Yea, Johnston has some good ideas. Been reading his works on Zizek’s ontology, and the one on Badiou and Zizek on politics of late… Johnston cuts to the quick.
Did you by chance get my email? I sent it to the email you have listed on your “About” page. Let me know if you didn’t get it. I have a question about some things that Adrian told me at the recent conference. Yeah, I got his work on Zizek’s ontology prior to the start of the semester, but I haven’t had a chance to read it. He’s a very likable person, too. I am not a big fan of the platonic form of the stodgy academic, and much prefer the non-institutionalized intellectual’s mentality. It’s a rare breed, but there are some scholars floating around who have that non-institutional intellectual quality, and he’s one of them.
Nice post, David. Keep up the good work. Some of what you wrote reminded me of the phrase “the reification of science as an entity separated from practical experience” from the following article. (I haven’t read the whole article, just the abstract.)
Mark Dietrich Tschaepe (2011). John Dewey’s conception of scientific explanation: moving philosophers of science past the realism–antirealism debate. Contemporary Pragmatism 8(2), 187–203. Abstract: “John Dewey provided a robust and thorough conception of scientific explanation within his philosophical writing. I provide an exegesis of Dewey’s concept of scientific explanation and argue that this concept is important to contemporary philosophy of science for at least two reasons. First, Dewey’s conception of scientific explanation avoids the reification of science as an entity separated from practical experience. Second, Dewey supplants the realist–antirealist debate within the philosophical literature concerning explanation, thus moving us beyond the current stalemate within philosophy of science.”
And here’s a related article published the same year. My purpose in mentioning these articles is not to promote Dewey or “pragmatism” as an ideology, but just to throw some more grist into your conceptual mill.
Tibor Solymosi (2011). Neuropragmatism, old and new. Phenomenology and the Cognitive Sciences 10(3), 347–368. Abstract: “Recent work in neurophilosophy has either made reference to the work of John Dewey or independently developed positions similar to it. I review these developments in order first to show that Dewey was indeed doing neurophilosophy well before the Churchlands and others, thereby preceding many other mid-twentieth century European philosophers’ views on cognition to whom many present day philosophers refer (e.g., Heidegger, Merleau-Ponty). I also show that Dewey’s work provides useful tools for evading or overcoming many issues in contemporary neurophilosophy and philosophy of mind. In this introductory review, I distinguish between three waves among neurophilosophers that revolve around the import of evolution and the degree of brain-centrism. Throughout, I emphasize and elaborate upon Dewey’s dynamic view of mind and consciousness. I conclude by introducing the consciousness-as-cooking metaphor as an alternative to both the consciousness-as-digestion and consciousness-as-dancing metaphors. Neurophilosophical pragmatism—or neuropragmatism—recognizes the import of evolutionary and cognitive neurobiology for developing a science of mind and consciousness. However, as the cooking metaphor illustrates, a science of mind and consciousness cannot rely on the brain alone—just as explaining cooking entails more than understanding the gut—and therefore must establish continuity with cultural activities and their respective fields of inquiry. Neuropragmatism advances a new and promising perspective on how to reconcile the scientific and manifest images of humanity as well as how to reconstruct the relationship between science and the humanities.”
I was wondering what you had been up to. Thanks a bunch for the encouragement. I am finding myself surprised by the continued influence of the pragmatists, so the article you suggested looks good. What I really like about pragmatism is that it permits us to go to the level of abstraction, in extremis (maybe that’s not the word, but you get the idea), without defenestrating the basis that allowed us to get there. I am impressed by reading works, like Where is Science Going? by Max Planck, where unobserved theoretical entities (e.g., atoms), even if their existence is indicated indirectly, are not embraced and given some moiety of (let alone full) ontological status that is typically afforded to objects of the phenomenal world. I am doing a bit of reading on aether theories of the nineteenth century, and I am fascinated by how there seemed to be an underlying fear of being too confident in accepting, outright, ideas that might go the way of imponderable fluids, on the one hand, and, on the other, a brazen boldness to accept, part and parcel, the absolute actuality of something like the aether. (I am really anticipating Jordi Cat’s “Master and Designer of Fields: James Clerk Maxwell and Constructive, Connective and Concrete Natural Philosophy,” which is sure to be a very historically accurate and comprehensive work on Maxwell’s philosophy and science.)
As I explore the phenomenology-first perspective of theory-ladenness, I look forward to your insights —and no worries about promoting pragmatists’ ideas, as I see the relevance, taking what is applicable and leaving the rest. Thanks again for the literature suggestions.
David, thanks for mentioning Jordi Cat’s book; that sounds like a must-read. I have one more thought about your post: I’m ready to discard the idea of absolute objectivity (I don’t see how we could keep it), but I’m not so ready to discard the idea of carving nature at its joints. Here’s a simple example: You and I sort a batch of Hydrangea flowers into three piles of red, blue, and green flowers and we agree that we have carved this batch of Hydrangea flowers at its joints. But then our color-blind friend Antonio comes along and insists that the red and green flowers are arbitrarily separated and should be aggregated into one pile. If we all know that Antonio is color-blind and if we all understand how color blindness happens, we can agree that we are all carving nature at its joints, but the joints are in different places for Antonio than they are for you and me. And we can understand that this act of carving is as dependent on our technical apparatus (in this case, especially our eyes) as it is on nature. And we can also see that our technical apparatus is part of the nature we are carving. I would say that in this case it is possible to carve nature at its joints: we can see the joints and we can tell how we can know that the joints are there. We can also see that (1) our joint carving is theory-laden and (2) our explanation of our joint carving is theory-laden; but (1) and (2) are theory-laden with different orders of theory. And we may be able to come up with further orders of theory beyond those two.
Well, on the point of objectivity, one need not get rid of it, necessarily. I think there are ways of going about it, making it possible to keep it, but such schemes of conforming present thought to objectivity make for a variegated and complex philosophy. Some proponents of the Speculative Turn maintain that there is objectivity, by eliminating the subject–object correlation. There are other ways of doing it, too, but much work is to be done, because objectivity is not a popular idea, meaning that there aren’t as many thinkers working on it. Two examples that come to mind are: 1) a leftist-Hegelian interpretation of Kuhn’s “Structure,” where epistemic failures make it impossible to grasp objective knowledge, and 2) jamming Poincaré’s view of science together with a metaphysics that admits an underlying objectivity with observational variation of perspective on the one objective reality (this is not too far from what Kai Hauser has proposed as Husserl’s implicit response to Kant’s thing-in-itself). There are ways to hang onto the idea, but quite a bit of creativity is necessary.
In regard to carving nature at its joints, I am not so much thinking that we need to throw the idea out. Instead, I think that it could be a useful pragmatic tool when proceeding within the sciences. I think the different sciences (including those collections of sciences clumped together and catalogued as “biology”) are abstractions of varying degree, in the sense that we cut away a certain amount of information to arrive at the theoretical objects of the science. What permits explanations that include various objects of the sciences is ontological flatness, the idea that, in our theory-laden world, an electron has the same ontological status as a sofa. The way in which the sciences carve nature is arbitrary and, to my mind, purely contingent upon interest-dependence and historico-genetic heritage. I think the problem you are getting at with hydrangeas is the thinking that there seems to be something fundamentally different between theoretical objects (or entities, salient features, et cetera) that we can’t see and those that we can see. That is, the problem is one of phenomenology versus a more general epistemic methodology. I am not sure how thrilled you will be with this answer, but I lean very much toward an idealist tradition, the thinking being that the lack of access to a color/sense datum (a salient feature) is no different from a situation where some piece is missing from a theoretical framework (e.g., the Higgs particle). This was a major idea I tried to get across in the onto-epistemic paper, without actually showing my (idealist) hand.
Given that, do you feel as though the two orders you presented persist, or do you think they collapse into one? I ask because your thought had occurred to me previously, and the idealist turn seemed, to me, to fix it.
Nice response, David. As usual, I like your ideas. I should also add that I’ve read your paper “Why emergence doesn’t emerge and why secondary qualities are not secondary” and I enjoyed it. I found it much more lucid than your onto-epistemic paper; the latter needed a lot of work to become a coherent story. Speaking of your onto-epistemic paper, I came across another paper recently that’s relevant: Angela Potochnik & Brian J. McGill (2012), “The limitations of hierarchical organization,” Philosophy of Science 79(1), 120–140. It discusses the “illusive nature of levels” (your phrase from the onto-epistemic paper) in ecology, with implications for the rest of science and philosophy of science.
I’ll skip to your last question: “Do you feel as though the two orders you presented persist, or do you think they collapse into one?” After I read your response, I realized that the two different orders that I described are like 1st- and 2nd-order cybernetics. Can 2nd-order cybernetics be collapsed into 1st-order cybernetics, or vice versa? I don’t know; that would require a more detailed conceptual analysis. Could both orders be collapsed into another model entirely, one that preserves an idea of absolute objectivity? I see your point that it’s possible, but as you say it would require actually doing the conceptual work.
You say, “The way in which the sciences carve nature is arbitrary…” I wonder: Can we find anything in the hydrangeas scenario that is not arbitrary? What about the speed of light? Light is what makes it possible for you and me and Antonio to agree that all of us are carving nature at its joints when we sort hydrangeas by color, even though we may put the joints in different places due to differences in our retinas. And if we became blind, we could use a colorimeter in place of eyes, so I don’t see any fundamental difference between the visible and invisible, but I do think that some kind of interactivity is involved, whether that interactivity is described in a subjective way or in an interobjective way. We agree that we are carving nature at its joints because our theories (and experiences) of color share common elements based on an invariant. Invariant constraints make it possible to coordinate our theories and to try our best to carve nature in a way that, while always historically contingent and onto-epistemically incomplete, is not entirely arbitrary.
You mention Poincaré and Husserl. I’m no expert on them, but my understanding is that both of them were metric conventionalists. What continues to fascinate me about the philosophical history of space-time (as told by Robert DiSalle) is how that history clearly shows the limitations of metric conventionalism (the idea that fundamental principles in geometry are arbitrarily chosen among empirically equivalent alternatives). Conventionalism has its merits (after all, Poincaré and Husserl were smart guys), but DiSalle shows how the development of theories of space-time had to go beyond conventionalism through a more rigorous process of conceptual analysis that constructed new theories by critically testing the presuppositions of the existing theories. As DiSalle wrote: “It is possible to see the most important conceptual transformations in physics as explicitly motivated by analyses of this sort, analyses that uncover the presuppositions guiding the use and misuse of central theoretical concepts, and expose the challenges to these presuppositions raised by new empirical discoveries. In this process of analysis we find a historical and philosophical dimension to the so-called ‘paradigm shift’ that had completely eluded the Kuhnian school: a rational philosophical engagement where Kuhn had seen only a clash of incommensurable philosophical prejudices. What emerges from such engagement is not merely a novel theoretical perspective, but also a deeper understanding of the old perspective, and the inadequacies in its conceptual foundations that empirical progress has brought to light.”
Hello, Nathan. Oh, I am glad you read and liked the paper on emergence. I’m thinking that if I am ever going to make the onto-epistemic stance publishable, I will probably have to turn it into a monograph. Probably the best reader (and comprehender) of the paper said the onto-epistemic paper is abstruse, which is virtually a compliment, coming from such a source; but, at the same time, it is to the detriment of the work. I think the onto-epistemic philosophy’s applicability is where I can make it more palatable and more easily (cerebrally) managed. Like you and I discussed on that post, I think there is some value the philosophy can bring to the history of science, as well as to phenomenology, philosophy of mind, and perspectives on cognition. It’s interesting that you bring up the illusive nature of levels again, because I have had a number of folks email me, trying to get me to say more. Most of these individuals are interested in fresh approaches to qualitative science, which naturally entails, as far as I can tell, anti-reductionist explanations of emergence.
I wish I knew a bit more about cybernetics, so that I could comment there. I am doing a reading group with dynamicists from the cognitive science program, and they tell me that I missed out on their previous year’s tour through the first wave of self-organizing systems. I think Maturana and Varela are the second wave, which is what we’ve been up to for the past six months. I’ll probably have to find time in the summer to catch up on that. I haven’t even read Wiener.
On the point of whether carving up nature is arbitrary, I think the carving begins prior to perception. This is really a problem. Nature is carved up, to some extent, in perception, right out of the gates —if we are considering a grown individual. The recognition of so much as anything remotely like an object has been carved out, in perception. It’s an amazingly fascinating thing, which I bring up all the time to cognitive scientists: how is it that objects are recognizable as being parceled pieces of the phenomenal plane? This is the kind of reason that I am so sympathetic to Kant’s philosophy. This perceptual parsing requires that there is some understanding of the parceled object as being distinct from a background of phenomena, and so forth. So my thinking is that cognition has done some kind of cutting/making of joints, which is the tricky part; but let’s not assume that other minds (maybe humans, but I am thinking more about animals) perceive the world in the same way. My guess is that many animals, especially evolutionary throwbacks —organisms like trilobites— may have thoroughly different perceptual schemas of the world. Forget the species for a second; here’s a story I often tell people who like to put scare quotes around ‘theory-ladenness’: I was once walking along with an Amish friend, on a patch of grass between a dirt road and a hayfield. He said to me, “Don’t trip on that rock.” It was not a rock but a blackened, oily chunk of car engine. Scatter-brained, in the midst of chatting, I stopped after two or three more steps, and I pointed out that the rock was a car engine. The Amish know what cars are, and some even deal with diesel engines, but not Sam. He had never seen an engine.
What struck me was that he saw it coming and had plenty of time to look at the object (this wasn’t one of your psychology experiments where you have a fraction of a second to see something), yet the object was meaningless to him, and close enough in appearance to some basaltic rock. The point I am trying to make, and one which I think Hanson and Kuhn couldn’t quite bite into, is that perception has already done some cutting/creating of joints in nature before we can operate in the world. Our world has to presuppose some kind of theory-ladenness, from the outset. Otherwise, it’s just a mess we can’t operate within.
Your point about the genetic contingency is well taken, but I am not sure that, without some necessary grounding, we can pitch the term “arbitrary”. So here would be my point: If the structural integrity of the sciences and the cutting of nature is to be something other than arbitrary, there has to be some amount of necessity that underlies the reason we have the sciences that we do. If the perceptually salient features of the world are the only thing grounding that, we might be in trouble, because, as my thinking went above, there could be other ways of seeing and, thus, cutting up the world. Hegel proposed variability in the categories, right? I’ve always thought this indicates different conceptions as much as perceptions of the world throughout times and cultures, which is about the only way I have ever been able to make any kind of sense of medieval and renaissance magic, a seeming progenitor to modern science. Let me take this in a slightly different direction, rather than use “arbitrary” in the typical sense. If there is an infinitude of information that Nature presents us with, which is to say that the phenomena of the natural world are inexhaustible, and further supposing that the relations (thinking of structural realism or something of the sort) truly exist and are objective, then it still could be the case that we choose our theory-laden framework and have free rein over which particular relations we conglomerate into it. It’s pretty wild and unorthodox, but it seems possible to me that there could be an infinitude of true relations (conventionalism a la Duhem and Poincaré), but that they are arbitrarily selected into a theoretical framework. Coming from physics, I am doing a mad sprint through so much of the classical philosophy literature, as well as the technical philosophy of science literature, that I am still trying to make heads or tails of so much of this; it’s entirely conceivable that I am missing the mark on multiple points.
As far as DiSalle goes, I think he might be on to something, too. I have been going back over his work in my mind, and it’s been quite a while since I read “Understanding Space-Time,” so I think it is nearly time to revisit it.
Thanks for the response; those are all very interesting ideas. “Nature is carved up, to some extent, in perception, right out of the gates—if we are considering a grown individual. The recognition of so much as anything remotely like an object has been carved out, in perception.” Yes. If we are considering a grown individual. Here’s an interesting question: If we are considering a newborn infant that has just popped out of its mother, how much of the world is already carved out for it? We can’t say exactly, but one thing is clear: that infant is born with a capacity for carving. There’s a huge amount of research on neurogenesis and synaptic pruning, which is why we should take into account the findings of the developmental sciences (developmental biology, developmental psychology, and newly emerging syntheses like ecological evolutionary developmental biology) and also not forget the role of “action in perception” (that’s the title of a book by Alva Noë) which is emphasized by cognitive scientists who study enactive, embodied, and extended cognition. Your Amish friend didn’t know what a car engine was until he walked past one and you explained it to him; so the act of walking past the engine and talking about it was integral to his process of learning to perceive car engines, which he can now probably do (or at least have a better chance of doing) thanks to that event.
Nathan, I have just dealt a little with your question: “If we are considering a newborn infant that has just popped out of its mother, how much of the world is already carved out for it?” Kant deals with the question in his “Anthropology,” and I also read Adrian Johnston’s “Genesis of the Transcendent,” which deals with the Schellingian view of historico-genetic priority versus transcendental priority in Kant. It seems like a tall order to figure out this mess, with infant perception. I am incredibly sympathetic to the Kantian view that the world needs to be organized and formatted in order to operate within. Obviously, there are issues, as regards an infant.
Thanks for your response, David. Yes, the world needs to be organized and formatted to operate within—but isn’t the world also organizing and formatting us? We think we carve nature at its joints—but isn’t nature also carving us at our joints (is not the process of carving nature also the process of nature carving)? Nature carves us, we carve nature—could we have one without the other? Isn’t explaining away experience itself an experience? Just some questions I ask myself, without any expectation of a final answer.
Oh, well, sure. A naturalized Kantian approach would be that Natural Selection operates on the efficacy of cognitive processes. However, that doesn’t mean a pre-carved-out Nature is doing the organizing and formatting of us. It’s possible, but the seemingly inexhaustible nature of phenomena is good reason to doubt the notion of Nature as already carved. I do think we can have one without the other, on the grounds that something like an incomplete ontology could present an infinitude of options for the modes by which cognition works. So even a naturalized teleology in an incomplete ontology would admit endless variation that affords pragmatic improvement of the ability to operate within the world. Also, just as in ecologically-minded Natural Selection, an incomplete ontology would have selection pressures that are time-dependent, which is why an incomplete ontology is also called “non-static.” If there are a plethora of ontological options for the betterment of cognition, then I think phenomena should be inexhaustible; a foundationalist (or fundamentalist) ontology, joints already carved, should show signs of exhausting phenomena. The reality is, even within our “modern science,” gravitation theory seems to be on the cusp of radical changes (the cosmological constant seems to vary, dark matter, an algebra not compatible with QM, the fly-by anomaly, and so on). If the most inherently ground-up science can’t exhaust Nature’s phenomena, I am a bit worried for the other sciences, and definitely have my doubts about Nature having joints ready-made.
I don’t think explained-away experience is an experience. The process of explaining away is, but that is because it’s humans in action. “Being-in-the-world,” which should really be “becoming-in-the-world,” is itself a process, an activity. However, I think the alienation of fundamental human experience is indicative of the fact that scientific explanations of extreme abstraction, which eliminate key points of reference and features of the quotidian, remove thought from the realm of experience. Even in a time when most everyone agrees that mathematics is the key to Nature’s inner workings, people doubt string theory, even though it has at least one point of contact with the natural world. Subconsciously, everyone is wondering: If we continually abstract the world, does there come a point when we are no longer dealing with anything real? If it isn’t real, it can’t be experienced.
David, I came across a brief article recently that reminded me of our discussion here. You may or may not be interested: Jedediah W. P. Allen & Mark H. Bickhard (2013). Beyond principles and programs: an action framework for modeling development: commentary on Fields. Human Development, 56(3), 171–177.
An excerpt: “Epistemic agents are open systems that must interact with their environment to exist, and modeling their ontology and development from within a computational framework is inherently at odds with these dynamical interactive systems considerations. Instead, we suggest that an (inter-)action-based dynamical framework is required (Bickhard, 2009a, b)…. The interactivist model of representation opens up the possibility that an infant could detect and/or track an object without that necessarily indicating ‘persistent objecthood.’ It also means that infants could successfully re-identify objects (and people) prior to the capacity to represent the world in terms of objecthood. Infants do eventually come to represent the world in terms of persistent objects, but that is a developmental accomplishment that unfolds over the first 2 years (Piaget, 1954). For interactivism, the development of object representation is a matter of learning to organize the interactive possibilities afforded by objects such that they form an invariant web of mutually reachable anticipations. For a toy block, the interaction possibilities will include visual scans, hand manipulations, and mouthing explorations. However, full object representation also requires that children have learned the class of transformations under which such webs remain invariant (i.e., recoverable). These transformations will include visible and invisible displacement (e.g., Piaget, 1954) as well as occlusion and containment ‘events’ (e.g., Baillargeon, 2008). However, not all transformations will maintain the web of interactive possibilities. For example, if the block is burned or pulverized, the collection of interactive possibilities that previously existed is no longer recoverable. In short, object representation is constituted by a web of mutually reachable interactive possibilities that remains invariant with respect to a large class of transformations. 
Persistence is manifest in the invariance of the web of interactive possibilities and in terms of that web being recoverable through appropriate intervening steps (e.g., I must first open the box to recover the toy). Fields focuses on implementation not origins, but the origins of nativism are located in issues about the origins of representations. His narrower focus is on organization of empirical atoms—nativism argues that object representations must be innate. In contrast, Fields argues that persistence is a computational phenomenon and that programs can construct ‘object files,’ thus, nativism about object representations is not necessary. All such positions, however, assume basic empiricist atoms. Action-based approaches provide a powerful alternative to the foundationalist assumption common to both nativist and empiricist frameworks. Only an action-based framework is able to account for the emergence of representation from a base that is not itself already representational. Accordingly, an action-based approach to representation in general and object representation in particular has implications for understanding persistence. In convergence with Piagetian theory, the interactivist model outlined above suggests that object persistence is itself a developmental phenomenon that involves increasing representational complexity over the first 2 years of an infant’s life.”
Nathan, thank you very much. I will give the full article a look.
Do you have anything in the works, or do you have a systematic reading project going?