I recently read an article by Jesus Mosterin called “The Unity of Particle Physics and Cosmology?” (pp. 165–176 in *The Problem of the Unity of Science*, edited by Agazzi and Faye). The article is very interesting because it proposes something I hadn’t heard before, namely, that the Casimir effect might be the phenomenon that is the conceptual key to unifying the quantum and cosmological scales. The idea is that vacuum energies associated with a cosmological constant, Λ, might be the cause of the effect (there are numerous interpretations); but there are problems, which have been noted by Steven Weinberg, Alan Guth, and others. The one that immediately comes to the fore is the problematic nature of the consequences of a varying cosmological constant. (Keep in mind that the early universe seems to have had an enormous vacuum energy, while now all we have is this rinky-dink Casimir effect of quantum mechanical origin.) “Why in the world would that matter?” is a natural question. Again, think “unity”: we are talking about general relativity, too. Mosterin sets the scenario up nicely, quoting first Guth, then Weinberg:

“*According to A. Guth, ‘something is happening to suppress this vacuum density.’ Nobody knows what it is. ‘This unbelievable amount of fine-tuning tells us that our understanding of the interplay of gravity and quantum vacuum energy is insufficient. Therefore, a mechanism relying mainly on this interplay seems unsatisfactory.’ Possibly solutions to the cosmological constant problem are particularly constrained if they are to be compatible with the inflationary scenario*” (Mosterin 169).

The constraint Mosterin is talking about is a mathematical necessity arising from the Einstein field equations: the Bianchi identities force the Einstein tensor to be divergence-free, and hence the stress-energy tensor **T**_{ab} must be divergence-free as well. If ∇^{a} **T**_{ab} ≠ 0, then energy is not conserved, and it is this conservation requirement that makes it impossible for Λ to vary. The conundrum is clear: current measurements suggest Λ ≈ 0, while inflationary theory suggests the value in the early universe was enormous. In fact, the number oft thrown around is the difference between the vacuum energy density theory predicts today and what is observed, a difference of 10^{120} (i.e., 120 orders of magnitude).[1]
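Spelled out in standard notation, the constraint runs roughly as follows (a sketch of the textbook argument, not a passage from Mosterin):

```latex
% Einstein field equations with a cosmological constant term:
G_{ab} + \Lambda g_{ab} = 8\pi G \, T_{ab}
% The contracted Bianchi identity and metric compatibility give
\nabla^{a} G_{ab} = 0, \qquad \nabla^{a} g_{ab} = 0 .
% Taking the divergence of the field equations therefore yields
8\pi G \, \nabla^{a} T_{ab} = \nabla^{a}\!\left(\Lambda g_{ab}\right) = \nabla_{b}\Lambda .
% Local energy conservation, \nabla^{a} T_{ab} = 0, then forces
\nabla_{b}\Lambda = 0 \quad \Longrightarrow \quad \Lambda = \text{const.}
```

So within the standard framework, a time-varying Λ is not a free adjustment; it directly costs you conservation of energy.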

I do have one concern about Mosterin’s article: I wonder whether he (and other philosophers of physics, and physicists too) aren’t thinking too much like physicists, or at least too much about the mathematics and the constraints the physical system places on the mathematics. A mathematics-first mentality comes naturally to physicists, and it may be an issue here. What I have in mind is thinking about the physical properties of space first, then altering the mathematics on that basis. For example, if spacetime “unfolds” as space expands, this may permit space to retain a constant Λ while the observed value adjusts over time. The idea would be to embed physical space in a mathematical space, where the vacuum energy is constant per unit of physical space but can be dispersed across the mathematical space. The mathematical space is only a conceptual aid; it need not be anything more than a heuristic. This is not a sure-fire way of working out all of the problems (it may work, it may not), but I feel that very good philosophers of physics sometimes make the error of thinking too much like physicists, since many are trying very hard to adhere to the community’s various modes of thinking while doing their philosophy. I think this is one place where that might be getting us into trouble, forestalling philosophers of physics from lending an important philosophical hand to the physicists.

[1] For anyone who cares how this calculation is done: the expected vacuum energy density associated with Λ is the Planck mass divided by the Planck length cubed.
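As a rough check on the footnote’s recipe, here is a short Python sketch (my own, not from Mosterin’s article) that computes the Planck mass per Planck volume and compares it to an assumed present-day vacuum mass density of about 0.7 times a round critical density of 8.5 × 10⁻²⁷ kg/m³. With these inputs the discrepancy lands near 10¹²³, the same ballpark as the commonly quoted 120 orders of magnitude:

```python
import math

# Standard constants in SI units
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

# Planck mass and Planck length from dimensional analysis
m_P = math.sqrt(hbar * c / G)      # roughly 2.2e-8 kg
l_P = math.sqrt(hbar * G / c**3)   # roughly 1.6e-35 m

# The footnote's recipe: Planck mass per Planck volume
rho_planck = m_P / l_P**3          # roughly 5e96 kg/m^3

# Assumed observed vacuum density: ~0.7 of a round critical density
rho_obs = 0.7 * 8.5e-27            # kg/m^3

orders = math.log10(rho_planck / rho_obs)
print(f"Planck-scale vacuum density ~ {rho_planck:.1e} kg/m^3")
print(f"discrepancy ~ 10^{orders:.0f}")
```

Whether one quotes 120 or 123 orders of magnitude depends on conventions (factors of 8π, what counts as “observed”), but the point stands either way: the mismatch is absurdly large.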