I recently bumped into a graduate student in the economics department at the University of Pittsburgh, Shawn McCoy, and he brought to my attention that there are some folks who wish to claim that .9999…=1. That is, the decimal value .9-repetend, which has infinitely many places of ‘9’ after the decimal, is equivalent to the whole number 1. Any individual of sufficient commonsense and no real inclination toward contrarianism-for-the-sake-of-contrarianism will maintain that the claim is silly and move on. However, there is a bit of mathematical prestidigitation (and that is precisely what it is, as I will show) that presents an “argument” to the contrary of commonsense. The argument requires that we do the following:
Let x be .9999… Then, let the left-hand side (LHS) of our equation be 10x−x. Also, let the right-hand side (RHS) of the equation be the same quantity, expressed not in algebraic terms but in numerical terms: 9.9999…−.9999…=9. Solving the LHS, we get 10x−x=9x. It should be the case, of course, that LHS=RHS, i.e., 9x=9. However, if one divides both sides by 9, the consequent value is x=1, though, ab initio, we said that x=.9999… Therefore, some try to conclude, .9999…=1.
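For readers who like to see the finite cases, here is a small sketch of my own (in Python, using exact rational arithmetic; the helper name `x_n` is my invention) of what the subtraction looks like when the string of 9’s is truncated at any finite length n: the result of 10x−x falls short of 9 at every finite stage.

```python
from fractions import Fraction

def x_n(n):
    """0.999...9 with exactly n nines, as an exact rational: 1 - 10**-n."""
    return Fraction(10**n - 1, 10**n)

for n in (1, 5, 20):
    x = x_n(n)
    nine_x = 10 * x - x         # the algebraic step: 10x - x = 9x
    gap = 9 - nine_x            # how far 9x falls short of 9
    print(n, nine_x == 9, gap)  # the gap is 9/10**n at every finite stage
```

At every finite truncation the gap is 9/10^n; whatever one makes of the argument, its force lies entirely in the passage from these finite stages to the actual infinity of nines.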
One initial comment I absolutely have to make, being a philosopher of science with a mind to history, is that such an argument, were it made in the period of scholastic philosophy, would have been accepted as a reductio ad absurdum against the universal validity of algebraic operations; it, in fact, demonstrates a contradiction between the assumption and the conclusion, the hallmark of that mode of refutation. My intention, in the following, will be to show conceptually where the problems lie, thus substantiating what would otherwise be a last-resort, brute-force argument to absurdity.
All too often, we forget that mathematics is a language. This was ever present to great minds like Galileo, who, in The Assayer, said, “The book of Nature is written in the language of mathematics” (emphasis added). The decimal system, like any other system of numerical symbolic representation (e.g., hexadecimal), represents, qua language, quantities manifest in the world, and the algebraic operations represent empirical activities in the world. The error in the above instance, I claim, arises from the operations. First, a conceptual aside in set theory will be useful to demonstrate, just linguistically, that .9999…=1 cannot be a reality without compromising the representational schema.
Consider a number line going from 0 to 1. Now, consider two sets, the first being the closed interval from zero to one, [0,1], and the second being the interval that is open at one, [0,1). It is a definitional fact inherent to the codification of quantity that these sets, [0,1] and [0,1), are not equivalent. Let me repeat: this is stipulated, ab initio. They can only be different by virtue of the symbolic representation being different at one. What does the representation mean, i.e., the difference of being closed versus open at the point in the reals, R, called ‘one’? The difference is simply that the set of values that is open at 1 does not contain that number, while the closed interval does. What number does the open interval go up to? It goes up to .9999… I will spare the technical detail, but there is an epsilon-delta definition, in the language of real analysis, such that the limit is not included in an interval (the point may simply not exist, so far as the interval or function is concerned), yet there always exists an arbitrarily small δ-neighborhood. Hence, given that modern mathematics allows what Aristotle called “actual infinities,” we see that .9999… is distinct from 1. That is, adding a ‘9’ in the succession of 9’s after the decimal, ad infinitum, will always yield something less than 1. The confusion comes in thinking that the limit of continually adding fractions that sum to .9999… (or some other mathematical manipulation), which has a limit of 1, necessarily means 1 is included in the interval. As I just said, it need not be so. The problem is precisely with the operations and the symbolic representation per se.
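The claim about the open interval can at least be illustrated for the finite stages (a sketch of my own, in Python, with exact rationals; the helper name `partial_sum` is my invention): every finite partial sum 9/10 + 9/100 + … + 9/10^n lies strictly below 1, i.e., strictly inside [0,1).

```python
from fractions import Fraction

def partial_sum(n):
    """9/10 + 9/100 + ... + 9/10**n, computed exactly."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 10, 50):
    s = partial_sum(n)
    print(n, s < 1)  # True at every finite stage: the sum stays inside [0, 1)
```

Note that this only exhibits the finite stages; what the completed, actually infinite sum amounts to is exactly what is in dispute.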
Now that we have established some conceptual backing for our commonsense intuition, let’s get to the point. The problem comes from the actual infinity and the algebraic operation of multiplication. Consider the question: what does it mean to add 1 to infinity? Does the notion make sense? It doesn’t, actually. I slipped in the Aristotelian notion of “actual infinity” for this reason. He didn’t believe there was any such thing in our physical world. (I think he would be called a “finitist,” or maybe an “intuitionist,” by modern philosophers.) Our experience, in the literal sense of experience in the phenomenal world, has nothing to say about actual infinity; it is an abstraction. Yet we construct this conception, in abstracto, out of “potential infinity,” which is the idea that an operation can always be performed, turning a present ontological reality into a past-perfect (i.e., completed) act. That is, potential infinity is the idea that, for example, one could divide a piece of paper in two, then perform the task on one of those segments again, and again. Supposing there were no metaphysical constraint (i.e., no atomic [in the generic sense] structure to material stuff), this may be done an ever increasing number of times. Actual infinity, in this example, is the idea that every division that could be made has been made. Apropos the situation of the repetend, it is the idea that all possible instances of placing ‘9’ after the decimal have been made. Let’s think for a moment. The multiplication of a number by the number ten means, in the decimal system, to shift everything one place. That is, if one has the mathematico-linguistic representation 10.01, multiplication by ten is an operation on the linguistic level that shifts the digits, yielding 100.1. But what does it mean to shift infinitely many 9’s? I argue that it simply makes no sense to do so. There is nothing in experience that tells us how to add one (decimal place) to an infinity (of decimal places).
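The “shift” reading of multiplication by ten can be made explicit as an operation on numerals rather than on quantities. Here is a minimal sketch of my own (in Python; the function name `times_ten` and its restriction to finite decimal strings are my assumptions, not anything standard):

```python
def times_ten(numeral):
    """Multiply a finite decimal numeral by ten by shifting the point one place.

    Works only on finite strings such as '10.01'; an infinite tail of digits
    has no definite place to vacate, which is precisely the point at issue.
    """
    whole, frac = numeral.split(".")
    new_whole = (whole + frac[0]).lstrip("0") or "0"  # first fractional digit crosses the point
    new_frac = frac[1:] or "0"                        # the tail shortens by one place
    return f"{new_whole}.{new_frac}"

print(times_ten("10.01"))  # 100.1
print(times_ten("0.999"))  # 9.99 -- the finite tail of 9's shortens by one place
```

On a finite numeral the operation is perfectly well defined, and the fractional tail always loses a place; the question raised above is what the analogous shift could mean when the tail of 9’s never ends.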
This is the error in the representational language. Consider the meaning of .9999… within the context of infinity: it means that an inexhaustibly numerable succession of placeholders contains the symbol ‘9’. The very mathematicians who complain that there is no such thing as infinitely many zeros past a decimal followed by ‘1’, e.g., 0.0000…1 (a case in point for the conclusion I am driving at), somehow feel it acceptable to leave this “last placeholder” (itself a contradiction in terms) empty(!) after shifting nine-repetend to the left by one decimal. Looking back to the operational meaning of multiplication by ten, the operation naturally intends to leave a formerly occupied placeholder empty. That is, in multiplying by 10, 10.01 became 100.1, leaving the hundredths placeholder empty. Were one, by contrast, to stipulate that there are no actual infinities, only potential infinities, multiplication by ten would become an algebraic operation that is universally consistent with respect to the present problem. And so, one is left with the sober clarity that reality is as intuition suggested all along, namely, that .9999…≠1, and any mathematical line of argument to the contrary is incapable of addressing the true error itself, as it is a product of the mathematical language and operation.
For those not familiar with the rules of engagement in philosophy, we actually consider it on the verge of unacceptable to use the argument to absurdity as a method of proof. It is “brute” in the most vulgar terms, and so we often seek deeper conceptual exfoliation through other modes of investigation. The biggest problem with the method is that, while it is virtually always correct with respect to the assumptions (whether the assumptions are right or wrong is another matter), the reason there is a fundamental conceptual problem does not typically show itself through the reductio. For instance, in the present example, it is not clear why there is a contradiction between the given and the consequent; hence the purpose of this blog post.