Quick: If you have the choice between being one of 5 people sharing 3 bottles of wine, or being one of 8 people sharing 5 bottles of wine, which would you prefer (assuming you like wine)? That is, which is bigger, 3/5 of a bottle or 5/8 of a bottle?
If you find yourself reaching for the calculator app on your smartphone, then you can stop reaching right now, because you’ve already proved my point for me: when it comes to comparing two nearby numbers to see which is bigger, decimals are often handier than fractions. To compare the fractions 3/5 and 5/8, we can first write them as 0.6000 and 0.6250 (padding with zeroes at the end as needed), and then we can apply the familiar procedure for comparing two positive decimals: we march through the two strings of digits from left to right, until we find the first position in which the expansions don’t match. Whichever number has the larger digit in that position is the larger number.
3/5 = . 6 0 0 0 …
5/8 = . 6 2 5 0 …
The 6 matches with the 6, but then the 0 doesn’t match with the 2, and since 2 is larger than 0, we see that the second fraction is the larger one.
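If you'd like to see the marching procedure in executable form, here is a small Python sketch (my illustration, not anything from the historical record): it generates decimal digits with exact rational arithmetic and compares them left to right until the expansions disagree.

```python
from fractions import Fraction

def decimal_digits(frac, n):
    """Return the first n digits after the decimal point of a fraction in [0, 1)."""
    digits = []
    for _ in range(n):
        frac *= 10
        d = frac.numerator // frac.denominator   # the integer part is the next digit
        digits.append(d)
        frac -= d
    return digits

def compare_by_digits(a, b, n=20):
    """March left to right; the first position where the digits differ decides."""
    for da, db in zip(decimal_digits(a, n), decimal_digits(b, n)):
        if da != db:
            return -1 if da < db else 1
    return 0  # the expansions agree to n places

# 3/5 = .6000... versus 5/8 = .6250...: they first differ in the second place.
print(compare_by_digits(Fraction(3, 5), Fraction(5, 8)))  # -1, i.e. 3/5 < 5/8
```

As we'll see later in this essay, there's a caveat lurking here: digit-by-digit comparison is only trustworthy when neither expansion ends in an infinite tail of 9's.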
This is a great way to compare two positive numbers (at least if we’ve already got them expressed as decimals), and it’s part of the reason for the success of the decimal system for representing real numbers, introduced in Europe over 400 years ago by Flemish mathematician, physicist, and engineer Simon Stevin (building on earlier work by many other people; see the Wikipedia articles “Decimal” and “Decimal mark” for details).
But the decimal system isn’t a perfect system for doing arithmetic; students start to realize this when we teach them that the fraction 1/3 corresponds to the unending decimal .333… (where “333…” stands for infinitely many 3’s), and perplexity comes to a head when we tell them that .999… equals 1. Thoughtful students notice that there’s something strange going on here, and good teachers can help them over the impasse, but I think some of the best students come away unconvinced. And they’re right to be unconvinced, because the question can’t be fully resolved with the conceptual tools that are taught in pre-college mathematics. If you’re a high school student whose notion of real numbers is founded upon the way we represent numbers by decimal expansions, then no amount of algebraic magic designed to convince you that .999… = 1 is likely to shake your view that the way to compare real numbers is by comparing their digits from left to right, the way we did with 3/5 and 5/8, and that .999… is therefore less than 1.000…; the only effect of algebraic proofs of the formula .999… = 1 will be to undermine your confidence in algebra. (See Jordan Ellenberg’s book How Not To Be Wrong for a nice discussion of the effects of this kind of “algebraic intimidation”. It’s not usually the teacher’s intent to intimidate, but that can be the unintended effect.)
Many high school teachers have the honesty to tell students “I’m not going to tell you the real reasons why .999… equals 1; you’re going to have to take it on faith for now, or at least keep an open mind on the issue until you learn the real reasons in college.” Then the college calculus teachers say “You’ll understand this when you take real analysis” (which is what mathematicians call the rigorous study of real numbers and calculus). And then all too many real analysis teachers end up having to say “Don’t feel bad that you didn’t understand epsilons — lots of students don’t get it the first time around.” The point is, this stuff is really hard. So most people never learn what mathematicians mean by the equation .999… = 1, or why it’s true.
I’ll tell you why it’s true. That is: I’ll explain why, viewed as the address of a point on the number line, “.999…” designates the same point as “1.000…”. Then I’ll explain what mathematicians mean when they write down infinite sums like 9/10 + 9/100 + 9/1000 + … and why, viewed as an infinite sum, .999… is equal to 1. It turns out that the proof of the formula 9/10 + 9/100 + 9/1000 + … = 1 — and indeed, the mathematical meaning of the formula — hinges on the mathematical concept of a “limit”, whose definition involves a certain peculiar kind of two-player game. For those readers who are emotionally invested in .999… < 1 and are having a hard time giving it up, I’ll briefly mention an alternative to the real number system in which .999… is less than 1. (It’s not a substitute for the real number system, but it’s a nice place to visit, and I’ll take you there someday.)
For those of you who want to see other people’s treatment of this topic, I highly recommend the Wikipedia article on .999…, as well as Vi Hart’s video 9.999… reasons that .999… = 1. I should say that I also like the short and sensible analysis given by “Dr. Math”, but for those who aren’t satisfied by that, I hope my more detailed perspective on the issue will be helpful. And if any historians of mathematics read this and can provide information on who first wrote down an infinite decimal, I’d be glad to include that information here.
DOES .999… EXIST?
Let’s start with a brutally practical observation: you can’t really write down infinitely many 9’s. To the extent that .999… represents the idea of a decimal point followed by infinitely many 9’s, it’s a theoretical construct. An infinite decimal like .999… (or, if you prefer, its less suspicious-looking relative .333…) doesn’t have a clear meaning until we give it one. If we treat it as if it means something but we don’t agree on its meaning, we’re likely to get into a pseudo-debate (the frequently heated kind of argument in which two people never reach agreement because they never notice that they’re talking about two different things).
Here’s a meaning that we could give to .333… that’s close to (or perhaps the same as) the one Stevin had in mind: “.333…” means the unique real number that’s between .3 and .4, and is also between .33 and .34, and is also between .333 and .334, and so on, ad infinitum. That is, using the mathematical convention of writing [a,b] to denote the set of real numbers that are greater than or equal to a and less than or equal to b, we define .333… to be the unique real number that lies in all the intervals [.3,.4], [.33,.34], [.333,.334], etc.
For those who prefer geometry: Divide the interval on the number line from 0 to 1 into ten equal sub-intervals, and label them 0 through 9 from left to right. Take the sub-interval labeled 3 and divide it into ten equal sub-sub-intervals, and label them 0 through 9 from left to right. Take the one that’s labeled 3, and so on. You’ll keep restricting yourself to smaller and smaller sub-sub-…-sub-intervals, so you’re squeezing down on a point on the number line. And that point is exactly what we mean by .333…
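Here's a quick Python sketch of the squeezing process (an illustration of mine, not part of Stevin's story): at each stage we take the sub-interval labeled 3, and the resulting intervals all contain 1/3 while their widths shrink toward zero.

```python
from fractions import Fraction

lo = Fraction(0)
for k in range(1, 8):
    width = Fraction(1, 10**k)      # width of a stage-k sub-interval
    lo = lo + 3 * width             # left endpoint of the sub-interval labeled 3
    hi = lo + width                 # its right endpoint
    # prints [.3, .4], then [.33, .34], then [.333, .334], ...
    print(f"[{float(lo):.7f}, {float(hi):.7f}]  contains 1/3: {lo <= Fraction(1, 3) <= hi}")
```

Every interval in the chain contains 1/3, and (as the next section explains) Archimedes' axiom guarantees that 1/3 is the only number caught in the trap.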
If we adopt this meaning of infinite decimals, then we can see why (for instance) .499… and .500… are both equal to 1/2. In the case of .499…, we take the sub-interval [.4,.5] and then take its rightmost sub-sub-interval, and then take its rightmost sub-sub-sub-interval, and so on, until the point we’ve narrowed down on is the right endpoint of [.4,.5]. (Don’t worry if this last claim seems shaky; wait till Archimedes has a chance to chime in on this issue, a couple of paragraphs from now.) In the case of .500…, we take the sub-interval [.5,.6] and then take its leftmost sub-sub-interval, and then take its leftmost sub-sub-sub-interval, and so on, until the point we’ve narrowed down on is the left endpoint of [.5,.6]. The fact that the right endpoint of [.4,.5] coincides with the left endpoint of [.5,.6] (and that therefore .499… = .500…) is not inherently more mystical than the fact that the eastern border of Mississippi coincides with the western border of Alabama.
There are actually two separate questions lurking inside the innocuous-looking definition of .333… as the unique real number that lies in all the intervals [.3,.4], [.33,.34], [.333,.334], …: how do we know there’s only one such real number, and how do we know there are any at all? Putting it geometrically, if we’re trying to narrow in on a point on the number line by trapping it in narrower and narrower intervals, ad infinitum, how do we know we catch only one point in our trap, and how do we know we trap any points at all? The former is our concern here; if you’re concerned about the latter, see End Note #1.
WHAT ARCHIMEDES SAID
Long before Stevin came up with his decimals, there was the Syracusan mathematician, physicist, and engineer Archimedes. Contemplating the physical world and its range of phenomena, from the minuscule to the cosmic, Archimedes made a bold hypothesis: for any small magnitude A and any big magnitude B, if you add A to itself sufficiently many times, you get something bigger than B. For instance, if you have enough ants, they could reach all the way around the earth. (Yes, Archimedes knew that the earth was round, and even took a stab at measuring it.) If we (anachronistically) replace the magnitudes of the Greeks by the real numbers of the modern age, we can paraphrase Archimedes as saying that for any two positive real numbers A and B, there exists a whole number n such that nA > B. If we divide both sides of this inequality by A (obtaining n > B/A), and write the positive real number B/A as just r, we get one version of Archimedes’ axiom: for any positive real number r, there is a whole number n with n > r.
Additionally (and more relevantly for our purposes), if we write nA > B as A/B > 1/n, and give the positive real number A/B the name s, then Archimedes claimed that for every positive real number s, there is a whole number n with s > 1/n, that is, 0 < 1/n < s. That is, for every positive real number s, however small, there is some fraction of the form 1/n that lies between 0 and s on the number line.
Example: If s = .000013, then n = 100,000 will do, since 1/100,000 = .00001 < s.
It’s easy to underestimate the importance of this axiom, because it’s baked into the decimal system. If I give you a small real number s that your calculator can handle, then you could compute an integer n that fills the bill by taking the reciprocal of s, and rounding up what you see to the next largest integer or the next largest power of ten. But remember, Archimedes did not have your smartphone, or a calculator, or the decimal system.
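In modern terms, the reciprocal-and-round-up recipe in the last paragraph is a one-liner. Here's an illustrative Python sketch (the function name is my own invention): given a positive s, it returns a whole number n with 1/n < s.

```python
import math

def archimedes_n(s):
    """Return a whole number n with 0 < 1/n < s, for positive real s."""
    return math.floor(1 / s) + 1   # strictly bigger than 1/s, so 1/n < s

s = 0.000013
n = archimedes_n(s)
print(n, 1 / n < s)
```

Any larger n works just as well; the choice n = 100,000 in the example above is simply a convenient power of ten.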
For a marvelous illustration of Archimedes’ axiom and its relevance to our physical world, see Eva Szasz’s short film “Cosmic Zoom” (1968) or Charles and Ray Eames’s short film “Powers of Ten” (1977). Both movies show how the sizes of the physical objects we know about are part of a single continuum: expand a proton by a factor of ten to the fiftieth power, and you get something much bigger than the known universe. Putting Archimedes’ axiom in more modern terms, using multiplication by powers of 10, we can say that for all positive real numbers A and B, there exists an exponent k such that 10^k A > B. Or, giving A/B the name s as before, no matter how small the positive number s is, there is an exponent k such that 1/10^k < s. Putting this negatively, you can’t find a positive real number s that’s so small that it’s less than 1/10^1 and less than 1/10^2 and less than 1/10^3 and so on; if s is positive, there must be an exponent k such that 1/10^k < s.
Translated into the realm of numbers, Archimedes’ axiom about magnitudes becomes a pair of assertions: there are no infinite real numbers, and there are no infinitesimal real numbers (where “infinite” means “larger than every integer” and “infinitesimal” means “positive but smaller than the reciprocal of every integer”).
I called this claim an axiom, because it’s not something that Archimedes was able to deduce from more fundamental rules about how numbers behave. It’s part of what we observe in nature, and so we make it part of our mathematics. There exist non-Archimedean number systems (like the surreal number system I talked about last month), but none of them have borne as much fruit as the ordinary real number system.
Archimedes was probably inspired in part by the earlier Greek mathematician Eudoxos, who in trying to come to grips with irrational numbers like the square root of 2 used the same trick Stevin did, namely, bounding irrational numbers from above and from below by rationals. When we write π = 3.1415…, we are saying that π is greater than or equal to all the rational numbers 3, 3.1, 3.14, etc. and less than or equal to all the rational numbers 4, 3.2, 3.15, etc. Does this set of properties uniquely specify π? Archimedes assures us that it does. For, if π and π′ are two different numbers that satisfy all those inequalities, then the difference between π and π′ is at most 1 (since π and π′ both lie between 3 and 4), and it’s at most 1/10 (since π and π′ both lie between 3.1 and 3.2), and it’s at most 1/100 (since π and π′ both lie between 3.14 and 3.15), and so on. That is, the difference between π and π′ is some positive number that is less than or equal to 1/10^k for every k. Archimedes then says that this can’t happen. So π and π′ are equal.
More generally, Archimedes’ axiom tells us that if you know the decimal expansion of a real number, then you know what real number is meant. And that’s a good thing, because decimal expansions were created to give names to all the real numbers. If some real numbers got left out, i.e. didn’t have a decimal expansion, or if two different real numbers had the same decimal expansion, that would defeat the purpose that motivated the introduction of decimals in the first place. (See End Note #1 for more on the relationship between decimal expansions and real numbers.)
So now let’s return to .499… We see that .499… and .500… differ by at most 1/10 (because both numbers lie in the interval [.4,.5], whose width is 1/10); and they differ by at most 1/10^2 (because both numbers lie in [.49,.50]); and they differ by at most 1/10^3; and so on, ad infinitum. It follows that the difference between .499… and .500… is at most 1/10^k for every k. Therefore Archimedes assures us that .499… and .500… must be equal.
It may well be that some early adherents of decimal notation advocated barring numbers like .499… from the decimal system (if only because .499… is a really silly way to write 1/2). One could create a theory of infinite decimals that simply forbids infinite decimals that end in infinitely many 9’s. But such discriminatory practices have a way of eroding in the face of questions like “Okay, but if it did correspond to a number, what number would it be?” (Here I’m reminded of the old retort “Okay, if K-A-T doesn’t spell ‘cat’, what does it spell?”) There’s a kind of expansionism at work here, a sort of syntactic manifest destiny, that says that if you can write something down, it ought to mean something. You need only look at the fuss mathematicians make about the unending sum 1 + 2 + 3 + 4 + … to see how far this expansionist tendency takes some of us.
The founders of integral calculus were inspired by Stevin’s work, and by their need to consider other sorts of infinite processes, to view .999… as the value of the unending sum 9/10 + 9/100 + 9/1000 + …. But whatever could it mean to add infinitely many things together, if no matter how early you get up in the morning to start summing, you’ll never finish the job?
ON GAMES AND NUMBERS
The way mathematicians have chosen to define 0.999… is as a limit. They define 0.999… not as an infinite process of increase (which students rightly point out never stabilizes) but as the real number that the infinite process tends towards without ever reaching. More precisely, mathematicians define the infinite decimal 0.999… as the “limit” toward which the non-problematical terminating decimals 0.9, 0.99, 0.999, … are tending.
So what does it mean to say that the sequence .9, .99, .999, … converges to the limit 1?
It means that a certain two-player game, played between antagonists I’ll call Adam and Eve, is a win for Eve. Here’s how they play: Adam picks some positive real number ɛ (the Greek letter epsilon), and reveals it to Eve; then Eve, based on Adam’s choice of ɛ, picks some counting number k. Then they look at the kth term of the sequence .9, .99, .999, … If that term of the sequence differs from the putative limit 1 by less than ɛ, and all subsequent terms differ from the putative limit 1 by less than ɛ, then we decree that the game is a win for Eve; otherwise we decree that it’s a win for Adam.
Let’s see how the game might go. Suppose Adam picks ɛ = .012. Then Eve (after a bit of thought) might pick k = 2. So Adam and Eve, to see who has won, look at the second term of the sequence, which is .99, and they realize, to Adam’s dismay, that .99 and .999 and all subsequent terms of the sequence differ from 1 by less than .012. So Adam has lost.
At this point, Adam might say “Wait, I know how to win this game: I pick ɛ = .00123!” But he’s making the classic general’s mistake of fighting the last war; Eve will win again, provided she chooses k to be some positive integer greater than or equal to 3, because 10^−3 = .001 < .00123.
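To see Eve's strategy in executable form, here is an illustrative Python sketch (the names and details are my own): given Adam's ɛ, Eve hunts for a k with 1/10^k < ɛ. The kth term of the sequence then differs from 1 by exactly 1/10^k, later terms differ by even less, and she wins.

```python
def term(k):
    """The k-th term of the sequence .9, .99, .999, ..."""
    return 1 - 10**(-k)

def eve_winning_k(epsilon):
    """Eve's reply to Adam's epsilon: the least k with 1/10**k < epsilon."""
    k = 1
    while 10**(-k) >= epsilon:
        k += 1
    return k

# Replay the matches from the text, plus one more; Eve wins every time.
for eps in [0.012, 0.00123, 0.0004]:
    k = eve_winning_k(eps)
    print(eps, k, abs(1 - term(k)) < eps)
```

For ɛ = .012 the code picks k = 2, and for ɛ = .00123 it picks k = 3, matching the plays described above.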
When a real analysis teacher says that the sequence .9, .99, .999, … converges to 1, or that the limit of the sequence is equal to 1, he or she is by definition merely saying that the game is a win for Eve if she plays wisely; that is, for any positive real number ɛ Adam may pick, Eve can pick a counting number k (dependent on epsilon) such that, leaving aside the first k-1 terms of the sequence, the discrepancy between each term and the putative limit 1 is less than ɛ. The teacher may say “The limit of the nth term, as n goes to infinity, equals 1”, and the use of the word “infinity” may make you think that something mystical and/or illicit is going on, but the mention of infinity is just a figure of speech; when you unpack the definition, it’s just asserting something about a game between two players, and about ordinary (finite) real numbers and ordinary (finite) whole numbers.
Complicated nested assertions like “For all real numbers ɛ > 0, there exists a whole number k such that 1/10^k < ɛ” (and others that are much worse) are called quantified assertions, and they’re a big part of what makes real analysis hard; many students aren’t used to paying so much attention to words when they’re doing math, and prefer to think that all the meaning resides in math-y looking things like “1/10^k < r”. This is a mistake. For one thing, rearranging the words can make a big difference; “For all real numbers r > 0, there exists a whole number k such that 1/10^k < r” is true (by Archimedes’ principle), but “There exists a whole number k such that for all real numbers r > 0, 1/10^k < r” is false. Quantifiers are a key part of the mathematician’s toolkit, and thinking about propositions as games can help. Every time a proposition says that something is true for All x (sometimes written as “∀x”), think of Adam as getting to choose x; and every time a proposition says that there Exists some x having some property (sometimes written as “∃x”), think of Eve as getting to choose x. This dynamic, adversarial point of view can help bring out the meaning of an otherwise inscrutable proposition.
If you accept Archimedes’ axiom, you must grant that for all real numbers ɛ > 0, there exists a whole number k such that any finite decimal of the form .999…999 consisting of k or more 9’s differs from 1 by less than ɛ. And that is exactly what mathematicians mean by asserting the proposition 0.999… = 1. Defining 0.999… the way they do is a human convention, but it is not an arbitrary one; it’s chosen so as to make the real number system serve the purposes Stevin had in mind when he designed it.
WHY THIS DEFINITION?
Are you disappointed that this is all there is to limits? Were you hoping that the “…” at the end of “.999…” would reveal the secrets of the universe to you? If you were disappointed, you shouldn’t be. If we want our mathematics to be sound, we want it to be as un-mystical as possible. Ultimately, all of the assertions of calculus that you’re likely to encounter in a high school or college class, even the ones that use the symbol ∞, are figures of speech that, when reduced to their definitional meaning, turn out to be not propositions about infinity but propositions about two-player games played with ordinary real numbers and whole numbers. And that’s a good thing. It means that our mathematics is rooted in the soil of ordinary numbers.
But if you find yourself missing infinity as an actual transcendent thing and not just a roundabout way of talking about ordinary things, take heart: if you study math for long enough, you’ll find that, after a while, even though you know how to unwrap those figures of speech, you find yourself imagining that you can picture infinity. This is an illusion, but it’s an illusion that’s wedded to a rigorous protocol for translating assertions about “infinity” into more prosaic assertions about finite numbers, so it’s not an illusion that’s likely to get you into trouble. At St. Mary’s College of Maryland, the math department awards students ‘infinity licenses’ which permit them to throw around words like “infinite” and “infinitesimal” loosely, once they’ve demonstrated that they understand how to go back and forth between such evocative language and the more down-to-earth meanings that are attached to them. Someone with an infinity license would never think that 1 minus .999… equals 0.000…1, because he or she knows that there is no decimal place “at infinity”.
… Or maybe you’re disappointed with the definition of limits for a different reason. Maybe you find the “for all ɛ, there exists k” definition of limits too complicated. Most people, including many who go on to become mathematicians, find quantifiers hard to get used to; I sometimes feel this way myself. But there are good historical reasons for why we’ve formalized the intuitively appealing and seemingly simple concept of limits with all this verbiage. Mathematicians of earlier centuries ran into all sorts of trouble when they played with infinite sums using the same rules that apply to finite sums. For instance, playing games with the sum 1/1 − 1/2 + 1/3 − 1/4 + … that are not too different from the games math teachers use when pseudo-proving that .999… = 1, you can prove that the sum is positive and you can prove that the sum is zero (see End Note #2 for details). When mathematicians who are exploring new terrain, like the mathematicians who first studied infinite series, take a procedural approach, they have a way of getting into trouble: the procedures lead into contradictions. Then it’s time for them to step back and think about what things really mean.
Something like that happened during the first century and a half after Newton and Leibniz developed the calculus. And that, reader, is why we mathematicians have developed a fearsome definition-based approach to the subject, including things like the modern concept of limits. It’s not that we wanted to make life difficult for our students. We were forced to do it. Intuitive and procedural approaches to the subject led us into paradoxes from which we could extricate ourselves only by giving everything in sight a precise definition. And the epsilon definition of limits is the simplest definition that works.
If you’re still disappointed that .999… isn’t less than 1, you might at some point want to learn about the hyperreal number system, and the concept of ultralimits. In this context, .999… is less than 1, and for a very appealing reason: since each individual term of the sequence .9, .99, .999, … is less than 1, the ultralimit must be less than 1 as well. But be warned: in order to make sense of the hyperreal number system, you’ll probably have to get familiar with the ordinary real number system first. Also, the hyperreals have some anomalies of their own. For instance, the sequence .9, .99, .999, … converges to a different ultralimit than the sequence .99, .999, .9999, … . Weirder still, in the hyperreal number system, the sequence .9, 1.01, .999, 1.0001, … converges to something that’s either infinitesimally greater than 1 or infinitesimally smaller, but there’s no way to deduce which! Or rather, you get to choose which way you want it to come out (infinitesimally greater than 1 or infinitesimally smaller).
A different kind of non-Archimedean number system is the surreal number system I wrote about last month. (We saw that in the game of Checker Stacks, the value of a Blue checker is 1, the value of a Deep Blue checker is an infinite surreal number, and the value of a Blue checker with a Deep Red checker on top of it is an infinitesimal surreal number.)
But if you still find yourself wanting .999… to be less than 1 in the ordinary real number system, ask yourself if you really want .333… to be less than 1/3 (which goes hand-in-hand with .999… being less than 1; it’s sort of a package deal). Remember, the whole idea of infinite decimals got its start with people trying to write 1/3 as a decimal and ending up with 1/3 = .333… If you give that up, you’ve given up a lot of the raison d’être of the decimal system. On the other hand, if you do accept that .333… is the right way to locate 1/3 on the number line, then it makes sense that we’d want a way to drive the process in reverse, and re-constitute the number 1/3 from its decimal expansion. That’s precisely what the standard notion of infinite sums does for us. That way of making sense of infinite sums leads us to the surprising but sturdy conclusion that .999… is the same number as 1.
Because I know the subtle way mathematicians have resolved the problem of interpreting and manipulating infinite sums, and because I’ve seen in the classroom how hard it is for undergraduates and graduate students to understand the resolution, I have respect for some high school students’ misgivings about the issue. Specifically, I have qualms about the standard way we “prove” that .999… = 1. For those of you who haven’t seen it, it goes like this: Take the equation x = .999…; multiply it by 10, to get 10x = 9.999… (using the rule that multiplication by 10 means sliding over the decimal point); then subtract the first equation from the second, canceling those 9’s, to get 10x − x = 9.000…, or 9x = 9; and finally divide both sides by 9, obtaining x = 1.
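One way to see what the subtraction step conceals is to run the same manipulation on a terminating decimal with n nines, where every step is legitimate. In this Python sketch (my illustration, using exact rational arithmetic), the cancellation leaves a leftover term 9/10^n that only disappears in the limit.

```python
from fractions import Fraction

for n in [1, 3, 6, 12]:
    x = Fraction(10**n - 1, 10**n)   # .99...9 with n nines
    nine_x = 10 * x - x              # the "slide the decimal point and subtract" step
    shortfall = 9 - nine_x           # here 9x is not quite 9 ...
    print(n, nine_x, shortfall)      # ... it falls short by exactly 9/10**n
```

The standard argument amounts to asserting that the shortfall vanishes when there are infinitely many 9's, and that assertion is precisely what the epsilon definition of limits makes rigorous.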
But how do we know that .999… even means anything? If instead of starting with x = .999… we start with x = 1 + 10 + 100 + 1000 + …, the same sort of argument (subtracting x from 10x) leads us to the patently false conclusion that x = −1/9. (Well, maybe “patently false” is a bit too strong; there are number systems in which the reasoning and the conclusion are valid, but they aren’t the standard real number system. In the standard way of evaluating infinite sums, 1 + 10 + 100 + 1000 + … has no value.)
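Finite truncations make the trouble visible here too. In the following Python sketch (mine), the exact identity for the truncated sum x_n = 1 + 10 + … + 10^(n−1) is 9·x_n = 10^n − 1; the bogus conclusion x = −1/9 amounts to pretending that the dominant term 10^n isn't there.

```python
for n in [1, 3, 6]:
    x_n = sum(10**k for k in range(n))   # 1, then 111, then 111111
    print(n, 10 * x_n - x_n, 10**n - 1)  # the two columns agree: 9*x_n = 10**n - 1
```

Since the 10^n term grows without bound instead of shrinking to zero, the truncated sums have no limit, and the "subtract x from 10x" trick has nothing legitimate to grab onto.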
An even more troubling example involves the infinite sum 1/1 − 1/2 + 1/3 − 1/4 + … that I mentioned above. This infinite sum does have a well-defined (limiting) value in the real number system, namely the natural logarithm of 2, which is approximately .693, but starting with x = 1/1 − 1/2 + 1/3 − 1/4 + … and using some (incorrect but) plausible algebraic manipulations, one can derive the false conclusion that x = 0. (See End Note #2 for more on this.) Correct manipulation of infinite sums is a tricky thing, and the standard argument for .999… = 1 is a cartoon of a much more intricate argument. So students who think that the standard proof is sketchy (in both senses of the term) are right. The most I’ll concede is that it provides rough-and-ready evidence (not proof) that if .999… is equal to any real number at all, the real number 1 is the most natural candidate.
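Since the value of this series is defined as the limit of its partial sums, we can watch the convergence numerically. Here's an illustrative Python sketch (mine): the partial sums close in on ln 2, with an error that shrinks roughly like 1/(2n).

```python
import math

def partial_sum(n):
    """Sum of the first n terms of 1/1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1)**(k + 1) / k for k in range(1, n + 1))

for n in [10, 100, 1000, 10000]:
    print(n, partial_sum(n), abs(partial_sum(n) - math.log(2)))
```

Reordering the terms, on the other hand, can change the limit, which is exactly why the plausible-looking manipulations mentioned above go wrong.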
Also, people who object to the way the 9’s cancel (because one of the 9’s is left uncancelled) are tripping over a deep and surprising property of infinite sets. There’s a way of pairing up the 9’s in the first row with the 9’s in the second so that everything in the first row gets paired up:
    9 . 9 9 9 9 ...
      \  \ \ \ \
       . 9 9 9 9 ...
And there’s another way of pairing up the 9’s so that one 9 in the first row is left unpaired:
    9 . 9 9 9 9 ...
        | | | |
      . 9 9 9 9 ...
This is the Hilbert Hotel Paradox, and it’s tricky. Once one has seen the first pairing, it’s hard to believe that the second pairing is fully legitimate, until you’ve come to grips with the ways infinite sets behave. This is something genuinely subtle and surprising, and if the issue is left tacit (the way it is in the algebraic derivation of .999… = 1), we shouldn’t be surprised if the internal tension expresses itself as a vague, unarticulated unease that undermines full confidence in the proof.
I’m probably being too critical of the standard “10x − x” argument. I found it convincing, and even beautiful, when it was first shown to me (I think I was twelve or so). And the argument certainly does prove something, namely: if you believe that .999… actually corresponds to a number, and you accept that non-terminating decimals obey certain natural-seeming properties (such as: multiplying by 10 is the same as shifting the decimal point), then you must accept that .999… = 1. For twelve-year-old-me, those properties were already familiar ones, and my commitment to them was pretty strong. But I know that the 10x − x argument leaves many intelligent people unconvinced. So I thought there was a need for an explanation that confronts the question of what a non-terminating decimal like .999… means, instead of just juggling it using plausible rules grounded in the behavior of terminating decimals.
WHY THIS PROBLEM WON’T DIE
One reason the problem will continue forever (much like .999… itself) is that it’s so far removed from experience. You can convince yourself that 2/6 = 1/3 by comparing two pizzas, but there’s no real-world measurement we can make to assure ourselves that .999… = 1. Instead, we have to use the decimal system as a construct of pure thought; and if this construct isn’t securely tethered to something concrete, it can drift quite a bit from what the originators of the decimal system had in mind.
In a way, the decimal system is a victim of its own near-perfection: it comes so close to giving a one-to-one correspondence between decimal expansions and real numbers that it’s easy for students to slip into the belief that the correspondence is perfect. It’s not.
Also, for some students the idea that .999… is less than 1 ties in all too well with another wrong idea, namely that real numbers have neighbors. If, in thrall to the calculator on your smartphone, you imagine the real numbers as being just like the counting numbers but with a decimal point thrown in, then it’s easy to imagine that there’s a real number immediately to the left of 1 on the continuous number line, just as there’s a whole number immediately to the left of 1000 on the discrete number line. Since 999 is the whole number immediately to the left of 1000, you might think that .999… is the real number immediately to the left of 1.000… Maybe a part of you knows that this can’t be right: you learned in geometry that you can’t have two points that are next to each other (there’s always a midpoint between them), and you learned in analytic geometry that there’s a one-to-one correspondence between the points on a line and the real numbers. But still, the discrete number line looks more and more like the continuous number line as you make the points closer and closer together, so it’s hard to see that the idea that numbers have neighbors must be sacrificed if one wants to make further progress in understanding the real number concept.
The fallacious belief “real numbers have neighbors” and the fallacious belief “0.999… < 1” are two wrong ideas that fit together beautifully, and are therefore harder to remove from people’s heads. Attempt to extract one of them, and the elastic cord that joins the two will cause the belief you’re removing to snap back into place as soon as you leave the operating theatre.
Another reason for students to subscribe to the belief that .999… is different from 1.000… is a kind of procedure-based approach to mathematics. Instead of asking “Does .999… refer to the same number as 1?”, the student ducks the question of meaning and asks “Can I reduce one of these numbers to the other, using the dozen or so rules I’ve learned for simplifying expressions?” Since the student has mostly seen problems in which two things are proved to be equal by simplifying both sides until they become manifestly equal, and since .999… and 1 cannot be simplified further, the student feels that they must not be the same.
One approach to removing wrong ideas is to introduce cognitive dissonance. For instance, if you maintain that 0.999… is the left-neighbor of 1, I might ask you, what is the right-neighbor? And if you answer “1.000…1”, I might point out that this isn’t the same kind of numeral as 0.999… (1.000…1 has a last digit, while 0.999… doesn’t). This pedagogical approach sometimes works, but often it merely seems to work, because it stifles dissent without giving deep understanding. As John Morley famously stated, “You have not converted a man because you have silenced him.”
So, many students whose high school teachers told them that 1.000… and 0.999… are equal come to college with a garbled understanding of the issue, because back in high school they exercised their mathematical Miranda right to remain silent when they were confused. A majority of my honors calculus students start the semester subscribing to the belief that 1.000… and 0.999…, although “the same for practical purposes”, are “mathematically different”.
I know where they’re coming from: when n is a large finite number (like a million), 1 and 1 − 10^(−n) are the same for practical purposes yet mathematically different. But we’re not talking practical purposes here; we’re talking about pure mathematical purposes. This is no one-way flight from reality; even if one’s end goal is to accomplish things in the real world, it’s helpful to have an ideal world to compare it with.
Maybe what’s missing in some of my students is an appreciation of the virtues of an ideal world. Maybe even the idea of precise mathematical equality, embedded in the “=” in the formula 0.999… = 1.000…, seems overly theoretical and unconvincing to some students. “Two numbers are equal if my calculator says they are! Isn’t that good enough?” There’s a counter-argument to be made here, showing the superiority of the real number system over the number system implicit in calculators. Sometime I hope to come back to this, by talking about the kind of arithmetic that’s embodied in modern computers, called IEEE arithmetic, to see what it’s good for and what its limitations are. Once you see the pitfalls, I think you’ll gain a new appreciation of the virtues of an idealized system in which we pretend we can write down infinite strings of digits for numbers like π even though we can’t really.
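As a small taste of that argument, here is a quick illustration in Python (whose floats are IEEE double-precision numbers) of how the calculator-style number system blurs distinctions that the real numbers keep sharp:

```python
# IEEE doubles carry only about 16 significant decimal digits,
# so a small enough difference from 1 simply vanishes.
n = 20
x = 1 - 10**-n            # mathematically, this is 0.999...9 with twenty 9's
print(x == 1)             # True: to the float system, they are the same number
print(0.1 + 0.2 == 0.3)   # False: even familiar decimal facts fail
```

In the real number system, 1 − 10^(−20) and 1 are genuinely unequal; IEEE arithmetic simply can’t see the difference.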
Speaking of contexts in which a brain can be a useful adjunct to a calculator, can you determine, using just pencil and paper, the hundredth digit of 1/99² after the decimal point? I don’t want the first ninety-nine digits — just the hundredth. (As a warm-up, you might want to consider the decimal expansion of 1/9² = 1/81.) See End Note #3 for the answer.
If the “=” in “.999… = 1” is tricky, the “…” is even trickier. Infinity is not an easy thing to grasp, and indeed all the ways in which we pretend to “grasp” it are forms of sly indirection in which we actually grasp something else. We’ve seen that statements about actual infinity (as in “the limit as n goes to infinity”) can be understood as paraphrases of more complicated statements that don’t involve infinity at all, but this takes some getting used to.
I think that, at base, .999…=1 is a hard truth to grasp because real numbers are hard to think about and because we never tell students what real numbers actually are. We’re not deliberately withholding the goods from them; it’s more that the goods aren’t really so good. The intuitively compelling models of the real number system (such as the real number line) are vague (what geometric axioms are being assumed?), and the precise models (via Dedekind cuts or Cauchy sequences) are quite abstract. We also like to be vague and say things like “The details don’t matter; what matters is that the real numbers form a complete ordered field, and there’s only one such system up to isomorphism”, but getting to the point where you can unpack that sentence takes years.
I’d love to see how the real number system is taught a thousand years from now; maybe the reason it seems so complicated is that we’re thinking about it wrong. Right now, the best we can do at the pre-college level, as a compromise between concreteness and rigor, is to define real numbers as infinite decimals. And then the convention that 0.999… = 1.000… is bound to seem ad hoc at best.
I’ll close by mentioning the views of Ed Dubinsky, who sees part of the trouble students have with .999… = 1 as being an issue of “encapsulation”. Just as the notion of physical objects and their properties (notably, the permanence of objects and the occasional lack thereof) is something toddlers have to learn through experience, the notion of mathematical objects is not something we are born with; it’s something we must acquire, and the way we best acquire it is not by being explicitly taught but by having the sorts of experiences that lead us to embrace it. In order to understand the equation .999… = 1 in the way mathematicians intend it, one needs to have an “object conception” of the left hand side. Many students understand .999… solely as an unending process of tacking on 9’s, but since the process can never be completed, they don’t see how the unending process can be viewed as an object in its own right. One needs a way to encapsulate “.999…” so one can say “What limit is this process tending toward?” One needs a point of view in which .999… isn’t seen as a dynamic process that never arrives at its destination, but as a static object arising from (but distinct from) that dynamic process.
The abstract concepts of mathematics are, in a fashion, reminiscent of the TARDIS in the television series Doctor Who, a space-and-time-travel conveyance that looks different on the inside and on the outside. On the inside, the TARDIS is enormous, and perhaps even infinite (like an unending stream of 9’s); but from the outside, it’s an object you can walk around and even (when it’s small) pick up and manipulate. Mathematics is full of boxes that look different from the inside and the outside. The number 1/3 is such a box: viewed from inside it involves a process (division), but viewed from outside it’s just a number. (People who say “1/3 isn’t a number because it involves an operation” haven’t achieved the inside/outside double-perspective yet.) The unending decimal .999… is another such box: infinite on the inside, finite on the outside. (Someone who objects to .999… = 1 on the grounds that “It gets closer but it never reaches it” has not fully encapsulated .999…)
Navigating the mathematical cosmos gracefully requires a fluency in passing through the skins of these capsules, sometimes taking the exterior view, sometimes taking the interior view; unpacking definitions when we need to, and packing them back up when we’re done. Once we’ve gotten the trick of it, the possibilities open to us are, well, limitless.
This article was written with help from Dan Asimov, Henry Cohn, Marina Franchild, Sandi Gubin, Hans Havermann, David Jacobi, David Kung, Andy Latto, Henri Picciotto, Rich Schroeppel, Steven Strogatz, James Tanton, and Glen Whitney. The graphic was created by Li-Mei Lim.
#1: I asserted above that π can be defined to be the unique real number that lies in all the intervals [3,4], [3.1,3.2], [3.14,3.15], … Archimedes (or rather the Archimedean property of the reals that bears his name) assures us that there can’t be more than one number that lies in all these intervals. But how do we know that there are any at all? How do we know that when we intersect all these closed intervals in the real numbers, we aren’t left with the empty set of real numbers? This might seem trivial if you think of real numbers as infinite decimals, but when you think in terms of the number line, there’s a serious geometrical problem here. The question of whether there actually is some point on the number line whose address is given by 3.14159… is a deep issue; it reflects a gap in Euclidean geometry that wasn’t filled until the 19th century. I won’t say more about the matter here, but if you want to know more about it, look up the completeness property of the reals.
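Here is a finite illustration of those nested intervals, in Python. It exhibits the shrinking intervals numerically but proves nothing about completeness, which is exactly the point of the note above:

```python
import math

# The k-th interval [a_k, a_k + 10^-k] pins pi down to k decimal places:
# [3, 4], [3.1, 3.2], [3.14, 3.15], ...
for k in range(6):
    a = math.floor(math.pi * 10**k) / 10**k
    b = a + 10**-k
    assert a <= math.pi <= b        # each interval contains pi
    print(f"[{a}, {b}]")
```

The widths 10^(−k) shrink toward 0, so at most one number can survive every cut; completeness is what guarantees that at least one does.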
#2: I’ve said that the standard argument for why .999… = 1 merits more skepticism than it is usually given by teachers and students. To see why, let’s look at a somewhat similar bit of algebraic analysis of an infinite sum. Write
(1) x = 1/1 – 1/2 + 1/3 – 1/4 + 1/5 – 1/6 + 1/7 – 1/8 + …
Suppose you ignore the little voice in your head (the one that sounds a lot like me) saying “You can’t just do that; you haven’t proved that these manipulations are valid for infinite series” and you march bravely/foolhardily into these woods, pretending there are no monsters there. Write
x = 1/1 + (1/2 – 2/2) + 1/3 + (1/4 – 2/4) + 1/5 + (1/6 – 2/6) + …
= 1/1 + 1/2 – 1/1 + 1/3 + 1/4 – 1/2 + 1/5 + 1/6 – 1/3 + …
In the last expression, positive terms and negative terms can be cancelled in pairs, with no term left uncancelled, so that x = 0. But grouping x as (1/1 – 1/2) + (1/3 – 1/4) + (1/5 – 1/6) + (1/7 – 1/8) + …, it’s plain that x is positive. There are monsters in these woods after all! You should have listened to that little voice.
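You can watch the monster at work numerically. Here’s a short Python check that the partial sums of x settle near ln 2 (about 0.693), nowhere near the 0 that the pairwise-cancellation argument claimed:

```python
import math

# Partial sums of x = 1/1 - 1/2 + 1/3 - 1/4 + ...
s = 0.0
for n in range(1, 100001):
    s += (-1)**(n + 1) / n

print(round(s, 4))   # 0.6931 -- the series actually converges to ln(2)
print(math.log(2))   # 0.6931471805599453
# Cancelling terms that sit infinitely far apart in a conditionally
# convergent series is the illegitimate step.
```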
I show you that paradox with trepidation, because paradoxes like this leave some students with the impression “You can prove anything with math.” In fact, what it really shows is that you can pretend to prove anything with math. In this case, it’s a pretend proof because it manipulated infinite sums in an appealing but illegitimate way.
#3: The hundredth digit of 1/99² is a 9. Here’s a good way to see it: 1/99 is .010101010101… When you square it, you get 1/9801 = .000102030405060708091011…
For the first hundred digits, and well beyond, you see blocks of two digits that count up by 1: 00, 01, 02, 03, … In particular, the ninety-ninth and hundredth digits together form the fiftieth two-digit block, containing the digit pair 49, so the hundredth digit is the 9 we were after.
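Exact integer arithmetic makes this digit-by-digit claim easy to verify; here’s a quick sketch in Python (the long-division-by-truncation trick is mine, not part of the note above):

```python
# The first 100 digits of 1/9801 after the decimal point are the
# zero-padded digits of 10**100 // 9801 (long division, truncated).
digits = str(10**100 // 9801).zfill(100)

print(digits[:20])     # 00010203040506070809 -- two-digit blocks counting up
print(digits[96:100])  # 4849 -- so the hundredth digit is 9
```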
The fact that a simple fraction like 1/99² can have such a long repeat-period in its representation as an infinite decimal is a different kind of flaw in the decimal system from the .999… = 1 anomaly.