It hapneth also some times, that the Quotient cannot be expressed by whole numbers, as 4 divided by 3 in this sort, whereby appeareth, that there will infinitly come from the 3 the rest of 1/3 and in such an accident you may come so neere as the thing requireth, omitting the remaynder…

— Simon Stevin, The Tenth (1585)^{1}

Many people find fractions and decimals confusing, counter-intuitive, and even scary. Consider the story of the A&W restaurant chain’s ill-fated third-of-a-pound burger, introduced as a beefier rival of the McDonald’s quarter-pounder. Many customers were unhappy that A&W was charging more for a third of a pound of beef than McDonald’s charged for a quarter of a pound. And why shouldn’t they be unhappy? Three is less than four, so one-third is less than one-fourth, right?

Well, that’s what many of those aggrieved customers told the consultants who had been hired to find out why A&W’s “Third is the Word!” innovation had gone so disastrously awry. But I wonder if those customers were rationalizing (sorry…) after the fact. Maybe some of these people had had such bad experiences when learning about fractions in school (the awkward fraction 1/3 in particular) that they preferred to avoid eating at establishments that triggered their math anxiety.

Perhaps part of the problem is that for many people, the standard middle school curriculum on fractions and decimals doesn’t hang together well, with its mélange of different representations of things that they’re told are really the same thing under different names, such as 1 1/5 and 6/5 and 12/10 and 1.2 (and let’s not even mention 120%). And as if that weren’t bad enough, there are decimals that never end?!? It’s easy to come away from this experience confused and disheartened.

A common stumbling block, even before decimals come into the picture, is division of fractions. Ask a student “What’s 6 divided by 1/2?” and they’re likely to give the wrong answer 3 instead of the right answer 12. Part of what’s tripping them up is the way the phrase “divided by one-half” resembles the phrase “divided in half”, but a deeper issue is that the student often has no access to a mental model in which dividing one fraction by another is meaningful.

The education theorist Liping Ma opened my eyes to the complexities of teaching fractions in her book “Knowing and Teaching Elementary Mathematics”, which introduced me to different models of division. The *partitive* model of six-divided-by-two asks, “If you have six cookies to share among two children, how many cookies does each child get?” This model works well for 6 ÷ 2 but isn’t so helpful for making sense of an expression like 6 ÷ 1/2 in which the divisor (the number you’re dividing by, which in this case is the number of children) isn’t a whole number: how do you feed half a child?

The *quotative* model of six-divided-by-two asks, “If you have six cookies, and you want to share them among some children so that each child gets two cookies, how many children can you feed?” This model works well for 6 ÷ 1/2; if you have six cookies and you want each child to get half a cookie, then the number of children sharing the cookies should be twelve. But this model is less helpful when the quotient (the answer to the division problem, which in this case is the number of children) isn’t a whole number.

So how do we make sense of division of fractions when neither the divisor nor the quotient is a whole number, such as one-half divided by one-third?

Most students learn a procedure for dividing one fraction by another, handily summarized in the verse “Yours is not to reason why, just invert and multiply!”, where inverting a fraction means swapping the numerator and the denominator. The verse assures us that the scary expression *a*/*b* ÷ *c*/*d* equals the less-scary expression (*a*/*b*) × (*d*/*c*) , or (*a*×*d*)/(*b*×*c*). But if you just apply a memorized rule, you’re letting the rule (or the people who devised it) do the thinking for you.^{2} And then you risk becoming one of those people who thinks a third of a pound of beef should cost less than a quarter of a pound of beef.

(In a blog that focused more on real-world issues, an essay on fractions would treat specific forms of innumeracy related to fractions. The coronavirus pandemic showcased many examples of this, such as when people focused on case *counts* when they should instead have attended to case *rates*. Knowing when to use denominators, and just as crucially knowing what denominator to use, is a huge part of mathematical literacy, or as it’s come to be called, numeracy. But that’s not my beef today.)

A teacher explaining why one-half divided by one-third is one-and-a-half might make use of the quotative model: when you’ve got a girl and a boy who each want a third of a pizza (two slices) but you’ve only got half a pizza (three slices), if you give the girl her quota the boy will only get half of his. So in that sense there’s enough pizza for one-and-a-half children.

That’s a good approach – one that’s grounded in the kind of concrete situation that fractions were introduced to handle. But let’s see how a person of an oddly schematic cast of mind might approach the problem, not because of what this will tell us about fractions, but because of what it will tell us about mathematicians, and more specifically, about how mathematicians think when negotiating unfamiliar terrain – because we’ve got a lot of unfamiliar terrain coming up in future essays.

**THE AMNESIC MATHEMATICIAN**

Imagine an amnesic mathematician who’s forgotten how to divide fractions but remembers one important thing about dividing a whole number (call it *x*) by another whole number (call it *y*): for any (nonzero) whole number *m* (call it the scaling factor), the quotient (*m* × *x*) ÷ (*m* × *y*) is equal to the quotient *x* ÷ *y*. For instance, 60 ÷ 20 = (10×6) ÷ (10×2) = 6 ÷ 2. You might call this the scaling property of division. It can simplify division by letting us cancel common factors.

To apply the scaling property to 1/2 ÷ 1/3, we perform the scaling trick in the opposite direction: instead of scaling down two big whole numbers to get a simpler problem involving smaller whole numbers, we can scale up two fractions to get a simpler problem involving, not fractions, but whole numbers.

When *x* is 1/2 and *y* is 1/3 , the savvy choice of scaling factor turns out to be *m* = 6, so that *m* × *x* is 6 times 1/2, or 3, while *m* × *y* is 6 times 1/3, or 2: both whole numbers. Then *x* ÷ *y* = (*m*×*x*) ÷ (*m*×*y*) = 3 ÷ 2 = 3/2.^{3} (Here I’m skipping over some issues that a good teacher would have to address, such as the relationship between *x* ÷ *y* and *x*/*y* and *x* × 1/*y.*)
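(For readers who like to check such things by machine, here is the Amnesic Mathematician’s computation in Python, using the standard `fractions` module for exact rational arithmetic; the variable names are mine, and of course the point is that you could rediscover the answer without the computer’s help.)

```python
from fractions import Fraction

x, y = Fraction(1, 2), Fraction(1, 3)
m = x.denominator * y.denominator   # the savvy scaling factor: 2 × 3 = 6
assert m * x == 3 and m * y == 2    # scaling turns both fractions into whole numbers
assert x / y == Fraction(3, 2)      # agrees with (m×x) ÷ (m×y) = 3 ÷ 2
```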

The scaling-property approach to dividing one fraction by another isn’t completely different from the model-based approach; in particular, scaling numbers up by a factor of 6 corresponds to telling a student “Try counting slices instead of pizzas.” But the Amnesic Mathematician is operating in a realm of pure number, with no kids or pizza in sight – just symbols (*x* and *y*) that are general in form though specific in their nature.

**THE PRINCIPLE OF PERMANENCE**

If that last phrase (“general in form though specific in their nature”) struck you as a little old-fashioned (in addition to being a bit obscure), that’s because I stole it from a book written over a century and a half ago: George Peacock’s “A Treatise on Algebra”. In it, Peacock enunciated a principle he called the Permanence of Form. The Principle says that when we’re trying to extend the operations of arithmetic from some number system (such as the counting numbers) to some larger number system (such as the fractions), we should assume that any algebraic formula that holds true in the smaller number system (such as (*m*×*x*)÷(*m*×*y*) = *x*÷*y*) will hold true in the larger number system as well. This principle isn’t a mathematical fact, and indeed it has many exceptions, of which the most historically important may be William Rowan Hamilton’s quaternionic number system (ironically, invented by Hamilton at about the same time as Peacock wrote his book): in inventing the quaternions, Hamilton had to ditch the commutative law of multiplication (*x*×*y* = *y*×*x*). When you apply Peacock’s principle, it’s important to keep in mind that it’s not an infallible guide; but when it’s wrong, it’s wrong for an important reason, and the reason is worth understanding.

It turns out that what the Amnesic Mathematician did for 1/2 ÷ 1/3 (determining its value not by appealing to a model situation in which the division makes sense but by assuming that general properties of division of counting numbers will apply to fractions as well) can also be done for 1/2 + 1/3, 1/2 – 1/3, and 1/2 × 1/3, or indeed for the sum, difference, product, and quotient of any two (positive) fractions. Our Amnesic Mathematician can go on to prove that there’s one and only one way to extend the operations of addition, subtraction, multiplication, and division from the realm of counting numbers to the realm of fractions while preserving properties like (*m*×*x*)÷(*m*×*y*) = *x*÷*y* and *m*×(*x*+*y*) = *m*×*x*+*m*×*y*. And the resulting way of adding, subtracting, multiplying, and dividing fractions, although derived from purely formal considerations, turns out to be the right way to do arithmetic with fractions in contexts where those operations have meaning – even though our Amnesic Mathematician was not attending to meaning at all, but merely looking at formal properties of counting-number arithmetic and guessing that they extended to fractions.
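As a small illustration of how a formal property pins down an answer: taking *m* = *b*×*d* in the law *m*×(*x*+*y*) = *m*×*x*+*m*×*y* forces *a*/*b* + *c*/*d* to equal (*a*×*d* + *c*×*b*)/(*b*×*d*). Here’s a sketch of that derivation in Python; the function name `amnesic_add` is just my label, not standard terminology.

```python
from fractions import Fraction

def amnesic_add(a, b, c, d):
    # to add a/b and c/d, scale both by m = b*d so each becomes a whole
    # number (a*d and c*b), add those, then undo the scaling
    return Fraction(a * d + c * b, b * d)

assert amnesic_add(1, 2, 1, 3) == Fraction(5, 6)               # 1/2 + 1/3
assert amnesic_add(1, 2, 1, 3) == Fraction(1, 2) + Fraction(1, 3)
```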

Let’s apply the Principle to another problem: figuring out what 9^{1/2} should mean. We know that when the exponents *m* and *n* are counting numbers, 9^{m} times 9^{n} equals 9^{m+n}. Let’s make the brave guess that this equation is true even when *m* and *n* are fractions, and more specifically, when *m* and *n* equal 1/2. So 9^{1/2} times 9^{1/2} should equal 9^{1/2+1/2}, which equals 9^{1}, which is 9. This tells us that 9^{1/2} should be a number that, when squared, gives 9; that is, 9^{1/2} should be 3 (the square root of 9). The Principle of Permanence of Form predicts that in contexts where fractional exponents have some sort of meaning, the value of 9^{1/2} will turn out to be 3. That is, even before we know what the question “What is the value of 9^{1/2}?” *means*, the Principle gives us a way to divine the answer! This is magic of a high order.
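(A quick numerical sanity check, for the skeptical: Python’s floating-point power operator already embodies the Permanence-of-Form answer, so 9 raised to the 0.5 comes out as 3, up to rounding. This is a spot-check of the guess, not a proof.)

```python
t = 9 ** 0.5
assert abs(t - 3) < 1e-12            # 9^(1/2) is the square root of 9
assert abs(t * t - 9) < 1e-9         # squaring it recovers 9
# the exponent law 9^m × 9^n = 9^(m+n) survives for fractional m and n
m = n = 0.5
assert abs(9 ** m * 9 ** n - 9 ** (m + n)) < 1e-9
```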

There’s an odd dance between meaning and form. Form without meaning is incomplete, but even before meaning attaches itself to form, form can point the way towards meaning. That’s why mathematicians trust form even when meaning isn’t available; they stumble forward using form instead of meaning, hoping that their guesses are right and that the properties they are using (such as the scaling property for division) are valid in the new country they’re exploring. Sometimes those properties are too permissive to provide clear answers in the new country. Other times the properties are too stringent and admit no consistent answers. But every now and then, properties imported from the old country yield univocal, consistent answers in the new country. In that case, mathematicians treat this univocality as a sign that they’re on the right track.

**WHAT CHANGES**

An important property of the counting numbers that lies outside the purview of the Principle of Permanence of Form is the Archimedean property: Given two counting numbers *m* and *n*, no matter how disparate in size they are, if you add enough *m*’s together you can get a sum at least as big as *n*, and vice versa. The older I get, the more profound I think the Archimedean Property is, not just as a mathematical assertion but as an assertion about the observable universe. We study quarks and we study galaxies, and they’re very different from each other, but they occupy a common scale, with human beings somewhere in the middle. Maybe there are things that are infinitely smaller than quarks or infinitely larger than galaxies, but how could we ever come to know about them? It seems to me that the Archimedean property of the counting numbers in a way corresponds to fundamental limits on our ability to probe the universe with our finite bodies and minds.

It turns out that the Archimedean property persists when we include fractions: given two fractions *p*/*q* and *p*′/*q*′, adding *p*/*q* to itself *p*′×*q* times gives (*p*/*q*)×(*p*′×*q*) = *p*×*p*′, a whole number that is at least as big as *p*′/*q*′ (since *p*×*p*′ ≥ *p*′ ≥ *p*′/*q*′), and similarly when the two fractions reverse roles.
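The argument above even tells you how many copies suffice, and it’s easy to verify with exact rational arithmetic; `copies_needed` is my own name for this little witness-producing function.

```python
from fractions import Fraction

def copies_needed(x, y):
    # for x = p/q and y = p'/q', adding x to itself p'×q times
    # yields p×p', a whole number at least as big as y
    n = y.numerator * x.denominator
    assert n * x >= y
    return n

assert copies_needed(Fraction(1, 1000), Fraction(999, 1)) == 999000
```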

Another important fact about the set of counting numbers is that it is discrete. Putting it concretely, each counting number *n* has a successor *n*+1, and there are no other counting numbers in between (despite fanciful whimsies about “bleem” and “bleen” – see last month’s essay). So you might guess that the set of fractions is similar, albeit in a more compressed way with each fraction and its successor being much closer together. But you’d be very wrong.

Before we go on to talk about the strange topography of the set of fractions, let’s adopt the word mathematicians use to embrace both whole numbers and fractions: rational numbers. “Rational” just refers to a number that’s a ratio of integers (excluding division by zero, of course). Notice that all counting numbers are rational, since each counting number *n* can be written as the ratio (or fraction) *n*/1. I’m choosing to ignore negative fractions and zero for the time being, since humanity invented zero and negative numbers after fractions. So in this essay, when I talk about rational numbers, I’ll always mean positive rational numbers.

So now I can ask: What’s the first rational number that’s bigger than 1? Is it 101/100? No; 102/101 is smaller than 101/100 while still being bigger than 1. In fact, if you name any fraction *p*/*q* that’s bigger than 1, the fraction (*p*+1)/(*q*+1) is ever-so-slightly smaller while still being bigger than 1. So there’s no first rational number after 1. And 1 is not alone in this regard. Pick any rational number *p*/*q* that you like, and any slightly larger rational number *r*/*s*. *r*/*s* isn’t the smallest rational number that’s bigger than *p*/*q*; for instance, (*p*+*r*)/(*q*+*s*) comes in between.^{4} In fact, there are infinitely many rational numbers between *p*/*q* and *r*/*s*, no matter how close *p*/*q* and *r*/*s* are!
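Both tricks from this paragraph can be spot-checked in a few lines of Python; `mediant` is the standard name for the in-between fraction, while the rest of the naming is mine.

```python
from fractions import Fraction

def mediant(x, y):
    # (p+r)/(q+s), which always lands strictly between p/q and r/s
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

# no first rational after 1: (p+1)/(q+1) squeezes below p/q
p, q = 101, 100
assert 1 < Fraction(p + 1, q + 1) < Fraction(p, q)

# the mediant of 1/2 and 2/3 lies strictly between them
x, y = Fraction(1, 2), Fraction(2, 3)
assert x < mediant(x, y) < y         # 1/2 < 3/5 < 2/3
```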

We summarize the situation by saying that the set of rational numbers is *dense*, which means that it’s infinite in a very strong way: every interval in the number line contains infinitely many rational numbers. The set of counting numbers is infinite too, but at least it has the decency to do its being-infinite thing out beyond the zillions, where we don’t have to look at it happening — whereas the set of rational numbers flaunts its infinitude right under our noses, everywhere we look.

**THE DECIMATION OF THE RATIONALS**

When it comes to how people treat rational numbers, I divide the modern world into three subcultures: Science, Math, and Real Life. In a table of physical constants (in Science), is the standard acceleration of free fall listed as 9 4/5 (or 49/5 or 98/10) meters per second squared? Never; it’s always listed as 9.8 meters per second squared (unless more precision is required). In a cookbook (in Real Life), would you see a recipe that calls for 1.5 cups of flour, or 3/2 of a cup of flour? Maybe your cookbook does, but in every cookbook I’ve ever seen, it’s 1 1/2 cups. And in a geometry textbook (in Math), would you see a formula that gives the area of a triangle of base *b* and height *h* as .5*bh*, or the volume of a ball of radius *r* as 1 1/3 *πr*^{3}, or worse, 1.3̅*πr*^{3}? (If you’ve forgotten, 3̅ is short for infinitely many 3’s.) No; it’d be 1/2 *bh* and 4/3 *πr*^{3}, respectively.

There are good reasons why these subcultures have adopted their respective conventions, and as long as we don’t get confused about which culture we’re in, all is fine. But trouble can arise when perfectly nice fractions get written as non-terminating repeating decimals; for instance, 1/17 is 0.0588235294117647… (with a repeating block of 16 digits), while 1/2023 requires a repeating block of 816 digits.
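Where do those block lengths come from? The repeating block of 1/*d* has length equal to the number of steps it takes the long-division remainder to cycle back to where it started (after factors of 2 and 5, which only affect the non-repeating prefix, are stripped from *d*). Here’s a sketch; `period_length` is my name for it.

```python
def period_length(d):
    # length of the repeating block in the decimal expansion of 1/d
    while d % 2 == 0:      # factors of 2 and 5 only delay the repetition
        d //= 2
    while d % 5 == 0:
        d //= 5
    if d == 1:
        return 0           # the decimal terminates
    k, r = 1, 10 % d       # track the long-division remainder
    while r != 1:
        r = (r * 10) % d
        k += 1
    return k

assert period_length(17) == 16
assert period_length(2023) == 816
```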

When you insist on limiting yourself to fractions whose denominators are powers of ten (or divisors of powers of ten), as is required by the decimal system popularized in Europe by Simon Stevin in the 1500s, you drastically cull the set of permitted fractions. Most rational numbers can’t be expressed precisely as decimal fractions but can only be approximated. 4/3 is close to 13/10 (aka 1.3), closer to 133/100 (aka 1.33), closer still to 1333/1000 (aka 1.333), etc. The good news is that when the number you’re approximating is rational, there’s always a pattern to the sequence of digits of ever-better, never-perfect approximations; if you’re patient enough, the pattern of the digits will repeat from some point onward.

So we write 4/3 as 1.3 or 1.333… as part of the decimal game. But when we do this, we’re changing the game; unlike a terminating decimal, which is a shorthand for a fraction whose denominator is a power of ten, a non-terminating decimal is a new kind of thing, and if we don’t say what “1.333…” is supposed to mean, then assertions involving that expression, like “1.333… > 1”, aren’t meaningful. It’s fine to say “the dot-dot-dot stands for infinitely many 3’s,” but that’s just restating, rather than answering, the question of what “1.333…” really means.

I’ll return to the question of meaning later. But for now, if we want to duck the question and just use Principle-of-Permanence magic to figure out what the value of 1.333… *should* be, we could posit that multiplying a non-terminating decimal by ten amounts to shifting the decimal point one place to the right (as is the case for terminating decimals); then *x* = 1.333… implies 10*x* = 13.333…; and if (invoking the Principle of Permanence again) we posit that subtraction for non-terminating decimals works like subtraction for terminating decimals, then subtracting the first equation from the second gives 9*x* = 12.000… = 12, or *x* = 12/9 = 4/3.^{5}
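The shift-and-subtract maneuver works for any repeating decimal, and it packages neatly into a formula: if *x* = *i*.bbb… with a repeating block b of *k* digits, then 10^{k}*x* − *x* = *i*×(10^{k}−1) + b, so *x* = *i* + b/(10^{k}−1). A sketch in Python (the function name is mine):

```python
from fractions import Fraction

def repeating_to_fraction(int_part, block):
    # x = int_part.blockblockblock...
    # shifting by 10^k and subtracting x gives x = int_part + block/(10^k - 1)
    k = len(block)
    return int_part + Fraction(int(block), 10 ** k - 1)

assert repeating_to_fraction(1, "3") == Fraction(4, 3)        # 1.333...
assert repeating_to_fraction(0, "142857") == Fraction(1, 7)   # 0.142857142857...
assert repeating_to_fraction(0, "9") == 1                     # 0.999... (!)
```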

If you provisionally accept that 1.333… = 4/3, then you should accept that 0.333… = 1/3, as is taught in schools everywhere. I wonder: Has any student ever maintained that the fraction 1/3 doesn’t really exist, because you can’t finish writing down its decimal expansion? That would be historically perverse, since fractions predate decimals by thousands of years. It would also be “decimal-chauvinist”, since the fact that 1/5 has a terminating decimal expansion while 1/3 doesn’t is purely a result of the fact that the base we use, namely ten, is divisible by 5 but not 3; on a planet peopled by extra-fingered humanoids who use base twelve, 1/3 would have a terminating duodecimal expansion while 1/5 would not. Would it make sense to say that on our planet, 1/5 exists and 1/3 doesn’t, but that on their planet, 1/3 exists and 1/5 doesn’t?

(This may seem like a strange straw man for me to attack, but as we’ll see in my upcoming essay about real numbers, there are people who say things nearly as silly about the square root of two.)

A different obstacle to understanding 1/3 = 0.3̅ is a common confusion between the *process* represented by an expression and the *value* represented by that expression. As processes, the two sides of the equation are very different. But then again, so are the two sides of the equation 2×4 = 3+5, and even the two expressions 6÷3 and 4÷2 denote different processes. When we write 6÷3 = 4÷2, we aren’t asserting that the two division processes are the same; we’re saying that the two expressions 6÷3 and 4÷2 are different names for the same entity in the realm of natural numbers. Likewise, when we write 3/6 = 2/4, we’re saying that the two expressions are different names for the same entity in the realm of rational numbers. Different-looking fractions can represent the same rational number. It’s in that sense that 1/3 and 0.3̅ are asserted to represent the same rational number.

But now we come to a thornier issue than 1/3 = .333…. If we accept that equation as being both meaningful and true, then (invoking the Principle of Permanence again) we’d expect that we can triple both sides of the equation, obtaining 0.999… = 1, and that seems impossible. The numbers 0.9, 0.99, 0.999, etc. are all less than 1; how could 0.999… suddenly become equal to 1? Many students wonder about this. After all, a valid way to decide which of two terminating decimals is larger is to find the first digit at which they disagree; whichever decimal has the larger digit there is the larger number. Call this the first-discrepancy test. Some intuitive version of the Principle of Permanence makes students think that this first-discrepancy test should apply to non-terminating decimals as well.^{6} And that’s okay! As a teacher, I prefer principled dissidence (“I don’t think .999… is equal to 1”) to muddled conformity (“I guess it means that the difference eventually becomes too small to matter”) or outright indifference (“Who cares?”).

I wrote about the mystery of .999… twice before, in my essays The one about .999… and More about .999…. (Third time’s the charm?)

**DOPPELGANGERS EVERYWHERE**

What if we rejected the consensus and tried to develop an alternative theory of decimals in which 0.999… was actually less than 1, so that the number 1 had an evil twin, a doppelgänger? After all, one of the themes of this blog is that math isn’t about following rules that other people decreed; it’s about following ideas (including ideas you dreamed up yourself) to wherever they lead us, and the only ironclad rule is that you have to accept the consequences of your choices.

So, what are the consequences of .999… < 1? Well, to start with, 1 isn’t the only number with a doppelgänger. We also have 1.999… (2’s evil twin) and 2.999… (3’s evil twin) and so on: infinitely many doppelgängers, one for each counting number!

But it’s worse than that. 1/2 has a doppelgänger too: .4999… And 17/20, which we’d normally write as 0.85, has 0.84999… as a doppelgänger. In fact, every rational number whose denominator is the product of a power of two (1 or 2 or 4 or 8 or …) and a power of five (1 or 5 or 25 or 125 or …) will give rise to a terminating decimal, which in turn will have a doppelgänger that you get by decreasing the last (nonzero) digit by 1 and sticking “999…” afterwards. Those doppelgängers won’t just be infinite in number on the number line as a whole: they’ll be dense. That is, they’ll infinitely infest every tiniest piece of the number line.
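The twin-making recipe is mechanical enough to automate. Here’s a toy version in Python that operates on a decimal written as a string; it assumes the input is a terminating decimal as described above, and `doppelganger` is of course my own coinage, not anybody’s standard function.

```python
def doppelganger(s):
    # decrement the last nonzero digit, turn every later digit into a 9,
    # and append infinitely many more 9s (written "...")
    i = max(j for j, ch in enumerate(s) if ch not in "0.")
    tail = "".join("9" if ch != "." else "." for ch in s[i + 1:])
    out = s[:i] + str(int(s[i]) - 1) + tail
    if "." not in out:
        out += "."
    return out + "999..."

assert doppelganger("0.85") == "0.84999..."   # twin of 17/20
assert doppelganger("0.5") == "0.4999..."     # twin of 1/2
```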

But it’s even worse than that. Peacock’s Principle of Permanence demands that you should be able to express 1.000… minus 0.999… in your system, and the result shouldn’t be 0 (since subtracting two unequal numbers can’t give zero). How will we represent that as a decimal? 0.000…1 perhaps? Likewise, if 0.999… is really less than 1, then there should be a number that’s halfway between 0.999… and 1.000…; would that be 0.999…5? But now we’ve changed the game in a serious way, allowing not just infinitely many digits after the decimal point but also digits that come *after* those infinitely many digits. How can something come after infinity?^{7}

I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical *non sequitur*, can scarcely be improved upon.^{8}

**SOMETHING’S GOTTA GIVE**

We’ve seen that there’s a tension between different intuitions about how non-terminating decimals should behave. If we accept that the first-discrepancy test applies to non-terminating decimals, we’re led to believe that .999… is less than 1. On the other hand, if we accept that 10 times .999… is 9.999… and that 9.999… minus .999… is 9 and that the equation 9*x* = 9 has only the solution *x* = 1, we’re led to believe that .999… is equal to 1.

I do know of a number system in which .999… isn’t equal to 1.000…. It arises semi-naturally from sandpile models; I haven’t written about it, though I once gave a talk about it at a Gathering 4 Gardner convention. One big problem with this number system is that there’s no subtraction or division – just addition and multiplication. Peacock would be most displeased.^{9}

Even if you could find a number system in which .999… < 1.000… that’s better than the one I found – richer in the range of algebraic operations it permits, more endowed with the forms of structural beauty that mathematicians prize – applied mathematicians probably won’t care at all, and even most pure mathematicians would regard your system as a mere curiosity. That’s in part because your system would violate the Archimedean Property: the difference between .999… and 1.000… would be less than 1/10, less than 1/100, less than 1/1000, etc.; or, putting it differently, the difference between .999… and 1 would be so tiny that, no matter how big a power of 10 you multiplied it by, the product would still be less than 1. Losing the Archimedean property in exchange for enforcing the conviction that decimals that *look* different should *be* different would strike most mathematicians as a poor trade.^{10}

There are non-Archimedean number systems, to be sure, and we’ll meet a few in future months. But as we’ll see, the real number system, although equipped with far more numbers than the system of counting numbers or the system of rational numbers, maintains the Archimedean property. So a variant number system that extends the counting numbers but has .999… < 1.000… is going to be asked to show what problems it solves better than the real number system does.

But still, what does “.999…” mean? It’s time to stop ducking the central question.

**OF UBS AND NUBS**

Regardless of what your pre-college teachers told you, your teachers’ teachers (in college math courses) almost certainly told them that .999… is defined through a “limiting process”. That is, “.999…” means “the limit approached by the infinite sequence .9, .99, .999, . . . ” Or perhaps they were told that “.999…” denotes the infinite sum 9/10+9/100+9/1000+…, where infinite sums, upon further discussion, turn out to be defined in terms of limits. What they meant is that the numbers 9/10, 99/100, 999/1000, etc. approach 1 in the limit as the number of 9’s goes to infinity.

But limits are a subtle concept (what does “going to infinity” even mean?), so this explanation sometimes lands on half-understanding ears. Some students, who don’t quite get it but who metacognitively *get* that they don’t get it, will add an additional layer of equivocation: instead of saying “The sequence approaches 1” or “The limit is 1,” they’ll use the mixed locution “The limit approaches 1” (which is kind of like saying “The name of Shakespeare’s last play is called The Tempest”).

Fortunately there’s an alternative way to express what mathematicians mean by .999… without recourse to the limit concept, using the notion of the *least upper bound* of a set of numbers. (Sometimes “least upper bound” is abbreviated as “lub”, though the abbreviation is still pronounced as “least upper bound”.) We cut the number line into two pieces. On the right are the rational numbers like 1 and 3/2 and 17 that are bigger than 9/10 *and* bigger than 99/100 *and* bigger than 999/1000 and so on – that is, rational numbers that are bigger than every fraction of the form (10^{*n*}−1)/10^{*n*} – and on the left are the rational numbers like 0 and 999/1000 and 1/2 and −17 that *aren’t* bigger than all those numbers (note that 999/1000 is bigger than some but not bigger than all, so it goes in the second group). I’ll call the former numbers *ubs* (short for “upper bounds”) and the latter numbers *nubs* (short for “not upper bounds”) – though you shouldn’t call them that because I just made up those words and nobody besides you and me will have a clue what you’re talking about.

Anyway: If a number is an ub, every number to its right on the number line is an ub, while if a number is a nub, every number to its left on the number line is a nub. (That’s going to be a really confusing sentence in the audio version of this essay if I don’t pronounce it carefully, but never mind.) The division of the number line into ubs and nubs is thus an example of what mathematicians call a cut of the rationals into a “right half” and a “left half”.

Now comes the big question: What’s the boundary between the ubs and the nubs? The answer is 1: 1 is bigger than 9/10, 99/100, 999/1000, etc., but if *p*/*q* is any rational number that’s less than 1, there’s a number in the sequence 9/10, 99/100, 999/1000, …that’s bigger than *p*/*q*.^{11} So 1 is the smallest ub. That is, it’s the least upper bound.

And *that* is the way mathematicians make sense of the non-terminating decimal .999… : it’s the smallest number that’s bigger than all the approximations .9, .99, .999, . . .
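If you enjoy seeing such arguments mechanized: given any rational *x* < 1, a few lines of Python will hunt down an *n* for which (10^{*n*}−1)/10^{*n*} already exceeds *x*, certifying that *x* is a nub and hence that 1 really is the least of the ubs. (`nub_witness` is my name, built on my made-up vocabulary above.)

```python
from fractions import Fraction

def nub_witness(x):
    # for rational x < 1, find n with (10**n - 1)/10**n > x,
    # showing x is not an upper bound of .9, .99, .999, ...
    n = 1
    while Fraction(10 ** n - 1, 10 ** n) <= x:
        n += 1
    return n

assert nub_witness(Fraction(1, 2)) == 1        # .9 > 1/2
assert nub_witness(Fraction(999, 1000)) == 4   # .9999 > 999/1000
```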

Perhaps you feel that mathematicians are cheating: choosing the definition that leads them to the conclusion that they were hoping to arrive at. There’s a lot of truth to this. Mathematicians began writing things like 1/2 + 1/4 + 1/8 + … = 1 long before they developed the concept of limits or the concept of a least upper bound. Mathematicians of the 19th century developed these concepts to support the beautiful and powerful work on calculus that Newton and Leibniz and Euler and others had come up with more than a century earlier but which rested on shaky foundations. I admit it: we retconned^{12} the definition of infinite sums to support the conclusions of Newton, Leibniz, and Euler, and then for good measure we went further back in history and retconned non-terminating decimals. In our defense, we weren’t overturning the original intent of the inventors of decimal notation; they were practical people who focused on terminating decimals and weren’t too clear about what .333… meant. So 19th century mathematicians felt free to tell their predecessors “Here’s what you should have meant.”

If you want to develop a rival theory of non-terminating decimals and infinite sums and whatnot, the math police won’t break into your office and confiscate your notebook to stop you. But you’ll definitely have an easier time convincing others of the value of your theory if you’ve got something at least as good as the calculus of Newton and Leibniz as a spin-off. (Time travel, maybe?)

*Thanks to Richard Amster, Jeremy Cote, Sandi Gubin, Ben Orlin, Henri Picciotto, Evan Romer and Glen Whitney.*

**ENDNOTES**

#1. If any of you know of other early appearances of infinite decimals in mathematics, please let me know in the Comments!

#2. On the other hand, once you learn to think of division as the inverse of multiplication and multiplication as the inverse of division, then the problem of finding *a*/*b* divided by *c*/*d* is recast as the problem of solving the equation *x* × (*c*/*d*) = (*a*/*b*). If we multiply both sides by *d*/*c*, then the left side becomes *x* and the right side becomes (*a*/*b*) × (*d*/*c*). So now you can interpret the verse differently: what’s being inverted isn’t the fraction but the operation of division (whose inverse is multiplication). More importantly, you now have a better understanding of why the rule works.

#3. If asked to find a general formula for *a*/*b* ÷ *c*/*d*, the Amnesic Mathematician will proceed as follows: Scale up both fractions by *b*×*d* so that *a*/*b* becomes *a*×*d* and *c*/*d* becomes *c*×*b*; and then perform division on those two counting numbers, obtaining (*a*×*d*)/(*c*×*b*).
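In code (Python’s `fractions` module again; the name `amnesic_divide` is mine), this recipe and the invert-and-multiply rule visibly agree:

```python
from fractions import Fraction

def amnesic_divide(a, b, c, d):
    # (a/b) ÷ (c/d): scale both fractions by b*d, giving the whole
    # numbers a*d and c*b, then divide those
    return Fraction(a * d, c * b)

assert amnesic_divide(1, 2, 1, 3) == Fraction(3, 2)                   # 1/2 ÷ 1/3
assert amnesic_divide(3, 4, 2, 5) == Fraction(3, 4) * Fraction(5, 2)  # invert & multiply
```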

#4. The fraction (*p*+*r*)/(*q*+*s*) is called the *mediant* of the fractions *p*/*q* and *r*/*s*. Sometime soon I’ll write a Mathematical Enchantments essay about the mediant operation and how it relates to other pretty things like the (mis-named) Farey fractions, Ford circles, the Stern-Brocot tree, the Calkin-Wilf process, and the curious fact that, in a strange but mathematically well-defined sense, the average positive rational number is 3/2.

#5. Some students feel that 10 times 1.333… shouldn’t end in infinitely many 3’s; rather it should end in infinitely many 3’s followed by a 0. But what does that even mean?

#6. In some future month, when I introduce the nonstandard reals, you’ll learn about a context in which this intuition actually holds water, although not quite in the way those students imagine.

#7. If you’ve seen infinite ordinals, then you know that there’s a way to make sense of going “beyond infinity” (or at least going beyond the smallest infinity), so you might think that this provides a way to rescue the vision of an arithmetic in which 0.999… is less than 1. But this is starting to look awfully complicated…

#8. Curiously, shortly after ChatGPT gave me this answer, the chat session terminated unexpectedly, and when I started a new session and repeated my question, ChatGPT gave me a more sensible answer; no matter how strongly I prompted it, it wouldn’t repeat its earlier bogus answer. I know ChatGPT is just a predictive language model, but it was hard to avoid the sensation that this predictive language model was ashamed of its earlier response.

#9. This system, which others have discovered before, is just the formal arithmetic of decimal numerals in which carries (called “firings” in the world of sandpiles or “explosions” in James Tanton’s Exploding Dots pedagogy) take place as much as one needs, even out to infinity. If you add 0.999… to itself, with infinitely many firings (from left to right), you get 1.999…, which is also what you get from adding 1.000… to 0.999…. So in this number-system-wannabe, the equation *x* + 0.999… = 1.999… has two solutions, not one, and subtraction isn’t a well-defined operation anymore. Likewise the equation *x* × 0.999… = 0.999… has both 0.999… and 1.000… as solutions, so division isn’t well-defined either. The Principle of Permanence gets violated in a big way. This defective number system often crops up as a sort of way-station when people try to define the real number system in terms of decimal representations, as was done nearly half a century ago by Faltin, Metropolis, Ross, and Rota and was done more recently by Fardin and Li.

#10. For more about infinite decimals and the Archimedean property, you can watch a video of a talk I gave to middle schoolers at MathPath in the summer of 2022, or just check out the slides.

#11. If *p*/*q* is less than 1, *p* must be less than *q*, so *p* is at most *q*−1, so *p*/*q* is at most (*q*−1)/*q*, or 1−1/*q*. But now we can find a counting number *n* for which 10^{n} is bigger than *q*. Then 1/10^{n} is smaller than 1/*q* and 1−1/10^{n} is bigger than 1−1/*q*. That is, (10^{n}−1)/10^{n} is bigger than 1−1/*q*. But (10^{n}−1)/10^{n} is one of the fractions in our sequence 9/10, 99/100, 999/1000, . . . .
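The hunt for a suitable *n* in the argument above can be carried out explicitly (a small sketch; the fraction 313/314 and the function name `nines_beating` are my own inventions for illustration):

```python
from fractions import Fraction

def nines_beating(p, q):
    # Find the smallest n with 10**n > q; then (10**n - 1)/10**n,
    # one of the fractions 9/10, 99/100, 999/1000, ...,
    # exceeds p/q whenever p/q < 1.
    n = 1
    while 10**n <= q:
        n += 1
    return n, Fraction(10**n - 1, 10**n)

p, q = 313, 314                    # a fraction just under 1
n, nines = nines_beating(p, q)
print(n, nines)                    # n = 3, nines = 999/1000
assert Fraction(p, q) < nines < 1
```

No matter how close to 1 the fraction *p*/*q* is, some member of the sequence 9/10, 99/100, 999/1000, … squeezes in above it, which is exactly what the endnote's inequality chain establishes.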

#12. If you’re unfamiliar with the concept of retroactive continuity, check out what Merriam-Webster has to say about it (or, if you dare, visit the TV Tropes page and risk being lured down the TV Tropes rabbit-hole). Someday I’ll write a Mathematical Enchantments essay called “Retroactive Continuity” about how the modern concept of continuity got retconned into the foundations of calculus.

**COMMENTS**

Raul Martinez: Loved your comment “… ChatGPT, the modern apotheosis of unjustified self-confidence.”


jamespropp (post author): I was able to dig up the bogus ChatGPT proof of .999… < 1 after all! Here it is:

"To prove that .999… is less than 1, we can use the fact that .999… is equal to 1 – (1/10) – (1/100) – (1/1000) – ….

If we add (1/10) + (1/100) + (1/1000) + … to both sides of the equation, we get:

.999… = 1 – (1/10) – (1/100) – (1/1000) – … + (1/10) + (1/100) + (1/1000) + …

This simplifies to:

.999… = 1

Thus, it is true that .999… is equal to 1. However, since we are trying to prove that .999… is less than 1, this means that the proof is complete."
