# Reckoning and Reasoning

I don’t like arithmetic, maybe in part because I’m not especially good at it. But what I dislike more than the feeling of doing arithmetic is the fact that so many people think math is nothing but arithmetic.

So let’s start with arithmetic — because if I’m trying to undermine the view of math as a mule train, there’s no better place to start than the place where the tethers feel tightest.

Suppose someone asked you to compute 997 + 998 + 3 + 2. How might you do it? You could use the standard one-size-fits-all procedure for adding up a list of nonnegative integers illustrated below (I’m omitting the cross-outs and carries).

But if the niceness of the final answer leads you to suspect that there’s a slicker way, you’re right: 997 + 998 + 3 + 2 equals (997 + 3) + (998 + 2), which equals 1000 + 1000, which equals 2000. This alternative path will lead you to the right answer because addition satisfies laws: the commutative law, which guarantees that changing the order of the terms doesn’t change the sum (so that 997 + 998 + 3 + 2 equals 997 + 3 + 998 + 2) and the associative law (see Endnote #1), which guarantees that how you group the terms doesn’t change the sum (so that 997 + 3 + 998 + 2 equals (997 + 3) + (998 + 2)).

Unlike human laws, which constrain the behavior of people, number laws constrain the behavior of numbers and thereby free people to solve problems in flexible ways.

What if someone asked you to compute 999 × 997 + 999 × 3?  As before you can use general-purpose multiplication and addition procedures and if you perform them without making any mistakes you’ll get 999,000, but as before there’s a shortcut, this time using the distributive law, hinging on the way 999 appears twice in the problem: 999 × 997 plus 999 × 3 equals 999 × (997+3), which is 999 × 1000, or 999,000.

At this point, you may be thinking that I’m just using contrived examples to show how abstract reasoning can occasionally let you arrive at answers more quickly than blindly performing the one-size-fits-all procedures taught in grade school. Fair point.  A more honest example, one that’s truer to the ways in which the laws of arithmetic get used, is the problem of computing

1 = 1,

1+2 = 3,

1+2+3 = 6,

1+2+3+4 = 10,

et cetera, ad infinitum. But there are infinitely many sums like this, so if you tried to write down all the answers you’d never finish! Instead, I want you to find a general formula so that if (for instance) someone asked you to find 1 + 2 + 3 + … + 100 you’d be able to compute the answer in seconds rather than minutes or hours.

YOUNG CARL: A PROBABLY-NOT-TRUE STORY

Many popular books on mathematics tell the tale of how young Carl Friedrich Gauss, destined to become the pre-eminent mathematician of his day, had a math teacher who asked the class to add the numbers from 1 up to 100. The teacher was shocked when Gauss wrote down the correct answer on his slate seconds later. How did Carl do it? He instantly thought of regrouping the terms as (1+100)+(2+99)+(3+98)+…+(50+51), obtaining 101 × 50, or 5050.

I don’t believe that the story is true (see the Brian Hayes article in the References). But even if we accept it as a fable rather than mathematical history, it’s not a tale I like to tell, since some students may take the teller’s tacit message to be “This shows the difference between someone like Gauss and someone like you.”

The question that interests me is, how might Carl have discovered a shortcut like this (if he did, which he probably didn’t)?

One possibility is that future statistics-pioneer Carl was already thinking about averages; if you ask yourself what the average of the numbers 1 through 100 is, it’s easy to see intuitively that the answer should be about 50, and then, thinking a little harder, you may hit on the idea of dividing the hundred numbers into fifty pairs where the numbers in each pair average to 50.5.  This tells you that the average of all hundred numbers is 50.5, so that the sum is 50.5 times 100, or 5050.
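The averaging idea is easy to replay in code. Here’s a small Python sanity check (the variable names are mine, not part of the story): pair 1 with 100, 2 with 99, and so on, confirm that every pair averages 50.5, and recover the sum as average times count.

```python
# Split 1..100 into fifty pairs, check that each pair averages 50.5,
# and recover the sum as (average) x (how many numbers).
pairs = [(k, 101 - k) for k in range(1, 51)]      # (1,100), (2,99), ..., (50,51)
assert all((a + b) / 2 == 50.5 for a, b in pairs)
total = int(50.5 * 100)
assert total == sum(range(1, 101))
print(total)  # 5050
```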

Or maybe Carl computed a few of the sums 1, 1+2, 1+2+3, 1+2+3+4, … and, looking for patterns (“There’s got to be a trick to this”), divided the successive sums 1, 3, 6, 10 by the successive numbers 1, 2, 3, 4, obtaining the quotients

1/1 = 1.0,

3/2 = 1.5,

6/3 = 2.0,

10/4 = 2.5.

These numbers form an arithmetic progression in which each term exceeds the one before by 1/2, suggesting that 1+2+3+…+n divided by n equals (n+1)/2. Perhaps Carl didn’t prove that the pattern persists but, trusting in an orderly mathematical universe, guessed that 1+2+3+…+100 divided by 100 would follow the pattern and equal 101/2 = 50.5.
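The pattern-hunting above can be checked with exact arithmetic. The sketch below (my own framing, using Python’s `fractions` module to avoid decimal round-off) confirms that dividing each running sum 1+2+…+n by n gives quotients that climb by 1/2, matching the conjectured value (n+1)/2.

```python
# Divide each sum 1 + 2 + ... + n by n and watch the quotients climb by 1/2.
from fractions import Fraction

running = 0
for n in range(1, 7):
    running += n                       # running is now 1 + 2 + ... + n
    q = Fraction(running, n)
    assert q == Fraction(n + 1, 2)     # 1, 3/2, 2, 5/2, 3, 7/2, ...
```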

Or maybe Carl had read a book on classical Greek mathematics. The ancient Greeks called numbers of the form 1+2+3+…+n (the sum of the integers between 1 and n) triangle numbers because of the way we can arrange 1+2+3+…+n dots in a triangular array with n dots on each side.

It’s been known for millennia that 1+2+3+…+n equals n×(n+1)/2. The Greeks proved it by slicing an n-by-(n+1) rectangle (see Endnote #2) into two triangles each containing 1+2+3+…+n dots, as shown below for n=4. Young Carl might have gotten wind of this.

But since we’re in the realm of fable, let’s leave young Carl and talk about young Clara, a fictional classmate of his I just invented. She used a variant of the regrouping trick, rewriting the sum as (1+99)+(2+98)+(3+97)+…+(49+51) with the 100 and the 50 left over for separate treatment. This gave her 4900 plus 100 plus 50, or 5050. But even though Clara got the right answer, she didn’t come to the teacher’s attention, because while Carl was brashly showing off his smarts by handing in his slate early, Clara was using the extra time to ponder a different question that caught her fancy: What’s the sum of the squares of the first hundred positive integers?

Clara added up the first ten perfect squares — that is, the squares of the first ten positive integers — and got 1+4+9+16+25+36+49+64+81+100 = 385. She was trying to find a general formula for the sum of the first n perfect squares, and she even came up with a formula that fit the data (more about that in a minute), but she ran out of time when trying to prove it. Unfortunately she left 385 on her slate instead of 5050 (the answer to the question the teacher had posed), and the teacher didn’t recognize 385 as the right answer to a different question. Consequently Clara’s mathematical talent wasn’t recognized or encouraged, she never pursued mathematics, and she never got into the mathematical history books. I believe that over the past few centuries there have been many “Claras” in many classrooms, boys as well as girls. I made Clara female and quiet and a tiny bit careless — all traits that can lead teachers to overlook talented students.

HOW TO PROVE INFINITELY MANY THINGS AT ONCE

In Endnote #3, I describe how Clara came to guess the formula 1² + 2² + 3² + … + n² = n(n+1)(2n+1)/6. But she couldn’t figure out how to prove it holds for all n. Fictional Clara fits in with a long tradition of empirical research in mathematics; a lot of important work is done by (nonfictional) people who find patterns and make conjectures even if they can’t prove them.

Clara checked that the formula works for small numbers. For instance, with n=10 the left hand side of the formula is 1² + 2² + 3² + … + 10², which equals 385, while the right hand side is (10)(11)(21)/6, which also equals 385. An empirically-minded person, after checking that the formula works for all n between 1 and 10 (say), might declare “Seems true enough” and call it a day. After all, isn’t this how science works? If you observe ten crows, then a hundred, then a thousand, and they’re all black, wouldn’t you conclude that crows are black and move on? (See Endnote #4.) Philosophers of science call this the method of induction. But mathematicians over the millennia haven’t found this to be a reliable guide to the truth, and Clara knew this. A property that’s satisfied by all the numbers you’ve tried can fail to apply to bigger numbers.
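A Clara-style empirical check takes one line of Python per side of the formula. This sketch (my own, not Clara’s slate-work) verifies the conjecture for n from 1 to 10, including the n = 10 case where both sides equal 385.

```python
# Check the sum-of-squares formula for n = 1 through 10.
for n in range(1, 11):
    lhs = sum(k * k for k in range(1, n + 1))   # 1^2 + 2^2 + ... + n^2
    rhs = n * (n + 1) * (2 * n + 1) // 6        # the conjectured formula
    assert lhs == rhs
print(lhs)  # 385, the n = 10 value
```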

One contrast between science and math is that while there are only finitely many crows to look at, there are infinitely many positive integers n to consider. As you look at more and more crows, the percentage of the worldwide crow population that you’ve observed grows from 0 percent toward 100 percent, at least in principle. But no matter how many positive integers n you look at, the percentage stays at 0 percent, because there are only finitely many that you’ve seen and infinitely many that you haven’t seen.

Mathematicians have a way of evading this infinity problem in many cases of interest. It’s called mathematical induction (not to be confused with the other kind, sometimes called Baconian induction). The way one can prove that all the positive integers n satisfy some new law (such as 1²+2²+…+n² = n(n+1)(2n+1)/6) is, (a) first check that the number 1 obeys the law, and (b) then show that the successor of every law-abiding positive integer is law-abiding too (where the successor of n is n+1; see Endnote #5).

(Mathematicians often omit the word “mathematical” when discussing mathematical induction, because we so seldom use Baconian induction, and when we do, we don’t call it induction.)

We can use mathematical induction to prove that the formula

1+2+3+4+…+n = n(n+1)/2

holds for all positive integers n. Let’s call n “lawful” if it satisfies the equation and “unlawful” if it doesn’t. You can check that 1 is lawful: the left hand side of the equation is just a lonely 1, while the right hand side is (1)(2)/2, which equals 1. That takes care of proving (a) (often called the “base case”).

But what about proving (b) (often called the “induction step”)? That is, how do we show that for any lawful number n, the number n+1 is also lawful? Let’s get started. If n is a lawful number, then 1+2+3+…+n equals n(n+1)/2, so adding n+1 to both sides of the equation we learn that

1+2+3+…+n+(n+1) = n(n+1)/2 + (n+1)

The left hand side is encouraging, because it’s the sum of the numbers from 1 up to n+1, which is what we need to find a formula for if we’re going to prove that n+1 is lawful. But the right hand side seems to be giving us bad news, because if n+1 is going to turn out to be lawful, then we’ll need 1+2+3+…+n+(n+1) to satisfy the general formula; that is, we’ll need it to equal what you get when you replace n by n+1 in the expression n(n+1)/2. In short, what we were hoping to prove was

1+2+3+…+n+(n+1) = (n+1)(n+2)/2

But we’re not in trouble at all: you can use algebra to check that n(n+1)/2 + (n+1) and (n+1)(n+2)/2 (the right hand sides of those two equations) are the same number written in two different ways (just double both sides and expand using the distributive law). Since this argument works for every n, we’ve shown that whenever a number is lawful its successor is lawful too. So we’ve taken care of (b). Having shown that 1 is lawful, and having shown that the successor of a lawful number must be lawful as well, we can rely on the principle of mathematical induction to assure us that every positive integer is lawful. That is, the formula works for all n.
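The algebra at the heart of the induction step can be spot-checked by machine. A spot-check is of course not a proof (the proof is the expansion by the distributive law), but this sketch confirms that doubling both n(n+1)/2 + (n+1) and (n+1)(n+2)/2 gives the same integer for the first thousand values of n.

```python
# Doubling both sides of  n(n+1)/2 + (n+1) = (n+1)(n+2)/2  gives an
# identity between integers, checked here for n = 1 .. 1000.
for n in range(1, 1001):
    assert n * (n + 1) + 2 * (n + 1) == (n + 1) * (n + 2)
```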

Wait a second: how did we get around the infinity problem and prove infinitely many equations in finite time? If you examine the proof, you’ll see that in order to show that 1+2+…+n equals n(n+1)/2 for all n, we used the fact that n(n+1)/2 + (n+1) equals (n+1)(n+2)/2 for all n. That’s infinitely many assertions, but that’s okay: the laws of arithmetic and algebra apply to all numbers, and since there are infinitely many numbers, these laws already have infinity baked into them. However, no appeal to the concept of infinity is required.

A variant of mathematical induction is the constant value principle (see Endnote #6). It says that if you’ve got a sequence of numbers, and if every term is equal to the next, then all the terms are equal to the first term. For instance, consider the sequence whose nth term is the difference between 1+2+…+n and n(n+1)/2; we’d like to show that every term of this new sequence is zero (since that’s just a way of saying that 1+2+…+n equals n(n+1)/2 for all n). Now, if you can show that the first term of this new sequence is zero, and you can show that each term of this new sequence equals the next, then the constant value principle lets you conclude that every term equals zero, which is what we wanted to prove.

If the constant value principle reminds you of the principle of mathematical induction, it should! Each of them can be seen as a consequence of the other. In Endnote #6, I use the constant value principle to prove that the formulas 1+2+3+…+n = n(n+1)/2 and 1²+2²+3²+…+n² = n(n+1)(2n+1)/6 both hold for all n.

TWO MORE VARIANTS OF INDUCTION

Another variant of the principle of induction is the least element principle. It asserts that every non-empty set of positive integers contains a least element, that is, an element that is smaller than all the others. The way one uses this to prove that all the positive integers satisfy some law is to consider the “minimal criminal”, that is, to consider what would happen if some number n were the smallest positive integer that broke the law. In the case of our formula 1+2+3+…+n = n(n+1)/2, this means that we assume that n is unlawful while n–1 is lawful. (Note that n can’t be 1, since we’ve checked that 1 is lawful, so n–1 really is a positive integer.) This leads to a contradiction: if n–1 is lawful then 1+2+3+…+(n–1) = (n–1)n/2, and adding n to both sides and doing a little algebra tells us that 1+2+3+…+n = n(n+1)/2 after all, contradicting our assumption that n is unlawful. The only way out of the contradiction is to conclude that there are no “criminals” at all: all positive integers satisfy the law.

Yet another variant is called the principle of infinite descent. Mathematicians in Fermat’s time and the century afterwards liked it (and used it) a lot. This principle says that you can’t find an infinitely long decreasing sequence of positive integers. To forestall a common confusion, let me grant that you can make a decreasing sequence that’s as long as you like; for instance, just start with some huge number and then count backwards, subtracting 1 each time. But that individual sequence is finite; eventually you reach 1 and get stuck (remember, this is a sequence of positive integers). In fact, no matter what big integer N you pick as your first term, you’re already doomed to run out of numbers after a finite number of steps (at most N–1 steps, to be specific). You can’t pick the first term to be infinity, because there’s no such positive integer! So, while there’s no limit on how long a decreasing sequence of positive integers can be, no individual sequence of this kind is itself infinitely long. (The situation is reminiscent of the nature of the set of positive integers itself: the set is infinite, but its individual members are all finite.) There’s a fundamental asymmetry at work here, because it’s very easy to come up with infinitely long increasing sequences of positive integers. The infinite stairway of positive integers has no top but it does have a bottom.

We can use infinite descent to solve another problem related to perfect squares: can a perfect square be twice another perfect square? (We’ll want to know the answer in a later essay.) Suppose a and b are positive integers such that a² = 2b². Let’s look at the last digits of a and b, which determine the last digits of a² and b², respectively:

A perfect square can end only in 0, 1, 4, 5, 6, or 9 (square each of the digits 0 through 9 and look at the last digits), so 2b² must end in a 0, 2, or 8, while a² can’t end in 2 or 8; the only way a² and 2b² can end in the same digit is if they both end in 0. This implies that a and b must both be multiples of 5. That in turn implies that if we define a′ = a/5 and b′ = b/5 we get positive integers a′ and b′ with the property that (a′)² = 2(b′)². Having turned this crank once, we can turn it again, applying to a′ and b′ the same reasoning that we applied to a and b to deduce the existence of even smaller positive integers a″ and b″ satisfying (a″)² = 2(b″)². You see where this is going: we get decreasing sequences a > a′ > a″ > … and b > b′ > b″ > … of positive integers, which is impossible by the principle of infinite descent. So no perfect square can be twice another perfect square.
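Both ingredients of this argument are easy to confirm mechanically. The sketch below (my own) checks the two last-digit facts and then brute-forces the conclusion: among the first hundred thousand values of b, 2b² is never a perfect square.

```python
from math import isqrt

# The two last-digit facts used in the descent argument:
assert {(d * d) % 10 for d in range(10)} == {0, 1, 4, 5, 6, 9}   # squares
assert {(2 * d * d) % 10 for d in range(10)} == {0, 2, 8}        # twice-squares

# Brute-force confirmation: 2*b*b is never itself a perfect square.
for b in range(1, 100_000):
    assert isqrt(2 * b * b) ** 2 != 2 * b * b
```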

Here’s a different way to use infinite descent to show that no perfect square can be twice another perfect square.

If a² is twice b², we can set up an a-by-a square with two overlapping b-by-b squares inside it, one nestled at the upper left and the other nestled at the lower right. The overlap is a square, say a c-by-c square. To the upper right of that c-by-c square is a d-by-d square (say) and to the lower left of the c-by-c square is another d-by-d square. I claim that c² is twice d². To see why, equate the area of the big a-by-a square with the sum of the areas of the pieces (two L-shapes and three squares) into which it has been divided: a² = (b² – c²) + (b² – c²) + d² + c² + d² = 2b² – c² + 2d² = a² – c² + 2d² (that last step uses the property a² = 2b²). The equation a² = a² – c² + 2d² implies that c² = 2d². What’s more, c and d are integers, since a and b are integers and d = a – b and c = b – d. So c and d satisfy the same properties as a and b, and we can keep turning the crank, getting two endless descending sequences a > c > … and b > d > …, which is impossible by the principle of infinite descent.

What you’ve just seen are two proofs of the irrationality of the square root of two, couched as facts about integers; for, if the square root of two were expressible as the rational number a/b, so that (a/b)² = 2, we’d have a²/b² = 2, and a² = 2b². But in the process you’ve also seen a proof that there’s no one “right” way to prove it! For more about these proofs see Endnote #7.

RECURSION

If the methods of proof we’ve just explored remind you of the use of recursion (see Endnote #8) in computer programming, there’s a good reason for it. To see the connection between induction and recursion, let’s look at a puzzle. The mathematician and engineer Solomon Golomb loved puzzles, and some of his best puzzles involved tilings. He studied regions he named polyominos, such as the one shown below.

This particular polyomino is a 4-by-4 square from which one of the sixteen constituent 1-by-1 squares has been removed, leaving a polyomino composed of only 15 squares. This 15-omino has been tiled by five 3-ominos (or trominos) of the same shape as one another; they’re called L-trominos. (A 2-omino, better known as a domino, is just a 1-by-2 or 2-by-1 rectangle.) Golomb’s tromino theorem says that for any n, you can tile the region obtained by removing a single 1-by-1 square (any square you like) from a 2ⁿ-by-2ⁿ square.

We prove Golomb’s theorem by induction on n. First we check the base case (see Endnote #9). Golomb’s claim holds for 2¹-by-2¹ boards since when n=1 the region to be tiled is itself just an L-tromino. Next we do the induction step. Suppose that Golomb’s claim is true for some number n; how can one conclude that Golomb’s claim is true for n+1? To make this task concrete, suppose I draw a 2ⁿ⁺¹-by-2ⁿ⁺¹ square and remove a single 1-by-1 square from it and challenge you to tile that slightly smaller region using L-trominos. What can you do?

Golomb counsels you to divide that big 2ⁿ⁺¹-by-2ⁿ⁺¹ square into four smaller 2ⁿ-by-2ⁿ squares (“divide and conquer”). One of the four is a square that used to contain the 1-by-1 square that you removed; you know that you can tile this region with L-trominos, because we assumed that Golomb’s claim holds for 2ⁿ-by-2ⁿ boards. What about the rest of the big region you’re supposed to tile? Place an L-tromino in the middle of the board containing a 1-by-1 square from each of the other three 2ⁿ-by-2ⁿ squares. Now you’ve got three 2ⁿ-by-2ⁿ squares, each of which is missing a 1-by-1 square in the corner, so once again, Golomb’s claim applies to assure you that those three regions can be tiled by L-trominos. And now you’ve covered the whole region I challenged you to tile. The construction works no matter which of the 1-by-1 squares I removed, so you can always carry out the task. This proves that Golomb’s claim holds for 2ⁿ⁺¹-by-2ⁿ⁺¹ boards, and completes the induction step. Hence the principle of mathematical induction tells us that every positive integer satisfies Golomb’s theorem.

You could use this idea to write a computer program that for any given “Golomb region” (which I’m defining as a square whose side-length is a power of 2 with a single 1-by-1 square removed) would generate a tiling of the region by L-trominos. This procedure recursively calls itself, reducing the problem of tiling a Golomb region to four smaller subproblems involving four smaller Golomb regions, each of which gets reduced in similar fashion, until the procedure finally bottoms out when it reaches the case n=1. I’m not saying it would be an efficient program, but it would always work, and thus would be a computational embodiment of Golomb’s theorem, with its recursive nature mimicking the inductive nature of the proof.
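Here’s a minimal Python sketch of such a recursive tiler; the function names and the grid representation are my own inventions, not Golomb’s. Each tromino’s three cells share a positive id, and the removed cell is marked –1. The recursion mirrors the induction step: place one tromino across the three quadrants that lack the hole, then treat each quadrant as a smaller instance of the same problem.

```python
from itertools import count

def tile(n, hole):
    """Tile a 2**n-by-2**n board, minus the single cell `hole`, with
    L-trominos. Each tromino's three cells share a positive id; the
    hole is marked -1."""
    N = 2 ** n
    grid = [[0] * N for _ in range(N)]
    grid[hole[0]][hole[1]] = -1
    ids = count(1)

    def fill(top, left, size, hr, hc):
        # Tile the size-by-size subboard at (top, left) whose one
        # already-claimed cell is (hr, hc).
        if size == 2:
            # Base case: a 2-by-2 square minus one cell IS an L-tromino.
            tid = next(ids)
            for r in range(top, top + 2):
                for c in range(left, left + 2):
                    if (r, c) != (hr, hc):
                        grid[r][c] = tid
            return
        half = size // 2
        # The four cells surrounding the center, one per quadrant.
        centers = [(top + half - 1, left + half - 1), (top + half - 1, left + half),
                   (top + half, left + half - 1), (top + half, left + half)]
        quadrants = [(top, left), (top, left + half),
                     (top + half, left), (top + half, left + half)]
        hole_q = (hr >= top + half) * 2 + (hc >= left + half)
        # One tromino across the three quadrants that lack the hole.
        tid = next(ids)
        for q in range(4):
            if q != hole_q:
                grid[centers[q][0]][centers[q][1]] = tid
        # Each quadrant now has exactly one claimed cell: recurse.
        for q, (qt, ql) in enumerate(quadrants):
            qr, qc = (hr, hc) if q == hole_q else centers[q]
            fill(qt, ql, half, qr, qc)

    fill(0, 0, N, *hole)
    return grid

board = tile(3, (2, 5))   # an 8-by-8 board with the cell in row 2, column 5 removed
assert all(cell != 0 for row in board for cell in row)
```

For the 8-by-8 board this produces 21 trominos covering the 63 remaining cells; the recursion bottoms out at 2-by-2 blocks, which is just the n=1 base case of the proof.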

Pre-reader Evan Romer went ahead and did just this; for instance, here’s how his program tiled a 16-by-16 square with a particular 1-by-1 square removed.

(My eye sees a heap of 2-by-2, 4-by-4, and 8-by-8 squares occluding one another, but I don’t know if this parsing of the picture has any mathematical significance.)

Another puzzle that can be solved using induction/recursion is the Towers of Hanoi puzzle, but other writers have already discussed this puzzle in many places.

MORE THAN ONE STORY

Principles like mathematical induction show us that reason can take us places that mere reckoning can’t. Reckoning is fine for proving specific propositions about numbers, but when you’re trying to prove infinitely many such propositions at once, reckoning can’t get you there.

It’s interesting that the English word “reckon” has acquired colloquial meanings that go beyond its original application to arithmetic. We say “I reckon that …” to mean “I think that …”, while to “reckon with” something means to take it into account. (The word “account” arose in a numerical context as well.) Lexically, there seems to be an affinity between calculating, reasoning, guessing, and proving. An even more telling lexical blurring occurs with the word “tell”. Bank tellers don’t entertain bank patrons by telling them stories, or by recounting interesting events (though some bank tellers may be raconteurs in private life); they report and enact the results of calculations. Reread this paragraph and note the words “telling”, “recounting”, and “raconteur”; then go to a dictionary and check that all have a numerical origin. (See also Endnote #10.)

Numbers tell stories, for those who listen for them. The Clara in my fable was a number-listener who looked at some numbers, noticed a pattern, and tried to prove it. The numbers aren’t free to do anything they wish, but once we’ve discovered the secret laws they follow and try to prove those laws, we can use our freedom to find paths that make sense to us. Every such path is a kind of story, and everyone who writes down a proof is a story-teller.

Some of the stories mathematicians tell us are about numbers; others are about shapes in our world, or shapes that can’t exist in our world but which we can still reason about. But they’re more like tales than calculations, because when you try to create a tale it’s not always clear what should happen next. And sometimes you don’t create the tale by starting at the beginning and seeing what happens; often you start with the ending and work backwards. Or, even more commonly, you start at both ends and work toward the middle.

Young Carl Friedrich Gauss (the real one) grew up to become a great mathematician who, among his other accomplishments, proved a theorem called the Law of Quadratic Reciprocity (about perfect squares, as it happens). In fact, he proved it half a dozen different ways. If all he’d cared about was knowing that the Law was true, a single proof would have sufficed; but to understand why the Law was true, Gauss needed more than one path to understanding, more than one story. Nowadays there are hundreds of proofs of the law of quadratic reciprocity, each turning on its own fulcrum.

The culture of math is full of stories. Sometimes the stories take place in the world of ideas. Other stories are set in our world, and are about the people who traveled to the world of ideas and brought back new idea-stories. Stories of the second kind sometimes have plot twists, because what people discover when they explore mathematical ideas is not always what they set out to discover.

And yet many people in our world reckon that math is just a harder kind of arithmetic. Go figure.

Thanks to Jeremy Cote, Sandi Gubin, Joe Malkevitch, Evan Romer, and Doron Zeilberger.

ENDNOTES

#1. Nitpickers will notice a few hidden invocations of the associative law here, since for instance 997 + 998 + 3 + 2 by default means ((997 + 998) + 3) + 2. My target audience consists of readers who won’t notice this, but pickier readers are welcome too!

#2. In fact the Greeks had a name for numbers of the form n×(n+1). They called them “heteromecic”; the Romans called them “pronic”, and modern English speakers (when they refer to them at all, which doesn’t happen often) call them “oblong”.

#3. To prepare for deriving Clara’s conjecture, it’ll be helpful to give Clara’s sums names: S₁ = 1, S₂ = 1+4 = 5, S₃ = 1+4+9 = 14, S₄ = 1+4+9+16 = 30, etc. While we’re at it, let’s give names to the triangle numbers: T₁ = 1, T₂ = 1+2 = 3, T₃ = 1+2+3 = 6, T₄ = 1+2+3+4 = 10, etc. Now, remember how I wrote up above that Carl might have guessed the formula for the Tₙ’s by dividing Tₙ by n (to see what the average was) and noticing that the sequence of ratios forms an arithmetic progression? Clara did the same thing, only instead of dividing Sₙ by n (she tried that first but the numbers were unilluminating) she divided Sₙ by Tₙ: 1/1 = 1 = 3/3, 5/3 = 5/3, 14/6 = 7/3, 30/10 = 3 = 9/3, etc. She observed that these ratios form an arithmetic progression, with each ratio exceeding the preceding ratio by 2/3. So she conjectured that Sₙ / Tₙ is always equal to 1/3 + (2/3)n, or (2n+1)/3. That is, she conjectured that Sₙ equals (2n+1)/3 times n(n+1)/2, or n(n+1)(2n+1)/6.
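Clara’s ratio trick can be replayed with exact fractions. This sketch (mine, not Clara’s) confirms that Sₙ/Tₙ really does equal (2n+1)/3 for the first several n.

```python
# Replay Clara's ratios S_n / T_n with exact arithmetic.
from fractions import Fraction

S = T = 0
for n in range(1, 8):
    S += n * n                         # S_n = 1^2 + 2^2 + ... + n^2
    T += n                             # T_n = 1 + 2 + ... + n
    assert Fraction(S, T) == Fraction(2 * n + 1, 3)   # 3/3, 5/3, 7/3, ...
```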

Some pre-readers found this part of the Clara story unconvincing, and I don’t blame them: isn’t clever Clara, with her brilliant trick for divining a pattern, just a variant of the Gauss archetype? To make the story more realistic and morale-boosting, we’d have to depict Clara as trying half a dozen other approaches before hitting on the idea of dividing Sn by Tn. I recall that Gauss himself once claimed that many of his results in number theory arose from “systematic messing around” (which sounds more respectable in the original German). Or was it Euler, or someone else? I’d be grateful if one of my readers could provide the quotation I’m thinking of!

#4. It’s worth noting that albino crows are a thing.

#5. This principle is often explained with a metaphor involving a line of dominos. If domino 2 has been placed so that when domino 1 falls it will cause domino 2 to topple, and domino 3 has been placed so that when domino 2 falls it will cause domino 3 to topple, and so on, then toppling domino 1 will cause domino 2 to fall, which in turn will cause domino 3 to fall, and so on, ad infinitum. It’s a great metaphor, but it sometimes causes confusion because of the role that the passage of time plays. We shouldn’t think that when the nth domino topples, the number n “decides” to become law-abiding. Properties of numbers don’t change over time. It might be better to think that when the nth domino topples, we as mathematical reasoners become aware that the number n is law-abiding. But this can be misleading too, because it would take an infinite amount of time for all the dominos to fall (and we don’t want to wait that long before concluding that all the numbers are law-abiding). Maybe we should imagine that the dominos are falling faster and faster … but no, this is taking the metaphor beyond the point of usefulness.

#6. The constant value principle is nonstandard, to the point where I had to assign it a name myself, but it’s latent in the way computers prove formulas like Clara’s. It’s easy to prove the constant value principle by induction, and you can also prove the principle of induction using the constant value principle. Here’s how a computer might use the constant value principle to prove the formula 1 + 2 + 3 + … + n = n(n+1)/2. If we define Aₙ = 1 + 2 + 3 + … + n and Bₙ = n(n+1)/2, then for all n we have Aₙ – Aₙ₋₁ = n and Bₙ – Bₙ₋₁ = n, so for all n we have (Aₙ – Bₙ) – (Aₙ₋₁ – Bₙ₋₁) = (Aₙ – Aₙ₋₁) – (Bₙ – Bₙ₋₁) = n – n = 0. This implies (via the constant value principle) that for all n, Aₙ – Bₙ equals A₁ – B₁, which is 0. So Aₙ = Bₙ for all n, which is what we wanted to prove.

Likewise, to prove Clara’s conjecture, let Aₙ = 1² + 2² + … + n² and Bₙ = n(n+1)(2n+1)/6. The proof is similar, with two wrinkles: Aₙ – Aₙ₋₁ is now not n but n², and Bₙ – Bₙ₋₁ is also n² but it takes some algebra to see why. Here, I’ll help: n(n+1)(2n+1) = 2n³+3n²+n and (n–1)n(2n–1) = 2n³–3n²+n, so n(n+1)(2n+1) – (n–1)n(2n–1) = 6n², and dividing by 6 gives n².
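The algebra in that last step can be spot-checked by machine (a check over a finite range, not a proof):

```python
# Spot-check:  n(n+1)(2n+1) - (n-1)n(2n-1) = 6n^2  for n = 1 .. 1000.
for n in range(1, 1001):
    assert n * (n + 1) * (2 * n + 1) - (n - 1) * n * (2 * n - 1) == 6 * n * n
```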

For more about computer verification of formulas, see the book A=B by Petkovšek, Wilf and Zeilberger. Wilf and Zeilberger’s work can be viewed in part as a way of importing Baconian induction into mathematics by building supporting infrastructure that addresses the question “How many confirming instances of a proposition of this type do we need to gather until we can safely conclude that the proposition is universally true?” For instance, the Wilf-Zeilberger machine tells you that if you can find just four confirming instances of Clara’s conjecture then it’s true for all n. (If you want to know what’s inside the Wilf-Zeilberger machine, the answer is: lots of mathematical induction and linear algebra, among other things.)

There’s also a way to prove the formula geometrically, analogous to the way one can divide an n by n+1 rectangle into two triangles; see the nice video https://twitter.com/74wtungsteno/status/1424611824767578113?s=21

#7. I came up with the first proof myself and thought it was original, since it didn’t appear on an English-language web-page listing proofs of the irrationality of the square root of two, but Vincent Pantaloni informs me that it’s common in French textbooks. This makes me wonder what else I’ve missed by reading books and articles written in English and not ones written in French, German, Russian, Hindi, etc.

The second proof is due to Stanley Tennenbaum.

#8. If you’ve never seen recursion, consider the definition that says “Factorial(1) is 1, and for n bigger than 1, Factorial(n) is Factorial(n–1) times n”. This looks like a vicious circle, because the function Factorial() is defined in terms of itself, but it’s more of a helix than a circle. To compute Factorial(3) for instance, one must first compute Factorial(2), and to do that, one must compute Factorial(1). But Factorial(1) is not defined recursively; it’s defined to be 1. So now one can finish computing Factorial(2): it’s 1 times 2, or 2. And now that one has finished the subtask of computing Factorial(2), one can go back to the original task of computing Factorial(3): it’s 2 times 3, or 6. So even though the definition refers to itself, we don’t get an infinite regress; we get a finite regress, which is fine.
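The helix can be made visible in code. This little sketch (my own) prints the descent into subtasks and the climb back out, using the base case and recursive case from the definition above.

```python
# A factorial that narrates its own "finite regress": it descends from
# Factorial(3) to Factorial(1), then climbs back up multiplying as it goes.
def factorial(n, depth=0):
    print("  " * depth + f"computing Factorial({n})")
    result = 1 if n == 1 else n * factorial(n - 1, depth + 1)
    print("  " * depth + f"Factorial({n}) = {result}")
    return result

factorial(3)   # bottoms out at Factorial(1), then returns 2, then 6
```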

#9. The claim is true for n=0 as well, though that’s a bit mind-bendy to think about: When you remove a 1-by-1 square from a 1-by-1 square, what you’re left with is the empty region, which you might think cannot be tiled by L-trominos, but in fact can be tiled by using no L-trominos at all! See my essay “The Null Salad”.

#10. Even as the meaning of the verb “tell” has shifted over time, the noun “arithmetic” has acquired a specialized meaning in mathematics, mostly known only to mathematicians. If you don’t believe me, take a look at J.-P. Serre’s book “A Course in Arithmetic”, which begins like this:

Needless to say, when non-mathematicians say they suppose math is “just arithmetic”, this is not the sort of thing they have in mind.

REFERENCES

Brian Hayes, “Gauss’s Day of Reckoning”. American Scientist, vol. 94, no. 3, May-June 2006. Addenda can be found in the online document “Versions of the Gauss Schoolroom Anecdote”.

Marko Petkovšek, Herbert S. Wilf, and Doron Zeilberger, A=B. A K Peters, 1996.

## 3 thoughts on “Reckoning and Reasoning”

1. jamespropp Post author

Tom Edgar (@TedG) tweets: “An alternate visual proof of sum of squares due to Gardner and Kalman (independently): youtu.be/-tJhH_k2LaM relevant for endnote 6 :)”
