Unlimited Powers

Today’s mathematical journey will go from India to Europe and back, starting with Madhava of Sangamagrama’s invention of infinite series and culminating in Srinivasa Ramanujan’s discovery of the most potent piece of mathematical clickbait of all time: the outrageous assertion that 1 + 2 + 3 + 4 + … is equal to −1/12.

TWO TRIGONOMETRIES

Like many people, I learned two trigonometries, a few years apart. The first was about triangles and it had pictures like this:

The angle θ was seldom more than 90 degrees and never more than 180 degrees (because what kind of triangle has an angle bigger than 180 degrees?).

The second trigonometry was about circles and it had pictures like this:

Now θ was allowed to go all the way from 0 degrees to 360 degrees and beyond, as the black point went round and round counterclockwise. This trigonometry wasn’t about trigons (“three-cornered things”); it should have been called cyclometry, since it was about measurements of circles, and it was by extension a tool for understanding all processes that go round and round, including less overtly geometrical ones like the lengthening and shortening of days in the circular parade of seasons. This expansive version of trigonometry helps us understand many processes that go up and down, as long as they, like carousel horses, go up and down in a predictable and repetitive fashion:

This graph could show the rises and falls of two such horses but it’s actually a graph of the sine function superimposed with a graph of its twin, the cosine function.

Stop for a minute to wonder: how are such pictures made?1 We’ve come to take graphing calculators for granted, forgetting how much work is involved with plotting a function like y = sin x. Even computing the sine of x for a single value of x to, say, six digits is a computational feat all on its own, one that would have taxed the powers of Archimedes. We know that an electronic calculator doesn’t have a tiny person inside it, drawing and measuring triangles or circles. So what does happen inside your calculator when you press the “sin x” button?

The answer came five centuries before the question, and it came from the Kerala province of India.2

MADHAVA OF SANGAMAGRAMA (1350 – 1425)

Madhava of Sangamagrama was one of the greatest mathematicians3 of the 14th century, yet we know little of his life. Indeed, none of his writings (if writings there were) survive; we know of his pioneering contributions only through commentaries written by his successors, who constituted the Kerala school of astronomy and mathematics.

Madhava discovered that the sine and cosine functions, which had been invented in something close to their modern form by the Indian mathematician Aryabhata a thousand years earlier, can be expressed as “polynomials” with infinitely many terms. Here (translated into modern notation) are Madhava’s formulas4:

sin x = x − x^3/6 + x^5/120 − x^7/5040 + …

cos x = 1 − x^2/2 + x^4/24 − x^6/720 + …

Notice that as one reads each formula from left to right the exponent of x is increasing instead of decreasing, which makes sense because you can count up from zero but you can’t count down from infinity. These formulas look less random if we employ factorial notation, using “n!” (pronounced “n factorial”) to signify the product of the counting numbers from 1 to n. Then we can write

sin x = x^1/1! − x^3/3! + x^5/5! − x^7/7! + …

cos x = x^0/0! − x^2/2! + x^4/4! − x^6/6! + …

Here I’ve written the first term in the right hand side of the first equation as x^1/1!, and the first term in the right hand side of the second equation as x^0/0!, to better bring out the pattern that rules Madhava’s twin formulas.5
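
If you want to see these formulas in action, here is a minimal Python sketch (with x in radians, as endnote 4 requires) that adds up the first several terms and compares the results with the library’s built-in sine and cosine:

    from math import factorial, sin, cos

    def madhava_sin(x, terms=10):
        # x^1/1! - x^3/3! + x^5/5! - ... : odd powers, alternating signs
        return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1) for k in range(terms))

    def madhava_cos(x, terms=10):
        # x^0/0! - x^2/2! + x^4/4! - ... : even powers, alternating signs
        return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(terms))

    x = 1.0  # one radian, about 57.3 degrees
    print(madhava_sin(x), sin(x))  # both print 0.8414709848...
    print(madhava_cos(x), cos(x))  # both print 0.5403023058...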

One amazing feature of these formulas is that if you choose a value of x that’s far from zero, then each of Madhava’s two sums, after engaging in youthful excesses in which it trespasses far out into the rightward and leftward expanses of the real line, settles down to become a sober resident of the vicinity of 0, never again venturing to the right of +1 or to the left of −1. To see this dramatic phenomenon in action, suppose x is 10π radians, corresponding to an angle of 1800 degrees (five full turns). The terms of Madhava’s sum for the sine of 10π are initially medium-sized, starting with 31.4159, but quickly get large and oscillate wildly, jumping back and forth with alternating positive and negative steps of increasing size; the twelfth term of the sum is about negative one trillion, and the term after that is about positive two trillion. Yet eventually the terms start to get small, and astonishingly, the running sum of the terms gets closer and closer to zero. The positive and negative terms, no two alike and some of them upwards of a trillion, cancel out perfectly in the limit – as indeed Madhava’s formula says they must, since the sine of five full turns is precisely zero.
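
You can watch this settling-down happen numerically. Here is a rough Python sketch; I say “rough” because ordinary floating-point arithmetic can’t cancel trillion-sized terms perfectly, so the final running sum only gets close to zero rather than reaching it exactly:

    from math import factorial, pi

    x = 10 * pi  # five full turns
    running_sum = 0.0
    for k in range(60):
        term = (-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
        running_sum += term
        print(k + 1, term, running_sum)
    # the 12th term is about -1.1 trillion and the 13th about +1.9 trillion,
    # yet by the 60th term the terms are negligible and the running sum has
    # settled down near 0 (round-off is all that keeps it from being exactly 0)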

Sums like Madhava’s, involving unlimited powers of the independent variable x, are now called power series, and, like polynomials, they can be added and multiplied. Say we’ve got two power series

a_0 x^0 + a_1 x^1 + a_2 x^2 + …

and

b_0 x^0 + b_1 x^1 + b_2 x^2 + …

Then their sum is

(a_0 + b_0) x^0 + (a_1 + b_1) x^1 + (a_2 + b_2) x^2 + …

and their product6 is

(a_0 b_0) x^0 + (a_0 b_1 + a_1 b_0) x^1 + (a_0 b_2 + a_1 b_1 + a_2 b_0) x^2 + …
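
If you think of a power series as the list of its coefficients a_0, a_1, a_2, …, both rules are easy to carry out mechanically. Here is a minimal Python sketch, truncating everything after a fixed number of coefficients:

    def add_series(a, b):
        # coefficient-by-coefficient sum: a0+b0, a1+b1, a2+b2, ...
        return [ai + bi for ai, bi in zip(a, b)]

    def multiply_series(a, b):
        # the coefficient of x^k in the product is a0*bk + a1*b(k-1) + ... + ak*b0
        n = min(len(a), len(b))
        return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

    # for example: (1 + x) times (1 + x) is 1 + 2x + x^2, truncated to 4 coefficients
    print(multiply_series([1, 1, 0, 0], [1, 1, 0, 0]))  # [1, 2, 1, 0]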

For a fun surprise, use power series multiplication to multiply

sin x = x^1/1! − x^3/3! + x^5/5! − x^7/7! + …

by itself, obtaining the first few terms of the power series representation of (sin x)^2. In a similar way find the power series representation of (cos x)^2. Now add those two power series. What do you notice? Should you have been surprised?7
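
If you’d rather let a computer do the bookkeeping for that puzzle, here is a sketch that reuses the add_series and multiply_series routines from the sketch above, storing the coefficients as exact fractions so the cancellation is exact:

    from fractions import Fraction
    from math import factorial

    N = 10  # how many coefficients to keep
    sin_coeffs = [Fraction((-1) ** ((n - 1) // 2), factorial(n)) if n % 2 == 1 else Fraction(0)
                  for n in range(N)]
    cos_coeffs = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0 else Fraction(0)
                  for n in range(N)]

    sum_of_squares = add_series(multiply_series(sin_coeffs, sin_coeffs),
                                multiply_series(cos_coeffs, cos_coeffs))
    print(sum_of_squares)  # constant term 1, every other coefficient 0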

HOW MADHAVA (PROBABLY) DID IT

We don’t know exactly how Madhava came up with these formulas. But we have some intelligent guesses based on what his successors wrote. It’s likely that Madhava’s chief insight was essentially an application of differential calculus, by way of pondering the question “How do the sine and cosine of x change if you make a small change in x?”

What Madhava probably noticed by scrutinizing trig tables and then probably proved8 is that when you make a small change in x, the resulting change in the sine of x is proportional to the cosine of x, while the change in the cosine of x is proportional to the sine of x but with a minus sign in front.

This mutuality between sine and cosine – the way changes in the sine function are governed by the cosine function, and vice versa – might seem like a vicious circle (pun intended), but in fact this two-way relationship turned out to be not a cul-de-sac but a mighty rotary engine that enabled Madhava to churn out the respective power series for sine and cosine, with each turn of the crank spitting out a new coefficient.9 It doesn’t take anything away from Newton and Leibniz to acknowledge the Kerala mathematicians’ huge accomplishment: they came up with the idea of differentiating a function before anyone else did, and they applied the process in reverse to unify algebra and trigonometry.

For more on power series, read Steven Strogatz’s excellent article “How Infinite Series Reveal the Unity of Mathematics”. If you’re specifically interested in understanding the hidden geometric meaning lurking in the individual terms of Madhava’s formulas, check out Mathemaniac’s awesome video “The geometric interpretation of sin x = x − x^3/3! + x^5/5! − …”. A more traditional approach can be seen in Gary Rubinstein’s presentation “Madhava sine series derivation”. An overview of the Kerala school and its accomplishments appears in the chapter on Madhava in Ian Stewart’s 2017 book Significant Figures. And if you want to learn from a mathematical historian what we know and what we don’t know about the Kerala school’s version of calculus, see Victor Katz’s article “Ideas of Calculus in Islam and India”.

So, what did Newton and Leibniz know that Madhava and his successors didn’t? That’s a question for someone who knows more mathematical history than I do. (If you’re such a person, please post to the Comments!) But I’d wager that one thing Madhava didn’t know is that his ideas could be applied to so many problems in science, such as optics (think of Pierre Fermat’s work on refraction) or time-keeping (think of Christiaan Huygens’s work on pendulums), or for that matter astronomy: how the Indian astronomer would have loved Newtonian celestial mechanics! But Madhava and his successors discovered the major ideas of calculus, applied them to trigonometry, and then didn’t apply their revolutionary methods to anything else. It’s as if the ancient Siberians who crossed the Beringia land bridge twenty thousand years ago had invented bicycles to make the mass migration go more smoothly, and then, having safely arrived in Alaska (“Whew!”), put all the bikes in their closets and forgot about them, not realizing that bicycles might have other uses.10

ISAAC NEWTON (1643 – 1727)

The binomial theorem that we teach in pre-calculus courses, expressing (x + y)^n in expanded form as a sum of n+1 terms, goes back at least to Bhaskara II’s 12th century treatise Līlāvatī, and versions that give less explicit descriptions of the coefficients go back several centuries further.

What Isaac Newton did around 1665, and wrote up a few years later in a privately distributed monograph “On Analysis by Equations Unlimited in their Number of Terms”11, was introduce a version that was at once more specialized (one of the summands is equal to 1) and much more general (the exponent can be any rational number). Here’s Newton’s binomial theorem:

(1 + x)^r = 1 + r x + r(r−1)/2! x^2 + r(r−1)(r−2)/3! x^3 + …

with infinitely many terms on the right side of the equation. To get a sense of its power (pun intended), plug in r = 1/2, and recall (see the section from my essay “Denominators and Doppelgängers” called “The Principle of Permanence”) that the one-halfth power of a number is just a fancy name for the square root of that number:

√(1 + x) = (1 + x)^(1/2) = 1 + x/2 − x^2/8 + x^3/16 − 5x^4/128 + …

(If you enjoyed squaring Madhava’s power series for the sine and cosine of x, try squaring Newton’s power series for the square root of 1+x. What do you get?) Plugging x = 1 into the formula, we get an equation whose right hand side is an infinite series whose partial sums get closer and closer to the square root of 2. And if instead of the square root of 2 you want the cube root of 2, Newton’s your guy: just replace the exponent 1/2 by the exponent 1/3 in his binomial theorem.
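
Here is a small Python sketch of Newton’s series; the coefficient of x^k is r(r−1)(r−2)⋯(r−k+1)/k!, which the code builds up one factor at a time. With r = 1/2 and x = 1 the partial sums creep (slowly!) toward the square root of 2:

    from fractions import Fraction

    def binomial_partial_sum(r, x, terms):
        # 1 + r*x + r(r-1)/2! * x^2 + r(r-1)(r-2)/3! * x^3 + ...
        total = Fraction(0)
        coeff = Fraction(1)  # the generalized binomial coefficient C(r, k), starting at C(r, 0) = 1
        for k in range(terms):
            total += coeff * Fraction(x) ** k
            coeff = coeff * (r - k) / (k + 1)  # C(r, k+1) = C(r, k) * (r - k) / (k + 1)
        return total

    r = Fraction(1, 2)
    for n in (5, 20, 80):
        print(n, float(binomial_partial_sum(r, 1, n)))  # inching toward 1.41421356...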

But these infinite series approximations to the square root and cube root of 2 don’t converge as quickly as we might hope, and that defect turns out to be a symptom of a much bigger problem: as soon as x becomes larger than 1, even an eensy-weensy bit larger, Newton’s infinite series for (1 + x)^r fails catastrophically. The terms just get bigger and bigger, and the approximation gets worse and worse, forever! It’s like what we saw for Madhava’s way of computing the sine of 10π, but without the “and then they grew up and settled down” ending.12 For more about Newton’s work on the binomial power series expansion (and the problem that led him to discover it), see another excellent Steven Strogatz article: “How Isaac Newton Discovered the Binomial Power Series”.

Newton’s formula never works if x is bigger than 1 or less than −1, and to get a feeling for why it fails when x strays too far from 0, it helps to look at the special case r = −1. Newton tells us that

(1 + x)^(−1) = 1 − x + x^2 − x^3 + …

or in other words

1/(1 + x) = 1 − x + x^2 − x^3 + …

Replacing x by −x and swapping the two sides of the equation we get

1 + x + x^2 + x^3 + … = 1/(1 − x)

This is a very old formula; the sum on the left is called an infinite geometric series (you’ve probably seen it with x replaced by r). Zeno of Elea appears to have known it in the case x = 1/2, and Archimedes used the case x = 1/4 in his computation of the area enclosed by a parabola and a straight line (as described in Strogatz’s book). This formula works for all x satisfying −1<x<1, but it fails when x=1 and when x=−1. (The partial sums in those two cases should remind you of the two rejectees at Club Cantor in last month’s essay: the partial sums of 1−1+1−1+… oscillate between 1 and 0, while the partial sums of 1+1+1+1+… just count up to infinity.) We call such infinite sums divergent, meaning that they fail to converge to a limit.
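
Here is a tiny Python sketch of the three behaviors: inside the interval the partial sums home in on 1/(1−x), while at the two endpoints they oscillate or march off to infinity.

    def geometric_partial_sums(x, n):
        # partial sums of 1 + x + x^2 + ... + x^(n-1)
        sums, total = [], 0
        for k in range(n):
            total += x ** k
            sums.append(total)
        return sums

    print(geometric_partial_sums(0.5, 10))  # 1, 1.5, 1.75, ... approaching 1/(1 - 0.5) = 2
    print(geometric_partial_sums(-1, 6))    # 1, 0, 1, 0, 1, 0: never settles down
    print(geometric_partial_sums(1, 6))     # 1, 2, 3, 4, 5, 6: counting up to infinity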

Things are even worse when x is greater than 1 or less than −1: consider 1+2+4+8+… and 1+(−2)+4+(−8)+…. The formula for the sum of an infinite geometric series, blindly applied with no attention to the fine print on the warning label (“Use only for x between −1 and 1”), suavely assures us that

1 + 2 + 4 + 8 + … = 1/(1 − 2) = −1

and

1 + (−2) + 4 + (−8) + … = 1/(1 − (−2)) = 1/3

The former sum features infinitely many positive numbers whose total is a negative number; the latter sum features infinitely many whole numbers whose total is a fraction.

Total nonsense! Or is it?

LEONHARD EULER (1707 – 1783)

Leonhard Euler, the greatest mathematician of the 18th century, was more tolerant of nonsense than your typical 21st century mathematician. The one-eyed genius had a many-eyed view of the mathematics of his day, including both exponential functions and trigonometric functions, and he’d devised a way to unify those two subjects via his iconic formula e^(ix) = cos x + i sin x. He knew all too well that many of his contemporaries were uncomfortable with complex numbers, but he also knew how Italian algebraists of an earlier century had managed to find real solutions to equations by taking detours through the complex numbers. He hoped that divergent series, as intermediate stages in calculations, could analogously serve as useful fictions, provided one could find the right rules for manipulating them.13 For that matter, Newton had entertained a similar point of view; he regarded the polynomials 1 + x, (1 + x)^2, (1 + x)^3, etc. as constituting a generalization of the powers of the number eleven, and correspondingly viewed power series as generalizations of the number concept; just as non-terminating decimals fit in naturally with terminating decimals to form the real number system, he imagined power series and polynomials together as a new kind of number system – the kind that you can do calculus with.

Drawing by Ben “Math With Bad Drawings” Orlin. Visit https://mathwithbaddrawings.com

The key caveat in the previous paragraph is “provided one could find the right rules for manipulating them”, and the key verb in that caveat is “manipulate”. The verb that’s missing is “interpret”. Euler didn’t look at a sum like 1+2+4+8+… and ask himself “What might that mean?” Instead, he asked, “What value does it want to have?” He worked hard at divining the secret desires of divergent series. In one case, he considered the extremely badly-behaved infinite series 0! − 1! + 2! − 3! + … from four different points of view, starting from the false assumption that the series converges (it doesn’t) and seeing if his bag of numerical tricks would let him estimate the value (it did). He showed that all four of his approaches led to approximately 0.6. For him, the confluence of the different methods indicated that he’d found the “correct” way to assign a value to this prima facie valueless expression.
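
For the curious: one standard modern way to land on that same value (a Borel-style integral, and I should stress that this route is my own illustration, not necessarily one of Euler’s four) is to replace each n! by the integral of t^n e^(−t) from 0 to infinity and then sum the resulting geometric series under the integral sign, which turns 0! − 1! + 2! − 3! + … into the perfectly convergent integral of e^(−t)/(1+t). A few lines of Python estimate it:

    from math import exp

    # midpoint-rule estimate of the integral of e^(-t)/(1+t) for t from 0 to 60
    # (the tail beyond 60 is smaller than e^(-60), far below the accuracy we need)
    dt = 1e-3
    total = 0.0
    for i in range(60000):
        t = (i + 0.5) * dt
        total += exp(-t) / (1 + t) * dt
    print(total)  # about 0.5963..., Euler's "approximately 0.6"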

In his 1760 publication De seriebus divergentibus, Euler practiced a kind of extension of the Principle of Permanence. A typical illustration of his philosophy can be seen in the context of the formula 1 + x + x^2 + x^3 + … = 1/(1−x). Since the formula holds for lots of values of x (specifically all numbers x satisfying |x| < 1), isn’t it sensible to guess that the formula applies in some (possibly arcane) sense to all values of x? So for instance with x = −1 one would get 1 + (−1) + 1 + (−1) + … = 1/2, though I would maintain that here “=” means something closer to “wants to equal”.14

We can apply Euler’s approach to series of other kinds, not just geometric series. For instance, let’s square both sides of the formula 1 + x + x^2 + x^3 + … = 1/(1−x). On the left we get 1 + 2x + 3x^2 + 4x^3 + … and on the right we get 1/(1−x)^2. (If you prefer, you can take Newton’s binomial theorem with exponent −2 and derive the same result.) Plugging in x = −1 we find that 1−2+3−4+… wants to equal 1/4. So, even though this infinite series diverges, there’s a sense in which you can associate the number 1/4 with it.
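
Here is a quick numerical sketch of that “wants to equal” behavior: the series 1 + 2x + 3x^2 + 4x^3 + … genuinely converges whenever −1 < x < 1, so we can evaluate it just inside the boundary and watch what happens as x creeps toward −1.

    def euler_series(x, terms=100000):
        # partial sum of 1 + 2x + 3x^2 + 4x^3 + ... (convergent for |x| < 1)
        return sum((n + 1) * x ** n for n in range(terms))

    for x in (-0.9, -0.99, -0.999):
        print(x, euler_series(x))  # 0.2770..., 0.2525..., 0.2502...: heading for 1/4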

I promised I’d tell you about the seemingly similar sum 1+2+3+4+…, and I will, but I can’t yet, because Euler’s brand of wizardry doesn’t tell us how to associate a finite value with this expression. If you plug x = 1 into the equation 1 + 2x + 3x^2 + 4x^3 + … = 1/(1−x)^2, you get 1+2+3+4+… = 1/(0)^2, which is still infinite (or undefined). The wizard of Berlin15 was able to give finite values to 1−1+1−1+… and 1+2+4+8+… and 1−2+4−8+… and even 1−2+3−4+… but he wasn’t able to give 1+2+3+4+… a finite value of its own.

Before we meet a different wizard with a very different approach to 1+2+3+4+…, I pause to propose a puzzle: can you figure out, in the spirit of Euler, what the sum of the Fibonacci numbers wants to be? The Fibonacci sequence goes 1, 2, 3, 5, 8, 13, …, with each new term being the sum of the two before. Euler’s approach (codified later by the mathematician Niels Abel, and now known as Abel summation) would be to find an algebraic expression for the power series 1 + 2x + 3x^2 + 5x^3 + 8x^4 + 13x^5 + …, say of the form p(x)/q(x) for suitable polynomials p(x) and q(x), and then plug in x = 1. I’ll give you a big hint: try q(x) = 1 − x − x^2.16

SRINIVASA RAMANUJAN (1887 – 1920)

I’ve already told you the story of Ramanujan in a couple of my essays, so if you need a reminder (or if you’ve never heard his story), check out “Sri Ramanujan and the secrets of Lakshmi” and “The Man Who Knew Infinity: what the film will teach you (and what it won’t)”. Just over 110 years ago, on the 27th of February in the year 1913, Ramanujan wrote to his English colleague and future collaborator G. H. Hardy, saying:

“Dear Sir, I am very much gratified on perusing your letter of the 8th February 1913. I was expecting a reply from you similar to the one which a Mathematics Professor at London wrote asking me to study carefully Bromwich’s Infinite Series and not fall into the pitfalls of divergent series. … I told him that the sum of an infinite number of terms of the series 1+2+3+4+… = −1/12 under my theory. If I tell you this you will at once point out to me the lunatic asylum as my goal.”

Much as Euler had non-rigorously approached 0! − 1! + 2! − 3! + … in four different ways, Ramanujan proposed two different ways to assign a value to 1 + 2 + 3 + 4 + … and both approaches led to the value −1/12. Hardy was astonished. This unknown amateur mathematician in Madras was doing mathematics in the freewheeling style of an Euler, deriving results that went far beyond what Euler (and indeed beyond what Hardy and his contemporaries) had done!

This is not to say that Ramanujan was ignorant of Euler’s work; indeed, his approach to divergent series arose from a piece of mathematics (amazing in its own way) called the Euler-Maclaurin formula. But Ramanujan took this formula to places where no one had taken it before.

Hardy built a bridge between Ramanujan’s mathematics and mainstream number theory a few years later with his long-time collaborator J. E. Littlewood, when they laid the foundations of what has come to be called zeta-function regularization of divergent series. The roots of the method lie in other work of Euler, who had studied 1 + (1/2)^s + (1/3)^s + (1/4)^s + … for various integer values of s, memorably proving that when s = 2, the series sums to π^2/6. Bernhard Riemann later showed that allowing values of s in the complex plane yielded a function whose intricacies were intertwined with profound mysteries about prime numbers. Riemann’s zeta function remains an object of deep fascination, and the most celebrated open problem in mathematics, the Riemann Hypothesis, centers on it.

One often sees the zeta function defined by the formula ζ(s) = 1^(−s) + 2^(−s) + 3^(−s) + …, but the right-hand side defines ζ(s) only for complex values of s whose real part exceeds 1. However, there is a natural way to extend the domain of definition of ζ by way of a conceptual procedure called analytic continuation. Analytic continuation is sort of a super-powered version of Euler’s principle of domain-extension. Euler knew that there’s one and only one algebraic function that has the same value as the series 1 + x + x^2 + … for all x between −1 and 1, namely, the algebraic function 1/(1−x). In an analogous but more difficult way, Riemann showed that there’s one and only one (jargon alert!) “meromorphic function”17 that assigns to each complex number s ≠ 1 a complex number ζ(s) subject to the requirement that ζ(s) = 1^(−s) + 2^(−s) + 3^(−s) + … for all s with real part exceeding 1.

Euler had shown that ζ(2) = π^2/6. Riemann showed (among many other things) that ζ(0) = −1/2 and ζ(−1) = −1/12,18 and the latter gave Hardy a way to understand what Ramanujan’s manipulations were doing. Just as Euler was led to equate 1 + 2 + 4 + … with −1 by plugging the off-label value x=2 into the formula 1 + x + x^2 + … = 1/(1−x) (a formula valid only for x between −1 and 1), Hardy corroborated Ramanujan’s “lunacy” by plugging the illicit value s=−1 into the formula 1^(−s) + 2^(−s) + 3^(−s) + … = ζ(s) (a formula valid only for s with real part exceeding 1). With s=−1 the left hand side becomes 1^1 + 2^1 + 3^1 + … which doesn’t converge to any real number, but the right hand side evaluates to −1/12, which convinced Hardy that the divergent series wants to converge to −1/12.
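
If you’d like to poke at the continued function yourself without doing any contour integrals, the third-party Python library mpmath (an outside tool, not anything of Hardy’s or Ramanujan’s) has a zeta routine that computes the analytically continued values:

    from mpmath import mp, zeta, pi

    mp.dps = 15                  # work with 15 significant digits
    print(zeta(2), pi ** 2 / 6)  # 1.64493406684823 both ways (Euler's result)
    print(zeta(0))               # -0.5
    print(zeta(-1))              # -0.0833333333333333, i.e. -1/12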

TRUTH IN LABELLING

Both Euler’s approach to 1+2+3+4+… and Ramanujan’s approach start from the idea that you shouldn’t think about that particular numerical expression in isolation; you should think of it as the value taken on by some function f(x) for some particular value of x. But which function? And which particular value? There’s the rub. Euler treated 1+2+3+4+… as the value associated with the function 1 + 2x + 3x^2 + 4x^3 + … at x = 1, and (after algebraic extrapolation) got the answer ∞. Ramanujan and Hardy and Littlewood treated 1+2+3+4+… as the value associated with the function 1^x + 2^x + 3^x + 4^x + … at x = 1, and (after analytic continuation) got the answer −1/12.

More precisely, Hardy and Littlewood did that, inspired by Ramanujan’s work. Ramanujan himself didn’t mention the zeta function; he just did some manipulations of the series. But which manipulations are we allowed to do? There’s the (other) rub. There are ways to manipulate Ramanujan’s sum so as to lead to different conclusions than Ramanujan’s. For instance, there’s a way to “prove”19 that 1+2+3+4+… is 0, and there are infinitely many ways to “prove” that 1+2+3+4+… is −1/8. There are good reasons to prefer Ramanujan’s manipulations to these, but anyone who shows you Ramanujan’s derivation without explaining why his way of juggling symbols is profound and the others are mere curiosities isn’t telling you the whole story.

I think it’s misleading to say that 1+2+3+4+… equals −1/12. It would be better to say something more like “the divergent series 1+2+3+4+… is associated with the value −1/12” or “the zeta-regularized value of the series 1+2+3+4+… is −1/12” or “The Ramanujan constant of the series 1+2+3+4+… is −1/12.” Phrasing the result this way isn’t as catchy as asserting equality, but it’s more honest (while at the same time more respectable-sounding than “1+2+3+4+… wants to be −1/12”).

If you’re still inclined to buy the formula 1+2+3+4+… = −1/12, then it’s my duty to point out to you something else that you’re about to buy as part of the same deal. Remember ζ(0)? I told you earlier that it equals −1/2. So, taking the formula

1 + 2 + 3 + 4 + … = −1/12

and subtracting the equally valid formula

1 + 1 + 1 + 1 + … = −1/2

from it term by term, we get

0 + 1 + 2 + 3 + … = −1/12 − (−1/2) = 5/12

But the left hand side of that last equation is Ramanujan’s 1+2+3+4+… with a 0 stuck at the front! What kind of number x has the property that 0+x is different from x? George Peacock, the originator of the Principle of Permanence, would have had a heart attack, and even Euler would have balked. Forget Kansas, Toto – we’re not even in Oz anymore!

I don’t want to come across as too critical of the irresponsible boggle-mongers who share tawdry mathematical factoids ripped from their proper contexts … even though I guess I did just call them “irresponsible boggle-mongers”. I mean, I know there are weirdness-junkies who want to have their minds blown by math and science on a regular basis, and there need to be dealers who provide those users with their fix. All I’m asking for is more truth in labelling, so that the people who are tripping on mathematical psychedelicacies don’t mistake what they’re consuming for actual food (just as responsible purveyors of hallucinogens try to keep their clients away from windows that could be mistaken for doors).

Heck, to show you that there are no hard feelings here, I’ll purvey some mathematical psychedelia of my own. Pssst, hey kid: didja know that the product of all the primes equals 4π^2? No lie … except for the word “equals”.

Thanks to Jeremy Cote, Sandi Gubin, Joseph Malkevitch, Cris Moore, Henri Picciotto, Burkard Polster, Tzula Propp, and Glen Whitney.

This essay is a draft of chapter 7 of a book I’m writing, tentatively called “What Can Numbers Be?: The Further, Stranger Adventures of Plus and Times”. If you think this sounds cool and want to help me make the book better, check out http://jamespropp.org/readers.pdf. And as always, feel free to submit comments on this essay at the Mathematical Enchantments WordPress site!

ENDNOTES

#1. If you want to generate a sine curve, you can roll your own: see the how-to guide Cut a Sine Wave with One Straight Cut. (Don’t flatten the roll when you cut it; that’ll give you a sawtooth wave instead of a sine wave!) For extra credit, make two cuts in the cardboard tube, one at each end, so that you get two sine waves. Can you arrange the cuts so that the two sine waves are in phase but have different amplitudes? Can you arrange the cuts so that the two sine waves have the same amplitude but are 90 degrees out of phase? In principle you could “slice” one end of the cardboard using a hyperbolic paraboloid, obtaining a sine wave with twice the frequency of the sine wave obtained by slicing with a plane, but I have no idea how to do that in practice.

#2. Calculators don’t actually approximate trig functions using power series. As we’ll see shortly, when x gets too far from 0, the power series for sin x and cos x have enormous terms that wreak havoc with numerical approximations. But figuring out the power series expansions of sine and cosine was the first step that led to the internal machinations that you trigger in your calculator when you push the “sin x” and “cos x” button.

#3. In calling Madhava a “mathematician” I’m already guilty of imposing a 21st century perspective on an earlier era. Disciplines weren’t as compartmentalized then as they are now. He might well have considered astronomy to be his chief interest and mathematics a sideline.

#4. Actually, I’m lying, but just a little bit. If I’d been scrupulously honest, you’d see some nuisance-factors cluttering up these equations; those nuisance-factors are powers of a single conversion ratio relating angles to lengths, and they disappear if you measure angles in radians rather than more familiar units like degrees. To keep my formulas uncluttered, I’ll use radians and omit the nuisance-factors.

#5. To see why it makes sense to define x^0 as 1, notice that for all n > 0, x^n equals x^(n+1) divided by x, at least when x is nonzero; to support the Principle of Permanence (discussed in my January essay), it makes sense to define x^0 as x^1/x, i.e. 1, so that the equality will hold in that case as well. Likewise, for all n > 0, n! equals (n+1)! divided by n+1, so it makes sense to define 0! as 1!/1, i.e. 1. There are better reasons than the Principle of Permanence for defining x^0 and 0! as we do, reasons that hinge on what powers and factorials mean – but it’s good to know that even when meaning is elusive, a sensitivity to pattern can steer us aright.

#6. The rule for the product of two power series might seem confusing at first, but it’s just like the rule for multiplying polynomials: when you have one sum times another sum and want to expand it as a sum of products rather than a product of sums, each individual term in the first sum must be multiplied by each individual term in the second sum. In particular, for all nonnegative integers i and j, we must multiply a_i x^i by b_j x^j, obtaining a_i b_j x^i x^j, or a_i b_j x^(i+j). For instance, we get the terms a_0 b_2 x^2 and a_1 b_1 x^2 and a_2 b_0 x^2 which we can gather together as (a_0 b_2 + a_1 b_1 + a_2 b_0) x^2.

#7. A basic formula of trig that follows from the definitions of sine and cosine and from the Pythagorean theorem is that (sin x)^2 + (cos x)^2 = 1; so it’s not exactly earth-shattering news that when we add the squares of the power series representations of sine and cosine, obtaining x^2 − x^4/3 + 2x^6/45 − … and 1 − x^2 + x^4/3 − 2x^6/45 + … respectively, massive cancellation occurs and we get 1 + 0x^2 + 0x^4 + …. But it’s surprising to me that we can derive this deeply geometrical fact about sine and cosine using just algebra. Evidently those power series have lots of geometry hiding inside them!

#8. Using the trig formula sin(x+y) = (sin x) (cos y) + (cos x) (sin y) we can show that sin(x+d) − sin(xd) = (2 sin d) (cos x); that is, as an angle increases from xd to x+d, the sine of the angle increases by (2 sin d)(cos x). Here 2 sin d is the constant of proportionality, but its precise value is of secondary importance; what’s crucial is that the change in the value of the sine function is proportional to the cosine of x, and has the same sign. Likewise, as an angle increases from xd to x+d, the cosine of the angle increases by (−2 sin d) sin x; the change is proportional to the sine of x, and has opposite sign.
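
For anyone who wants to see those two relations with concrete numbers, here is a quick numerical check in Python (any angle x and any small increment d will do):

    from math import sin, cos

    x, d = 0.7, 0.001
    print(sin(x + d) - sin(x - d), 2 * sin(d) * cos(x))   # same value: change in sine is (2 sin d)(cos x)
    print(cos(x + d) - cos(x - d), -2 * sin(d) * sin(x))  # same value: change in cosine is -(2 sin d)(sin x)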

#9. Here’s an historically inaccurate and incompletely rigorous but undeniably slick way to derive those power series a la Madhava, expressed in the language of modern calculus. Since cos x is an even function of x (that is, cos(−x) = cos(x) for all x) its power series has only even exponents of x:

cos x = a_0 + a_2 x^2 + a_4 x^4 + ….

Similarly, sin x is an odd function of x (that is, sin(−x) = − sin(x) for all x) so its power series has only odd exponents of x:

sin x = b_1 x + b_3 x^3 + b_5 x^5 + ….

Taking derivatives term by term we see that the derivative of cos x is

d(cos x)/dx = 2a_2 x^1 + 4a_4 x^3 + …

and that the derivative of sin x is 

d(sin x)/dx = b_1 + 3b_3 x^2 + 5b_5 x^4 + ….

Equating cos x with the derivative of sin x term by term we get a_0 = b_1, a_2 = 3b_3, a_4 = 5b_5, …. Similarly equating sin x with the negative of the derivative of cos x, we get b_1 = −2a_2, b_3 = −4a_4, …. And we have one more equation as our linchpin: a_0 = 1, corresponding to the fact that the cosine of 0 is 1. Turning these equations around and putting them all together, we get b_1 = a_0 = 1, a_2 = −b_1/2 = −1/2, b_3 = a_2/3 = −1/6, a_4 = −b_3/4 = 1/24, b_5 = a_4/5 = 1/120, …
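
Here is the crank-turning expressed as a short Python sketch: it starts from a_0 = 1 and applies the two recurrences above over and over, spitting out the coefficients of the sine and cosine series as exact fractions.

    from fractions import Fraction

    a = {0: Fraction(1)}  # cosine coefficients a0, a2, a4, ...; a0 = cos 0 = 1
    b = {}                # sine coefficients b1, b3, b5, ...

    for n in range(0, 10, 2):
        b[n + 1] = a[n] / (n + 1)       # from equating cos x with the derivative of sin x
        a[n + 2] = -b[n + 1] / (n + 2)  # from equating sin x with minus the derivative of cos x

    print(b)  # b1 = 1, b3 = -1/6, b5 = 1/120, ... : the sine coefficients
    print(a)  # a0 = 1, a2 = -1/2, a4 = 1/24, ... : the cosine coefficients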

#10. I recently read about a radiocarbon/macrofossil/microfossil/DNA study that challenges the standard Land Bridge Theory of how North America was populated; it seems that one particular long stretch of the Beringian causeway was too barren to have calorically sustained so many humans on foot. But I prefer to think that this study provides support for Bicycle Theory.

#11. For a more detailed version of this story, read chapter 7 of the book by Strogatz.

#12. Although we can’t get approximations to the square root of 3 by simply plugging x = 2 into Newton’s formula, we can still exploit the formula if we’re clever. Specifically, by setting x = −1/4 in Newton’s formula we can estimate the square root of 3/4, and then we can double our approximation of the square root of 3/4 to get an approximation of the square root of 3. Such tricks are built into what your calculator does when you press the square-root button, and they can also be useful for doing mental math. If you want to estimate the square root of some number n, and you happen to know a perfect square m^2 near n, then write n as m^2 times 1+x (where x can be positive or negative as long as it’s close to 0); then the square root of n equals the square root of m^2(1 + x), which equals m times the square root of 1+x, which according to Newton’s binomial power series is about m times 1 + x/2, since x is close to 0. So if n is (say) 10% bigger than m^2, the square root of n will be about 5% bigger than m.
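
Here is that mental-math trick as a few lines of Python (rough_sqrt is just a name I made up for the estimate):

    from math import sqrt

    def rough_sqrt(n, m):
        # write n = m^2 * (1 + x) and use sqrt(1 + x) ~ 1 + x/2 for x near 0
        x = n / (m * m) - 1
        return m * (1 + x / 2)

    print(rough_sqrt(10, 3), sqrt(10))     # 3.1666... versus the true 3.1622...
    print(rough_sqrt(110, 10), sqrt(110))  # 10.5 versus the true 10.488... (the "10% bigger" example)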

#13. Was Euler’s hope fulfilled? I am tempted to say “no”, but it might be more prudent to say that the jury is still out. Mathematicians don’t have much use for divergent series these days, but physicists seem to have a higher tolerance for them. String theorists actively like them. (Of course they do.)

#14. One problem with allowing 1 + −1 + 1 + −1 + … into math is that it collides with the associative property. To see the issue, put parentheses into this sum in two different ways, first as (1+−1)+(1+−1)+… and then as 1+(−1+1)+(−1+1)+… The former way of putting in parentheses gives us the compacted sum 0 + 0 + … while the latter way of putting in parentheses gives us the compacted sum 1 + 0 + 0 + …. The first compacted sum evaluates to 0, while the second sum evaluates to 1, so we’re in trouble! Euler’s predecessor Guido Grandi noticed this paradox, though he found it consistent with prevailing ideas in religion and cosmology: “By putting parentheses into the expression 1−1+1−1+… in different ways, I can, if I want, obtain 0 or 1. But then the idea of the creation ex nihilo is perfectly plausible.” It should be noted, however, that from a purely mathematical perspective Grandi, like Euler, favored the value 1/2.

#15. Euler is often linked with his hometown, Basel, but he spent much of his career in St. Petersburg, and he resided in Berlin when he wrote his treatise on divergent series.

#16. Multiplying 1 + 2x + 3x^2 + 5x^3 + 8x^4 + 13x^5 + … by 1 − x − x^2, we get massive cancellation, leaving 1 + x + 0x^2 + 0x^3 + 0x^4 + …. So our “Fibonacci power series” is associated with the function (1+x)/(1 − x − x^2), and plugging in x = 1 yields −2 … though I don’t know what it means to say “The sum of the Fibonacci numbers is −2”.
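
You can watch the massive cancellation happen mechanically; here is a quick Python sketch that multiplies a truncated version of the Fibonacci power series by 1 − x − x^2:

    def fibonacci_coefficients(n):
        # 1, 2, 3, 5, 8, 13, ... (the convention used in this essay)
        coeffs = [1, 2]
        while len(coeffs) < n:
            coeffs.append(coeffs[-1] + coeffs[-2])
        return coeffs[:n]

    def truncated_product(a, b):
        # Cauchy product, keeping only as many coefficients as a has
        return [sum(a[i] * b[k - i] for i in range(k + 1) if k - i < len(b))
                for k in range(len(a))]

    print(truncated_product(fibonacci_coefficients(10), [1, -1, -1]))
    # [1, 1, 0, 0, 0, 0, 0, 0, 0, 0] -- that is, 1 + x, as claimed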

#17. I won’t attempt to define “meromorphic”, but I’ll tell you what makes meromorphic functions magical. You know how each cell in your body (with the exception of a few in your naughty bits) contains your complete genome, and hence would allow someone to create an exact clone of you, at least in principle? Meromorphic functions are a lot like that. If you know the value of a meromorphic function f(z) for all complex numbers z in some tiny disk D in the complex plane, you can in principle reconstruct the value of f(z) for all values of z. It doesn’t matter how tiny the disk D is; if f is meromorphic, the way f behaves on D uniquely determines the way f behaves on the whole complex plane.

#18. Here I’m switching from explaining things that I understand from my own knowledge to reporting things that I know only second-hand; I’ve never read the proofs that ζ(0) = −1/2 and ζ(−1) = −1/12. If you want to delve more deeply into these matters, the Wikipedia pages on 1+1+1+1+… and 1+2+3+4+… listed at the end of the References would be a good place to start.

#19. If S = 1+2+3+… then 2S = 1+1+2+2+3+3+… = 1+(1+2)+(2+3)+… = 1+3+5+… = (1+2+3+…) − (2+4+6+…) = S − 2S = −S, and 2S = −S implies S = 0.

REFERENCES

E. Barbeau, Euler Subdues a Very Obstreperous Series, The American Mathematical Monthly, May, 1979, Vol. 86, No. 5, pp. 356–372.

BriTheMathGuy, I Found Out What Infinity Factorial Is.

E. Muñoz García, R. Pérez Marco, The Product Over All Primes is 4π^2, Communications in Mathematical Physics, volume 277, pages 69–81 (2008).

Victor J. Katz, Ideas of Calculus in Islam and India, Mathematics Magazine, Vol. 68, No. 3 (Jun., 1995), pp. 163–174. http://www.jstor.org/stable/2691411

MathOverflow, Do Abel summation and zeta summation always coincide?

Mathologer, Numberphile v. Math: the truth about 1 + 2 + 3 + … = −1/12

Numberphile, ASTOUNDING: 1+2+3+4+5+… = −1/12.

Ian Stewart, Significant Figures: The Lives and Work of Great Mathematicians, Basic Books, 2017.

Steven Strogatz, Infinite Powers: How Calculus Reveals the Secrets of the Universe, Mariner Books, 2019.

Wikipedia, Madhava series.

Wikipedia, 1+1+1+1+….

Wikipedia, 1+2+3+4+….
