Going Negative, part 2

Last month, when I gave some ideas about how to justify the law of signs, my focus was on the kind of explanation that works when kids first encounter negative numbers. But in a way I wasn’t being 100% honest, and my use of some farfetched examples (like the balloon-stealing clown) was a tip-off. I think that the real justifications of the law of signs — not the most pedagogically appropriate ones, but the most historically honest ones — come from the body of material the students will encounter later in their studies, long after they’ve learned, enthusiastically or reluctantly, to calculate products in the standard way. These are justifications teachers seldom talk about with their students, but I think they matter.

So this month I’ll talk about those other rationales, and try to resolve any remaining qualms you may have about the law of signs that stem from a sense of symmetry-violation. I’ll also discuss the option of chucking negative numbers entirely. (Seems extreme, but as a parent of two children, I can sympathize with this way of brokering the conflict between yes-it’s-negative and no-it’s-positive. “You know what, kids? If you can’t agree on a restaurant, we won’t go out to dinner at all.”)

The law of signs is, at bottom, a human convention, and the reason mathematicians accept it is that it’s useful.  Specifically:

  • In algebra, we teach students the distributive law.  It’s a versatile law because it’s so general: we don’t need to know anything about the numbers a,b,c,d to know that (a+b)×(c+d) is going to be equal to ac+ad+bc+bd.  But if we defined multiplication of negative numbers in the deviant way (defining ab to be negative when a and b are both negative, and to have the same sign as a×b otherwise) the distributive law wouldn’t apply universally.  (In fact, turning this around, you can use ((1)+(−1))×((1)+(−1))=0×0=0 and the distributive law to prove that −1 times −1 equals 1, as in Endnote #1.)
  • When we teach analytic geometry, we show students that the graph of y = mx+b is always a straight line, and that the graph of y = ax²+bx+c (with a nonzero) is always a parabola.  Straight lines, parabolas: these are useful things! But if we use deviant multiplication ❎ instead of normal multiplication ×, we find that the graph of y = (−2)❎x + 3 is a broken line rather than a straight one, while the graph of y = x❎x + (−4)❎x is not a parabola but a chimera obtained by grafting together parts of two parabolas. If you know a sport where balls follow that kind of curve instead of a parabola, let me know!
  • When we teach calculus, students learn that \int_{0}^{-1} -1 \ dx = +1.  This is (−1)×(−1)=+1 in fancy clothes.
  • When we teach complex numbers, students learn that e^{i \theta_1} \times e^{i \theta_2} = e^{i (\theta_1 + \theta_2)}.  This formula, with its beautiful connection to rotation-angles, is an extension of the law of signs.  In fact, just put \theta_1 = \theta_2 = \pi and you get the law of signs. Suddenly we get a new choreography for the operation that sends a particle from each place on the number line to its twin-location on the opposite side of 0: instead of all the particles rushing toward their destinations and colliding at the origin, they gracefully swing 180 degrees around the origin. (For a fun class activity involving this kind of choreography, see the video on Jasmine Ma’s webpage, and check out Henri Picciotto’s writeup given in the References.)
  • In number theory, sign becomes malleable in a new way: −1 and +9, which have opposite signs as ordinary integers, are the same in mod-10 arithmetic.  So −1 times −1 mod 10 is the same as 9 times 9 mod 10, which is 81 mod 10, which is +1 mod 10.
  • In vector algebra, our students learn how geometrical concepts like the angle between two vectors or the area of a parallelogram are encoded algebraically by the dot-product and cross-product operations.  The relevant formulas apply to all vectors in 2 or 3 dimensions, regardless of what quadrant or octant they lie in.  But the dot-product and cross-product are defined in terms of multiplication of ordinary numbers, and the pleasing correspondence between vector algebra and ordinary geometry wouldn’t work for all vectors if we adopted a deviant law of signs.

In all these situations, the standard operation × for multiplying negative numbers plays an essential role, while the nonstandard operation ❎ is of no discernible use.
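The first two bullet points can be checked by brute computation. Here’s a quick sketch (the function name `devmul` is my own stand-in for ❎): it confirms that the deviant rule breaks the distributive law in the a=c=1, b=d=−1 example, and that the graph of y = (−2)❎x + 3 is a broken line, bending at the origin.

```python
def devmul(a, b):
    """Deviant multiplication ❎: negative when both factors are
    negative, otherwise the same as the ordinary product."""
    if a < 0 and b < 0:
        return -(a * b)   # flip the sign of the ordinary product
    return a * b

# Ordinary × satisfies the distributive law with a = c = 1, b = d = -1:
lhs = (1 + -1) * (1 + -1)                        # 0 × 0 = 0
rhs = 1*1 + 1*(-1) + (-1)*1 + (-1)*(-1)          # 1 - 1 - 1 + 1 = 0
print(lhs == rhs)   # True

# Deviant ❎ breaks it: the four-term expansion gives 1 - 1 - 1 - 1 = -2.
lhs = devmul(1 + -1, 1 + -1)                     # still 0
rhs = devmul(1, 1) + devmul(1, -1) + devmul(-1, 1) + devmul(-1, -1)
print(lhs == rhs)   # False

# The graph of y = (-2) ❎ x + 3: slope -2 right of 0, slope +2 left of 0,
# so sampling at x = -2, -1, 0, 1, 2 gives a tent shape, not a line.
print([devmul(-2, x) + 3 for x in (-2, -1, 0, 1, 2)])  # [-1, 1, 3, 1, -1]
```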

The formula (−1) × (−1) = +1 is woven through the mathematics curriculum in all kinds of ways.  These threads are usually hidden from students’ view, since when we teachers cover topics like analytic geometry or vectors, our focus isn’t on the arithmetic of negative numbers.  But pulling those threads to the surface here enables us to see a kind of helical coherence in the curriculum, where concepts introduced in one turn of the helix play a supporting role in higher turns.  And, historically, all these ways in which the standard definition of multiplication gives satisfyingly simple formulas for things people care about account for the success of the definition over the course of time.

We should keep in mind that the order in which we teach kids about negative numbers and algebra is historically backwards: the rules for dealing with negative numbers grew out of algebra and proved useful for applications, but we only teach algebra after we’ve taught kids how to multiply signed numbers, and kids don’t always get to see the applications. No wonder some kids get confused!


Despite all the reasons for accepting the rule of signs, one can experience feelings of disquiet grounded in a sense of symmetry.  What’s to my right may be to another person’s left. “Shouldn’t math display symmetry between right and left?”

Here’s the answer: The breaking of bilateral symmetry resides in the fact that when he gave Europe the number line, John Wallis chose (emphasis on the word “chose”) to have positive numbers appear on the right of the origin and to have negative numbers appear to the left.  It makes just as much sense to use the reverse convention.  If Wallis had gone the other way, we’d have “left times left equals left” and “right times right equals left”.

“Okay,” you might reply, “let me put my question differently: Shouldn’t math display symmetry between plus and minus? After all, ‘positive’ and ‘negative’ are just value judgments; what’s positive to me may be negative to you, and vice versa. If positive times positive equals positive, shouldn’t negative times negative equal negative?”

Answer: We already gave up on symmetry when we allowed minus-times-plus and plus-times-minus to be minus.  In fact, +1 has a property that −1 doesn’t: multiplying any number x by +1 always gives us the number x we started with, no matter what x is, but −1 cannot make the same claim (for instance, multiplying +1 by −1 does not give +1).  If you want a symmetrical number system in which plus-times-plus equals plus and minus-times-minus equals minus, you’d better set plus-times-minus and minus-times-plus equal to zero — a pretty choice, but as far as I know a useless one.
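Both halves of that answer can be tested concretely. Here’s a sketch (the name `symmul` is mine) of the fully symmetric rule described above: it really is symmetric under flipping every sign, but the distributive law collapses, which is one way of seeing why it’s useless.

```python
def symmul(a, b):
    """Hypothetical fully symmetric sign rule: plus-times-plus is plus,
    minus-times-minus is minus, and mixed-sign products are set to zero."""
    if a == 0 or b == 0:
        return 0
    if (a > 0) != (b > 0):   # plus-times-minus and minus-times-plus
        return 0
    if a > 0:                # plus-times-plus equals plus
        return a * b
    return -(a * b)          # minus-times-minus equals minus

# The rule is symmetric: negating both inputs negates the output.
print(all(symmul(-a, -b) == -symmul(a, b)
          for a in range(-3, 4) for b in range(-3, 4)))      # True

# But the distributive law fails: 2 x (3 + (-3)) vs 2x3 + 2x(-3).
print(symmul(2, 3 + (-3)), symmul(2, 3) + symmul(2, -3))    # 0 6
```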

Maybe what some students want is a way of describing multiplication geometrically that doesn’t treat minus-times-minus as a special case. Recall what Brahmagupta wrote: “The product of a negative and a positive is negative, of two negatives positive, and of positives positive; the product of zero and a negative, of zero and a positive, or of two zeros is zero.” Is there a uniform way to define multiplication of all real numbers that makes Brahmagupta’s rules a consequence rather than a definition?

Here’s a geometrical definition of a×b that does the job.  To multiply two real numbers a and b, set up two axes at right angles, and mark the points (1,0) and (a,0) on the horizontal axis and (0,b) on the vertical axis. Now draw a line L through (1,0) and (0,b); if you then draw a line through (a,0) that’s parallel to L, it’ll cross the vertical axis at some point (0,c). We define a×b to be precisely that value c. Figure 3 applies this definition to geometrically compute (±1)×(±2).


Figure 3. Brahmagupta’s rules, derived geometrically.

This uniform definition works for all choices of a and b, whether they’re positive, negative, or indeed zero (if you regard every line as being “parallel to itself”).
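In coordinates, the construction is easy to check numerically: the line L through (1,0) and (0,b) has slope −b, and the parallel line through (a,0) meets the vertical axis at height ab. A sketch (`geom_mul` is my name for it):

```python
def geom_mul(a, b):
    """Multiply a and b by the parallel-line construction.
    The line L through (1, 0) and (0, b) has slope (b - 0)/(0 - 1) = -b.
    The parallel line through (a, 0) is y = -b*(x - a); setting x = 0
    gives the height c at which it crosses the vertical axis."""
    slope_of_L = (b - 0) / (0 - 1)
    c = slope_of_L * (0 - a)
    return c

# The four products (±1) x (±2), each coming out with the correct sign:
for a in (1, -1):
    for b in (2, -2):
        print(a, "times", b, "=", geom_mul(a, b))
```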

A final point about symmetry: We can make a diagram that shows how ordinary multiplication behaves for positive and negative numbers, using Cartesian geometry; the horizontal and vertical axes correspond to quantities x and y, and each quadrant is marked with the sign of the product x × y. If you rotate the picture by 180 degrees, it’s unchanged. The corresponding diagram for x ❎ y has less symmetry.


Figure 5: Laws of signs for × and ❎: which is more symmetrical?


If you’re unhappy with the law of signs, you’re in good company, as long as you don’t mind the company of dead people.  I already mentioned the Italian algebraist Cardano, who wasn’t sure what to make of minus-times-minus; in a later edition of his Ars Magna, he added an appendix in which he entertained the possibility that perhaps −1 × −1 was not +1 but maybe −1 after all.  Some of Cardano’s successors took this unease to such heights that they felt the need to bar negative numbers entirely. The Englishman William Frend attempted to recast the mathematics of his day in entirely “positive” terms, avoiding zero and negative numbers by paying scrupulous attention to every subtraction process and dividing the analysis into cases as required.  We see the precedent for this in Cardano’s treatment of quadratic equations; where the modern algebraist considers a single form ax² + bx + c = 0, Cardano’s unease led him to divide the study of these equations into three cases:  ax² = bx + c,  ax² + bx = c,  and ax² + c = bx, the last of which splits into subcases according to whether b² is greater than, equal to, or less than 4ac.  Cardano was similarly impelled to study separately no fewer than thirteen different forms of cubic equations, where modern algebraists get away with the single form ax³ + bx² + cx + d = 0. This economy led Cardano’s successors to accept negative numbers as a mathematician’s friends, but Frend, more than two centuries later, would have no traffic with such unnatural beings.

It’s easy to mock Frend, but in the interest of fairness we should acknowledge that his reservations about negative numbers were based on more than just misgivings about the law of signs; the use of negative numbers by Frend’s contemporaries went hand-in-hand with the use of complex numbers, which are even harder to fathom and lead to paradoxes like 1 = \sqrt{1} = \sqrt{(-1)(-1)} = \sqrt{-1} \sqrt{-1} = -1. It seemed best to Frend to purge mathematics of all quantities that might lead one into this kind of trouble.  Even zero needed to be banished; after all, one might be tempted to divide by it!  With great effort, Frend was able to devise a stripped-down algebra and, using only positive numbers, obtain the same results as algebraists who used negative numbers freely. Frend’s success in faithfully reproducing algebra under constraints of positivity was ironically part of what led others to view his approach as a dead end.  If the man’s puritanical algebra led to the exact same conclusions as the standard algebra, what was the value of his heroic refusal?  What was the point in being so negative about using negative numbers?

Frend’s son-in-law, the mathematician Augustus de Morgan, seems to be a transitional figure in this story, sympathetic to Frend’s scruples but unwilling to carry them to such zealous extremes. De Morgan, like other mathematicians, realized that negative numbers bring a much-needed unity to the study of algebra, allowing many equations that at first seem different to fit under a single roof.  Just because the equations (A+a)(B+b) = AB + Ab + aB + ab and (A−a)(B−b) = AB − Ab − aB + ab are different geometrically doesn’t mean we need to treat them differently algebraically.

But maybe you want to explore the possibility of an alternative algebra in which the product of two negative numbers is undefined, or negative, or “supernegative”.  If so, the book you should turn to is historian of mathematics Alberto Martínez’s “Negative Math: How Mathematical Rules Can Be Positively Bent” (see also the online reviews by Case and Dartnell, given in the References). Martínez has an historian’s sense of which parts of mathematics are “truth” and which are mere convention.  He isn’t an applied mathematician, so he doesn’t present any compelling applications of ❎; and he isn’t a theoretical mathematician, so he doesn’t present any compelling theorems or conjectures about the properties of ❎.  What he does convey, readably and convincingly, is the extent to which our mathematical conventions are the results of historical processes that are all too easily forgotten, and he leaves the door open to future insights that might conceivably bring ❎ into the fold of respectable mathematical constructs. Which is not to say that ❎ has any chance of displacing ×! It’s no mere historical accident that Asia and Europe converged on the accepted law of signs, but deviant multiplication might eventually turn out to have some narrow domain of applicability.


Abstract modern mathematics, with its abdication of claims to unconditional truth, gives us a framework for reconciling the different ideas of how negative numbers should be treated.  Nowadays we have different number systems, and we don’t fight about which one is true (whatever that means).  The most flexible system for dealing with real quantities is (R, +, −, ×, ÷), where R is the set of real numbers, and +, −, ×, and ÷ are the ordinary operations of arithmetic; a+b, a−b, and a×b are defined for all a,b in R in the usual way, and a÷b is defined as long as b ≠ 0.  There’s also (R≥0, +, −, ×, ÷), where R≥0 is the set of nonnegative real numbers, and now a−b is undefined if a<b.  For the even more stringent, there’s (R>0, +, −, ×, ÷), where R>0 (also written as R+) is the set of positive real numbers; ÷ is now unrestricted (you can’t mistakenly divide by 0 if 0 isn’t there!), but subtraction is even more restricted than before, since a−b is now undefined even when a=b.  (R>0, +, −, ×, ÷) is Frend’s number system; (R, +, −, ×, ÷) is ours.
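One way to make the three grades of subtraction vivid is to model each system’s minus operation as a function that refuses inputs outside its domain (a sketch; the function names are mine):

```python
def sub_real(a, b):
    """Subtraction in (R, +, -, x, ÷): always defined."""
    return a - b

def sub_nonneg(a, b):
    """Subtraction in (R>=0, +, -, x, ÷): defined only when a >= b."""
    if a < b:
        raise ValueError("a - b is undefined in R>=0 when a < b")
    return a - b

def sub_frend(a, b):
    """Subtraction in Frend's system (R>0, +, -, x, ÷): since 0 is
    banished along with the negatives, a - b is undefined even when a = b."""
    if a <= b:
        raise ValueError("a - b is undefined in R>0 when a <= b")
    return a - b
```

So `sub_real(3, 5)` returns −2, `sub_nonneg(3, 5)` raises an error, and `sub_frend(5, 5)` raises an error too, since Frend’s system has no 0 for the difference to equal.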

And what about deviant multiplication? The modern mathematician can simply call (R, +, −, ❎) a new kind of number system that we might apply to the real world (if it turns out to be useful) or study for its own sake (if we deem it to be beautiful or intriguing).  To take this modern perspective, we must give up (or relegate to philosophy) questions about which is the “right” number system; what we gain in return is a richer conceptual sphere in which all these possible meanings of “number” coexist with and shed light on one another.

Here are a couple of questions you might want to think about: How does division “work” in (R, +, −, ❎)?  And what about square roots?  See Endnote #4.

The idea of clashing algebraic conventions coexisting in a permissive conceptual framework can be traced back at least as far as Thomas Harriot, whose views in the debate about the law of signs were expressed in a poem, in which the conventions −1 × −1 = +1 and −1 ❎ −1 = −1 are respectively called “the rule of more” and “the rule of lesse”:

Yet lesse of lesse makes lesse or more
Use which is best keep both in store
If lesse of lesse you will make lesse
Then bate the same from that is lesse.

But if the same you will make more
Then adde it to the signe of more.
The rule of more is best to use
Yet for some cause the other choose

So both are one, for both are true
Of this inough and so adeu.

Thanks to Joerg Arndt, John Baez, Sandi Gubin, Tom Karzes, Mike Lawler, Alberto Martínez, David Mumford, Henri Picciotto, Mike Stay, James Tanton, and Glen Whitney.

Next month (Nov. 17): Breaking logic with self-referential sentences.


#1: Applying (a+b)(c+d) = ac+ad+bc+bd with a=c=1 and b=d=−1 we get 0 = (0)(0) = ((1)+(−1))((1)+(−1)) = (1)(1) + (1)(−1) + (−1)(1) + (−1)(−1). If we already accept the three less-problematical sign-rules of Brahmagupta, and (reserving judgment about minus-times-minus) replace (−1)(−1) by x, we find that the four-term sum equals 1 + (−1) + (−1) + x, or x−1. So we have 0 = x−1, implying x = 1. An even simpler version of this proof uses 0 = (0)(−1) = ((1)+(−1))(−1) = (1)(−1) + (−1)(−1). I suspect that most students who are able to follow such a proof have already made peace with the law of signs.

#2: If you want to define multiplication of signed numbers geometrically using just horizontal number lines, there’s a way to do it using the concept of homothety (which includes dilation and inversion as special cases): we can define a×b as the number that a goes to, under the homothety of the number line that sends +1 to b.  The role of +1 here is made explicit, and the definition makes me feel I understand where the asymmetry of the law of signs comes from.  But I don’t want to say more about homothety because this two-part essay is already too long, and besides, the definition looks circular: can one define homothety without using multiplication? I guess one can, but it seems contrived.

#3: There’s a kind of arithmetic in which (−1)(−1) = +1 and (−1)(−1) = −1 are both true! It’s called arithmetic “of characteristic 2”. Such arithmetics are useful in the theory of error-correcting codes. In characteristic 2 we have −1 = +1, and in fact x + x = 0 for all x. A pleasing application of this is that in characteristic 2 we have (x+y)² = x² + y² for all x and y! But characteristic 2 arithmetic isn’t an arena in which “anything goes”; for instance, (x+y)³ = x³ + y³ does not hold in general.
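One concrete model of characteristic 2 (a sketch of mine, not drawn from the essay) is the arithmetic of polynomials over the two-element field, encoded as bitmasks: bit i holds the coefficient of tⁱ, addition is XOR (since x + x = 0), and multiplication is carry-less. Taking x = t and y = 1 exhibits both the squaring identity and the failure of the cubing identity:

```python
def padd(p, q):
    """Add GF(2)[t] polynomials: coefficients add mod 2, i.e. XOR."""
    return p ^ q

def pmul(p, q):
    """Multiply GF(2)[t] polynomials: carry-less (XOR-based) multiplication."""
    r = 0
    while q:
        if q & 1:
            r ^= p
        p <<= 1
        q >>= 1
    return r

one, t = 0b1, 0b10

# -1 = +1 in characteristic 2: here, 1 + 1 = 0.
print(padd(one, one))   # 0

# (x+y)^2 = x^2 + y^2 with x = t, y = 1: both sides are t^2 + 1.
s = padd(t, one)
print(pmul(s, s) == padd(pmul(t, t), pmul(one, one)))   # True

# But (x+y)^3 = x^3 + y^3 fails: (t+1)^3 = t^3 + t^2 + t + 1 != t^3 + 1.
print(pmul(pmul(s, s), s) == padd(pmul(pmul(t, t), t), one))   # False
```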

#4: Division doesn’t work very well in (R, +, −, ❎): for instance, there’s no way to divide +1 by −1, and there are two ways to divide −1 by −1. That is, the equation +1 = x ❎ −1 has neither +1 nor −1 as a solution, while the equation −1 = x ❎ −1 has both +1 and −1 as solutions. On the other hand, square roots in (R, +, −, ❎) work beautifully: just as in Frend’s number system (R>0, + , −, ×, ÷), every number has exactly one square root.
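These claims are easy to check by brute force over a grid of test values (`devmul` is my name for ❎):

```python
def devmul(a, b):
    """Deviant multiplication ❎: negative when both factors are
    negative, otherwise the same as the ordinary product."""
    return -(a * b) if (a < 0 and b < 0) else a * b

grid = [x / 2 for x in range(-8, 9)]   # test values -4.0, -3.5, ..., 4.0

# "Divide +1 by -1": solutions of x ❎ (-1) == +1 ... there are none.
print([x for x in grid if devmul(x, -1) == 1])     # []

# "Divide -1 by -1": solutions of x ❎ (-1) == -1 ... there are two.
print([x for x in grid if devmul(x, -1) == -1])    # [-1.0, 1.0]

# Square roots: x ❎ x has the same sign as x, so each value reached
# by x ❎ x comes from exactly one x.
print([x for x in grid if devmul(x, x) == 4])      # [2.0]
print([x for x in grid if devmul(x, x) == -4])     # [-2.0]
```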

#5: One pre-reader of this essay worried that I’ll give the impression that it’s just an historical accident that we define multiplication the way we do. That wasn’t my intention at all. Leaving aside the questionable relevance of (R, +, −, ❎) to the sorts of situations in which multiplying together two negative numbers is meaningful, the deviant number system is absolutely awful to work with. One early draft of this essay envisioned a kind of “Monkey’s Paw” scenario, where a reader (wishing minus-times-minus were minus) gets his wish and finds himself in a world in which  ❎ reigns. His joy turns to shock and dread when he learns how much harder everything is. He can’t divide both sides of an equation by x without first showing that x is positive. Solving quadratic equations or simple systems of linear equations becomes a tangled mess. But then — sweet relief! — it turns out that the whole episode was just a bad dream.


James Case, “A Behind-the-Scenes View of the Development of Algebra” (book review), SIAM News, Volume 39, Number 9, November 2006.

Lewis Dartnell, Review of “Negative Math”, Plus Magazine, Issue 39.

Alberto A. Martínez, Negative Math: How Mathematical Rules Can Be Positively Bent, Princeton University Press (2006).

Henri Picciotto, “Kinesthetic Intro to Complex Numbers”, based on a presentation by Michael Pershan and Max Ray. (You can see this activity in action in a video on Jasmine Ma’s homepage.)
