Let Us Define Our Terms

It was a truism of mid-twentieth-century popular intellectual culture that many disagreements were “merely semantic” and could be resolved if only people would agree on the meanings of the words they used, or at least were clearer about the different ways they used words, so that they could focus on substantive issues rather than language.

Cartoon by Jules Feiffer. Permission pending.

It’s not hard to see that this idea has serious limitations. For instance, even though many legal issues surrounding abortion hinge on different definitions of the word “life”, when it comes to the moral side of the debate, definitions don’t change anyone’s mind. Usually we each choose the definition that matches an outcome we’ve decided on, not the other way around. But in mathematics (thank goodness for the consolations of math!), things are different.

Definitions have been on my mind lately for two reasons: I’m teaching lots of definitions to the students in my discrete mathematics course, and I’ve been reading about the work of Kevin Buzzard and his collaborators, who have been teaching lots of definitions to a computer program for doing mathematics.

Definitions are nothing new in mathematics — Euclid’s Elements starts with a few, such as “a point is that which hath no part”. But surprisingly, Euclid’s proofs don’t make much use of the initial definitions; it’s the axioms (and the later definitions) that do the heavy lifting.1 One modern point of view about this paradoxical situation is that even though Euclid’s first definitions give readers a way to think about points, lines, planes, etc., it’s the axioms that implicitly tell us what these mathematical objects are. That is, at an abstract level, Geometry Is As Geometry Does, and a Euclidean “point”, rather than being that-which-hath-no-part, is any object of thought whose properties in relation to other points (and lines and planes) obey Euclid’s axioms. Under this perspective, many mathematical definitions can look a bit circular, though “relational” would be a more apt term.

In a mathematical treatise like Euclid’s, you don’t get all the definitions at the beginning; they’re peppered throughout, with later definitions depending on earlier ones. You could try to read all the definitions at the start, but aside from the fact that you’d overload your brain, a lot of the definitions, read in isolation, would seem arbitrary. You’d be missing the way that those definitions give rise to interesting theorems that retroactively justify them. Amid all the definitions one could make, some are more natural, interesting, or useful than others, and it’s not clear from the start what those definitions will be. For instance, until you have some experience multiplying and factoring numbers, it may not be clear why the concept of prime numbers matters; and I think it’s only when you’ve seen the unique factorization theorem that the true significance of the prime numbers comes into view.

I like what the late mathematician and educator Charles Wells wrote about definitions:

Some students don’t realize that a definition gives a magic formula — all you have to do is say it out loud. More generally, the definition of a kind of math object, and also each theorem about it, gives you one or more methods to deal with the type of object.

For example, n is a prime by definition if n>1 and the only positive integers that divide n are 1 and n. Now if you know that p is a prime bigger than 10 then you can say that p is not divisible by 3 because the definition of prime says so. (In Hogwarts you have to say it in Latin, but that is no longer true in math!) Likewise, if n>10 and 3 divides n then you can say that n is not a prime by definition of prime.

You now have a magic spell — just say it and it makes something true!

What the operability of definitions and theorems means is: A definition or theorem is not just a static statement, it is a weapon for deducing truth.
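
To make Wells’s point about operability concrete, here is a minimal sketch in Python (my own toy illustration, not Wells’s): the definition of “prime” translates directly into a test you can run, and the two little deductions above fall out of it.

# A literal translation of the definition: n is prime if n > 1 and the only
# positive integers that divide n are 1 and n.
def is_prime(n):
    return n > 1 and all(n % d != 0 for d in range(2, n))

# The magic spell in action: every prime p with 10 < p < 100 is not divisible by 3 ...
assert all(p % 3 != 0 for p in range(11, 100) if is_prime(p))
# ... and every n with 10 < n < 100 that is divisible by 3 is not prime.
assert all(not is_prime(n) for n in range(11, 100) if n % 3 == 0)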

The role of definitions changes as one advances in one’s mathematical education. Some of the change is quantitative. New definitions build on earlier definitions, which build on even earlier definitions, and so on; the more math you learn, the taller your personal definition-tower becomes. The same is true on a communal level: understanding a recent definition like Peter Scholze’s notion of “perfectoid spaces” requires understanding dozens if not hundreds of concepts that the definition builds upon, to which thousands of mathematicians have contributed.

But a less obvious, qualitative difference is that sometimes definitions don’t even make sense without certain theorems. That is, a definition may make a tacit claim, and proving the claim may take hard work. Many definitions in advanced mathematics are like this.

Sometimes the tacit claim is existence or uniqueness. As a non-mathematical example, notice that the phrase “the Prime Minister of the United States” makes no sense; neither does the phrase “the baseball team of the United States”, but for a different reason. The first phrase doesn’t denote an actual person, because the U.S. has no Prime Minister; the second phrase doesn’t denote an actual team, because the U.S. has many baseball teams. In cases like these, the use of the word “the” followed by a singular noun or noun-phrase requires that its referent exist and that its referent be unique.2

Here’s a mathematical example: when we say “The infinite decimal .333… is defined as the unique number that lies in the intervals [.3,.4], [.33,.34], [.333,.334], etc.”, we’re asserting simultaneously that there is at least one such number and that there is at most one such number. Many mathematical definitions share this property of making tacit claims; proving these claims requires a side-bar. Those proofs may in turn depend on other theorems, and other definitions, which in turn depend on other theorems. So if you could look inside someone’s brain and somehow see a definition as literally sitting atop earlier definitions, the tower wouldn’t consist merely of definitions; there’d be theorems mixed in there too.
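
To spell out those two tacit claims (a standard argument, added here for concreteness): write a_n for the decimal with n threes after the point and b_n = a_n + 10^{-n}, so that the intervals above are [a_n, b_n]. Then, in LaTeX notation:

\[
\textbf{Existence:}\quad a_n < \tfrac{1}{3} < b_n \ \text{for every } n, \ \text{so}\ \tfrac{1}{3} \in \bigcap_{n \ge 1} [a_n, b_n].
\]
\[
\textbf{Uniqueness:}\quad x, y \in [a_n, b_n] \ \text{for all } n \ \Longrightarrow\ |x - y| \le b_n - a_n = 10^{-n} \to 0 \ \Longrightarrow\ x = y.
\]

Uniqueness is elementary, but existence in general (for nested intervals that don’t come with an obvious candidate like 1/3 in hand) rests on the completeness of the real numbers, which is exactly the kind of theorem that ends up hidden inside a definition.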

Another wrinkle in advanced mathematics is that many concepts are mergers of several different-looking concepts that turn out to be equivalent for non-obvious reasons. One might see a passage of an advanced math textbook that looks something like this:

Theorem: Let F be a foo [where a “foo” is some previously-defined kind of mathematical object]. Then the following conditions are equivalent:
(1) …
(2) …
(3) …

Proof:
(1) implies (2): …
(2) implies (3): …
(3) implies (1): …

Definition: Any foo satisfying conditions (1), (2), and (3) is called a fnord.

Which of the three conditions is the “true” definition of fnordness? All of them! Which of the three conditions one should focus on will depend on context.3 This situation crops up so often in mathematics that the initialism “TFAE” (for “The Following Are Equivalent”) has become a standard part of a mathematician’s education in the English-speaking world. (If any of you know corresponding initialisms in French or other languages, please post them in the Comments!)
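
For a toy instance of the pattern (my own illustration, not drawn from any particular textbook), take an integer n; then the following are equivalent:

\[
(1)\ 2 \mid n; \qquad (2)\ n^2 \equiv 0 \pmod{4}; \qquad (3)\ \text{the last decimal digit of } n \text{ is } 0, 2, 4, 6, \text{ or } 8.
\]

Whichever condition you enshrine as the definition of “even”, the other two become theorems, and which one you reach for depends on whether you care about divisibility, about squares, or about how n happens to be written down.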

Someone at the frontier of mathematical research may wind up in a situation where there are multiple conditions that are not quite equivalent, and must choose which one to canonize as the “right” definition. This requires a certain amount of prescience about the direction of future developments, and since mathematical history (like any other kind) has a way of surprising the people who live through it, sometimes mathematicians get it wrong. For instance, once upon a time it seemed natural to define the word prime so as to include the number 1, but nowadays mathematicians agree that, given the directions that number theory has gone in, it’s best to call 1 neither prime nor composite, but to call it a unit.4

There are interesting issues about how to tweak an existing definition to handle borderline cases (see the discussion of 0^0 in my May 2019 essay), but a higher order of creativity comes from devising (good) new definitions. Ideally a new definition should enable its creator to solve existing problems while opening up new directions for future research.

Here’s Kevin Buzzard writing about the research interests of the people he works alongside at Imperial College in London, and contrasting the definition of perfectoid spaces (and other hot topics) with the less fashionable notion of Bruck loops, about which I’ll say nothing except to mention that Buzzard defines them in the space of one long paragraph, thereby demonstrating that one can define them succinctly, under a suitable definition of succinctness:

I work in a mathematics department full of people thinking about mirror symmetry, perfectoid spaces, canonical rings of algebraic varieties in characteristic p, etale cohomology of Shimura varieties defined over number fields, and the Langlands philosophy, amongst other things. Nobody in my department cares about Bruck loops. People care about objects which it takes an entire course to define, not a paragraph.

So what are the mathematicians I know interested in? Well, let’s take the research staff in my department at Imperial College. They are working on results about objects which in some cases take hundreds of axioms to define, or are even more complicated: sometimes even the definitions of the objects we study can only be formalised once one has proved hard theorems. For example the definition of the canonical model of a Shimura variety over a number field can only be made once one has proved most of the theorems in Deligne’s paper on canonical models, which in turn rely on the theory of CM abelian varieties, which in turn rely on the theorems of global class field theory. That’s the kind of definitions which mathematicians in my department get excited about — not Bruck Loops.

When a new definition like “perfectoid spaces” garners professional acclaim for the person who came up with it, and word gets out, it’s natural for the scientifically-interested public to want someone to tell them what the fuss is about. And this is where things get tricky. For the expert, the new definition is at the top of a personal Jenga-tower in their brain. There simply isn’t time to build a copy of that tower, or even a streamlined version of it, in the reader’s brain. There needs to be something simpler, and a certain amount of distortion is inevitable.5

Some writers resort to metaphor. Others connect the new concept with concepts slightly lower in the Jenga-tower, treating them all as black boxes and explaining how they relate to one another, saying things like “[Concept X] unifies [Concept Y] with [Concept Z]” without ever explaining the details of Concepts X, Y, and Z. Still others despair of explaining the math and resort to biography (e.g., “The crucial insight finally came to her while she was scuba-diving during her honeymoon in Australia”).6

To see what happened when Michael Harris accepted the challenge of trying to explain Scholze’s perfectoid spaces to a general scientific readership, read his essay “Is the tone appropriate? Is the mathematics at the right level?”, and for comparison read Gilead Amit’s essay “The Shape of Numbers” that New Scientist decided to publish instead of what Harris wrote. And then you’ll understand why I’ve decided to be a math essayist rather than a math journalist (much as I admire mathematicians who step into the fray).

I can’t say I understand Harris’ essay more than superficially. I’m intrigued by the idea that Scholze’s theory of diamonds allows you to “clone” a prime, but what does that really mean? Maybe if I’d studied Spec(Z) back in grad school (or if I took the time to learn about it now) I’d have a clue. And while we’re talking about Spec(Z) (or rather talking about not talking about it), I’ve always wondered what diophantine algebraic geometers mean when they say primes are like knots; I hope I’ll understand this someday!

Frank Quinn wrote an essay that has a very nice passage about the role of definitions in modern mathematics:

Definitions that are modern in this sense were developed in the late 1800s. It took awhile to learn to use them: to see how to pack wisdom and experience into a list of axioms, how to fine-tune them to optimize their properties, and how to see opportunities where a new definition might organize a body of material. Well-optimized modern definitions have unexpected advantages. They give access to material that is not (as far as we know) reflected in the physical world. A really “good” definition often has logical consequences that are unanticipated or counterintuitive. A great deal of modern mathematics is built on these unexpected bonuses, but they would have been rejected in the old, more scientific approach. Finally, modern definitions are more accessible to new users. Intuitions can be developed by working directly with definitions, and this is faster and more reliable than trying to contrive a link to physical experience.

I’ll end by quoting Peter Scholze himself:

What I care most about are definitions. For one thing, humans describe mathematics through language, and, as always, we need sharp words in order to articulate our ideas clearly. For example, for a long time, I had some idea of the concept of diamonds. But only when I came up with a good name could I really start to think about it, let alone communicate it to others. Finding the name took several months (or even a year?). Then it took another two or three years to finally write down the correct definition (among many close variants). The essential difficulty in writing “Etale cohomology of diamonds” was (by far) not giving the proofs, but finding the definitions. But even beyond mere language, we perceive mathematical nature through the lenses given by definitions, and it is critical that the definitions put the essential points into focus.

Thanks to Sandi Gubin for help with this piece.

Next month: The Null Salad.

ENDNOTES

#1. Sometimes Euclid’s definitions and axioms also hinge on unstated assumptions whose tacit role only came into view many centuries later, but that’s another story.

#2. The distinction between “a” and “the” came to my attention many years ago when I was touring the parts of the Mormon Tabernacle that are open to the public, and the tour guide said “The people who settled Utah were hard-working people; that’s why we call Utah a beehive state.” I asked her whether she meant “the”, since after all Utah is often called The Beehive State, but she said she meant “a”. I think she chose the indefinite article to focus her listeners on what kind of people Utahans were and are, rather than inviting comparisons between Utahans and non-Utahans.

#3. Following up on the definition of fnords as special kinds of foos, there might be other theorems, such as “If F1 and F2 are fnords, then so is F1+F2” (assuming that addition of foos has already been defined). A happy asymmetry comes to the aid of someone trying to prove such a theorem: since F1 and F2 are (by hypothesis) fnords, each of them satisfies all three of the magic fnord-properties, so all three properties may be legitimately assumed; but to prove that F1+F2 is a fnord too, it suffices to prove just one of the properties, since the other two come along for free, thanks to the Theorem.
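
To instantiate this asymmetry with the toy even-numbers example above: to show that the sum of two even numbers is even, you may assume all three characterizations of each summand, but you only need to establish one of them for the sum, say condition (1):

\[
2 \mid F_1 \ \text{and}\ 2 \mid F_2 \ \Longrightarrow\ F_1 = 2j,\ F_2 = 2k \ \Longrightarrow\ F_1 + F_2 = 2(j + k) \ \Longrightarrow\ 2 \mid (F_1 + F_2),
\]

after which the other two conditions for the sum come along for free, courtesy of the equivalence.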

#4. It hasn’t escaped my attention that, in a sense, letting the needs of mathematicians dictate the definitions mathematicians use is not entirely different from the way people let their verdicts on issues determine the definitions they use. People who condone abortion will define life one way, while people who condemn it will use a different definition. In a similar way, number theorists who value the unique factorization of numbers into primes will want to deny primeness to 1, while number theorists who couldn’t care less about unique factorization will — wait a minute, there are no number theorists like that! At least, none that I know of. The fact that number theorists call this result the Fundamental Theorem of Arithmetic tells you right away that there’s unanimity on that point. But, hypothetically, if there were a community of mathematicians who wanted to consider 1 to be prime, it wouldn’t cause a huge rift; we’d just need to introduce a second term, maybe “prome” or “primish”, to carry the variant meaning.

#5. The recently deceased mathematician John Tate, who laid much of the early groundwork for Scholze’s work over half a century ago in one of the most revolutionary doctoral theses of all time, was glumly resigned to the difficulty of conveying to non-mathematicians what he did for a living or why it mattered. His obituary quotes him as saying:

Unfortunately it’s only beautiful to the initiated, to the people who do it. It can’t really be understood or appreciated on a popular level the way music can. You don’t have to be a composer to enjoy music, but in mathematics you do. That’s a really big drawback of the profession. A non-mathematician has to make a big effort to appreciate our work; it’s almost impossible.

#6. I’m not actually aware of any mathematician making a crucial discovery during their honeymoon, but I’d bet it’s happened; I only hope that the mathematician had the restraint to wait until the end of the honeymoon before starting to write it up.

REFERENCES

Gilead Amit, “The shape of numbers”, posted as “‘Perfectoid geometry’ may be the secret that links numbers and shapes”, April 25, 2018.

Kevin Buzzard, “A computer-generated proof that nobody understands”, posted July 6, 2019.

Michael Harris, “Is the tone appropriate? Is the mathematics at the right level?”, posted around June 1, 2018.

Michael Harris, “The perfectoid concept: Test case for an absent theory”.

Frank Quinn, “A Revolution in Mathematics? What Really Happened a Century Ago and Why It Matters Today”, Notices of the American Mathematical Society, January 2012.

 

5 thoughts on “Let Us Define Our Terms”

  1. xenaproject

    Random comment: when teaching a computer mathematics, you *cannot* do this “1 2 3 so we define fnord to be all of them” — you have to choose one! Ultimately of course it does not matter which one you choose, however there is this very minor subtlety: if you define fnord to be “all of them” then there are more things about fnords which you can say are “true by definition”!

    Grothendieck changed the definition of a scheme once (to something non-equivalent — he dropped a separatedness condition) but, perhaps more interestingly, he also changed the definition of a smooth morphism of schemes to *an equivalent definition* (compare EGA IV definition 6.8.1 to definition 17.3.1). For him this issue was clearly important — he felt that he had chosen the wrong one first time around (EGA IV was published in four volumes and the two definitions are in different volumes).

    I find it interesting that Scholze, like Grothendieck, places so much importance on the concept of a definition.


    1. jamespropp Post author

      What I found even more surprising is the importance Scholze places on the WORD attached to a concept by a definition. He claims he needed to find the word “diamond” before he could dig into the concept. This matches nothing in my own experience as a researcher and an occasional coiner of words and phrases. But Scholze’s claim accords with my belief that mathematics is the domain of human activity in which the Sapir-Whorf hypothesis (“you can’t think what you can’t say”) applies most strongly.


  2. Pingback: Guess Again: The Ehrenfeucht-Mycielski Sequence

  3. yesthatsablog

    Actually, technically, “the baseball team of the United States” does (occasionally) exist. What else do you think won the gold medal in the baseball event at the 2000 Olympics?

    There are, of course, many other examples of what you had in mind that actually work, though, such as, I dunno, “the interstate highway of the United States”.


