A New Game with Infinity

Aleph-null bottles of beer on the wall,
Aleph-null bottles of beer!
Take one down, pass it around,
Aleph-null bottles of beer on the wall!

— Math nerd drinking song

You may already know the standard story about infinite sets like {1,2,3,…} and {2,3,4,…}. Even though the second set seems to be smaller (it’s missing one of the elements in the first set), Cantor taught us that the two sets are the same size (in the sense that there’s a one-to-one correspondence between them). The two sets have the same “number” of elements (namely aleph-null), and aleph-null minus one equals aleph-null. For many students, that anomaly takes some getting used to.

Cartoon by Ben Orlin, now the author of https://mathwithbaddrawings.com/2018/05/23/math-with-bad-drawings-the-book/. (I urge you to buy a copy rather than steal one, since there are only finitely many copies.)

But there’s a perfectly respectable mathematical sense in which the two sets do not have the same number of elements. With a suitable notion of what it means to “count” the elements of an infinite set of numbers, different from Cantor’s, the size of {2,3,4,…} is smaller than the size of {1,2,3,…}; in fact, it has one fewer element. Likewise, in this alternative way of measuring how big sets of numbers are, the set {1,3,5,…} is slightly bigger than the set {2,4,6,…}. How much bigger? Half an element! (Though see Endnote #2.)

The way we measure the size of a set of numbers in this strange system is by adding together infinitely many powers of q, where q is some number less than 1 and the exponents we use are the numbers in our set, and seeing what happens to the infinite sum as q approaches 1. The set {1,3,5,…} gives us the sum q^1 + q^3 + q^5 + … while the set {2,4,6,…} gives us q^2 + q^4 + q^6 + …

If we put q = 1, both of these sums diverge. But suppose q is just a bit smaller than 1. Then the two infinite sums converge; they are geometric series, converging to q^1 / (1 − q^2) and q^2 / (1 − q^2), respectively. If we subtract the second sum from the first, we get (q^1 − q^2) / (1 − q^2); factoring the numerator and denominator we get q(1 − q) / ((1 − q)(1 + q)), and cancelling the common factor 1 − q, we get q / (1 + q). If we now let q approach 1, we see that the fraction q / (1 + q) approaches the limit 1/2. That’s the sense in which the size of {1,3,5,…} exceeds the size of {2,4,6,…} by 1/2.
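For readers who like to check limits like this numerically, here is a quick Python sketch (the function name size_difference is mine, not from the post) that evaluates the two geometric sums and their difference for q close to 1:

```python
# Sketch: the difference between the odd-exponent and even-exponent
# geometric sums simplifies to q/(1+q), which should approach 1/2
# as q approaches 1 from below.
def size_difference(q):
    odd_sum = q / (1 - q**2)       # q^1 + q^3 + q^5 + ...
    even_sum = q**2 / (1 - q**2)   # q^2 + q^4 + q^6 + ...
    return odd_sum - even_sum

for q in [0.9, 0.99, 0.999, 0.9999]:
    print(q, size_difference(q))
```

The printed differences creep toward 1/2, matching the algebraic simplification to q / (1 + q).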

Here’s a plot that makes this more graphic. There are two curves that diverge as q approaches 1: the upper curve is q^1 / (1 − q^2) and the lower curve is q^2 / (1 − q^2). The third curve shows the difference between them, which goes to 1/2 as q approaches 1.

A different way to think about this last result is to imagine two players A and B playing a never-ending game in which A wins all the odd-numbered points (1st, 3rd, 5th, etc.) and B wins all the even-numbered points (2nd, 4th, 6th, etc.). If you pick a random large integer n and ask “How far ahead was A after the first n points?”, then half the time n is even and A and B have won equally many points among the first n points, and the other half the time n is odd and A has won 1 more point than B. On average, A is ahead of B by half a point.
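This averaging argument is easy to simulate; here is a small sketch (the helper name average_lead is hypothetical, mine rather than the post's) that tracks A's lead after each point and averages it:

```python
# Sketch: A wins odd-numbered points, B wins even-numbered ones.
# After n points, A's lead is 1 if n is odd and 0 if n is even,
# so the average lead over n = 1..N should approach 1/2.
def average_lead(N):
    a = b = 0
    leads = []
    for n in range(1, N + 1):
        if n % 2 == 1:
            a += 1
        else:
            b += 1
        leads.append(a - b)
    return sum(leads) / N

print(average_lead(1000))  # exactly 0.5: leads alternate 1, 0, 1, 0, ...
```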

You can try this approach for counting other infinite sets of numbers. How do the sizes of the three sets {1,4,7,10,…}, {2,5,8,11,…}, and {3,6,9,12,…} compare? See Endnote #7. What about the set of numbers with an odd number of digits versus the set of numbers with an even number of digits? See Endnote #8.

You can also look at more interesting sets like {1,4,9,16,…} (the perfect squares) and {1,3,6,10,…} (the triangle numbers). How do they compare with each other? If we divide the size of the larger set by the size of the smaller set, what do we get?

This theory isn’t completely new, and in particular is related to the still somewhat hazy notion of “numerosity” some people have studied (see the Endnotes), but I don’t think anyone has pointed out that, if we restrict ourselves to simple sets of numbers like arithmetic progressions, we can set up the theory in a nice self-contained way that’s extremely accessible.

I mention this new way of measuring sets partly because I think it’s fun, but also because I think it’s freeing. There can be many ways to measure the sizes of things, based on different intuitions. If a student says “I don’t think that a set can stay the same size when you throw some of its elements away,” you can say “You’re absolutely right if by ‘size’ you mean numerosity, but we’re talking about cardinality, which is a different thing.” My guess is that even if the student never goes on to learn anything about numerosity, she’ll be readier to learn about cardinality knowing that her objection has a place within the multiverse of mathematics.

And even if she never learns more about cardinality either, she’ll have learned something about the nature of the mathematical enterprise. Math is about the surprising consequences that can follow from simple, clearly-stated assumptions. The mathematical systems I like best skirt the boundary between sense and nonsense: they never run afoul of self-contradiction, but they never stop surprising us. Yet even a wonderful mathematical system like Cantor’s isn’t — can’t be — a complete description of the realm of mathematical possibility. If you turn different intuitions into different definitions, new phenomena emerge. That’s part of what makes it so much fun to be a mathematician. I’ve said this before, but it bears repeating: In math, if you don’t like the ride, you can change the road!

Thanks to Aaron Abrams.

Next month (Oct. 10), special Global Math Week edition: ChipChip: A new sort of sorting.

ENDNOTES

#1. The very first step of the definition involves replacing an infinite set S of positive integers by the infinite series that sums the term q^n over all n belonging to S. This function of q is the generating function of the indicator function of the set S.

#2. Technically, the difference in size between {1,2,3,…} and {2,3,4,…} in this theory isn’t exactly the real number 1; it’s 1 minus the infinitesimal 1 − 𝔮. Likewise the difference in size between {1,3,5,…} and {2,4,6,…} isn’t exactly the real number 1/2; it’s 1/2 minus the infinitesimal (1 − 𝔮)/4 minus the infinitesimal (1 − 𝔮)^2/8 minus higher-order infinitesimals (obtained by expanding q/(1+q) as a Taylor series in the variable 1 − q). The subtle font-difference between q and 𝔮 is not an accident; here q represents a real number that approaches 1, whereas 𝔮 represents something a bit more abstract: it’s a mathematical object called a germ that captures the way a function (in this case the identity function) behaves as its independent variable approaches some particular value (in this case the number 1) in a particular way (in this case from the left).
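One can sanity-check this Taylor expansion numerically; the sketch below (the name germ_expansion is my invention) compares q/(1+q) against the first few terms of the series in 1 − q:

```python
# Sketch: check that q/(1+q) = 1/2 - (1-q)/4 - (1-q)^2/8 - (1-q)^3/16 - ...
# near q = 1; the coefficient of (1-q)^k is 1/2^(k+1).
def germ_expansion(q, order=3):
    eps = 1 - q
    return 0.5 - sum(eps**k / 2 ** (k + 1) for k in range(1, order + 1))

q = 0.9
print(q / (1 + q), germ_expansion(q))  # agree to about eps^4
```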

One curious feature of this kind of counting is that no two finite sets are the exact same size. For instance, the set {1,4} is infinitesimally bigger than the set {2,3}; that’s because, even though (q^1 + q^4) − (q^2 + q^3) approaches 0 as q approaches 1 from the left, it stays positive up until the very last moment.
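A one-line computation confirms this; in the sketch below (finite_diff is my name for it), the difference factors as q(1 − q)^2(1 + q), which is strictly positive for every q strictly between 0 and 1:

```python
# Sketch: (q^1 + q^4) - (q^2 + q^3) = q(1-q)^2(1+q) > 0 for 0 < q < 1,
# so {1,4} counts as infinitesimally bigger than {2,3} even though the
# difference tends to 0 as q approaches 1 from the left.
def finite_diff(q):
    return (q + q**4) - (q**2 + q**3)

for q in [0.5, 0.9, 0.99, 0.999]:
    print(q, finite_diff(q))  # positive, shrinking toward 0
```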

#3. I call this approach to measuring sets of numbers “regularized cardinality”. In the theory of infinite series, regularization refers to the process of making sense of a divergent infinite sum. For instance, the infinite sum 1 + 0 + 1 + 0 + … is what you get when you try to count the odd positive integers by going through all the positive integers one by one to see which ones are odd and which aren’t (“1 is odd, 2 is not, 3 is odd, 4 is not, …”), and the infinite sum 0 + 1 + 0 + 1 + … is what you get when you try to count the even positive integers in a similar way, so the difference in sizes between the two sets can be represented by the infinite sum 1 − 1 + 1 − 1 + … One of the first people to consider sums like this was Luigi Guido Grandi, who assigned this particular divergent sum the value 1/2. The mathematician Niels Henrik Abel later made sense of this assignment by pointing out that as q approaches 1 from below, q^1 − q^2 + q^3 − q^4 + … approaches 1/2. I’m applying this trick in the context of measuring how big sets of positive integers are. I’m not the first to do this. Analytic number theorists have already tackled questions like “Are there more primes that are 1 more than a multiple of 4 than there are primes that are 1 less than a multiple of 4?”, and for problems like this, fancier tricks than Abel regularization are required.
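Abel's observation is easy to reproduce with truncated partial sums; here is a minimal sketch (abel_sum is my naming, and the truncation length is an arbitrary choice) comparing the alternating series against its closed form q/(1+q):

```python
# Sketch of Abel regularization: for |q| < 1, the alternating series
# q - q^2 + q^3 - q^4 + ... converges to q/(1+q), which tends to
# Grandi's value 1/2 as q approaches 1 from below.
def abel_sum(q, terms=100_000):
    return sum((-1) ** (k + 1) * q**k for k in range(1, terms + 1))

for q in [0.9, 0.99, 0.999]:
    print(q, abel_sum(q), q / (1 + q))
```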

#4. Some mathematicians and philosophers have recently tried to see how far you can push approaches to counting in which throwing out elements from a set always makes the set smaller; see the article by Benci and Di Nasso listed in the References. Also, the mathematician Ilya Chernykh has independently been studying similar ideas on his own; see his web page on extending the real numbers. And there’s also the work of Sergeyev on the concept of the Grossone, though it’s unclear to me whether there’s “any there there”.

#5. What led me to this subject was my interest in the study of packings. I needed a way to say that if you pack disks in the plane as tightly as possible, but then take some away, you’ll have fewer disks than if you pack disks in the plane as tightly as possible but don’t take some away! Abel regularization and similar tricks give a precise way to define the word “fewer” to make the previous sentence true. My write-up on disk-packing in two dimensions isn’t ready yet, but you can look at slides from a talk I gave on the subject at Brown University this past spring, though the kind of regularization I use for two dimensions is trickier. You can also read the article by Abrams et al. (I’m part of the “et al.”) to see what Abel regularization can do for one-dimensional packing problems. See the References.

#6. It’s important to notice that my approach to “counting” infinite sets of numbers relies heavily on the fact that the things we’re counting are, in fact, numbers. In contrast, Cantor’s approach to counting is extremely general and makes no assumptions about what sorts of objects are being counted. The people working on numerosity are trying to find ways to do non-Cantorian counting in broader contexts than just sets of numbers, but I think they’ll hit road-blocks and paradoxes as they move onto terrain in which the objects being counted come equipped with less and less structure.

#7. If we divide q^1 + q^3 + q^6 + q^10 + … by q^1 + q^4 + q^9 + q^16 + …, the ratio approaches the square root of 2 as q approaches 1. That seems less surprising when you approximate the kth triangle number (equal to k(k + 1)/2) by k^2/2, so that the number of triangle numbers less than some cutoff N is about sqrt(2N) (check: k^2/2 is less than N if and only if k^2 is less than 2N). On the other hand, the number of perfect squares less than N is about sqrt(N). Now note that sqrt(2N) divided by sqrt(N) is sqrt(2). For more discussion of such matters, see the last MathOverflow post listed in the References.
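Here is a rough numerical check of that sqrt(2) ratio (a sketch of mine, not code from the article; the truncation length K is chosen so the discarded tail is negligible at this q):

```python
import math

# Sketch: compare truncated generating functions for the triangle numbers
# and the perfect squares at q close to 1; the ratio approaches sqrt(2).
def gf(exponents, q):
    return sum(q**e for e in exponents)

q = 0.999
K = 2000  # q**(K**2) is utterly negligible here
triangles = gf([k * (k + 1) // 2 for k in range(1, K + 1)], q)
squares = gf([k * k for k in range(1, K + 1)], q)
print(triangles / squares, math.sqrt(2))
```

Pushing q still closer to 1 (with a correspondingly larger K) tightens the agreement.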

#8. If we subtract the two generating functions (one for positive integers with an even number of decimal digits, one for positive integers with an odd number of decimal digits), we get a power series that doesn’t approach any particular value as q approaches 1 from the left. In fact, this function of q behaves really badly. Here’s a plot from q=0 to q=0.95:

Here’s a plot from q=0.95 to q=0.995:

And here’s a plot from q=0.995 to q=0.9995:

As q gets closer and closer to 1, the function changes sign infinitely often with ever-wilder amplitude swings. (For an example of a related function with similar properties, but with the base 2 replaced by the base 10, see the MathOverflow post On an example of an eventually oscillating function.) So the germ-approach to regularized cardinality says that the two sets are incomparable. If you’re going to apply the approach to measuring the sizes of a really broad class of sets, including sets with gaps that get bigger and bigger, then you’re going to have to give up on the idea that when you’ve got two sets of numbers, either they’re the same size or one of them is bigger than the other.
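The sign changes can be seen without plotting, since each block of d-digit numbers contributes a closed-form geometric chunk to the series. The sketch below (digit_parity_diff is my name, and the cutoff max_digits is an arbitrary choice) evaluates the difference of the two generating functions:

```python
# Sketch: (odd-digit-count generating function) - (even-digit-count one),
# using, for the block of d-digit numbers, the closed form
#   sum_{n = 10^(d-1)}^{10^d - 1} q^n = (q^(10^(d-1)) - q^(10^d)) / (1 - q).
def digit_parity_diff(q, max_digits=12):
    total = 0.0
    for d in range(1, max_digits + 1):
        block = (q ** (10 ** (d - 1)) - q ** (10 ** d)) / (1 - q)
        total += block if d % 2 == 1 else -block  # odd-digit blocks count +
    return total

print(digit_parity_diff(0.5))   # 1-digit numbers dominate: positive
print(digit_parity_diff(0.98))  # 2-digit numbers dominate: negative
```

As q climbs through 0.998, 0.9998, and so on, the dominant block keeps alternating in parity, so the sign keeps flipping.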

There’s a pervasive metamathematical theme at work here: when you expand the scope of a mathematical system by enlarging its purview, you typically have to give something up in exchange. Such trade-offs are familiar from tales of supernatural creatures who offer some lucky (or maybe not so lucky!) person a special magical power for a price. We also see this principle at work in the history of number systems: as one progresses from the real number system to the complex number system to the quaternions and so on, each step into the mathematical Beyond gives riches with one hand while taking away conceptual infrastructure with the other. We find ourselves chasing theories that describe ever-higher realms of mathematical Being, but say less and less.

REFERENCES

Aaron Abrams, Henry Landau, Zeph Landau, Jamie Pommersheim, James Propp, and Alexander Russell, “One-dimensional packing: maximality and rationality”. arXiv:1807.06495.

Vieri Benci and Mauro Di Nasso, “Numerosities of labelled sets: a new way of counting”. Advances in Mathematics 173 #1 (2003), pp. 50–67.

Ilya Chernykh, “Non-trivial extension of real numbers”. viXra:1701.0617.

MathOverflow post, “Literature that helps explain what the theory of numerosities contributes with”.

MathOverflow post, “What is a … Grossone?”.

MathOverflow post, “Comparing sizes of sets of natural numbers”.

James Propp, A macro-meso-microscopic view of disk packing (slides).