dedicated to the memory of Elwyn Berlekamp
The mistaken formula (x+y)² = x² + y² is sometimes called the First Year Student’s Dream, but I think that’s a bad name for three reasons. First, (x+y)² = x² + y² is not exactly a rookie error; it’s more of a sophomoric mistake based on overgeneralizing the valid formula 2(x+y) = 2x + 2y. (See Endnote #1.) Second, most high-school and college first-year students’ nocturnal imaginings aren’t about equations. Third, the Dream is not a mere dream — it’s a visitor from a branch of mathematics that more people should know about. The First Year Student’s Dream is a formula that’s valid and useful in the study of fields of characteristic two.
FIELDS: THE BETTER NUMBER SYSTEMS
What’s a field? At the most non-technical level, a field is a playground where things that behave like numbers get to romp under the influence of operations that behave like addition, subtraction, multiplication, and division. The Wikipedia page for fields has a more technical definition. The most famous fields are the rational number system (ℚ), the real number system (ℝ), and the complex number system (ℂ); they’re the envy of many other number systems because of how nicely subtraction and division work out in those three realms. The set of integers (ℤ) doesn’t get to be a field (sorry, ℤ!), even though it tries really hard. ℤ’s got the whole subtraction game down cold, but when you divide one integer by another (nonzero) integer, the answer isn’t always an integer.

We often write the operations in a field using the conventional symbols +, −, etc. even when the elements of the field aren’t numbers in the ordinary sense. Every field has two special elements called 0 and 1 that behave a lot like the familiar numbers 0 and 1; for instance, the formulas x + 0 = x and x × 1 = x and x × 0 = 0 are valid for all elements x of the field. (See Endnote #2.) Here’s an example of a field with just three elements, called a, b, and c, with the operations of addition and multiplication defined by tables:
In this field, a is the 0-element (because x + a = x for all x in the field) and b is the 1-element (because x × b = x for all x in the field), so it’d be better manners to call a “0” and to call b “1” (and to call c “2” while we’re at it).
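If you’d rather poke at those tables in code, here’s a minimal Python sketch that rebuilds them, using the relabeling just described (a as 0, b as 1, c as 2) and mod-3 arithmetic, as Endnote #4 confirms. The helper name show_table is just for illustration.

```
# Rebuild the addition and multiplication tables of the three-element field,
# identifying a with 0, b with 1, and c with 2 (mod-3 arithmetic; see Endnote #4).
names = {0: "a", 1: "b", 2: "c"}

def show_table(op, symbol):
    print(symbol + " | " + "  ".join(names[x] for x in range(3)))
    print("--+---------")
    for x in range(3):
        row = "  ".join(names[op(x, y) % 3] for y in range(3))
        print(names[x] + " | " + row)
    print()

show_table(lambda x, y: x + y, "+")   # a is the 0-element: x + a = x for all x
show_table(lambda x, y: x * y, "×")   # b is the 1-element: x × b = x for all x
```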
THE FIELD WITH TWO ELEMENTS
What does it mean for a field to “have characteristic two”? It means that in that field, 1+1 = 0 and more broadly x + x = 0 for all elements of the field. (The three-element field I just showed you does not have characteristic two because 1 + 1 isn’t 0, that is, because b + b isn’t a.) If Noah had lived in a world of characteristic two, he would have been extremely vexed when trying to load his ark: every time he paired up two animals in preparation for boarding, they’d mutually annihilate. (But see Endnote #3.) In characteristic two, adding an odd number of 1’s gives 1, while adding an even number of 1’s gives 0.

The smallest field of characteristic two has just 0 and 1 as elements; it’s called 𝔽₂ (or GF(2)), and its addition and multiplication tables look like this:
The only entry in either of these tables that looks strange is 1+1 = 0; the rest are soothingly familiar. And even 1+1 = 0 might be familiar to you if you’ve seen modular arithmetic; what we’ve called “+” and “×” here are mod-2 addition and mod-2 multiplication. Mod-2 addition is the kind of addition that applies when two wrongs make a right, when the enemy of your enemy is your friend, when cancelling your cancellation of an appointment means you plan to show up after all, and in other real-world situations that I’m hoping some of you will contribute in the Comments.
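For readers who like to see arithmetic as computation, here’s a tiny Python sketch of 𝔽₂: addition is the XOR of two bits and multiplication is AND.

```
# F2 (also called GF(2)): addition is XOR, multiplication is AND.
def add2(x, y):
    return x ^ y   # 1 + 1 = 0; the other three sums are the familiar ones

def mul2(x, y):
    return x & y

for x in (0, 1):
    for y in (0, 1):
        print(f"{x} + {y} = {add2(x, y)}    {x} × {y} = {mul2(x, y)}")
```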
THE FIELD WITH FOUR ELEMENTS
Things get a bit stranger when we move on to the next field of characteristic two, 𝔽₄, which has four elements that I’ll write as 0, 1, α, and α+1. (That letter is intended to be an alpha, though in some fonts it might look more like an a.) Here’s how they play together under the influence of + and ×.
(See Endnote #4.)
What are α and α+1? They don’t really have meaning, or rather, they don’t have meaning independent of the system 𝔽₄ they belong to — and 𝔽₄ only acquires meaning if we play with it (says the pure mathematician) or if we find uses for it (says the applied mathematician) and get comfortable with it. If the analogy helps, you can think of α as something like the number i that we encounter when we progress from the real number system to the complex number system; where i has the defining property i² = −1, α has the defining property α² = α + 1.
Speaking of −1, if I’d given you the subtraction table for 𝔽₄, you would have noticed that the subtraction table is the same as the addition table! In characteristic two, every element x satisfies x + x = 0, so every element is its own additive inverse; and −x = x implies that subtracting x is the same as adding x.
This brings us back to the First Year Student’s Dream. In both ordinary algebra and the characteristic-two kind, (x+y)² = x² + xy + xy + y², but we handle the repetition of that xy in different ways. In ordinary algebra, we collect them to get xy + xy = 2xy; in characteristic two, we cancel them to get xy + xy = 0.
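Here’s a quick sanity check of that cancellation in the smallest setting, 𝔽₂, done in Python with mod-2 arithmetic standing in for the field operations:

```
# Check the "Dream" (x+y)^2 = x^2 + y^2 in F2, using mod-2 arithmetic.
for x in (0, 1):
    for y in (0, 1):
        lhs = (x + y) ** 2 % 2        # square of the sum, reduced mod 2
        rhs = (x ** 2 + y ** 2) % 2   # sum of the squares, reduced mod 2
        assert lhs == rhs
print("In F2, (x+y)^2 = x^2 + y^2 holds for every x and y.")
```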
Once you’ve taught your brain how to dance to the music of characteristic two, the addition table for 𝔽₄ doesn’t look very mysterious; when you add two elements, the 1’s and α’s cancel in pairs, just like the annihilating animals in the characteristic-two version of Genesis.
The multiplication table for 𝔽₄ looks wonkier, but every element other than 0 has an alias of the form αⁱ for some integer i, and these aliases make the multiplication table simpler while making the addition table harder. We can write 1 as α⁰, α as α¹, and α+1 as α².
When you multiply elements of 𝔽₄, if one of the factors is 0, then the product is 0, but if both factors are non-zero and we write them as αⁱ and αʲ where each of i, j is 0, 1, or 2, then the product is αᵏ where k is i + j mod 3 (that is, k is the remainder you get when i + j is divided by 3). So for instance α² times α² is α¹ since 2+2=4, which leaves remainder 1 when you divide it by 3.
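Here is a small Python sketch of 𝔽₄ along those lines: each element is stored as its pair of coefficients, addition cancels matching terms, and nonzero elements are multiplied by adding exponents mod 3. The helper names are just illustrative.

```
# F4 = {0, 1, α, α+1}. An element c1*α + c0 is stored as the pair (c1, c0).
ZERO, ONE, ALPHA, ALPHA1 = (0, 0), (0, 1), (1, 0), (1, 1)
NAME = {ZERO: "0", ONE: "1", ALPHA: "α", ALPHA1: "α+1"}

# Aliases of the nonzero elements: 1 = α^0, α = α^1, α+1 = α^2.
LOG = {ONE: 0, ALPHA: 1, ALPHA1: 2}
POW = {0: ONE, 1: ALPHA, 2: ALPHA1}

def add4(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])    # matching terms cancel in pairs

def mul4(x, y):
    if x == ZERO or y == ZERO:
        return ZERO
    return POW[(LOG[x] + LOG[y]) % 3]    # multiply by adding exponents mod 3

print(NAME[mul4(ALPHA1, ALPHA1)])        # α^2 × α^2 = α^(4 mod 3) = α^1, i.e. α
print(NAME[add4(ALPHA, ALPHA1)])         # α + (α+1) = 1: the α's cancel
```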
ERROR-CORRECTING CODES
For every number q of the form pᵏ, where p is a prime number and k is a positive integer, there’s a finite field of size q, called 𝔽q or GF(q). The “G” is for Galois, the French mathematician who invented finite fields in his pioneering work on abstract algebra before participating in a senseless duel and dying of a gunshot wound at the age of 20. Little did Galois realize that Galois fields of size 2, 4, 8, 16, … would play a vital role in communication technology a century and a half later.

Suppose I want to send digital information over a noisy channel. The information consists of four bits, each a 0 or a 1. The most obvious thing to do is to send each bit once, but the noise on the channel can flip a 0 to a 1 or vice versa, resulting in the recipient receiving the wrong message. What to do? I can’t get rid of the channel noise, but I can try to overcome it by building more redundancy into my transmission.
For instance, I could transmit each bit twice in a row, so that the message 1011 gets transmitted as 11 00 11 11 (where I’ve inserted spaces to make it easier for you to parse the string into pairs). If the recipient knows that I’m using this protocol, and if the channel introduces at most one error, the recipient can detect the occurrence of a mistake. For instance, if the recipient receives 11 00 11 10, and believes that at most one error occurred, then she can conclude that there was a corrupted bit in either the last position or the second-to-last position, so that the transmitted bit-pattern was actually either 11 00 11 00 or 11 00 11 11. But which was it?
To build more resiliency into the protocol, I could transmit each bit three times, so that the message 1011 gets transmitted as 111 000 111 111. If the recipient knows that I’m using this protocol, and if the channel introduces at most one error, the recipient can detect and correct the mistake by using a simple “majority vote” within each triple of bits: if all three bits in a triple agree, there was no corruption of that part of the message by noise; if the bits don’t agree, the bit that disagrees with the other two is the corrupted bit. This protocol is robust under the assumption that at most one of the twelve transmitted bits gets corrupted. That is, our repetition protocol is a single-error correcting code.
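Here’s a minimal Python sketch of that triple-repetition protocol; the function names encode and decode are just illustrative.

```
# Triple-repetition code: send each bit three times; decode by majority vote,
# which corrects any single flipped bit among the transmitted bits.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [1, 0, 1, 1]
sent = encode(message)              # 111 000 111 111
sent[7] ^= 1                        # the channel flips one transmitted bit
print(decode(sent) == message)      # True: the corrupted bit is outvoted
```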
That’s an effective way to combat error, but one pays a steep price: for every four bits of actual meaningful data, the protocol requires that I transmit eight extra bits. This means that the transmission process will take three times as long as it would have without the repetition. That wouldn’t be a problem if the message really was just 4 bits long, but more likely my message is a movie consisting of gigabytes of data, divided up into 4-bit packets (or, in realistic applications, longer packets), so that increase of transmission time by a factor of three is a major pain.
Fortunately there’s a clever way to get the same single-error-correcting robustness with transmissions that use just three extra transmitted bits instead of eight. It requires the 8-element finite field 𝔽₈. Just as the elements of 𝔽₄ can be written as polynomials in α of degree at most 1, the elements of 𝔽₈ can be written as polynomials of degree at most 2 in some element β. Here are the nonzero elements of 𝔽₈, along with their aliases:
(β⁷ is just 1 again. For more about 𝔽₈, see Endnote #5.)
To add elements of 𝔽₈, use the left-hand alias and add just as you would ordinarily add polynomials, but with the cancellation rules 1 + 1 = 0 and β + β = 0 and β² + β² = 0. For instance, (β² + β) plus (β² + 1) is just β + 1 (the β²’s cancel). To multiply nonzero elements of 𝔽₈, use the right-hand alias and add the exponents mod 7. For instance, β⁴ times β⁶ is β⁴⁺⁶ = β¹⁰, which is β³ (since 10 leaves remainder 3 when divided by 7).
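Here’s a small Python sketch of 𝔽₈ arithmetic along these lines. The alias table above fixes the actual convention for β; since a sketch has to pick one, I’ll assume the common choice β³ = β + 1, and any other degree-3 relation of this kind would work the same way.

```
# F8, with elements stored as 3-bit integers: c2*4 + c1*2 + c0 stands for
# c2*β^2 + c1*β + c0.  Assumption for this sketch: β^3 = β + 1.
def add8(x, y):
    return x ^ y                        # matching terms cancel in pairs

POW = [1]                               # β^0 = 1
for _ in range(6):                      # build β^1, ..., β^6
    v = POW[-1] << 1                    # multiplying by β shifts coefficients up
    if v & 0b1000:                      # a β^3 term appeared ...
        v = (v & 0b111) ^ 0b011         # ... so replace it by β + 1
    POW.append(v)
LOG = {v: k for k, v in enumerate(POW)}

def mul8(x, y):
    if x == 0 or y == 0:
        return 0
    return POW[(LOG[x] + LOG[y]) % 7]   # multiply by adding exponents mod 7

print(add8(0b110, 0b101) == 0b011)      # (β²+β) + (β²+1) = β+1
print(mul8(POW[4], POW[6]) == POW[3])   # β⁴ × β⁶ = β¹⁰ = β³
```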
𝔽₈ gives me a way to pad my four-bit payload with three extra check-bits to get a seven-bit transmission that’s robust against single-bit corruptions. Say that my message bits are b₁, b₂, b₃, and b₄. Thinking of those 0’s and 1’s as elements of 𝔽₈, I form the element b₁β⁶ + b₂β⁵ + b₃β⁴ + b₄β³; it can be rewritten as a polynomial in β of degree at most 2, say b₅β² + b₆β¹ + b₇β⁰. If I transmit the bits b₁, b₂, b₃, b₄, b₅, b₆, b₇ and an error occurs in any single one of the 7 positions, the recipient can use 𝔽₈ arithmetic to detect the occurrence of an error, to diagnose where the error occurred, and to fix the error, obtaining precisely the 7-bit packet I transmitted, whose first 4 bits are the message I was trying to send. (See Endnote #6.)
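Here’s how that encoding step might look in Python, again assuming the convention β³ = β + 1 for concreteness; the helper names are hypothetical.

```
# Pad the 4 message bits b1..b4 with 3 check bits b5..b7, as described above.
# Elements of F8 are 3-bit integers c2*4 + c1*2 + c0 meaning c2*β^2 + c1*β + c0.
# The relation β^3 = β + 1 is an assumption made so the sketch is concrete.
def beta_power(k):
    """Return β^k as a 3-bit integer, reducing with β^3 = β + 1."""
    v = 1
    for _ in range(k):
        v <<= 1
        if v & 0b1000:
            v = (v & 0b111) ^ 0b011
    return v

def hamming_encode(message):            # message = [b1, b2, b3, b4]
    # Form b1β^6 + b2β^5 + b3β^4 + b4β^3 (addition in F8 is XOR) ...
    e = 0
    for bit, k in zip(message, (6, 5, 4, 3)):
        if bit:
            e ^= beta_power(k)
    # ... and read off its coefficients as the check bits b5, b6, b7.
    b5, b6, b7 = (e >> 2) & 1, (e >> 1) & 1, e & 1
    return message + [b5, b6, b7]

print(hamming_encode([1, 0, 1, 1]))     # a 7-bit packet: payload plus check bits
```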
The data-transmission protocol I’ve just described was invented by Richard Hamming, who didn’t think of it in terms of finite fields. Later researchers in the theory of error-correcting codes figured out that Hamming’s construction, and other, even more powerful ways of building resiliency into digital communication, were related to the mathematics Galois had invented over a century earlier. Nowadays Galois fields play a role not just in making communication noise-resistant but in making it secure from snooping.
I teach college math, so most of my students already have learned not to write (x+y)² = x² + y², and hopefully have actual understanding of what the equation (x+y)² = x² + 2xy + y² means and why it’s true. I don’t teach pre-college algebra, so I have no experience helping students transition from (x+y)² = x² + y² to (x+y)² = x² + 2xy + y². As a teacher I try to find the kernel of truth in students’ wrong answers, so if I were in a high school classroom and someone fell for the First Year Student’s Dream, I might say something like “In abstract algebra, where x and y don’t count or measure things, there are situations where mathematicians actually do write (x+y)² = x² + y²!” before bringing the class back to the mundane world where x and y are ordinary numbers. But maybe the detour would be distracting or just plain confusing. It might be best to just focus students on the meaning of x and y and the meaning of (x+y)². Even so, I can imagine things going badly. (See Endnote #7.)
Thanks to Jeremy Cote, Sandi Gubin, Joe Malkevitch, Evan Romer and Glen Whitney.
Next month: The Positive Side of Impossible.
ENDNOTES
#1. The true newbie’s delusion, I think, is that if you take an expression like 2+3×4, it doesn’t matter whether you do the multiplication first or the addition first. After all, 2+3+4 is the same whether it’s (2+3)+4 or 2+(3+4), and 2×3×4 is the same whether it’s (2×3)×4 or 2×(3×4). So you might think that the order of operations in mixed expressions shouldn’t matter either. But since (2+3)×4 = 20 while 2+(3×4) = 14, the first year student learns that order matters.
It’s the sophomore who, having learned the distributive law (a+b)×c = a×c + b×c, mistakenly overgeneralizes the underlying principle and slips into thinking that (a+b)↑c = a↑c + b↑c (where I’ve somewhat unconventionally written exponentiation as ↑, for reasons that the next sentence will make clear). It doesn’t help that mathematical convention has us write these formulas as (a+b)c = ac + bc and (a+b)ᶜ = aᶜ + bᶜ; it’s hard to focus one’s awareness on the difference between the properties of multiplication and the properties of exponentiation when the formulas don’t contain symbols corresponding to the two operations.
#2. A comical property of abstract algebra is that part of the definition is 0 ≠ 1. This might seem like a strange thing to see in a math book written for advanced undergraduates; I mean, who arrives at college not knowing that 0 and 1 are different numbers? But in the abstract setting, where the elements of a field might not even be numbers at all, and 0 and 1 could be quite strange beasts, it needs to be said as part of the definition of a field that we require that 0 and 1 not be the same beast.
#3. Of course, in the sort of universe where the formula a+a=0 rules and everything is one-of-a-kind, the whole idea of sexual reproduction makes no sense, and Noah would need only one creature of each kind anyway.
#4. If you’ve seen modular arithmetic, you’ve seen mod-4 arithmetic, whose operation tables look like this:
The Junior’s Dream (I’m just making up this nomenclature as I go) is that the field with four elements is just mod-4 arithmetic, but this is false. In a field, every element besides 0 has a multiplicative inverse; that is, for every x ≠ 0, there’s a y such that x × y = 1. But in mod-4 arithmetic, 2 doesn’t have a reciprocal, so mod-4 arithmetic doesn’t give us a field.
What is true is that whenever p is a prime, mod-p arithmetic gives a field with p elements called 𝔽p. In fact, the first finite field mentioned in this article (with elements called a, b, and c) was just mod-3 arithmetic in disguise. When q is of the form pᵏ for prime p with exponent k > 1, the finite field 𝔽q is more complicated to describe. (In case you were wondering, when q = pᵏ, the field 𝔽q has characteristic p, meaning that if you add 1 to itself p times you get 0. Also, in 𝔽q the First Year Student’s Dream works with exponent p; that is, all elements of 𝔽q satisfy (x+y)ᵖ = xᵖ + yᵖ.)
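A few lines of Python make the contrast concrete, and also check the exponent-p version of the Dream for the prime p = 5:

```
# Mod-4 arithmetic is not a field: 2 has no reciprocal mod 4 ...
print([y for y in range(4) if (2 * y) % 4 == 1])      # prints []: nothing works

# ... but mod-5 arithmetic is a field: every nonzero element has a reciprocal.
print({x: next(y for y in range(5) if (x * y) % 5 == 1) for x in range(1, 5)})

# The Dream with exponent p: (x+y)^p = x^p + y^p mod p, checked here for p = 5.
p = 5
print(all((x + y) ** p % p == (x ** p + y ** p) % p
          for x in range(p) for y in range(p)))        # True
```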
#5. The field with 2 elements sits inside the field with 4 elements, so you might think that the field with 4 elements sits inside the field with 8 elements. But that’s what we might call the Senior’s Dream. A finite field with q elements sits inside a finite field with r elements whenever r is a power of q, but 8 isn’t a power of 4.
#6. If the recipient receives the bit-string c₁, c₂, c₃, c₄, c₅, c₆, c₇, then she can use those bits to compute the element c₁β⁶ + c₂β⁵ + c₃β⁴ + c₄β³ + c₅β² + c₆β¹ + c₇β⁰ in 𝔽₈; if it’s 0, then no bit was corrupted, and if it’s one of the seven nonzero elements of 𝔽₈, then which particular element of 𝔽₈ it is determines which of the seven positions in the bit-string is the location of the corrupt bit. Once the receiver knows which bit got corrupted, she can correct it, reconstructing the transmitted message, whose first four bits are the actual payload.
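Here’s a sketch of that decoding step in Python, matching the encoder sketch earlier and again assuming β³ = β + 1 for concreteness; the helper names are hypothetical.

```
# Receive c1..c7, compute the syndrome c1β^6 + c2β^5 + ... + c7β^0 in F8,
# and use it to locate and flip a corrupted bit.  As in the encoder sketch,
# the relation β^3 = β + 1 is an assumption made for concreteness.
def beta_power(k):                      # β^k as a 3-bit integer
    v = 1
    for _ in range(k):
        v <<= 1
        if v & 0b1000:
            v = (v & 0b111) ^ 0b011
    return v

def hamming_correct(received):          # received = [c1, ..., c7]
    syndrome = 0
    for i, bit in enumerate(received, start=1):
        if bit:
            syndrome ^= beta_power(7 - i)
    if syndrome == 0:
        return received                 # no bit was corrupted
    bad = next(i for i in range(1, 8) if beta_power(7 - i) == syndrome)
    fixed = received.copy()
    fixed[bad - 1] ^= 1                 # flip the corrupted bit
    return fixed

codeword = [1, 0, 1, 1, 0, 0, 0]        # a valid 7-bit packet (syndrome 0)
garbled = codeword.copy(); garbled[2] ^= 1
print(hamming_correct(garbled) == codeword)   # True: the error is repaired
```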
#7. Here’s my morbid fantasy about the First Year Student’s Dream. A student comes up to me and says “Teacher, I know you say that (x+y)² = x² + y² is wrong, but I don’t see why.”
I reply “Well, let’s try it with numbers. What does (x+y)2 become when we replace x by 2 and y by 3?”
The student answers “2+3 is 5, so (2+3)² is 5² which is 25.”
“Right!” I say. “And what does x² + y² become when we replace x by 2 and y by 3?”
“That’s 2² + 3², which is 4 plus 9, which is 13.”
“Right!” I say. “A different number.”
The student nods.
“Let’s try drawing a picture,” I say. I draw this picture:
“What’s the area of the square?”
“The side length is x+y, so the area is x+y squared.”
“Great!” I say. “Now let’s compute the area a different way by dividing that square up into pieces.” I draw this picture:
“What are the areas of the pieces?”
“Well, there’s an x-by-x square at the upper left, which has area x², and there’s a y-by-y square at the lower right, which has area y², and there are two rectangles left over.”
“Great! So when you say x+y squared equals x squared plus y squared, you’re on the right track, but you’re leaving out these two rectangles.”
“I see it!” says the student. “Those two rectangles each have area x times y. So that’s the 2xy that I was leaving out.”
“Excellent! So, what have you just learned?”
The student says “I learned that (x+y)² = x² + y² isn’t true for numbers and isn’t true for geometry. But I still think it’s true for algebra.”
At this point, I think I start to cry.
REFERENCES
John Baylis, Error Correcting Codes.
Elwyn Berlekamp, Algebraic Coding Theory.
Al Doerr and Ken Levasseur, Applied Discrete Structures.
Raymond Hill, A First Course in Coding Theory.
Steven Roman, Introduction to Coding and Information Theory.
COMMENTS
Endnote #7 is wonderful — I wish every math teacher understood this dynamic:
> “Excellent! So, what have you just learned?”
>
> The student says, “I learned that (x+y)^2 = x^2 + y^2 isn’t
> true for numbers and isn’t true for geometry. But I still
> think it’s true for algebra.”
(Except that a real student wouldn’t say that last thing out loud, and, worse, might not even be consciously aware of it.)
As a teacher, I sort of knew this, but Endnote #7 really crystallizes and clarifies it for me.
———
Here’s a real-life conversation that is very similar to Prof. Propp’s imagined conversation. This comes from a math ed course taught by Jean Schmittau at Binghamton University, and has always been a touchstone for me as a teacher.
She recounted a story of an elementary student who added 17 and 5 and got 12 because he didn’t carry. The teacher patiently went through an example with 17 cookies and 5 cookies and the student agreed that altogether there were 22 cookies. Teacher: “So if you have 17 cookies and I give you 5 more, how many cookies will you have?” Student: “22.” Teacher: “So what’s the right answer to 17+5?” Student: “12.”
Because to him the “right answer” is what you get when you follow the procedures you’ve been taught to follow. Cookies are cookies, math is math.
———
One more related example is from Abbott & Costello’s “Who’s On First” routine. (If you’ve never heard it, stop and listen to it right now. It deserves its reputation as a brilliant comedy piece.) Amid all his confusion, at some point Costello figures out that if he says “I don’t know,” he’ll end up on third base, and he starts anticipating Abbott’s response. He has no idea why he’s on third base, no understanding, but he knows that’s the answer. How often are our students operating the same way in math class?
That’s a great story with a profound lesson.
There is an infinite number of finite fields, corresponding to primes and all their powers. There are also the rational, real, and complex numbers. Are there other categories or types of fields?
Could square matrices with non-zero determinants be considered another type of field?
Since every field is part of a group, and the kinds of groups can be exhaustively delineated, does that mean that all fields can be determined to be of a particular type? Sorry but I am trying to understand my own question. Thanks.
There are many other fields, such as the p-adic numbers, which I will write about some other time (though while you’re waiting, check out https://www.quantamagazine.org/how-the-towering-p-adic-numbers-work-20201019/). Square matrices with non-zero determinants form a group (under matrix multiplication) but not a field, since the set of such matrices is not closed under addition (consider the sum of the identity matrix with its negative). Every field is a group under addition, and if you throw away the 0 element what’s left is also a group under multiplication. The interplay between the two operations gives the theory of fields much of its “rigidity”, and explains why fields are easier to classify than groups. Classifying finite fields is MUCH easier than classifying finite groups!