Many family restaurants offer paper placemats that entice children into solving puzzles as an alternative to kicking each other under the table, blowing bubbles in their beverages, and so on. I remember those placemats from my own childhood and I recall mazes in particular. The mazes weren’t large, but the designers, in their quest to keep us kids occupied as long as possible, would put long dead-ends near the start of the maze. I quickly hit on a strategy that the emphatically named Thomas T. Thomas also found, as he later recounted in his charming essay “Working Backward”:

One of the most valuable techniques in problem solving I learned in the third grade. But it certainly wasn’t a lesson my teacher intended.

On a day with time to kill between the morning session and lunch bell, she passed out mimeographed sheets with a maze on them and told us to complete it. Busywork for nine-year-olds. Everyone picked up their pencils and began at the start. I looked at the maze for a moment and then—I don’t know why—began at the end, the goal. All the possible traps and dead ends in a maze are designed for the pencil moving forward. Moving backward, I found, was a clear path to the starting point. I was finished in about a minute.

When I took the paper up to the teacher’s desk, she was stunned. “How did you do it so fast?”

“I worked it backward,” I said. “I started at the goal.”

“Thomas!” she said loudly enough for the class to hear. “That’s CHEATING!”

I walked back to my desk, thoroughly embarrassed. Everyone was looking at me with an expression of triumph (“Aha, she caught you!”) or pity (“Don’t you know how to be good?”) or confusion (“How do you cheat on a maze?”).

Fortunately, although I was chagrined and abashed, I didn’t really take the teacher’s verdict to heart. After all, she had simply posed the maze as a problem. She didn’t say, “Start at the beginning.” That’s kind of assumed, but it wasn’t made into a rule. Working problems backward has served me well ever since.

Working backwards is indeed a wonderful tool. But for many problems, what works best is to start at both ends and work your way toward the middle, the way Greek engineer Eupalinos of Megara did when he led the excavation of a thousand-meter-long tunnel on the island of Samos over two millennia ago. For instance, when a chess-player seeks to trap her opponent, she thinks about where the pieces are now, and where she wants them to be, and how she can get them there.

The Eupalinos tactic is also well-suited to the mathematical exercises that we math professors assign students who are just beginning to learn the craft of writing proofs, the way a piano teacher assigns finger-exercises to novices.

**STUMBLING IN THE DARK WITH THE LIGHTS ON**

For example, I might ask my students to show that if *A*, *B*, and *C* are three sets such that *A* is a subset of *B* and *B* is a subset of *C*, then *A* is a subset of *C*. This is a funny thing to ask a student to prove; if she already has any sort of feeling for what “subset” means, the claim is obvious and not in need of demonstration, and if she doesn’t have a feeling for what “subset” means, she’s unlikely to care about the claim at all. If pressed to support such an obvious truth with an argument, a student might be inclined to draw a picture like this:

The problem with this kind of proof is that it doesn’t scale, nor does it help you find proofs of propositions when you don’t have intuition to serve as a guide (and when perhaps you don’t even know whether the thing you’re trying to prove is true). You need to learn a methodology for stumbling around in the dark until you can find the light-switch. Unfortunately, when you practice stumbling around in the dark *in the dark* you tend to bump into things or get stuck in corners, so we mathematicians first have students practice doing it with the lights on, by which I mean, by applying methods of systematic reasoning to prove propositions that are so transparently true that finding a proof doesn’t make you believe in the propositions any more than you did before you found the proof. (See Endnote #1 for a caveat about this kind of pedagogy.)

In this case, what does “*A* is a subset of *B*” even mean? If you’re my student, then you already learned that it means “Every element of *A* is an element of *B*”. So the proposition I’ve asked you to prove reduces to the proposition that if every element of *A* is an element of *B*, and every element of *B* is an element of *C*, then every element of *A* is an element of *C*.

Our first draft of a proof (wordier than it should be, but I want to make the logic 100% clear) looks something like this: “Suppose *x* is some arbitrary element of *A*. [ … ? … ] We have shown that *x* is an element of *C*. Since *x* was arbitrary, we have shown that every element of *A* is an element of *C*, which implies that *A* is a subset of *C*, as was to be proved.”

Note the wishful ellipsis in the middle of the draft. We know how the proof should start, and we know how we want it to end. How does the middle go? That’s the thing we figure out last. This jumbled approach to Time takes some getting used to! For one thing, the medium of paper isn’t well-suited to writing beginnings and endings first if we don’t know how long the middle will be.

In this particular case, the middle of the proof isn’t hard to find; see Endnote #2.

Many math students have seen the meet-in-the-middle paradigm before, in the context of proofs of algebraic propositions. To show that two complicated expressions *A* and *B* are equal, it’s often enough to simplify them and to check that, when they’re expressed in simplest form, the two complicated expressions turn into the same simpler expression *C*.

*A* = *A’* = *A”* = *C*

*B* = *B’* = *B”* = *C*

It might be tricky to figure out off the top of one’s head how to take *C* and “de-simplify” it to obtain *B*, but the prover doesn’t have to; she can just take the simplification process that turns *B* into *C* and run it in reverse:

*A* = *A’* = *A”* = *C* = *B”* = *B’* = *B*

This kind of proof can seem a bit magical; someone reading it is likely to say “But how did you know that this specific sequence of de-simplifications would lead you from *C* to *B*? It seems like magic!” But it’s not magic; it’s proofcraft.

**CLEAVING A PROBLEM AT THE JOINTS**

For both mazes and proofs, the thing you’re trying to find is in some literal or figurative sense a path. We’ve talked about the outside-in approach, but there’s also an inside-out approach, where you figure out the middle first. In many cases, figuring out the midpoint of the journey (or, more generally, figuring out how to break the journey into stages) is the key to making the trip. A famous example is the proof of Fermat’s Last Theorem. It wasn’t until the 1970s that anyone imagined that the shortest path to a proof of the theorem (or at least the shortest one currently known) might go by way of a detour into the theory of elliptic curves. This insight suggested a proof consisting of two parts. The hard part was proved by Ken Ribet, and the *really* hard part was proved by Andrew Wiles with a last-minute assist from Richard Taylor.

Both the Ribet and Wiles-Taylor theorems were, in a certain historical sense, much easier to prove than Fermat’s Last Theorem. It took Ribet only a few years to prove his result once he set out to do it, and Wiles needed only about a decade; in contrast, Fermat’s Last Theorem stayed unsolved for centuries, despite its having attracted the attention of some of the brightest minds in mathematics.

A term I’ve heard to describe the process of productively splitting apart a problem into the right pieces is “cleaving a problem at its joints”. That makes the procedure sound as easy as carving a turkey. But when it comes to mathematical questions, finding the joints can be highly non-trivial. Metaphorically, we might say that the big juicy bird of a problem that Fermat cooked for us sat uneaten on the table of unsolved problems for centuries, as one mathematician after another vainly sought the right position for the carving knife, until in 1972 mathematician Yves Hellegouarch ran out of the dining room and into the house next door, and shouted “Hey guys, I think I found it!”

Actually, that joke is a little misleading, because it suggests that proofs that come out of left field are a novelty. But in fact the history of mathematics is full of them. A different example from number theory is the Prime Number Theorem that governs the way primes become rarer the farther out you go. The original proof found by Jacques Hadamard and Charles de la Vallée Poussin had two parts: the first part showed that ζ(*s*) is non-zero whenever the complex number *s* has real part 1 (where ζ is Riemann’s zeta function), and the second part showed that this property of the zeta function implies the Prime Number Theorem. Neither part is easy, but the hardest part was realizing that this was the route to take. As Hadamard quipped, “The shortest path between two truths in the real domain passes through the complex domain.”

A simple and very literal example of good carving comes from Paul Lockhart’s famous “A Mathematician’s Lament”. The problem is: How can one prove that the area of triangle *ABC* is exactly half the area of rectangle *ABDE*?

A beautiful proof goes by way of cleaving the rectangle (and with it the triangle) into the right two pieces:

Once you have the idea of drawing the dashed line, you’ve split the original problem into two parts: showing that (area-wise) the triangle on the left is half the rectangle on the left, and showing that the triangle on the right is half the rectangle on the right.

This way of solving a problem is the flip side of the adage (should one call it an “add-age”?) “The whole is greater than the sum of its parts”. When you cut a whole into the *right* parts, the parts can look much smaller than the whole did before you cut it!

Note that I’m not talking about splitting a hard problem into two problems each of which is *half as hard* as the original. That’s a kind of progress, but not the most exciting kind. What really makes a mathematician’s day (or week, month, or year) is finding a way to cut a seemingly hard problem into two pieces that are much, *much* easier than the original problem seemed to be. If you have any favorite examples of this, please let me know!

Mathematician Cris Moore, who read an earlier version of this essay, reminded me of *the* classic example of recursively solving a problem by breaking it into two sub-problems: the Towers of Hanoi puzzle. In this puzzle, you’re given three spindles, two of which are initially empty and one of which (call it A) has some circular disks mounted on it, in decreasing size from bottom to top. Your goal is to transfer all the disks to another spindle (call it C), moving disks one at a time from spindle to spindle, without ever placing a larger disk on top of a smaller disk.

The fruitful question to ask is “What does the puzzle look like when it’s halfway solved?” We know that sooner or later, the largest disk must be moved, and it can only be moved from one spindle to another when there are no disks above it and the spindle it is moving to is completely empty. So we ask ourselves “How do we move all the disks except the biggest one from A to B? And then, once the big disk has been moved from A to C, how do we move all those other disks from B to C?” That is, we reduce the Towers of Hanoi problem with *n* disks to two instances of the Towers of Hanoi problem with *n−*1 disks. If you’re patient enough to repeat that reduction process, you can get all the disks to end up where you want them to be.
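That reduction translates directly into a short recursive program. Here’s a minimal sketch in Python (the function and spindle names are my own illustrative choices, not from any standard library):

```python
def hanoi(n, source, spare, target, moves):
    """Append to `moves` the sequence of (from, to) moves that transfers
    n disks from `source` to `target`, using `spare` as the intermediate."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)   # clear the way: n-1 disks onto the spare
    moves.append((source, target))               # now the biggest disk can move
    hanoi(n - 1, spare, source, target, moves)   # re-stack the n-1 disks on top of it

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 7, i.e. 2^3 - 1
```

Each call spawns two half-size subproblems, so *n* disks take exactly 2^*n* − 1 moves, which is the known optimum for this puzzle.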

**PUZZLES**

A different metaphor for proofcraft is solving jigsaw puzzles. Like many people, I solve jigsaw puzzles from the outer frame and then work in. The topological term for the outer frame is *boundary*. In three dimensions, the boundary of a cube consists of the six square faces that bound it. In two dimensions, the boundary of a rectangle consists of the four line segments that bound it. In one dimension, the boundary of a line segment consists of the two points that bound it, one at each end. So the Eupalinos tactic can be seen as a special case of the more general tactic of solving a problem by munching away at it from the boundary, as in the case of jigsaw puzzles (and maybe other sorts of puzzles too; anyone care to mention some in the Comments?). Munching away at things from the outside in is something kids do instinctively inside and outside restaurants, though in the case of ice cream cones the tactic must of course be applied judiciously.

Here’s a puzzle that ties in with this month’s theme and also has some bearing on the real world, specifically on the design of cryptographic protocols for safeguarding privacy. In this puzzle, I’m the good guy who’s designed a code that lets me converse in private with a friend, and you’re not that friend; you’re a bad guy who’s trying to invade my privacy. You’ve somehow managed to intercept both the plaintext of a message of mine and the ciphertext (the encrypted version of the plaintext), and you hope to leverage that advantage.

The communication protocol my friend and I employ uses two secret six-digit numbers, *x* and *y*, known only to the two of us. When I want to send my friend a message *M*, I encrypt it using the encryption key *x*, obtaining a seemingly meaningless string of letters *M’*, and then for good measure I encrypt the resulting string of letters again, using the encryption key *y*, obtaining a string of letters *M”*. I then send the twice-encrypted message *M”* over a possibly insecure channel to my friend. My friend applies the decryption algorithm with the key *y* to convert *M”* back into *M’*, and then applies the decryption algorithm with the key *x* to convert *M’* back into *M*, the original plaintext message.

But you, the bad guy, have intercepted both some plaintext message *M* and the associated ciphertext message *M”*. Also, you know my encryption and decryption procedures, even though you don’t know which specific keys *x* and *y* my friend and I are using. So you have high hopes of figuring out what *x* and *y* are, which will enable you to eavesdrop on all my future correspondence with my friend.

You could apply the encryption algorithm to *M* with the encryption key *x* for all million possibilities for *x*, and then for each of those million values of *x*, try all million possible values of *y*, to see which combination of keys turns the plaintext *M* into the ciphertext *M”*. But that’s a million squared possibilities, and let’s assume that you don’t have the computer power to try that many possibilities. Is there a smarter, faster way for you to figure out what *x* and *y* are, thereby defeating my code? See Endnote #3.

If you’re interested in the different sorts of problem-solving strategies I’ve mentioned − working backwards, working from the outside in, working from the inside out − you might enjoy the writings of mathematician George Pólya, who in books like *How to Solve It* described various general strategies that mathematicians find helpful when approaching problems.

**OTHER KINDS OF MAZES**

So far, when I’ve been talking about mazes, I’ve been talking about the kind that can be viewed from above and taken in at a glance. If you’re actually *in* the maze, and all you can see is one small part of it, then the project of getting to your goal is considerably harder. If you’re in a maze with only one non-backtracking path from the start to the end, also known as a perfect or simply connected maze, then there’s a simple way to get to the end, if you’ve got time to kill and don’t mind backtracking: just put your right hand (or left hand) out, touch the wall, and keep walking without breaking contact with the wall. But of course maze designers want to make their mazes challenging, so many mazes aren’t simply connected and can’t be solved by that method.
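The hand-on-the-wall rule is easy to simulate. Here’s a sketch in Python on a tiny grid maze of my own invention (the maze, the coordinates, and the function names are all illustrative): at each step the walker tries to turn right, then go straight, then turn left, then turn around, which amounts to keeping its right hand on the wall.

```python
# '#' is wall, ' ' is open floor, 'S' is the start, 'E' is the exit.
MAZE = ["#####",
        "#S  #",
        "### #",
        "#E  #",
        "#####"]

def wall_follow(maze, start, facing=(0, 1), max_steps=1000):
    """Walk with the right hand on the wall until an 'E' cell is reached.
    Returns the list of visited cells, or None if no exit is found in time."""
    # clockwise turn for each heading (row, col): N -> E -> S -> W -> N
    right = {(-1, 0): (0, 1), (0, 1): (1, 0), (1, 0): (0, -1), (0, -1): (-1, 0)}
    left = {v: k for k, v in right.items()}
    r, c = start
    path = [start]
    for _ in range(max_steps):
        if maze[r][c] == "E":
            return path
        # preference order: turn right, go straight, turn left, turn around
        for d in (right[facing], facing, left[facing], right[right[facing]]):
            nr, nc = r + d[0], c + d[1]
            if maze[nr][nc] != "#":
                facing = d
                r, c = nr, nc
                path.append((r, c))
                break
    return None

# e.g. wall_follow(MAZE, (1, 1)) winds through the maze and ends at the 'E' at (3, 1)
```

In a simply connected maze the walker traces along one connected wall, so it must eventually reach any exit on that wall; in a maze with a free-standing inner wall, the same code can circle that island forever.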

Nowadays there are many kinds of mazes in which where you can go next depends on more than where you are now. One of the pioneers of modern mazecraft was Robert Abbott, known to readers of Martin Gardner’s Mathematical Games columns as the inventor of the game Eleusis. Here’s a maze designed by Abbott that you can walk through at the Museum of Mathematics in New York City (it’s intermittently on display as part of the Math Square on the lower floor). You can see the maze in its entirety beneath your feet throughout your journey, but escaping the maze isn’t as easy as it looks, and no simple rule (like “always keep your right hand on the wall”) will help you.

You can learn more about Abbott’s mazes at his logicmazes.com website. Another masterful maze designer is Andrea Gilbert, whose mazes are on display at her clickmazes.com website. Or, if you’re into real-world mazes that you can physically walk through, visit Adrian Fisher’s mazemaker.com website to learn about the hundreds of witty mazes he’s built all over the world. There’s a lot more to mazes than what you’ll see on restaurant placemats!

**LIFE**

One could take a more philosophic view of the theme of this essay, as Søren Kierkegaard did when he famously declared “Life can only be understood backwards, but it must be lived forwards.” Of course, since most of us try to understand our lives even as we’re in the midst of living them, things can get complicated!

There’s an episode of the TV show Deep Space Nine in which humans encounter aliens (reminiscent of the aliens in the movie Arrival) whose relation to time is very different from ours. The aliens find us repellently “linear”, and we only convince them to let us use their wormhole when we demonstrate that our faculties of memory and imagination allow us to simultaneously inhabit past, present, and future. “Not … linear!” the aliens say, when they finally get it. (Though if they go through a process of “getting it”, doesn’t that make them linear too?)

If I were to meet some aliens who found humans too linear, I might want to show them some clever mathematical proofs and explain how we humans constructed them. Yes, the chain of implications from the start of a proof to its end must have a kind of unidirectionality (that’s what logic is like), but like mazes, these proofs have twists and turns, and when you see such a twist for the first time, you’re prone to wonder “Why do we take *this* turn and not *that* turn?” To explain the turn, one tells the story of how one found the proof. This story is unidirectional in its own way (since it transpires in human time, and that’s what human time is like), but proof-time and prover-time work differently. Sometimes they evolve in tandem, but just as often they progress back-to-front, or inside-out, or outside-in, relative to one another. Sometimes exploring dead ends that play no part in the finished proof is a key part of the process of finding the proof. It’s all very time-wimey and it can be disorienting before you get the hang of it. I think the nonlinear aliens would approve.

*Thanks to Sandi Gubin, Adam Goucher, Brian Hayes, Scott Kim, Andy Latto, Cindy Lawrence, Cris Moore, Henri Picciotto, Evan Romer, Shecky Riemann, and Glen Whitney.*

Next time (May 17): A Mathematician in the Jury Box, or, “But how should we define ‘intoxicated’?”

**ENDNOTES**

#1: Math educator Henri Picciotto, who read an early version of this essay, points out a pitfall of math pedagogy that puts too much emphasis on proving things that already seem obvious: it will make students think that rigor is only about proving the obvious. (Which is pretty obvious when you think about it!) We shouldn’t forget that the main purpose of proof is to increase someone’s conviction (maybe yours, maybe someone else’s) that a result is true. If your students are future mathematicians or future scientists in highly mathematical disciplines like computer science, then it makes sense to inculcate the practice of rigorous proof for its own sake. But, as Picciotto says in his essay Geometry: A Guided Inquiry, if you’re teaching high schoolers, “it verges on insanity to start with formal proofs of self-evident results.”

#2: A complete proof might go like this:

“Suppose *x* is some arbitrary element of *A*. Since *A* is a subset of *B*, *x* must also be an element of *B*. And since *B* is a subset of *C*, *x* must also be an element of *C*. Since *x* was arbitrary, we have shown that every element of *A* is an element of *C*, which implies that *A* is a subset of *C*, as was to be proved.”

(Here I’m erring on the side of being verbose, just to make my point clear. How verbose one wants to be depends on the audience one is writing for.)
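(For readers who enjoy seeing such arguments machine-checked: the same argument can be written as a one-line proof in a proof assistant. The following is a sketch in Lean 4, assuming Mathlib’s `Set` library; the hypothesis names are my own.)

```lean
import Mathlib.Data.Set.Basic

-- If A ⊆ B and B ⊆ C, then A ⊆ C: given x ∈ A,
-- hAB puts x in B, and then hBC puts x in C.
example {α : Type} {A B C : Set α} (hAB : A ⊆ B) (hBC : B ⊆ C) : A ⊆ C :=
  fun x hxA => hBC (hAB hxA)
```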

#3: The trick is to encode *M* using all million possible single-stage encryption procedures, obtaining a list of a million nonsense strings, and *decode* *M”* using all million possible single-stage decryption procedures, obtaining another list of a million nonsense strings, and then look for a string that’s in both lists. If the *x*th string in the first list coincides with the *y*th string in the second list, then bingo! you’ve found *M’* and *x* and *y*, and you’ve cracked the code.

(You might be thinking “Wait; if for each of the million strings in the first list you have to check for a match with each of the million strings in the second list, wouldn’t that require up to a million-squared comparisons? And wouldn’t that make the procedure infeasible?” But there’s a smarter way to find a string that appears in two lists, based on the existence of smart ways to sort lists. Although the most obvious way to sort a list of length *N* takes on the order of *N*² steps, it’s long been known to computer scientists that you can do much better; there are algorithms that take only on the order of *N* log *N* steps. In the case where *N* is a million, that means a few million steps rather than a few trillion. So, to find the string that appears on each of two long lists, first sort the lists (using something like alphabetical or numerical order; the details don’t matter), and then look for a match intelligently, starting with trying to match the first element on the first list with the first element on the second list.)
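Here is a sketch of the attack in Python. The two-stage cipher below is a toy I invented (a keyed alphabet shift standing in for something like DES), and I use a hash table in place of the sorted-lists trick; both serve the same purpose of avoiding the million-squared comparisons. One caveat: because two shifts compose into a single shift, this toy cipher is degenerate, so the attack recovers *a* key pair consistent with the intercepted plaintext/ciphertext pair, not necessarily the original one.

```python
# Toy cipher: shift each capital letter forward by `key` places (mod 26).
# This is a stand-in for a real block cipher, chosen only so the sketch runs.
def encrypt(msg, key):
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in msg)

def decrypt(msg, key):
    return encrypt(msg, -key)

def crack(plain, double_cipher, keyspace):
    """Meet-in-the-middle: find keys (x, y) with
    encrypt(encrypt(plain, x), y) == double_cipher."""
    # Encrypt the plaintext under every possible first key x...
    forward = {encrypt(plain, x): x for x in keyspace}
    # ...then decrypt the ciphertext under every possible second key y,
    # looking for a middle string that appears in both collections.
    for y in keyspace:
        middle = decrypt(double_cipher, y)
        if middle in forward:
            return forward[middle], y   # bingo: we've found M', x, and y
    return None
```

The work is proportional to the size of the keyspace, not its square: two passes of a million encryptions each, instead of a trillion combined trials.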

This so-called meet-in-the-middle attack on a two-stage code (first proposed by Diffie and Hellman in 1977) is the reason why, when the Data Encryption Standard (DES) code began to exhibit vulnerabilities, it was replaced with Triple DES rather than Double DES.

**REFERENCES**

Robert Abbott, Mad Mazes, 1990.

Robert Abbott, SuperMazes, 1997.

Martin Gardner, “Mazes”, chapter 10 in The Second Scientific American Book of Mathematical Puzzles and Diversions.

Martin Gardner, “Eleusis: The Induction Game”, chapter 15 in The Second Scientific American Book of Mathematical Puzzles and Diversions.

Paul Lockhart, “A Mathematician’s Lament”. https://www.maa.org/external_archive/devlin/LockhartsLament.pdf

MathOverflow, Hard implications, that become easy with the right intermediate step, April 2019.

Henri Picciotto, Geometry: A Guided Inquiry.

George Polya, How To Solve It.

Thomas T. Thomas, “Working Backward”, http://www.thomastthomas.com/Working_Backward_051511.htm

Shecky R: The talk of solving problems backwards and unidirectionality oddly reminds me of the surprise I felt long ago when the New Yorker magazine began running “caption contests”: I always presumed witty New Yorker cartoonists thought up funny punchlines/jokes first, and then drew pictures to accompany them (operating unidirectionally toward a single result). But when the “caption contests” started, hundreds of folks turned in multiple, hilarious captions to oddball drawings, and I suddenly realized the process could work in reverse (quirky picture first, followed by multiple punchlines); a bit like turning the deductive or logic process on its head to produce creativity. Perhaps more of comedy works that way than I realize!

Simon Gregg (@Simon_Gregg): As someone who’s never studied formal proof, a naïve question: why is it more of a proof to talk about a member of the sets than it is to talk about one set being a subset of another?

jamespropp (post author): It’s not “more of a proof”, exactly. It’s just that if our definition of is-a-subset-of is built on the notion of is-an-element-of, then (at least at the beginning) all theorems that refer to the former notion need to appeal to properties of the latter notion.

If for some reason someone decided to develop an approach to set theory in which the subset notion was regarded as fundamental and the element notion was built on that, the roles would be reversed!

samuelfhopkins: The “work at the problem from both ends” philosophy reminds me of something I heard about someone trying to resolve a famous conjecture: on even-numbered days they would try to prove the positive statement, and on odd-numbered days they would search for a counterexample.