How To Be Wrong

“To be wrong, or not to be wrong?” That is the wrong question.  You are going to spend a lot of your time being wrong, especially if you become a scientist or mathematician.  The question is, are you going to do it right?

Let’s start by asking how to be wrong without putting too much of a dent in your reputation.  Here the answer is clear: the right way to be wrong is to do it in private, the way P.E. did, and not in public, the way R.S. and a dozen others did.  Here P.E. is Paul Erdős, whom I’ve written about before. I won’t tell you who R.S. is, because the poor guy has already confessed his sins and suffered enough for them (which puts him miles ahead of the rest of the dozen, who never even did proper penance).

Both P.E. and R.S. stubbed their toes against an infamous probability puzzle called the M.H. Problem, and I’m not going to tell you who M.H. is either, or what his problem was, because that deserves an essay all its own. Instead, I’m going to treat the M.H. Problem as a black box, which I will scrupulously not open so that I can focus on broader issues of telling right from wrong.  If you insist upon knowing what’s inside the box, come back to my cave in a year and I’ll tell you.  All you need to know for now is that it’s a fiendishly tricky probability puzzle that even really smart people get wrong.

Erdős, who among his other accomplishments pioneered an approach to combinatorial problems called the Probabilistic Method, knew a lot about how to solve problems that were already formulated in the language of the mathematical theory of probability.  Unfortunately, he wasn’t so experienced with the subtle ways in which that theory interfaces with the real world, and the numerous ways in which common sense can lead us astray.  But he was aware, as most mathematicians are, that probability theory has its pitfalls.  So, when his brain got hacked by the M.H. Problem, and he became convinced that Andrew Vazsonyi (the person telling him about the problem) was mistaken, he didn’t just say “You must be mistaken”; he did some experiments. More precisely, Erdős let Vazsonyi run the experiments and show him the results.  The experiments led him to reverse his opinion.  That’s the end of the story of P.E. and the M.H. Problem, and a happy ending too, as far as P.E. was concerned.
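
(The M.H. Problem stays inside its black box here, so by way of illustration, here’s a minimal sketch, in Python, of the general kind of experiment that settled the argument: pick some other counterintuitive probability claim, say that in a room of just 23 people a shared birthday is more likely than not, and let brute repetition rather than intuition render the verdict. The function names are my own, not Vazsonyi’s.)

    import random

    def birthdays_collide(group_size):
        """Simulate one room of people with uniformly random birthdays and
        report whether at least two of them share a birthday."""
        days = [random.randrange(365) for _ in range(group_size)]
        return len(set(days)) < group_size

    def estimate(event, trials=100000):
        """Estimate a probability by brute repetition: run the experiment
        many times and return the fraction of successes."""
        return sum(event() for _ in range(trials)) / trials

    # Common sense says 23 people are far too few; the tally disagrees
    # (the estimate comes out near 0.507).
    print(estimate(lambda: birthdays_collide(23)))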

R.S. was not so lucky.  He read about the problem in Marilyn vos Savant’s column in Parade Magazine in the early 1990s.  R.S. did not carry out an experiment to see where the truth lay, or discuss the problem with people who work in probability and statistics.  Instead, with full faith in his own powers of reason, he fired off a letter to her, saying: “As a professional mathematician, I’m very concerned with the general public’s lack of mathematical skills. Please help by confessing your error and in the future being more careful.”

R.S. wasn’t the only person to say such things, or the most intemperate; for instance, E.R.B. (again, initials only) wrote: “You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively towards the solution of a deplorable situation. How many irate mathematicians are needed to get you to change your mind?”

These mathematicians and others were quoted verbatim in Mark Haddon’s novel “The Curious Incident of the Dog in the Night-Time” as examples of pomposity linked to unclear thinking.  End result: If you google E.R.B., you won’t find much about his mathematical accomplishments; the top hit is a page addressed to the question “Did they ever apologize to Marilyn vos Savant?” (R.S. did apologize, graciously; E.R.B. and the others did not, as far as I’m aware.) That is not the kind of immortality that anyone would want.

So there are right ways to be wrong, and wrong ways to be wrong.  And come to think of it, there are also wrong ways to be right.  One example that springs to mind is young J.P. (name withheld), who was in the habit of preventing the spread of misinformation by undiplomatically correcting his teachers when they made mistakes. (Sorry, Mrs. Gold.) A better and certainly more important example was Ignaz Semmelweis, the medical pioneer whose unyielding adherence to principles of hygiene may have been responsible for the deaths of millions. Sure, he saved lots of people, but it’s conceivable that the practice of routine hand-washing in surgical theatres might have spread farther, and faster, if the man had never been born (and therefore never antagonized so much of the medical establishment with his rigidity and brusqueness, and therefore never put hand-washing in such bad odor). Semmelweis died of sepsis two weeks after being committed to a lunatic asylum.  That’s even worse than the fates R.S. and E.R.B. suffered.

The right question, I think, is: How can we learn to be wrong in the right ways? And, if you’re a teacher like me: How can I coach others in the art of being wrong in the right ways?

MISTAKES INSIDE AND OUTSIDE THE CLASSROOM

I tell my students that, in the first place, they need to get over the fear of making mistakes.  After all, one definition of an expert (attributed to physicist Niels Bohr) is, any person who’s made all the mistakes that can be made in some narrow area of human endeavor, and has learned not to make those mistakes anymore. So making those mistakes is an unavoidable part of the learning process. Remember the movie Groundhog Day?  The main character, Phil, eventually acquires various skills, such as playing the piano, throwing playing cards into a hat from far away, avoiding insurance salesman Ned, and wooing coworker Rita, but it’s by messing up over and over again; he has to stick his foot in the same hole in the ground, or in his own mouth, a bunch of times before he learns not to stick it there.

One way I try to make the classroom a safe place for making mistakes is by making a lot of mistakes of my own during the first week of class.  A great way to do this is to try to learn all the students’ names, so that I can go around the whole room and tell everyone who they are.  Linking names and faces is something I’m naturally bad at, so I always make some mistakes, and sometimes I even reveal embarrassing things about the way my brain classifies people (say by mixing up two students who belong to the same non-Caucasian ethnic group even though one of them is a full head taller than the other).

Then I stress to my students that the earlier you make your mistakes, the better.  Every mistake you make in the classroom, or on your homework, is a mistake you probably won’t make on the exam, where mistakes can really hurt you.  And, carrying this idea further: a mistake you make in college or graduate school is a mistake you’re less likely to make after you graduate, when you’re building bridges or designing cancer treatment protocols. So my answer to the question “How to be wrong?” is: “Early and often!”

Of course, students rightfully want to know how not to be wrong, and I give them some tips for this, such as: If you can solve a problem in two different ways, and they both give the same answer, then the chances of your being wrong go down dramatically.  If the methods give different answers, then at some point you need to figure out which solution was wrong and why; you’ll end up with a sharper set of tools and you’ll wield them more expertly next time.  I’ll say more next month about how not to be wrong (and I’ll also talk about Jordan Ellenberg’s very nice book by that name). But today’s sermon is: Accept the fact that you are frequently going to be wrong, and plan ahead by adopting a proper humility about your own thought processes.
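
To make the two-methods check concrete, here’s a minimal sketch in Python (my own toy example, not one from any class or exam): compute the sum 1 + 2 + … + n both by direct addition and by the closed formula n(n+1)/2, and let the computer complain the moment the two methods disagree.

    def sum_by_loop(n):
        """Method 1: add the numbers 1 through n one at a time."""
        total = 0
        for k in range(1, n + 1):
            total += k
        return total

    def sum_by_formula(n):
        """Method 2: use the closed form n(n+1)/2."""
        return n * (n + 1) // 2

    # If the two methods ever disagree, at least one of them is wrong,
    # and tracking down which one leaves you with sharper tools.
    for n in range(1000):
        assert sum_by_loop(n) == sum_by_formula(n), f"disagreement at n={n}"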

And if you can’t be genuinely modest, fake it.  Develop a habit of always acting in such a way that if you prove to be wrong, it won’t make you look really really bad.  You never know when it’ll be one of those days when you’re sure of something that turns out to be wrong.  I had my first experience of this kind at an early age, when I bet my older brother a million dollars over some proposition, 100% confident of a belief that turned out to be wrong.  I had to learn this lesson more than once; later on I bet my sister a hundred dollars, and once again lost.  Fortunately million-dollar bets between minors aren’t legally binding (sorry, Bill), and my parents declared my hundred-dollar bet void (much to my sister’s annoyance; sorry, Sharman).  So I got off more cheaply than R.S. and E.R.B. did.

One reason not to be dogmatic like R.S. and E.R.B. (aside from the fact that it makes you look bad) is that it ends up making people like vos Savant more closed-minded too.  I tried to correspond with her a couple of decades ago, when she made some uninformed remarks about Fermat’s Last Theorem not too long after the whole M.H. Problem affair, but it became clear that her respect for the opinions of math professors had been undermined by people like R.S. and E.R.B., to the point where she felt that information coming from the likes of us was fairly useless.  And I thought that was a shame, since mathematicians are natural allies and sources of information for someone who writes about puzzles for the public.

TELLING RIGHT FROM WRONG: SOME TIPS

Speaking of vos Savant, why am I so sure that she was right (and the Dogmatic Dozen were wrong) about the M.H. Problem?  Most of the reasons are mathematical, and since I’m not telling you what the puzzle says, I’m certainly not going to tell you the answer or why it’s right.  What I want to discuss instead is a certain kind of socio-epistemological criterion for assessing beliefs independently of their content, one that lends additional support to my faith in my own mathematical assessment.  It’s related to the heuristic of siding with the majority (“How can so many people be wrong?”) but it’s a little subtler and, I think, a lot more reliable. There have been many people who started out believing what P.E. and R.S. and E.R.B. at first believed, but who, like P.E. and R.S., reversed their opinions.  I don’t know of anyone who started out with vos Savant’s view and then, on “deeper” consideration of the question, switched to the contrary view.  That’s got to tell you something about what’s going on, and about who’s likely to be right.

Being alert to asymmetries like this can be a good (though not infallible) guide for deciding between conflicting claims when you don’t have access to full information.  Consider, for instance, the question of what S.O.S. stands for.  There are some people who’ll tell you it stands for “Save Our Ship” (or “Save Our Souls”), while others will tell you “I used to believe that too, till I learned that S.O.S. actually originated as a Morse code sequence chosen for its distinctiveness, and interpretations of it as an acronym came after the fact.” Whom do you believe?

Sometimes something even simpler than the “I used to think so too but now I know better” criterion can be used to unmask bunk; if there’s a good story and a disappointingly banal one, usually the banal one is the one that’s unfortunately true.  For instance, some people say that the reason there’s no Nobel Prize in math is that a famous mathematician had an affair with Alfred Nobel’s wife.  Others say that Nobel never married, and that the reason he didn’t create a prize in mathematics is that he didn’t find math compelling or think it important (of the five original Nobel Prizes, all but Medicine were in areas of personal interest to Nobel). Whom do you believe?

Likewise, if you hear a quote attributed to someone who’s famous and someone who’s obscure, it’s likely that the obscure person was actually the one who said it, or at least said it first.  (Hmm, was it really Niels Bohr who originated that definition of an “expert” that I quoted earlier?) In a way, this criterion for truth is a sibling of Occam’s Razor: Given two stories, the more boring one is usually true. Keep in mind, though, that these are just heuristics. Indeed, when it comes to the history of science (as opposed to science itself), the simple accounts of who-did-what-when are usually the false ones.

Even with the help of such heuristics, and even when we know a lot about the subject we’re opining about, we’re inevitably going to make some mistakes. So, another good answer to the question “How should we be wrong?” is: “Modestly.” Of course, this is also a good answer to the question “How should we be right?” The two questions have the same answer, because the experience of being wrong feels exactly like the experience of being right at the moment we’re committing the error; it’s only afterward (if at all) that we recognize which one we’ve been doing.

AT LAST, AN ACTUAL PUZZLE

Other good thoughts about mistakes come from artist/programmer Nathan Selikoff’s TEDx talk. Amplifying on his advice, I say that it’s best to make lots of small mistakes, instead of a few big ones.  Making many small mistakes gets you used to the mistake-making process, so it isn’t as scary.  Also, if you have lots of experience with mistakes, you’ll know what they “smell” like, and perhaps you’ll have a better sense of when you’ve made one.  More specifically, you’ll have an awareness of the circumstances in which you tend to make mistakes, and you’ll know which stretches of the intellectual landscape (such as probability theory) are especially full of holes that trip up lots of people, not just you.

Selikoff’s point about learning from one’s mistakes may be a familiar one, but it bears repetition. Failing to learn from our mistakes is the Big Mistake we all keep making over and over again. One error from my high school years that I still remember acutely, if not fondly, which I’d made before as a much younger child, and which I’ve made many times since then, involved a puzzle I learned about from Murray Klamkin. I was attending a training camp for the U.S.A. Mathematical Olympiad team, and Klamkin assigned the problem as part of one night’s homework. Here’s a simplified version of the problem: I’ve picked three positive integers, x, y, and z, which you have to be able to guess correctly after asking me just two questions of a very specific kind. The first question you get to ask me is “What is ax + by + cz?”, where a, b, and c are positive integers that you get to choose. The second question you get to ask me is “What is dx + ey + fz?”, where d, e, and f are positive integers you get to choose. Is there a way for you to choose a, b, c, d, e, and f so that you’re guaranteed to win? I thought that the answer was “No”, and I even provided what I thought was a proof that I was right. When my homework came back the next day, I saw I’d gotten a zero on the problem. Klamkin had written on my solution: “Figure out what you did wrong and don’t do it again.” Can you figure out how to choose a, b, c, d, e, and f so that you’re guaranteed to be able to win the game, no matter how big x, y, and z are? Check the End Notes if you want to see the answer (and if you want to know how I went astray).

MISTAKES I’M GLAD I MADE

Selikoff also talks about serendipitous mistakes: goofs that lead us to an interesting destination that “correct” thinking would not have. In his case, programming mistakes led to artistic innovations. I want to suggest that even in the narrow domain of mathematics, mistakes can play this role, and that certain truths may be denied to us unless we pass through a preparatory stage of error.  I know this from my own experiences as a math researcher. The centerpiece of my doctoral thesis was a theorem built on two pillars, each of which would have been useless without the other.  Up until the end of the work, I was nearly always mistaken about how much work remained to be done; when I was working on Pillar A, I believed Pillar B to be an easy problem, which wasn’t true, and then when I finally got to work on Pillar B and encountered obstacles, I had the fortitude to face these obstacles only because I thought I’d completely constructed Pillar A.  But I hadn’t; my construction was flawed.  Fortunately I didn’t discover my mistake until Pillar B was finished.  Or rather, until I thought it was finished. And so on.  I suspect I wouldn’t have had the courage to undertake the project if I’d realized the scope of the problem.

Figure 1. A three-dimensional surface made of squares spanning a nonplanar hexagon.
Figure 2. A soap film spanning a nonplanar hexagon. Photo courtesy of Richard Kenyon.

A decade later, I made the best mistake of my professional life when I latched onto the naive belief that random stepped surfaces (like the one shown in Figure 1) should behave like soap films (like the one shown in Figure 2).  This was at the time a genuinely new mistake; as far as I am aware, nobody had ever made it before.  Maybe if I’d taken more physics courses I wouldn’t have made it! In any case, I pursued the idea, and by the time mathematical reality forced me to abandon it, my collaborators and I had the beginnings of a picture that was even richer and more interesting than the picture I’d been trying to create in the first place.  Without the mirage, I probably wouldn’t have walked toward the oasis.

I’ll tell that story some other time.  For now, I’ll skip ahead to the moral:  Sometimes, when you find the right hole, you shouldn’t just put one foot into it.  You should jump in with both feet the way Alice did, and see what kind of wonderland it leads to.

Next month (Feb. 17): When Not to Expect What You’re Expecting.

REFERENCES (with web-links)

Wikipedia page on the M**** H*** problem.

Jordan Ellenberg, How Not to Be Wrong.

Peter Ross, Why Isn’t There a Nobel Prize in Mathematics?

Marilyn vos Savant, “Game Show Problem”.

Andrew Vazsonyi, “Which Door Has the Cadillac?”.

END NOTES

One solution to the guess-my-numbers puzzle is to pick a = b = c = 1 and then, after learning the value of x + y + z, to take n to be some power of ten that’s bigger than x + y + z, and then to pick d = n², e = n, and f = 1. To see why the strategy works, suppose my secret numbers are x = 17, y = 11, and z = 23, so that x + y + z = 51. Then when you ask me to reveal the value of 10000x + 100y + z, and I tell you that it’s 171123, you can just read off x, y, and z from my answer.
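
For readers who like to see a strategy run, here’s a minimal sketch in Python of the two-question procedure just described (the function names are mine, and the “oracle” below simply stands in for the person who picked the secret numbers):

    def guess_three_numbers(ask):
        """Recover the hidden positive integers x, y, z by asking two
        questions of the form 'What is ax + by + cz?'."""
        # Question 1: a = b = c = 1 reveals x + y + z.
        s = ask(1, 1, 1)
        # Choose n = a power of ten strictly bigger than x + y + z, so that
        # x, y, and z each fit inside one block of base-n digits.
        n = 10
        while n <= s:
            n *= 10
        # Question 2: d = n^2, e = n, f = 1 packs x, y, z into one number.
        packed = ask(n * n, n, 1)
        # Read the three base-n "digits" back off.
        x, rest = divmod(packed, n * n)
        y, z = divmod(rest, n)
        return x, y, z

    # Example with the secret numbers used above: x = 17, y = 11, z = 23.
    secret = (17, 11, 23)
    oracle = lambda a, b, c: a * secret[0] + b * secret[1] + c * secret[2]
    print(guess_three_numbers(oracle))   # prints (17, 11, 23)

Note that the second question’s coefficients depend on the answer to the first; that adaptivity is exactly the loophole described in the next paragraph.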

My mistake lay in assuming that a, b, c, d, e, and f must all be chosen in advance. (If the number-guesser isn’t allowed to craft her second question based on the answer to the first question, then there’s no way she can be sure of defeating the number-picker.) This is a classic example of a common kind of mistake: making an unwarranted assumption, also called failing to think outside the box. I made a similar unwarranted assumption as a young child when I was presented with (and failed to solve) the infamous nine dots puzzle. I’ve made unwarranted assumptions since then and will continue to make them in the future, though hopefully less and less and less. You could rightly say that my mistake was caused by a lack of imagination. Numerous disasters have justly been attributed to lack of imagination, but it’s a tough mistake to stop making. I could admonish myself every morning “Stop lacking imagination!” but I don’t think it’d do much good.

Thanks to John Baez, Jordan Ellenberg, Sandi Gubin, Henri Picciotto, and Glen Whitney for helpful suggestions. The first draft of this article was written at the Power Cafe in Watertown, Massachusetts (http://www.facebook.com/thepowercafe); this place has special math-mojo since Galit Schwartz, the owner, is the sister of mathematician Noam Elkies. If other Boston Area folks think it might be fun to hold a math slam there (like a poetry slam, but with math), let me know!

COMMENTS

  1. hpicciotto

    Important post!

    Here are some additional teacher strategies to teach students “how to be wrong” in pre-college math-class discussions.

    1. Make mathematical mistakes on purpose or otherwise, and model a positive response to those.
    2. Do not praise correct answers. They are their own reward, and teacher praise only serves to intimidate students from speaking up if they’re not 100% sure of their answer. I know this is counter-intuitive, but I am confident it helps create a safer atmosphere.
    3. If you must praise something: praise student participation when they speak up even when they are unsure — whether their answer is correct or not.
    4. Keep a poker face and record several student answers, whether right or wrong (without associating answers with student names), then see whether students want to change their answers after a class discussion.
    5. Encourage students to evaluate and critique each other’s answers, but do not tolerate any put-downs, even if the student claims they are “joking”.
    6. Give a chance for students to discuss their answers to your questions with their neighbors, before asking them to speak to the whole class.
    Finally, one more technique, suggested by a colleague of mine: when a student makes a mistake, shake the student’s hand, and thank them. Then explain that only by struggling through mistakes do we learn, and therefore this student just made a big contribution to the whole class.

    –Henri

  2. Pete

    Great description. The hard part is keeping your ego in check. Maybe it’s a question of values: do you value the chance to learn something new more than the chance to be right in front of other people? If you think the person you are about to correct knows far less than you, you might undervalue learning from them. If they are a public figure or in public discourse, you might overvalue the chance to be right in front of other people. So when a public figure who isn’t perceived to be an expert makes a statement on Twitter that experts think is wrong, it’s game on.

    Another solution to the olympiad question is choosing a, b and d such that bd/a = e and dc/a!=f and you can remove both x and y from the 2nd equation, solve for z with the two numbers given, now you have two variables in two equations and can solve. Simplest example is a=2, b=2, c=1, d=2, e=2, f=2

  3. Yuval Peres

    Marvelous post!
    By the way, the reluctance of contestants to switch doors in the Monty Hall (M.H.) problem may be well founded, as the version of the problem closest to reality might be: “The host opens a door and makes the offer to switch 100% of the time if the contestant initially picked the car, and 50% of the time otherwise” (Mueser and Granberg 1999). In that case switching wins the car with probability 1/2; and if the 50% is replaced by a lower value, then it is better not to switch. See https://en.wikipedia.org/wiki/Monty_Hall_problem#Criticism_of_the_simple_solutions

  4. nishantchandgotia

    Quite an amazing article. Thanks!

    I have a personal story with M.H. The first time I encountered the problem was when I was much younger and getting interviewed for a scholarship. The question was posed to me by a room-full of interviewers (mostly non-mathematicians). Needless to say, it cost me my scholarship, and I was very bitter about not having been able to solve it correctly. Somehow the experience that vos Savant had gives me some consolation. In general, mistakes further my understanding more than they set it back. I hope to continue making them in these (otherwise) inconsequential ways.
