Going Negative, part 4

One reason negative integers can be confusing is that their resemblance to counting numbers makes us think we should understand them through counting. And you can’t use negative numbers to count things – or can you?

Here’s a setup that gives negative integers the opportunity to count things. It bears some resemblance to dangerous experiments you could (in principle) perform with particles and antiparticles, but it’s a lot safer because it doesn’t involve all those annoyingly lethal gamma rays that result from actual annihilation of matter and antimatter. It’s a pastime you can play with (real or imagined) bags and small objects of two easily distinguished colors, which I’ll call dots and antidots.1

I’ll use blue dots and red antidots. My rule is that when a dot and antidot meet, they cancel each other out, so that for example, a bag containing five dots and two antidots turns into a bag containing four dots and one antidot which then turns into a bag containing three dots and no antidots.

The bag with three dots and no antidots is stable, or reduced: no more cancellations are possible.

There are three kinds of reduced bags: bags containing one or more dots but no antidots, bags containing one or more antidots but no dots, and empty bags. If you start with a bag containing more dots than antidots, it turns into a bag containing just dots; if you start with a bag containing more antidots than dots, it turns into a bag containing just antidots; and if you start with a bag containing equal numbers of dots and antidots, it turns into an empty bag. If you think of each dot as a one dollar credit and each antidot as a one dollar debit, then those three kinds of bags can be thought of as corresponding to positive integers, negative integers, and zero, respectively. We’ll say that the value of the bag containing a dots and b antidots is the integer a–b: it’s positive if a is greater than b, negative if a is less than b, and zero if a equals b. When we perform a single dot-antidot cancellation inside a bag, we don’t affect the bag’s value, because (a–1)–(b–1) equals a–b. Likewise, if we perform cancellation multiple times, the bag’s value doesn’t change.

If we start with a bag containing a dots and b antidots (let’s agree to write that as (a|b) for short), then after all possible cancellation has taken place we’ll end up with the reduced bag (a–b|0) (containing a–b dots and 0 antidots) if a is greater than b, the reduced bag (0|b–a) (containing 0 dots and b–a antidots) if a is less than b, and the reduced bag (0|0) (containing 0 dots and 0 antidots) if a equals b. Here’s a picture to summarize the situation:
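In code, reduction is just a matter of subtracting the smaller count from both counts. Here’s a minimal Python sketch of the idea (the function names are my own, chosen for this illustration):

```python
def reduce_bag(bag):
    """Cancel dot-antidot pairs until no more cancellations are possible."""
    a, b = bag
    k = min(a, b)          # number of cancellations available
    return (a - k, b - k)  # one of (a-b|0), (0|b-a), or (0|0)

def value(bag):
    """The integer a bag represents: dots minus antidots."""
    a, b = bag
    return a - b

print(reduce_bag((5, 2)))  # the five-dots, two-antidots bag reduces to (3, 0)
```

Note that reduction never changes the value: `value((5, 2))` and `value((3, 0))` are both 3.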

ADDING BAGS

Now that we’ve got the idea of bags “in the bag” (do people still say that?), we can think about adding bags (which may or may not be reduced at the start). Adding two bags means combining the contents of the two bags into a single bag. Schematically, let’s define the “bag-sum” of (a|b) and (c|d) as (a+c|b+d): when we combine a bag containing a dots and b antidots with a bag containing c dots and d antidots, we get a bag containing a+c dots and b+d antidots.

For instance, let a=1, b=2, c=3, and d=4. Adding (1|2) and (3|4) gives (1+3|2+4), or (4|6). Notice that (1|2) has value –1, (3|4) has value –1, and (4|6) has value –2, so we’ve got two bags of value –1 whose bag-sum has value –2.

If you try some more experiments, you’ll find that in each case the value of the bag-sum (a+c|b+d) is the value of the bag (a|b) plus the value of the bag (c|d) when we add these integers in the ordinary way. And if you’re handy with algebra, it’s not hard to see why this always works: the value of (a+c|b+d) is (a+c)–(b+d), while the sum of the values of (a|b) and (c|d) is (a–b)+(c–d), which equals (a+c)–(b+d).
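If you’d rather let a computer run those experiments for you, here’s a sketch (again with function names of my own) that checks the value-of-bag-sum claim over a grid of small bags:

```python
def value(bag):
    """The integer a bag represents: dots minus antidots."""
    return bag[0] - bag[1]

def bag_add(x, y):
    """Combine the contents of two bags into one bag."""
    return (x[0] + y[0], x[1] + y[1])

print(bag_add((1, 2), (3, 4)))  # prints (4, 6), a bag of value -2

# Check: the value of the bag-sum equals the ordinary sum of the values.
for a in range(5):
    for b in range(5):
        for c in range(5):
            for d in range(5):
                assert value(bag_add((a, b), (c, d))) == value((a, b)) + value((c, d))
```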

Let’s call (b|a) the “antibag” of the bag (a|b); when we bag-add (a|b) and (b|a), we get the bag (a+b|a+b), which reduces to (0|0).

MULTIPLE MULTIPLICATIONS

What about multiplying bags? We’re doing pure math, so you can choose any definition you want and name it after yourself (though getting other people to praise and/or pay you for it is another matter). It’s tempting to define the bag-product of (a|b) and (c|d) to be (a×c|b×d), but trouble awaits us if we do this. If we multiply (1|2) (a bag of value –1) by (3|4) (a bag of value –1) using that temptingly simple definition of bag-multiplication, we get (3|8), whose value is –5, which is not equal to –1 times –1. “Ah,” you may think, “maybe these bags are trying to teach us that –1 times –1 really should have been defined to be –5 all along!” And I like that kind of unfettered thinking, but please don’t stop there. What happens if we multiply (2|3) by (4|5) using that same tempting definition? We get (8|15), whose value is –7. Do you want to say –1 times –1 is both –5 and –7?

Wait, I’m not done making you uncomfortable. What if we multiply (2|1) (a bag of value +1) by (4|3) (a bag of value +1) using that maybe-now-less-tempting definition of bag-multiplication? Then we get (8|3), whose value is +5, which is not equal to +1 times +1.

What’s your next move in this game of I-can-choose-any-definition-I-want?

A good tactical retreat on your part would be to say “Let’s only apply this kind of bag-multiplication to bags AFTER we’ve reduced them; that’s what I had in mind all along, ha ha.” For instance, to multiply (1|2) by (3|4), reduce both bags to (0|1), and then compute (0×0|1×1), or (0|1). Hey: with this modified form of the tempting definition the product of two bags of value –1 is a bag of value –1! That’s certainly … different. You can also check that with this definition, (2|0) times (3|0) is (6|0), (0|2) times (0|3) is (0|6), and (2|0) times (0|3) and (0|2) times (3|0) are both (0|0). That is, in terms of values: +2 times +3 is +6, –2 times –3 is –6 (wild!), and +2 times –3 and –2 times +3 are both 0 (wilder!). Perhaps these sign rules appeal to you: “When we multiply numbers of the same sign the product should have the same sign too, just like addition, but multiplying numbers of opposite signs makes my brain hurt so I can’t figure out the answer so there is no answer so the answer is zero, goodbye.”

Let’s praise your funny definition for a couple of its virtues. First of all, it’s well-defined, by which I mean, the answer to the question “What is the product of these two integers?” is never “It depends” or “The rules don’t tell us” or “Wait, I think there’s an elephant behind you!” Second of all, this funny way of multiplying integers is consistent with the way multiplication is defined for natural numbers; it just gives us an unorthodox answer when one or both of the numbers being multiplied is negative. Third of all, your funny multiplication satisfies the commutative and associative properties, which ensures that if you want to funny-multiply a long list of numbers, you have a lot of freedom about the order in which you perform the operations. I’ll let you celebrate your definition (and yourself) by calling it “me multiplication”.
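For concreteness, here’s the funny definition in Python (a sketch; “me_multiply” is, of course, a made-up name in the spirit of “me multiplication”):

```python
def reduce_bag(bag):
    """Cancel dot-antidot pairs until the bag is reduced."""
    k = min(bag)
    return (bag[0] - k, bag[1] - k)

def me_multiply(x, y):
    """Reduce both bags first, then multiply dots by dots and antidots by antidots."""
    (a, b), (c, d) = reduce_bag(x), reduce_bag(y)
    return (a * c, b * d)

print(me_multiply((1, 2), (3, 4)))  # both reduce to (0, 1); prints (0, 1)
```

You can check commutativity directly: `me_multiply(x, y)` and `me_multiply(y, x)` always agree, since componentwise multiplication of reduced bags doesn’t care about order.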

Drawing by Ben Orlin. Watch for his upcoming book “Math Games with Bad Drawings”!

Okay, party’s over. I was gracious enough to let you make your definition; now it’s your turn to let me make mine.

I’ll start with two special cases before launching into the real thing. I decree that to bag-multiply (a|b) by (c|0), take the bag-sum of c copies of the bag (a|b); to bag-multiply (a|b) by (0|d), take the bag-sum of d copies of the antibag of (a|b) (which I defined as (b|a)); and (drumroll) …

to bag-multiply (a|b) by (c|d), take c copies of (a|b) and d copies of (b|a), and add them all together using bag-addition. For example, under this definition, (1|2) times (1|2) is what you get when you add together 1 copy of (1|2) and 2 copies of (2|1), which is (1|2) plus (2|1) plus (2|1), which is (1+2+2|2+1+1) = (5|4), which reduces to (1|0).

More generally, (a|b) times (c|d), under my definition, is (a|b) + … + (a|b) (c copies) plus (b|a) + … + (b|a) (d copies), which equals (ac|bc) plus (bd|ad), or (ac+bd|bc+ad).2

This gnarly-looking way to multiply bags has the neat feature that when you multiply two bags in the gnarly way, obtaining a new bag, the value of that new bag (defined as the number of dots minus the number of antidots) equals the value of the first bag times the value of the second bag, when you multiply integers in the ordinary way. The example in the previous paragraph shows this phenomenon at work: (1|2) has value –1, and when you multiply (1|2) by itself you get a bag whose value is –1 × –1 = +1. If you’re skeptical/curious and algebraically inclined, you can check that the value of (ac+bd|bc+ad) equals the value of (a|b) times the value of (c|d) (because a–b times c–d equals (ac+bd)–(bc+ad)).

With gnarly bag-multiplication, “–1 times –1 equals +1” (and more broadly, “minus times minus equals plus”) isn’t an extra rule; it’s a result of the way we chose to define multiplication.
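In code, the gnarly definition is one line, and the value-multiplicativity claim can be checked by brute force over small bags (a sketch, with function names of my own):

```python
def value(bag):
    """The integer a bag represents: dots minus antidots."""
    return bag[0] - bag[1]

def bag_multiply(x, y):
    """c copies of (a|b) plus d copies of the antibag (b|a): (ac+bd | bc+ad)."""
    (a, b), (c, d) = x, y
    return (a * c + b * d, b * c + a * d)

print(bag_multiply((1, 2), (1, 2)))  # prints (5, 4), a bag of value +1

# Brute-force check that bag values multiply like ordinary integers.
for a in range(4):
    for b in range(4):
        for c in range(4):
            for d in range(4):
                assert value(bag_multiply((a, b), (c, d))) == value((a, b)) * value((c, d))
```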

Here’s a picture where I’ve rotated the array of bags I sketched before by 45 degrees and placed them above a straightened-out number line; the array is divided into vertical lanes, and all the bags belonging to the same lane have the same value.

THE SWITCHEROO

Switching from numbers to bags of dots and antidots is a bit unsettling, but unfamiliarity has the virtue of disengaging prejudice. If you don’t like negative numbers, you’re in respectable company – the 19th century dissident mathematician William Frend felt negative numbers were a philosophical mistake – but neither you nor he can object that my rule “(a|b) times (c|d) equals (ac+bd|bc+ad)” is somehow philosophically wrong (well, Frend can’t because he’s dead, but you know what I mean). You may have a preconceived notion of number-ness, but you probably don’t have a prior ideological commitment to a certain notion of bag-ness. I’ve moved the action to a blank conceptual territory where I’m free to make whatever rules I like.

If you’re a negative-number-skeptic but a genial person nevertheless, permitting me to define bag-multiplication in the way that I did might seem like a harmless act of tolerance on your part, unlikely to undermine your deep beliefs. But beware! If you open the doors of your mind to bags of dots and antidots and the associated baggage of bag-addition and bag-multiplication, these concepts can colonize your mind. Sure, for a while you’ll maintain a principled distinction between the number 3 and the bag (3|0), but trust me, over time, you’re likely to conflate them. (It’s kind of like the way I switched from writing “bag-plus” and “bag-times” to writing “plus” and “times” when talking about combining bags and I bet you didn’t notice.) And once you accept that numbers and bags are not so different, you might forget that (0|3) is a bag that doesn’t correspond to a number. You might just start thinking of negative integers as actual numbers.

In the final stages of infection, you stop thinking about bags; all you’re left with is a solid conviction that negative integers are perfectly fine numbers, along with a vague sense that you used to mistrust them but you can’t remember why.

That’s right; you’ll stop thinking about bags and dots and antidots. Bags of dots and antidots aren’t a helpful way to do calculations with integers. They’re a conceptual crutch, designed to bring a level of concreteness to negative integers before we get so comfortable with –17 and its ilk that the concreteness is no longer psychologically necessary to us; eventually we throw the bags and dots away. In fact, not everyone likes bags and dots even as a temporary transitional representation, and I’m okay with that. Dots and antidots just aren’t everyone’s bag (do people still say that?).

I borrowed dots and antidots from James Tanton, who got some of his ideas about games with numbers and dots from me twenty-five years ago, so it’s hard to say how much of this is mine and how much is his, but one thing I know for sure: neither of us is the first person to approach integer arithmetic in this way. I recently learned that, forty years ago, my friend the math educator Henri Picciotto, back before we became acquainted, gave his students worksheets having much the same content, though he wrote (a,b) instead of (a|b). But Henri certainly wasn’t the first person to come up with notions tantamount to bags of dots and antidots either. He informs me that these ideas have been in the air among educators for a long time — for as long as teachers have been teaching negative numbers at the precollege level, there have been manipulatives like dots and antidots.

The aforementioned switcheroo (“Think about negative numbers this way; …; now use them without thinking about them”) has its roots in the 19th century project of reducing the number-concept to its simplest possible rudiments. To show that some shiny and new but not-yet-trusted number system is fundamentally as sensible and free of contradiction as some simpler, more familiar number system, you make a contrived model of the new number system using trusted components from the old number system, and you use the model to show that the new number system has the properties you want it to have, without anguishing over whether, and in what sense, it actually exists. This takes some work, and while you’re doing that work you have to keep two versions of the new system in your mind, a vague intuitive version and a contrived reliable version, making sure that the contrived version has the properties the intuitive version is supposed to have. Then, when everything is up and running smoothly, you can safely conflate the two pictures because distinguishing between them turns out not to be helpful. You’re no longer reaching inside those bags to see what’s in them; the bags become opaque, and you just operate on the bags as things-in-themselves.

I came up with the notation (a|b) that I’m using here, but it’s not a standard notation, so don’t expect anyone else you meet to recognize it. A more standard symbol we could use to denote (a|b) is what you get when you rotate that “|” by a quarter-turn: a–b. But if I’d written that, you would’ve thought about subtraction; you would have tried to lean on things you were already taught. Instead, I used a symbol that I hoped would have no preassigned meaning, to free your mind to imagine new possibilities. More than most non-mathematicians believe, mathematical creativity is about “Beginner’s Mind”; letting go of what you know and inviting something new to fill that vacated space. Or as Edmund Landau famously wrote in his Foundations of Analysis almost a century ago: “Please forget everything you have learned in school; for you haven’t learned it.”3

In future columns you’ll see other examples of constructions of new number systems from old. (Some of these specific systems aren’t usually called “number” systems, but the distinction between numbers and non-numbers isn’t something I’ll worry about too much. Can you add the things? Can you multiply them? If so, they’re number-ish enough for me.) In most of the cases I’ll write about, a new number system was invented to solve a specific sort of problem; it wasn’t invented by someone who just wanted to come up with a new number system.5 But in each case, we can rewrite history and reinvent these number systems using the modern mathematician’s prerogative of saying “Here’s how we add these things, and here’s how we multiply them.” If the rules fit together nicely, and there are no contradictions, then what we’re doing is math, even if when we start doing it we don’t have a clear mental picture of what we’re doing. Sometimes the best way to figure out the rules of a new game is to start playing and adjust the rules as you go, guided by criteria of consistency and elegance. Bafflingly often, our pursuit of consistency and elegance results in something not just beautiful but useful too.

A STEP BEYOND

As a parting challenge, let me invite you to consider a three-colored version of our dots-in-bags scenario. Suppose we have three colors of objects (red, blue, and green), and suppose our cancellation rule is that when we have a red, a blue, and a green, the three objects cancel. So now instead of pairs (a|b) we have triples (a|b|c), where a, b, and c are natural numbers; a legal reduction-move is to replace (a|b|c) by (a–1|b–1|c–1) as long as a, b, and c are all positive; and the reduced triples are the triples where at least one of the three numbers is 0. How should we picture these reduced triples? We want something analogous to the way we draw the integers on a number line, but our picture will have to be higher-dimensional.
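The two-color reduction code carries over to three colors almost verbatim; here’s a sketch (the picture, addition, and multiplication questions are left to you):

```python
def reduce_triple(t):
    """Cancel red-blue-green triples until at least one count is 0."""
    k = min(t)
    return tuple(x - k for x in t)

print(reduce_triple((4, 1, 3)))  # prints (3, 0, 2)
```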

If that’s not enough of a challenge for you, answer me this: How should we add these reduced triples? And how should we multiply them? Post your answers in the Comments; I’ll moderate the discussion over the next few weeks and post a summary in early July.

Thanks to Dan Asimov, Maximilian Hasler, Andy Latto, Henri Picciotto, Evan Romer, James Tanton, Allan Wechsler, and Glen Whitney.

REFERENCES

Edmund Landau, Foundations of Analysis, 1929.

Jim Propp, Going Negative, part 1.

Jim Propp, Going Negative, part 2.

Jim Propp, Going Negative, part 3.

ENDNOTES

#1. James Tanton tells me that in his experience, when children are told “Here’s this new kind of dot; when it meets an ordinary dot, both it and the ordinary dot disappear” and they are asked “What should we call this new kind of dot?”, they decide to call it a “tod” – “dot” spelled backwards – rather than “antidot”. In much the same way, many adults, when asked to come up with a password, pick “drowssap”. Some of these adults are looking for a password that’s easy to remember and don’t care so much whether it’s easy for others to guess, but others are under the misimpression that this is a secure password – that only they are clever enough to think of spelling “password” backwards. I suspect that there are many other instances in which people don’t realize that ideas they’ve originated are commonplace, and in particular, I suspect that many mathematical cranks suffer from this metacognitive deficit. Nor am I immune! But that’s not today’s topic.

#2. My way of multiplying bags uses c and d to determine how many copies of (a|b) to take and how many copies of (b|a) to take, respectively. We might imagine a different way to multiply bags that switches the roles of (a|b) and (c|d), using a and b to decide how many copies of (c|d) and how many copies of (d|c) to take, respectively. Specifically, I could have said “To multiply (a|b) by (c|d), add together a copies of the bag (c|d) and b copies of the bag (d|c)”. This definition certainly looks different! But if you work it out, you’ll find that the result is (ac+bd|ad+bc), which equals what we got before.

#3. The paradoxical second half of Landau’s quote is worth remembering as well: “Please keep in mind at all times the corresponding portions of your school curriculum; for you haven’t actually forgotten them.” Landau didn’t construct the integers from the natural numbers the way I did, but he did something very similar: he constructed the positive rational numbers from the positive integers, using multiplication where I used addition, and using fractions with numerators and denominators instead of funny symbols (a|b).

If I were teaching a Foundations of Math course instead of blogging for non-mathematicians, I might have introduced an equivalence relation ~ on bags, where (a|b) ~ (a′|b′) if and only if a+b′ = a′+b, that is, if and only if they lie in the same lane; and I might have defined integers as equivalence classes of bags (or, phrasing it visually, as lanes); thereafter I might have shown that (a+c|b+d) ~ (a′+c′|b′+d′) and (ac+bd|ad+bc) ~ (a′c′+b′d′|a′d′+b′c′) whenever (a|b) ~ (a′|b′) and (c|d) ~ (c′|d′), thereby demonstrating that addition and multiplication are well-defined operations on lanes; and I might have concluded by showing that lane-addition and lane-multiplication satisfy the commutative, associative, and distributive properties.4 These kinds of proofs are great for building up one’s mathematical muscles (though beyond a certain point they become tiresome). Landau wasn’t the first mathematician to see that one could build up the real numbers from scratch in this way, but he was the first to actually write down all the nitty-gritty details.

Interestingly, when it came time for Landau to bring negative numbers into the fold, he didn’t make the same choice I did in this essay. By the end of Chapter III of his book, he’d constructed all the positive real numbers, both rational and irrational, starting from just the positive integers. He could have then defined real numbers as equivalence classes of pairs of positive real numbers, and gone on to reprise the sort of “rigor-marole” he’d applied to the construction of the positive rational numbers from the positive integers, but instead he essentially said “Make a second copy of the set of numbers we’ve constructed so far, stick in a zero, and there’s your new number system.”

#4. It’s worth mentioning that “me multiplication” doesn’t have the distributive property. For instance, if we me-multiply (3|1) and (1|2) individually by (1|0), we get (2|0) and (0|0) respectively (don’t forget, when we me-multiply we have to reduce first), and the sum of those two bags is (2|0); whereas if we first add (3|1) to (1|2), we get (4|3), and when we me-multiply (4|3) by (1|0) we get (1|0).
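Here’s that counterexample as a quick check, a sketch reusing the reduce-first definition from the essay (function names mine):

```python
def reduce_bag(bag):
    """Cancel dot-antidot pairs until the bag is reduced."""
    k = min(bag)
    return (bag[0] - k, bag[1] - k)

def me_multiply(x, y):
    """Reduce both bags first, then multiply componentwise."""
    (a, b), (c, d) = reduce_bag(x), reduce_bag(y)
    return (a * c, b * d)

def bag_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

# Distribute first: (3|1)*(1|0) + (1|2)*(1|0)
left = bag_add(me_multiply((3, 1), (1, 0)), me_multiply((1, 2), (1, 0)))
# Add first: ((3|1)+(1|2)) * (1|0)
right = me_multiply(bag_add((3, 1), (1, 2)), (1, 0))

print(reduce_bag(left), reduce_bag(right))  # prints (2, 0) (1, 0): not equal!
```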

#5. Sometimes new number systems are invented by people who just want to come up with a system having certain esthetically pleasing properties. My favorite example is William Hamilton’s invention of quaternions, which appears to have been motivated mainly by a desire to know whether complex numbers were the end of the road, or whether something might lie beyond, in the same way that the complex numbers lie beyond the real numbers.

6 thoughts on “Going Negative, part 4”

  1. Joseph

    “In future columns you’ll see other examples of constructions of new number systems from old.”

    Maybe one of those will be the surreal numbers? Please please? 🙂

  2. jamespropp Post author

    Here’s a hint for the three-colors-of-dots puzzle I raised at the end. The set of ALL triples (a|b|c) (with a,b,c nonnegative integers) is naturally seen as an infinite octant; then the set of reduced triples is the surface of the octant, composed of three infinite quadrants. But could there be a way to push this surface down into two dimensions where we can understand it better?

  3. Joseph

    For the three colors puzzle: I’ve been recusing myself, since I think I know which extension of the integers you’re going for here. But with no responses after two weeks, I think I’ll speak up now.

    Here’s how I think the operations should work:

    Addition: do it component-wise. So, (a|b|c)+(d|e|f)=(a+d|b+e|c+f).

    Multiplication: rotate the components. That is, (a|b|c)*(d|e|f) should be d copies of (a|b|c), plus e copies of (c|a|b), plus f copies of (b|c|a). Or, as one formula: (ad+ce+bf|bd+ae+cf|cd+be+af).
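One way to sanity-check this rotation rule numerically (a sketch; the mapping to complex numbers is the standard Eisenstein-integer picture): send (a|b|c) to a + bω + cω², where ω = e^(2πi/3), and verify that the proposed product matches ordinary complex multiplication.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # a primitive cube root of unity

def value(t):
    """Map the triple (a|b|c) to the complex number a + b*w + c*w^2."""
    a, b, c = t
    return a + b * w + c * w**2

def triple_multiply(x, y):
    """The rotation rule: (ad+ce+bf | bd+ae+cf | cd+be+af)."""
    (a, b, c), (d, e, f) = x, y
    return (a*d + c*e + b*f, b*d + a*e + c*f, c*d + b*e + a*f)

# Check agreement with complex multiplication on a few small triples.
for x in [(1, 2, 0), (0, 3, 1), (2, 2, 2)]:
    for y in [(1, 0, 0), (0, 1, 0), (1, 1, 2)]:
        assert abs(value(triple_multiply(x, y)) - value(x) * value(y)) < 1e-9
```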

      1. MathCookie17

        I also thought of the Eisenstein integers when I saw your three-color case, but I’d also like to add that this is a simplified version of how quarks work (see https://en.wikipedia.org/wiki/Color_charge). Basically, quarks have a property called “color charge”, which comes in “red”, “green”, and “blue”, and they cancel like in your example: one red, one green, and one blue cancel. Antiquarks have “antired”, “antigreen”, and “antiblue”, and a charge cancels out with its opposite, so one red and one antired cancel. To represent the cancellation, antired, antigreen, and antiblue are often depicted as cyan, magenta, and yellow respectively, the complementary colors of the color charges of the normal quarks. The particles that quarks make always have zero color charge: baryons (like protons and neutrons) have three quarks, one of each charge, while mesons have one quark and one antiquark of opposing color charges. In a sense, the “negative” of red in your system is a combination of green and blue, as it takes a green and a blue to cancel with a red; likewise for quarks, though antiparticles muddle things a bit: “green + blue” and “antired” are different things. This is, of course, a simplification. Quarks “change” color because of interactions between them via gluons, and their actual color charges aren’t single colors but probabilistic mixtures of colors, because this is quantum physics and everything is probability in quantum physics. The notion of “color charge” has nothing to do with actual color – quarks are far too small for visible light – but the name came about because of the three primary colors, and indeed color blending is a somewhat useful way to think about how quark charges work. I don’t understand all this in full, of course, since I gained this knowledge about quark charges from the Internet, so I wouldn’t be surprised if I got some things wrong here.

        Linking this with the Eisenstein integers, one can note that in color systems like HSL and HSV, hue is a circular value, and is often represented in degrees, so red is 0°, green is 120°, and blue is 240°. Thus, your three-color bags could be represented via a 2D colorful plane with a colorless origin (grey or black or something like that); the colors go around in a circle, with different angles corresponding to different hues, the color getting brighter the further you go from the origin (the complex plane is actually represented this way somewhat frequently; see https://en.wikipedia.org/wiki/Domain_coloring, and then see https://vqm.uni-graz.at/pages/complex/01_id1.html for examples; in the system used by the latter, red, green, and blue correspond to 1, ω, and ω^2 of the Eisenstein integers, which in some sense makes them the “true” units of this representation of the complex plane rather than 1, i, -1, and -i, which get red, chartreuse, cyan, and violet). In this “color blending” perspective of your 3-color bags, it becomes clear why combining a red, a green, and a blue dot gets rid of them all: blending red, green, and blue together in an RGB color system gets you something colorless (I’d say grey here; white is for when you ADD the colors, grey is for when you BLEND them. Blending is clearly the right choice here: for example, yellow and red blended give orange as expected, but yellow and red added just gives yellow because yellow already contains the maximum of both red and green. Adding makes some sense for RGB, but for HSL or HSV we want blending). In this system, “antired” really IS cyan, which is green + blue. 
        Addition being commutative is retained here, as addition becomes a sort of strange color blending, although I suppose it’s best to describe it as a variant of color adding with no limit on brightness: red + red is a brighter red, bright red + cyan is a less bright red, red + green is yellow, yellow + red is orange, and red + green + blue = yellow + blue = cyan + red = magenta + green = 0. (Fun fact about the Eisenstein integers: we know from basic complex number properties that abs(n + m), where m has the same magnitude as n but is 180° out of phase with n, is 0, and if we change the phase difference between n and m, we get abs(n + m) = 2n when the phase difference is 0° and abs(n + m) = n * sqrt(2) when the phase difference is 90°. It turns out that 120° is the difference where abs(n + m) = n, so abs(1 + ω) is 1, as is abs(1 + ω^2) and abs(ω + ω^2); therefore, in the color blending representation, red + green = yellow without any brightness offset. This isn’t true for the red and yellow mix, though, since those are only 60° apart; if the red and the yellow are both brightness 1, then the orange has a brightness of sqrt(3)). This color blending representation is just a different way of looking at the complex plane, using the Eisenstein units of 1, ω, and ω^2 as the primaries red, green, and blue, which are symmetric under addition but not under multiplication. Admittedly, multiplication doesn’t work so well for color (green x blue = red, yellow x green = cyan, green x cyan = magenta, etc.), because while complex addition results in an angle that’s somewhere between the angles of the two complex numbers (which is what you expect with color mixing), multiplication just adds the angles (which, as seen two parentheticals ago, results in colors that don’t make sense, and also breaks the symmetry of the color wheel).

        Honestly, you could probably write a whole essay on the advantages and disadvantages between the rectangular way of working with the complex numbers, the Eisenstein way of working with the complex numbers, and the polar way of working with the complex numbers. What I’m wondering now is: what happens for four-color bags? If the colors are red, yellow, green, and blue, and you say “red and blue cancel, and yellow and green cancel”, then you could get a definition that represents the rectangular complex integers (like how the two-color case represents the integers and the three-color case represents the Eisenstein integers), but what if you need all four colors to cancel? My first thought is that, since the two-color case forms a number line and the three-color case forms a complex number triangle, the resulting system would have its unit “numbers” form a tetrahedron, but that implies a 3D number system, and hasn’t it been proven that only power of two dimensions really work? Alas, color is only three-dimensional (since we have three color receptors in our eyes), so the quaternions elude its grasp…
