On Hunting Unicorns

When I was in high school and very much in the crackpot stage of my life, I used to be a big follower of cryptozoology; the study, and attempt to prove the existence of, creatures only described in the folkloric record. (Note that I say “study” here in the same sense that a serial killer might say that “murder is the study of human death” in a schlocky police procedural.) Whenever somebody would ask me why I thought flying deer or unicorns existed, I’d simply ask them to prove me wrong: hOw dO YoU KnOw tHeRe aReN’T AnY? dId yOu lOoK EvErYwHeRe aNd nOt fInD OnE?

When I got older and “wiser”, I realized my folly: had unicorns existed, we would have simply either shot them all to death or genetically modified them for optimal horn production, to the extent that we’d only be left with large bony masses and an atrophied horse body attached to them. But colt meeting Colt notwithstanding, believing in unicorns (or the cryptid of your choice) tends to be tricky to deal with both philosophically and scientifically, and in this entry I’ll try to show you why.

[Image: unicorn]

You see, scientists are in general cranky old codgers, and this grouchiness manifests itself through an all-pervading negativity when it comes to the discussion of scientific pursuits. (If I had a nickel for every time I’ve heard a scientist call another scientist’s work some variant of the word garbage, I’d have enough metal to run Smith & Wesson out of business.) And, like anything made by cranky old codgers, this negativity has been philosophically formalized into what can be summarized briefly as “you’re wrong, we just don’t know how yet”.* In that sense, scientists instinctively value beliefs that are (nominally) easy to disprove, which makes believing in the existence of unicorns tricky business.

*Incidentally, this is precisely why scientists call the claims of science “theories”; no matter how correct or proved these claims may be, we’re only one unicorn away from having people throw trash at us from the auditorium back seats during our university colloquium.

To illustrate, imagine we’re in some stodgy Viennese café in the early 1900s and two philosophers with opposite claims about unicorns are trying to convince us of their viewpoint: Philosopher 1 claims that no unicorns exist, while Philosopher 2 states that at least one unicorn exists somewhere on Earth. If we are the same kind of grouch that I discussed before, we would set about attempting to show which claim is “less wrong” by attempting to disprove one claim or the other.

However, notice the difference in effort to do so; to refute Philosopher 1, we’d only need to hunt (or hire someone to find for us) one unicorn to completely blow his claim out of the water. To refute Philosopher 2, we’d need to scour every single part of the Earth to make sure that unicorns aren’t hiding out in some remote unreachable corner of the planet.* Sure, finding a single unicorn would be a herculean challenge for any hunter, but it is far more challenging to survey the ends of the Earth to make sure that there aren’t any.

*Spoiler alert: Yes, that includes the ocean.


From the perspective of optimizing the amount of time we spend drinking coffee and making unpleasant faces at passersby, it is far more convenient to entertain the notion that no unicorns exist, simply because it’s far easier to disprove. And if you look closely, it turns out that claims that refer to a universal truth (i.e. there are no unicorns/everything is unicorns) are in general far, far easier to disprove than claims that are their direct refutation—namely, that at least one unicorn exists/at least one thing isn’t a unicorn. These types of claims are usually referred to as existential, in the sense that they claim that a specific thing exists (or doesn’t), and this “oppositeness” always holds; the refutation of a universal claim is always an existential claim, and vice-versa. If we were particularly pretentious—and if we’re in a Viennese coffee shop in the early 1900s, that’s extremely likely—we might even call it something like the “no unicorns principle”:

No Unicorns Principle: Claims that are universal have more philosophically scientific worth than those that are existential.
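If you wanted to make that asymmetry concrete, here’s a toy sketch in Python (entirely my own illustration; the philosophers supplied no code): refuting the universal claim can stop at the first unicorn it finds, while refuting the existential claim has to scan the entire domain before it can say anything.

```python
# Toy illustration (mine): the asymmetry between refuting a universal
# claim and refuting an existential one.

animals = ["horse", "deer", "narwhal", "goat"]  # a hypothetical "Earth"

def refute_universal(domain):
    """Philosopher 1 says 'no unicorns exist'; one sighting refutes him."""
    return any(a == "unicorn" for a in domain)  # stops at the first match

def refute_existential(domain):
    """Philosopher 2 says 'at least one unicorn exists'; refuting him
    requires checking every last corner of the domain."""
    return all(a != "unicorn" for a in domain)  # must scan everything

print(refute_universal(animals))    # no counterexample found (yet)
print(refute_existential(animals))  # we searched everywhere and found none
```

The point is in the shape of the loops: one search can terminate early on success, the other can only succeed by exhausting the whole planet (oceans included).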

If you look at the different theories in physics that have popped up throughout human history, you’ll find they share something in common: their claims are almost always universal, like the claim made by Philosopher 1, in the sense that they make statements like “everything is ████” or “everything of this type behaves according to █████”. Perhaps more strikingly, you’ll find that there are far fewer theories regarding the existence of things in the vein of what Philosopher 2 proposed; statements like “there is at least one object that behaves according to ████” or “there is a ████” are fairly sparse in the history of science.

This is, in the context of my previous arguments, no coincidence. That innate grumpiness in scientists has manifested itself as the historical tendency that almost all of our physical theories are of the universal type, and statements about the existence of weird objects in our universe are almost always made as a direct consequence of their prediction in universal theories (and even then we get into big fights about them). It is for that reason that most of our experimental efforts are focused on finding specific things, since it is through this method that we go about disproving all theories that don’t predict these things’ existence. And when it comes to hunting these unicorns, you better make sure you bring a big gun.

Quantum III: On Terrible Jokes and Cosmic Terrors

This is the third of three posts on quantum mechanics. See the first and second here and here, respectively.

I absolutely hate science “jokes”. I utterly, completely, despise them. Sure, they’re not all bad—but the overwhelming majority of them live in a weird space where the entirety of the punchline relies on a sense of smug self-satisfaction for knowing what the joke is referencing, making these jokes a great litmus test for finding out which of your friends is a pretentious tool who wouldn’t know comedy if it broke into their house and took their kids for ransom. To give you an example of this kind of crime against laughter, here’s a classic joke of this type (where I mean “classic” in the sense that the Hindenburg disaster is a “classic”):

Werner Heisenberg gets pulled over while driving. A cop comes over and asks, “Do you know how fast you were going?”

Heisenberg replies, “No, but I know exactly where I am”.

This joke is about as funny as dialysis. Medgar Evers may have said that you can’t kill an idea, but when it comes to this joke, God help me I can try. If the only way to get rid of the concept of this joke was to go back in time and prevent cars from ever existing, I’d get my jogging playlist ready in record time. This is a science joke in the sense that an Aston Martin made entirely out of compacted coffee grounds is a coffee machine.

But I digress; the point of this blog is not to point out bad jokes, but to explain what they’re referencing. And now that we’ve managed to get through the key conceptual hoops of quantum mechanics and what it can/can’t do in the previous entries, we’re in a good position to address this!

Let’s do a quick recap of what we’ve figured out so far:

  1. The universe doesn’t determine outcomes precisely, so the laws of quantum mechanics deal only with the probabilities of things happening.
  2. The way those probabilities change with time is very weird, to the point where we can’t describe this change using the typical method we’d use for probabilities in a non-quantum world. This is what causes quantum objects to be “glitchy” until you interact with them.

So really, the only thing keeping quantum mechanics from being a boring old theory about statistics like how you lose money in a casino is how those probabilities are changing over time! The question du jour is then obvious; what’s causing those probabilities to change over time?

Well, it turns out the core of what causes all of the weirdness in quantum mechanics is at the very heart of physics: energy! And, because it’s so important to the quantum world, let’s digress a little bit to talk about what we mean by energy. For the purposes of this entry, energy is just a number that depends on two things: how the object in question is moving, and how it interacts with everything else around it. As a result, energy depends on things like where other things are relative to the object, and how the object itself is moving.

As it turns out, energy is incredibly important in quantum mechanics because an object not having a specific energy is precisely what causes all of its probabilities to change over time. And if the probabilities don’t change over time, then there’s no difference between the behavior of a quantum pencil and its classical, statistical, brother (boring!). Hence, the universe often not assigning specific energies to quantum objects is where all the properly crazy quantum stuff comes from.

Let’s take a look at one way this tidbit triggers a weirdness cascade throughout the rest of quantum physics by delving into an example. Consider a quantum teapot, zipping through a completely empty universe. In this situation, if we knew the energy of the teapot precisely, then we would know its speed as well; in an empty universe, the energy of this teapot is exclusively dependent on its speed and vice-versa. In addition—if you trust my previous statements—the probabilities of the teapot’s observable properties shouldn’t change with time. If the teapot had, say, a 50% chance of being in position A when we measured its energy, then it should retain that probability for the rest of time.

But how could it? Remember, even though we might not know where the teapot is, we know where it can be, and we know that it has to be moving with a specific speed that we can discern. So if we also knew that the teapot was in the neighborhood of position A, we know that it would have to eventually move away from the neighborhood of position A, and the probability of it still being in position A would now have changed with time, contradicting our previous statement!

[Image: Teapot 1]

And no matter how much you play around with the information you have about where the teapot can be, there’d only be one scenario in which this wouldn’t be a paradox: if the teapot had the same probability of being everywhere, in which case the concept of position doesn’t have any meaning at all. It would become some kind of cosmic entity, omnipresent, eerily lurking in a “glitch” state, steadily moving through a desolate universe of itself where movement has no purpose.
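For the numerically inclined, here’s a small Python sketch (my own illustration, in arbitrary units, and definitely not part of the original thought experiment) of that one non-paradoxical scenario: a state with one exact momentum/energy in empty space is what physicists call a plane wave, and its probability density really is the same at every position.

```python
# Hedged numerical sketch (mine): a quantum object with a single exact
# momentum/energy in empty space is a plane wave, exp(i*k*x), and the
# probability density |psi|^2 is identical everywhere -- position
# becomes meaningless, exactly as the teapot argument demands.
import numpy as np

k = 3.0                               # definite momentum (arbitrary units)
x = np.linspace(-10.0, 10.0, 1000)    # positions across our toy "universe"
psi = np.exp(1j * k * x)              # plane-wave state with exact energy

density = np.abs(psi) ** 2            # probability density at each position
print(np.max(density) - np.min(density))  # essentially zero: flat everywhere
```

No amount of fiddling with `k` changes the flatness; only the density’s uniformity, not its value at any one spot, carries over from the exact-energy assumption.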

[Image: Teapot 2]

Scary, right? Welcome to quantum mechanics. This innate relationship between the nature of position and velocity, combined with velocity’s connection to energy and energy’s connection to changing probabilities, is what leads to that “Heisenberg uncertainty principle” hoopla everyone keeps talking about when they try to explain the punchline of their unfunny jokes to you. And trust me, all the other weird stuff you hear about in quantum mechanics pops out of thought experiments similar to this one; marrying this little energy–time change connection with other boring classical physics results, such as velocity being the rate of change of position in the example above.

With this in mind, I’ll discuss just one more quick example. The central tenet of all physics (and you definitely don’t want to mess with that) says that the energy of an object which isn’t exchanging energy with some other object is constant in time—energy can’t just spontaneously appear or disappear. This naturally implies that, once we know the energy of a single isolated object, it can’t change from that value the next time we measure it. As you can see from the previous example, this puts a lot of restrictions on how such an object can behave—and in more realistic and restrictive situations than the empty universe above, only specific energy values yield probabilities that are consistent with the extra restrictions imposed by the environment. And as you can probably surmise, this means you very often can’t find an object to have just any arbitrary energy after interacting with it, only specific values of it; this is the historical hallmark and experimental mine canary of quantum mechanics.
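As a hedged example of those “only specific values” (my own addition, not the author’s): the textbook “particle in a box” is the standard restrictive situation where the environment, here two hard walls, only admits a discrete ladder of energies.

```python
# Illustrative sketch (mine): the standard "particle in a box" result.
# Walls at the edges only permit certain standing waves, so the allowed
# energies form a discrete ladder, E_n proportional to n squared.
import math

def box_energy(n, L=1.0, m=1.0, hbar=1.0):
    """Allowed energy of level n for a particle in a 1D box of width L."""
    if n < 1 or n != int(n):
        raise ValueError("only whole levels n = 1, 2, 3, ... are allowed")
    return (n**2 * math.pi**2 * hbar**2) / (2 * m * L**2)

levels = [box_energy(n) for n in (1, 2, 3)]
print(levels)  # a discrete ladder with ratios 1 : 4 : 9
```

Measure the energy of such a particle and you can only ever find one of these rungs, never a value in between; that discreteness is the “mine canary” in question.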

That’s enough for now! I’ll conclude this entry by providing you with a science joke of my own:

Werner Heisenberg gets pulled over while driving. A cop comes over and asks, “Do you know how fast you were going?”

Heisenberg replies, “Now I do.”

He vanishes into thin air. The world feels changed; the colors off, the hues subdued.

The officer stares blankly into the empty seat. A nylon face forms in the seat upholstery; it whispers a single phrase.

“I am arriving.”

The officer begins to form the concept of fear. He vanishes before being able to do so.

Heisenberg has become the demiurge—he shapes and reshapes the universe as he sees fit. Stars die and are reborn in instants. Comets pulse in green and red as fractal Bauhaus palaces made of solid xenon crystals shatter and reform in the region once occupied by Saturn’s rings.

The Earth and its inhabitants fluctuate chaotically in the same manner; an irradiated wasteland consumed by eldritch nightmares one second, a savannah of polygons dotted with wireframe people the next. They are none the wiser to their predicament, their collective consciousness a fleeting mayfly. They are beyond hope now—they are beyond most things.

Ohm resists.

On Silly Shapes

Entropy is a tricky concept. Although it pops up everywhere—take this scene from Portlandia, for example—there are many different definitions for it, and as a result it tends to be hand-waved around as some kind of measure of “disorder” when explained in a casual situation. That being said, entropy is such a pervasive concept in physics that it has been blamed for anything from the eventual heat death of the universe to the fact that time exists as we know it! So to try and give this topic the attention it deserves, and to dispel the notion that entropy is somehow fundamentally tied to disorder, I’ll try to demystify the concept of entropy without ever touching on disorder by presenting a fairly specific (but conceptually broad) definition of it.

Consider some sort of “shape generator” that randomly pops out either a diamond, arrow, or circle with equal probability. In fact, you’ll find an (only approximately random) gif version of such a contraption below!

[Animated gif: shape generator. Left-click the gif and drag it to randomly pick a shape.]

As you can see, there are only three distinct possible outcomes of picking a shape. And, at least for this example, these outcomes contain all the information you could get out of the shape generator. In fact, by repeatedly picking shapes over and over, you could understand everything there is to know about the shape generator itself! We can call these shapes complete shapes* since they are both shown completely and they collectively provide complete information on the shape generator.

*The formal name for these in statistical mechanics doesn’t make much intuitive sense here.

Now I’m going to block off some information from you by taking the exact same shape generator from above and putting a blue bar over the right half of it. Try picking a shape below now!

[Animated gif: blocked shape generator. Left-click the gif and drag it to select your (blocked) shape!]

Notice that, even though the underlying shape generator is the same, you can pull only two distinct outcomes out of this blocked generator compared to the three in the unblocked one: a triangle or a semicircle. (This is because the left-hand side of the arrow and the left-hand side of the diamond are exactly the same.) We can call the shapes we get from this generator incomplete shapes, since they don’t show the underlying shape completely thanks to the blue bar.

From an informational perspective, the blue bar also doesn’t let us fully know what the underlying shape generator is; someone who has no idea what’s under the blue bar might not even think of—and can’t prove—the fact that there are two distinct “triangle-y” shapes. This is because, although the semicircle in the blocked case indicates a circle in the unblocked case, the triangle in the blocked case doesn’t uniquely correspond to either the diamond or the arrow. And this is precisely where entropy comes in.

[Image: mapping between complete and incomplete shapes]

Entropy is just a measurement of the number of complete shapes associated with an incomplete shape, which can be measured in this case by counting the lines attached to each of the incomplete shapes in the diagram above. Here, the triangle shape has an “entropy” of 2, while the semicircle shape has an “entropy” of 1—which is the lowest value it could possibly be in any shape generator, since there has to be at least one line attached to an incomplete shape. (You couldn’t put a blue bar on our shape generator and suddenly expect to see a hexagon, could you?)

In a roundabout way, entropy is giving you a quantitative measurement of how little information you get about an underlying system when you observe some specific “blocked” or inefficient measurement of that system.* A shape with large entropy has a large number of lines attached to it, which means that a bunch of complete shapes could be associated with the incomplete shape; a shape with low entropy has only a few complete shapes associated with it, which gives you less uncertainty about the underlying shape, and by extension, the underlying generator creating those shapes.

*Curiously enough, this quantity as described in terms of specific measurements isn’t called entropy in information theory; informational entropy is defined as the average of this quantity over all blocked measurements, which means that it’s really a function of the blocking itself rather than of a specific blocked measurement. I consider this to be very annoying.

That’s mostly all there is to it! One important thing I have to mention, which I consider a bit of a boring formality, is that the proper definition of entropy is actually the logarithm of the number of lines attached to each shape. This changes nothing about the qualitative statements I made above and, depending on your point of view, is more pragmatic than fundamental; it’s done chiefly so that if I have two identical shape generators and pulled out two triangle shapes, the entropy of both of those triangle shapes together is the sum of each individual triangle entropy rather than the product of them. Bo-ring!
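The line-counting bookkeeping is simple enough to sketch directly; here’s a toy Python version (the names and structure are mine, purely illustrative):

```python
# A sketch (mine) of the shape-generator bookkeeping: entropy as the
# logarithm of how many complete shapes hide behind an incomplete one.
import math

# Which complete shapes collapse onto which incomplete shape:
mapping = {
    "triangle":   ["diamond", "arrow"],  # two lines attached
    "semicircle": ["circle"],            # one line attached
}

def entropy(incomplete):
    """Log of the number of complete shapes behind an incomplete shape."""
    return math.log(len(mapping[incomplete]))

print(entropy("semicircle"))  # 0.0 -- the lowest possible value
print(entropy("triangle"))    # log(2): two candidates hide behind it

# The logarithm makes entropies of independent generators add up:
two_triangles = entropy("triangle") + entropy("triangle")
print(math.isclose(two_triangles, math.log(2 * 2)))  # sum, not product
```

The last two lines are the whole reason for the logarithm: two independent triangles have 2 × 2 = 4 possible underlying pairs, and the log turns that product into a sum.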

Also note that because the triangle shape has a larger entropy, and each complete shape is equally likely to pop up in the shape generator, the triangle is more likely to show up than the semicircle in our blocked generator. In fact, if there were a massive amount of triangle-y complete shapes compared to circular ones in our shape generator, the entropy of the triangle shape would be far larger than the semicircular one, and it would be almost certain that you’d pull out a triangle from the blocked shape generator.

What does this all have to do with thermodynamics? Well, in thermodynamics, we’re dealing with massive ensembles of individual physical objects (atoms/molecules) with many different physical properties such as energy, momentum, and so forth. Because it’s hopeless to keep track of all of the properties of all of these individual objects, we decide to “block off” our information about every individual object and keep track of only averages of these quantities in the entire ensemble. This then automatically defines an entropy connecting each of these “incomplete” averaged measurements to the “complete” collective states of the individual objects. And, if the largest entropy of an averaged measurement is far larger than the second-largest entropy as in the multi-triangle scenario in the paragraph above, then it is overwhelmingly likely for the ensemble to be in that largest-entropy state.
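As a hedged illustration of that last sentence (my own example, using coin flips rather than atoms): treat the total number of heads in a run of flips as the “incomplete” averaged measurement, and count how many complete head/tail sequences produce each total.

```python
# Illustration (mine, coins instead of atoms): the "incomplete"
# measurement is the total number of heads in N flips; its entropy is
# the log of how many complete flip-sequences produce that total.
import math

N = 100  # number of coins in our tiny "ensemble"

def macro_entropy(heads):
    # count the complete sequences with this many heads, then take the log
    return math.log(math.comb(N, heads))

# The balanced macrostate towers over a lopsided one, so a random
# sequence is overwhelmingly likely to land near 50 heads:
print(macro_entropy(50) - macro_entropy(10))  # a gap of dozens of nats
```

That gap is already enormous at 100 coins; with the ~10²³ molecules of a real thermodynamic system, the largest-entropy macrostate wins by margins that make “overwhelmingly likely” an understatement.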

Note that disorder never even came into the picture! The connection to disorder only obviously pops up when your “complete” system consists of a very large amount of identical subsystems, in which case the most likely incomplete measurements tend to be ones with little “organization” (and, by extension, little information) in the complete underlying system. In short, entropy is only a measure of disorder when disorder implies a lack of information about the system. And if you’re interested in seeing some examples of this, well, you’ll have to do a little bit of extra reading.

Quantum II: On G̴̡̕͝l̷̛̀͝i̧̧t̶̡̕҉͞c̀̕h̨̛̀͟e̸̶̡̕ş̴͡͏͞

This is the second of three entries on quantum mechanics. Read the first here.

Now that I’ve spent some time talking about how other people get quantum mechanics wrong, it’s about time I get my own chance to screw it up. Following the set of arbitrary rules I laid out in my last quantum mechanics entry, I’m going to start by listing some things you cannot do with quantum mechanics:

  • You cannot influence the universe by thinking about something.
  • You do not have a chance of spontaneously teleporting to Mars.
  • Your ex does not have a chance of spontaneously teleporting to Mars.
  • You cannot travel into alternate universes.
  • You cannot travel faster than light.

This can all be summarized by stating:

  • You cannot do anything impossible in “normal” physics using quantum physics.*

If you retain anything from reading this post, let it be this! Also, see that asterisk? I did say before that I’d alert you whenever I said something that is debatable, and that’s what I’m doing there. But fear not, I’ll explain it all in fine print at the end of my third QM entry; ignore it for now.

Anyways, given that little postulate stated above, you may ask what the difference between quantum physics and normal physics is. Well, back in the Renaissance, the prevailing notion was that the universe was like a complex mechanical computer; that given the state of the universe as it is now, the laws of physics would procedurally generate every future state without any ambiguity. Another way to describe this is that you could always predict everything that’s going to happen in the future if you knew everything about the universe right now. However, the basis for quantum mechanics is that this is false and you should feel bad for thinking that!

The basis of quantum mechanics is that the laws of physics are fundamentally incomplete. For certain situations, the laws of physics don’t say what should happen specifically, only what can and can’t happen. An intuitive example of this is standing a normal pencil on the pointy end; after letting it go, it will inevitably fall down, but there’s no obvious preference for the direction in which it will fall. What Renaissance physicists would counter with is that knowledge of the pencil’s tip shape at the microscopic level and the motion of the air molecules in your room will always tell you how the pencil will fall down—and that is absolutely true, with the caveat that it is functionally impossible to obtain that information.


The real problem appears when you go down to the smallest scales of the universe, to a single isolated particle (or a few); there are no hidden features there, no “tip shapes” or other properties to rely on when the laws of physics don’t explicitly tell you what’s going to happen. Of course, there are no subatomic pencils, but there are multiple processes in the quantum world (mostly involving radioactive decay) that preserve that feature of not having a precise or well-defined outcome.

This leads to two critical questions, the first being pragmatic and the second philosophical:

  1. What do these objects do when the laws of physics don’t determine their specific future?
  2. What do these objects become when the laws of physics don’t determine their specific future?

The answer to the first question, which may or may not be intuitive, is that they’ll do one of the things the laws of physics say they can do! Our quantum pencil will fall in one direction sometimes and another direction sometimes, and the best we can do is quantify those probabilities using experiments, develop some physical laws for those probabilities based on our data, and that’s that.

If you think about it, there isn’t really anything quantum-y about that at all. This is exactly the way we’d try to describe the physics of our normal pencil falling down; setting it up to fall down a bunch of times and recording the different outcomes to guess the probabilities of it going one way or the other. This is so ubiquitous in the world of science that it is its own long-standing field of physics: in short, bo-ring!

Where things get spooky is when discussing the answer to the second question I presented above. See, for normal objects, not measuring what state they’re in doesn’t affect anything about the probabilities in which you can find them. When the normal pencil falls down and you’re not around to check on it, you can still say it fell down to some specific position; you just don’t know which until you check. For a quantum pencil, this can’t be true!

This is due to the fact that, for objects with no specific future defined by the laws of physics, the probabilities of its possible outcomes can affect each other. This is completely insane! For a quantum pencil, the fact that it could fall to the right can affect its probability of falling in every other direction*. As a result, the chances that a quantum object will behave one way or another can be affected by when and how you interact with the object as you check its status! (The precise way in which those probabilities affect each other is so tricky that they not only needed to describe it indirectly by instead describing a related quantity, but they also had to use imaginary numbers to be able to describe that.)
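To give a flavor of what that parenthetical means, here’s a toy Python sketch (my own, with made-up numbers): in quantum mechanics you add complex “amplitudes” together before squaring them into probabilities, so one possibility can cancel another out in a way ordinary probabilities never could.

```python
# Toy sketch (mine, made-up numbers): quantum probabilities come from
# squaring sums of complex amplitudes, so possibilities can interfere.

amp_left = 1 / 2 ** 0.5    # amplitude for "fell left"
amp_right = -1 / 2 ** 0.5  # amplitude for "fell right" (note the sign!)

# Ordinary statistics would just add the two probabilities together:
classical = abs(amp_left) ** 2 + abs(amp_right) ** 2  # 0.5 + 0.5 = 1.0

# Quantum mechanics adds the amplitudes *first*, then squares, so the
# two possibilities can wipe each other out entirely:
quantum = abs(amp_left + amp_right) ** 2  # 0.0

print(classical, quantum)
```

Each possibility on its own is a 50/50 coin flip, but together they produce a certainty of nothing; that sign on `amp_right`, invisible in the individual probabilities, is exactly the kind of bookkeeping that forces physicists into complex numbers.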

For this reason, we can’t really say the quantum pencil exists in the same way a classical one does before you interact with it. In fact, it is literally impossible to comprehend it traditionally since it violates one of the three classic laws of thought! For the purposes of our brain, when the quantum pencil falls down and you’re not around to measure it, it’s in some weird glitch state until you interact with it, where you’ll find the quantum pencil fell down either to the left or right just like the normal one would.

[Image: Pencil 5]

To illustrate this difference from a philosophical point of view, let’s say that I was an amoral psychotic and decided to link the life or death of a cat to the direction in which a normal pencil is falling down. If the pencil falls to the left, the cat lives, while if the pencil falls to the right, the cat dies. I, meanwhile, am getting some coffee as this insane little experiment is going on.

At some point while the barista is preparing my ristretto, the normal pencil falls down and the cat either lives or dies, and will remain in that state as I go check on what happened with hot coffee in hand. Nothing weird here (other than the part where cats are dying).

[Image: Pencil 2]

Now we replace the normal pencil with a quantum one and observe that, because the life and death of the cat depend on the state of the pencil, the cat gets put into a glitch state too after the quantum pencil falls down! As a result, I can’t really say anything about whether the cat is alive or dead until I go check on it. (This is a very famous thought experiment.) Also worth mentioning is that in normal physics, this linking of statistical outcomes between pencil and cat has a name, but in quantum mechanics they call it something fancier even though it’s functionally the same thing.

[Image: Pencil 3]

If we want to make things really interesting, we can extend this experiment a little bit. Let’s say that, when I look at the cat post-pencil drop, I have an emotional reaction that differs based on what state I find the cat in. If it’s alive, I’m happy, and if it’s dead, I’m sad. Now let’s get a bit meta and say that some AI that can detect emotions is experimenting on me experimenting on cats, and that it was installing some Windows update until after I saw the outcome of my experiment. In that situation, from the point of view of the computer, both the cat and I are going to be in a glitch state until it checks what emotion I’m feeling; even though I am clearly seeing the cat as being either alive or dead!

[Image: Pencil 4]

This means the strange glitchiness is totally dependent on frame of reference and independent of whether or not something “conscious” is measuring things! This isn’t the first time we’ve run into this craziness, but it does have a lot of philosophical juice in QM that people love to spend hours debating (presumably while getting high on something). As a result, and this is a thing a lot of people get wrong, human consciousness does not affect quantum mechanics.

Alright! That’s enough for one quantum entry. In the next one, I’ll discuss some more cute examples of counter-intuitive behavior in QM and keep trying to stay on my mission of making quantum boring again. Wish me luck!

Quantum I: On Being a Former Crackpot

This is the first of three entries on quantum mechanics.

I have been trying for quite a while now to write something on quantum mechanics, but QM is a notoriously difficult thing to deal with in pop-sci. There is perhaps no more misinterpreted field of physics than quantum physics, and any informal talk of the interesting things that quantum mechanics can do will inevitably lead to comments or questions involving things like parallel universes, teleportation, the nature of consciousness, and so on. Unfortunately, these questions tend to be less about learning quantum mechanics and more about reinforcing misguided opinions about what people already think quantum mechanics is. As a result, for my first QM entry, I’m going to do something a bit peculiar and talk about the difficulties of writing about quantum mechanics rather than write about quantum mechanics itself. (Meta, I know!)

You see, quantum mechanics, thanks to its esoteric charm and strange predictions, serves as a magnet for all sorts of kooks and cranks who are raring to tell you that all your problems can be fixed with a quick dose of quantum snake oil. The reason they get away with this is two-fold:

  1. Most science advocates have marketed quantum mechanics as an exotic otherworldly concept where “impossible” things happen.
  2. All simplified descriptions of quantum mechanics are bound to be missing something essential.

The first is a regrettable but expected consequence of trying to get people engaged with physics. Just like everyone going into acting dreams of being the next big Hollywood star, most would-be physicists started getting into the field because they thought they’d figure out the secret to time travel/teleportation/etc. and saw quantum mechanics as their “in”. (This sometimes being because a book by [insert pop-sci author of choice] talked about things that sounded like that).

The problem with this is generated by those who decided not to continue their interest in quantum mechanics through formal study, and proceeded to take all the fanciful metaphors and simplified explanations in these books literally. Then they see someone famous tell them that they can improve their life through some weird mystical quantum nonsense, conclude it matches more or less what they read in those pop-sci books, and get reeled right in.

I am intimately familiar with the allure of this quantum quackery because I was one of the suckers who fell for it! In fact, the entire reason I got into physics in the first place was because, when I was in high school, I watched a “documentary” on quantum physics by what I later learned was an insane cult whose leader believes she can psychically channel a Lemurian warlord from 33,000 BC. (I’m not kidding.) It was only after I had made several science teachers worried about my opinions and took a proper look at quantum mechanics that I realized how much of a moron I had been. I still remember the look* on my physics teacher’s face when I told him that people can control the molecular structure of water with their thoughts.

*It was a liberal mix of Vietnam war flashback thousand-yard stare and that face you make when you find your dog took a dump on your carpet.

In fact, why don’t you take a look for yourself! Watch Academy Award-winner Marlee Matlin really earn her paycheck by listening to a Liza Minnelli lookalike tell her about how saying nice things to water can make it “better” (and also by getting hit on by someone whom I can only describe as the used car salesman version of Cipher from The Matrix).

Seriously, the only thing that could have made that guy creepier is if he pulled out a ticket wheel and offered her a VIP pass to climb Mt. Baldy. But I digress.

It’s worth noting that the reason I got suckered into believing this garbage is precisely because nothing anyone said in this fake documentary sounded at odds with what I had read in pop-sci books. In fact, every single book talked about what QM could do, and none talked about what it couldn’t. So I’ll jot that down as the first tenet for my following entries on QM: explicitly lay out what quantum mechanics can’t do.

The second item on my list is a more fundamental result of the difference between the language of mathematics and conventional spoken/written language. Just like there are certain words in other languages that are untranslatable into English (saudade, hygge), most of physics is not fully translatable into any verbal language. The most we can get away with, as is the case with the words above, is to give our best attempt at nice albeit incomplete explanations of it. If that weren’t the case, why would we even bother putting up with math at all? Physicists could just learn everything we need to know through blog posts like this one instead of expensive math-laden textbooks! And like the meal descriptions at your local dodgy Chinese buffet, when things are hard to translate, you usually wind up getting at least a little bit of nonsense.


The people who tend to succumb to this the most are philosopher types who want to associate quantum mechanics with some particular metaphysical or philosophical viewpoint. And that’s totally fine! In fact, all of science was originally conceived for this particular purpose; that’s why it used to be called “natural philosophy”. But to make that sort of argument, learning the mathematics that QM is written in is paramount. For example, I once had a discussion with a philosophy student that went something like this:

  • My work is on relating quantum mechanics to philosophical principle x/y/z.
  • Wow, you should find a way to argue it using the mathematical formulation of QM!
  • Well, I’m not really interested in learning the mathematics.
  • Then you’re not a good philosopher.

Prickly, I know, but trying to argue philosophical viewpoints in QM without using the math is like getting an orchestra to play Strauss’s Also sprach Zarathustra by humming to them what they should play, without any sheet music. It is, although theoretically possible, hilariously time-consuming, and almost certainly going to sound like this.

Does that mean that all pop-sci descriptions of QM are useless? Of course not! But what it does mean is that all pop-sci descriptions of QM are going to be incomplete, and the best that we pop-sci writers can do is point out those missing pieces in our write-ups. So let me put that down as my second tenet: I am going to tell you what parts of QM I haven’t explained or don’t fit my analogies. Essentially, if you can’t spot the mistakes in your pop-sci explanation of quantum mechanics, the mistake is that you’re explaining it.

In short, what the next two entries will represent is not an attempt at describing quantum mechanics correctly, since doing that is impossible without talking about Hilbert spaces and probability amplitudes and whatever. What it is is an attempt at explaining quantum mechanics as best as I can while following the two little rules I set up for myself:

  1. Explain what quantum mechanics can’t do.
  2. Explain what’s missing from my pop-sci description of quantum mechanics.

It turns out that, once you establish one or two mind-boggling things, everything else is fairly obvious! In fact, if after reading these entries you come away with the idea that quantum mechanics is boring, I will have done my job.

On Losing Your Marbles

There is, perhaps, nothing scarier to a physicist or mathematician than losing the ability to perceive reality objectively. We rely critically on our ability to discern whether something makes logical sense, and experiencing a reality that is individual to us and not shared is a horribly isolating prospect. I know this from experience; having spent a significant amount of my childhood around older family members suffering from varied forms of dementia, I am perennially sobered by the statistical likelihood that I will, like them, retreat into a private screening room of the world, a personal universe where what I see and do only makes sense to me and appears as irrational nonsense to everyone else. However, the universe has decided to play a mean prank on those of us who fear a lack of objectivity, and I’ll try to show you in this entry just how bad this goof is (and how a very common physical phenomenon is a result of it).

Imagine a rocket, traveling to Mars very close to the speed of light, zooming past you over the horizon. Strangely enough, you see that the rocket has a very strange asymmetrical shape as it flies by, and this doesn’t make sense to you because you’ve been on this rocket to Mars before and saw it as perfectly rocket-shaped back then. You then look at it when it docks back on Earth, and there it is, stationed in the hangar, looking exactly the same as it did when you were on it: normal and symmetrical. “That’s strange,” you may say, and dismiss your previous observation as some trick of your mind; until you see this strange lopsided rocket over the horizon again on its next trip! Having come prepared, you take a picture, and show it to the permanent inhabitants of the rocket as proof. They will say without hesitation that they have never seen this rocket change shape; and to prove it to you, present on-board camera footage showing that the rocket has never altered its form! They would then probably accuse you of doctoring your photograph, quietly assess you as a lunatic, and send you off amidst scattered chuckles.

The root of the problem here is that the universe will alter your perspective of things that are moving quickly relative to you. In fact, since all the universe cares about is relative speed, the people on the rocket would have seen you in a similarly strange asymmetric shape too if they had windows!

[Figure: Rockets 2]

These “hallucinations” are not just visual either; if the rocket had an electrical charge, you would detect an electric field that’s compressed in the direction of motion. This is strange because someone on the rocket would also get a completely different measurement of the electric field, and would call you insane and your equipment defective just like in the example above. The only difference between a “normal” hallucination and this specific type of hallucination is that, since they affect our equipment as well, we can document them consistently, make predictions about them, and subsequently generate a set of physical laws governing this movement-induced psychosis called special relativity.*

*It is an interesting and unfortunate historical coincidence that Einstein, who created special relativity, had a son who suffered from schizophrenia.

Getting to the meat of this entry, I’m going to give you a slightly deeper look at how special relativity gunks up our understanding of the universe. Picture a positively charged particle next to a very long wire containing equal amounts of positive and negative charge, distributed evenly along it. Since the charges in the wire are evenly distributed, the net charge of the wire is zero, and the single particle doesn’t feel any push or pull towards it.

[Figure: Current 4]

Now consider what happens from the particle’s point of view if I make all the positive charges, both itself and those in the wire, move to the right at the same speed. The positive charges don’t change since they don’t move relative to the particle; but the negative ones appear as if they’re moving to the left, and become compressed horizontally like the electric field from the rocket. What’s astounding is that this deformation causes the negative charges to be more densely packed than the positive ones from the particle’s point of view, and so the particle feels a wire with a slightly negative net charge! And since opposite charges attract, the particle will also drift toward the wire as it moves to the right.

[Figure: Current 3]

Now let’s see what happens if I try to move the individual charge and the positive charges in the wire in opposite directions. The negative charges would still appear deformed like in the example above, but the positive particles in the wire will be even more deformed/tightly packed because they’re traveling twice as fast to the left as their negative counterparts! Consequently, the particle feels a net positive charge on the wire, and would travel away from the wire as it moves forward.

[Figure: Current 5]
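If you’d like to put toy numbers on this, here’s a sketch that follows the cartoon rules of this entry only: lab-frame densities of ±λ for the two species, a moving species’ density multiplied by its Lorentz factor relative to the particle, and plain speed addition instead of proper relativistic velocity addition. The specific numbers are arbitrary choices of mine.

```python
import math

def gamma(v):
    # Lorentz factor; v is a speed expressed as a fraction of the speed of light
    return 1.0 / math.sqrt(1.0 - v * v)

lam = 1.0  # lab-frame linear charge density of each species (arbitrary units)
v = 0.3    # speed of the test particle (and of the wire's positive charges)

# Case 1: particle co-moving with the wire's positives.
# Positives are at rest relative to the particle (density unchanged);
# negatives move at v relative to it and appear contracted, hence denser.
net_same = (+lam) + (-lam) * gamma(v)
print(net_same < 0)  # True: the wire looks negative, the particle is pulled in

# Case 2: particle and wire positives move in opposite directions.
# Relative to the particle, the positives now move at roughly 2v (cartoon
# arithmetic) and the negatives still at v; the positives contract more.
net_opposite = (+lam) * gamma(2 * v) + (-lam) * gamma(v)
print(net_opposite > 0)  # True: the wire looks positive, the particle is pushed away
```

Any speed and density you pick give the same signs, which is all the argument needs.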

Now, these are all things that only the moving particle is perceiving. If we were sitting down in a lab and accelerated the particle while a current was running through the wire, we wouldn’t feel the wire suddenly gain or lose a net charge; we’d only see the particle moving forward and then somehow start drifting towards/away from the wire for no reason! From our point of view, our particle is hallucinating the existence of some net charge in the wire, and is reacting as the laws of electricity dictate it should in that situation. Describing how special relativity warps the perception of charges directly is a little complex; luckily for us, we can avoid this by interpreting these strange behaviors in charged particles as due to another physical phenomenon called magnetism.

Voilà! Now you know why currents running in the same direction attract, while currents running in opposite directions repel. You now also know why people say “electromagnetism” (other than it sounding cool), since the laws of magnetism are just the laws of electricity in a different frame of reference. Sadly, this doesn’t intuitively explain the source of magnetism most familiar to humans, the bar magnets you see everywhere; these don’t have an electric current, so how do they generate magnetic effects? A simplified answer is that the electrons in the atoms of these magnets are all spinning around in a synchronized way; for the purposes of external charges, this is equivalent to a very large electrical current running along the surface of the magnet, and this “current” is what triggers the magnetic effects.

I’ll finish off by answering a question I once had regarding how special relativity distorts the laws of physics: “If that happens to electricity, doesn’t some kind of magnetic analogue exist for gravity too?” And the answer is yes, there absolutely are effects in gravitational physics that pop up thanks to the movement of mass. The only reason we don’t really talk about them much is because 1) the laws of gravity already involve special relativity directly (hence the name “general relativity”) and 2) these effects are much weaker than their electromagnetic counterparts.

On Gambling Your Savings Away

Everyone who knows me knows I am a betting man. I have an almost comical obsession with putting money down on everything, from the mundane to the ridiculous; I once bet a friend 10 bucks a Pulitzer Prize-winning author would get my name wrong in a signed dedication. (I won.)


I have, however, avoided casinos throughout my life like the plague. The windowless rooms, purified oxygen, and neutral expressions of fellow gamblers have led me to believe that casinos are some sort of terrestrial purgatory where you slowly but surely rid yourself of your sins (read: money). In this entry, I’ll try to convey just why I resist the allure of these glitzy gambling institutions and explain how the flow of heat from hot to cold is connected to the flow of money from your wallet to the craps dealer.

Gamblin’ Heat

Say you go to a casino and play a simplified version of roulette, where you can bet on either red or black with both having equal chances of being the landed color. (You could imagine betting heads or tails on a coin flip, it’s functionally the same thing.) Since this is a casino that’s interested in taking your money, let’s say that they give you slightly less than double what you bet when you win. In addition, I’m going to assume for simplicity that you have a terrible taste in bets and gamble on red all the time. In this system, I can easily show you all the possible gambling outcomes if you just gamble twice.

[Figure: Bet 1]

Simple enough; note that there are two different ways in which you can win one bet, and a single way to either lose or win all your bets. Here are all the possible outcomes for a 4-bet gambling run:

[Figure: Bet 2]

Now there are six different ways for you to win half of your total bets, while still just one way for you to lose or win all your bets. For a gambler like me, a useful thing to do is to observe the number of outcomes for a given number of successful bets, as that tells me the relative likelihood of winning some number of bets (and that’s all I really care about). As this quantity appears to be so important, I’m going to plot it below and keep plotting it as we go to longer gambling runs.

[Figure: BetPlot4]

Since the number of outcomes is too large to list individually for bigger betting runs, let’s see how our outcomes vs. betting wins plot evolves when we analyze runs from 5 bets to 150:

[Figure: ezgif.com-crop]

The number of ways in which you can win half of your bets in a 150-bet run is ridiculously huge! In fact, I’ll type the number out just to scare you: 92826069736708789698985814872605121940117520. But the thing I want you to focus on is the fact that the graphs are getting both taller and narrower as we increase the number of bets; this means it’s becoming more and more likely for me to win a certain number of bets (half, in this case) and less likely for me to win any other number of bets. This tendency is important to spot because every casino game will behave like this simple version of roulette when the number of bets is very large. In fact, the tendency is such that these plots will eventually become infinitely narrow (relative to the length of the run) as I increase the number of bets, leading to the following general statement for any kind of casino with any number of different games:

For a sufficiently long betting run, a gambler will always win an essentially fixed proportion of his bets.

I say essentially here because you’ll typically win a number of bets close to, but not exactly at, this proportion; the relative difference becomes negligible very quickly, though. (If you had enough money to make a trillion roulette bets, would you care that you won 500000000001 times instead of 500000000000?) It is also not impossible for you to win every single bet you ever make, of course; it is just phenomenally improbable.
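If you want to check this counting for yourself, Python’s `math.comb` reproduces it directly; the 1% window below is an arbitrary choice of mine, purely for illustration.

```python
from math import comb

# Number of distinct win/loss sequences giving exactly k wins out of n bets.
print(comb(4, 2))     # 6 ways to win half of a 4-bet run, matching the plot above
print(comb(150, 75))  # the scary 44-digit number quoted earlier

# Relative to the total 2**n outcomes, the peak takes over: the share of
# outcomes landing within 1% of the halfway mark grows with the run length.
for n in (100, 1000, 10000):
    near_half = [k for k in range(n + 1) if abs(k - n / 2) <= 0.01 * n]
    share = sum(comb(n, k) for k in near_half) / 2 ** n
    print(n, share)
```

The printed shares climb steadily toward 1, which is the “essentially fixed proportion” statement in numerical form.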

Since a casino will always manipulate payouts to ensure that winning that magical proportion of bets gives you a net loss, what this is effectively saying is that you will always lose money if you gamble long enough. And since one gambler betting a large number of times is the same as a bunch of people betting a moderate number of times, a busy casino will always make money. All an Atlantic City hotshot needs is to get morons to stay in their big ritzy oxygen chamber and cash will just come pouring out! Note that there’s absolutely nothing stopping you from walking in, winning every single bet you make, and walking away with a fortune; a sufficiently busy casino knows there’s some other poor schmuck somewhere in its glamorous bowels losing more money than you just won. And again, it’s not impossible for everyone to suddenly get a lucky streak and break the casino’s bank; it is just so fantastically unlikely that it is more probable for a plane to crash into your casino every year than for a group of 10 people to win 15 consecutive bets at the same time.

Old-Fashioned Heat

Moving on to the science-y part of this entry, the statement I made in bold above is strongly linked to the laws of thermodynamics, which, like that statement, are actually just very strong statistical tendencies. In some stable gas, kinetic energy is constantly shuffled around among all its particles, as if every particle were simultaneously gambler & casino. However, if you measure the kinetic energy of some large number of these particles, it becomes more and more likely that you’ll measure a certain total energy for a given number of particles; just like it becomes more and more likely to win a certain number of bets (half) as you increase your total bets. Take a gander below if you don’t believe me!


Another way to look at this is by saying that the ratio of total measured energy to particle number becomes effectively fixed as the number of particles you measure becomes very large. This quantity, after scaling by some constants, is what we call temperature. If we instead looked at the ratio of total measured energy to the measured volume, we would get (again after scaling by some constants) the thermodynamic definition of pressure.

If the number of measured particles is very small, these notions of temperature and pressure would not make any sense, as these quantities would fluctuate wildly between measurements. Correspondingly, we would not be able to make any predictions based on them, and thermodynamics as a field would cease to exist! Luckily, every chunk of matter at our scale contains an enormous number of particles (a liter of water contains 3.346×10^25 molecules of H2O), so it is still much more likely for a plane to crash on you than for you to read a fever on your thermometer when you’re actually fine.
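This pinning-down of the energy-per-particle ratio is easy to simulate. Here’s a minimal sketch, assuming (purely my choice, for illustration) that each particle’s kinetic energy is drawn from an exponential distribution with mean 1:

```python
import random

random.seed(0)

def energy_per_particle(n):
    # Average kinetic energy of a sample of n particles (arbitrary units);
    # this ratio is our stand-in for temperature.
    return sum(random.expovariate(1.0) for _ in range(n)) / n

# Tiny samples fluctuate wildly between measurements...
for _ in range(3):
    print(round(energy_per_particle(10), 3))

# ...while huge samples pin the ratio down to essentially the same value,
# which is why "temperature" only makes sense for lots of particles.
for _ in range(3):
    print(round(energy_per_particle(100000), 3))
```

The small samples scatter all over the place; the big ones barely budge from the mean.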

Going back to the gas example, say I now heated some small section of it for a while; for a gas with a decent number of particles, it would be very unlikely for the heated particles to remain in the same region and/or avoid nonheated particles wandering close to their turf. In short, there are many more outcomes in which that extra kinetic energy gets distributed to the rest of the gas than outcomes in which the energy stays with the original gang in the same area. This means that the second law of thermodynamics, the fact that heat flows from hot to cold, is not a fixed law of nature; it is just an overwhelmingly likely tendency.

I’ll finish off with a little addendum; notice how quickly those numbers got big for our outcomes vs. wins plots in our roulette example. In fact, my computer couldn’t even handle doing the calculations for a betting run of 200! In order to size these numbers down in a practical way, scientists and mathematicians take something called the logarithm of the number of outcomes for a given condition (number of bets won in the roulette example, energy for thermodynamics) and base all their calculations and theorems on that. This quantity, which behaves qualitatively just like the number of outcomes for a given condition, is called entropy; and that is why you hear the second law sometimes quoted as “entropy tends to a maximum”.
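As a quick illustration of why the logarithm helps, here’s a sketch using Python’s `math.lgamma`, which computes log-factorials without ever building the gigantic integers themselves:

```python
from math import comb, log, lgamma

# The 44-digit outcome count from before shrinks to a manageable number...
print(log(comb(150, 75)))  # roughly 101

# ...and log-gamma gives log(n choose k) = log(n!) - log(k!) - log((n-k)!)
# directly, so even runs far too long to count explicitly pose no problem.
def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

print(log_comb(150, 75))           # same ~101 as above
print(log_comb(10**6, 5 * 10**5))  # the "entropy" of a million-bet run
```

This is exactly the trick that keeps statistical mechanics computable: work with the entropy, not the raw outcome count.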

On Writing Novels (and Being Fun at Parties)

Physicists are not much fun at parties. Conversations with us about the universe usually follow a common pattern; you’ll mention or discuss some creative and interesting idea about reality like “cold stars” or faster-than-light travel, and we’ll coldly shut the idea down with some unnecessarily verbose, almost pedantic technobabble. In this entry, I’ll try to explain why our consistent stuffiness is an acquired trait of the business, and how being boring with our ideas prevents us from conceptually destroying billions of galaxies.

Say somebody asks you to write the next novel starring Hero X, the star of a multi-decade science-fiction franchise. Enthused, you proceed to write about his thrilling adventures in the Crab Nebula, only for your editor to tell you that the Crab Nebula got blown up in a comic book spinoff starring Hero X’s dog companion. You then try to write a story about stopping a cryogenically preserved Soviet cyborg, until your editor lets you know that the Soviet Union never existed because a side character in Hero X’s time travel novel mentioned Tajikistan in 1934. Eventually, you decide to write a quaint book about Hero X facing his inner demons during a beach retreat in the Maldives; that is, until the editor mentions that the creator of Hero X wrote a letter to a fan stating that Hero X is terrified of the ocean and that the Maldives were teleported onto Deimos in an audio drama from the 1980s. At this point, you would probably strangle your editor and start your own franchise unless you were insane (or a physicist).

[Figure: Hero X 2]

See, devising interesting and creative ideas in physics that stick is difficult in exactly the same way as writing a Hero X novel; any single addition to our catalogue of creation interferes with everything else in monumental ways. Inventing anything that goes even slightly faster than the speed of light means you can now time travel into the past. Making a single magnet with only one pole causes electric charge everywhere in the universe to be restricted to specific values (this may actually be true). Creating a machine that harvests the tiniest amount of energy out of nowhere means that no coherent laws of physics can exist, as I explained in my previous post. Even an idea that seems harmless, like assuming that there is some inconceivably small minimum distance in the universe, puts the “relativity” part of general/special relativity straight in the shredder. Even though these examples would take far too long to elaborate on in this single entry, I’ll discuss in relative detail another creative idea physicists had some years back with similarly unexpected consequences.

Here’s a thought; say that there’s some large, isolated patches of the universe made up purely of antimatter, with antigalaxies and antistars and antiplanets full of antipeople just like you and me. Doesn’t seem like a problem at first glance, right? Antimatter is just matter with the electric charges (and some other things) flipped backwards, and that doesn’t really stop you from generating all the elements needed to make anti-you. In fact, the laws of physics don’t seem to indicate any difference between matter and antimatter other than that little inversion quirk, so why shouldn’t there be just as much antimatter floating around somewhere?

The first problem with this idea is that the “borders” of these antimatter domains would often come into contact with our own normal matter domains; and since matter and antimatter don’t like each other very much, these collisions would utterly delete everything in their immediate vicinity from existence (normal and anti), triggering bursts of intensely powerful light that would serve as the single brightest events in the known universe. Given that quite a few experiments have been looking for these huge flashes to no avail (and trust me, we’d see them), this little attempt at creativity of ours would seem to be a bust.

Another, even bigger, problem; if the universe was actually made up of equal parts matter and antimatter, there’s no reason why both kinds of matter wouldn’t have destroyed each other completely right after the Big Bang, when the universe was the size of a walnut! We’d have been left with an empty universe, filled only with faint, dead light. Consequently, we are forced to conclude that the laws of physics discriminate in some tiny way between matter and antimatter, and are left to figure out why this caused our universe to be composed of (almost) exclusively matter. Good luck, theorists!

[Figure: Strange Science 2]

This should give you a bit of insight as to why the last two major theoretical advances in physics, quantum mechanics and relativity, have been such complete retcons of our previous laws of physics instead of just additions “tacked on” to the old ones. Imagine how conceptually crowded physics must have been at the start of the 20th century if the only way scientists could explain (without contradictions) why lava glows red was by developing a theory that lets everything have a probability of existing almost everywhere in the universe at the same time. This is like getting away with your Hero X beach retreat novel by writing a chapter where an intergalactic version of the CIA finds an exact clone of the Earth (Maldives and all) in the Andromeda Galaxy, controlled by aliens that have replaced all the water with a virtually identical synthetic compound. Hell, I’d read that!


I’ll finish this entry by mentioning one of the very few ideas in physics that is both simple and non-disruptive: dark matter. To make a long story short, a scientist in the ’70s realized that for galaxies to spin the way they do, either 1) Einstein’s theory of gravity is fundamentally wrong or 2) there’s a bunch of invisible stuff we haven’t discovered yet floating around in every galaxy. Ambitious and creative theorists have jumped at the chance to rewrite the laws of gravity to account for this, but the above examples may clue you in as to why these theories have largely not been successful; rewriting even a small piece of Einstein’s equations leaves you unable to explain why stars last longer than two weeks or why light bends around stars the way it does. On the other hand, if you just assume that there’s a specific type of particle zooming around that’s invisible to our telescopes, then that’s all there is to it! No need to rewrite every known principle in the laws of physics. All that’s left is to actually find the damn thing, which is (of course) hilariously difficult; but there are dozens of scientific collaborations filled with people much smarter than me working on it. I should know; I used to work for one of them! Wish them luck, and don’t be too hard on your physicist friend when he tells you that black holes being wormholes to another part of the universe would violate conservation of energy.

On the Behavior of Cats

In this entry, I’m going to try and show you what might be the most fundamental concept in all of physics by talking about some of the universe’s most mysterious objects: cats.

Whether it’s the perpetual obsession with sleeping in boxes, the constant desire to go outside and then immediately come back inside, or their fascination with canned spaghetti (this may just be my cats), the mind of a feline would appear to be an utterly undecipherable thing. But every cat owner has attempted to understand their cat’s behavior in some manner, and I’ll try to explain this process to those who have never owned a cat.*

*Cat people will agree that it’s technically the cat that owns you.

In attempting to figure out my cat’s strange culinary preferences (which appear to include everything from Swiss cheese to guava paste), I decided to leave some broccoli out on a plate one day and see if he’d take a bite. By the end of the day, my cat had left the plate of broccoli untouched, rejecting it in favor of the roasted chicken leg I was holding in my hand at the time, and leading me to conclude that my cat does indeed not like eating his greens.

To present the above in a more abstract way, I had attempted to make a guess at some aspect of my cat’s behavior, which can be described by a statement like “My cat does not like eating broccoli”. Now, since the nature of a cat is inscrutable, I have no way of fundamentally knowing what my cat truly likes eating; but I can attempt to associate my prediction of my cat’s food preferences with some measurable constant quantity like “the amount of broccoli on my cat’s plate”. If the amount of broccoli on the plate didn’t change after a day or two, then I’d reason that my cat didn’t eat any because he tasted it and didn’t like it. If instead I see that the amount of broccoli drops by some amount every day, I might say something along the lines of “my cat enjoys eating broccoli every day”. Simple!

[Figure: Cat Example 1]

Although strange at first glance, the association of a behavioral prediction/statement with some constant quantity is something we do every day. You may predict that your father likes steak because he visits a local steakhouse a regular number of times a month, for example; you may also observe that a friend takes a constant type and amount of pills every week and conjecture that he suffers from a chronic medical condition. Both behavioral statements, “my father likes eating steak” & “my friend has a chronic condition”, are fundamentally tied to some constant quantity that we can observe (“the number of times my father visits a steakhouse each month” & “the number/type of pills my friend takes each week”, respectively).

Now, I can take it a little bit further philosophically and (cue the Inception horn) make a statement about my statements of what my cat’s behavior is like. An obvious one is to assume that my statements are always going to be right; that is to say, that a statement I make now will hold for all of time. That does not mean that my cat’s behavior can’t change over time; it just means that I have to describe that change in my statement, which itself cannot change over time. One may argue fairly easily, from a philosophical perspective, that this is a requirement in order for the statements to be correct or consistent. But the point is that this meta-statement about the “correctness” of my statements is a statement in and of itself.

[Figure: Cat Example 2]

With all of this in mind, consider that a theoretical physicist’s job is to devise a series of statements/predictions about the behavior of everything in the universe; these being what we call “the laws of physics”. All of these laws can be judged to be “consistent” by stating that even though things in the universe may change over time, the laws of physics themselves shouldn’t change over time. Nothing controversial about that!

But what is absolutely stunning is that a very smart mathematician realized that this statement about the consistency of the laws of physics is also associated with a measurable constant quantity just like in my cat’s example! And in fact, this mysterious conserved quantity tied to the consistency of our laws of physics is something called energy. The whole point of this preamble was to get you to believe me when I state what might be the most fundamental principle in all of physics:

Conservation of energy is the consequence of unchanging laws of physics.

Consider what would happen in a universe where the laws of physics did change over time. I’d perform an experiment today and get some result; then I’d do the same experiment the next day and see something completely different! In fact, it would become completely impossible for science to exist at all since I wouldn’t be able to make any lasting conclusions based on my experimental observations. In this alternate reality, we’d have no guarantee that we wouldn’t suddenly fly out of our office windows, transform into polar bears, or collectively develop a sharp hatred for people who wear lime-green sandals. The entire concept of engineering goes into the gutter since buildings could just spontaneously transform into VHS cassettes of The Sandlot. Our existences would be a chaotic nightmare; our unknowable universe would just be one big metaphorical cat. Thanks, science!

For another fun fact, that very same mathematician I mentioned above went on to state that if the laws of physics are consistent across all of space as well as time, another conserved quantity shows up called momentum. Regardless of whatever definition of energy and momentum you may have been taught in high school, their true definitions are “the things conserved when the laws of physics are consistent in time and space (respectively)”. Now you know the real reason why your teachers kept mentioning those two things over and over!
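For the mathematically inclined, here is a compressed sketch of that connection (this is the simplest form of Noether’s theorem; the symbol \(L\) below is the Lagrangian, the standard mathematical packaging of a system’s laws, and \(q\) is a coordinate of the system):

```latex
% If the laws of physics (packaged into a Lagrangian L(q, \dot q, t)) have no
% explicit time dependence, i.e. \partial L/\partial t = 0, then the quantity
E = \dot q \,\frac{\partial L}{\partial \dot q} - L
% is conserved. Indeed, using the Euler–Lagrange equation
% \frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q}:
\frac{dE}{dt}
  = \ddot q \,\frac{\partial L}{\partial \dot q}
  + \dot q \,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q}\,\dot q
  - \frac{\partial L}{\partial \dot q}\,\ddot q
  - \frac{\partial L}{\partial t}
  = -\frac{\partial L}{\partial t} = 0.
% Likewise, if L has no explicit dependence on position (space-translation
% invariance), the momentum p = \partial L/\partial \dot q is conserved.
```

The conserved quantity \(E\) is exactly the energy of the entry, and the condition \(\partial L/\partial t = 0\) is the precise version of “the laws don’t change over time”.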

[Figure: Cat Example 3]

I’ll finish off this post by saying that scientists have a funny term, “symmetries”, for these kinds of meta-statements about the laws of physics, and that many people dedicate their careers to proving or disproving these by performing experiments on the quantities associated with them. In fact, a very smart physicist discovered through such experiments that a meta-statement we intuitively thought was correct (“left and right are relative concepts in the laws of physics”) was actually completely wrong! Who knows; perhaps our universe is more cat-like than we thought.