## On the Benefits of Being a Dumb Tourist

I’ve lived in a fair share of different places over the last few years, and using public transportation takes the cake as the most stressful and annoying day-to-day experience in every one of them. From riding 5-and-a-half hours every week in a packed Chevy Astro through hot Puerto Rican highways to starting my workweek at Berkeley with the fresh sight and smell of body parts, I’ve never had a positive relationship with public transportation (and I don’t expect that to change anytime soon). However, for someone who can’t afford to buy a car—and who is universally described as driving “like a grandmother politely trying to get to the hospital while having a heart attack”—it is a regrettably indispensable part of my life.

As a result, I’ve had to spend a considerable amount of time thinking about how to maneuver the crowded Roman trains and smelly New York buses, and have stumbled onto some weird tricks that might be of use for both tourists and daily commuters. For this post specifically, my intent is to show you the “paradox” that, when trying to get on a packed metro train, being a dumb tourist is better than being a smart one; and I’m going to do it by using something just as annoying, stress-inducing and indispensable as public transportation. Statistics.*

*Cue Inception horns and distant screams.

If mathematics were a family, probability & statistics would be the bizarre great-uncle who won’t stop talking at the dinner table about how taxidermy is a spiritually fulfilling hobby. It is a field of study that is simultaneously too trivial for “real” mathematicians (they’re too busy writing proofs no one understands) and so strange that one of the best mathematicians of all time didn’t believe a simple statistics result until someone showed him a computer simulation proving it. But rather than go into any detailed description of this curious field, I’ll just give you a small primer on its basics before we delve into any commuting weirdness.

Perhaps the two most important pieces of information in the statistical sciences are the long-term average and the single likeliest outcome. The names are pretty straightforward, but just in case, I’ll explain them with a six-sided die.

1. The single likeliest outcome is just that. For one six-sided die, there isn’t any single likeliest outcome because you have an equal chance of getting any number between 1 and 6 (unless you’ve been loading your dice, you cheater). It’s easy to spot in an outcome graph, because it’s the outcome that happens the most.
2. The long-term average is a little more detailed, but not much: it’s the average of your results after you obtain a very large number of them! For a single six-sided die, that number is 3.5. You can’t spot this one in an outcome graph, but you can deduce/guess it if the shape is simple.
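That long-term average is easy to check for yourself. Here’s a minimal Python sketch (the million-roll count is an arbitrary choice) that rolls a simulated die many times and averages the results:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
rolls = [random.randint(1, 6) for _ in range(1_000_000)]
average = sum(rolls) / len(rolls)
print(average)  # hovers around 3.5
```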

Now that we’ve got our statistics bases covered, allow me to illustrate the promised “dumb tourist paradox” through my experience living in the Bay Area. Trying to get on a BART train (the Bay Area’s metro system) during the busy hours was mostly a game of chance; you had to hope you’d picked a waiting spot close to where the train door landed, or you were looking at a 15-minute wait for the next one to roll in.

However, let’s say you knew that the train door always pops up within the same 100-foot strip of train station, but you don’t know exactly where. Assuming there’s an equal chance of it showing up anywhere in the strip, the instinctively smart thing to do would be to always wait smack-dab in the center of it; that’s the position that puts you closest to the train door in the worst case, and it certainly feels like your best bet.

In this scenario, you might claim you’re making the smartest choice, so let’s call this the smart tourist scenario. Now, instead of using some fancy math theorems to tell you what the most likely distance and long-term average distance are in this case, I’m going to be 100% thorough and actually simulate it! Let’s take a look at what being a smart tourist comes out to when you simulate the train arriving a million times:

There are two things to take away from this graph. First, since the graph indicates that the train stopped everywhere about the same number of times, there’s no single likeliest outcome. It’s equally likely for the train door to land right in front of you as it is for it to wind up 50 feet away! Second, if you used the train over and over, your average distance from the train door would be 25 feet (which you could calculate by finding the average of all the distance outcomes). Nothing unexpected here.
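If you’d like to reproduce the smart tourist numbers yourself, here’s a minimal Python sketch of the simulation as I’ve described it (the strip length, waiting spot, and trial count are the ones from the text):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable
N = 1_000_000
STRIP = 100       # feet of platform where the door can land
spot = STRIP / 2  # the smart tourist waits dead center

# distance to the door on each of a million train arrivals
distances = [abs(random.uniform(0, STRIP) - spot) for _ in range(N)]
avg = sum(distances) / N
print(avg)  # ≈ 25 feet
```

Note that the smart tourist can never be more than 50 feet from the door, which will matter when we compare scenarios.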

Now we’re going to go into “paradox” territory. Let’s say you take a page from your weird great-uncle’s book and, instead of carefully planning things out, you just decide to randomly pick a spot inside of the 100-foot strip to wait in.

In this case, you’re not making any decision at all about what’s best or not; you’re just randomly waiting somewhere. Let’s call this the dumb tourist scenario, and here’s what that looks like when you pick random spots a million times:

The simulations don’t lie: the likeliest outcome now is that the train stops right in front of you, and the average distance between you and the train will be about 33 feet.
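And here’s the matching sketch for the dumb tourist, where both the door and the waiting spot are picked at random. The average comes out near 33 feet (exactly 100/3 in the long run), and small distances show up far more often than large ones:

```python
import random

random.seed(2)  # fixed seed so the run is repeatable
N = 1_000_000
STRIP = 100  # feet

# both the tourist and the train door land at random spots
distances = [abs(random.uniform(0, STRIP) - random.uniform(0, STRIP))
             for _ in range(N)]
avg = sum(distances) / N
print(avg)  # ≈ 33.3 feet

# short distances dominate: the likeliest outcome is landing near the door
near = sum(d < 10 for d in distances) / N     # within 10 ft
far = sum(40 <= d < 50 for d in distances) / N
print(near, far)  # "near" happens noticeably more often than "far"
```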

Comparing both scenarios, there’s nothing weird going on if you commute all the time; the long-term average distance is bigger when you pick a random spot (33 ft) than when you wait in the middle (25 ft), so doing the smart thing is still your best bet in that case. But, when you’re a tourist and only plan on riding the train once or twice, this somehow seems to imply that it’s better to pick a random waiting spot than to pick the best logical one!

This “dumb tourist paradox” is profoundly counter-intuitive on many levels; how can a “dumb” action turn out to be better than a “smart” one? How can my random action cause the train to usually arrive closer to me? How can I understand this result intuitively? Well, I could try to calm you down by pointing out that being a dumb tourist has two negatives: 1) your long-term average distance is larger, and 2) you have a nontrivial chance of the train showing up more than 50 feet away from you, which is impossible for the smart tourist. If you’re like me, though, you are probably still very puzzled.

The answer, however, is pretty mundane—even though it’s certainly true that the single likeliest outcome is that the train door stops directly in front of a dumb tourist, whereas there isn’t a likeliest outcome for a smart tourist, the actual chances of the train arriving directly in front of a smart tourist and a dumb tourist are effectively the same. As a result, you can’t actually game the system—it just ultimately looks like you can. If that wasn’t obvious to you, you can take solace in the fact that the smartest man who ever lived once said that “in mathematics you don’t understand things, you just get used to them”, and my advice is the following: get used to it. This is by no means the only “paradox” in the statistical sciences, as a great many others are known to exist, and they’ve puzzled everyone just as much as this little factoid does. The best thing you can do is learn about them and why they happen so that you don’t get surprised by them (or, more importantly, make wrong assumptions because of them). And who knows! With time you may find some new ones yourself, if you decide to formally study statistics—or if you commute enough.

## Quantum I: On Being a Former Crackpot

##### This is the first of three entries on quantum mechanics.

I have been trying for quite a while now to write something on quantum mechanics, but QM is a notoriously difficult thing to deal with in pop-sci. There is perhaps no more misinterpreted field of physics than quantum physics, and any informal talk of the interesting things that quantum mechanics can do will inevitably lead to comments or questions involving things like parallel universes, teleportation, the nature of consciousness, and so on. Unfortunately, these questions tend to be less about learning quantum mechanics and more about reinforcing misguided opinions about what people already think quantum mechanics is. As a result, for my first QM entry, I’m going to do something a bit peculiar and talk about the difficulties of writing about quantum mechanics rather than write about quantum mechanics itself. (Meta, I know!)

You see, quantum mechanics, thanks to its esoteric charm and strange predictions, serves as a magnet for all sorts of kooks and cranks who are raring to tell you that all your problems can be fixed with a quick dose of quantum snake oil. The reason they get away with this is two-fold:

1. Most science advocates have marketed quantum mechanics as an exotic otherworldly concept where “impossible” things happen.
2. All simplified descriptions of quantum mechanics are bound to be missing something essential.

The first is a regrettable but expected consequence of trying to get people engaged with physics. Just like everyone going into acting dreams of being the next big Hollywood star, most would-be physicists got into the field because they thought they’d figure out the secret to time travel/teleportation/etc. and saw quantum mechanics as their “in”. (Sometimes because a book by [insert pop-sci author of choice] talked about things that sounded like that.)

The problem comes from those who decided not to continue their interest in quantum mechanics through formal study, and proceeded to take all the fanciful metaphors and simplified explanations in these books literally. They then see someone famous tell them that they can improve their life through some weird mystical quantum nonsense, conclude it matches more or less what they read in those pop-sci books, and get reeled right in.

I am intimately familiar with the allure of this quantum quackery because I was one of the suckers who fell for it! In fact, the entire reason I got into physics in the first place was because, when I was in high school, I watched a “documentary” on quantum physics by what I later learned was an insane cult whose leader believes she can psychically channel a Lemurian warlord from 33,000 BC. (I’m not kidding.) It was only after I had made several science teachers worried about my opinions and took a proper look at quantum mechanics that I realized how much of a moron I had been. I still remember the look* on my physics teacher’s face when I told him that people can control the molecular structure of water with their thoughts.

*It was a liberal mix of Vietnam war flashback thousand-yard stare and that face you make when you find your dog took a dump on your carpet.

In fact, why don’t you take a look for yourself! Watch Academy Award-winner Marlee Matlin really earn her paycheck by listening to a Liza Minnelli lookalike tell her about how saying nice things to water can make it “better” (and also by getting hit on by someone I can only describe as the used car salesman version of Cipher from The Matrix).

Seriously, the only thing that could have made that guy creepier is if he pulled out a ticket wheel and offered her a VIP pass to climb Mt. Baldy. But I digress.

It’s worth noting that the reason I got suckered into believing this garbage is precisely that nothing anyone said in this fake documentary sounded at odds with what I had read in pop-sci books. In fact, every single book talked about what QM could do, and no one talked about what it couldn’t do. So I’ll jot that down as the first of my tenets for my following entries on QM: explicitly lay out what quantum mechanics can’t do.

The second item on my list is a more fundamental result of the difference between the language of mathematics and conventional spoken/written language. Just like there are certain words in other languages that are untranslatable to English (saudade, hygge), most of physics is not fully translatable into any verbal language. The most we can get away with, as is the case with the words above, is to try and give our best attempt at nice albeit incomplete explanations of it. If that weren’t the case, why would we even bother putting up with math at all? Physicists could just learn everything we need to know through blog posts like this one instead of expensive math-laden textbooks! And like the meal descriptions at your local dodgy Chinese buffet, when things are hard to translate, you usually wind up getting at least a little bit of nonsense.

The people who tend to succumb to this the most are philosopher types who want to associate the concept of quantum mechanics with some particular metaphysical or philosophical viewpoint. And that’s totally fine! In fact, all of science was originally conceived for this particular purpose; that’s why it used to be called “natural philosophy”. But to make that sort of argument, learning the mathematics that QM is written in is paramount. For example, I once had a discussion with a philosophy student that went something like this:

• My work is on relating quantum mechanics to philosophical principle x/y/z.
• Wow, you should find a way to argue it using the mathematical formulation of QM!
• Well, I’m not really interested in learning the mathematics.
• Then you’re not a good philosopher.

Prickly, I know, but trying to argue philosophical viewpoints in QM without using the math is like getting an orchestra to play Strauss’s Also sprach Zarathustra by humming what they should play to them without any sheet music. It is, although theoretically possible, hilariously time-consuming, and almost certainly going to sound like this.

Does that mean that all pop-sci descriptions of QM are useless? Of course not! But what it does mean is that all pop-sci descriptions of QM are going to be incomplete, and the best that we pop-sci writers can do is point out those missing pieces in our write-ups. So let me put that down as my second tenet: I am going to tell you what parts of QM I haven’t explained or where my analogies fall short. Essentially, if you can’t spot the mistakes in your pop-sci explanation of quantum mechanics, the mistake is that you’re explaining it.

In short, what the next two entries will represent is not an attempt at describing quantum mechanics correctly, since doing that is impossible without talking about Hilbert spaces and probability amplitudes and whatever. What it is is an attempt at explaining quantum mechanics as best as I can while following the two little rules I set up for myself:

1. Explain what quantum mechanics can’t do.
2. Explain what’s missing from my pop-sci description of quantum mechanics.

It turns out that, once you establish one or two mind-boggling things, everything else is fairly obvious! In fact, if after reading these entries you come away with the idea that quantum mechanics is boring, I will have done my job.

## On Writing Nonsense and Getting Away With It

Roald Dahl was a master of the written word, and this was perhaps most exemplified by his ability to use nonsense words like “flushbunkled” and “frothbuggling” without making the reader question whether or not they are viewing the product of an elderly Welshman having a mild stroke on his typewriter. Regardless of how silly they sound to us, such nonsense words have a rich history in the English language; in fact, they form a large part of it! Back in the 16th century, Shakespeare is claimed to have invented over 1700 words that are sure to have made a few English eyes squint back in the day. Examples include fracted, propugnation, and fairly hilariously, elbow. (What the hell were they calling elbows before Shakespeare came along? Did the English just point to their elbows and go “I have some pain in my…um…well, you know what I mean”?)

In any case, this type of creativity is not limited to just dead English writers, as mathematicians have often attempted to transcend the boundaries of the established and the intuitive to make use of similar nonsensical concepts in their equations. In this entry, I’ll talk about the most commonly dreaded example of this: the often-frightsome complex/imaginary numbers.

Whenever imaginary numbers were brought up in high school math class, I’m sure almost everyone wanted to leap up dramatically from their desk and shout “Why the hell are we studying imaginary numbers? What is the purpose of this? Why don’t we learn things like how to balance a checkbook/do taxes/apply for a job?”

If I were a high-school math teacher*, my response would be “You’re right! You shouldn’t be studying imaginary numbers.” There really is no reason for a general high-school math course to cover them, and the discussion of this kind of thing is best left to STEM-track math courses for everyone your class liked to hoist from the school flagpole. I would be happy to leave you to your Balance a Checkbook 101 class, where you can revel in the fact that you are taking a class for something that is both mind-numbingly tedious and so conceptually simple that you learned all the skills to do it when you were 8 (except for the skill to realize that you don’t need a state educator to remind you how to add and subtract).

*I tragically don’t qualify to be a high-school math teacher, as all high-school math teachers are mandated by the state to have bushy mustaches, square-rim glasses, and an unironically ugly wool sweater. (Wool sweaters are expensive.)

The point of this entry, however, is not to tell you whether or not you should know about imaginary numbers; in fact, I’m not even going to try and explain what they are. My goal here is to try and explain why they’re useful to Melvins like me. And, like your high-school math teacher trying to explain why model trains are a fun and interesting hobby, I probably won’t do a good job of it.

The gist of it is that, like nonsense words, the importance of imaginary numbers lies not in what they are but rather what they do; how they interact with the rest of the normal parts of the medium, be it literature or mathematics. It doesn’t matter what the Gizzardgulper meant when he squawked “I is slopgroggled” in The BFG, it matters what this implies about the giant’s ability to speak the English language and the richness of context such a simple statement can provide to a book. In the same sense, imaginary numbers would just be some daydream an Italian guy had in the 16th century if they didn’t let us use the very simplest tools in math (adding, multiplying, etc.) to perform some interesting and useful tricks.

Take, for example, what happens when you multiply $2$ by itself some number of times. (I’ve graphed the results below.) Nothing strange or unexpected here; feel free to check that $2*2*2 = 8$ and $2*2*2*2*2 = 32$.

Now let’s try to do the same thing for the imaginary/complex number $2^{i}$. What $2^{i}$ actually is doesn’t matter; what matters is what the values of the multiplications are once I get rid of all the gunk that has $i$‘s in it.

See that? The value is oscillating! We’d see the same thing if we tried multiplying something like $3^{i}$ or $4^{i}$, except the frequency of the oscillations would change.
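You can peek at this oscillation yourself with Python’s built-in complex numbers. The sketch below (the base 2 and the range of exponents are just illustrative choices) raises $2^{i}$ to successive powers and keeps only the real part, i.e., the part left after stripping the gunk with $i$’s in it, which works out to $\cos(n \ln 2)$:

```python
import math

# (2**1j)**n = e^(i*n*ln 2); its real part is cos(n*ln 2)
values = [((2 ** 1j) ** n).real for n in range(1, 21)]
closed_form = [math.cos(n * math.log(2)) for n in range(1, 21)]

for n, v in zip(range(1, 21), values):
    print(n, round(v, 3))  # the values swing back and forth between -1 and 1
```

Swapping the base 2 for 3 or 4 changes $\ln 2$ to $\ln 3$ or $\ln 4$, which is exactly the frequency change mentioned above.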

As it turns out, the chief usefulness of imaginary numbers is that they make it very easy to describe things that oscillate*, and this is what makes them show up everywhere from electrical engineering (currents in wires tend to oscillate, which is why AC stands for alternating current) to quantum mechanics (the central object of QM has wave-y behavior). Imaginary numbers are not required to describe any of those phenomena, but trying to avoid them requires altering your math so much that you generate things almost as nonsensical as them anyways. Attempting not to use them because they don’t “feel” right is just as gauche and annoying as it would be for you to keep calling your elbow “the thing that connects your upper arm thingy to your lower arm thingy”.

*Clever mathematicians have found other surprising uses for complex numbers, such as finding the areas under curves that are mathematically difficult to deal with, but these applications are more esoteric than anything.

Imaginary numbers are certainly not the only nonsensical objects that mathematicians have come up with; they stand in company with a bounty of strange concepts that have been invented over the years, like numbers that represent the size of infinities or numbers that aren’t really numbers at all. And the fact that you can’t conjure up an image of $2^{i}$ apples shouldn’t lead you to think these ideas are somehow different from the numbers you’re familiar with! After all, Shakespeare also invented words like moonbeam, submerge and obscene. These words would’ve sounded just as strange to 16th century Englishmen as fracted and propugnation; the only reason we find them normal is because we’ve always used them. If we did the same for complex numbers, it might be possible for us to easily imagine balancing $2^{i}$ apples on those jointed pointy things at the end of our hands.

## On Becoming Big Brother (or Why Can’t 2+2=5?)

Orwell famously stated in 1984 that, since the external world exists only through our mental perception of it, and the mind is controllable, a perfect totalitarian regime would be able to fully direct and redefine our perception of reality. His grand example of this is that his nightmarish Party could state that $2 + 2 = 5$ and that everyone would believe it; but would they? And more importantly, would the Party want to make such a statement? In this entry, I’ll show you why you should be careful about which mathematical statement you pick a fight with, and how bending your proletariat’s perception of reality is a bit harder than changing the answer of a sum.

Let’s say that in an Orwellian thought-universe, where addition is represented by a $+'$ symbol instead of $+$, two plus two is indeed five. Let’s also say that addition between all other numbers remains the same as in our universe, so $2+'1=3$, for example. Whether this breaks the consistency of their arithmetic can be checked fairly quickly by observing what happens if I add three numbers. (The parentheses indicate which two numbers get added first.)

$(2+'2)+'1 = 5+'1 = \mathbf{6}$

$2+'(2+'1) = 2+'3 = \mathbf{5}$

Looks like we’ve run into a very big problem; the order in which we add the numbers affects the result! This is like saying that the amount of money you use to pay for something depends on the order in which you give your bills to the cashier, or that the length of a fence on Mr. Pig’s animal farm depends on which side he measures first. The issue is not that the answer is “wrong”, since the Party defines what is “right” and “wrong”; it’s that there is no definitive answer. Clearly, this system of addition doesn’t seem very useful.
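The order dependence is easy to mechanize. Here’s a minimal Python sketch of this first, naive attempt, where only $2 +' 2$ is redefined (the function name `orwell_add` is my own):

```python
def orwell_add(x, y):
    """Naive Orwellian addition: only 2 +' 2 is redefined to be 5."""
    if x == 2 and y == 2:
        return 5
    return x + y

left = orwell_add(orwell_add(2, 2), 1)   # (2 +' 2) +' 1
right = orwell_add(2, orwell_add(2, 1))  # 2 +' (2 +' 1)
print(left, right)  # 6 5; the grouping changes the answer
```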

The only apparent way to save Orwellian addition is by tacking on an extra 1 to every sum. That is to say, $x+'y = y+'x = x+y+1$ for all numbers $x$ and $y$; that removes the ordering dependence, and Mr. Pig can measure his fence in an (apparently) consistent way. Have we managed to save the concept of $2 + 2 = 5$? Well, not quite.
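A quick sketch checking that the repaired rule really is order-independent: with $x +' y = x + y + 1$, every grouping of a three-number sum agrees (again, `orwell_add` is just my own name for it):

```python
from itertools import product

def orwell_add(x, y):
    """Repaired Orwellian addition: x +' y = x + y + 1."""
    return x + y + 1

assert orwell_add(2, 2) == 5  # the Party's decree still holds

# grouping no longer matters, for every triple we try
for x, y, z in product(range(-10, 11), repeat=3):
    assert orwell_add(orwell_add(x, y), z) == orwell_add(x, orwell_add(y, z))
print("associative over all tested triples")
```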

Problems arise again when we attempt to define multiplication. An intuitive way to define multiplication in our own universe is simple: $3\times2$, for example, is simply adding 2 three times, $2+2+2$. In the Orwellian universe, we can say the same concept applies: $3\times' 2 = 2 +' 2 +' 2.$ Naturally, the answers differ between universes, as $3\times2 = 2+2+2 = 6$ and $3\times'2=2+'2+'2=8$ because of the extra ones from the Orwellian addition. Regardless of the difference, so far so good; we don’t need the answer to be correct, we just need a consistent answer to exist.

The problem is that Orwellian multiplication invariably runs into the same problem we saw above. For example:

$3\times'2 = 2 +' 2 +' 2 = 2 + 2 + 1 + 2 + 1 = \mathbf{8}$

$2\times'3 = 3 +' 3 = 3 + 3 + 1 = \mathbf{7}$

Like in our primitive guess of Orwellian addition, the order by which we multiply numbers in this universe is affecting our result! Mr. Pig might have been able to get a consistent measurement of the perimeter of his farm, but he won’t be able to get a consistent measurement of its area. And weaseling our way out of this one like we did before with addition is not an option*, since abandoning this definition of multiplication means we’re abandoning a central concept behind it (that multiplication is just repeated addition no matter what universe we’re in).

*One can formally demonstrate that multiplication defined through repeated Orwellian addition can never be made commutative, but that is far too complicated for this post.
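The multiplication breakdown can be sketched the same way; `orwell_mul` below is my own hypothetical helper implementing “$n \times' y$ means adding $y$ to itself $n$ times with Orwellian addition”:

```python
def orwell_add(x, y):
    """Repaired Orwellian addition: x +' y = x + y + 1."""
    return x + y + 1

def orwell_mul(n, y):
    """n x' y: add y to itself n times using Orwellian addition."""
    total = y
    for _ in range(n - 1):
        total = orwell_add(total, y)
    return total

print(orwell_mul(3, 2), orwell_mul(2, 3))  # 8 7; order matters again
```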

In short, $2+'2=5$ is not just incorrect, it is anarchic. It’s one thing for a totalitarian government to tell you that two plus two is always five instead of four, but it’s another thing entirely to tell you that five times three times two is one thing and another thing at the same time but also some other thing too. Faced with this lack of absolutism, the citizens of this Party would eventually begin to form factions based on the result they believe is correct, and generate internal conflicts that would escalate until the Party collapses. In this Orwellian thoughtscape, the government has not gained a stranglehold over your perception of the world by stating that $2+'2=5$; it has completely let go of it. It just goes to show that you have to be very careful with which aspects of reality you try to bend to your authoritarian will!

All of this may seem like some frivolous flight of fancy, but this type of analysis is commonly performed in a field of mathematics called abstract algebra. This section of math is dedicated to studying collections of objects, usually numbers, and general actions you can do to them. In our case, we attempted to justify $2+'2=5$ by defining new “Orwellian” operations similar to traditional addition and multiplication and, by doing so, saw that it failed to hold certain properties which are essential for its use as a reliable mathematical base.

This sort of rigmarole is much more in tune with what mathematicians really do in comparison to the kind of mathematics most people see in their classes, a.k.a. solving for $x$. It’s a crying shame that, because of this, a perception of mathematics as something stale and trite is nearly universal among the general public. If you’d like to attempt a problem that looks like something a proper mathematician might do, try and see if you can find some other form of addition and multiplication that is both consistent and allows two plus two to equal five! Perhaps you can be a better mathematician/dictator than I can.

## On Shared Hallucinations (or Where Magnetism Comes From)

There is, perhaps, nothing scarier to a physicist or mathematician than losing the ability to perceive reality objectively. We rely critically on our abilities to discern when something makes logical sense or not, and experiencing a reality that is individual to us and not shared is a horribly isolating thought. I know this from experience; having spent a significant amount of my childhood around older family members suffering from varied forms of dementia, I am perennially sobered by the statistical likelihood that I will, like them, retreat into a private screening room of the world, a personal universe where what I see and do only makes sense to myself and appears as irrational nonsense to everyone else. However, the universe has decided to play a mean prank on those of us who fear lack of objectivity, and I’ll try to show you in this entry just how bad this goof is (and how a very common physical phenomenon is a result of it).

Imagine a rocket, traveling to Mars very close to the speed of light, zooming past you over the horizon. Strangely enough, you see that the rocket has a bizarre asymmetrical shape as it flies by, and this doesn’t make sense to you because you’ve been on this rocket to Mars before and saw it as perfectly rocket-shaped back then. You then try to look at it when it docks back on Earth and there it is, stationed in the hangar, looking exactly the same as it was when you were on it; normal and symmetrical. “That’s strange,” you may say, and dismiss your previous observation as some trick of your mind; until you see this strange lopsided rocket over the horizon again on its next trip! Having come prepared, you take a picture, and show it to the permanent inhabitants of the rocket as proof. They will say without hesitation that they have never seen this rocket change shape; and to prove it to you, present on-board camera footage that shows the rocket has never altered its form! They would then probably accuse you of doctoring your photograph, quietly assess you as a lunatic, and send you off amidst scattered chuckles.

The root of the problem here is that the universe will alter your perspective of things that are moving quickly relative to you. In fact, since all the universe cares about is relative speed, the people on the rocket would have seen you in a similarly strange asymmetric shape too if they had windows!

These “hallucinations” are not just visual either; if the rocket had an electrical charge, you would detect an electric field that’s compressed in the direction of motion. This is strange because someone on the rocket would also get a completely different measurement of the electric field, and would call you insane and your equipment defective just like in the example above. The only difference between a “normal” hallucination and this specific type of hallucination is that, since they affect our equipment as well, we can document them consistently, make predictions about them, and subsequently generate a set of physical laws governing this movement-induced psychosis called special relativity.*

*It is an interesting and unfortunate historical coincidence that Einstein, who created special relativity, had a son who suffered from schizophrenia.

Getting to the meat of this entry, I’m going to give you a slightly deeper look at how special relativity gunks up our understanding of the universe. Picture a positively charged particle next to a very long wire with an equally distributed amount of positive and negative charges. Since the charges in the wire are equally distributed, the net charge of the wire is zero, and the single particle doesn’t feel any push or pull towards it.

Now consider what happens from the particle’s point of view if I make all the positive charges, both itself and the charges in the wire, move to the right at the same speed. The positive particles don’t change since they don’t move relative to the particle; but the negative ones appear as if they’re moving to the left, and become compressed horizontally like the electric field from the rocket. What’s astounding about this is that the deformation causes the negative charges to be more densely packed than the positives from the particle’s point of view, and so the particle feels a wire with a slightly negative net charge! And, since opposite charges attract in electricity, the particle would also move toward the wire as it moves to the right.

Now let’s see what happens if I try to move the individual charge and the positive charges in the wire in opposite directions. The negative charges would still appear deformed like in the example above, but the positive particles in the wire will be even more deformed/tightly packed because they’re traveling twice as fast to the left as their negative counterparts! Consequently, the particle feels a net positive charge on the wire, and would travel away from the wire as it moves forward.

Now, these are all things that only the moving particle is perceiving. If we were sitting down in a lab and accelerated the particle while a current was running through the wire, we wouldn’t feel the wire suddenly gain or lose a net charge; we’d only see the particle moving forward and then somehow start drifting towards/away from the wire for no reason! From our point of view, our particle is hallucinating the existence of some net charge in the wire, and is reacting as the laws of electricity dictate it should in that situation. Describing how special relativity warps the perception of charges directly is a little complex; luckily for us, we can avoid this by interpreting these strange behaviors in charged particles as due to another physical phenomenon called magnetism.

Voilà! Now you know why currents running in the same direction attract, while currents running in opposite directions repel. You also now know why people say “electromagnetism” (other than it sounding cool), since the laws of magnetism are just the laws of electricity in a different frame of reference. Sadly, this doesn’t intuitively explain the most common source of magnetism for humans: the bar magnets you see everywhere. These don’t carry an electric current, so how do they generate magnetic effects? A simplified answer is that the electrons in the atoms of these magnets are all spinning in a synchronized way; for the purposes of external charges, this is equivalent to a very large electric current running along the surface of the magnet, and that “current” is what triggers the magnetic effects.

I’ll finish off by answering a question I once had regarding how special relativity distorts the laws of physics: “If that happens to electricity, doesn’t some kind of magnetic analogue exist for gravity too?” And the answer is yes, there absolutely are effects in gravitational physics (collectively called gravitomagnetism) that pop up thanks to the movement of mass. The only reason we don’t really talk about them much is because 1) the laws of gravity already involve special relativity directly (hence our name for them) and 2) these effects are much weaker than their electromagnetic counterparts.

## On Gambling Your Savings Away

Everyone who knows me knows I am a betting man. I have an almost comical obsession with putting money down on everything, from the mundane to the ridiculous; I once bet a friend 10 bucks a Pulitzer Prize-winning author would get my name wrong in a signed dedication. (I won.)

I have, however, avoided casinos throughout my life like the plague. The windowless rooms, purified oxygen, and neutral expressions of fellow gamblers have led me to believe that casinos are some sort of terrestrial purgatory where you slowly but surely rid yourself of your sins (read: money). In this entry, I’ll try to convey just why I resist the allure of these glitzy gambling institutions and explain how the flow of heat from hot to cold is connected to the flow of money from your wallet to the craps dealer.

### Gamblin’ Heat

Say you go to a casino and play a simplified version of roulette, where you can bet on either red or black, each with an equal chance of coming up. (You could imagine betting heads or tails on a coin flip; it’s functionally the same thing.) Since this is a casino that’s interested in taking your money, let’s say it pays you slightly less than double your stake when you win. In addition, I’m going to assume for simplicity that you have terrible taste in bets and gamble on red every time. In this system, I can easily show you all the possible gambling outcomes if you just gamble twice.

Simple enough; note that there are two different ways in which you can win one bet, and a single way to either lose or win all your bets. Here are all the possible outcomes for a 4-bet gambling run:

Now there are six different ways for you to win half of your total bets, while there is still just one way to lose or win them all. For a gambler like me, a useful thing to do is to track the number of outcomes for a given number of successful bets, as that tells me the relative likelihood of winning some number of bets (and that’s all I really care about). Since this quantity appears to be so important, I’m going to plot it below and keep plotting it as we go to longer gambling runs. The number of outcomes is too large to list individually for bigger betting runs, so let’s see how our outcomes vs. betting wins plot evolves as we analyze runs from 5 bets to 150:
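Those outcome counts are just binomial coefficients, so you can reproduce the plots yourself in a few lines (a sketch; feed the lists to any plotting library you like):

```python
from math import comb

# comb(n, k) counts the distinct betting sequences with exactly k wins
# out of n bets, which is exactly the "number of outcomes" being plotted.
def outcome_counts(n_bets):
    return [comb(n_bets, k) for k in range(n_bets + 1)]

print(outcome_counts(2))  # [1, 2, 1]: two ways to win exactly one bet
print(outcome_counts(4))  # [1, 4, 6, 4, 1]: six ways to win half
```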

The number of ways in which you can win half of your bets in a 150-bet run is ridiculously huge! In fact, I’ll type the number out just to scare you: 92826069736708789698985814872605121940117520. But the thing I want you to focus on is that the graphs get both taller and narrower as we increase the number of bets; it becomes more and more likely to win one particular number of bets (half, in this case) and less likely to win any other amount. This tendency is important to spot, because every casino game behaves like this simple version of roulette when the number of bets is very large. In fact, the tendency is such that these plots eventually become infinitely narrow (relative to the length of the run) as I increase the number of bets, leading to the following general statement for any kind of casino with any number of different games:
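You can check both claims, the monstrous count and the narrowing, directly. Here “width” means the spread of likely win totals, which for fair bets grows like the square root of the number of bets, and so shrinks as a fraction of the run:

```python
from math import comb, sqrt

# The number of ways to win exactly 75 out of 150 bets:
print(comb(150, 75))

# The spread of likely win totals grows like sqrt(n) / 2, so as a
# fraction of the whole run it shrinks like 1 / sqrt(n). The plots get
# relatively narrower even while the raw counts explode:
for n in [100, 10_000, 1_000_000]:
    print(n, (sqrt(n) / 2) / n)
```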

**For a sufficiently long betting run, a gambler will always win an essentially fixed proportion of his bets.**

I say essentially here because the probability of winning a number of bets close to this proportion doesn’t shrink too quickly as you lengthen the betting run, but the difference becomes negligible very fast. (If you had enough money to make a trillion roulette bets, would you care that you won 500,000,000,001 times instead of 500,000,000,000?) It is also not impossible for you to win every single bet you ever make, of course; it is just phenomenally improbable.

Since a casino will always manipulate payouts to ensure that winning that magical proportion of bets gives you a net loss, what this is effectively saying is that you will always lose money if you gamble long enough. And since one gambler betting a large number of times is the same as a bunch of people betting a moderate number of times, a busy casino will always make money. All an Atlantic City hotshot needs is to get morons to stay in their big ritzy oxygen chamber and cash will just come pouring out! Note that there’s absolutely nothing stopping you from walking in, winning every single bet you make, and walking away with a fortune; a sufficiently busy casino knows there’s some other poor schmuck somewhere in its glamorous bowels losing more money than you just won. And again, it’s not impossible for everyone to suddenly get a lucky streak and break the casino’s bank; it is just so fantastically unlikely that it is more probable for a plane to crash into your casino every year than for the house to have to deal with a group of 10 people winning 15 consecutive bets at the same time.
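A quick simulation makes the house’s position vivid. The 0.9-units-of-profit payout below is my own made-up stand-in for “slightly less than double your money,” not any real casino’s rate:

```python
import random

# One gambler betting 1 unit per round on red, which comes up with
# probability 1/2. A win pays 0.9 units of profit (slightly less than
# doubling your stake); a loss costs the full stake. The 0.9 factor is
# an assumed house edge, chosen only for illustration.
def net_result(n_bets, rng):
    profit = 0.0
    for _ in range(n_bets):
        profit += 0.9 if rng.random() < 0.5 else -1.0
    return profit

rng = random.Random(0)
results = [net_result(10_000, rng) for _ in range(100)]

# Expected value per bet is 0.5 * 0.9 - 0.5 * 1.0 = -0.05, so a
# 10,000-bet run should lose about 500 units on average:
print(sum(results) / len(results))
```

Individual gamblers will sometimes walk away ahead, but averaged over a floor full of them, the casino’s take hovers right at that expected value.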

### Old-Fashioned Heat

Moving on to the science-y part of this entry: the statement I made in bold above is strongly linked to the laws of thermodynamics, which, like that statement, are actually just very strong statistical tendencies. In a stable gas, kinetic energy is constantly shuffled around among all the particles, as if every particle were simultaneously gambler and casino. However, if you measure the kinetic energy of some large number of them, it becomes more and more likely that you’ll measure one particular total energy for a given number of particles, just like it becomes more and more likely to win a certain number of bets (half) as you increase your total bets. Take a gander below if you don’t believe me!

Another way to look at this is by saying that the ratio of total measured energy to particle number becomes effectively fixed as the number of particles you measure grows very large. This quantity, after scaling by some constants, is what we call temperature. If we looked instead at the ratio of total measured energy to the volume those particles occupy, we would get (again after scaling by some constants) the thermodynamic definition of pressure.
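You can watch the ratio settle down with a toy gas. I’m drawing particle energies from an exponential distribution, which is the standard Boltzmann-style assumption; the mean of 1.0 is an arbitrary unit choice:

```python
import random

rng = random.Random(42)

# Kinetic energies drawn from an exponential (Boltzmann-like)
# distribution with mean 1.0, the "true" energy per particle in these
# made-up units. The distribution is an assumption for illustration.
def energy_per_particle(n):
    """Measure n particles and return total energy / particle count."""
    return sum(rng.expovariate(1.0) for _ in range(n)) / n

# The ratio fluctuates wildly for small samples and settles toward 1.0
# as the sample grows, just like the betting plots narrowed:
for n in [10, 1_000, 100_000]:
    print(n, energy_per_particle(n))
```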

If the number of measured particles is very small, these notions of temperature and pressure would not make any sense, as the quantities would fluctuate wildly from measurement to measurement. Correspondingly, we would not be able to make any predictions based on them, and thermodynamics as a field would cease to exist! Luckily, every chunk of matter at our scale contains an enormous number of particles (a liter of water contains about 3.346×10^25 molecules of H2O), so it is still much more likely for a plane to crash into you than for you to read a fever on your thermometer when you’re actually fine.

Going back to the gas example, say I now heated some small section of it for a while. For a gas with a decent number of particles, it would be very unlikely for the heated particles to stay in the same region and/or avoid unheated particles wandering close to their turf. In short, there are many more outcomes in which that extra kinetic energy gets distributed to the rest of the gas, and only a handful in which the energy stays with the original gang in the same area. This means that the second law of thermodynamics, the fact that heat flows from hot to cold, is not a fixed law of nature; it is just an overwhelmingly likely tendency.

I’ll finish off with a little addendum: notice how quickly those numbers got big in our outcomes vs. wins plots for the roulette example. In fact, my computer couldn’t even handle doing the calculations for a betting run of 200! In order to size these numbers down in a practical way, scientists and mathematicians take something called the logarithm of the number of outcomes for a given condition (number of bets won in the roulette example, energy for thermodynamics) and base all their calculations and theorems on that. This quantity, which behaves qualitatively just like the number of outcomes for a given condition, is called entropy; and that is why you sometimes hear the second law quoted as “entropy tends to a maximum”.
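Here is the logarithm trick in miniature. (Python’s big integers actually survive a 200-bet run without complaint, but working in log-space is what scales up to thermodynamic sizes.)

```python
from math import comb, lgamma, log

# The number of outcomes comb(n, k) explodes quickly, so statistical
# mechanics works with its logarithm instead: that, up to a constant,
# is entropy. lgamma(n + 1) gives ln(n!) directly, so we never have to
# build the giant integer at all.
def log_outcomes(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

# Same number both ways for a 200-bet run, but only one of these routes
# would survive the ~10^25 particles in a liter of water:
print(log(comb(200, 100)))
print(log_outcomes(200, 100))
```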

## On Writing Novels (and Being Fun at Parties)

Physicists are not much fun at parties. Conversations with us about the universe usually follow a common pattern: you’ll mention or discuss some creative and interesting idea about reality, like “cold stars” or faster-than-light travel, and we’ll coldly shut the idea down with some unnecessarily verbose, almost pedantic technobabble. In this entry, I’ll try to explain why our consistent stuffiness is an acquired trait of the business, and how being boring with our ideas prevents us from conceptually destroying billions of galaxies.

Say somebody asks you to write the next novel starring Hero X, the star of a multi-decade science-fiction franchise. Enthused, you proceed to write about his thrilling adventures in the Crab Nebula, only for your editor to tell you that the Crab Nebula got blown up in a comic book spinoff starring Hero X’s dog companion. You then try to write a story about stopping a cryogenically preserved Soviet cyborg, until your editor lets you know that the Soviet Union never existed because a side character in Hero X’s time travel novel mentioned Tajikistan in 1934. Eventually, you decide to write a quaint book about Hero X facing his inner demons during a beach retreat in the Maldives; that is, until the editor mentions that the creator of Hero X wrote a letter to a fan stating that Hero X is terrified of the ocean and that the Maldives were teleported onto Deimos in an audio drama from the 1980s. At this point, you would probably strangle your editor and start your own franchise unless you were insane (or a physicist).

See, devising interesting and creative ideas in physics that stick is difficult in exactly the same way as writing a Hero X novel; any single addition to our catalogue of creation interferes with everything else in monumental ways. Inventing anything that goes even slightly faster than the speed of light means you can now time travel into the past. Making a single magnet with only one pole causes electric charge everywhere in the universe to be restricted to specific values (this may actually be true). Creating a machine that harvests the tiniest amount of energy out of nowhere means that no coherent laws of physics can exist, as I explained in my previous post. Even an idea that seems harmless, like assuming that there is some inconceivably small minimum distance in the universe, puts the “relativity” part of general/special relativity straight in the shredder. Even though these examples would take far too long to elaborate on in this single entry, I’ll discuss in relative detail another creative idea physicists had some years back with similarly unexpected consequences.

Here’s a thought: say there are some large, isolated patches of the universe made up purely of antimatter, with antigalaxies and antistars and antiplanets full of antipeople just like you and me. Doesn’t seem like a problem at first glance, right? Antimatter is just matter with the electric charges (and some other things) flipped backwards, and that doesn’t really stop you from generating all the elements needed to make anti-you. In fact, the laws of physics don’t seem to indicate any difference between matter and antimatter other than that little inversion quirk, so why shouldn’t there be just as much antimatter floating around somewhere?

The first problem with this idea is that the “borders” of these antimatter domains would often come into contact with our own normal matter domains; and since matter and antimatter don’t like each other very much, these collisions would utterly delete everything in their immediate vicinity from existence (normal and anti), triggering bursts of intensely powerful light that would serve as the single brightest events in the known universe. Given that quite a few experiments have been looking for these huge flashes to no avail (and trust me, we’d see them), this little attempt at creativity of ours would seem to be a bust.

Another, even bigger, problem: if the universe were actually made up of equal parts matter and antimatter, there’s no reason why both kinds wouldn’t have destroyed each other completely right after the Big Bang, when the universe was the size of a walnut! We’d have been left with an empty universe, filled only with faint, dead light. Consequently, we are forced to conclude that the laws of physics discriminate in some tiny way between matter and antimatter, and are left to figure out why this caused our universe to be composed (almost) exclusively of matter. Good luck, theorists!

This should give you a bit of insight as to why the last two major theoretical advances in physics, quantum mechanics and relativity, have been such complete retcons of our previous laws of physics instead of just additions “tacked on” to the old ones. Physics was so conceptually crowded at the start of the 20th century that the only way scientists could explain (without contradictions) why lava glows red was by developing a theory that lets everything have some probability of existing almost everywhere in the universe at the same time. This is like getting away with your Hero X beach retreat novel by writing a chapter where an intergalactic version of the CIA finds an exact clone of the Earth (Maldives and all) in the Andromeda Galaxy, controlled by aliens that have replaced all the water with a virtually identical synthetic compound. Hell, I’d read that!

I’ll finish this entry by mentioning one of the very few ideas in physics that is both simple and non-disruptive: dark matter. To make a long story short, a scientist in the ’70s realized that for galaxies to spin the way they do, either 1) Einstein’s theory of gravity is fundamentally wrong or 2) there’s a bunch of invisible stuff we haven’t discovered yet floating around in every galaxy. Ambitious and creative theorists have jumped at the chance to rewrite the laws of gravity to account for this, but the above examples may clue you in as to why these theories have largely not been successful: rewriting even a small piece of Einstein’s equations leaves you unable to explain why stars last longer than 2 weeks or why light bends around stars the way it does. On the other hand, if you just assume that there’s a specific type of particle zooming around that’s invisible to our telescopes, then that’s all there is to it! No need to rewrite every known principle in the laws of physics. All that’s left is to actually find the damn thing, which is (of course) hilariously difficult; but there are dozens of scientific collaborations filled with people much smarter than me working on it. I should know; I used to work for one of them! Wish them luck, and don’t be too hard on your physicist friend when he tells you that black holes being wormholes to another part of the universe would violate conservation of energy.

## On the Behavior of Cats

In this entry, I’m going to try and show you what might be the most fundamental concept in all of physics by talking about some of the universe’s most mysterious objects: cats.

Whether it’s the perpetual obsession with sleeping in boxes, the constant desire to go outside and then immediately come back inside, or their fascination with canned spaghetti (this may be just my cats), the mind of a feline would appear to be utterly undecipherable. But every cat owner has attempted to understand their cat’s behavior in some manner, and I’ll try to explain this process to those who have never owned a cat.*

*Cat people will agree that it’s technically the cat that owns you.

In attempting to figure out my cat’s strange culinary preferences (which appear to include everything from Swiss cheese to guava paste), I decided to leave some broccoli out on a plate one day and see if he’d take a bite. At the end of the day, my cat had left the plate of broccoli untouched, rejecting it in favor of the roasted chicken leg I was holding in my hand at the time, and leading me to conclude that my cat indeed does not like eating his greens.

To present the above in a more abstract way, I had attempted to make a guess at some aspect of my cat’s behavior, which can be described by a statement like “My cat does not like eating broccoli”. Now, since the nature of a cat is inscrutable, I have no way of fundamentally knowing what my cat truly likes eating; but I can attempt to associate my prediction of my cat’s food preferences with some measurable constant quantity like “the amount of broccoli on my cat’s plate”. If the amount of broccoli on the plate didn’t change after a day or two, then I’d reason that my cat didn’t eat any because he tasted it and didn’t like it. If instead I see that the amount of broccoli drops by some amount every day, I might say something along the lines of “my cat enjoys eating broccoli every day”. Simple!

Although strange at first glance, the association of a behavioral prediction/statement with some constant quantity is something we do every day. You may predict your father likes steak because he visits a local steakhouse a regular number of times a month, for example; you may also observe that a friend takes a constant type and amount of pills every week and conjecture that he suffers from a chronic medical condition. Both behavioral statements, “my father likes eating steak” & “my friend has a chronic condition”, are fundamentally tied to some constant quantity that we can observe (“the number of times my father visits a steakhouse each month” & “the amount/type of pills my friend takes each week” respectively).

Now, I can take it a little bit further philosophically and (cue the Inception horn) make a statement about my statements of what my cat’s behavior is like. An obvious one is to assume that my statements are always going to be right; that is to say, that a statement I make now will hold for all of time. That does not mean that my cat’s behavior can’t change over time; it just means that I have to describe that change in my statement, which itself cannot change over time. One may argue fairly easily, from a philosophical perspective, that this is a requirement in order for the statements to be correct or consistent. But the point is that this meta-statement about the “correctness” of my statements is a statement in and of itself.

With all of this in mind, consider that a theoretical physicist’s job is to devise a series of statements/predictions about the behavior of everything in the universe; these being what we call “the laws of physics”. All of these laws can be judged to be “consistent” by stating that even though things in the universe may change over time, the laws of physics themselves shouldn’t change over time. Nothing controversial about that!

But what is absolutely stunning is that a very smart mathematician realized that this statement about the consistency of the laws of physics is also associated with a measurable constant quantity just like in my cat’s example! And in fact, this mysterious conserved quantity tied to the consistency of our laws of physics is something called energy. The whole point of this preamble was to get you to believe me when I state what might be the most fundamental principle in all of physics:

Conservation of energy is the consequence of unchanging laws of physics.

Consider what would happen in a universe where the laws of physics did change over time. I’d perform an experiment today and get some result; then I’d do the same experiment the next day and see something completely different! In fact, it would become completely impossible for science to exist at all since I wouldn’t be able to make any lasting conclusions based on my experimental observations. In this alternate reality, we’d have no guarantee that we wouldn’t suddenly fly out of our office windows, transform into polar bears, or collectively develop a sharp hatred for people who wear lime-green sandals. The entire concept of engineering goes into the gutter since buildings could just spontaneously transform into VHS cassettes of The Sandlot. Our existences would be a chaotic nightmare; our unknowable universe would just be one big metaphorical cat. Thanks, science!

For another fun fact, that very same mathematician I mentioned above went on to state that if the laws of physics are consistent across all of space as well as time, another conserved quantity shows up called momentum. Regardless of whatever definition of energy and momentum you may have been taught in high school, their true definitions are “the things conserved when the laws of physics are consistent in time and space (respectively)”. Now you know the real reason why your teachers kept mentioning those two things over and over!
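For readers who have met a Lagrangian before, here is a heavily simplified, one-coordinate sketch of that mathematician’s result (Noether’s theorem); everyone else can safely skip the symbols:

```latex
% If the laws (encoded in the Lagrangian L) don't change with time,
\frac{\partial L}{\partial t} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left( \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L \right) = 0
\quad \text{(the conserved quantity is the energy)}

% If the laws don't change from place to place,
\frac{\partial L}{\partial q} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{q}} \right) = 0
\quad \text{(the conserved quantity is the momentum)}
```

In words: a symmetry of the laws in time hands you a conserved energy, and a symmetry of the laws in space hands you a conserved momentum, exactly the pairing described above.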

I’ll finish off this post by saying that scientists have a funny term for these kinds of meta-statements about the laws of physics (“symmetries”), and that many people dedicate their careers to proving or disproving them by performing experiments on the quantities associated with them. In fact, a very smart physicist discovered through such experiments that a meta-statement we intuitively thought was correct (“left and right are relative concepts in the laws of physics”) was actually completely wrong! Who knows; perhaps our universe is more cat-like than we thought.

## On Stuff & Things

Everything in the universe can be said to belong in one of two different categories: stuff and things. Now, stuff and things are very different, and I’ll do my best to explain why below (and how the difference between them leads to the state of our universe as we know it).

### Stuff

Stuff is, well, stuff. If you get some stuff and you put it on some other stuff, you just have more stuff. Boom! That’s that for now; there’s nothing else to it.

### Things

Things are a little bit more complicated, but not by a lot. See, things are things, and if you try to put one thing where another thing is, you obviously can’t because they’re two different things. Simple enough.

When you were a child, this was explained in chemistry class (probably while you were sleeping) and in time-travel fiction (while you should have been studying for chemistry) using the following common-sense rule: No two things can occupy the same space at the same time.

But what’s actually pretty funny is that the “real” version of this is much simpler in principle:

No two things can be the same thing.

That’s it! It’s obvious; if two things were the same thing, then why would they be two things? They’d just be one thing, and then all of that same kind of thing would just be one big collective thing, and that would just be stuff. Capisce? Here’s a simple (read: made in Paint) visual comparison to help solidify what I mean.

With this in mind, it’s time to move away from the world of philosophical abstraction and actually mention examples of stuff and things in our universe.

### Electrons & Atomic Structure (or Things on Things)

At the elementary level, practically everything is a thing: electrons, protons, neutrons, and everything made out of them (which is essentially, well, everything).

What’s actually very interesting about things is that the simple, essentially philosophical statement I made in the previous section about thingness is directly responsible for the existence of chemistry as we know it. I’ll show how & why below, but first I need to explain what makes an electron its own thing.

The first thing we need to know is that a single electron doesn’t exist as some tiny ball, but as a probability cloud with some specific shape; this is why it was important to generalize our common-sense statement about things not being in the same place & time since, technically, every electron exists practically everywhere in the universe at the same time. So now, that property of the electron we’d call its “location” can be replaced by a property we’ll call “cloud shape” or shape.

The other property that makes a single electron “its own thing” (that’s relevant to us) is a property called spin. Spin is tricky (it deserves its own post), but all you need to know is that it’s a property an electron has and that there are two possible values of it. In fact, if electrons all had name-tags that said either Harry or Bob instead of having spin, chemistry as we know it (sort of) would still exist. Go ahead, name them!* I’ll take the approach of painting a Harry electron blue and a Bob electron red in my illustrations, and say that our electrons now have names instead of spin.

*Just don’t feed them or you’ll have to absorb them.

The whole reason I bring these properties up is to explain why our atoms look the way they do. See, thanks to their names, electrons can indeed exist in the same place at the same time (a.k.a. have the same shape) as long as they have different names; because that means they’re still different things!

With that out of the way, consider the following two atoms in two different universes, one in which electrons are things (our own) and one in which electrons are stuff. Here’s what hydrogen would look like:

Nothing seems different there. The simplest (a.k.a. lowest-energy) shape an electron can have around the nucleus is just spherical, and both electrons here take that same shape. Here’s what lithium would look like:

Now we’re getting somewhere. Once both Harry & Bob electrons exist in that first shape in our atom, you can’t pile on another electron without it possessing a different shape because the extra electron would be identical to either Harry or Bob, and would therefore not be a different thing. This is why chemistry teachers say things like “The first electron shell holds 2 electrons”, and why the first row of the periodic table only has 2 elements. For lithium to form, what then winds up happening is that the extra electron would have to change its shape to get bound to the nucleus (with a little energy boost), and that’s exactly what the extra Harry electron in our lithium illustration does.

In contrast, stuffium-3 doesn’t need each electron to be an individual thing, and so allows all of them to pile up indistinguishably in that lowest-energy spherical shape.

Now here’s what carbon would look like:

Completely different! The trend is evident; our electron-thing atoms seem to obtain spatial structure when we pile up electrons, while different elements of stuffium look exactly the same since we can just pile up electrons indefinitely in that lower-energy sphere shape. In carbon, we’ve already filled out that second spherical shape we saw in lithium with another Harry and Bob, and so we see the two extra Harrys forced to take on some funky non-spherical shapes. (Two ovals on the same axis are the same electron). Note that you can always distinctly identify every electron in our atoms, while stuff-electrons have no individual “identity” to speak of.
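The filling rule can be sketched as a toy program. The shape names and the “spread out before pairing up” ordering for the ovals are my cartoon of real orbitals and of what chemists call Hund’s rule, not actual quantum mechanics:

```python
# Toy Pauli exclusion: every electron must claim a unique (shape, name)
# pair. Shapes fill cheapest-first; the three equal-energy ovals get one
# name spread across them before the other name starts pairing up.
def fill_atom(n_electrons):
    slots = []
    for shape in ["sphere 1", "sphere 2"]:
        slots += [(shape, "Harry"), (shape, "Bob")]
    for name in ["Harry", "Bob"]:
        for shape in ["oval x", "oval y", "oval z"]:
            slots.append((shape, name))
    return slots[:n_electrons]

print(fill_atom(1))  # hydrogen: one electron in the first sphere
print(fill_atom(3))  # lithium: the third electron bumped up to sphere 2
print(fill_atom(6))  # carbon: two lone Harrys in non-spherical ovals
```

Deleting the uniqueness requirement (letting every electron sit in “sphere 1” forever) is exactly the stuffium universe.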

A stuffium universe would be drastically different from ours: there would be no energetic barrier to the formation of heavier elements such as iron (ahem, stuffium-26), and most of these elements would exist in even denser polyatomic molecular forms. The abundant presence of these heavier elements and molecules would lead gravity to form galaxies much more quickly than they formed in our universe, and most nebulae would rapidly collapse into massive, dense planets. Black holes would be a common sight, and with chemistry changed completely, it’s unclear whether stars could even exist. Cool beans!

### Light & Gravity (or Stuff and Some Other Stuff)

Now there’s not a lot of different kinds of stuff in the universe, but here’s a big one: Light is stuff! This is actually pretty intuitive, as I’m sure you’ve noticed that light doesn’t really behave like most things (pardon the pun) you’ve seen before. For one, clearly unlike things, you can essentially pile up “identical” light indefinitely just like the stuffium electrons above*. In fact, piling up identical light and then emitting it is precisely the definition of a laser.

*Eventually, piling up extremely large amounts of light creates things out of stuff, but that’s way out of our league here.

Gravity, or more accurately what causes it, is also stuff. This is actually a bit of a mind-boggler, since we haven’t really discovered the stuff that causes gravity; but we know for sure that it has to be stuff because gravity just wouldn’t be gravity if it wasn’t. Sadly there’s not much more to say about gravitational stuff at the moment, but we’re working on it.

At the risk of complicating things even more, scientists have actually discovered ways to make things behave like stuff! For example, in certain materials, you can actually trick every available electron (all ~10^23 of them) into piling up into the same material-wide shape, leading to some extremely strange phenomena.

Anyways, before I finish off this post, I should probably reveal the scientific names for things and stuff: things are “fermions” and stuff is “bosons”. Now you can brag that you know what the second part of that whole “Higgs boson” hullabaloo means! (Admittedly, “Higgs stuff” doesn’t sound as interesting.)