On Hunting Unicorns


When I was in high school and very much in the crackpot stage of my life, I used to be a big follower of cryptozoology: the study of, and attempt to prove the existence of, creatures only described in the folkloric record. (Note that I say “study” here in the same sense that a serial killer might say that “murder is the study of human death” in a schlocky police procedural.) Whenever somebody would ask me why I thought flying deer or unicorns existed, I’d simply ask them to prove me wrong: hOw dO YoU KnOw tHeRe aReN’T AnY? dId yOu lOoK EvErYwHeRe aNd nOt fInD OnE?

When I got older and “wiser”, I realized my folly; had unicorns existed, we would have simply either shot them all to death or genetically modified them for optimal horn production, to the extent that we’d only be left with large bony masses and an atrophied horse body attached to them. But colt meeting Colt notwithstanding, believing in unicorns (or the cryptid of your choice) tends to be tricky to deal with both philosophically and scientifically, and in this entry I’ll try to show you why.


You see, scientists are in general cranky old codgers, and this grouchiness manifests itself through an all-pervading negativity when it comes to the discussion of scientific pursuits. (If I had a nickel for every time I’ve heard a scientist call another scientist’s work some variant of the word garbage, I’d have enough metal to run Smith & Wesson out of business.) And, like anything made by cranky old codgers, this negativity has been philosophically formalized into what can be summarized briefly as “you’re wrong, we just don’t know how yet”.* In that sense, scientists instinctively value beliefs that are (nominally) easy to disprove, which makes believing in the existence of unicorns tricky business.

*Incidentally, this is precisely why scientists call the claims of science “theories”; no matter how correct or proved these claims may be, we’re only one unicorn away from having people throw trash at us from the auditorium back seats during our university colloquium.

To illustrate, imagine we’re in some stodgy Viennese café in the early 1900’s and two philosophers with opposite claims about unicorns are trying to convince us of their viewpoint; Philosopher 1 claims that no unicorns exist, while Philosopher 2 states that at least one unicorn exists somewhere on Earth. If we are the same kind of grouch that I discussed before, we would set about attempting to show which claim is “less wrong” by attempting to disprove one claim or the other.

However, notice the difference in effort to do so; to refute Philosopher 1, we’d only need to hunt (or hire someone to find for us) one unicorn to completely blow his claim out of the water. To refute Philosopher 2, we’d need to scour every single part of the Earth to make sure that unicorns aren’t hiding out in some remote unreachable corner of the planet.* Sure, finding a single unicorn would be a herculean challenge for any hunter, but it is far more challenging to survey the ends of the Earth to make sure that there aren’t any.

*Spoiler alert: Yes, that includes the ocean.


From the perspective of optimizing the amount of time we spend drinking coffee and making unpleasant faces at passersby, it is far more convenient to entertain the notion that no unicorns exist, simply because it’s far easier to disprove. And if you look carefully, it turns out that claims that refer to a universal truth (i.e. there are no unicorns/everything is unicorns) are in general far, far easier to disprove than claims that are their direct refutation—namely, that at least one unicorn exists/at least one thing isn’t a unicorn. These types of claims are usually referred to as existential, in the sense that they claim that a specific thing exists (or doesn’t), and this “oppositeness” always holds; the refutation of a universal claim is always an existential claim, and vice-versa. If we were particularly pretentious—and if we’re in a Viennese coffee shop in the early 1900’s, that’s extremely likely—we might even call it something like the “no unicorns principle”:

No Unicorns Principle: Claims that are universal have more philosophically scientific worth than those that are existential.
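In the spirit of our Viennese grouches, the asymmetry can be sketched in a few (admittedly anachronistic) lines of Python of my own; refuting the universal claim can stop at the first unicorn found, while refuting the existential one must survey the entire world to succeed.

```python
# Toy sketch of the asymmetry (my own illustration): disproving a universal
# claim needs one counterexample, while disproving an existential claim
# requires checking everything.

def refute_no_unicorns(world):
    """Refute Philosopher 1 ("no unicorns exist"): one find suffices."""
    return any(creature == "unicorn" for creature in world)  # short-circuits

def refute_some_unicorn(world):
    """Refute Philosopher 2 ("at least one unicorn exists"): full survey."""
    return all(creature != "unicorn" for creature in world)  # must scan it all

print(refute_no_unicorns(["horse", "unicorn", "narwhal"]))  # True
print(refute_some_unicorn(["horse", "narwhal"]))            # True
```

The duality is exactly the one described above: negating a universal claim (`all`) hands you an existential one (`any`), and vice-versa.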

If you look at the different theories in physics that have popped up throughout human history, you’ll find they share something in common; their claims are almost always universal, like the claim made by Philosopher 1, in the sense that they make statements like “everything is ████” or “everything of this type behaves according to █████”. Perhaps more strikingly, you’ll find that there are far fewer theories regarding the existence of things in the vein of what Philosopher 2 proposed; statements like “there is at least one object that behaves according to ████” or “there is a ████” are fairly sparse in the history of science.

This is, in the context of my previous arguments, no coincidence. That innate grumpiness in scientists has manifested itself as the historical tendency that almost all of our physical theories are of the universal type, and statements about the existence of weird objects in our universe are almost always made as a direct consequence of their prediction in universal theories (and even then we get into big fights about them). It is for that reason that most of our experimental efforts are focused on finding specific things, since it is through this method that we go about disproving all theories that don’t predict these things’ existence. And when it comes to hunting these unicorns, you better make sure you bring a big gun.

On Sci-Fi Horror (or The Attack of the Nexocytes)


When I was in high school, I really wanted to be a writer. I knew I’d have to write nonstop and suffer for my work if I wanted to be successful at it, which probably explains why I wound up becoming a scientist—it’s like being a writer in the sense that you have to write all the time and that your career prospects are realistically garbage, but different in the sense that you don’t get a cultural pass to wear scarves all the time and smoke cigarettes outside of snowy cafés without getting branded as a massive tool.  But even though I’ve read as much Flaubert and Forster on the subway as the next guy (and you better believe I was rocking scarves and a pensive look when I was doing that), I’ll always be a big fat sucker for outlandish sci-fi horror. Whether it be about a group of hyperintelligent seagulls and shark-punching frogmen handling a worldwide birdpocalypse caused by an Egyptian god or a concept-eating basking shark let loose in a hyperdimensional Portland, I always wanted to someday write about an utterly bonkers world-ending scenario or something of that ilk. And in this entry, I’ll try to pitch you my idea for it—the nexocyte.

Cue scary trumpets.

One of the longest-lasting sci-fi horror tropes consists of self-replicating nanobots; nanoscopic little machines that are able to create new copies of themselves without dying in the process. These little nanobots are usually referred to as “grey goo”, and one recent execution of this concept was in the (profoundly mediocre) 2008 remake of The Day The Earth Stood Still. Take a look at the scene involving them below and enjoy one of the few sci-fi action scenes that portrays lab safety protocols semi-accurately:

This flawless self-replicator concept has been around for ages, but for all its countless variants, it’s always stuck to the same simple formula; after they accumulate enough “food”, they’ll generate a copy of themselves while leaving the original “source” individual intact.

Grey Goo Cycle

Although this seems harmless enough, you’ll find that the number of grey goo nanites in your tacky secret military bunker can get real big real fast. Take it from the guy who invented the concept:

Imagine such a replicator floating in a bottle of chemicals, making copies of itself…the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined — if the bottle of chemicals hadn’t run dry long before.

But note that these replicators are, in some sense, “dumb”; their reproductive process is entirely independent of the state of the group, which you would expect if each of these little robobugs didn’t know anything about the group/collective/colony it belongs to. But what if it did?

Let’s consider some strange new type of replicator, which I’m going to call a nexocyte, that does know the state of the colony it belongs to thanks to some kind of intra-colony communication network. As a result, its reproductive process can be informed by the state of the colony, and in fact, it may try to clone the colony itself through its (presumably long and arduous) reproduction process.

Nexocyte Cycle

Although the illustrations don’t make it seem like the nexocytes and the nanites are too different (only a factor of 2 off after the second reproductive cycle), the difference very quickly adds up when you do the math. In fact, I’ll do the math for you:

Nexocyte Growth

Look at that; the nexocytes have already hit 65,000 while the nanites haven’t even clocked 50. And, just in case you’re curious, the total number of nexocytes after the sixth reproductive cycle is precisely 18,446,744,073,709,551,616. Let me repeat that; after only six reproduction cycles, the nexocyte colony numbers 18 quintillion, 446 quadrillion, 744 trillion, 73 billion, 709 million, 551 thousand, six hundred and sixteen individuals.
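To spare you the arithmetic, here’s a small sketch of the two growth laws. I’m assuming, per the cycle counts above, that both populations start from a pair, that nanites double each cycle, and that a nexocyte colony squares itself (every member builds a copy of the whole colony):

```python
# Sketch of the two growth laws (my reading of the colony-cloning rule
# described above). Both populations start from 2.

def nanite_count(cycles, start=2):
    """Dumb replicators: each one copies itself, so the colony doubles."""
    return start * 2 ** cycles

def nexocyte_count(cycles, start=2):
    """Networked replicators: every member clones the whole colony,
    so the colony squares every cycle."""
    n = start
    for _ in range(cycles):
        n *= n
    return n

for cycle in range(7):
    print(cycle, nanite_count(cycle), nexocyte_count(cycle))
# By cycle 4 the nexocytes sit at 65,536 to the nanites' 32; by cycle 6
# they number 2**64 = 18,446,744,073,709,551,616.
```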

To show precisely how catastrophic the existence of such a replicator would be, let’s envision a scenario like the one quoted above, where each nexocyte has the same mass and volume as an HIV virus, and replicates freely without any concerns for food, chemicals, or the laws of physics. If the timing for the reproductive cycles is the same as above (1000 seconds between each), here’s what the result of each cycle looks like:

Cycles 0-5 (0 seconds to 1 hour and 23 minutes)

Everything here is still well in the microscopic range; even though there are a huge number of nexocytes in our colony by Cycle 5, they’ll collectively be around the size of a dust mite and weigh accordingly too: almost completely imperceptible.

Cycle 6 (1 hour and 40 minutes)

Now we’re getting somewhere. After the sixth reproductive cycle, our little colony is not so little anymore, weighing in at about 40 pounds and measuring up to the size of a decent paint can (4.5 gallons). For some unlucky sci-fi horror protagonist, it will probably be very shocking to see a weird paint can-sized lump of goo pop out of (what appears to be) thin air, but all in all this isn’t so bad! If our protagonist doesn’t get laughed off by emergency services, the authorities might be able to close in on this thing just as it undergoes its next cycle 16 minutes(ish) later. That’s fine, though—I mean, how big could this thing get?

Cycle 7 (1 hour and 57 minutes)

Goodbye protagonist. The nexocyte colony after Cycle 7, if it keeps a nice spherical shape, will be about 840 kilometers wide and weigh in at a whopping 3.4 × 10^20 kilograms, which is enough to wipe almost all of the state of New York off the map. Our cute little “nexosphere” is now comparable to the dwarf planet Ceres, and our colony just about makes the cut to be called a dwarf planet (if it were zipping around in orbit instead of crushing our protagonist’s internal organs). This will definitely grab your omniscient secret government of choice’s attention, and if we’re very lucky, they’ll get their ducks in a row and let more than a couple of ballistic nukes rip on this thing by the sixteen-minute mark. Because if not…
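If you want to check those numbers yourself, here’s a rough sketch; the virion size and mass below are assumed round numbers for an HIV-ish particle (~120 nm across, ~10^-18 kg), not measured nexocyte properties.

```python
import math

# Back-of-envelope check of the Cycle 7 figures, assuming each nexocyte is a
# sphere with the size and mass of an HIV virion (assumed round numbers) and
# that the colony squares itself each cycle starting from 2.
VIRION_RADIUS = 60e-9   # meters
VIRION_MASS = 1e-18     # kilograms

def colony_stats(cycle):
    """Return (mass in kg, diameter in m) of the colony after `cycle` cycles."""
    count = 2 ** (2 ** cycle)
    mass = count * VIRION_MASS
    volume = count * (4 / 3) * math.pi * VIRION_RADIUS ** 3
    diameter = 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)
    return mass, diameter

mass, diameter = colony_stats(7)
print(f"{mass:.1e} kg, {diameter / 1000:.0f} km across")  # ~3.4e+20 kg, ~840 km
```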

Cycle 8 (2 hours and 13 minutes)

…goodbye solar system. Our former little dwarf planet is now a sphere bigger than most small galaxies and about a hundredth the width of the Milky Way, with a radius of 309.1 light-years. That’s right: it would take light 309.1 years just to travel from the center of our nexocyte colony to its edge. Luckily, it won’t splat over the entirety of the Milky Way, and will definitely (and finally!) grab the attention of the advanced alien civilization of your choice. And luckily, they won’t even have to do that much to get rid of the nexocytes either!

Because they’ll become a black hole.

See, it turns out that our galaxy-sized ball of nexocytes is sufficiently massive (35,000 times the weight of the entire observable universe) to cause it to become a black hole orders of magnitude bigger than any black hole that could ever plausibly exist. Our alien friends will simply have to enjoy whatever of life’s simple pleasures they can get before they’re sucked into the black hole/thrown off their orbit/fried with ionizing radiation/etcetera as the Milky Way slowly but surely collapses from the sudden disruptive presence of the nexocyte singularity.

So there you go; our colony of nexocytes may not be as long-lived as the nanites’, but we did get to destroy the Solar System in 2 hours, 13 minutes and 20 seconds (with the Milky Way getting sucked in soon after). Luckily for us, this kind of replicator is completely implausible, because it would take too much time and resources for it to replicate its colony! The only way this replicator concept would ever even potentially get off the ground is if it existed inside of a medium with nearly unlimited amounts of energy compared to each nexocyte’s energy consumption, and where information exchange between nexocytes could be extremely fast and efficient; and even then, it would just wind up wrecking the place at breakneck speed. Good thing a place like that doesn’t exist, huh?

Quantum III: On Terrible Jokes and Cosmic Terrors

This is the third of three posts on quantum mechanics. See the first and second here and here, respectively.

I absolutely hate science “jokes”. I utterly, completely, despise them. Sure, they’re not all bad—but the overwhelming majority of them live in a weird space where the entirety of the punchline relies on a sense of smug self-satisfaction for knowing what the joke is referencing, making these jokes a great litmus test for finding out which of your friends is a pretentious tool that wouldn’t know comedy if it broke into their house and took their kids for ransom. To give you an example of this kind of crime against laughter, here’s a classic joke of this type (where I mean “classic” in the sense that the Hindenburg disaster is a “classic”):

Werner Heisenberg gets pulled over while driving. A cop comes over and asks, “Do you know how fast you were going?”

Heisenberg replies, “No, but I know exactly where I am”.

This joke is about as funny as dialysis. Medgar Evers may have said that you can’t kill an idea, but when it comes to this joke, God help me I can try. If the only way to get rid of the concept of this joke was to go back in time and prevent cars from ever existing, I’d get my jogging playlist ready in record time. This is a science joke in the sense that an Aston Martin made entirely out of compacted coffee grounds is a coffee machine.

But I digress; the point of this blog is not to point out bad jokes, but to explain what they’re referencing. And now that we’ve managed to get through the key conceptual hoops of quantum mechanics and what it can/can’t do in the previous entries, we’re in a good position to address this!

Let’s do a quick recap of what we’ve figured out so far:

  1. The universe doesn’t determine outcomes precisely, so the laws of quantum mechanics deal only with the probabilities of things happening.
  2. The way those probabilities change with time is -very- weird, to the point where we can’t describe this change using the typical method we’d use for probabilities in a non-quantum world. This is what causes quantum objects to be “glitchy” until you interact with them.

So really, the only thing keeping quantum mechanics from being a boring old theory about statistics like how you lose money in a casino is how those probabilities are changing over time! The question du jour is then obvious; what’s causing those probabilities to change over time?

Well, it turns out the core of what causes all of the weirdness in quantum mechanics is at the very heart of physics: energy! And, because it’s so important to the quantum world, let’s digress a little bit to talk about what we mean by energy. For the purposes of this entry, energy is just a number that depends on two things; how the object in question is moving, and how it interacts with everything else around it. As a result, energy depends on things like where other things are relative to the object, and how the object itself is moving.

As it turns out, energy is incredibly important in quantum mechanics because an object not having a specific energy is precisely what causes all of its probabilities to change over time. And if the probabilities don’t change over time, then there’s no difference between the behavior of a quantum pencil and its classical, statistical, brother (boring!). Hence, the universe often not assigning specific energies to quantum objects is where all the properly crazy quantum stuff comes from.

Let’s take a look at one way this tidbit triggers a weirdness cascade throughout the rest of quantum physics by delving into an example. Consider a quantum teapot, zipping through a completely empty universe. In this situation, if we knew the energy of the teapot precisely, then we would know its speed as well; in an empty universe, the energy of this teapot is exclusively dependent on its speed and vice-versa. In addition—if you trust my previous statements—the probabilities of the teapot’s observable properties shouldn’t change with time. If the teapot had, say, a 50% chance of being in position A when we measured its energy, then it should retain that probability for the rest of time.

But how could it? Remember, even though we might not know where the teapot is, we know where it can be, and we know that it has to be moving with a specific speed that we can discern. So if we also knew that the teapot was in the neighborhood of position A, we know that it would have to eventually move away from the neighborhood of position A, and the probability of it still being in position A would have changed with time, contradicting our previous statement!

Teapot 1

And no matter how much you play around with the information you have about where the teapot can be, there’d only be one scenario in which this wouldn’t be a paradox: if the teapot had the same probability of being everywhere, in which case the concept of position doesn’t have any meaning at all. It would become some kind of cosmic entity, omnipresent, eerily lurking in a “glitch” state, steadily moving through a desolate universe of itself where movement has no purpose.

Teapot 2

Scary, right? Welcome to quantum mechanics. This innate relationship between the nature of position and velocity, combined with velocity’s connection to energy and energy’s connection to changing probabilities, is what leads to that “Heisenberg uncertainty principle” hoopla everyone keeps talking about when they try to explain the punchline of their unfunny jokes to you. And trust me, all the other weird stuff you hear about in quantum mechanics pops out of thought experiments similar to this one; marrying this little energy-time change connection with other boring classical physics results, such as velocity being the rate of change of position in the example above.

With this in mind, I’ll discuss just one more quick example. The central tenet of all physics (and you definitely don’t want to mess with that) says that the energy of an object which isn’t exchanging energy with some other object is constant in time—energy can’t just spontaneously appear or disappear. This naturally implies that, once we know the energy of a single isolated object, it can’t change from that value the next time we measure it. As you can see from the previous example, this puts a lot of restrictions on how such an object can behave—and in more realistic and restrictive situations than the empty universe above, only specific energy values yield probabilities that are consistent with the extra restrictions imposed by the environment. And as you can probably surmise, this means you very often can’t find an object to have just any arbitrary energy after interacting with it, only specific values of it; this is the historical hallmark and experimental mine canary of quantum mechanics.
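To put one concrete (textbook) face on that last claim, here is the standard particle-in-a-box result: an electron confined to a tiny one-dimensional region can only be found with energies E_n = n²π²ħ²/(2mL²). I’m quoting this formula rather than deriving it, and the 1-nanometer box below is just an arbitrary choice.

```python
import math

# The textbook 1D particle-in-a-box spectrum (quoted, not derived here):
# E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2), with n = 1, 2, 3, ...
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def box_energy(n, length):
    """Energy (joules) of the n-th allowed state in a box of width `length`."""
    return n ** 2 * math.pi ** 2 * HBAR ** 2 / (2 * M_E * length ** 2)

# Only these discrete values survive the box's restrictions; nothing between.
for n in (1, 2, 3):
    print(n, box_energy(n, 1e-9), "J")
```

Note how the allowed energies scale as n²; the gaps between them are exactly the “you can’t find just any energy” behavior described above.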

That’s enough for now! I’ll conclude this entry by providing you with a science joke of my own:

Werner Heisenberg gets pulled over while driving. A cop comes over and asks, “Do you know how fast you were going?”

Heisenberg replies, “Now I do.”

He vanishes into thin air. The world feels changed; the colors off, the hues subdued.

The officer stares blankly into the empty seat. A nylon face forms in the seat upholstery; it whispers a single phrase.

“I am arriving.”

The officer begins to form the concept of fear. He vanishes before being able to do so.

Heisenberg has become the demiurge—he shapes and reshapes the universe as he sees fit. Stars die and are reborn in instants. Comets pulse in green and red as fractal Bauhaus palaces made of solid xenon crystals shatter and reform in the region once occupied by Saturn’s rings.

The Earth and its inhabitants fluctuate chaotically in the same manner; an irradiated wasteland consumed by eldritch nightmares one second, a savannah of polygons dotted with wireframe people the next. They are none the wiser to their predicament, their collective consciousness a fleeting mayfly. They are beyond hope now—they are beyond most things.

Ohm resists.

On Silly Shapes


Entropy is a tricky concept. Although it pops up everywhere—take this scene from Portlandia, for example—there are many different definitions for it, and as a result it tends to be hand-waved around as some kind of measure of “disorder” when explained in a casual situation. That being said, entropy is such a pervasive concept in physics that it has been blamed for anything from the eventual heat death of the universe to the fact that time exists as we know it! So to try and give this topic the attention it deserves, and to dispel the notion that entropy is somehow fundamentally tied to disorder, I’ll try to demystify the concept of entropy without ever touching on disorder by presenting a fairly specific (but conceptually broad) definition of it.

Consider some sort of “shape generator” that randomly pops out either a diamond, arrow, or circle with equal probability. In fact, you’ll find an (only approximately random) gif version of such a contraption below!

Shape Generator Fast
Left-click the gif and drag it to randomly pick a shape.

As you can see, there are only three distinct possible outcomes of picking a shape. And, at least for this example, these outcomes contain all the information you could get out of the shape generator. In fact, by repeatedly picking shapes over and over, you could understand everything there is to know about the shape generator itself! We can call these shapes complete shapes* since they are both shown completely and they collectively provide complete information on the shape generator.

*The formal name for these in statistical mechanics doesn’t make much intuitive sense here.

Now I’m going to block off some information from you by taking the exact same shape generator from above and putting a blue bar over the right half of it. Try picking a shape below now!

Blocked Shape Generator 2
Left click the gif and drag it to select your (blocked) shape!

Notice that, even though the underlying shape generator is the same, you can only pull two distinct outcomes out of this blocked generator compared to the three in the unblocked one; triangle or semicircle. (This is because the left-hand side of the arrow and that of the diamond are exactly the same.) We can call the shapes we get from this generator incomplete shapes since they don’t show the underlying shape completely thanks to the blue bar.

From an informational perspective, the blue bar also doesn’t let us fully know what the underlying shape generator is; someone who has no idea what’s under the blue bar might not even think of—and can’t prove—the fact that there are two distinct “triangle-y” shapes. This is because, although the semicircle in the blocked case indicates a circle in the unblocked case, the triangle in the blocked case doesn’t uniquely correspond to either the diamond or the arrow. And this is precisely where entropy comes in.

Shape Mapping

Entropy is just a measurement of the number of complete shapes associated with an incomplete shape, which can be measured in this case by counting the lines attached to each of the incomplete shapes in the diagram above. Here, the triangle shape has an “entropy” of 2, while the semicircle shape has an “entropy” of 1—which is the lowest value it could possibly be in any shape generator, since there has to be at least one line attached to an incomplete shape. (You couldn’t put a blue bar on our shape generator and suddenly expect to see a hexagon, could you?)

In a roundabout way, entropy is giving you a quantitative measurement of how little information you get about an underlying system when you observe some specific “blocked” or inefficient measurement of that system.* A shape with large entropy has a large number of lines attached to it, which means that a bunch of complete shapes could be associated with the incomplete shape; a shape with low entropy has only a few complete shapes associated with it, which gives you less uncertainty about the underlying shape, and by extension, the underlying generator creating those shapes.

*Curiously enough, this quantity as described in terms of specific measurements isn’t called entropy in information theory; informational entropy is defined as the average of this quantity over all blocked measurements, which means that it’s really a function of the blocking itself rather than of a specific blocked measurement. I consider this to be very annoying.

That’s mostly all there is to it! One important thing I have to mention, which I consider a bit of a boring formality, is that the proper definition of entropy is actually the logarithm of the number of lines attached to each incomplete shape. This changes nothing about the qualitative statements I made above and, depending on your point of view, is more pragmatic than fundamental; it’s done chiefly so that if I have two identical shape generators and pulled out two triangle shapes, the entropy of both of those triangle shapes together is the sum of each individual triangle entropy rather than their product. Bo-ring!
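If it helps, the whole toy definition fits in a few lines of code; the dictionary below is just my transcription of the line diagram from earlier (complete shape on the left, the incomplete shape it gets blocked into on the right).

```python
import math

# The toy entropy above, in code: the entropy of an incomplete shape is the
# logarithm of how many complete shapes map onto it under the blocking.
blocking = {
    "diamond": "triangle",
    "arrow": "triangle",
    "circle": "semicircle",
}

def entropy(incomplete_shape):
    """log(number of complete shapes that block into `incomplete_shape`)."""
    multiplicity = sum(1 for half in blocking.values() if half == incomplete_shape)
    return math.log(multiplicity)

print(entropy("triangle"))    # log 2, about 0.693
print(entropy("semicircle"))  # log 1 = 0, the lowest possible value
```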

Also note that because the triangle shape has a larger entropy, and each complete shape is equally likely to pop up in the shape generator, the triangle is more likely to show up than the semicircle in our blocked generator. In fact, if there were a massive amount of triangle-y complete shapes compared to circular ones in our shape generator, the entropy of the triangle shape would be far larger than the semicircular one, and it would be almost certain that you’d pull out a triangle from the blocked shape generator.

What does this all have to do with thermodynamics? Well, in thermodynamics, we’re dealing with massive ensembles of individual physical objects (atoms/molecules) with many different physical properties such as energy, momentum, and so forth. Because it’s hopeless to keep track of all of the properties of all of these individual objects, we decide to “block off” our information about every individual object and keep track of only averages of these quantities in the entire ensemble. This then automatically defines an entropy connecting each of these “incomplete” averaged measurements to the “complete” collective states of the individual objects. And, if the largest entropy of an averaged measurement is far larger than the second-largest entropy as in the multi-triangle scenario in the paragraph above, then it is overwhelmingly likely for the ensemble to be in that largest-entropy state.

Note that disorder never even came into the picture! The connection to disorder only obviously pops up when your “complete” system consists of a very large amount of identical subsystems, in which case the most likely incomplete measurements tend to be ones with little “organization” (and, by extension, little information) in the complete underlying system. In short, entropy is only a measure of disorder when disorder implies a lack of information about the system. And if you’re interested in seeing some examples of this, well, you’ll have to do a little bit of extra reading.

On Greek Goddesses and Flying Furniture


During my last trip back home to Puerto Rico, I struck up a conversation with a friendly bartender I met in Old San Juan on the topic of achieving emotional stability. They mentioned that they believed in the ability to sporadically channel “emotional energy” from the moon, and that they relied on this to keep themselves stable through an unenviably long & rough patch of their life. Far from treating this as the ramblings of a person detached from reality, as many scientists would do, I was deeply sympathetic; regardless of how non-scientific that belief was, it certainly felt real to them, and they may not have come out of that stretch of bad times (which many of my compatriots have been sharing) without it. So as the nocturnal hubbub of the old city sank back into the cobblestone, and I into my Trinidad Sour, I wondered: to what extent should we scientists attempt to point out and chastise nonsensical, potentially harmful beliefs, and to what extent should we avoid using science as a one-size-fits-all appeal to authority while refuting people’s emotionally valuable belief systems? To that end, I will pull the classic “physicist thinks he can solve everything” trick in this entry to try and find meaningful “operating conditions” and restrictions for non-scientific phenomena, using arguments drawn less from science itself than from an often-ignored field of philosophy.

The first question to consider is, what is a science? This can be answered simply by a cursory stroll to your nearest dictionary; a science is a branch of study whose conclusions derive from consistently testable physical phenomena. We know Newton’s laws are laws because we don’t spontaneously see people fall upwards; every time we drop our exes’ furniture out of our apartment window, we can reliably count on hearing a loud thud on the street a couple of seconds later. If we did see couches sporadically flying upwards and sometimes falling downwards, science as we know it would be in a hell of a lot of trouble.

As a result, our definition of science hinges on repeated assessments of the behavior of things, and scientific laws & theorems must be validated by these experiments time and time again in order to be properly considered as such. But there are bound to be flukes; the only thing more unbounded than the universe is the capacity for human error, and it is only a matter of time before someone tries dropping a couch over an open manhole, fails to hear a thud, and concludes that gravity isn’t real because their couch has found a new home in the upper atmosphere.

Couch 1

This is why any modern scientific enterprise, be it physical or biological, relies on statistics (cue screams). The existence of the Higgs boson, for example, is concluded entirely from statistics; based on the data they’ve observed, physicists estimate a chance of less than 1 in 1 million that the measurements consistent with the existence of this particle are just experimental noise (or femtoscale flying couches). The process through which medical treatments become approved for public use is also statistical; it is foreseeable that someone with a currently undiscovered genetic anomaly will die from the use of Tylenol, but that certainly doesn’t disqualify it from being a worthwhile medical treatment, nor should you update your will every time you take one. Scientific flukes are always expected; just not too many.
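To make the fluke logic concrete, here’s a toy calculation of my own (a coin-flip stand-in, emphatically not the actual Higgs analysis): how often would pure noise produce a result at least as extreme as the one you saw?

```python
from math import comb

# Toy fluke calculation: probability that a fair coin produces at least k
# heads in n flips, i.e. the chance that noise alone explains your "signal".
def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(binomial_tail(100, 60))  # ~0.028: a 60/100 split is plausibly just noise
```

Particle physicists only declare a discovery once this kind of tail probability drops below roughly one in a few million; medicine settles for far laxer thresholds, which is why flukes like the hypothetical Tylenol case are expected.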

So where do beliefs fit into all this? Most science communicators are quick to discard religion and other such belief systems post-haste, while “believers” will always point to their deities (or themselves) as supreme authorities over all matters scientific or not. So where does science hold reign and what lies outside its domain?

In order to clarify this, we should first classify belief systems into two categories:

  1. Scientific belief systems: A belief system in which all statements are consistent with current scientific laws/theorems.
  2. Non-scientific belief systems: A belief system in which some statements are not consistent with current scientific laws/theorems.

Using these definitions, it’s quite clear that scientific belief systems have no trouble with scientific results, because they come to the same conclusions regarding testable physical behavior. It doesn’t really matter whether you think the invisible hand of the Flying Spaghetti Monster is pulling your exes’ couch onto the pavement or if you think it’s because of impossibly tiny strings; all that matters is that you wind up getting the equations of general relativity (and a broken couch on the pavement) either way.

But arguably, such belief systems are fairly scarce; perhaps the largest appeal of belief systems is precisely the fact that they “overcome” science, and so our attention should be focused firmly on the non-scientific kind, and particularly on the restrictions that experimental (scientific) tests would “impose” on these. In particular, what would a “scientifically plausible” set of non-scientific beliefs look like?

The best way to illustrate such a “plausible” non-scientific belief system is by example. Consider a group of a million believers and non-believers who are afflicted by a well-understood disease that has been scientifically shown in laboratory experiments to kill approximately 50% of people who suffer from it. However, some people in this group believe that the Greek goddess of health Iaso will rescue and cure a select group (0.1% of the total afflicted) of exceptionally “righteous” people who would otherwise have died.

Plague 1 Plague 2

Using computer simulations*, we can test two different scenarios of this plague; one in which Iaso doesn’t exist, and one in which Iaso does exist and her behavior was correctly predicted by her believers. If we’re only measuring the number of people who survive/die, here are the numbers from one simulated run of this plague for each case.

*See here for the simple algorithm: you can copy/paste it here and run it yourself!

                     % dead     % survived   % total
Iaso doesn’t exist   49.9724%   50.0276%     100%
Iaso intervenes      50.0081%   49.9919%     100%

Look at that; more people died in the case where Iaso rescued some of the doomed than in the case where Iaso didn’t exist! And, more importantly, the difference between both cases in this “study” is almost indistinguishable!

The key reason we can observe something like this is that the plague kills approximately 50% of the afflicted; we don’t expect to see exactly 500,000 people die in a group of a million afflicted people, but we do expect the actual number to be so close to 500,000 that it may as well be that. And there is a good mathematical reason why the Iaso-intervening plague killed more people than the godless case; the number of people Iaso explicitly saved (which is the evidence for non-scientific phenomena) is contained entirely inside the statistical uncertainty of our scientific prediction for the kill rate of this plague. In this situation, the existence of a Greek healing goddess (or deity of your choice) going around healing people doesn’t necessarily contradict scientific theories, because her effect is quantitatively negligible in comparison to the effects predicted by scientific theories. If the effect were large enough, though, we’d certainly see it pop up in the statistics, and scientists would be able to start sniffing around for new physics. We can use all of this to establish a statement on belief systems of this type:
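In case the linked algorithm ever goes missing, here is a minimal sketch of the kind of simulation described above (the population size, 50% kill rate, and 0.1% righteous fraction come from the text; the function name, structure, and random seeds are my own, so the exact outputs will differ from the table):

```python
import random

def simulate_plague(n=1_000_000, kill_rate=0.5, righteous_frac=0.001,
                    iaso_exists=False, seed=None):
    """Return the fraction of the afflicted who die.

    If Iaso exists, a righteous_frac slice of the population is
    'exceptionally righteous' and gets rescued if they would have died.
    """
    rng = random.Random(seed)
    deaths = 0
    for _ in range(n):
        dies = rng.random() < kill_rate
        righteous = iaso_exists and rng.random() < righteous_frac
        if dies and righteous:
            dies = False  # Iaso intervenes
        deaths += dies
    return deaths / n

print(f"Iaso doesn't exist: {simulate_plague(seed=1):.4%} dead")
print(f"Iaso intervenes:    {simulate_plague(iaso_exists=True, seed=2):.4%} dead")
```

The arithmetic behind the hiding: Iaso saves roughly 0.1% × 50% = 0.05% of the population, while the one-standard-deviation statistical wiggle in the death rate for a million coin flips is √(0.25/10⁶) ≈ 0.05% as well, so her miracles are exactly the same size as ordinary noise.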

Statement 1: Physical effects of non-scientific belief systems that disagree with scientific theorems must be statistically trivial.

However, the astuteness of scientists places a further constraint; you see, a keen experimentalist who is also an Iasic believer may attempt to find evidence of Iaso’s meddling, and specifically look at the survival rates of those he knows are exceptionally righteous. Such a scientist will note that these “righteous” always keep surviving this plague, recast the Iasic belief above as a testable hypothesis, and create a scientific law proving the existence of Iaso (or at least the immunity of the exceptionally righteous in the Iasic religion to this plague). This would place the Iasic belief in the domain of scientific belief systems, so Iaso’s behavior would have to be at least a little bit erratic or inconsistent in order to avoid scientific characterization. In short,

Statement 2: Physical effects of non-scientific belief systems cannot be verifiably consistent.

For the example above, either Iaso has to be sufficiently unpredictable in saving the doomed to avoid consistent testable observation of her cures, or the criteria by which she saves people (righteousness) have to be sufficiently uncharacterizable that they can’t be controlled for in a scientific experiment. In some sense, Iaso “must work in mysterious ways”.

Does this spell out some sort of argument to bury non-scientific belief systems? Of course not! In my opinion, it does quite the opposite; this clearly delineates the limitations of science to disprove a “sufficiently modest” belief system that predicts physical phenomena which disagree with scientific theories. Simultaneously however, non-scientific belief systems have to be very careful about predicting physical phenomena, lest we scientists swoop in and start stealing their thunder (or proving them wrong). So though you may certainly find that science has something to say about the existence of flying couches, you won’t find me complaining too much about you feeding off of that moon energy. Peace out!

On Pointless Pastimes

Everybody has a strange pastime. Whether it’s something routine, like watching really old cartoons, or something more exotic, like intentionally calling every barista you interact with “Greg”, these quirky habits have the tendency of inexplicably making you a little bit happier than you were before (and potentially causing others to look at you like you escaped from an insane asylum). The number of different weird hobbies available to us has grown exponentially in the modern age, and you can basically be sure that for any kind of bizarre activity, you can find a poorly maintained hobbyist forum from the early 2000s for it. (Haven’t had any luck finding a community of people who intentionally misname baristas, though.)

Some of the oddest pastimes take something easy and come up with a way to make it really hard for no apparent reason. For example, ironing your clothes is usually very boring, but you’ll probably get a bunch of YouTube views if you film yourself doing it while hanging off of a cliff. For the people who pursue these, a lot of the motivation comes from being the first or only person to successfully do that strange specific action; you can rest assured that the guy who created the largest ball of paint by repeatedly coating a baseball won’t have his record taken from him anytime soon, even though I can make an even bigger paint ball by just pouring a bunch of paint into a spherical tank and letting it dry.

For this blog post however, I want to focus on two particular odd pastimes of this type that piqued my interest, the first of which I like to call constrained Super Mario 64. The gist of it is that a very dedicated group of gamers decided they would try and see whether or not they could beat specific levels in Super Mario 64 without ever performing important actions, like moving the joystick or pressing the A button. In fact, one particular guy has been trying for years to solve the “open problem” of figuring out how to beat the entire game without ever pressing the A button. It may be the case that doing so is impossible, but no one can say the guy hasn’t tried: just take a look at his channel in the hyperlink above and you’ll see all of his insanely complex attempts at beating levels without pressing the A button.

My personal favorite, and the video that made this guy Internet famous, is his successful attempt at beating this one specific level without pressing the A button. (Technically, he’s left it pressed since before entering the level, but we don’t need to be too pedantic.) Words can’t describe the amount of effort, dedication, and ingenuity he spent on doing this: you’ll have to see this work of art unfold for yourself. The video explaining his techniques below is about a half-hour long; you can find the much shorter uncommentated version here. If you do decide to watch it though, buckle up.

I was literally more excited watching the execution of this than I was when they found the Higgs boson (and I saw it live!). The fact this man was not immediately hired by NASA to coordinate rocket launches after the making of this video convinced me that there is no such thing as cosmic justice. If I could take any one person on an all-expenses-paid trip with me to the Bahamas, I would either take this guy or my favorite barista Greg.

In any case, I have nowhere near the skill or technical know-how to play Super Mario 64 like this, and every problem of this type in constrained SM64 that’s considered difficult has probably already been described; after all, there are only so many buttons you can’t press. As a result, if I want to get famous off of a weird pastime, I need to find some other strange activity which has undiscovered problems to solve, and that brings me to the second topic of this blog post: number theory.

Logo

Number theory is the study of numbers (great writing, Arnaldo), in particular the study of groups of numbers and facts about them. Some facts are easy to show, and some aren’t, but luckily for me, number theory has a massive number of undiscovered problems! See, number theory is just like constrained Super Mario 64; it is extremely difficult, very interesting, mostly fun, and largely pointless (except for some key applications in cryptography*). The key difference, though, is that there’s only so much Super Mario 64; there are no limits to the amount of numbers and number groups.

*If you get all 120 stars without pressing the A button, you can find Yoshi on the castle roof and he’ll give you the private key to an offshore cryptocurrency wallet.

Perhaps the best thing about problems in number theory is that, as long as it’s not easy and it’s not impossible, I can basically claim some arbitrary unsolved problem is as important as any other famous problem because no problems are really “important” in any concrete objective sense. It’s like saying that beating a Super Mario 64 level without moving the joystick is more important than beating it without pressing the A button; one or the other might be easier, but they’re both pretty damn impressive, and doing either is ultimately pointless.

Easier said than done, you might think. Well, why don’t we actually take a crack at finding an “important” number theory problem? Let’s give it a shot by following these key steps:

  1. Find topics that are “hot” in number theory.
  2. Find an arbitrary specific problem involving these “hot” topics.
  3. Show this problem isn’t easy or equivalent to another known “important” problem.

We first need to look at what’s “hot” in the field of number theory, and perhaps the hottest topic in number theory is the study of what are called prime numbers. (It’s so hot that Wikipedia has an entire section on unsolved prime number theory problems!) These are numbers that can’t be divided by any number other than 1 or themselves without creating a bunch of decimal gunk. An example of a big number that’s prime is 89: try dividing it by any number other than 1 or 89 and you’ll always get a number with stuff past the decimal point. For clarity, the first few prime numbers are:

P_{i} = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, ...]

Another hot topic is the Fibonacci numbers; these are a bunch of numbers on a list defined so that the next number on the list is equal to the sum of the previous two numbers. By defining both the first and second Fibonacci numbers as 1, the list of Fibonacci numbers begins as:

F_{i} = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...]

Both prime numbers and Fibonacci numbers have been studied to death, and pop up very often in pop-math books and related media; I even vaguely recall seeing the Fibonacci numbers show up in The Da Vinci Code*. However, one thing that isn’t known (and is considered a well-known “important” problem to number theorists) is whether or not there are an infinite number of prime numbers that are also Fibonacci numbers. We can certainly spot a few prime numbers in the starting Fibonacci numbers I listed: 2, 3, 5, 13, 89, and 233.

*I always assume that science and math topics reach “peak pop-sci” when featured in a Dan Brown book. I send Mr. Brown emails every day about how cool low-Reynolds number fluid dynamics is, but he hasn’t taken the bait yet.
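If you’d rather not hunt for those primes by hand, a few throwaway lines of code confirm the list above (function names are mine):

```python
def is_prime(n):
    """Trial-division primality test: no decimal gunk allowed."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fibonacci(count):
    """First `count` Fibonacci numbers, starting 1, 1, 2, ..."""
    fibs, a, b = [], 1, 1
    for _ in range(count):
        fibs.append(a)
        a, b = b, a + b
    return fibs

print([f for f in fibonacci(13) if is_prime(f)])
# → [2, 3, 5, 13, 89, 233]
```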

Anyways, figuring out the number of primes that are also Fibonacci numbers is a well-known problem; in order to come up with a new problem, we need to be a little bit more specific. Let’s think about the following list of related (and completely arbitrarily defined) numbers:

Start the list off by picking some prime number a. Pick the next number on the list by finding the a-th Fibonacci number. Then find the Fibonacci number corresponding to that number, put it on the list, and keep repeating forever.

That’s it! This is just a list of specific Fibonacci numbers. To get a more intuitive sense of this list of numbers, I’ll call this list of numbers the “pointless sequence” T and start rattling off the first couple of numbers on the list if I pick, say, a = 7:

T_{1} = 7

T_{2} = F_{7} = 13

T_{3} = F_{13} = 233

T_{4} = F_{233} = 2211236406303914545699412969744873993387956988653

Jeez, that got out of hand really quickly! It seems like our arbitrary list is pulling in big numbers even at the start. But that’s great for number theorists; the bigger the numbers involved, the more difficult it is to deal with them, and the more challenging and “important” a problem is. One thing you may notice is that, if I pick 2 or 3 as my starting value, this sequence of numbers will just eventually start spouting out 1 forever. If I picked 5, it would just keep spouting out 5 forever, but if I pick any prime number bigger than that, I’ll start seeing the crazy blow-up we saw for a = 7.
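If you want to watch the blow-up yourself, here’s a sketch of the sequence generator (function names are mine; Python’s arbitrary-precision integers handle the 49-digit fourth term without complaint):

```python
def fib(n):
    """n-th Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def pointless_sequence(a, terms):
    """The 'pointless sequence' T: T1 = a, and each later term is
    the Fibonacci number indexed by the previous term."""
    seq = [a]
    for _ in range(terms - 1):
        seq.append(fib(seq[-1]))
    return seq

print(pointless_sequence(7, 4))  # 7, 13, 233, then the 49-digit monster
```

Don’t ask for a fifth term, though: it would be the Fibonacci number indexed by a 49-digit integer, which works out to a number with roughly 10⁴⁷ digits, far too many to ever write down.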

Another thing you may notice is that those first three numbers on our list for a = 7 are prime! (The fourth one unfortunately isn’t.) We can then ask pointless questions about this list of numbers and hope we hit on a tough one, like whether there exists an a other than 5 such that every number on this list is prime. Because mathematicians don’t like problems in the form of questions, we can guess that this isn’t true and reduce our problem to answering whether or not the following pointless conjecture is true; I’ll even put it into videogame format to make it pop a bit more.

Pointless Conjecture

Now that we have Steps 1 and 2 out of the way, let’s proceed to Step 3 and check if this problem is easy or equivalent to another problem, particularly the “important” problem of whether or not there are an infinite number of Fibonacci primes. If there were only a finite number of Fibonacci primes, then this sequence would have to eventually hit a non-prime and our problem would be solved. (Lucky for us it’s unsolved!) However, if the number of Fibonacci primes were infinite, it wouldn’t tell us anything about whether or not our list of specific numbers would eventually have a non-prime, which means our problem isn’t equivalent. Score!

So we know verifying the conjecture isn’t equivalent to this other problem, and we know that showing it’s true isn’t easy (because we’d solve this other nasty problem about infinite Fibonacci primes if we did). However, we need to figure out if showing that it’s false is easy, and that isn’t something we can check straightforwardly; so we’ll just have to drop it onto a math forum like stackexchange and see if anyone berates us for wasting their time on an easy problem.

Once we’ve completed all three steps, we have to go through the hard process of actually trying to solve it; and for that, there are no steps or rules other than staying dedicated, being creative, and enjoying it every day. In my case, I think it’ll probably be best if I just stick to calling baristas Greg.

Quantum II: On G̴̡̕͝l̷̛̀͝i̧̧t̶̡̕҉͞c̀̕h̨̛̀͟e̸̶̡̕ş̴͡͏͞
This is the second of three entries on quantum mechanics. Read the first here.

Now that I’ve spent some time talking about how other people get quantum mechanics wrong, it’s about time I get my own chance to screw it up. Following the set of arbitrary rules I laid out in my last quantum mechanics entry, I’m going to start by listing some things you cannot do with quantum mechanics:

  • You cannot influence the universe by thinking about something.
  • You do not have a chance of spontaneously teleporting to Mars.
  • Your ex does not have a chance of spontaneously teleporting to Mars.
  • You cannot travel into alternate universes.
  • You cannot travel faster than light.

This can all be summarized by stating:

  • You cannot do anything impossible in “normal” physics using quantum physics.*

If you retain anything from reading this post, let it be this! Also, see that asterisk? I did say before that I’d alert you whenever I said something that is debatable, and that’s what I’m doing there. But fear not, I’ll explain it all in fine print at the end of my third QM entry; ignore it for now.

Anyways, given that little postulate stated above, you may ask what the difference between quantum physics and normal physics is. Well, back in the Renaissance, the prevailing notion was that the universe was like a complex mechanical computer; that given the state of the universe as it is now, the laws of physics would procedurally generate every future state without any ambiguity. Another way to describe this is that you could always predict everything that’s going to happen in the future if you knew everything about the universe right now. However, the basis for quantum mechanics is that this is false and you should feel bad for thinking that!

The basis of quantum mechanics is that the laws of physics are fundamentally incomplete. For certain situations, the laws of physics don’t say what should happen specifically, only what can and can’t happen. An intuitive example of this is standing a normal pencil on the pointy end; after letting it go, it will inevitably fall down, but there’s no obvious preference of direction in which it will fall down to. What Renaissance physicists would counter with is that knowledge of the pencil tip shape at the microscopic level and the motion of the air molecules in your room will always tell you how the pencil will fall down—and that is absolutely true, with the caveat that it is functionally impossible to obtain that information.

Pencil

The real problem appears when you go down to the smallest scales of the universe, to a single isolated particle (or a few); there’s no hidden features there, no “tip shapes” or other properties to rely on when the laws of physics don’t explicitly tell you what’s going to happen. Of course, there are no subatomic pencils, but there are multiple processes in the quantum world (mostly involving radioactive decay) that preserve that feature of not having a precise or well-defined outcome.

This leads to two critical questions, the first being pragmatic and the second philosophical:

  1. What do these objects do when the laws of physics don’t determine their specific future?
  2. What do these objects become when the laws of physics don’t determine their specific future?

The answer to the first question, which may or may not be intuitive, is that they’ll do one of the things the laws of physics say they can do! Our quantum pencil will fall in one direction sometimes and another direction other times, and the best we can do is quantify those probabilities using experiments, develop some physical laws for those probabilities based on our data, and that’s that.

If you think about it, there isn’t really anything quantum-y about that at all. This is exactly the way we’d try to describe the physics of our normal pencil falling down; setting it up to fall down a bunch of times and recording the different outcomes to guess the probabilities of it going one way or the other. This is so ubiquitous in the world of science that it is its own long-standing field of physics: in short, bo-ring!

Where things get spooky is when discussing the answer to the second question I presented above. See, for normal objects, not measuring what state they’re in doesn’t affect anything about the probabilities in which you can find them. When the normal pencil falls down and you’re not around to check on it, you can still say it fell down to some specific position; you just don’t know which until you check. For a quantum pencil, this can’t be true!

This is because, for objects with no specific future defined by the laws of physics, the probabilities of their possible outcomes can affect each other. This is completely insane! For a quantum pencil, the fact that it could fall to the right can affect its probability of falling in every other direction*. As a result, the chances that a quantum object will behave one way or another can be affected by when and how you interact with the object as you check its status! (The precise way in which those probabilities affect each other is so tricky that physicists not only needed to describe it indirectly, through a related quantity, but also had to use imaginary numbers to describe that quantity.)
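To make “possibilities affecting each other” a little more concrete, here’s a toy illustration of my own (not from the post): assign each of two outcomes a complex-valued amplitude, the “related quantity” alluded to above, and compare adding probabilities directly with adding the amplitudes first.

```python
import cmath

a = 1 / cmath.sqrt(2)  # amplitude for outcome 1

for phase_deg in (0, 90, 180):
    # amplitude for outcome 2: same size, but rotated in the complex plane
    b = cmath.exp(1j * cmath.pi * phase_deg / 180) / cmath.sqrt(2)
    classical = abs(a) ** 2 + abs(b) ** 2  # add probabilities: always 1.0
    quantum = abs(a + b) ** 2              # add amplitudes first
    print(f"phase {phase_deg:3}: classical = {classical:.2f}, quantum = {quantum:.2f}")
```

With the amplitudes added first, the result swings between 2, 1, and 0 as the relative phase changes, even though the two individual probabilities never budge from 1/2 each; that phase dependence is the interference that has no classical counterpart. (The numbers here aren’t normalized probabilities; the point is only that the combination depends on phase at all.)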

For this reason, we can’t really say the quantum pencil exists in the same way a classical one does before you interact with it. In fact, it is literally impossible to comprehend it traditionally since it violates one of the three classic laws of thought! For the purposes of our brain, when the quantum pencil falls down and you’re not around to measure it, it’s in some weird glitch state until you interact with it, where you’ll find the quantum pencil fell down either to the left or right just like the normal one would.

Pencil 5

To illustrate this difference from a philosophical point of view, let’s say that I was an amoral psychotic and decided to link the life or death of a cat to the direction in which a normal pencil is falling down. If the pencil falls to the left, the cat lives, while if the pencil falls to the right, the cat dies. I, meanwhile, am getting some coffee as this insane little experiment is going on.

At some point while the barista is preparing my ristretto, the normal pencil falls down and the cat either lives or dies, and will remain in that state as I go check on what happened with hot coffee in hand. Nothing weird here (other than the part where cats are dying).

Pencil 2

Now we replace the normal pencil with a quantum one and observe that, because the life and death of the cat depend on the state of the pencil, the cat gets put into a glitch state too after the quantum pencil falls down! As a result, I can’t really say anything about whether or not the cat is alive or dead until I go check on it. (This is a very famous thought experiment). Also worth mentioning is that in normal physics, this linking of statistical outcomes between pencil and cat has a name, but in quantum mechanics they call it something fancier even though it’s functionally the same thing.

Pencil 3

If we want to make things really interesting, we can extend this experiment a little bit. Let’s say that, when I look at the cat post-pencil drop, I have an emotional reaction that differs based on what state I find the cat in. If it’s alive, I’m happy, and if it’s dead, I’m sad. Now let’s get a bit meta and say that some AI that can detect emotions is experimenting on me experimenting on cats, and that it was installing some Windows update until after I see the outcome of my experiment. In that situation, from the point of view of the computer, both the cat and I are going to be in a glitch state until it checks what emotion I’m feeling, even though I am clearly seeing the cat as being either alive or dead!

Pencil 4

This means the strange glitchiness is totally dependent on frame of reference and independent of whether or not something “conscious” is measuring things! This isn’t the first time we’ve run into this craziness, but it does have a lot of philosophical juice in QM that people love to spend hours debating on (presumably while getting high on something). As a result, and this is a thing a lot of people get wrong, human consciousness does not affect quantum mechanics.

Alright! That’s enough for one quantum entry. In the next one, I’ll discuss some more cute examples of counter-intuitive behavior in QM and keep trying to stay on my mission of making quantum boring again. Wish me luck!