On Greek Goddesses and Flying Furniture

During my last trip back home to Puerto Rico, I struck up a conversation with a friendly bartender I met in Old San Juan on the topic of achieving emotional stability. They mentioned that they believed in the ability to sporadically channel “emotional energy” from the moon, and that they relied on this to keep themselves stable through an unenviably long and rough patch of their life. Far from treating this as the ramblings of a person detached from reality, as many scientists would, I was deeply sympathetic; regardless of how non-scientific that belief was, it certainly felt real to them, and they may not have come out of that stretch of bad times (which many of my compatriots have been sharing) without it. So as the nocturnal hubbub of the old city sank back into the cobblestone, and I into my Trinidad Sour, I wondered: to what extent should we scientists attempt to point out and chastise nonsensical, potentially harmful beliefs, and to what extent should we avoid using science as a one-size-fits-all appeal to authority when refuting people’s emotionally valuable belief systems? To that end, I will pull the classic “physicist thinks he can solve everything” trick in this entry and try to find meaningful “operating conditions” and restrictions for non-scientific phenomena, using arguments that lie less within the scope of science itself than in an often-ignored field of philosophy.

The first question to consider is, what is a science? This can be answered simply by a cursory stroll to your nearest dictionary; a science is a branch of study whose conclusions derive from consistently testable physical phenomena. We know Newton’s laws are laws because we don’t spontaneously see people fall upwards; every time we drop our exes’ furniture out of our apartment window, we can reliably count on hearing a loud thud on the street a couple of seconds later. If we did see couches sporadically flying upwards and sometimes falling downwards, science as we know it would be in a hell of a lot of trouble.

As a result, our definition of science hinges on repeated assessments of the behavior of things, and scientific laws & theorems must be validated by these experiments time and time again in order to be properly considered as such. But there are bound to be flukes; the only thing more unbounded than the universe is the capacity for human error, and it is only a matter of time before someone tries dropping a couch over an open manhole, fails to hear a thud, and concludes that gravity isn’t real because their couch has found a new home in the upper atmosphere.


This is why any modern scientific enterprise, be it physical or biological, relies on statistics (cue screams). The existence of the Higgs boson, for example, is concluded entirely from statistics; based on the data physicists observed, there is a chance of less than 1 in 1 million that the signal consistent with the existence of this particle is due to experimental noise (or femtoscale flying couches). The process through which medical treatments become approved for public use is also statistical; it is foreseeable that someone with a currently undiscovered genetic anomaly will die from the use of Tylenol, but that certainly doesn’t disqualify it from being a worthwhile medical treatment, nor should you update your will every time you take one. Scientific flukes are always expected; just not too many.

So where do beliefs fit into all this? Most science communicators are quick to discard religion and other such belief systems post-haste, while “believers” will always point to their deities (or themselves) as supreme authorities over all matters, scientific or not. So where does science hold sway, and what lies outside its domain?

In order to clarify this, we should first classify belief systems into two categories:

  1. Scientific belief systems: A belief system in which all statements are consistent with current scientific laws/theorems.
  2. Non-scientific belief systems: A belief system in which some statements are not consistent with current scientific laws/theorems.

Using these definitions, it’s quite clear that scientific belief systems have no trouble with scientific results, because they come to the same conclusions regarding testable physical behavior. It doesn’t really matter whether you think the invisible hand of the Flying Spaghetti Monster is pulling your exes’ couch onto the pavement or if you think it’s because of impossibly tiny strings; all that matters is that you wind up getting the equations of general relativity (and a broken couch on the pavement) either way.

But arguably, such belief systems are fairly scarce; perhaps the largest appeal of belief systems is precisely the fact that they “overcome” science. Our attention should therefore be focused firmly on the non-scientific kind, and particularly on the restrictions that experimental (scientific) tests would “impose” on them. Specifically, what would a “scientifically plausible” set of non-scientific beliefs look like?

The best way to illustrate such a “plausible” non-scientific belief system is by example. Consider a group of a million believers and non-believers who are afflicted by a well-understood disease that has been scientifically shown in laboratory experiments to kill approximately 50% of the people who suffer from it. However, some people in this group believe that Iaso, the Greek goddess of health, will rescue and cure a select group of exceptionally “righteous” people (0.1% of the total afflicted) who would otherwise have died.


Using computer simulations*, we can test two different scenarios of this plague: one in which Iaso doesn’t exist, and one in which Iaso does exist and her behavior was correctly predicted by her believers. If we’re only measuring the number of people who survive or die, here are the numbers from one simulated run of this plague in each case.

*See here for the simple algorithm: you can copy/paste it here and run it yourself!

                     % dead     % survived   % total
Iaso doesn’t exist   49.9724%   50.0276%     100%
Iaso intervenes      50.0081%   49.9919%     100%

Look at that; more people died in the case where Iaso rescued some of the doomed than in the case where Iaso didn’t exist! And, more importantly, the two cases in this “study” are nearly indistinguishable!

The key reason we can observe something like this is that the plague kills approximately 50% of the afflicted; we don’t expect to see exactly 500,000 people die in a group of a million afflicted people, but we do expect the actual number to be so close to 500,000 that it may as well be that. And there is a good mathematical reason why the Iaso-intervening plague killed more people than the godless one; the number of people Iaso explicitly saved (which is the evidence for non-scientific phenomena) is contained entirely inside the statistical uncertainty of our scientific prediction for the kill rate of this plague. In this situation, the existence of a Greek healing goddess (or deity of your choice) going around healing people doesn’t necessarily contradict scientific theories, because her effect is quantitatively negligible in comparison to the effects predicted by scientific theories. If the effect were large enough, though, we’d certainly see it pop up in the statistics, and scientists would be able to start sniffing around for new physics. We can use all of this to establish a statement on belief systems of this type:
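The whole simulation fits in a few lines of Python. This is my own reconstruction of the idea, not the original algorithm; `simulate_plague` and its parameters are names I made up for illustration:

```python
import random

def simulate_plague(n=1_000_000, kill_rate=0.5, iaso_save_frac=0.0, seed=0):
    """Simulate a plague that kills roughly `kill_rate` of the afflicted.

    If `iaso_save_frac` > 0, Iaso rescues that fraction of the total
    afflicted from among those who would otherwise have died.
    Returns the percentage of the afflicted who died.
    """
    rng = random.Random(seed)
    # Each afflicted person independently dies with probability `kill_rate`.
    doomed = sum(rng.random() < kill_rate for _ in range(n))
    # Iaso's chosen righteous: never more people than were actually doomed.
    saved = min(round(iaso_save_frac * n), doomed)
    return 100 * (doomed - saved) / n

print(simulate_plague())                      # godless case, ~50% dead
print(simulate_plague(iaso_save_frac=0.001))  # Iaso intervenes, ~49.9% dead
```

With the same random seed, the Iaso run kills exactly 0.1% fewer people than the godless one; but the run-to-run statistical noise on the death rate (about 0.05% for a million people) is comparable to that difference and can mask it, which is exactly how one run in the table above ends up with more deaths under Iaso.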

Statement 1: Physical effects of non-scientific belief systems that disagree with scientific theorems must be statistically trivial.

However, the astuteness of scientists places a further constraint; you see, a keen experimentalist who is also an Iasic believer may attempt to find evidence of Iaso’s meddling, and specifically look at the survival rates of those he knows are exceptionally righteous. Such a scientist will note that these “righteous” always keep surviving this plague, frame the Iasic belief above as a testable hypothesis, and create a scientific law proving the existence of Iaso (or at least the immunity to this plague of those deemed exceptionally righteous by the Iasic religion). This would place the Iasic belief in the domain of scientific belief systems, so Iaso’s behavior would have to be at least a little bit erratic or inconsistent in order to avoid scientific characterization. In short,

Statement 2: Physical effects of non-scientific belief systems cannot be verifiably consistent.

For the example above, either Iaso has to be sufficiently unpredictable in saving the doomed to avoid consistent, testable observation of her cures, or the criteria by which she saves people (righteousness) have to be sufficiently uncharacterizable that they can’t be controlled for in a scientific experiment. In some sense, Iaso “must work in mysterious ways”.

Does this spell out some sort of argument to bury non-scientific belief systems? Of course not! In my opinion, it does quite the opposite; it clearly delineates the limitations of science in disproving a “sufficiently modest” belief system that predicts physical phenomena which disagree with scientific theories. Simultaneously, however, non-scientific belief systems have to be very careful about predicting physical phenomena, lest we scientists swoop in and start stealing their thunder (or proving them wrong). So though you may certainly find that science has something to say about the existence of flying couches, you won’t find me complaining too much about you feeding off of that moon energy. Peace out!

On the Benefits of Being a Dumb Tourist

I’ve stayed at my fair share of different places over the last few years, and using public transportation takes the cake for being the most stressful and annoying day-to-day experience in every place I’ve been to. From riding 5-and-a-half hours every week in a packed Chevy Astro through hot Puerto Rican highways to starting my workweek at Berkeley with the fresh sight and smell of body parts, I’ve never had a positive relationship with public transportation (and I don’t expect that to change anytime soon). However, for someone who can’t afford to buy a car—and who is universally described as driving “like a grandmother politely trying to get to the hospital while having a heart attack”—it is a regrettably indispensable part of my life.

As a result, I’ve had to spend a considerable amount of time thinking about how to maneuver the crowded Roman trains and smelly New York buses, and have stumbled onto some weird tricks that might be of use for both tourists and daily commuters. For this post specifically, my intent is to show you the “paradox” that, when trying to get on a packed metro train, being a dumb tourist is better than being a smart one; and I’m going to do it by using something just as annoying, stress-inducing and indispensable as public transportation. Statistics.*

*Cue Inception horns and random screams.

If mathematics were a family, probability & statistics would be the bizarre great-uncle that won’t stop talking about how taxidermy is a spiritually fulfilling hobby at the dinner table. It is a field of study that is simultaneously too trivial for “real” mathematicians (they’re too busy writing proofs no one understands) and so strange that one of the best mathematicians of all time didn’t believe a simple statistics result until someone showed him a computer simulation proving it. For the moment, I’ll start by giving you a small primer on the basics of this strange field before we delve into any commuting weirdness.

Perhaps the two most important pieces of information in the statistical sciences are the long-term average and the single likeliest outcome. The names are pretty straightforward, but just in case, I’ll explain them with a six-sided die.

  1. The single likeliest outcome is just that. For one six-sided die, there isn’t any single likeliest outcome, because you have an equal chance of getting any number between 1 and 6 (unless you’ve been loading your dice, you cheater). It’s easy to spot in an outcome graph, because it’s the outcome that happens the most.
  2. The long-term average is a little more detailed, but not much: it’s the average of your results after you obtain a very large number of them! For a single six-sided die, that number is 3.5. You can’t spot this one in an outcome graph, but you can deduce/guess it if the shape is simple.
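A few lines of Python make both quantities concrete. This is my own sketch, not code from the post, and the variable names are mine:

```python
import random

rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(100_000)]

# Each face shows up roughly equally often: no single likeliest outcome.
counts = {face: rolls.count(face) for face in range(1, 7)}

# The long-term average settles near 3.5 as the number of rolls grows.
average = sum(rolls) / len(rolls)

print(counts)
print(average)  # close to 3.5
```

The more rolls you add, the closer the average creeps to 3.5; that’s the law of large numbers doing its quiet work in the background.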

Now that we’ve got our statistics bases covered, allow me to illustrate the promised “paradox” through my experience living in the Bay Area. Trying to get on a BART train (the Bay Area’s metro system) during the busy hours was mostly a game of chance; you had to hope you picked a waiting spot close to where the train door lands, or you were looking at a 15-minute wait for the next one to roll in.

However, let’s say you know that the train door always pops up within the same 100-foot strip of the station, but you don’t know exactly where. Assuming there’s an equal chance of it showing up anywhere in the strip, the instinctively smart thing to do would be to always wait smack-dab in the center of it; that’s the position that puts you closest to the train door in the worst case, and it certainly feels like your best bet.


In this scenario, you might claim you’re making the smartest choice, so let’s call this the smart tourist scenario. Now, instead of using some fancy math theorems to tell you what the most likely distance and long-term average distance are in this case, I’m going to be 100% thorough and actually simulate it! Let’s take a look at what being a smart tourist comes out to when you simulate the train arriving a million times:

[Graph: simulated distances between the smart tourist and the train door]

There are two things to take away from this graph. First, since the graph indicates that the train stopped everywhere about the same number of times, there’s no single likeliest outcome. It’s equally likely for the train door to land right in front of you as it is for it to wind up 50 feet away! Second, if you used the train over and over, your average distance from the train door would be 25 feet (which you could calculate by finding the average of all the distance outcomes). Nothing unexpected here.

Now we’re going to go into “paradox” territory. Let’s say you take a page from your weird great-uncle’s book and, instead of carefully planning things out, you just decide to randomly pick a spot inside of the 100-foot strip to wait in.


In this case, you’re not making any decision at all about what’s best or not; you’re just randomly waiting somewhere. Let’s call this the dumb tourist scenario, and here’s what that looks like when you pick random spots a million times:

[Graph: simulated distances between the dumb tourist and the train door]

Look at that; the train stopped more times in places that were closer to you! The simulations don’t lie: the likeliest outcome now is that the train stops right in front of you, and the average distance between you and the train will be about 33 feet.

Comparing both scenarios, there’s nothing weird going on if you commute all the time; the long-term average distance is bigger when you randomly pick your waiting spot (33 ft) than when you wait in the middle (25 ft), so doing the smart thing is still your best bet in that case. But when you’re a tourist and only plan on riding the train once or twice, the likeliest outcome is what matters, and that implies it’s better to randomly pick a spot to wait in than to pick the best logical spot!
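If you want to check these numbers yourself, both scenarios boil down to a few lines of simulation. This is my own sketch of the setup above (a 100-foot strip, a door landing uniformly at random, a million trains), not the post’s original code:

```python
import random

rng = random.Random(0)
N = 1_000_000
STRIP = 100  # feet; the door lands uniformly at random along this strip

# Smart tourist: always wait at the center of the strip (the 50 ft mark).
smart = [abs(rng.uniform(0, STRIP) - STRIP / 2) for _ in range(N)]

# Dumb tourist: pick a fresh random waiting spot for every train.
dumb = [abs(rng.uniform(0, STRIP) - rng.uniform(0, STRIP)) for _ in range(N)]

print(sum(smart) / N)  # long-term average distance, ~25 ft
print(sum(dumb) / N)   # long-term average distance, ~33.3 ft
```

A histogram of `smart` is flat between 0 and 50 ft, while a histogram of `dumb` is a triangle peaked at 0 ft; small gaps between two random positions can happen in many more ways than large ones, which is the intuition behind the likeliest outcome being “right in front of you.”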

This is profoundly counter-intuitive on many levels; how can a “dumb” action turn out to be better than a “smart” one? How can my random action cause the train to usually arrive closer to me? How can I understand this result intuitively? Well, I could try to calm you down by pointing out that being a dumb tourist has two negatives: 1) your long-term average distance is larger, and 2) you have a nontrivial chance of the train showing up more than 50 feet away from you, which is impossible for the smart tourist. If you’re like me, though, you are probably still very puzzled.

For that, you can take solace in the fact that the smartest man who ever lived once said that “in mathematics you don’t understand things, you just get used to them”, and my advice is the following: get used to it. This is by no means the only “paradox” in the statistical sciences, as a great many others are known to exist, and they’ve puzzled everyone just as much as this little factoid does. The best thing you can do is to learn about them and why they happen so that you don’t get surprised by them (or, more importantly, make wrong assumptions because of them). And who knows! With time you may find some new ones yourself, if you decide to formally study statistics—or if you commute enough.