On Pointless Pastimes

Everybody has a strange pastime. Whether it’s something routine, like watching really old cartoons, or something more exotic, like intentionally calling every barista you interact with “Greg”, these quirky habits have a tendency to inexplicably make you a little bit happier than you were before (and potentially cause others to look at you like you escaped from an insane asylum). The range of weird hobbies available to us has grown exponentially in the modern age, and you can basically be sure that for any kind of bizarre activity, there’s a poorly maintained hobbyist forum from the early 2000s devoted to it. (I haven’t had any luck finding a community of people who intentionally misname baristas, though.)

Some of the oddest pastimes take something easy and come up with a way to make it really hard for no apparent reason. For example, ironing your clothes is usually very boring, but you’ll probably get a bunch of YouTube views if you film yourself doing it while hanging off of a cliff. For the people who pursue these pastimes, a lot of the motivation comes from being the first or only person to successfully pull off that strange, specific feat; you can rest assured that the guy who created the largest ball of paint by repeatedly coating a baseball won’t have his record taken from him anytime soon, even though I could make an even bigger paint ball by just pouring a bunch of paint into a spherical tank and letting it dry.

For this blog post however, I want to focus on two particular odd pastimes of this type that piqued my interest, the first of which I like to call constrained Super Mario 64. The gist of it is that a very dedicated group of gamers decided they would try and see whether or not they could beat specific levels in Super Mario 64 without ever performing important actions, like moving the joystick or pressing the A button. In fact, one particular guy has been trying for years to solve the “open problem” of figuring out how to beat the entire game without ever pressing the A button. It may be the case that doing so is impossible, but no one can say the guy hasn’t tried: just take a look at his channel in the hyperlink above and you’ll see all of his insanely complex attempts at beating levels without pressing the A button.

My personal favorite, and the video that made this guy Internet famous, is his successful attempt at beating this one specific level without pressing the A button. (Technically, he’s left it pressed since before entering the level, but we don’t need to be too pedantic.) Words can’t describe the amount of effort, dedication, and ingenuity he spent on doing this: you’ll have to see this work of art unfold for yourself. The video explaining his techniques below is about a half-hour long; you can find the much shorter uncommentated version here. If you do decide to watch it though, buckle up.

I was literally more excited watching the execution of this than I was when they found the Higgs boson (and I saw it live!). The fact this man was not immediately hired by NASA to coordinate rocket launches after the making of this video convinced me that there is no such thing as cosmic justice. If I could take any one person on an all-expenses-paid trip with me to the Bahamas, I would either take this guy or my favorite barista Greg.

In any case, I have nowhere near the skill or technical know-how to play Super Mario 64 like this, and every problem of this type in constrained SM64 that’s considered difficult has probably already been described; after all, there are only so many buttons you can’t press. As a result, if I want to get famous off of a weird pastime, I need to find some other strange activity which has undiscovered problems to solve, and that brings me to the second topic of this blog post: number theory.

Number theory is the study of numbers (great writing, Arnaldo), in particular the study of groups of numbers and facts about them. Some facts are easy to show, and some aren’t, but luckily for me, number theory has a massive supply of undiscovered problems! See, number theory is just like constrained Super Mario 64; it is extremely difficult, very interesting, mostly fun, and largely pointless (except for some key applications in cryptography*). The key difference, though, is that there’s only so much Super Mario 64; there is no limit to the amount of numbers and groups of numbers out there.

*If you get all 120 stars without pressing the A button, you can find Yoshi on the castle roof and he’ll give you the private key to an offshore cryptocurrency wallet.

Perhaps the best thing about problems in number theory is that, as long as it’s not easy and it’s not impossible, I can basically claim some arbitrary unsolved problem is as important as any other famous problem because no problems are really “important” in any concrete objective sense. It’s like saying that beating a Super Mario 64 level without moving the joystick is more important than beating it without pressing the A button; one or the other might be easier, but they’re both pretty damn impressive, and doing either is ultimately pointless.

Easier said than done, you might think. Well, why don’t we actually take a crack at finding an “important” number theory problem? Let’s give it a shot by following these key steps:

  1. Find topics that are “hot” in number theory.
  2. Find an arbitrary specific problem involving these “hot” topics.
  3. Show this problem isn’t easy or equivalent to another known “important” problem.

We first need to look at what’s “hot” in the field of number theory, and perhaps the hottest topic in number theory is the study of what are called prime numbers. (It’s so hot that Wikipedia has an entire section on unsolved prime number theory problems!) These are numbers that can’t be divided by any number other than 1 or themselves without creating a bunch of decimal gunk. An example of a big number that’s prime is 89: try dividing it by any number other than 1 or 89 and you’ll always get a number with stuff past the decimal point. For clarity, the first few prime numbers are:

P_{i} = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, ...]

Another hot topic is the Fibonacci numbers; these are a bunch of numbers on a list defined so that the next number on the list is equal to the sum of the last two numbers. By defining both the first and second Fibonacci numbers as 1, the list of Fibonacci numbers begins as:

F_{i} = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ...]
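
If you’d rather poke at these two lists yourself than take my word for it, here’s a quick Python sketch (my own throwaway code, nothing official) that builds the first few Fibonacci numbers and flags the prime ones using the no-decimal-gunk test from above:

```python
def is_prime(n):
    # Trial division: n is prime if nothing from 2 up to sqrt(n) divides it evenly.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fibonacci(count):
    # Build the list F = [1, 1, 2, 3, 5, ...] with `count` terms.
    fibs = [1, 1]
    while len(fibs) < count:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:count]

for f in fibonacci(13):
    print(f, "prime!" if is_prime(f) else "")
```

Run it and you should see exactly the primes I point out below get flagged: 2, 3, 5, 13, 89, and 233.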

Both prime numbers and Fibonacci numbers have been studied to death, and pop up very often in pop-math books and related media; I even vaguely recall seeing the Fibonacci numbers show up in The Da Vinci Code*. However, one thing that isn’t known (and is considered a well-known “important” problem to number theorists) is whether or not there are an infinite number of prime numbers that are also Fibonacci numbers. We can certainly spot a few prime numbers in the starting Fibonacci numbers I listed: 2, 3, 5, 13, 89, and 233.

*I always assume that science and math topics reach “peak pop-sci” when featured in a Dan Brown book. I send Mr. Brown emails every day about how cool low-Reynolds number fluid dynamics is, but he hasn’t taken the bait yet.

Anyways, figuring out the number of primes that are also Fibonacci numbers is a well-known problem; in order to come up with a new problem, we need to be a little bit more specific. Let’s think about the following list of related (and completely arbitrarily defined) numbers:

Start the list off by picking some prime number a. Pick the next number on the list by finding the a-th Fibonacci number. Then find the Fibonacci number corresponding to that number, put it on the list, and keep repeating this process forever.

That’s it! This is just a starting prime followed by a list of specific Fibonacci numbers. To get a more intuitive sense of it, I’ll call this list the “pointless sequence” T and start rattling off its first few numbers if I pick, say, a = 7:

T_{1} = 7

T_{2} = F_{7} = 13

T_{3} = F_{13} = 233

T_{4} = F_{233} = 2211236406303914545699412969744873993387956988653

Jeez, that got out of hand really quickly! It seems like our arbitrary list is pulling in big numbers even at the start. But that’s great for number theorists; the bigger the numbers involved, the more difficult it is to deal with them, and the more challenging and “important” a problem is. One thing you may notice is that, if I pick 2 or 3 as my starting value, this sequence of numbers will just eventually start spouting out 1 forever. If I picked 5, it would just keep spouting out 5 forever, but if I pick any prime number bigger than that, I’ll start seeing the crazy blow-up we saw for a = 7.
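
If you want to reproduce those numbers yourself, here’s a rough Python sketch of the pointless sequence. It leans on sympy’s isprime for the primality checks (trial division gives up long before a 49-digit number does, so I’m assuming you have sympy handy), and it stops after four terms because the indices blow up far too fast to go any further:

```python
from sympy import isprime  # assumes sympy is installed; any serious primality test will do

def fib(n):
    # The n-th Fibonacci number, with F_1 = F_2 = 1, computed iteratively.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def pointless_sequence(a, terms):
    # T_1 = a, and every later term is the Fibonacci number indexed by the previous term.
    sequence = [a]
    for _ in range(terms - 1):
        sequence.append(fib(sequence[-1]))
    return sequence

for t in pointless_sequence(7, 4):
    print(t, "prime!" if isprime(t) else "not prime")
```

Swapping the 7 for a 2 or a 3 shows the sequence collapsing to 1, and a 5 just keeps printing 5, exactly as described above.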

Another thing you may notice is that those first three numbers on our list for a = 7 are prime! (The fourth one unfortunately isn’t.) We can then ask pointless questions about this list of numbers and hope we hit on a tough one, like whether there exists an a other than 5 such that every number on the list is prime. Because mathematicians don’t like problems in the form of questions, we can guess that the answer is no and reduce our problem to answering whether or not the following pointless conjecture is true; I’ll even put it into videogame format to make it pop a bit more.

[Image: Pointless Conjecture]

Now that we have Steps 1 and 2 out of the way, let’s proceed to Step 3 and check if this problem is easy or equivalent to another problem, particularly the “important” problem of whether or not there are an infinite number of Fibonacci primes. If there were only a finite number of Fibonacci primes, then, since the numbers on our list keep growing, the list would eventually blow past the largest Fibonacci prime, hit a non-prime, and our problem would be solved. (Lucky for us it’s unsolved!) However, if the number of Fibonacci primes were infinite, it wouldn’t tell us anything about whether or not our list of specific numbers would eventually have a non-prime, which means our problem isn’t equivalent. Score!

So we know verifying the conjecture isn’t equivalent to this other problem, and we know that showing it’s false isn’t easy (because finding an a that keeps churning out primes forever would hand us infinitely many Fibonacci primes, solving that other nasty problem). However, we need to figure out if showing that it’s true is easy, and that isn’t something we can check straightforwardly; so we’ll just have to drop it onto a math forum like Stack Exchange and see if anyone berates us for wasting their time on an easy problem.

Once we’ve completed all three steps, we have to go through the hard process of actually trying to solve the problem; and for that, there are no steps or rules other than staying dedicated, being creative, and enjoying it every day. In my case, I think it’ll probably be best if I just stick to calling baristas Greg.

On the Benefits of Being a Dumb Tourist

I’ve stayed at a fair share of different places over the last few years, and using public transportation takes the cake for being the most stressful and annoying day-to-day experience in every place I’ve been to. From riding 5-and-a-half hours every week in a packed Chevy Astro through hot Puerto Rican highways to starting my workweek at Berkeley with the fresh sight and smell of body parts, I’ve never had a positive relationship with public transportation (and don’t expect that to change anytime soon). However, for someone who can’t afford to buy a car—and who is universally described as driving “like a grandmother politely trying to get to the hospital while having a heart attack”—it is a regrettably indispensable part of my life.

As a result, I’ve had to spend a considerable amount of time thinking about how to maneuver the crowded Roman trains and smelly New York buses, and have stumbled onto some weird tricks that might be of use for both tourists and daily commuters. For this post specifically, my intent is to show you the “paradox” that, when trying to get on a packed metro train, being a dumb tourist is better than being a smart one; and I’m going to do it by using something just as annoying, stress-inducing and indispensable as public transportation. Statistics.*

*Cue Inception horns and random screams.

If mathematics were a family, probability & statistics would be the bizarre great-uncle who won’t stop talking at the dinner table about how taxidermy is a spiritually fulfilling hobby. It is a field of study that is simultaneously too trivial for “real” mathematicians (they’re too busy writing proofs no one understands) and so strange that one of the best mathematicians of all time didn’t believe a simple statistics result until someone showed him a computer simulation proving it. Before we delve into any commuting weirdness, I’ll start by giving you a small primer on the basics of this strange field.

Perhaps the two most important pieces of information in the statistical sciences are the long-term average and the single likeliest outcome. The names are pretty straightforward, but just in case, I’ll explain them with a six-sided die.

  1. The single likeliest outcome is just that. For one six-sided die, there isn’t any single likeliest outcome because you have an equal chance of getting any number between 1 and 6 (unless you’ve been loading your dice, you cheater). It’s easy to spot in an outcome graph, because it’s the outcome that happens the most.
  2. The long-term average is a little more detailed, but not very: it’s the average of your results after you obtain a very large number of them! For a single six-sided die, that number is 3.5. You can’t spot this one in an outcome graph, but you can deduce/guess it if the shape is simple. (A quick simulation of both quantities is sketched right below.)
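
To make those two definitions concrete, here’s a tiny Python sketch that rolls a virtual (and fair) six-sided die a million times; the variable names are mine, but the idea is exactly the one above:

```python
import random
from collections import Counter

rolls = [random.randint(1, 6) for _ in range(1_000_000)]  # a million fair die rolls

counts = Counter(rolls)                # how often each face came up
average = sum(rolls) / len(rolls)      # the long-term average of the results

print("Counts per face:", dict(sorted(counts.items())))
print("Long-term average:", round(average, 3))  # hovers right around 3.5
```

Every face comes up roughly the same number of times (so there’s no single likeliest outcome), and the average lands right around 3.5.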

Now that we’ve got our statistics bases covered, allow me to illustrate the promised “paradox” through my experience living in the Bay Area. Trying to get on a BART train (the Bay Area’s metro system) during the busy hours was mostly a game of chance; you had to hope you’d picked a waiting spot close to where the train door lands, or you were looking at a 15-minute wait for the next one to roll in.

However, let’s say you knew that the train door always pops up within the same 100-foot strip of train station, but you don’t know exactly where. Assuming there’s an equal chance of it showing up anywhere in the strip, the instinctively smart thing to do would be to always wait smack-dab in the center of it; that’s the position that puts you closest to the train door in the worst case, and it certainly feels like it’s your best bet.

[Image: Train 3]

In this scenario, you might claim you’re making the smartest choice, so let’s call this the smart tourist scenario. Now, instead of using some fancy math theorems to tell you what the most likely distance and long-term average distance are in this case, I’m going to be 100% thorough and actually simulate it! Let’s take a look at what being a smart tourist comes out to when you simulate the train arriving a million times:

[Image: Train 1]

There are two things to take away from this graph. First, since the graph indicates that the train stopped everywhere about the same number of times, there’s no single likeliest outcome. It’s equally likely for the train door to land right in front of you as it is for it to wind up 50 feet away! Second, if you used the train over and over, your average distance from the train door would be 25 feet (which you could calculate by finding the average of all the distance outcomes). Nothing unexpected here.

Now we’re going to go into “paradox” territory. Let’s say you take a page from your weird great-uncle’s book and, instead of carefully planning things out, you just decide to randomly pick a spot inside of the 100-foot strip to wait in.

[Image: Train 4]

In this case, you’re not making any decision at all about what’s best or not; you’re just randomly waiting somewhere. Let’s call this the dumb tourist scenario, and here’s what that looks like when you pick random spots a million times:

[Image: Train 2]

Look at that; the train stopped more times in places that were closer to you! The simulations don’t lie: the likeliest outcome now is that the train stops right in front of you, and the average distance between you and the train will be about 33 feet.
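
If you want to rerun the experiment yourself, here’s a rough Python sketch of it; the million arrivals come from the text above, while the 10-foot buckets and the variable names are my own arbitrary choices:

```python
import random
from collections import Counter

STRIP = 100.0       # the 100-foot strip where the train door can show up
TRIALS = 1_000_000  # number of simulated train arrivals
BIN = 10            # lump distances into 10-foot buckets for a crude histogram

def simulate(pick_spot):
    # Distance between you and the door over many arrivals, given how you pick your waiting spot.
    return [abs(random.uniform(0, STRIP) - pick_spot()) for _ in range(TRIALS)]

scenarios = {
    "smart tourist (waits in the middle)": lambda: STRIP / 2,
    "dumb tourist (waits anywhere)": lambda: random.uniform(0, STRIP),
}

for name, pick_spot in scenarios.items():
    distances = simulate(pick_spot)
    histogram = Counter(int(d // BIN) * BIN for d in distances)
    print(name)
    print("  average distance:", round(sum(distances) / TRIALS, 1), "feet")
    for start in sorted(histogram):
        print(f"  {start:>3}-{start + BIN} ft: {histogram[start]} arrivals")
```

For the smart tourist every bucket up to 50 feet gets hit about equally often, while for the dumb tourist the nearest buckets collect the most arrivals; the averages land around 25 and 33 feet, just like the plots above.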

Comparing both scenarios, there’s nothing weird going on if you commute all the time; the long-term average distance is bigger when you randomly pick waiting spots around the train station (33 ft) than when you wait in the middle (25 ft), so doing the smart thing is still your best bet in that case. But when you’re a tourist and only plan on riding the train once or twice, the single likeliest outcome is the thing to pay attention to, and it implies that it’s better to randomly pick a spot to wait in than to pick the best logical spot!

This is profoundly counter-intuitive on many levels; how can a “dumb” action turn out to be better than a “smart” one? How can my random action cause the train to usually arrive closer to me? How can I understand this result intuitively? Well, I could try to calm you down by pointing out that being a dumb tourist has two negatives, which are that 1) your long-term average distance is larger and 2) you have a nontrivial chance of having the train show up more than 50 feet away from you, which is impossible for the smart tourist. If you’re like me, though, you are probably still very puzzled.

For that, you can take solace in the fact that the smartest man who ever lived once said that “in mathematics you don’t understand things, you just get used to them”, and my advice is the following: get used to it. This is by no means the only “paradox” in the statistical sciences, as a great many others are known to exist, and they’ve puzzled everyone just as much as this little factoid does. The best thing you can do is to learn about them and why they happen so that you don’t get surprised by them (or more importantly, don’t make wrong assumptions because of them). And who knows! With time you may find some new ones yourself, if you decide to formally study statistics—or if you commute enough.

On Writing Nonsense and Getting Away With It

Roald Dahl was a master of the written word, and this was perhaps most exemplified by his ability to use nonsense words like “flushbunkled” and “frothbuggling” without making the reader question whether or not they are viewing the product of an elderly Welshman having a mild stroke on his typewriter. Regardless of how silly they sound to us, such nonsense words have a rich history in the English language; in fact, they form a large part of it! Back in the 16th century, Shakespeare is claimed to have invented over 1700 words that are sure to have made a few English eyes squint back in the day. Examples include fracted, propugnation, and fairly hilariously, elbow. (What the hell were they calling elbows before Shakespeare came along? Did the English just point to their elbows and go “I have some pain in my…um…well, you know what I mean”?)

In any case, this type of creativity is not limited to just dead English writers, as mathematicians have often attempted to transcend the boundaries of the established and the intuitive to make use of similar nonsensical concepts in their equations. In this entry, I’ll talk about the most commonly dreaded example of this; the often-frightsome complex/imaginary numbers.

[Image: Imaginary]

Whenever imaginary numbers were brought up in high school math class, I’m sure mostly everyone wanted to leap up dramatically from their desk and shout “Why the hell are we studying imaginary numbers? What is the purpose of this? Why don’t we learn things like how to balance a checkbook/do taxes/apply for a job?”

If I were a high-school math teacher*, my response would be “You’re right! You shouldn’t be studying imaginary numbers.” There really is no reason for a general high-school math course to cover them, and the discussion of this kind of thing is best left to STEM-track math courses for everyone your class liked to hoist from the school flagpole. I would be happy to leave you to your Balance a Checkbook 101 class, where you can revel in the fact that you are taking a class for something that is both mind-numbingly tedious and so conceptually simple that you learned all the skills to do it when you were 8 (except for the skill to realize that you don’t need a state educator to remind you how to add and subtract).

*I tragically don’t qualify to be a high-school math teacher, as all high-school math teachers are mandated by the state to have bushy mustaches, square-rim glasses, and an unironically ugly wool sweater. (Wool sweaters are expensive.)

The point of this entry, however, is not to tell you whether or not you should know about imaginary numbers; in fact, I’m not even going to try and explain what they are. My goal here is to try and explain why they’re useful to Melvins like me. And, like your high-school math teacher trying to explain why model trains are a fun and interesting hobby, I probably won’t do a good job of it.

The gist of it is that, like nonsense words, the importance of imaginary numbers lies not in what they are but rather what they do; how they interact with the rest of the normal parts of the medium, be it literature or mathematics. It doesn’t matter what the Gizzardgulper meant when he squawked “I is slopgroggled” in The BFG, it matters what this implies about the giant’s ability to speak the English language and the richness of context such a simple statement can provide to a book. In the same sense, imaginary numbers would just be some daydream an Italian guy had in the 16th century if they didn’t let us use the very simplest tools in math (adding, multiplying, etc.) to perform some interesting and useful tricks.

Take, for example, what happens when you multiply 2 by itself some number of times. (I’ve graphed the results below.) Nothing strange or unexpected here; feel free to check that 2*2*2 = 8 and 2*2*2*2*2 = 32.

[Image: Complex1]

Now let’s try to do the same thing for the imaginary/complex number 2^{i}. What 2^{i} actually is doesn’t matter; what matters is what the values of the multiplications are once I get rid of all the gunk that has i’s on it.

[Image: Complex2]

See that? The value is oscillating! We’d see the same thing if we tried multiplying something like 3^{i} or 4^{i}, except the frequency of the oscillations would change.
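
If you’d like to watch that oscillation happen without trusting my plots, here’s a tiny Python sketch; it just multiplies by 2^{i} over and over and throws away the i-gunk at each step (Python spells the imaginary unit j instead of i):

```python
base = 2 ** 1j  # Python's way of writing 2^i

value = 1 + 0j
for n in range(1, 21):
    value *= base                    # one more multiplication by 2^i
    print(n, round(value.real, 3))   # keep only the part with no i's in it
```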

As it turns out, the chief usefulness of imaginary numbers is that they make it very easy to describe things that oscillate*, and this is what makes them show up everywhere from electrical engineering (currents in wires tend to oscillate, hence why AC stands for alternating current) to quantum mechanics (the central object of QM has wave-y behavior). Imaginary numbers are not required to describe any of those phenomena, but trying to avoid them requires altering your math so much that you generate things almost as nonsensical as them anyways. Attempting not to use them because they don’t “feel” right is just as gauche and annoying as it would be for you to keep calling your elbow “the thing that connects your upper arm thingy to your lower arm thingy”.

*Clever mathematicians have found other surprising uses for complex numbers, such as finding the areas under curves that are mathematically difficult to deal with, but these applications are more esoteric than anything.

Imaginary numbers are certainly not the only nonsensical objects that mathematicians have come up with; they stand in company with a bounty of strange concepts that have been invented over the years, like numbers that represent the size of infinities or numbers that aren’t really numbers at all. And the fact that you can’t conjure up an image of 2^{i} apples shouldn’t lead you to think these ideas are somehow different from the numbers you’re familiar with! After all, Shakespeare also invented words like moonbeam, submerge and obscene. These words would’ve sounded just as strange to 16th century Englishmen as fracted and propugnation; the only reason we find them normal is because we’ve always used them. If we did the same for complex numbers, it might be possible for us to easily imagine balancing 2^{i} apples on those jointed pointy things at the end of our hands.

On Becoming Big Brother (or Why Can’t 2+2=5?)

Orwell famously stated in 1984 that, since the external world exists only through our mental perception of it and the mind itself is controllable, a perfect totalitarian regime would be able to fully direct and redefine our perception of reality. His grand example of this is that his nightmarish Party could state that 2 + 2 = 5 and that everyone would believe it; but would they? And more importantly, would the Party want to make such a statement? In this entry, I’ll show you why you should be careful about which mathematical statement you pick a fight with, and how bending your proletariat’s perception of reality is a bit harder than changing the answer of a sum.

Let’s say that in an Orwellian thought-universe, where addition is represented by a +' symbol instead of +, two plus two is indeed five. Let’s also try to say that addition between all other numbers remains the same as in our universe, so 2+'1=3, for example. How this could affect the consistency of their results can be checked fairly quickly by observing what happens if I add three numbers. (The parentheses indicate which two numbers get added first.)

(2+'2)+'1 = 5+'1 = \mathbf{6}

2+'(2+'1) = 2+'3 = \mathbf{5}

Looks like we’ve run into a very big problem; the order in which we add the numbers affects the result! This is like saying that the amount of money you use to pay for something depends on the order in which you give your bills to the cashier, or that the length of a fence on Mr. Pig’s animal farm depends on which side he measures first. The issue is not that the answer is “wrong”, since the Party defines what is “right” and “wrong”; it’s that there is no definitive answer. Clearly, this system of addition doesn’t seem very useful.

[Image: Pig 1]

The only apparent way to save Orwellian addition is by tacking on an extra 1 to every sum. That is to say, x+'y = y+'x = x+y+1 for all numbers x and y; that removes the ordering dependence, and Mr. Pig can measure his fence in an (apparently) consistent way. Have we managed to save the concept of 2 + 2 = 5? Well, not quite.

Problems arise again when we attempt to define multiplication. Defining multiplication in our own universe is intuitive: 3\times2, for example, is simply adding 2 three times, 2+2+2. In the Orwellian universe, we can say the same concept applies: 3\times' 2 = 2 +' 2 +' 2. Naturally, the answers differ between universes, as 3\times2 = 2+2+2 = 6 while 3\times'2=2+'2+'2=8 because of the extra ones from the Orwellian addition. Regardless of the difference, so far so good; we don’t need the answer to be correct, we just need a consistent answer to exist.

The problem is that Orwellian multiplication invariably runs into the same problem we saw above. For example:

3\times'2 = 2 +' 2 +' 2 = 2 + 2 + 1 + 2 + 1 = \mathbf{8}

2\times'3 = 3 +' 3 = 3 + 3 + 1 = \mathbf{7}

Like in our primitive guess at Orwellian addition, the order in which we multiply numbers in this universe affects our result! Mr. Pig might have been able to get a consistent measurement of the perimeter of his farm, but he won’t be able to get a consistent measurement of its area. And weaseling our way out of this one like we did before with addition is not an option*, since abandoning this definition of multiplication means abandoning a central concept behind it (that multiplication is just repeated addition no matter what universe we’re in).

[Image: Pig 2]

*One can formally demonstrate that it is completely impossible for Orwellian multiplication to be distributive no matter what universal concept of multiplication you abandon, but that is far too complicated for this post.
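
Before we draw the moral of the story, here’s a little Python sketch of the arithmetic above for anyone who wants to play dictator at home; the function names are mine, but the definitions are exactly the ones we just tried:

```python
def naive_add(x, y):
    # First attempt: 2 +' 2 = 5, every other sum works as usual.
    return 5 if (x, y) == (2, 2) else x + y

def orwell_add(x, y):
    # The patched-up attempt: x +' y = x + y + 1, which still makes 2 +' 2 = 5.
    return x + y + 1

def orwell_mul(x, y):
    # Orwellian multiplication as repeated Orwellian addition:
    # x *' y means y +' y +' ... with y appearing x times.
    total = y
    for _ in range(x - 1):
        total = orwell_add(total, y)
    return total

# The naive rule depends on how you group the sums:
print(naive_add(naive_add(2, 2), 1), "vs", naive_add(2, naive_add(2, 1)))      # 6 vs 5
# The patched rule doesn't care about grouping anymore...
print(orwell_add(orwell_add(2, 2), 1), "vs", orwell_add(2, orwell_add(2, 1)))  # 7 vs 7
# ...but multiplication now depends on the order of the factors:
print(orwell_mul(3, 2), "vs", orwell_mul(2, 3))                                # 8 vs 7
```

The first line reproduces the 6-versus-5 mismatch from earlier, the second shows the patched addition giving one consistent answer, and the third is Mr. Pig’s 8-versus-7 headache.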

In short, 2+'2=5 is not just incorrect, it is anarchic. It’s one thing for a totalitarian government to tell you that two plus two is always five instead of four, but it’s another thing entirely to tell you that five times three times two is one thing and another thing at the same time but also some other thing too. Faced with this lack of absolutism, the citizens of this Party would eventually begin to form factions based on the result they believe is correct, and generate internal conflicts that would escalate until the Party collapses. In this Orwellian thoughtscape, the government has not gained a stranglehold over your perception of the world by stating that 2+'2=5; it has completely let go of it. It just goes to show you have to be very careful with what aspects of reality you try to bend to your authoritarian will!

All of this may seem like some frivolous flight of fancy, but this type of analysis is commonly performed in a field of mathematics called abstract algebra. This section of math is dedicated to studying collections of objects, usually numbers, and general actions you can do to them. In our case, we attempted to justify 2+'2=5 by defining new “Orwellian” operations similar to traditional addition and multiplication and, by doing so, saw that it failed to hold certain properties which are essential for its use as a reliable mathematical base.

This sort of rigmarole is much more in tune with what mathematicians really do in comparison to the kind of mathematics most people see in their classes, a.k.a. solving for x. It’s a crying shame that, because of this, a perception of mathematics as something stale and trite is nearly universal among the general public. If you’d like to attempt a problem that looks like something a proper mathematician might do, try and see if you can find some other form of addition and multiplication that is both consistent and allows two plus two to equal five! Perhaps you can be a better mathematician/dictator than I can.

On Gambling Your Savings Away

Everyone who knows me knows I am a betting man. I have an almost comical obsession with putting money down on everything, from the mundane to the ridiculous; I once bet a friend 10 bucks a Pulitzer Prize-winning author would get my name wrong in a signed dedication. (I won.)

[Image: Dedication]

I have, however, avoided casinos throughout my life like the plague. The windowless rooms, purified oxygen, and neutral expressions of fellow gamblers have led me to believe that casinos are some sort of terrestrial purgatory where you slowly but surely rid yourself of your sins (read: money). In this entry, I’ll try to convey just why I resist the allure of these glitzy gambling institutions and explain how the flow of heat from hot to cold is connected to the flow of money from your wallet to the craps dealer.

Gamblin’ Heat

Say you go to a casino and play a simplified version of roulette, where you can bet on either red or black, with both colors having an equal chance of coming up. (You could imagine betting heads or tails on a coin flip; it’s functionally the same thing.) Since this is a casino that’s interested in taking your money, let’s say that they give you slightly less than double what you bet when you win. In addition, I’m going to assume for simplicity that you have terrible taste in bets and gamble on red all the time. In this system, I can easily show you all the possible gambling outcomes if you just gamble twice.

[Image: Bet 1]

Simple enough; note that there are two different ways in which you can win one bet, and a single way to either lose or win all your bets. Here are all the possible outcomes for a 4-bet gambling run:

[Image: Bet 2]

Now there are six different ways for you to win half of your total bets, while there’s still just one way for you to lose or win all your bets. For a gambler like me, a useful thing to do is to observe the number of outcomes for a given number of successful bets, as that tells me the relative likelihood of winning some number of bets (and that’s all I really care about). As this quantity appears to be so important, I’m going to plot it below and keep plotting it while we go to longer gambling runs.

[Image: BetPlot4]

Since the number of outcomes is too large to list individually for bigger betting runs, let’s see how our outcomes vs. betting wins plot evolves when we analyze runs from 5 bets to 150:

[Animation: outcomes vs. betting wins for runs of 5 to 150 bets]

The number of ways in which you can win half of your bets in a 150-bet run is ridiculously huge! In fact, I’ll type the number out just to scare you: 92826069736708789698985814872605121940117520. But the thing I want you to focus on is the fact that the graphs are getting both taller and narrower as we increase the number of bets; this means it’s becoming more and more likely for me to win a certain proportion of my bets (half, in this case) and less likely for me to win any other proportion. This tendency is important to spot because every casino game will behave like this simple version of roulette when the number of bets is very large. In fact, the tendency is such that these plots will eventually become infinitely narrow relative to the length of the betting run, leading to the following general statement for any kind of casino with any number of different games:

For a sufficiently long betting run, a gambler will always win an essentially fixed proportion of his bets.

I say essentially here because you’ll never win exactly that proportion of your bets; the number of wins still wobbles around it, but the wobble becomes negligible compared to the length of the betting run very quickly. (If you had enough money to make a trillion roulette bets, would you care that you won 500000000001 times instead of 500000000000?) It is also not impossible for you to win every single bet you ever make, of course; it is just phenomenally improbable.
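
If you’d like to check that claim without trusting my plots, here’s a short Python sketch that does the counting exactly instead of simulating; the “within 5% of half” window is just my own arbitrary cutoff:

```python
from math import comb

def chance_near_half(bets, window=0.05):
    # Probability of winning within `window` of half your bets, for fair 50/50 bets.
    low = int(bets * (0.5 - window))
    high = int(bets * (0.5 + window))
    favorable = sum(comb(bets, wins) for wins in range(low, high + 1))
    return favorable / 2 ** bets

print("Ways to win exactly half of 150 bets:", comb(150, 75))
for bets in (10, 50, 150, 1000, 10000):
    print(bets, "bets: chance of winning within 5% of half =", round(chance_near_half(bets), 4))
```

That probability creeps toward 1 as the runs get longer, which is the boldfaced statement above written out in numbers.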

Since a casino will always manipulate payouts to ensure that winning that magical proportion of bets gives you a net loss, what this is effectively saying is that you will always lose money if you gamble long enough. And since one gambler betting a large number of times is the same as a bunch of people betting a moderate number of times, a busy casino will always make money. All an Atlantic City hotshot needs is to get morons to stay in their big ritzy oxygen chamber and cash will just come pouring out! Note that there’s absolutely nothing stopping you from walking in, winning every single bet you make, and walking away with a fortune; a sufficiently busy casino knows there’s some other poor schmuck somewhere in its glamorous bowels losing more money than you just won. And again, it’s not impossible for everyone to suddenly get a lucky streak and break the casino’s bank; it is just so fantastically unlikely that it is more probable for a plane to crash on your casino every year than for the casino to have to deal with a group of 10 people winning 15 consecutive bets at the same time.

Old-Fashioned Heat

Moving on to the science-y part of this entry, the statement I made in bold above is strongly linked to the laws of thermodynamics, which, like that statement, are actually just very strong statistical tendencies. In some stable gas, kinetic energy is constantly shuffled around among all its particles, as if every particle were simultaneously gambler & casino. However, if you try to measure the kinetic energy of some large number of these particles, it becomes more and more likely that you’ll measure a certain total energy for a given number of particles; just like it becomes more and more likely to win a certain proportion of bets (half) as you increase your total bets. Take a gander below if you don’t believe me!

[Animation: BetGif2Redone]

Another way to look at this is by saying that the ratio of total measured energy to particle number becomes effectively fixed as the number of particles you measure becomes very large. This quantity, after scaling by some constants, is what we call temperature. If we looked instead at the ratio of total measured energy to the volume those particles occupy, we would get (again after scaling by some constants) the thermodynamic definition of pressure.

If the number of measured particles is very small, these notions of temperature and pressure would not make any sense, as these quantities would fluctuate wildly between measurements. Correspondingly, we would not be able to make any predictions based on these quantities, and thermodynamics as a field would cease to exist! Luckily, every chunk of matter at our scale contains an enormous number of particles (a liter of water contains about 3.346*10^{25} molecules of H2O), so it is still much more likely for a plane to crash on you than for you to read a fever on your thermometer when you’re actually fine.
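
Here’s a rough Python sketch of that idea; the exponential distribution I use for the particle energies is purely a stand-in of my own choosing, but the shrinking fluctuations are the point:

```python
import random
from statistics import mean, stdev

def average_energy_per_particle(particles, measurements=100):
    # Measure the average kinetic energy per particle `measurements` times,
    # drawing each particle's energy from an exponential distribution (a stand-in).
    return [mean(random.expovariate(1.0) for _ in range(particles))
            for _ in range(measurements)]

for n in (10, 1_000, 100_000):
    samples = average_energy_per_particle(n)
    print(f"{n} particles: energy per particle = {mean(samples):.3f} +/- {stdev(samples):.3f}")
```

The wobble in the measured “temperature” shrinks as the particle count grows, which is exactly what lets the concept mean anything at all.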

Going back to the gas example, say I now heated some small section of it for a while; for a gas with a decent number of particles, it would be very unlikely for the heated particles to remain in the same region and/or avoid non-heated particles wandering close to their turf. In short, there are many more outcomes in which that extra kinetic energy gets distributed to the rest of the gas than outcomes in which that energy stays with the original gang in the same area. This means that the second law of thermodynamics, the fact that heat flows from hot to cold, is not a fixed law of nature; it is just an overwhelmingly likely tendency.

I’ll finish off with a little addendum: notice how quickly those numbers got big for our outcomes vs. wins plots in the roulette example. In fact, my computer couldn’t even handle doing the calculations for a betting run of 200! In order to size these numbers down in a practical way, scientists and mathematicians take something called the logarithm of the number of outcomes for a given condition (number of bets won in the roulette example, energy in thermodynamics) and base all their calculations and theorems on that. This quantity, which behaves qualitatively just like the number of outcomes for a given condition, is called entropy; and that is why you sometimes hear the second law quoted as “entropy tends to a maximum”.
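
As a final sketch, here’s how the logarithm tames those monstrous counts in practice; Python’s lgamma gives the logarithm of a factorial directly, so we never have to write the huge numbers down at all:

```python
from math import lgamma, log

def log_outcomes_half(bets):
    # Natural log of C(bets, bets/2), the number of ways to win exactly half your bets.
    # lgamma(n + 1) is the log of n factorial.
    half = bets // 2
    return lgamma(bets + 1) - 2 * lgamma(half + 1)

for bets in (150, 200, 10**6, 10**23):  # the last one is vaguely thermodynamic in size
    print(f"{bets} bets: log of the outcome count = {log_outcomes_half(bets):.6g}, "
          f"compared to bets*ln(2) = {bets * log(2):.6g}")
```

Even for a run of bets that would embarrass my computer, the logarithm stays a perfectly tame number; that’s entropy doing its bookkeeping job.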