Thursday, October 26, 2017

October 27, 2017 at 12:56AM

Today I Learned: 1) In 2004, Equatorial Guinea was ruled by a dictator named Teodoro Obiang Nguema Mbasogo. As dictators go, he ranks somewhere in the range from "brutal" to "godawful". I don't know a ton of details about Mbasogo's rule, but I *do* know that he a) got into power by killing his uncle (who, admittedly, was also pretty awful), b) is, officially, the country's god, with a direct permanent line of communication to the Almighty and the magical ability to kill without going to hell for it, and c) regularly comes up on lists of "worst African dictator". So... not a good guy, not a good government. Anyway, in 2004, a bunch of London financial types with a lot of money decided they'd had enough of Mbasogo... existing. So they hired an army of 64 mercenaries (mostly ex-South African), bought all the equipment they'd need to take out Mbasogo, and asked them to do just that. The... plot? Job? War? Whatever it was, it ended before it really started. There's direct and indirect evidence that the US, UK, and Spanish governments may have known about the planned attack, but it was the *Zimbabwean* government that stopped them, by detaining their plane while it was in a Zimbabwean airport. The soldiers were jailed, tortured, and at least one has died. A few of the financiers were fined and jailed for a few years; others have gotten away without a charge sticking.

2) ...how to take jacket measurements. Roughly. Better than I knew before, anyway.

3) Andy Halleran and I were curious about who invented Markov chain Monte Carlo (MCMC), the modern scientist's Favorite Algorithm™, so we looked it up. It looks like MCMC was first published in 1953 out of Los Alamos. The authors were Nicholas Metropolis*, Arianna Rosenbluth, Marshall Rosenbluth, Augusta Teller, and Edward Teller, but there are at least a couple of claims that Nicholas and Augusta didn't really do anything on the paper. The more general Monte Carlo class of algorithms seems to have been quietly invented by Enrico Fermi, but he didn't publish it and nobody heard about it. Later on, Stanislaw Ulam (of cellular automaton and thermonuclear bomb fame) and... *sigh*. John von Neumann. Of *course* it was John von Neumann. Anyway, they reinvented Monte Carlo methods while working on neutron penetration of radiation shielding. Monte Carlo methods turned out to be critical for simulations used to build the bomb and, later, just about everything.

* Of Metropolis-Hastings algorithm fame. Hastings wrote the *second* critical paper on MCMC, which generalized the first paper's strategy from one particularly tricky integral to functions-in-general.
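As an aside, the core trick from that 1953 paper is small enough to sketch in a few lines. Here's a bare-bones random-walk Metropolis sampler in Python -- my own illustrative toy, not the paper's original calculation (which was an equation-of-state problem), with all names and parameters made up for the example:

    import math
    import random

    def metropolis(log_f, x0, n_steps, step_size=1.0):
        """Random-walk Metropolis: sample from a density proportional to exp(log_f)."""
        x = x0
        samples = []
        for _ in range(n_steps):
            proposal = x + random.gauss(0.0, step_size)
            # Accept with probability min(1, f(proposal) / f(x)).
            log_ratio = log_f(proposal) - log_f(x)
            if log_ratio >= 0 or random.random() < math.exp(log_ratio):
                x = proposal
            samples.append(x)
        return samples

    # Example: draw from a standard normal using only its unnormalized log-density.
    draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=100_000)
    print(sum(draws) / len(draws))                 # mean should land near 0
    print(sum(d * d for d in draws) / len(draws))  # variance should land near 1

The punchline is that you never need the normalizing constant of the target density -- only ratios -- which is exactly why the method is so beloved for tricky integrals.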

Saturday, October 14, 2017

October 14, 2017 at 03:15AM

Today I learned: 1) I've been playing the confidence calibration game (http://ift.tt/1dzDceM) for a while now, and I like to think I've gotten decently good at it. I'm somewhat overconfident in some ranges, but my 60%, 80%, and 99% confidence estimations are pretty much spot-on. The *weird* thing I noticed today is that I am rather *overconfident* when I have 50% confidence. Specifically, when I'm given two possible answers and have NO IDEA which one is right, so I blindly guess, I am right only 34% of the time, with an N of 29 (10 correct, 19 incorrect). Is this significant?

The, uh, "obvious" statistical test to use here is the P-value test, which asks how likely it is that I would see data this "weird" if the null hypothesis (that is, that I actually guess with 50% accuracy) were correct. In other words, the P-value quantifies how surprising the data are under a null hypothesis. The lower the value, the more surprising it is. In this case, assuming a binomial distribution of answers, I get a two-tailed P-value of about 0.13. A little suspicious, but not very strong evidence.

What if we do this the Bayesian way? Hmm. Well, for this we turn to Bayes' Rule. If p (little "p", not big "P", which stands for "probability of") is the probability that I get a random guess right, and D is our observation (10 correct guesses out of 29), then we have

P(p|D) = P(D|p) * P(p) / P(D)

The probability of getting 10/29 guesses right when each guess has probability p of correctness is given by the binomial distribution, which I don't happen to know off the top of my head but the internet does. The probability of getting the data *at all* is the sum (integral) of all the probabilities of getting that data for every possible value of p (so, ∫Binom(D;x) evaluated from x=0 to x=1). The prior probability of p is the tricky bit, as usual. The typical thing to do here would be to assume we have no knowledge of p and give it a flat prior distribution, so it basically goes away (in fact, it does go away -- a uniform distribution on the range (0,1) is 1 everywhere, so it's just multiplying everything through by one). Plug in the binomial distribution helpfully provided by Wolfram Alpha and we have

P(p|D) = [(29c10) * p^10 * (1-p)^19] / [(29c10) * ∫[x^10 * (1-x)^19]dx]

where (29c10) means "29 choose 10" and that integral in the denominator is evaluated from x=0 to x=1. Conveniently, the (29c10) bits cancel, so we can take them out. The integral in the denominator is just a number, which happens to evaluate to about 1.7 * 10^-9. If we graph out what's left, we get this*: http://ift.tt/2yjhcJU

Take-home points from the distribution: a) The expected value of p is 35%, which makes sense; b) it's *very likely* that I make random guesses at less than 50% accuracy -- about 95% of that curve is below p = 0.5. So, do I guess at worse than random? Eh, I wouldn't count on it. When it comes down to it, I *don't* put a uniform distribution on the prior for how well I guess at random -- in fact, my prior for that is pretty spiky around p = 0.5. But hey, now I have *some* evidence to the contrary.

* One easy way to get this calculation wrong: write the denominator's integrand as x^10 * (1-x)^29 instead of x^10 * (1-x)^19, in which case the "posterior" doesn't integrate to 1 like a good probability distribution should. Ask me how I know.
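For anyone who wants to double-check those numbers at home, here's a quick sanity check in Python. This is just a sketch assuming a reasonably recent scipy (binomtest is the current spelling of its exact binomial test); the variable names are mine:

    # Sanity-checking the calibration numbers above.
    from scipy import integrate, stats

    k, n = 10, 29  # 10 correct blind guesses out of 29

    # Frequentist check: two-tailed binomial test against the null of p = 0.5.
    print(stats.binomtest(k, n, p=0.5, alternative="two-sided").pvalue)  # ~0.136

    # Bayesian check: with a flat prior, the posterior is proportional to
    # p^10 * (1-p)^19, and the denominator's integral normalizes it.
    norm, _ = integrate.quad(lambda x: x**k * (1 - x)**(n - k), 0, 1)
    print(norm)  # ~1.66e-9 -- the "about 1.7 * 10^-9" above

    # The normalized posterior is exactly the Beta(11, 20) distribution.
    posterior = stats.beta(k + 1, n - k + 1)
    print(posterior.mean())    # ~0.355 -- expected value of p, about 35%
    print(posterior.cdf(0.5))  # ~0.956 -- about 95% of the curve below p = 0.5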
2) I *know* I knew this one before, but I forgot, and now I learned it again! Did you know the United States has had an Emperor? At least, he thought he was the Emperor. And he got coins minted after him. Anyway, Emperor Joshua Norton was born an Englishman, moved to San Francisco, lost all his money, went a little nuts, and declared himself Emperor. He was grandiose, for sure, but apparently charming and harmless enough that the locals humored him. Coins, as I said, were minted in his name, and his presence was generally honored and applauded, as were his proclamations. He was beloved enough that 30,000 San Franciscans attended his funeral, even though he owned virtually nothing but a few uniforms, hats, walking sticks, fake letters and bonds, and a saber. Another thing I didn't know about Emperor Norton the first time around -- he was actually arrested at one point by a policeman who tried to have him institutionalized. Public outcry was swift, and the police chief ordered him freed, on the grounds "that he had shed no blood; robbed no one; and despoiled no country; which is more than can be said of his fellows in that line."

3) Everyone knows that college tuition is rising rapidly. Did you know that college *spending* is not? Spending per student (though not, I think, spending per degree granted -- it's not quite the same) has been pretty flat over the last decade (I don't think there's good data from before that).

Thursday, October 12, 2017

October 12, 2017 at 03:33AM

Today I Learned: 1) It turns out that the whole medieval system of economics, and particularly the thing where YOU DO WHAT YOUR FATHER DID, NO MATTER WHAT, was largely the result of one Roman emperor. That emperor was Diocletian, and he may have been the worst thing that ever happened to the free market in Europe. Diocletian ruled at the end of the 3rd century, a time when the Roman economy was in a rough spot. Centuries of coin debasement (replacement of valuable metals in coins with mundane metals) and inflation had completely devalued the currency, and with it the ability of the Roman government to collect taxes and pay its servants (by which I mean its soldiers). Diocletian sought to fix this problem, and accordingly made a huge set of sweeping changes to the Roman economy. One thing that apparently worried Diocletian pretty badly was the idea that people might (*gasp*) leave an industry(!). After all, pig farming isn't very pleasant. What if all the pig farmers decided to up and move? Or, worse, *change jobs*? Where would the army get its bacon? That wouldn't be acceptable -- the army calculated its consumption of goods and services very carefully, and (partly because of the way Diocletian restructured the Roman tax system) any major supply changes might seriously damage Rome's ability to defend itself. Diocletian's solution was to simply fix everyone's jobs. Diocletian's government quietly, slowly took over all of the guilds of Rome, which had previously been voluntary unions of professionals. Then he removed the voluntary bit of the guilds. Then he mandated, by law, that you couldn't leave a guild, and that membership would be hereditary. Bam. Medieval serfdom achieved. Thanks, Diocletian. Thanks.

2) Zebrafish stripes aren't fixed patterns -- they're dynamic, moving (if slow) waves. If you laser-ablate a section of stripes, the nearby stripes will move to fill the gap.

3) One of Alan Turing's many delightful insights was that of the reaction-diffusion network. A reaction-diffusion network is a simple mathematical model of chemicals that can a) react with each other and b) diffuse around in space. Thus, reaction-diffusion. Anyway, it's a kind of neat simple descriptor of chemistry-in-space, but the *really* cool thing is that many Turing-style reaction-diffusion networks look an awful lot like the patterns of stripes, spots, and shapes found on animals. Just about every animal's skin/coat/shell pattern, from jaguars to giraffes to snails and fish, can be described by a Turing pattern. This is awesome and all, and it suggests a mechanism by which animals get their patterns... but I do have a bit of a fear that we're overfitting. After all, maybe reaction-diffusion networks can just make *any* pattern, including those of an animal. If that's true, then it's not tremendously likely that we've discovered the mechanism of animal pattern formation, any more than discovering how to render pictures with a computer tells us why the stars are arranged the way they are. Today I learned that there *are* some restrictions on what Turing patterns can do! Example: a Turing pattern on a tapering cylinder (say, a tail) can form spots near the base of the tail and stripes at the tip, but it *cannot* do the opposite. That is to say, if cat coat patterns are formed by reaction-diffusion networks, then there can be spotted cats with striped tail-tips, but not striped cats with spotted tail-tips.
Indeed, there are plenty of examples of spotted cats with striped tails (see cheetahs, ocelots, and to a lesser extent jaguars), but to my knowledge there aren't striped cats with spotted tails (extra kudos to anyone who proves me wrong!). So, that's nice. For details on the cat tail thing and others, see "How the Leopard Gets its Spots" by James Murray. (http://ift.tt/2xAbM94)
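If you want to watch a Turing pattern grow for yourself, a reaction-diffusion simulation is surprisingly little code. Here's a minimal sketch in Python using the Gray-Scott model -- a standard two-chemical Turing-style system, chosen for brevity; it's an illustration of the general idea, not the specific kinetics in Murray's paper, and the parameters are just one commonly used spot-forming regime:

    # Minimal Gray-Scott reaction-diffusion sketch (assumes numpy).
    # Two chemicals, U and V, react (U + 2V -> 3V) and diffuse on a grid;
    # U is fed in at rate `feed` and V is removed at rate `feed + kill`.
    import numpy as np

    n, steps = 128, 10000
    Du, Dv = 0.16, 0.08        # diffusion rates (U spreads faster than V)
    feed, kill = 0.035, 0.065  # one commonly used spot-forming regime

    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Seed a small square of V in the middle to kick off pattern formation.
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

    def laplacian(Z):
        # Five-point stencil with periodic (wraparound) boundaries.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        reaction = U * V * V   # rate of U + 2V -> 3V
        U += Du * laplacian(U) - reaction + feed * (1 - U)
        V += Dv * laplacian(V) + reaction - (feed + kill) * V

    # V now holds a Turing-style pattern; matplotlib's plt.imshow(V) will show it.

The key ingredient, as in Turing's original analysis, is that the inhibiting chemical diffuses faster than the activating one -- tweak the feed and kill rates and you can slide between spots, stripes, and blank coats.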