Friday, December 29, 2017

December 29, 2017 at 11:22PM

This is my public key: -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFpG0EYBEADXKSES2oHszGPNUWHsbUeFHgUkkqICUQ6d7gnTNjVbR4Ytffy5 0c71tMRRxlMwCwmtTTQ0adLvNbj5u5FoNSqxUz8AOdtLVAGZMQ+REVv1luA4UjbH JFxOtHss/BMTZAS+pTTXO+TGMEF6hhSTlFl7+1Ier/MRCSiPHRizvJjyYKh9N5CP 8tkW3gPjHYJY8SKowYJFN2r3mJOhwr87RKI/jxTOc88Hqhxe2FXtLJf9eUHR2kDJ 0GEBPKIBYEHe8mE7u26S5zcbFfQlD2mXAUVDhpd2qI90OSzODxeoSZpA9yYPreny 24v3uRG1sZil8j14/LpYfDa8KwSK+Tcdpv69Q80pSN0O48sHd0G+0tE7CUE7sLoX IvO/wn5tNHfxGfdm4f6E3wOgfp8IHgS18fNzCamwEVcCHk089yLKSAHDV/Q3eOYS T5k24qGySBXFBSAfWx6xTcLz9elVxf6qEWEL0p8MSkBBhnPzLKlCI9KoqKuE2+VG ayRaqYPHN+skXxvUmjUZvwUxcEjmX005d7W6hsmHL2Qqx1wmmhlHRNZlvfr+9JmS K9f8VGUId2rZVXjwHK+DS1ZJds6/GVXDhYHdu18uUR70jqTDsEP1KNgsjuFPBvr3 NUsbaoXiDR64JFy+fm5OGmI2GH1gsyttffnhiASxM+jUFG0PmxrD265wGQARAQAB tCNTYW11ZWwgQ2xhbW9ucyA8c2NsYW1vbnNAZ21haWwuY29tPokCVAQTAQgAPhYh BDmEaAHIgYJ3zzDx0lxG0W9jFmuABQJaRtBGAhsDBQkB4oUABQsJCAcCBhUICQoL AgQWAgMBAh4BAheAAAoJEFxG0W9jFmuAj4UP/0ZVbanmH0YCb+YRpKm1qDi7nDOl zjPE0iN/Vx6gD2taNKD9FjZiMA0C9sVx6bWFTa3DZ3ETAGXtHk71L0RaUF8kadgF yYjC4urQdpHk2nzPNcxUAtbOGjj7dLBfcMfMyxf4fFELV0r55stHGMGOO0gF4/9M 06/Hvmmanhs1drDW8QmWZoYva4BFytZYTaYIKD6hTeR3IKSolqM3U7FzHOd/IcHu SS8V76jsaxLHl/5Jbq7wllM8r+cuRRWQNT1df6yBP3gKhSwkkxCiyCsZEiag7Y5q dVK+jn35AT0l7y6t7N0H6v05pjq5Nf95aNibMevupmNepnd4FpwRY3qshOx833hd 1vCyWb1nlKlaeT84IuiljPyt3al9N9HN0W/NGa/1H8rcwoYYyWj57tdCZn0+pEWF at7D9WagPdc1TnoMRq8TDcvh7PHdMh4Kq0Mm+R/PdpasOI/ve4hWdn2D1uWbJocv 9NGlmWIS/wS6+b2PvLtEfM+NtiSDy68mYfR3nNEHy+aowe2lZCt48icFmzVsGaRV /Qsnp9VLH/wTwdg7fBGP7Tj9QFhF5VkfzYdmfFKw5IdZRmI+8qy5cbeUKibG9a6y 66w4brOuAC+Npm2pVwt8BStHLwzAk3jNzKVmhOZEqz6yUfMPyFOyaNSDIMsv6En0 DEF17Jr270dhlOnsuQINBFpG0EYBEAC8ubyCztyz+uYcwFcDUwA5nYoBl2viuheM ZBJ+M5XU6nsZR52QHY8G+AMaYsx49GKZeSBARsBFOxX6MEh8MQy1/fEycbg4jqVs /ilK21yVN/H1g0oP1e2iBwIOeEMnhHr24L3RfKG6TKk+hG13bAptCMi3cEQ2N5sK TSS1YbI6gj7U8+MI+YTaW2SKcJrrRJ7C+uLzNag6pS/NTiSQEIgS95jU9M9d0eGp egcEG0L2yr42Mj2NUpFDR4/G+3jE5T/bvD5D1drSM8xiBIWAFMqq3culpxVGOqTC 
xnuAkSPsYqZgKTH8NDMz9edLz3Tv3GYegOjM5N/3EYWZsCrr4/Rkj3RHUinWt4qg 8ZRgDgNm4Fqfk9FjnVmZ1mtklqrURKN8O/vLWlZUwS/rISWhohPK5itezPQ1btIQ mPOj1rfj9QL3SOKXqa3E59p4+vI+TAiqn4Bj4Kr6sOsKx00tlLq6C0J2DdgnPIHy Ll5KMV6+cyIQkMzrbD2ngd9Pa3cqpbWX6sqkRR/V5BHWl0Nq3vNOVPy7zDP4RRiS TA1aMR4b/pKKNgQeJhsgEQWCY8Xb8P2SQOHc4i6R6dkUR1Pd+UCejO9wFwXrRVY0 tINhlbQdUoG9vUUin/axF6h/uMOsCS13xr9fHQ1Tq5i/+c/0gCbdO6FUa9NHCHt6 /pGQNOnz2QARAQABiQI8BBgBCAAmFiEEOYRoAciBgnfPMPHSXEbRb2MWa4AFAlpG 0EYCGwwFCQHihQAACgkQXEbRb2MWa4BfJxAAupsheyjnobehdQiqv/JLuQXB2z8D 0lrkT04fJvor2gkbeke3VoVxIJFif2Vg48A3dgOeU4p3t0+g9JJHUTNQHOpTF7wZ mQgmf8jPQKq4cin6yOALmgeb14+wI6HkxTQnDQIdZcYQwjZDwUMqMCuYJJVwj7J7 djVWW5xiqreJoVFoMyS6CDLVtT8uyO/L7XqhvRl3C+xttRba+KQDK+mAPiIFvzmr Vj9xe0TfKtyknJjtx2gYCAl2RYD68MoS1jtEkKKPnRor9NV/EJY4sBN2igCxZdfl zBZ6yZihvkZAjsljE/DrsDd36HfDc3nfLaRws3SeeY2fkpmcNkReMiEuBuIntRPP f1Dz+koqQAy2NWJKefppldtpm1wMe7AEIF0i8HnJsH0HggLO9gaUzfVPZXtcvGNG jgPR1WvZ3PnzHPfWD/bb1eon8DVB1t7w5Gan/VVUlz+nSggcJLC8nmb2qAlsHwLU ttSzLzYwygN6m7c64CovpxX8qgAiUuWNTSQVLQl5Uvr1S3DHhDedsjMsQTpocSKT CsOPYQuQq3N8l0oAWTGqugnLg/3mNXolIJ4RFiDV04Od2agMGdVlMP3x2vyI0q/V zFx3WFvReJ3FMJ6xNqmMH5B4wsf7Ku3rmSpIjmkslEH2V1fZTJUuS6Tmlk5guxKO kSlJvwNvHDIRW10= =bLJE -----END PGP PUBLIC KEY BLOCK-----

Monday, November 27, 2017

November 28, 2017 at 12:47AM

Today I learned: 1) Tardigrades (water bears) aren't a species -- they're a freaking *phylum* with over a thousand known species. 2) There are a lot of models of evolution out there. Today I learned a bit about three classes of evolutionary model -- additive, multiplicative, and stickbreaking. These different model classes have to do with the effects of multiple mutations on an organism's fitness. It's relatively easy to understand what a single mutation might do to fitness -- if it's good, then fitness goes up, and if it's bad, fitness goes down, and if it doesn't affect the organism's ability to reproduce at all, fitness stays the same. But what if you have *two* mutations with a fitness effect? How do you combine the effects? Do you add their individual fitnesses together? Do you multiply them together? Do you write out a table where every pair of possible mutations has a unique fitness with no particular pattern? In additive models, as you might guess, the fitness effects of multiple mutations are added together to get the total fitness effect of the mutations. That means that a mutation has a property like "this mutation increases fitness by 2". In multiplicative models, the fitness effects of multiple mutations are multiplied together to get the total fitness effect of the mutations. That means that a mutation has a property like "this mutation doubles fitness". The two models are actually quite similar -- they're identical to within an exponential transformation, so you can always do the math in whatever model you want and then transform the results to see what happens in the other model. Moreover, as fitness effects become smaller and smaller, the additive and multiplicative models become identical. For continuous evolution with extremely fine-grained fitness differences, they're the same. I'm not sure if the same is true for the stickbreaking model -- it's a *little* bit different.
In the stickbreaking model, each mutation moves you a fixed *fraction* of the distance from the current fitness to the maximum fitness, so each mutation has a property like "this mutation moves you halfway to the best possible fitness". This model has the somewhat different property that it makes fitness converge to a maximum, which may be a more realistic representation of physical constraints. What's the "best" model to use in an evolutionary simulation or analysis? That's still up for some debate. The Lenski long-term evolution experiment* has some relevant data -- as of a few years ago, there were several phenotypic traits for which fitness could be tracked as the population evolved. The majority of those fit the stickbreaking model best, but there were clear examples of additive and multiplicative processes as well. It seems that it depends very much on the mutations. * If you don't know about the Lenski experiment, I highly recommend looking it up. It's one of the most well-known and, I think, envied experiments in biology. 3) There's too much naval cargo shipping capacity in the world right now! According to FreightHub's 2017 report on global shipping capacity (http://ift.tt/2k3h0ZR), about 10% of the world's shipping ships are sitting idle. This is not a new phenomenon, and persists despite widespread ship-scrapping. Another fun fact -- there are between 2,000 and 3,000 known cargo ships in 2017.
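Since I had to convince myself of the differences, here's a tiny Python sketch of the three model classes, with toy numbers of my own (not from any real dataset):

```python
import math

def additive(w0, effects):
    """Additive model: each mutation carries a fixed bump, like "+0.1 fitness"."""
    return w0 + sum(effects)

def multiplicative(w0, factors):
    """Multiplicative model: each mutation carries a fixed factor, like "x2 fitness"."""
    w = w0
    for f in factors:
        w *= f
    return w

def stickbreaking(w0, w_max, fractions):
    """Stickbreaking model: each mutation closes a fraction of the gap to w_max."""
    w = w0
    for f in fractions:
        w += f * (w_max - w)
    return w

# Additive and multiplicative really are the same model up to an exponential:
# adding effects in log-space is multiplying factors in linear space.
assert abs(math.exp(additive(0.0, [0.1, 0.2]))
           - multiplicative(1.0, [math.exp(0.1), math.exp(0.2)])) < 1e-12

# Stickbreaking converges to the maximum instead of growing without bound.
w = stickbreaking(1.0, 2.0, [0.5] * 20)  # each mutation closes half the remaining gap
```

After twenty half-the-gap mutations, `w` is pinned just under the maximum fitness of 2.0 -- the built-in ceiling that the other two models lack.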

Friday, November 24, 2017

November 25, 2017 at 12:39AM

Today I learned: 1) Two new butterfly anatomy facts! Firstly, butterfly proboscises don't work the way I thought. I always assumed they were like needles, and that butterflies stick them into nectar-rich flowers and suck up the nectar like a straw. I was sorely mistaken. In fact, butterfly proboscises come in two parts, left and right, that zip together. The center of this double-proboscis structure forms a canal. The tip of the proboscis isn't just an open end -- it's actually more like a sponge, with lots of layers of tiny pores. Liquid is drawn into the pores by capillary action, eventually bleeding into the central canal. In the canal, liquid forms little tiny bridges between proboscis zipper teeth, kind of like water caught between teeth of a comb. Some fancy fluid dynamics that I don't understand draws the liquid up, aided by a bit of suction generated by an inflatable air sac in the butterfly's head. Okay, second new anatomy fact -- in addition to antennae, butterflies have a pair of antenna-like fuzzy bits that rest right up against the head, next to their eyes. A bit of googling tells me that they're called "labial palps", and that they're essentially used for smelling. Some up-close observation of a butterfly suggests to me that they're also used as windshield wipers to clean their eyes. 2) So, everyone knows that Dolly the sheep was the first cloned mammal. Some know that Dolly died early, with pretty bad arthritis and some other medical problems. That's one of the reasons scientists haven't pursued mammalian cloning very much -- it looks like cloning isn't very good for mammals, for reasons nobody could really understand. Well... now it looks like Dolly was just an outlier. There were four other sheep cloned from the same cell line as Dolly, and they all lived pretty normal, healthy lives. Moreover, it seems that arthritis is super-common in that breed of sheep, and that it isn't particularly unusual for a sheep her age to get arthritis.
Now, cloning isn't *totally* healthy -- clones tend to have more difficult, dangerous births, and require more care as infants, but once they're grown they seem to do just fine. Also, I didn't realize that mammalian cloning *has* in fact been used commercially to reproduce extremely valuable breeding cows and steers. This isn't quite new news, but for some reason it has hit some popular news outlets lately. For a reputable source, I'd check out The Atlantic's article on the subject: http://ift.tt/2mXE11y 3) There are some really strange-looking lobsters out there. Today I learned about one of these -- the slipper lobster. The slipper lobster looks a bit like someone painted a small Maine lobster gray and squished it from nose to tail. It's very squat, almost pug-nosed. It has no claws, just arms ending in shell plates. It also has the unfortunate double distinction of being endangered and incredibly tasty. Facts #1 and #3 are courtesy of the Denver Butterfly Pavilion, which it turns out is a zoo of all kinds of different invertebrate species. Some highlights of mine include a glassed-in honeybee hive, some really spectacular orb weaving spiders, two foot-long isopods, and, of course, the butterfly garden.

Tuesday, November 14, 2017

November 14, 2017 at 04:21AM

Today I Learned: 1) Mammals were dominant as large land animals *before* the dinosaurs. Well, not really -- but our non-dinosaurian ancestors were. Mammals are part of a group called the synapsids, which began as lizard-like creatures with a slightly weird skull. These were the biggest, baddest, I *think* most common large animals (amniotes, technically) during the Permian Period, and "reigned supreme" for some 46 million years. The Permian-Triassic extinction hit the synapsids really hard, as is typical for large-bodied groups in a mass extinction, and their extinctions left ecological niches open for large animals that the dinosaurs filled nicely. Only three clades of synapsids survived; of these, one died out later in the Triassic, one survived as a large herbivore alongside the dinosaurs, and one shrank really quickly down to rat size and began specializing in insect predation. The last group, known as eucynodonts, would survive the late Cretaceous extinction event and eventually give rise to modern mammals. There's something of an open question about why dinosaurs (properly, the immediate ancestors of the dinosaurs) were able to take over from the synapsids. One theory is that the post-Permian world was relatively arid, and dinosaurs had better adaptations for low-water environments (uric acid excretion being a key example). This theory is rather controversial, though -- after all, there are some mammals today that have adapted extremely well to arid environments, so why wouldn't the old synapsids? 2) Another mammal fact -- there's a theory called the Nocturnal Bottleneck that says that many of the common features of mammals *are* common features of mammals because our last common ancestor was nocturnal, and was for quite some time. Every mammal that isn't nocturnal had to evolve that way from the nocturnal ancestor.
The major categories of evidence for the nocturnal bottleneck are: mammals, as a whole, have highly developed hearing and smell, but not great vision (especially in color); mammals have fur, tissues specialized in rapid heating, and extremely active mitochondria, which would help them stay warm at night; mammals don't have particularly good UV protection; and burrowing behavior appears to be a basal mammalian trait. 3) Here's a nice evolution number to know -- it takes about 10 million years for a dormant mammalian gene to become completely unrecoverable by random disabling mutations.

Wednesday, November 8, 2017

November 09, 2017 at 12:38AM

Today I learned: 1) Cats and dogs don't drink the same way. Dogs use the undersides of their tongues as upside-down spoons, lapping into the water and cupping the tip of the tongue downward to catch water. Cats, in contrast, do *not* cup their tongues to catch water. Instead, they use the tip of their tongues to "grab" the top of the water, then flick their tongues up to pull a column of water toward the mouth. Inertia brings the column up until gravity breaks it. Also, cats have some water-trapping physiology in the mouth so that they don't have to swallow with every flick of the tongue. I quote: "Inside the mouth, cavities between the palate’s rugae and the tongue act as a nonreturn device and trap liquid until it is ingested every 3 to 17 cycles (15)." Details on cat tongue biomechanics here: http://ift.tt/2mGEHU5 2) Fans don't really "push" air the same way I imagined. That is, wind coming off a moving fan blade isn't just pushed straight off the blade -- it's actually a continuous ribbon-like vortex of air, curled kind of like a Twinkie, with net forward momentum.... Look, it's kind of hard to describe, but you can see it here, along with some cat-lapping and other things-in-slow-motion: https://www.youtube.com/watch?v=gspK_Bi0aoQ 3) Human eyes can detect, at minimum, somewhere between 1 and 10 photons. See the section "Photon Counting in Vision" from http://ift.tt/2ylo6LN.

Thursday, October 26, 2017

October 27, 2017 at 12:56AM

Today I Learned: 1) In 2004, Equatorial Guinea was ruled by a dictator named Teodoro Obiang Nguema Mbasogo. As dictators go, he ranks somewhere in the range from "brutal" to "godawful". I don't know a ton of details about Mbasogo's rule, but I *do* know that he a) got into power by killing his uncle (who, admittedly, was also pretty awful), b) is, officially, the country's god, with a direct permanent line of communication to the Almighty, and is bestowed with the magical ability to kill without going to hell for it, and c) regularly comes up on lists of "worst African dictator". So... not a good guy, not a good government. Anyway, in 2004, a bunch of London financial types with a lot of money decided they'd had enough of Mbasogo... existing. So they hired an army of 64 mercenaries (mostly ex-South-African) and all the equipment they'd need to take out Mbasogo, and asked them to do just that. The... plot? Job? War? Whatever it was, it ended before it really started. There's direct and indirect evidence that the US, UK, and Spanish governments may have known about the planned attack, but it was the *Zimbabwean* government that stopped them by arresting their plane while it was in a Zimbabwean airport. The soldiers were jailed, tortured, and at least one has died. A few of the financiers were fined and jailed for a few years; others have gotten away without a charge sticking. 2) ...how to take jacket measurements. Roughly. Better than I knew before, anyway. 3) Andy Halleran and I were curious about who invented Markov Chain Monte Carlo (MCMC), the modern scientist's Favorite Algorithm™, so we looked it up. It looks like MCMC was first published in 1953 out of Los Alamos. The authors are Nicholas Metropolis*, Arianna Rosenbluth, Marshall Rosenbluth, Augusta Teller, and Edward Teller, but there are at least a couple of claims that Nicholas and Augusta didn't really do anything on the paper.
The more general Monte Carlo class of algorithms seems to have been quietly invented by Enrico Fermi, but he didn't publish it and nobody heard about it. Later on, Stanislaw Ulam (of cellular automaton and thermonuclear bomb fame) and... *sigh*. John von Neumann. Of *course* it was John von Neumann. Anyway, they reinvented Monte Carlo methods while working on neutron penetration of radiation shielding. They turned out to be critical for simulations used to build the bomb, and, later, just about everything. * Of Metropolis-Hastings algorithm fame. Hastings was the first author of the *second* critical paper on MCMC, that generalized the first paper's strategy from one particularly tricky integral to functions-in-general.
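As an aside, the heart of the Metropolis algorithm fits in about a dozen lines. This is my own toy sketch (a 1-D sampler aimed at a standard normal), not code from the 1953 paper:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    """Bare-bones Metropolis: symmetric uniform proposals, and each proposal
    is accepted with probability min(1, p(x_new)/p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Compare in log space; the tiny offset guards against log(0).
        if math.log(rng.random() + 1e-300) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)  # rejected proposals repeat the current point
    return samples

# Target: a standard normal, specified only up to its normalizing constant --
# that's the whole point of MCMC.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After 20,000 steps the sample mean should sit near 0 and the sample variance near 1, even though the sampler never saw a normalizing constant.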

Saturday, October 14, 2017

October 14, 2017 at 03:15AM

Today I learned: 1) I've been playing the confidence calibration game (http://ift.tt/1dzDceM) for a while now, and I like to think I've gotten decently good at it. I'm somewhat overconfident in some ranges, but my 60%, 80%, and 99% confidence estimations are pretty much spot-on. The *weird* thing I noticed today is that I am rather *overconfident* when I have 50% confidence. Specifically, when given two possible answers, when I have NO IDEA what the right one is and blindly guess, I am right only 34% of the time, with an N of 29 (10 correct, 19 incorrect). Is this significant? The, uh, "obvious" statistical test to use here is the P-value test, which asks how likely it is that I would see data this "weird" if the null hypothesis (that is, that I actually guess with 50% accuracy) were correct. In other words, the p-value quantifies how surprising the data are under a null hypothesis. The lower the value, the more surprising it is. In this case, assuming binomial distribution of answers, I get a two-tailed P-value of about 0.13. A little suspicious, but not very strong evidence. What if we do this the Bayesian way? Hmm. Well, for this we turn to Bayes' Rule. If p (little "p", not "P", which stands for "probability of") is the probability that I get a random guess right, and D is our observation (10 correct guesses out of 29), then we have P(p|D) = P(D|p) * P(p) / P(D). The probability of getting 10/29 guesses right when each guess has probability p of correctness is given by the binomial distribution, which I don't happen to know off the top of my head but the internet does. The probability of getting the data *at all* is the sum (integral) of all the probabilities of getting that data for every possible value of p (so, ∫Binom(D;x) evaluated from x=0 to x=1). The prior probability of p is the tricky bit, as usual.
The typical thing to do here would be to assume we have no knowledge of p and give it a flat prior distribution, so it basically goes away (in fact, it does go away -- a uniform distribution on the range (0,1) is 1 everywhere, so it's just multiplying everything through by one). Plug in the binomial distribution helpfully provided by Wolfram Alpha and we have P(p|D) = [(29c10) * p^10 * (1-p)^19] / [(29c10) * ∫x^10 * (1-x)^19 dx] where (29c10) means "29 choose 10" and the integral in the denominator is evaluated from x=0 to x=1. Conveniently, the (29c10) bits cancel, so we can take them out. The integral in the denominator is just a number, which happens to evaluate to about 1.7 * 10^-9. If we graph out what's left, we get this*: http://ift.tt/2yjhcJU Take-home points from the distribution: a) The expected value of p is 35%, which makes sense; b) it's *very likely* that I make random guesses at less than 50% accuracy -- about 95% of that curve is below p = 0.5. So, do I guess at worse than random? Eh, I wouldn't count on it. When it comes down to it, I *don't* put a uniform distribution on the prior for how well I guess at random -- in fact, my prior for that is pretty spiky around p = 0.5. But hey, now I have *some* evidence to the contrary. * The graph at that link was made with a typo'd denominator -- (1-x)^29 instead of (1-x)^19 inside the integral -- which is why it doesn't integrate to 1 like a good probability distribution should. With the corrected integral above, it does. 2) I *know* I knew this one before, but I forgot, and now I learned it again! Did you know the United States has had an Emperor? At least, he thought he was the Emperor. And he got coins minted after him. Anyway, Emperor Joshua Norton was born an Englishman, moved to San Francisco, lost all his money, went a little nuts, and declared himself Emperor. He was grandiose, for sure, but apparently charming and harmless enough that the locals humored him.
Coins, as I said, were minted in his name, and his presence was generally honored and applauded, as were his proclamations. He was beloved enough that 30,000 San Franciscans attended his funeral, even though he owned virtually nothing but a few uniforms, hats, walking sticks, fake letters and bonds, and a saber. Another thing I didn't know about Emperor Norton the first time around -- he was actually arrested at one point by a policeman who tried to have him institutionalized. Public outcry was swift, and the police chief ordered him set free, on the grounds "that he had shed no blood; robbed no one; and despoiled no country; which is more than can be said of his fellows in that line." 3) Everyone knows that college tuition is rising rapidly. Did you know that college *spending* is not? Spending per student (though not, I think, spending per degree granted -- it's not quite the same) has been pretty flat over the last decade (I don't think there's good data from before that).
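If you'd rather not trust my algebra in 1), the posterior is easy to check numerically. With the flat prior, the normalized posterior is p^10 * (1-p)^19 / B(11, 20), i.e. a Beta(11, 20) density -- here's a plain-Python sanity check with crude Riemann sums:

```python
import math

# 10 right out of 29 blind guesses, flat prior on p.
k, n = 10, 29

def beta_fn(a, b):
    """B(a, b) for integer a, b: the normalizing integral of x^(a-1) * (1-x)^(b-1)."""
    return (math.factorial(a - 1) * math.factorial(b - 1)
            / math.factorial(a + b - 1))

def posterior(p):
    """Beta(11, 20) density: binomial likelihood over its own integral."""
    return p**k * (1 - p)**(n - k) / beta_fn(k + 1, n - k + 1)

# Riemann sums over a fine grid of p values.
dp = 1e-4
grid = [i * dp for i in range(1, 10000)]
mass = sum(posterior(p) * dp for p in grid)                   # total probability
mean = sum(p * posterior(p) * dp for p in grid)               # expected value of p
below_half = sum(posterior(p) * dp for p in grid if p < 0.5)  # P(p < 0.5)
```

The sums come out as they should: total mass ≈ 1 (so it *is* a good probability distribution), expected value ≈ 11/31 ≈ 0.355, and roughly 95% of the mass below p = 0.5, matching the take-home points above.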

Thursday, October 12, 2017

October 12, 2017 at 03:33AM

Today I Learned: 1) It turns out that the whole medieval system of economics, and particularly the thing where YOU DO WHAT YOUR FATHER DID, NO MATTER WHAT, was largely the result of one Roman emperor. That emperor was Diocletian, and he may have been the worst thing that ever happened to the free market in Europe. Diocletian ruled at the end of the 3rd century, a time when the Roman economy was in a rough spot. Centuries of coin debasement (replacement of valuable metals in coins with mundane metals) and inflation had completely devalued currency, and with it the ability of the Roman government to collect taxes and pay its servants (by which I mean its soldiers). Diocletian sought to fix this problem, and accordingly made a huge set of sweeping changes to the Roman economy. One thing that apparently worried Diocletian pretty badly was the idea that people might (*gasp*) leave an industry(!). After all, pig farming isn't very pleasant. What if all the pig farmers decided to up and move? Or, worse, *change jobs*? Where would the army get its bacon? That wouldn't be acceptable -- the army calculated its consumption of goods and services very carefully, and (partly because of the way Diocletian restructured the Roman tax system) any major supply changes might seriously damage Rome's ability to defend itself. Diocletian's solution was to simply fix everyone's jobs. Diocletian's government quietly, slowly took over all of the guilds of Rome, which had previously been voluntary unions of professionals. Then he removed the voluntary bit of the guilds. Then he mandated, by law, that you couldn't leave a guild, and that membership would be hereditary. Bam. Medieval serfdom achieved. Thanks, Diocletian. Thanks. 2) Zebrafish stripes aren't fixed patterns -- they're dynamic, moving (if slow) waves. If you laser ablate a section of stripes, the nearby stripes will move to fill the gap. 3) One of Alan Turing's many delightful insights was that of the reaction-diffusion network.
A reaction-diffusion network is a simple mathematical model of chemicals that can a) react with each other and b) diffuse around in space. Thus, reaction-diffusion. Anyway, it's a kind of neat simple descriptor of chemistry-in-space, but the *really* cool thing is that many Turing-style reaction-diffusion networks look an awful lot like the patterns of stripes, spots, and shapes found on animals. Just about every animal's skin/coat/shell pattern, from jaguars to giraffes to snails and fish, can be described by a Turing pattern. This is awesome and all, and it suggests a mechanism by which animals get their patterns... but I do have a bit of a fear that we're overfitting. After all, maybe reaction-diffusion networks can just make *any* pattern, including those of an animal. If that's true, then it's not tremendously likely that we've discovered the mechanism of animal pattern formation, any more than discovering how to render pictures with a computer tells us why the stars are arranged the way they are. Today I learned that there *are* some restrictions on what Turing patterns can do! Example: A Turing pattern on a tapering cylinder (say, a tail) can form spots near the base of the tail and stripes at the tip, but it *cannot* do the opposite. That is to say, if cat coat patterns are formed by reaction-diffusion networks, then there can be spotted cats with striped tail-tips, but not striped cats with spotted tail-tips. Indeed, there are plenty of examples of spotted cats with striped tails (see cheetahs, ocelots, and to a lesser extent jaguars), but to my knowledge there aren't striped cats with spotted tails (extra kudos to anyone who proves me wrong!). So, that's nice. For details on the cat tail thing and others, see "How the Leopard Gets its Spots" by James Murray. (http://ift.tt/2xAbM94)

Thursday, September 28, 2017

September 28, 2017 at 07:41PM

Today I learned: 1) ...about Stéphane Leduc's experiments in osmotic growth. Back in the 1910s, Leduc (and, I gather, a few others before him) discovered that if you put a seed crystal of one salt in a solution that's highly saturated with another salt, you can make a "cell" whose membrane is a layer of flexible, colloidally-precipitated salts. Osmotic pressure causes the "cell" to grow, sometimes in quite fantastic shapes... including many that are startlingly similar to living organisms we're familiar with. Leduc came up with osmotic growth recipes for single cells, cell colonies, ferns, mushrooms (complete with caps and gills), trees, tube worms, shells, and other things. Leduc actually hoped to show that osmotic pressure was the fundamental force responsible for life. In that, he was grossly mistaken. It's something of a cautionary tale for fundamental biologists, or even scientists in general... just because you have a simple model that reproduces the patterns of a phenomenon DOES NOT mean that you know what causes that phenomenon. (On the other end of the spectrum, there are Turing patterns, which as far as I know were a) totally mathematical structures that b) turned out to be quite descriptive of things like shell and fur patterning.) Check out some photographs of Leduc's growths here: http://ift.tt/2x0uFC1 If you want to know more, you can read Leduc's book on the subject here: http://ift.tt/2wmfIdJ. I've only read a small excerpt, so I can't speak to the whole book, but it's worth at least looking through the plates -- he took some really gorgeous photos of crystal growths. He also gives a few recipes in chapter 12, so if you want to do these yourself, you can (some of those salts are toxic, though -- DO NOT mix the salts with or on anything you're going to use for cooking!). 2) Arrow's Information Paradox, a, uh, theorem of economics? A hypothesis of economics? Anyway, Arrow's Information Paradox deals with the supply and demand of information as a good.
The paradox is that it's very hard to know how valuable a piece of information will be unless you know the information. So if some kind of information is available in an open market, and you're trying to decide whether or not you should buy it, there's no great way to decide without getting the information, which you can't do without buying it. The net result is that a free market is likely to mis-supply (probably under-supply?) information. Arrow's Information Paradox is one argument that knowledge should be treated as a public good and funded accordingly. 3) Wikipedia has a pretty severe gender balance problem -- only about a third of Wikipedia visitors and only about 10-15% of Wikipedia editors are female. I was quite surprised about this. There are a whole bunch of interesting nuances to this overall fact -- for more, check out Wikipedia's article on gender bias on Wikipedia: http://ift.tt/1AnbVMx

Friday, September 22, 2017

September 22, 2017 at 10:25PM

Today I learned: 1) One of the major bacterial protein secretion tags works by getting itself inserted into the plasma membrane and exposing a motif that's recognized by a membrane-bound protease. The motif gets clipped, the tag stays in the membrane, and the rest of the protein floats away. 2) GFP doesn't really work in plants. Chlorophyll blocks either GFP's absorption or its emission, and plants typically have a lot of chlorophyll. Instead, the standard plant reporter is (one of the many) luciferase(s), which produces light by oxidizing a small molecule. Unfortunately, luciferase doesn't really work unless you express it in the chloroplast. I don't know why. Something about "metabolism". Anyway, even *more* unfortunately, the "express luciferase in the chloroplast" technology is under patent, so you can't really make glowing green plants for commercial use. =( Fortunately, it doesn't matter much, because 3) There's a class of lipids that act as membrane-specific markers for eukaryotic cells. I can't remember their name right now, and I can't find them, so I'm just going to call them marker lipids. Anyway, there are several (about a dozen) variations of marker lipid, and each is used by the cell to label a different kind of membrane, i.e., marker type I goes on the nuclear membrane, marker type II goes on the endoplasmic reticulum, marker type III goes on the plasma membrane, etc. That's how the cell can correctly insert stuff into the right membrane -- a lot of membrane insertion will only happen on membranes bearing the right marker lipid, and some proteins don't function unless they're bound to the right marker lipid. How, then, do the marker lipids get into the right membrane? Well, they can all be interconverted by a bunch of different enzymes, and those enzymes are *themselves* bound to the right membrane. So, using my example marker lipids above, the enzymes that turn other marker lipids into marker type I are found on the nuclear membrane.
That way, just about any marker lipid can go into just about any membrane, and will eventually be converted into the right kind. How, then, do the marker-lipid-converting enzymes get into the right membrane? They recognize the marker lipids, of course!

Wednesday, September 13, 2017

September 13, 2017 at 04:07AM

Today I Learned: 1) ...dependency graphs matter. Bungie's Chris Butcher has a wonderful talk on asset processing for the game Destiny. In short, Bungie built a nifty asset compiler that let them build, on a PC, a compressed binary of the game so that developers and artists could check out their changes, in a final context, with the game's final memory layout. The problem was, the system made it really easy to link together dependencies. Imagine, for example, that some object (perhaps a system that highlights enemies) wants to know the bounding box for a crate. The compilation system then would go compile the crate and return the bounding box. But wait! In compiling down the crate, the system would need to know about the shader used by that crate. So it would compile the shader, too. Which would mean compiling every object used by the shader. And so on. Basically, it was really, really easy to end up compiling the ENTIRE LEVEL any time an artist made any change, which meant that artists would have to wait hours to see the results of every change. Not cool. The lesson? Compilation dependencies are dangerous things, especially if you let people who don't understand them VERY well start to add them. For more details, see: https://www.youtube.com/watch?v=7KXVox0-7lU 2) Lamins are a class of protein that's critical to the proper formation of the nuclear lamina, which is a sheet of proteins that coats the inside of the nuclear envelope. The nuclear lamina is really important for proper DNA spatial organization and replication. If you have missing or malformed lamins, you get progeria. Today I learned that lamins are unique to animals. Plants, fungi, and protists don't have lamins. They DO have nuclear lamina -- they're just made with different proteins. 3) Speaking of lamins, today I learned that lamins and laminins are NOT THE SAME THING. *Lamins* structure the *nuclear lamina*.
*Laminins* are the primary component of *basal lamina*, which is a layer of mixed proteins, glycoproteins (proteins with sugars attached), and polysaccharides (sugars linked in long chains) that coats the surface of animal cells and forms the basal layer of most connective tissue. Also, today I learned about the basal lamina. I'm sure I've encountered it in some bio class along the way, but I never really got what it was, and how it differs from any other extracellular matrix. In short, the extracellular matrix is bulky, whereas the basal lamina is sheet-like and typically sticks directly to a layer of cells. They're also made from different proteins, but I don't understand the consequences of those differences yet.

Wednesday, September 6, 2017

September 06, 2017 at 10:07PM

Today I learned: 1) On macs, there's a keyboard shortcut that lets you add umlauts to vowels. Today I learned that it can also be used to make a naked umlaut: ¨. The same thing works for some other accents, like ´ and ˆ (not ^). 2) ...how hurricanes work! At least, a little bit. A hurricane is basically a giant, roughly-donut-shaped cell of moving air. It gets started when there's a lot of hot water. Like, a LOT of hot water. Heat from the water heats the air above it, and moistens it by evaporation. That air rises and cools, eventually forcing out the water vapor as clouds. That part above describes quite a few types of cloud formation. What makes hurricanes happen is that oceans are HUGE and contain a LOT of heat, so it takes a LOT of air to carry away all the excess. So much air gets heated and forced upwards, in fact, that the air pressure lowers, which draws in air from the surrounding ocean. Then *that* air gets heated and rises, etc. If this happens forcefully enough, for long enough, it forms a cell, where air circulates up, out from the center, down, and back in toward the center. All the while, it's sucking water up into the accumulating cloud layer, which eventually gets dumped out when the whole thing hits something that isn't warm water (usually land). Why do hurricanes spin? Uh, something about Coriolis forces, I'm sure. 3) There's a really cool unit-testing package for Python called Hypothesis that lets you test functions by defining guarantees of those functions, like "this function doesn't throw an error" or "this function *does* throw an error" or "if you serialize and read back a value using these functions, you get the same value back". Hypothesis automatically generates test cases from your specifications, paying particular attention to edge cases of various kinds. When you run your unit test, it runs all of the Hypothesis-generated test cases. 
If there's an error, Hypothesis will try to find the simplest possible example that breaks your specs, then tell you what that example was. Honestly, this package sounds a little too good to be true to me -- anybody out there have experience with it? Lady Jade?
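For the curious, the round-trip guarantee described above might look roughly like this (a minimal sketch using Hypothesis's real `@given` decorator and `strategies` module; the JSON serialize/deserialize pair is just a stand-in for whatever functions you'd actually want to test):

```python
import json

from hypothesis import given, strategies as st

def serialize(value):
    # Stand-in round-trip pair: JSON here, but any encode/decode works.
    return json.dumps(value)

def deserialize(text):
    return json.loads(text)

@given(st.lists(st.integers()))
def test_round_trip(value):
    # The guarantee: serializing then reading back returns the same value.
    assert deserialize(serialize(value)) == value

# Calling the decorated function runs it against many generated inputs,
# with extra attention to edge cases like [] and very large integers.
test_round_trip()
```

If the property ever fails, Hypothesis "shrinks" the failing input down to the simplest list that still breaks it before reporting.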

Sunday, September 3, 2017

September 03, 2017 at 10:50PM

Today I learned: 1) ...that I don't understand magnets. At all. What I am quite sure of is that magnetic fields are generated by moving electric charges. So when electricity flows, it generates a magnetic field. So far so good. But then what the hell is a static magnet? There's no charge flow in a refrigerator magnet! It turns out that I am right to be confused. There's a theorem from the late 1910s called the Bohr-van Leeuwen theorem (what awkward punctuation! Surely English can do better than that?) that says that when you apply statistical mechanics to a classical system of particles with charges and mass and all that, the net magnetization is *always* zero. In other words, classical physics *cannot* explain static magnets. So... what does? You guessed it -- quantum mechanics. For one reason or another, in QM, electrons end up generating a tiny little magnetic field around themselves. If you line up all the little magnetic fields, you get a big magnetic field. Why do electrons generate magnetic fields? Spin mumble mumble I have no idea. Now, the Bohr-van Leeuwen theorem only applies to non-rotating systems, so it's possible that classical mechanics can still explain, for example, the magnetic fields of stars and planets. But I can neither confirm nor deny whether it actually does. 2) Something else I don't understand -- and I'm throwing this out there to see if anyone does understand this -- what drives the colossal storms on Jupiter and other gas giants (but mostly Jupiter)? On Earth, hurricanes and typhoons are powered by warm water, which is itself driven by sunlight. But Jupiter is much farther than Earth from the sun, and it has much more powerful storms. Where does the energy come from? 3) Ledo Pizza now has a vegan pre-built pizza. It's easy enough to make one anyway, but theirs is better than any I've built.

Saturday, September 2, 2017

September 02, 2017 at 09:24PM

Today I learned: 1) I'm a big fan of the Chipotle* system of food service. For those not in The Know, I'm talking about restaurants where you build a food from a bar of ingredients, and then get it cooked super-quickly. One cool thing about the system is that it works for a lot of different foods: Chipotle makes burritos; Subway makes sandwiches; Blaze makes pizzas; any number of Mongolian barbeques make... some sort of unholy American seared-noodle thing. So... what food *hasn't* been made with the Chipotle system that should be? Until today, my answer was "pasta" (and no, Mongolian barbeque doesn't count). Today I learned that there's a restaurant called Noodles and Company that's basically the Chipotle of pastas. It's not *exactly* the same model, because they don't have the bar of toppings to choose from explicitly. It's more that they have a bunch of pre-arranged selections of pastas that you can customize heavily. Still, it's close enough to the Chipotle system that I'm counting it. *Really it's the Subway system, but I'll eat Chipotle over Subway any day, so for the purposes of this series of posts, it's the Chipotle system. 2) ...how the sliding doors work in the Ikea PAX system of wardrobes. It's pretty simple, all in all, but there are a lot of ways you can misassemble it. In particular, there are two rails on the top and two rails on the bottom, one for each of two sliding doors. If you hang either end of either door onto the wrong rail of either the top or bottom, you get some nasty physical conflicts. Also, importantly, the screws on the PAX doors tend to come loose over time. If you have a PAX wardrobe, make sure to check the screws on the back of the doors every couple months. If a screw loosens too much on the outer door, it can get the whole door stuck so that the doors can't come apart, leaving you with a one-door PAX wardrobe. 3) Early computers used to break a lot. 
For some reason, discrete components (transistors, resistors, capacitors) are a lot more failure-prone than integrated circuits (anybody know why?), so computers built out of discrete components (all of the early ones) were a lot more breakage-prone. Accordingly, debugging used to be a lot of tracing out circuit diagrams and figuring out what components would produce the bug if broken. I count this as a "fact" particularly because it's really antithetical to all of my programming instincts -- in my experience, if you run into a bug, it's because you made a mistake. No, the interpreter is not mis-reading your bytecode. No, your compiler doesn't have a bug. No, your processor *definitely* isn't broken (unless it's a Pentium P5 800 nm 5V or a Pentium P54C 600 nm 3.3V -- http://ift.tt/1LvPh8u). You made a mistake, and you have to find and fix it. But that wasn't always true.

Wednesday, August 23, 2017

August 23, 2017 at 06:57PM

Today's TIL is actually facts from yesterday, but I didn't get a chance to compile them yesterday. So here they are, today: 1) ...about Rho-factor termination in prokaryotes. Rho-factor termination is one of the two main mechanisms that prokaryotic genes use to stop transcription (termination). The *other* termination mechanism is called "Rho-independent termination", and is what I usually think of when I think of a terminator. Rho-independent terminators are (usually GC-rich) regions of DNA at the end of a gene that, when transcribed, form a long hairpin. The hairpin is shaped just right so that when RNA polymerase produces the terminator, it gums up the polymerase and makes it get stuck, so it sits there until it eventually falls off on its own. Roughly. Rho-dependent termination is a little more convoluted. Rho-dependent termination is dependent on a protein called, uh, Rho factor. Thus "Rho-dependent". Anyway, Rho factor binds to a specific sequence of *RNA* just upstream of a more standard terminator hairpin. When the Rho binding site is transcribed, Rho factor binds to it and starts moving down the still-growing transcript, towards the transcription fork (how it does this isn't entirely known, but it's suspected that Rho factor forms a barrel-like structure and pulls the RNA transcript through the center, spooling it onto the other end of the Rho factor as it goes. In any case, the process is ATP-dependent). Meanwhile, a more normal terminator makes RNA polymerase pause long enough for Rho factor to catch up; when it does, it unwinds the transcription bubble somehow, popping off RNA polymerase, halting transcription, and releasing the new RNA. 2) So-called "cold" plasmas don't have a well-defined temperature. It turns out that, at a stat mech level, the charged ions in a cold plasma act as though they're at a different temperature than neutrally-charged particles. So the whole thing, taken as a whole, doesn't really have a single temperature. 
3) You can use MCMC to sample phylogenetic trees. It's basically the same process as usual MCMC, but there are two particular challenges for phylogenetic trees. One problem is mixing, which can be particularly bad in tree-space -- but that's always solvable with more computing power. Another is visualization -- it's not obvious how you plot, say, a collection of 25,000 likely tree reconstructions in a way that's useful. There are some algorithms for calculating distance metrics between trees, which can then be plotted in whatever-D you can visualize (so, 2D or 3D). For more general information about MCMC on phylogenetic trees, see http://ift.tt/2xtANmR. For more information on *visualizing* MCMC results on phylogenetic trees, see http://ift.tt/2wzJQpE.
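I don't have a tree example handy, but the accept/reject skeleton is the same whatever the state space -- for trees, the proposal would be a random rearrangement (e.g. nearest-neighbor interchange) instead of a numeric nudge, and the likelihood would come from a sequence-evolution model. A minimal numeric sketch:

```python
import math
import random

def metropolis_hastings(log_likelihood, propose, init, n_steps):
    """Generic Metropolis-Hastings skeleton (symmetric proposals).

    For phylogenetics, `init` would be a starting tree, `propose` a random
    tree rearrangement, and `log_likelihood` a model like Jukes-Cantor;
    here everything is numeric for illustration.
    """
    current = init
    current_ll = log_likelihood(current)
    samples = []
    for _ in range(n_steps):
        candidate = propose(current)
        candidate_ll = log_likelihood(candidate)
        # Accept with probability min(1, L(candidate) / L(current)).
        delta = candidate_ll - current_ll
        if delta >= 0 or random.random() < math.exp(delta):
            current, current_ll = candidate, candidate_ll
        samples.append(current)
    return samples

# Toy target: a standard normal (log-likelihood up to a constant).
random.seed(0)
samples = metropolis_hastings(
    log_likelihood=lambda x: -0.5 * x * x,
    propose=lambda x: x + random.gauss(0, 1),
    init=0.0,
    n_steps=5000,
)
mean = sum(samples) / len(samples)  # should hover near 0
```

The mixing problem mentioned above shows up when proposals rarely get accepted or the chain wanders slowly -- the samples then badly under-represent the target distribution.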

Monday, August 21, 2017

August 21, 2017 at 05:53PM

Today I Learned: 1) ...why complex eigenvalues in the Jacobian of a dynamical system at steady state always indicate some kind of spiral flow! In short, it's because every eigenvalue of the Jacobian is associated with an eigenvector that represents the direction in which the *direction* of flow doesn't change. When an eigenvalue has an imaginary component, its associated eigenvector *also* has an imaginary component... which means there's no physical vector on which flow doesn't change direction. The only way for that to happen is if the system flows in a spiral. 2) If you ever see reference to a value "crossing the imaginary axis", it really means a change in sign (positive to negative or vice versa). Specifically, a change in *real-valued* sign -- if you ignore the imaginary part of a value, and the value changes sign, that means it had to cross the imaginary axis. This apparently comes up from time to time in dynamical systems, where the real sign of the eigenvalues of the Jacobian at steady state tells you whether a system is stable (always tends toward a state, like a ball rolling to the bottom of a valley) or unstable (always tends to move away from a state, like a ball rolling off the top of a hill). If the aforementioned eigenvalues are negative, the steady state point is stable; if they're positive, it's unstable; if they're complex-valued, the system spirals; if they're real-valued, the system goes straight in or out. 3) Oook, so, I'm now just over 100 episodes into the podcast "The History of Rome", which is about, uh, the history of Rome. The podcast has so far covered everything from the founding of the city of Rome by Romulus (around 750 BC) to the instatement of Elagabalus as Emperor by the eastern legions (218 AD). I've now listened to the brief histories of twenty-five Roman Emperors, and a pattern has made itself clear: Every time a young man or boy becomes Emperor, they are terrible. Every. Single. Time. 
They have, so far, been universally incompetent, indulgent, and cruel, and every single time a young emperor is deposed, everyone in Rome seems to be happy to move on. So let this be a lesson -- DON'T PUT YOUNG MEN IN CHARGE OF EMPIRES.
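The eigenvalue test from items 1 and 2 is easy to play with numerically. A minimal sketch (the Jacobian here is made up, not from any particular system):

```python
import numpy as np

def classify_steady_state(J):
    """Classify a steady state from the Jacobian J evaluated there.

    Follows the rough rule in the post: all real parts negative = stable,
    otherwise unstable (a saddle counts as unstable here); any nonzero
    imaginary part = spiral, otherwise straight in/out ("node").
    """
    eigvals = np.linalg.eigvals(J)
    stable = all(ev.real < 0 for ev in eigvals)
    spiral = any(abs(ev.imag) > 1e-12 for ev in eigvals)
    return ("stable" if stable else "unstable") + (" spiral" if spiral else " node")

# Made-up Jacobian: eigenvalues are -0.5 ± 2i, so flow spirals inward.
J = np.array([[-0.5, -2.0],
              [ 2.0, -0.5]])
kind = classify_steady_state(J)  # "stable spiral"
```

The real part controls in-vs-out (crossing the imaginary axis flips stability) and the imaginary part controls rotation, which is exactly the correspondence described above.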

Friday, August 18, 2017

August 18, 2017 at 02:38AM

Today I Learned: 1) You can put incredibly thin coats of stuff onto other stuff using a process called "vacuum sputtering". You might be tempted to say "stuff to other stuff? That's pretty vague." And you'd be right. Because it's a pretty general technique. That's part of the awesomeness of it. Deposition techniques in general take advantage of the ability of most materials to easily form reactive species on surfaces. Under most conditions, those surfaces react immediately with oxygen -- that's why metals rust and why many plastics age. In vacuum, though, reactive species will just sit on the surface of whatever material they're on until they get hit with something. So if you want to coat a surface with, say, titanium, you just have to get that (very clean) surface into a vacuum and introduce a "few" titanium atoms into the vacuum. Vacuum sputtering is a technique for getting the coating substance into the vacuum. You set up the substrate (the thing you want to deposit onto) across from a target (a block of the material you want to coat with). Then you hit the target with high-energy gas or plasma (typically argon). The impact of the gas sends atoms/molecules of the target flying off into the vacuum... where they hit the substrate and stick. It's a bit like MALDI, but with ionized gas instead of lasers (for those who know what MALDI is). So, next time you want to coat a wasp with gold, this is how! 2) ...how to use Lie derivatives to determine system identifiability. I'm not going to go into much depth on this, because I'm still in the early, fuzzy stage of understanding Lie derivatives, but basically there's a technique using Lie derivatives that helps you figure out what parameters of a system you can, in principle, figure out by looking at some output of a system. 
A schematic example -- say you have a car driving along a road, and you want to consider a bunch of variables (parameters) like a) what kind of gas it's using, b) what temperature it is outside, c) what kind of road it's driving on, and d) how hard the gas pedal is pushed down. Now say you have some equations that tell you, given all of those parameters, how fast the car goes. Now say you *only* have a record of the speed of the car, and, say, the temperature of the engine (and say that engine temperature also appears somewhere in the equations you have). From that information, can you figure out parameters a-d? A Lie derivative will tell you at least some things about what parameters you can and can't back out, given the variable you observe. 3) Carbide (tungsten carbide, usually) is metal studded with extremely hard little crystals of a metal-carbon compound. Those crystals make carbide very hard. That hardness makes carbide a good material for drill bits, which wear out quite quickly when used in industrial assembly lines. Apparently wearing out drill bits contributes a surprising amount to the cost of manufactured goods?
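For the simplest version of the Lie-derivative machinery, here's roughly what the standard observability rank condition looks like in sympy, on a made-up 2D system (this checks whether the *state* is recoverable from the output; the identifiability tests mentioned above build on the same Lie derivatives, treating unknown parameters as extra states):

```python
import sympy as sp

# Toy system (made up): a harmonic oscillator where we only measure x1.
x1, x2 = sp.symbols("x1 x2")
states = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1])   # dynamics: dx/dt = f(x)
h = sp.Matrix([x1])        # output: what we actually observe

def lie_derivative(g, f, states):
    # L_f g = (dg/dx) * f: how g changes along the flow of f.
    return g.jacobian(states) * f

# Stack gradients of h, L_f h, L_f^2 h, ... into an observability matrix.
rows = [h]
for _ in range(len(states) - 1):
    rows.append(lie_derivative(rows[-1], f, states))
O = sp.Matrix.vstack(*[r.jacobian(states) for r in rows])

# Full rank means the full state can (locally) be reconstructed
# from the observed output and its derivatives.
full_rank = O.rank() == len(states)
```

In the car example, the observed speed and engine temperature play the role of `h`, and the question is whether the stacked Lie-derivative gradients have full rank with respect to the parameters you care about.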

Tuesday, August 15, 2017

August 16, 2017 at 02:16AM

Today I Learned: 1) So far this year, there have only been five reported cases of guinea worm. That's down from just over a thousand cases worldwide in 2011, and several million each year in the 1980s. We're really, really close to wiping this one out! 2) When I need a color palette (usually for graphing data), I go to colorbrewer2.org. It has a nice selection of pre-screened color palettes for 3-12 color classes, with options to use only colorblind, print-friendly, and photocopy-safe color schemes. It's designed for maps, but I find it works well for other kinds of data presentation. Today I learned about two other similar services, both of which have far better names than colorbrewer. First is coolors.co*, which wins for best presentation -- it has a very slick interface for quickly iterating through a bunch of colors, picking the ones you like, and filling in the rest. It's better for exploring color palettes than for finding optimal ones, though -- they don't seem especially well-optimized compared to colorbrewer's. The other site is i-love-hue.com, which easily wins for best name. It does something functionally similar to both of the others, but is algorithmic (unlike colorbrewer) and is pretty transparent about its clustering algorithm (unlike coolors). It's worth taking a couple of minutes to check out those sites, if just to see how they handle presentation differently. * Not to be confused with coolors.com, which redirects to a Spanish-language car dealership page. 3) ...about this delightful set of 23 personality-probing questions: http://ift.tt/1GXeHqA

Sunday, August 13, 2017

August 13, 2017 at 11:32PM

Today I Learned: 1) ...how to make hash browns! It's, uh, embarrassingly easy, in retrospect. You start with a potato. You peel the potato. You grate the potato (the hardest part). You heat some oil on medium-high heat, and dump the (peeled and shredded) potato onto the pan. You flip it when it's nice and toasty on the bottom; you take it off the pan when it's nice and toasty on the *other* bottom. Salt and pepper to taste. Eat. (Advanced techniques include pan-flipping the hashes and adding chopped shallots. Yeah.) Thanks to Erik Jue for teaching me hash-brown skills! 2) Watched a talk by Greg Foertsch, the art director on Firaxis Games' XCOM games. Some advice that's specific to art direction for video games: * Don't make textures until the last possible minute! Textures take a ton of time to make, and they *don't help you iterate faster*. If you spend time making textures, you're going to end up texturing a lot of things that don't end up in the final game. Better that you'd used that time to figure out what things work and what things don't. * Presentation > style. How you present your game's art and assets is ultimately more important than the exact look and feel of those things. Also, presentation informs style more than the other way around. * Early on in a game's development, it's insanely helpful to make a "vertical slice", which is basically a playable demo where everything's in place, even if it's only for a tiny chunk of the game. Turns out that usually, the first time you make a game, that first vertical slice isn't very fun. But you need that to tell you what to do to make your *next* vertical slice better. It's also a good rally point to keep everyone focused and engaged. * Level of detail really isn't that important. Or rather, having *high* level of detail isn't really that important. 
Firaxis ended up scaling *back* the level of detail on a lot of objects (guns and character models, in particular), because they looked better as simple, over-stylized versions. Some advice that's good for team projects (or just complex projects) in general: * An early period of intense collaboration can be really, really good for the team. XCOM's art team was stuck without any engineers for about the first six months of development, so they all crammed into a much-too-small workspace and hammered out a "gameplay video" that helped them work through a lot of the art design elements -- things like what the UI might look like, how tall to make various game objects like houses and cover and people, how and where to place the camera, etc. At the end of that, the combination of a) intense work on b) a concrete goal c) in an enclosed space gave the team a really strong sense of "buy-in" that fueled them through the rest of the project. * Drawing from the same vignette, I suspect what a team does while stuck says a lot about how that team works. At first blush, it might look like an art team can't really do anything useful without an engineer, but XCOM's art team did without one for *six months* and ended up solving a lot of the art and presentation problems of the game. * One of the things that I've seen over and over again from the various stories I've read of XCOM's development is that it's REALLY IMPORTANT to be able to tell your teammates that something sucks. It seems that Firaxis managed to screw up almost everything about XCOM at some point during development. What saved them was a strong culture of criticism. People were expected to say when something did or didn't work, and they *did*. That's what let Firaxis push the game through two complete overhauls and come out the other end with one of the best strategy games out there. * Scale back early. 
Set production goals early, and if they don't get met, either change the end goals or rethink how you are making things. 3) Matplotlib's colormaps all come with inverses -- just append a "_r" to the end of the colormap name and you'll get the reversed version.
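The "_r" trick is easy to see without even drawing a figure (this sketch uses matplotlib's colormap registry; the reversed map's low end is the original map's high end):

```python
import matplotlib

# Grab a colormap and its "_r" twin from the registry.
cmap = matplotlib.colormaps["viridis"]
cmap_r = matplotlib.colormaps["viridis_r"]

# Reversal means t maps to what the original maps (1 - t) to,
# so cmap_r(0.0) should be the same color as cmap(1.0).
low_r = cmap_r(0.0)
high = cmap(1.0)
```

So if a plot reads better with dark values for high numbers, swapping `cmap="viridis"` for `cmap="viridis_r"` is the whole fix.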

Saturday, August 12, 2017

August 12, 2017 at 11:22PM

Today I Learned: 1) ...what thyme smells like. I know, I know, I really should have known this already -- but somehow I'd only encountered thyme when it was mixed with other spices, and I never knew what part of the smell/flavor to attribute to thyme. I hope to use a lot more of it in the near future. 2) There's a little Latin market a couple of blocks from my house. And when I say little, I mean it. I think it's smaller than my living room. But the important thing is that it's close! Today I went there for the first time. There's a weird mix of stuff crammed in there, including but not limited to: lots of dish soap; tortilla chips (but mostly uncooked tortillas); racks of spices, just like you'll find in the "Latin foods" section of most grocery stores around here; bags of "spices for tamales", which seem to be mostly sesame seeds and a few huge hot peppers; super-cheap mustard, ketchup, nutella, mayo, and that kind of thing, mixed with cans of chipotles in ancho sauce and enchilada sauce; vinegars and lemon juice and baking soda all right next to each other; tons of different kinds of snack chips, including some that I'm pretty sure weren't made in the US; and, above and beyond anything else, lots and lots of alcohol and sodas. 3) ...how to make custom legends with Matplotlib. There are a bunch of matplotlib classes called "artists" that each know how to draw something on a plot -- for example, Line2D can draw a 2D line (shocking, I know). To make a legend with custom symbols, you make a list of artist objects, each one of which describes how to draw a thing, and a list of labels so matplotlib knows what to call each thing, and then you do plt.legend(artists, labels) and that's it.
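In code, that recipe looks something like this (markers and colors are arbitrary; the Agg backend means nothing needs a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

# Artists describe how each legend entry should be drawn; the (arbitrary)
# data passed to Line2D here never actually gets plotted.
artists = [
    Line2D([0], [0], color="tab:blue", lw=2),
    Line2D([0], [0], linestyle="none", marker="o",
           markerfacecolor="tab:red", markeredgecolor="none", markersize=8),
]
labels = ["a line thing", "a dot thing"]

fig, ax = plt.subplots()
legend = ax.legend(artists, labels)
```

The nice part is that the legend is completely decoupled from whatever you actually plotted, which is handy when your plot is built from scatter points or patches that don't legend well on their own.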