1) A *roulette*, in geometry, is a planar curve described by a point on a shape (usually a smooth shape like a circle, line, ellipse, or parabola) as it rolls smoothly, without slipping, across, around, or inside another shape. Spirographs are one example. Another quite beautiful example can be found on the Wiki page for "Interpolation" (https://en.wikipedia.org/wiki/Interpolation).
2) There is such a thing as vegan chocolate sorbet. And it is delicious.
3) A couple of different algorithms for nonlinear optimization without gradient information. See, imagine you have a really complicated function of a bunch of variables (say, the function that takes the number of hours you spend on different activities and spits out your grade in a class) and you want to find the parameters that minimize (or, in this case, maximize) its value. In general, this is a pretty difficult problem, but there are a bunch of nifty algorithms for it. Most of the ones I was familiar with involve starting with a guess and then iteratively moving some distance in the direction of steepest slope -- imagine finding the lowest point in a valley by starting somewhere random, turning in the direction of steepest downward slope, taking ten steps that way, and repeating over and over (warning: do NOT try this at home).
The problem is that this kind of technique requires knowing the direction of steepest descent. Usually that's found by calculating the derivative (or, in multiple dimensions, the gradient -- conceptually the same thing) of the function at the current point. Sometimes, though, as in the example above, you *can't* take the derivative of the function.
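For contrast, the derivative-based approach is easy to sketch when you *can* write the gradient down. Here's plain gradient descent on a made-up two-variable bowl function (the function, step size, and iteration count are all just illustrative):

```python
# Plain gradient descent on a made-up bowl-shaped function
# f(x, y) = (x - 1)^2 + 4*(y + 2)^2, whose gradient we can write by hand.

def grad(x, y):
    # Analytic gradient of f: (df/dx, df/dy)
    return 2 * (x - 1), 8 * (y + 2)

x, y = 5.0, 5.0      # start somewhere random-ish
step = 0.1           # fixed step size, purely illustrative
for _ in range(200):
    gx, gy = grad(x, y)
    x -= step * gx   # move against the gradient, i.e. downhill
    y -= step * gy

# x, y end up very near the true minimum at (1, -2)
```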
There are a number of popular solutions to this problem that don't require explicit calculations of the gradient. You could sample the space around you in every direction to try to approximate the derivative, but that's pretty inefficient.
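That sampling idea can at least be written down compactly. This sketch approximates the gradient with forward differences on a toy two-variable function (the function and the step size h are made up for illustration); note the cost of one extra function evaluation per dimension every time you want a gradient:

```python
# Approximating a gradient by finite differences: nudge each variable a
# little and see how much the function value changes.

def approx_grad(f, point, h=1e-6):
    base = f(point)
    grad = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += h                      # nudge one coordinate
        grad.append((f(bumped) - base) / h)  # rise over run
    return grad

# Toy function with minimum at (1, -2); its true gradient at (0, 0) is (-2, 16)
f = lambda p: (p[0] - 1) ** 2 + 4 * (p[1] + 2) ** 2
g = approx_grad(f, [0.0, 0.0])
```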
There's another method called simulated annealing where you take a bunch of guesses, then let them jiggle around with some "temperature" (a probability of accepting worse values of the function) which you then slowly turn down. The idea is that the points initially scatter about randomly, then settle down into local minima. When you compare the values of those local minima at the end, the best one is, with any luck, the global minimum -- the method doesn't guarantee it, but it does well in practice.
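A minimal sketch of the annealing idea, with a bumpy 1-D test function, cooling schedule, and proposal width that are all made up for illustration:

```python
import math
import random

def f(x):
    return x * x + 10 * math.sin(x)   # bumpy; global minimum near x = -1.3

def anneal(seed):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)          # random starting guess
    best = x
    temp = 10.0
    while temp > 1e-3:
        candidate = x + rng.gauss(0, 1)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temp), which shrinks as temp drops.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= 0.99                  # slowly turn the temperature down
    return best

# Several independent "guesses", keeping the best final answer.
best = min((anneal(s) for s in range(5)), key=f)
```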
Then there are the simplex methods (not to be confused with the completely unrelated simplex method of *linear* optimization), where you start with a simplex (basically n+1 points scattered evenly around a point in n dimensions -- a triangle in 2D or a tetrahedron, i.e. a triangular pyramid, in 3D) centered on an initial guess. You check the values of the function at every vertex, pick the *worst* vertex, draw a line from that vertex through the opposite face of the simplex and some distance out the other side, and search very coarsely along that line for a better function value. You do this a bunch of times, and the simplex will ooze its way around the function landscape until it shrinks down to what it thinks is a good point. You can visualize this by laying out a triangle (your simplex) on a 2D landscape (heavily crumpled blankets are pretty good for this), then applying the algorithm described above. Alternatively you can ogle this graphic from wiki's page on the Nelder-Mead Simplex algorithm: http://tinyurl.com/o9sa5h4.
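Here's a stripped-down sketch of the simplex idea: just the reflect-the-worst-vertex move described above, plus a shrink toward the best vertex when reflection fails. (Real Nelder-Mead adds expansion and contraction moves too.) The test function is a made-up 2-D bowl with its minimum at (1, -2):

```python
def simplex_min(f, simplex, iters=200):
    for _ in range(iters):
        simplex.sort(key=f)                      # best first, worst last
        worst = simplex[-1]
        n = len(simplex) - 1
        # Centroid of the face opposite the worst vertex
        centroid = [sum(v[i] for v in simplex[:-1]) / n
                    for i in range(len(worst))]
        # Reflect the worst vertex through that face
        reflected = [2 * c - w for c, w in zip(centroid, worst)]
        if f(reflected) < f(worst):
            simplex[-1] = reflected              # the simplex oozes along
        else:
            # Reflection didn't help: shrink everything toward the best vertex
            best = simplex[0]
            simplex = [[(v[i] + best[i]) / 2 for i in range(len(v))]
                       for v in simplex]
    return min(simplex, key=f)

f = lambda p: (p[0] - 1) ** 2 + 4 * (p[1] + 2) ** 2
x, y = simplex_min(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```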