"OH WATERS, TEEM WITH MEDICINE TO KEEP MY BODY SAFE FROM HARM, SO THAT I MAY LONG SEE THE SUN." - Rig Veda
I was drawing a windmill onto the fog in the mirror after a shower, when I thought, why am I drawing a windmill onto the fog in the mirror? Then I answered, I’m drawing a windmill because it is a metaphor for rain. Next, I wiped the windmill off the mirror with my towel, and got ready for work. I work at a windmill farm 45 miles east of Los Angeles. The job consists mostly of staring at windmills. Mondays we meditate under the windmills, the company brings in a yogi with a PhD in philosophy. Tuesdays are Texas hold ’em Tuesdays. Wednesdays we stare at the windmills with absolute fear. Thursdays we check for mechanical failures and other duties as necessary. Fridays, Fridays we wipe the windmill blades clean of flies and mosquito guts. Then on weekends I watch boxing and long for the swooshing sounds of the windmill farm. Weekends are tough, but it’s only two days.
Large-scale models of physical phenomena demand new statistical and computational tools in order to be effective. Many such models are 'sloppy': their behavior is controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. The formalism is based on the Fisher Information Matrix, which we interpret as a Riemannian metric on a parameterized space of models. Distance in this space measures how distinguishable two models are by their predictions. Sloppy model manifolds are bounded, with a hierarchy of widths and extrinsic curvatures. We show how the manifold boundary approximation can extract the simple, hidden theory from a complicated sloppy model. We attribute the success of simple effective models in physics to the same phenomenon: complicated processes exhibiting a low effective dimensionality. We discuss the ramifications of sloppy models for biochemistry and for science more generally, and suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
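The sloppiness described above can be made concrete with a small numerical sketch. The snippet below is not from the work summarized here; it is an illustrative toy model (a sum of two exponential decays with nearly degenerate rates, a standard example in the sloppy-models literature) whose parameter values are assumptions chosen for demonstration. It estimates the Fisher Information Matrix for a least-squares fit as J^T J, with the Jacobian J computed by finite differences, and shows that the FIM eigenvalues are widely separated: the hallmark of a sloppy model.

```python
import numpy as np

# Toy "sloppy" model: a sum of two exponential decays. The model form and
# parameter values are illustrative assumptions, not taken from the text.
def model(theta, t):
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fisher_information(theta, t, eps=1e-6):
    """FIM = J^T J for a least-squares fit with unit noise, where the
    Jacobian J is estimated by central finite differences."""
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0.0, 5.0, 50)
theta = np.array([1.0, 1.2])      # nearly degenerate decay rates
fim = fisher_information(theta, t)
eigvals = np.linalg.eigvalsh(fim)  # ascending order
print(eigvals)
```

The stiff direction (increasing both rates together) carries a large eigenvalue, while the sloppy direction (trading one rate off against the other) carries a much smaller one, so the eigenvalue ratio spans orders of magnitude. Interpreted as a Riemannian metric, the FIM says that moving along the sloppy direction barely changes the model's predictions.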