
"Is consciousness fractal?"


On a warm September evening in 2002, two men attacked a middle-aged furniture salesman named Jason Padgett from behind as he left a karaoke bar, knocking him unconscious. When he came to, he found that the blows he’d sustained had left him with a severe concussion, post-traumatic stress disorder, and, quite literally, a new worldview. All around him, he claimed, familiar scenes now appeared as discrete geometric patterns—as shapes that under re-scaling maintained some semblance of themselves. He saw fractals everywhere: in trees and clouds, in drops of water, in the number pi. “Geometrical blueprints,” as he called them, were superimposed over his vision.

Padgett’s astonishing new worldview drew the attention of a team of neuroscientists, who scanned his brain to determine which regions were responsible for his newly acquired synesthesia. But in a sense, the transformation may simply have revealed an underlying bias toward fractal visual processing in all of us. The physicist Richard Taylor believes we have evolved to be efficient interpreters of the fractals that surround us in nature—from lightning and waterfalls to the spiral arms of the Milky Way. Our bodies exploit fractal networks to maximize surface areas and to distribute oxygen, cells, and signals: blood vessels branch out like root systems; the brain houses folds within folds. According to Taylor, this fractal-rich environment means we don’t simply enjoy looking at fractals—we are built to process them effortlessly, and may even need to look at them.


"When machines justify knowledge"


Our machines now are letting us see that even if the rules the universe plays by are not all that much more complicated than Go’s, the interplay of everything all at once makes the place more contingent than Aristotle, Newton, Einstein, or even some Chaos theorists thought. It only looked orderly because our instruments were gross, because our conception of knowledge imposes order by simplifying matters until we find it, and because our needs were satisfied with approximations.


Cormac McCarthy - "The Kekulé Problem"


Cormac McCarthy is best known to the world as a writer of novels, including Blood Meridian, All the Pretty Horses, No Country for Old Men, and The Road. At the Santa Fe Institute (SFI) he is a research colleague, thought of in complementary terms: an aficionado of subjects ranging from the history of mathematics and philosophical arguments about the status of quantum mechanics as a causal theory to comparative evidence bearing on non-human intelligence and the nature of the conscious and unconscious mind. At SFI we have been searching for the expression of these scientific interests in his novels, and we maintain a furtive tally of their covert manifestations and demonstrations in his prose.

Over the last two decades Cormac and I have been discussing the puzzles and paradoxes of the unconscious mind. Foremost among them is the fact that the very recent and “uniquely” human capability of near-infinite expressive power, arising through a combinatorial grammar, is built on the foundations of a far more ancient animal brain. How have these two evolutionary systems become reconciled? Cormac expresses this tension as the deep suspicion, perhaps even contempt, that the primeval unconscious feels toward the upstart, conscious language. In this article Cormac explores this idea through the processes of dream and infection. It is a discerning and wide-ranging exploration of ideas and challenges that our research community has only recently dared to start addressing through complexity science.


This is some cosmic shit right here.




Order from Chaos


A theoretical soft condensed matter physicist by training who now heads a thriving 33-person research group spanning three departments at the University of Michigan in Ann Arbor, Glotzer uses computer simulations to study emergence — the phenomenon whereby simple objects give rise to surprising collective behaviors. “When flocks of starlings make these incredible patterns in the sky that look like they’re not even real, the way they’re changing constantly — people have been seeing those patterns since people were on the planet,” she said. “But only recently have scientists started to ask the question, how do they do that? How are the birds communicating so that it seems like they’re all following a blueprint?”


A more recent “wow” moment occurred in 2009, when Glotzer and her group at Michigan discovered that entropy, a concept commonly conflated with disorder, can actually organize things. Their simulations showed that entropy drives simple pyramidal shapes called tetrahedra to spontaneously assemble into a quasicrystal — a spatial pattern so complex that it never exactly repeats. The discovery was the first indication of the powerful, paradoxical role that entropy plays in the emergence of complexity and order.





"Sloppiness and Emergent Theories in Physics, Biology, and Beyond"


Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are “sloppy”: their behavior is controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher Information Matrix, which we interpret as a Riemannian metric on a parameterized space of models; distance in this space measures how distinguishable two models are by their predictions. Sloppy model manifolds are bounded, with a hierarchy of widths and extrinsic curvatures. We show how the manifold boundary approximation can extract the simple, hidden theory from a complicated sloppy model. We attribute the success of simple effective models in physics to the same mechanism: they, too, emerge from complicated processes exhibiting a low effective dimensionality. We discuss the consequences of sloppiness for biochemistry and for science more generally, and suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
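As a minimal illustration of the sloppiness the abstract describes, the sketch below builds the Fisher Information Matrix for a toy sum-of-two-exponentials model (a standard sloppy example; the parameter values and time grid are invented for illustration) and shows its eigenvalues splitting into one large "stiff" direction and one tiny "sloppy" one:

```python
import numpy as np

# Toy sloppy model: y(t) = exp(-a*t) + exp(-b*t), observed on a time grid.
# When a and b are close, the two parameters are hard to tell apart.
def residual_jacobian(a, b, ts):
    # dy/da = -t * exp(-a*t),  dy/db = -t * exp(-b*t)
    return np.column_stack([-ts * np.exp(-a * ts), -ts * np.exp(-b * ts)])

ts = np.linspace(0.1, 5.0, 50)
a, b = 1.0, 1.2                  # nearly degenerate decay rates (illustrative)
J = residual_jacobian(a, b, ts)
fim = J.T @ J                    # Fisher Information Matrix for unit-noise least squares

eigvals = np.sort(np.linalg.eigvalsh(fim))[::-1]
print(eigvals)                   # one stiff and one sloppy eigenvalue
print(eigvals[0] / eigvals[1])   # the eigenvalue spread is what makes the model "sloppy"
```

The stiff eigenvector corresponds to the parameter combination the data actually constrain (roughly the average decay rate); the sloppy one (roughly the difference) barely changes the predictions.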


"At the cosmic dinner party, intelligence is the loudest thing in the room."




Through the study of animal communication, my colleagues and I have developed a new kind of detector, a “communication intelligence” filter, to determine whether a signal from space is from a technologically advanced civilization or not. Most previous SETI (Search for Extraterrestrial Intelligence) efforts have looked for radio transmissions with a narrow band of frequencies or for optical signals that blink very rapidly. From what we know about astrophysics, such transmissions would be clearly artificial, and their discovery would indicate technology capable of transmitting a signal over interstellar distances. SETI efforts generally throw away wideband radio signals and slower optical pulses, whose provenance is less obvious. Although those signals might well be from intelligent beings, they might also originate in natural sources of radio waves, such as interstellar gas clouds, and we have lacked a good way to tell the difference.


One aspect of human linguistics that emerged from early statistical studies of letters, words, and phonemes is known as Zipf’s Law, after the Harvard University linguist George Zipf. In English text, there are more e’s than t’s, more t’s than a’s, and so on, down to the least frequent letter, “q.” If one lists the letters from “e” to “q” in descending order of frequency and plots their frequencies on a log-log graph, one can fit the values with a straight line of slope –1 (a descending 45-degree line). If one does the same thing with text made up of Chinese characters, one also gets a –1 slope. And the same is true of the letters, words, or phonemes of a conversation in Japanese, German, Hindi, and dozens of other languages. Baby babbling does not obey Zipf’s Law: its slope is shallower than –1 because the sounds spill out nearly at random. But as children learn their language, the slope gradually steepens, reaching –1 by about the age of 24 months.
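The Zipf analysis described above amounts to fitting a straight line to log(frequency) versus log(rank). A minimal sketch of that fit (the helper name and toy corpus are invented; the fitted slope only approaches –1 for realistically large texts):

```python
import numpy as np
from collections import Counter

def zipf_slope(text):
    """Least-squares slope of log(frequency) vs. log(rank) for letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = sorted(Counter(letters).values(), reverse=True)
    ranks = np.arange(1, len(counts) + 1)
    # Fit a line on the log-log plot; a slope near -1 is the Zipf signature.
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope

# Toy corpus: repetition changes the counts but not the shape of the curve.
sample = (
    "if one lists the letters in descending order of frequency and plots "
    "their frequencies on a log log graph one can fit the values with a "
    "line of slope minus one "
) * 50
print(round(zipf_slope(sample), 2))
```

The same function applied to word frequencies, phonemes, or (in principle) dolphin whistle types gives the cross-system comparison the article relies on.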


Most linguists used to suppose that Zipf’s Law was a characteristic of human languages only. So we were quite excited to find, upon plotting the frequency of occurrence of adult bottlenose-dolphin whistles, that they, too, obeyed Zipf’s Law! Later, when two baby bottlenose dolphins were born at Marine World in California, we recorded their infant whistles and discovered that they had the same Zipf’s Law slope as baby human babbling. Thus baby dolphins babble their whistles and have to learn their communication system in a way not dissimilar from the way baby humans learn their languages. By the time the dolphins reached the age of 12 months, the frequency-of-occurrence distribution of their whistles had reached a –1 slope as well.


As a test of our approach’s ability to separate astrophysics from an intelligent signal, we turned to an example from radio astronomy. When pulsars were discovered by the astronomers Jocelyn Bell Burnell and Antony Hewish in 1967, they were dubbed “LGMs,” for “little green men.” Because these radio sources pulsed so regularly, some scientists initially speculated that they could be the beacons of very advanced extraterrestrials. So we re-analyzed the pulses from the Vela Pulsar with the help of Simon Johnston of the Australia Telescope National Facility and obtained a Zipf slope for the pulsar signals of about –0.3, inconsistent with any language as we know it. In addition, we found little or no conditional probabilistic structure within the pulsar signals. And indeed pulsars are now known to be natural remnants of stellar supernovae. Information theory could thus easily distinguish between a putative intelligent signal and a natural source.
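The "conditional probabilistic structure" mentioned above can be estimated as a conditional entropy: in a structured signal, knowing the current symbol sharply narrows down the next one, while in a random source conditioning buys nothing. A toy sketch with two-symbol sequences (the sequences and function name are invented for illustration, not the authors' actual pipeline):

```python
import math
import random
from collections import Counter

def conditional_entropy(seq):
    """H(next | current) in bits: how uncertain the next symbol is
    once the current one is known."""
    pairs = Counter(zip(seq, seq[1:]))
    contexts = Counter(seq[:-1])
    total = len(seq) - 1
    h = 0.0
    for (a, b), n in pairs.items():
        p_pair = n / total        # joint probability of the pair (a, b)
        p_next = n / contexts[a]  # probability of b given a
        h -= p_pair * math.log2(p_next)
    return h

structured = list("ab" * 500)               # strong pairwise dependencies
random.seed(0)
noise = [random.choice("ab") for _ in range(1000)]

print(conditional_entropy(structured))      # 0.0 bits: next symbol fully determined
print(conditional_entropy(noise))           # close to 1 bit: conditioning doesn't help
```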




"On the Self-Organizing Origins of Agency"





"It seems likely that agency emerges even earlier than 3 months. One might argue generally that the human is a self-organizing system from the moment of conception, through embryogenesis, the post-natal period onward to the infant stage, and beyond. My interest, however, just as in early quantitative studies of movement coordination, is in establishing empirically and theoretically whether the concept of self-organization is even relevant to agency or end-directedness and, if it is, to identify the self-organizing dynamics in a concrete situation. Certainly, early studies showed that 2-day-old infants (in a state of ‘quiet alertness’) engage in more frequent sucking bursts to their own mother's voice reading Dr Seuss's ‘And to Think That I Saw It on Mulberry Street’ over that of another female. A main focus of such research is to investigate how effective the fetal auditory system is in detecting and responding to the maternal spoken voice. Evidence of differential sensitivity to the mother's voice occurs very early in life, even prenatally. What is less emphasized (and again not measured) is that the infant's sucking produces the mother's voice and, as in the baby–mobile case, the mother's voice causes the baby to suck more. Two-day-old infants, in fact, do work to produce their mother's voices in preference to other mothers' voices or acoustic stimuli. Whereas the neonate's preference for the mother's voice suggests a role in infant bonding, these data are also highly consistent with the theory here, namely that the basis of agency is making something happen in the world. And making some things happen is more important than others."


"A New Physics Theory of Life"


From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life. 

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”




"Is technology making the world indecipherable?"


A number of years ago, a team of research scientists tried to improve the design of a certain kind of computer circuit. They created a simple task for the circuit to solve and then tried to evolve a potential solution. After many generations, the team eventually found a successful circuit design. But here’s the interesting part: some components were disconnected from the main circuit, yet were essential to its function. Essentially, the evolutionary program had taken advantage of weird physical and electromagnetic phenomena that no engineer would ever think of using to make the circuit complete its task. In the words of the researchers: ‘Evolution was able to exploit this physical behaviour, even though it would be difficult to analyse.’
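The evolutionary search described above can be sketched, in heavily simplified form, as a generic mutate-and-select loop over candidate bitstrings. Everything here (the target behavior, fitness function, and parameters) is a toy stand-in, not the researchers' actual hardware-evolution setup:

```python
import random

random.seed(1)

# Toy stand-in for "circuit behaves correctly": match a target bit pattern.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Score: how many bits match the desired behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                    # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

The point of the anecdote is precisely that a loop like this optimizes the score, not human legibility: in real hardware, whatever physics helps the score gets kept, disconnected components and all.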

This evolutionary technique yielded a novel technological system, one that we have difficulty understanding, because we would never have come up with something like it on our own. In chess, a realm where computers are more powerful than humans and have the ability to win in ways that the human mind can’t always understand, these types of solutions are known as ‘computer moves’ — the moves that no human would ever make, the ones that are ugly but still get results. As the American economist Tyler Cowen noted in his book Average Is Over (2013), these types of moves often seem wrong, but they are very effective. Computers have exposed the fact that chess, at least when played at the highest levels, is too complicated, with too many moving parts, for a person — even a grandmaster — to understand.


While we can’t actually control the weather or understand it in all of its nonlinear details, we can predict it reasonably well, adapt to it, and even prepare for it. And when the elements deliver us something unexpected, we muddle through as best as we can. So, just as we have weather models, we can begin to make models of our technological systems, even somewhat simplified ones. Playing with a simulation of the system we’re interested in — testing its limits and fiddling with its parameters, rather than understanding it completely — can be a powerful path to insight, and is a skill that needs cultivation.
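A tiny example of this "test its limits and fiddle with its parameters" style of inquiry, using the logistic map as a stand-in for an opaque system (the function name and parameter values are illustrative): sweeping one control parameter reveals qualitatively different regimes (a fixed point, an oscillation, chaos) without any closed-form analysis.

```python
# Probe a system by simulation: iterate the logistic map x -> r*x*(1-x),
# discard a transient, and count the distinct long-run states.
def long_run_states(r, x0=0.5, burn_in=500, keep=100):
    x = x0
    for _ in range(burn_in):          # let transients die out
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):             # record the attractor
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

for r in (2.8, 3.2, 3.9):
    print(f"r={r}: {len(long_run_states(r))} distinct long-run state(s)")
```

One short sweep already maps out where the model is tame and where it is not, which is exactly the kind of insight the paragraph argues we should settle for.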


We also need interpreters of what’s going on in these systems, a bit like TV meteorologists. Near the end of Average Is Over, Cowen speculates about these future interpreters. He says they ‘will hone their skills of seeking out, absorbing, and evaluating this information… They will be translators of the truths coming out of our networks of machines… At least for a while, they will be the only people left who will have a clear notion of what is going on.’




