
"Why our imagination for alien life is so impoverished"




For as long as scientists have looked for alien life, they have conceived it in our own image. The quest arguably began with a 1959 Nature paper by the physicists Giuseppe Cocconi and Philip Morrison, who argued that ‘near some star rather like the Sun there are civilisations with scientific interests and with technical possibilities much greater than those now available to us’. The two scientists further posited that such aliens would have ‘established a channel of communication that would one day become known to us’. Such alien signals would most likely take the form of shortwave radio, which is ubiquitous through the Universe, and would contain an obviously artificial message such as ‘a sequence of small prime numbers of pulses, or simple arithmetical sums’.
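The ‘sequence of small prime numbers of pulses’ test is easy to make concrete. Below is a toy sketch (the function names are my own invention, not anything from the Nature paper): a received signal whose pulse counts run through the consecutive small primes gets flagged as artificial, since no known natural process emits them in order.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def looks_artificial(pulse_counts, min_length=4):
    """Crude beacon test: does the signal count out the consecutive primes?

    A run of 2, 3, 5, 7, 11... pulses has no known natural source, which
    is exactly why Cocconi and Morrison proposed it as a message.
    """
    if len(pulse_counts) < min_length:
        return False
    primes, k = [], 2
    while len(primes) < len(pulse_counts):
        if is_prime(k):
            primes.append(k)
        k += 1
    return list(pulse_counts) == primes
```

The `min_length` cutoff is just a guard against short coincidences; a real detector would of course need far more statistical care.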

Nothing in this suggestion was unreasonable, but it’s self-evidently the result of two smart scientists asking: ‘What would we do?’ Cocconi and Morrison’s proposal to look for familiar types of signals, coming from familiar types of technology, has heavily conditioned the search for extraterrestrial intelligence (SETI) ever since. Today, the Harvard astronomer Avi Loeb thinks it might be good to look for spectroscopic signatures of chlorofluorocarbons (CFCs) in the atmospheres of alien planets, apparently in the conviction that aliens have fridges like ours (or perhaps they’re just crazy about hairspray). Other scientists have proposed finding aliens by looking for their light-polluting cities; their starship Enterprise-style antimatter drives; or the radiation flashes from extraterrestrial nuclear war. It all sounds dreadfully… human.

The obvious defence is that, if you’re going to bother with SETI at all, you have to start somewhere. That we have the urge to search for life elsewhere probably owes something to our natural instincts to explore our environment and to propagate our kind. If – and this does seem rather likely – all complex life in the Universe originated through a competitive Darwinian evolutionary process, isn’t it reasonable to imagine that it will have evolved to be curious and expansionist? Then again, not all human societies seem intent on spreading beyond the village, and whether Darwinian selection will continue to be the predominant shaping force on humanity over the next millennium (never mind a million years) is anyone’s guess.





"Can a Living Creature Be as Big as a Galaxy?"



The classic Charles and Ray Eames short film Powers of Ten was made nearly four decades ago, but its influence has been profound. It can be connected, for example, to the rise of order-of-magnitude estimation as a standard aspect of the scientific curriculum, and it is the direct inspiration for the design of software mapping applications such as Google Earth.

The impact of Powers of Ten is heightened by the startling symmetry between the narrative of the inward sweep (in which the viewer descends inward in scale from a picnic on the Chicago lakefront to the sub-nuclear scale) and the arc of the outward sweep (in which the view pulls increasingly rapidly away to set the Earth and its contents into the grand scale of the Cosmos).

Were we just lucky, as sentient beings, to be able to sweep out in both directions, and examine the scales of the universe both large and small? Probably not.





"Academic Drivel Report"


Six years ago I submitted a paper for a panel, “On the Absence of Absences”, that was to be part of an academic conference later that year—in August 2010. Then, as now, I had no idea what the phrase “absence of absences” meant. The description provided by the panel organizers, printed below, did not help. The summary, or abstract, of the proposed paper was pure gibberish, as you can see below. I tried, as best I could within the limits of my own vocabulary, to write something that had many big words but which made no sense whatsoever. I not only wanted to see if I could fool the panel organizers and get my paper accepted; I also wanted to pull the curtain on the absurd pretensions of some segments of academic life. To my astonishment, the two panel organizers—both American sociologists—accepted my proposal and invited me to join them at the annual international conference of the Society for Social Studies of Science, to be held that year in Tokyo.

I am not the first academic to engage in this kind of hoax. In 1996, in a well-known incident, NYU physicist Alan Sokal pulled the wool over the eyes of the editors of Social Text, a postmodern cultural studies journal. He submitted an article filled with gobbledygook to see if they would, in his words, “publish an article liberally salted with nonsense if it (a) sounded good and (b) flattered the editors' ideological preconceptions.” His article, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” (published in the Spring/Summer 1996 issue), shorn of its intentionally outrageous jargon, essentially made the claim that gravity was in the mind of the beholder. Sokal’s intent was not simply to pull a fast one on the editors, but to challenge the increasingly popular “post-modern” view that there are no real facts, just points of view. His paper made the bogus case that gravity, too, was a “social construction.” As soon as it was published, Sokal fessed up in another journal (Lingua Franca, May 1996), revealing that his article was a sham, describing it as “a pastiche of Left-wing cant, fawning references, grandiose quotations, and outright nonsense … structured around the silliest quotations [by postmodernist academics] he could find about mathematics and physics.”


Professor Bauchspies soon sent me another email listing the other papers that had been accepted for the panel. They had the following titles:

· “Agnotology and Privatives: Parsing Kinds of Ignorances and Absences in Systems of Knowledge Production”

· “Science, Ignorance, and Secrecy: Making Absences Productive”

· “Alter-Ontologies: Justice and the Living World”

· “The Motility of the Ethical in Bioscience: The Case of Care in Anti-ageing”

· “Mapping Environmental Knowledge Gaps in Post-Katrina New Orleans: A Study of the Social Production of Ignorance”

· “The Absence of Science and Technology Equals Development?”





"The Half-Life of Certainty"



On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties. The drugs, sold under brand names such as Abilify, Seroquel, and Zyprexa, had been tested on schizophrenics in several large clinical trials, all of which had demonstrated a dramatic decrease in the subjects’ psychiatric symptoms. As a result, second-generation antipsychotics had become one of the fastest-growing and most profitable pharmaceutical classes. By 2001, Eli Lilly’s Zyprexa was generating more revenue than Prozac. It remains the company’s top-selling drug.

But the data presented at the Brussels meeting made it clear that something strange was happening: the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.


The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time! Hell, it’s happened to me multiple times.”


For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts. Richard Palmer, a biologist at the University of Alberta, who has studied the problems surrounding fluctuating asymmetry, suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.

The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
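Palmer's funnel logic can be sketched in a few lines of simulation. Everything below is illustrative: made-up sample sizes and a true effect fixed at zero. The sketch shows both the funnel shape (small studies scatter more than large ones) and how selective reporting of positive small-sample results manufactures an effect that was never there.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.0  # assume the effect under study is actually nil

def study_estimate(n):
    # One study's estimated effect; sampling error shrinks as 1/sqrt(n)
    return TRUE_EFFECT + random.gauss(0.0, 1.0) / n ** 0.5

sample_sizes = [10, 20, 50, 100, 500, 1000] * 200
studies = [(n, study_estimate(n)) for n in sample_sizes]

small = [e for n, e in studies if n <= 20]    # wide mouth of the funnel
large = [e for n, e in studies if n >= 500]   # narrow tip of the funnel

spread_small = statistics.stdev(small)
spread_large = statistics.stdev(large)

# Selective reporting: if only positive small-sample results get written
# up, the "published" record acquires a spurious positive effect.
published_small = [e for e in small if e > 0]
reporting_bias = statistics.mean(published_small)
```

Plotting effect estimate against sample size for `studies` gives the funnel; filtering to `published_small` is what skews the mouth of the funnel toward positive results, exactly the asymmetry Palmer found in the fluctuating-asymmetry literature.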


"Moore’s Law and the Origin of Life"



Here’s an interesting idea. Moore’s Law states that the number of transistors on an integrated circuit doubles every two years or so. That doubling has produced an exponential increase in the number of transistors on microchips, and it continues to do so.

But if an observer today were to measure this rate of increase, it would be straightforward to extrapolate backwards and work out when the number of transistors on a chip was zero. In other words, the date when microchips were first developed, in the 1960s.

A similar process works with scientific publications. Between 1960 and 1990, they doubled in number every 15 years or so. Extrapolating this backwards gives the origin of scientific publication as 1710, about the time of Isaac Newton.

Today, Alexei Sharov at the National Institute on Aging in Baltimore and his mate Richard Gordon at the Gulf Specimen Marine Laboratory in Florida have taken a similar approach to complexity and life.

These guys argue that it’s possible to measure the complexity of life and the rate at which it has increased from prokaryotes to eukaryotes to more complex creatures such as worms, fish and finally mammals. That produces a clear exponential increase analogous to the one behind Moore’s Law, although in this case the doubling time is 376 million years rather than two years.

That raises an interesting question. What happens if you extrapolate backwards to the point of no complexity: the origin of life?

Sharov and Gordon say that the evidence by this measure is clear. “Linear regression of genetic complexity (on a log scale) extrapolated back to just one base pair suggests the time of the origin of life = 9.7 ± 2.5 billion years ago,” they say.
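The extrapolation itself is just a straight-line fit on a log scale. Here is a minimal sketch with illustrative order-of-magnitude complexity values for the five groups named above (my own rough numbers, not Sharov and Gordon's actual dataset): regressing log2(complexity) against time of appearance and solving for one base pair lands in the same multi-billion-year ballpark.

```python
import numpy as np

# Illustrative order-of-magnitude values, NOT Sharov & Gordon's data:
# rough functional genome complexity (base pairs) for prokaryotes,
# eukaryotes, worms, fish and mammals, vs. time of appearance (Gyr ago).
ages_gya = np.array([3.5, 2.0, 1.0, 0.5, 0.1])
complexity_bp = np.array([5e5, 3e6, 1e8, 3e8, 3e9])

# Linear regression of log2(complexity) against time, Moore's-law style.
# The x axis runs forward in time, so past ages are negative.
slope, intercept = np.polyfit(-ages_gya, np.log2(complexity_bp), 1)

doubling_time_myr = 1000.0 / slope   # slope is doublings per Gyr
origin_gya = intercept / slope       # where the fit crosses 1 base pair

print(f"doubling time ~{doubling_time_myr:.0f} Myr, "
      f"origin ~{origin_gya:.1f} billion years ago")
```

Even with crude inputs, the fitted origin comfortably predates the 4.5-billion-year-old Earth, which is the heart of Sharov and Gordon's provocation.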

And since the Earth is only 4.5 billion years old, that raises a whole series of other questions, not least of which is how and where life began.



"Earth's holy fool?"



So let us celebrate James Lovelock and Lynn Margulis. Not just as scientists — their very real contributions speak for themselves — but as people with true courage and integrity. They withstood the pressure of fellow scientists turning on them. It would have been easy to drop the whole thing and say that it was a flawed hypothesis — giving themselves credit for being good Popperian scientists. But they knew there was something there that needed explaining and they had the guts to stick with it — Lovelock particularly, but Margulis too. Were they ‘holy fools’? The very friendly, entirely British Lovelock is as far from being a character in a Dostoyevsky novel as I can imagine. But in a sense they were just that, and ultimately science benefits from their thinking. Scientifically respectable or not, the Gaia hypothesis has made our culture richer for its boldness. Fools rush in where angels fear to tread, and sometimes it is better to be a little foolish than to stick to the safe, angelic path.


The demon is the machine


Anyone whose resolve to exercise in 2013 is a bit shaky might want to consider an emerging scientific view of human evolution. It suggests that we are clever today in part because a million years ago, we could outrun and outwalk most other mammals over long distances. Our brains were shaped and sharpened by movement, the idea goes, and we continue to require regular physical activity in order for our brains to function optimally.


To explain our outsized brains, evolutionary scientists have pointed to such factors as meat eating and, perhaps most determinatively, our early ancestors’ need for social interaction. Early humans had to plan and execute hunts as a group, which required complicated thinking patterns and, it’s been thought, rewarded the social and brainy with evolutionary success. According to that hypothesis, the evolution of the brain was driven by the need to think.

But now some scientists are suggesting that physical activity also played a critical role in making our brains larger.

To reach that conclusion, anthropologists began by looking at existing data about brain size and endurance capacity in a variety of mammals, including dogs, guinea pigs, foxes, mice, wolves, rats, civet cats, antelope, mongooses, goats, sheep and elands. They found a notable pattern. Species like dogs and rats that had a high innate endurance capacity, which presumably had evolved over millenniums, also had large brain volumes relative to their body size.

The researchers also looked at recent experiments in which mice and rats were systematically bred to be marathon runners. Lab animals that willingly put in the most miles on running wheels were interbred, resulting in the creation of a line of lab animals that excelled at running.

Interestingly, after multiple generations, these animals began to develop innately high levels of substances that promote tissue growth and health, including a protein called brain-derived neurotrophic factor, or BDNF. These substances are important for endurance performance. They also are known to drive brain growth.



Reddit user turns our knowledge of the brain into a metaphor


To put this all into a metaphor:

The human brain is a trans-sonic plane, and the doctors studying it are engineers from 1900. They understand the visible effects: push (a lot of random) button(s and weird glow-y panels that fill with changing words and may or may not be the result of the devil), receive thrust, and on some planes that are broken they've managed to tear off an engine and fiddle around inside it, but the avionics equipment, what with using semi-conductors and microprocessors, is basically black-box witchcraft to them, and the engines themselves are pretty much nonsense.

They recognize the basic idea of how the engines work; combustion of a hydrocarbon compound that isn't totally alien to them and is orders of magnitude more pure than anything they have outside of labs, much less in the quantities they need to run it for an extended period. The actual principles of the jet engine (compression from forced intake, fuel-air ratios, carefully tuned gear ratios and intelligent onboard systems in the engine itself to detect failures, damage and atmospheric conditions) are totally beyond them, and every engine they dismount to try to figure out stops working after two, maybe three, ignition runs since they're fueling it with total crap and have nothing hooked up to the diagnostic outputs and control inputs. Even the fucking landing gear is lightyears ahead of them; tires of vulcanized rubber, shocks based on pneumatic and hydraulic systems created through complex computer models to handle, y'know, a whole goddamn fucking plane bouncing off them. Even the goddamn metal itself that the plane is made of is alien to them, partly because aluminum was worth more than gold until some time in the late 1800's, and partly because the metallurgical techniques we use to create aircraft alloys, especially for trans-sonic planes, are utterly impossible given their level of technology.

So, to tie it all together: While the plane is in a running state, the engineers can't (from their perspective, with their tools and methods of figuring out how things work) touch a single damn thing that matters without everything breaking and flashing red. When the plane is disassembled and/or broken, they can't get anything working again and as far as they're concerned every single fundamental principle behind what we know to be how the plane operates is totally fucking impossible (remember that they hadn't even discovered heavier-than-air flight at this point. The Wright brothers are still a ways off). Given a few decades or so, they'll eventually come to understand the principles behind some of the macro mechanical systems, and maybe even manage to mix up some fuel that will actually get the engine to do more than fail/explode, and at best even get an early start on powered flight in general. But actually replicating the plane itself is easily a generation or more out of their reach.


"Homosexual Necrophilia in the Mallard Duck"




On June 5, 1995, Kees Moeliker, the curator of the Natural History Museum of Rotterdam, heard a loud bang just outside of his office. He went over to the window and discovered that a drake mallard had hit one of the museum's windows at full speed and died. Moeliker observed another male mallard come over and start pecking at the dead duck's head. The live mallard then proceeded to mount the corpse and forcefully rape it. This activity went on for a full seventy-five minutes, during which time the perpetrator took only two short breaks. Moeliker documented the entire event by taking notes and photos from safely behind the museum's windows. When the necrophiliac mallard was finished, Moeliker secured the violated corpse and stashed it in a freezer for later examination.

I found this observational study fascinating on multiple levels. Of course, the fact that someone would watch a dead duck being raped for over an hour, not to mention take copious notes while doing so, is interesting in and of itself. But what was even more fascinating to me about this article was finding out that neither necrophilia nor homosexuality is all that rare in mallard ducks. In fact, scientists have previously observed male mallards attempting to mate with deceased females, and researchers estimate that up to 1 in 5 mallard duck pairs consist of homosexual males. It turns out that the only unique thing about this case was the combination of mallard necrophilia with homosexuality.


