If viruses can evolve within hours, computer code can do it within fractions of a second. Viruses are dumb; computers have processors that may someday surpass our own brains — some would say they already have. If we are going to take the risk of giving machines, in Lipson’s words, ‘so much freedom’, we need a good reason to do it. In Out of Control, Kelly proposes one possible reason. Perhaps, he says, the world has become such a complicated place that we have no choice but to enable the marriage between the biologic and the technologic; without it, the problems we face are too difficult for our human brains to solve. Kelly proposes a kind of Faustian pact: ‘The world of the made will soon be like the world of the born: autonomous, adaptable and creative but, consequently, out of our control. I think that’s a great bargain.’
One of HFT’s objectives has always been to make the market more efficient. Speed traders have done such an excellent job of wringing waste out of buying and selling stocks that they’re having a hard time making money themselves. The market also now lacks the two things HFT needs most: trading volume and price volatility. Compared with the deep, choppy waters of 2009 and 2010, the stock market is now a shallow, placid pool. Trading volumes in U.S. equities are around 6 billion shares a day, roughly where they were in 2006. Volatility, a measure of the extent to which a share’s price jumps around, is about half what it was a few years ago. By seeking out price disparities across assets and exchanges, speed traders ensure that when things do get out of whack, they’re quickly brought back into harmony. As a result, they tamp down volatility, suffocating their two most common strategies: market making and statistical arbitrage.
As profits have shrunk, more HFT firms are resorting to something called momentum trading. Using methods similar to those Swanson helped pioneer 25 years ago, momentum traders sense the way the market is going and bet big. It can be lucrative, but it comes with enormous risks. Other HFT firms are using sophisticated programs to analyze news wires and headlines to eke out their returns. A few are even scanning Twitter feeds, as evidenced by the sudden selloff that followed the Associated Press’s hacked Twitter account reporting explosions at the White House on April 23. In many ways, it was the best they could do.
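For a sense of the underlying logic, here is a minimal sketch of a naive momentum signal. The window and threshold are arbitrary illustrative choices, not anything a real HFT firm uses; actual systems operate on tick data in microseconds with far richer features and risk controls.

```python
# A naive momentum signal (illustrative sketch only).
def momentum_signal(prices, window=20, threshold=0.002):
    """Return +1 (buy), -1 (sell) or 0 (hold) from recent price drift."""
    if len(prices) < window:
        return 0
    drift = (prices[-1] - prices[-window]) / prices[-window]
    if drift > threshold:
        return 1     # market trending up: bet the trend continues
    if drift < -threshold:
        return -1    # market trending down: bet on further falls
    return 0

prices = [100 + 0.02 * t for t in range(30)]   # toy upward drift
print(momentum_signal(prices))                  # -> 1
```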
Presentation on the counterintuitive behavior and disproportionate causal effects of complex systems, illustrated using the example of restaurant dynamics determined by quality. The simulation is applied to Discrete Duty and Analogue Action.
We see these very simple programs, with very complex behavior. It makes one think that maybe there’s a simple program for our whole universe. And that even though physics seems to involve more and more complicated equations, that somewhere underneath it all there might just be a tiny little program. We don’t know if things work that way. But if out there in the computational universe of possible programs, the program for our universe is just sitting there waiting to be found, it seems embarrassing not to be looking for it.
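A concrete instance of such a program is Wolfram’s own canonical example, the Rule 30 cellular automaton (not named in the excerpt above): a one-line update rule whose output is complex enough to pass statistical randomness tests. A minimal sketch:

```python
# Rule 30: a one-dimensional cellular automaton whose trivially simple
# update rule (new cell = left XOR (center OR right)) produces highly
# complex, effectively random behavior.
def rule30(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # single black cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30(cells)
```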
I believe it’s no wonder that our world is in trouble. We currently lack the global systems science needed to understand our world, which is now changing more quickly than we can collect the experience required to cope with upcoming problems. Nor can we trust our intuition, since the complex systems we have created often behave in surprising, counter-intuitive ways. Frequently, their properties are determined not by their components but by their interactions. Therefore, a strongly coupled world behaves fundamentally differently from a weakly coupled world with independent decision-makers. Strong interactions tend to make the system uncontrollable: they create cascading effects and extreme events.
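As a toy illustration of why strong coupling breeds cascades (my sketch, not from the text): model each failure as triggering each of a component’s k neighbors with probability p. The expected number of induced failures per failure is p·k; cascades stay small below p·k = 1 and become system-spanning above it.

```python
import random

def cascade_size(n=10_000, k=4, p=0.2, trials=200, seed=1):
    """Average failure-cascade size under a branching-process model:
    each failure spawns Binomial(k, p) new failures. Subcritical if
    p*k < 1 (weak coupling), supercritical if p*k > 1 (strong coupling)."""
    random.seed(seed)
    sizes = []
    for _ in range(trials):
        size, frontier = 1, 1
        while frontier and size < n:   # cap at n to stop runaway cascades
            frontier = sum(1 for _ in range(frontier * k)
                           if random.random() < p)
            size += frontier
        sizes.append(size)
    return sum(sizes) / len(sizes)

print(cascade_size(p=0.15))  # p*k = 0.6: small, contained cascades
print(cascade_size(p=0.30))  # p*k = 1.2: system-spanning cascades
```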
As a consequence of the transition to a more and more strongly coupled world, we need to revisit the underlying assumptions of the currently prevailing economic thinking. In the following, I will discuss 10 widespread assertions, which would work in a perfect economic world with representative agents and uncorrelated decisions, where heterogeneity, decision errors, and time scales do not matter. However, they are apparently not well enough suited to depict the strongly interdependent, diverse, and quickly changing world, we are facing, and this has important implications. Therefore, we need to ‘think out of the box’ and require a paradigm shift towards a new economic thinking characterized by a systemic, interaction-oriented perspective inspired by knowledge about complex, ecological, and social systems. As Albert Einstein noted, long-standing problems are rarely solved within the dominating paradigm. However, a new perspective on old problems may enable new mitigation strategies.
In August 2011, several areas of London experienced episodes of large-scale disorder, comprising looting, rioting and violence. Much subsequent discourse has questioned the adequacy of the police response, in terms of the resources available and strategies used. In this article, we present a mathematical model of the spatial development of the disorder, which can be used to examine the effect of varying policing arrangements. The model is capable of simulating the general emergent patterns of the events and focuses on three fundamental aspects: the apparently contagious nature of participation; the distances travelled to riot locations; and the deterrent effect of policing. We demonstrate that the spatial configuration of London places some areas at naturally higher risk than others, highlighting the importance of spatial considerations when planning for such events. We also investigate the consequences of varying police numbers and reaction time, which has the potential to guide policy in this area.
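A heavily simplified sketch of two of the model’s three ingredients, distance decay and police deterrence, might look as follows. The functional forms and weights are invented for illustration and are not the paper’s calibration.

```python
import math, random

def choose_site(origin, sites, police):
    """Pick a riot site: attractiveness decays with distance from the
    rioter's origin and with police presence (illustrative weights)."""
    weights = [math.exp(-0.5 * abs(s - origin)) / (1.0 + police[s])
               for s in sites]
    r = random.uniform(0, sum(weights))
    for s, w in zip(sites, weights):
        r -= w
        if r <= 0:
            return s
    return sites[-1]

random.seed(0)
sites = list(range(10))                 # a toy 1-D city of 10 areas
police = {s: 0 for s in sites}
police[5] = 8                           # heavy deployment in area 5
rioters = {s: 0 for s in sites}
for _ in range(500):
    origin = random.randrange(10)       # where each rioter starts from
    rioters[choose_site(origin, sites, police)] += 1
print(rioters)                          # area 5 attracts far fewer rioters
```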
We can look at the history of technology as a human-driven, parallel experiment in evolution. So far artifacts are not capable of self-reproduction, but the population-level dynamics of long-term technological innovation nonetheless resemble biological evolution in many ways. The design of new technologies is strongly influenced by existing technologies, and technological change can be viewed as a process of descent with variation and selection [5-7]. Both chance and the appropriate context are required for innovations to occur. Lineages of design often show rapid change and diversification as well as exaptation. The latter is illustrated by Gutenberg’s printing press, where an existing technology (the screw press) was co-opted to serve a completely novel purpose. Extinction and replacement are also common: as soon as a genuinely novel invention appears, it is typically followed by an enormous diversification, followed by the extinction (and turnover) of most competing inventions. Moreover, technological change also displays convergence: similar discoveries are made simultaneously by different inventors, as with the more than 20 different patents involving light bulb inventions prior to Edison’s success. The view that technological evolution follows similar rules to biological evolution has captured the interest of scientists, historians and engineers alike.
Despite the commonalities, technological evolution departs from biological evolution in fundamental ways. In technological change, long-term goals and expectations play a leading role: designers seek optimality, typically under explicit criteria such as efficiency, cost and speed. Moreover, as pointed out by François Jacob, in contrast to artifacts, living structures are largely the result of tinkering, i.e. a widespread reuse and combination of available elements to build new structures.
Technology is highly dependent on the combination of preexisting inventions but, unlike in biology, the introduction of simple new elements can completely reset the path of future technologies. In biology, by contrast, once established, solutions to problems are seldom replaced.
Both biological and technological innovations involve cost constraints. Thermodynamics can also help explain the origin of some structures. Allometric scaling laws provide a good illustration: a theory of biological distribution networks (including both vascular and respiratory ones) can be derived from efficient energy dissipation on fractal trees. Efficiency has also been driving technological improvements, and it marks the development of the steam engine and the bicycle. The evolution of the latter can be traced as a succession of improvement steps towards increasing performance and lower metabolic cost. However, the coupling between energy costs and improvements is not a precondition for technological change to occur. Many examples illustrate a common pattern in the development of a given invention: in early stages, inventions are often overly expensive and not perceived as economically relevant, and the barrier to diffusion can only be overcome through the vision of individuals pursuing their views and goals.
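For concreteness, the textbook instance here (a standard result, not stated in the excerpt) is Kleiber’s law, which West, Brown and Enquist derived from energy-efficient transport on space-filling, fractal-like distribution networks:

```latex
% Kleiber's law: basal metabolic rate B scales sublinearly with body
% mass M; the 3/4 exponent emerges from minimally dissipative flow
% through fractal-like distribution networks.
B = B_0 \, M^{3/4}
```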
Is it possible to formulate a theory of technological evolution? How much can we take advantage of our theoretical understanding of biological evolution? Recent advances within network theory and the unique availability of a fossil record of human inventions might help in reaching that goal. Such a theory needs to consider the existence of universal trends, the economic context and history. We believe that a major effort in this direction would settle the debate on similarities versus differences.
Complexity is not so much a subject of research as a new way of looking at phenomena. It is inherently interdisciplinary, meaning that it derives its problems from the real world and its concepts and methods from all fields of science.
Complexity lies at the root of the most burning issues that face us every day, such as hunger, energy, water, health, climate, security, urbanization, sustainability, innovation, and the impact of technology.
Though Cristianini and his team didn't deploy the level of analysis Lim applied to presidential speech, they did run a general test of news media readability using Flesch scores, which assess the complexity of writing based on the length of words and sentences. Shorter, on this account, means simpler. Brevity doesn't always denote simplicity, but for general purposes Flesch scores tend to hit the mark: writing aimed specifically at children tends to score higher than, say, scholarship in the humanities, and these differences in readability tend to reflect differences in substantive content.
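The Flesch Reading Ease formula itself is public and easy to compute. The sketch below uses the standard coefficients; the crude vowel-group heuristic stands in for a proper syllable counter, so scores are approximate.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher = easier to read. The coefficients
    are the standard ones; syllables are estimated by vowel groups."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835
            - 1.015 * len(words) / sentences
            - 84.6 * syllables / len(words))

print(flesch_reading_ease("The cat sat on the mat."))   # high: easy
print(flesch_reading_ease(
    "Epistemological considerations notwithstanding, the "
    "methodological apparatus remains fundamentally contested."))  # low
```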
According to Cristianini et al.'s analysis of a subset of the media—eight leading newspapers from the US and seven from the UK; a total of 218,302 stories—The Guardian is considerably more complex a read than any of the other major publications, including The New York Times. Surprisingly, so is the Daily Mail, whose formula of "celebrity 'X' is happy/sad/disheveled/flirty/fat/pregnant…" would suggest a far simpler read.
Cristianini et al. also measured the percentage of adjectives expressing judgments, such as "terrible" and "wonderful," in order to assess each publication's degree of linguistic subjectivity. Not surprisingly, tabloid newspapers tended to be more subjective, while the Wall Street Journal, perhaps owing to its focus on business and finance, was the most linguistically objective. Despite The Guardian's and the Daily Mail's seemingly complex prose, the researchers found that, in general, readability and subjectivity tended to go hand in hand when they cross-referenced the most popular stories with writing styles. "While we cannot be sure about the causal factors at work here," they write, "our findings suggest the possibility, at least, that the language of hard news and dry factual reporting is as much a deterrent to readers and viewers as the content." When political reporting was 'Flesched' out, so to speak, it was the most complex genre of news to read, and one of the least subjective.
When I asked Lim what he thought of the study via email, this, he said, was the pattern that stood out. "This means that at least in terms of the items included in the dataset, the media is opinionated and subjective at the same time that it is rendering these judgments in simplistic, unsubtle terms. This is not an encouraging pattern in journalistic conventions, especially given that the public appears to endorse it (given the correlation between the popularity of a story, its readability, and subjectivity)."
That all this data mining points to the importance of style is just one of the delightful ways that Big Crit can challenge our assumptions about the way markets and consumers and the world work. Of course, we are still, in analytical terms, learning to scrawl. As Colleen Cotter—perhaps the only person to have switched from journalism to linguistics and then produced a deep linguistic study of the language of news—cautions, we need to be careful about reading too much into "readability."
"If 'readability,' is just a quantitative measure, like length of words or structure of sentences (ones without clauses)," she says via email, "then it's a somewhat artificial way of 'counting.' It doesn't take into account familiarity, or native or intuitive or colloquial understandings of words, phrases, and narrative structures (like news stories or recipes or shopping lists or country-western lyrics)." Nor do readability formulas take into account "the specialist or local audience," says Cotter, who is a Reader in Media Linguistics at Queen Mary University in London. "I remember wondering why we had to have bridge scores published in the Redding, CA, paper, or why we had to call grieving family members, and the managing editor's claims that people expect that."
There are other limits to algorithmic content analysis too, as Lichter notes. "A content analysis of Animal Farm can tell you what Animal Farm says about animals," he says. "But it can't tell you what it says about Stalinism."
We're beginning to come to grips with the idea that your brain is not this well-organized hierarchical control system where everything is in order, a very dramatic vision of bureaucracy. In fact, it's much more like anarchy with some elements of democracy. Sometimes you can achieve stability and mutual aid and a sort of calm united front, and then everything is hunky-dory, but then it's always possible for things to get out of whack and for one alliance or another to gain control, and then you get obsessions and delusions and so forth.
You begin to think about the normal well-tempered mind, in effect, the well-organized mind, as an achievement, not as the base state, something that is only achieved when all is going well, but still, in the general realm of humanity, most of us are pretty well put together most of the time. This gives a very different vision of what the architecture is like, and I'm just trying to get my head around how to think about that.
Where does it all come from? Where are we going? Are we alone in the universe? What is good and what is evil? The scientific narrative of cosmic evolution demands that we tackle such big questions with a cosmological perspective. I tackle the first question in Chapters 4, 5 and 6; the second in Chapters 7 and 8; the third in Chapter 9; and the fourth in Chapter 10. But where do we start if we are to answer such questions wisely? Doing so requires a methodological discipline mixing philosophical and scientific approaches.
In Chapter 1, I elaborate the concept of worldview, which is defined by our answers to the big questions. I argue that we should aim at constructing comprehensive and coherent worldviews. In Chapter 2, I develop criteria and tests to assess the relative strengths and weaknesses of different worldviews. In Chapter 3, I apply those methodological insights to religious, scientific and philosophical worldviews.
In Chapter 4, I identify seven fundamental challenges to any ultimate explanation of the origin of the universe: epistemological, metaphysical, thermodynamical, causal, infinity, free parameters and fine-tuning. I then turn the question of the origin of the universe upside down and ask: what are the origins of our cognitive need to find an explanation of this origin? I conclude that our explanations tend to fall into two cognitive attractors, the point and the cycle. In Chapter 5, I focus on the free parameters issue, namely that there are free parameters in the standard model of particle physics and in cosmological models which in principle can be filled in with any number. I analyze the issue within physical, mathematical, computational and biological frameworks.
Chapter 6 is an in-depth analysis of the fine-tuning issue, the claim that those free parameters are further fine-tuned for the emergence of complexity. I debunk common and uncommon physical, probabilistic and logical fallacies associated with this issue. I distinguish it from the closely related issues of free parameters, parameter sensitivity, metaphysical issues, anthropic principles, observational selection effects, teleology and God's existence. I conclude that fine-tuning is a conjecture, and that we need to study how common our universe is compared to other possible universes. This study opens a research endeavor that I call artificial cosmogenesis. Inspired by the Drake equation in the Search for Extraterrestrial Intelligence, I extend it to a Cosmic Evolution Equation in order to study the robustness of the emergence of complexity in our universe, and whether or to what extent it is fine-tuned. I then review eight classical explanations of fine-tuning (skepticism, necessity, fecundity, god-of-the-gaps, chance-of-the-gaps, weak-anthropic-principle-of-the-gaps, multiverse and design) and show their shortcomings.
In Chapter 7, I show the importance of artificial cosmogenesis by extrapolating the future of scientific simulations. In Chapter 8, I analyze two other evolutionary explanations of fine-tuning; more precisely, I show the limitations of Cosmological Natural Selection in order to motivate the broader scenario of Cosmological Artificial Selection.
In Chapter 9, I set up a new research field for the search for advanced extraterrestrials: high energy astrobiology. After developing criteria to distinguish natural from artificial systems, I show that the nature of some peculiar binary star systems needs to be reassessed, because thermodynamical, energetic and civilizational-development arguments converge towards their being advanced extraterrestrial life. Since those putative beings feed on stars, I call them starivores. The question of their artificiality remains open, but I propose concrete research projects, and a prize, to continue and motivate the scientific assessment of this hypothesis.
In Chapter 10, I explore foundations on which to build a cosmological ethics, drawing on insights from thermodynamics, evolution, and developmental theories. Finally, I examine the idea of immortality from a cosmological perspective and conclude that the ultimate good is the infinite continuation of the evolutionary process. Appendix I is a summary of my position, and Appendix II provides argumentative maps of the entire thesis.
Phase 1: Exploitation: A new organization of any kind focuses on making the best possible use of whatever resources it can appropriate to make its mission real. Although exploitation has a negative connotation, when you are scrambling for scarce resources to achieve good outcomes, you use the opportunities you find when you find them.
Gradually, your work produces more predictable use of resources, you develop a reputation for doing good, you build relationships in your sector’s stakeholder community, and you grow. You begin working to stabilize your organization more permanently and to build up resources you don’t need to use immediately.
Phase 2: Conservation: The organization begins to accumulate resources that are not immediately needed, but might be needed in the future, or for as-yet-undetermined purposes. The existence of those additional resources constitutes a base of power and control in the larger ecosystem of which the organization is a part. Conserving those resources (not just money, but also expertise and general experience in the mission) also requires maintenance, repairs, re-organizations, training, and other activities that are not directly related to the mission that drove you in the earlier phase, but are necessary to keep the now-substantial pile of resources you have built up for rainy days, reputation-building, and non-mission political purposes. Maintaining this complex system of resources causes internal mission drift and starts reducing your flexibility in responding to changes in your environment. Basically, your first response to environmental change (if you notice it) is to look to the preservation of your organization and its current inventory of resources (a kind of organizational narcissism). This cycle increases the fragility of your organization regardless of its apparent size or power.
Phase 3: Release: The cycle of conservation will eventually make the organization so fragile that some unanticipated disturbance in the environment will trigger a series of crises that will result in the disintegration of the organization from its conserved state. It will begin to lose its conserved resources. The usual response to the crises is defense of what is, as best as that can be done. The other alternative, of course, would be to completely rethink the organization from the ground up, but almost no one does this.
The release of previously conserved resources is not an ordered process. In fact, the breakup of the conserved resources (think forest fire) produces new resources that are not obviously useful to any existing organization.
Phase 4: Reorganization: Regardless of what shape resources are in, there are always pioneers who will make use of them, turning them into more organized resources (think weeds after a forest fire). As the free (unorganized) resources become more organized, it becomes easier to exploit them, and the cycle begins again.
The diagram points to other interesting aspects of this living cycle. What we think of as revolts occur as a response to highly conserved resources being unavailable for mission-related (sometimes ANY) use. Entrepreneurial mindsets are most useful in the exploitation phase and become less useful (and much harder to use, period) in the conservation phase. Release can be slowed down, but not prevented. And maybe most importantly, the mere act of conservation puts serious and lasting constraints on what can be done with valuable resources.
Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and in particular, that top-down (or downward) causation -- where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies -- may be a major contributor to the hierarchical structure of living systems. Here we propose that the origin of life may correspond to a physical transition associated with a shift in causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some potential novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems.
We argue that the present crisis and stalling economy continuing since 2007 are rooted in the delusionary belief in policies based on a "perpetual money machine" type of thinking. We document strong evidence that, since the early 1980s, consumption has been increasingly funded by smaller savings, booming financial profits, wealth extracted from house price appreciation and explosive debt. This is in stark contrast with the productivity-fueled growth that was seen in the 1950s and 1960s. This transition, starting in the early 1980s, was further supported by a climate of deregulation and a massive growth in financial derivatives designed to spread and diversify the risks globally. The result has been a succession of bubbles and crashes, including the worldwide stock market bubble and great crash of October 1987, the savings and loans crisis of the 1980s, the burst in 1991 of the enormous Japanese real estate and stock market bubbles, the emerging markets bubbles and crashes in 1994 and 1997, the LTCM crisis of 1998, the dotcom bubble bursting in 2000, the recent house price bubbles, the financialization bubble via special investment vehicles, the stock market bubble, the commodity and oil bubbles and the debt bubbles, all developing jointly and feeding on each other. Rather than still hoping that real wealth will come out of money creation, we need fundamentally new ways of thinking. In uncertain times, it is essential, more than ever, to think in scenarios: what can happen in the future, and what would be the effect on your wealth and capital? How can you protect against adverse scenarios? We thus end by examining the question "what can we do?" from the macro level, discussing the fundamental issue of incentives and of constructing and predicting scenarios as well as developing investment insights.
As such, the constitution of the financial markets is fundamentally changed from an ensemble of individual networks to a closely linked network-of-networks configuration. Very recent research in network theory has shown that, when different networks become linked, the overall structure loses resilience. Indeed, due to diversification, there are fewer minor events but, due to the stronger coupling, there are more catastrophic events.
Using models specifically designed to quantify the degree of endogeneity (called "reflexivity" by George Soros) in the markets, defined as the fraction of transactions that are triggered internally or are self-excited, like aftershocks of an earthquake, rather than resulting from new external information, we recently quantified that this degree of reflexivity increased from 30% in the 1990s to at least 80% today. This proves, in hard numbers, that markets increasingly live a life of their own, disconnected from the real economy, activated by machines and the algorithms that compete to trade in milliseconds, a process also facilitated by massive injections of liquidity and a low-interest-rate policy operating at a different time scale. As a consequence, the bubbles and crashes that we have become accustomed to now develop and evolve increasingly over time scales of seconds to minutes.
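The "aftershock" language maps naturally onto a Hawkes self-excited point process, in which the branching ratio n equals exactly the endogenous fraction of events. Below is a minimal simulation sketch of that idea (my construction with made-up parameters, not the authors' calibration).

```python
import math, random

def poisson(lam):
    """Poisson sample via Knuth's method (fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_hawkes(mu=1.0, n=0.8, beta=5.0, horizon=200.0, seed=42):
    """Hawkes process via its branching (cluster) construction.
    mu: rate of exogenous events; n: branching ratio = expected children
    per event = endogenous fraction; beta: decay rate of influence."""
    random.seed(seed)
    t, queue = 0.0, []
    while True:                       # exogenous 'immigrants': Poisson(mu)
        t += random.expovariate(mu)
        if t > horizon:
            break
        queue.append(t)
    exogenous, events = len(queue), []
    while queue:                      # each event triggers 'aftershocks'
        parent = queue.pop()
        events.append(parent)
        for _ in range(poisson(n)):
            child = parent + random.expovariate(beta)
            if child < horizon:
                queue.append(child)
    return exogenous, len(events)

exo, total = simulate_hawkes()
print(f"endogenous fraction ~ {1 - exo / total:.2f}")  # ~0.8 by design
```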
We do not expect that the technological race will provide a stabilization effect, overall. This is mainly due to the crowding of adaptive strategies used by algorithmic agents, which exhibit pro-cyclical properties (for instance via a preference for so-called momentum trading) and a propensity to herd that is even larger than that found in human beings. No level of technology can change this basic fact, which is widely documented, for instance, in artificial worlds populated by software agents that simulate financial markets on computers. New algorithms that exploit the high-volatility periods associated with distress and crashes are being vigorously developed. These are really worrying trends.
The recent social unrest across the Middle East and North Africa has deposed dictators who had ruled for decades. While the events have been hailed as an "Arab Spring" by those who hope that repressive autocracies will be replaced by democracies, what sort of regimes will eventually emerge from the crisis remains far from certain. Here we provide a complex systems framework, validated by historical precedent, to help answer this question. We describe the dynamics of governmental change as an evolutionary process similar to biological evolution, in which complex organizations arise by replication, variation and competitive selection. Different kinds of governments, however, have differing levels of complexity. Democracies must be more systemically complex than autocracies because of their need to incorporate large numbers of people in decision-making. This difference has important implications for the relative robustness of democratic and autocratic governments after revolutions. Revolutions may disrupt existing evolved complexity, limiting the potential for building more complex structures quickly. Insofar as systemic complexity is reduced by revolution, democracy is harder to create in the wake of unrest than autocracy. Applying this analysis to the Middle East and North Africa, we infer that in the absence of stable institutions or external assistance, new governments are in danger of facing increasingly insurmountable challenges and reverting to autocracy.