"OH WATERS, TEEM WITH MEDICINE TO KEEP MY BODY SAFE FROM HARM, SO THAT I MAY LONG SEE THE SUN." - Rig Veda
We compared entropy for texts written in natural languages (English, Spanish) and artificial languages (computer software), using a simple expression for entropy as a function of message length and specific word diversity. Code written in artificial languages showed higher entropy than text of similar length in natural languages, and Spanish texts exhibited more symbolic diversity than English ones. The results show that algorithms based on complexity measures can differentiate artificial from natural languages, and that complexity-based text analysis can unveil important aspects of their nature. We propose specific expressions to examine entropy-related aspects of texts and to estimate entropy, emergence, self-organization, and complexity from specific diversity and message length.
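The abstract does not give the authors' exact expressions, but the quantities it names can be illustrated with a minimal sketch. Assuming Shannon entropy over word frequencies, normalized by the log of the word diversity (the number of distinct words), and the common convention of Gershenson and Fernández in which emergence E is the normalized entropy, self-organization S = 1 − E, and complexity C = 4·E·S, a toy comparison of a "natural" and a "code-like" token sequence might look like this (the sample texts and function names are illustrative, not the paper's):

```python
from collections import Counter
from math import log2

def shannon_entropy(tokens):
    """Shannon entropy in bits per token of a token sequence."""
    n = len(tokens)
    counts = Counter(tokens)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def normalized_entropy(tokens):
    """Entropy divided by its maximum log2(d), where d is the
    word diversity (distinct tokens); result lies in [0, 1]."""
    d = len(set(tokens))
    if d < 2:
        return 0.0
    return shannon_entropy(tokens) / log2(d)

def complexity_measures(tokens):
    """Emergence E, self-organization S = 1 - E, complexity C = 4*E*S
    (one common convention; the paper's exact expressions may differ)."""
    e = normalized_entropy(tokens)
    s = 1.0 - e
    return e, s, 4.0 * e * s

# Illustrative samples: an English sentence vs. code-like tokens.
natural = "the cat sat on the mat and the dog sat on the mat".split()
artificial = "for i in range n : if a [ i ] > 0 : s += a [ i ]".split()

for name, toks in [("natural", natural), ("artificial", artificial)]:
    e, s, c = complexity_measures(toks)
    print(f"{name}: length={len(toks)} diversity={len(set(toks))} "
          f"E={e:.3f} S={s:.3f} C={c:.3f}")
```

Token sequences where every word is distinct push E toward 1 (and C toward 0), while heavy repetition of a few words lowers E; on the two samples above the repetitive natural sentence comes out with lower normalized entropy than the more symbol-diverse code-like one, mirroring the direction of the paper's finding.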
Theoretical neuroscientist Tony Bell argues that the hallmark of biological computation is that information flows across multiple hierarchies of complexity, bubbling up from microscopic to macroscopic levels and back down again.
Simpler structures (like neurons) communicate with more complex structures (like areas of the brain), and vice versa. Computations are occurring on and between every level, from molecules to cells to organisms as a whole. There is a good deal of evidence that even the subcellular compartments within neurons are doing their own computations. Most recently, a study [1] published in Nature in October 2013 showed that activity within dendrites (a compartment representing a neuron’s “input”) shapes the information processed by that neuron. Given this multi-level information flow, Bell argues that noise simply cannot be defined: deviations from expectation should be interpreted as meaningful communication at another level of the complexity hierarchy.