REDUCTIONISM HAS BEEN THE DOMINANT approach to science since the 1600s. René Descartes, one of reductionism’s earliest proponents, described his own scientific method thus: “to divide all the difficulties under examination into as many parts as possible, and as many as were required to solve them in the best way” and “to conduct my thoughts in a given order, beginning with the simplest and most easily understood objects, and gradually ascending, as it were step by step, to the knowledge of the most complex.”1 Since the time of Descartes, Newton, and other founders of the modern scientific method until the beginning of the twentieth century, a chief goal of science has been a reductionist explanation of all phenomena in terms of fundamental physics. (Location 67)
twentieth-century science was also marked by the demise of the reductionist dream. (Location 77)
Many phenomena have stymied the reductionist program: the seemingly irreducible unpredictability of weather and climate; the intricacies and adaptive nature of living organisms and the diseases that threaten them; the economic, political, and cultural behavior of societies; the growth and effects of modern technology and communications networks; and the nature of intelligence and the prospect for creating it in computers. The antireductionist catch-phrase, “the whole is more than the sum of its parts,” takes on increasing significance as new sciences such as chaos, systems biology, evolutionary economics, and network theory move beyond reductionism to explain how complex behavior can arise from large collections of simpler components. (Location 79)
computation as an idea goes much deeper than operating systems, programming languages, databases, and the like; the deep ideas of computation are intimately related to the deep ideas of life and intelligence. (Location 103)
How do we move beyond the traditional paradigm of reductionism toward a new understanding of seemingly irreducibly complex systems? (Location 114)
How is it that those systems in nature we call complex and adaptive—brains, insect colonies, the immune system, cells, the global economy, biological evolution—produce such complex and adaptive behavior from underlying, simple rules? How can interdependent yet self-interested organisms come together to cooperate on solving problems that affect their survival as a whole? And are there any general principles or laws that apply to such phenomena? Can life, intelligence, and adaptation be seen as mechanistic and computational? If so, could we build truly intelligent and living machines? And if we could, would we want to? (Location 123)
Nigel Franks, a biologist specializing in ant behavior, has written, “The solitary army ant is behaviorally one of the least sophisticated animals imaginable,” and, “If 100 army ants are placed on a flat surface, they will walk around and around in never decreasing circles until they die of exhaustion.” Yet put half a million of them together, and the group as a whole becomes what some have called a “superorganism” with “collective intelligence.” (Location 185)
complex systems, an interdisciplinary field of research that seeks to explain how large numbers of relatively simple entities organize themselves, without the benefit of any central controller, into a collective whole that creates patterns, uses information, and, in some cases, evolves and learns. (Location 198)
The word complex comes from the Latin root plectere: to weave, entwine. (Location 200)
exactly how the individual actions of the ants produce large, complex structures, how the ants signal one another, and how the colony as a whole adapts to changing circumstances (e.g., changing weather or attacks on the colony). And how did biological evolution produce creatures with such an enormous contrast between their individual simplicity and their collective sophistication? (Location 213)
These actions recall those of ants in a colony: individuals (neurons or ants) perceive signals from other individuals, and a sufficient summed strength of these signals causes the individuals to act in certain ways that produce additional signals. The overall effects can be very complex. (Location 231)
One class of lymphocytes, called B cells (the B indicates that they develop in the bone marrow), has a remarkable property: the better the match between a B cell and an invader, the more antibody-secreting daughter cells the B cell creates. The daughter cells each differ slightly from the mother cell in random ways via mutations, and these daughter cells go on to create their own daughter cells in direct proportion to how well they match the invader. The result is a kind of Darwinian natural selection process, in which the match between B cells and invaders gradually gets better and better, until the antibodies being produced are extremely efficient at seeking and destroying the culprit microorganisms. (Location 260)
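A toy sketch of the selection mechanism described here (my own illustration, not the book's model): receptors are bit strings, the "match" to a hypothetical invader pattern is the number of agreeing bits, and better-matching cells spawn more mutated daughters.

```python
import random

# Toy sketch of B-cell affinity maturation (illustrative only, not the book's model).
# A cell's receptor is a bit string; its match to the invader is the number of
# agreeing bits; better-matching cells produce more mutated daughter cells.
INVADER = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # hypothetical antigen pattern

def match(cell):
    return sum(c == i for c, i in zip(cell, INVADER))

def mutate(cell, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in cell]

population = [[random.randint(0, 1) for _ in INVADER] for _ in range(50)]
for generation in range(30):
    daughters = []
    for cell in population:
        for _ in range(match(cell) // 3):        # more daughters for better matches
            daughters.append(mutate(cell))
    population = sorted(daughters, key=match, reverse=True)[:50]

print("best match after selection:", match(population[0]), "out of", len(INVADER))
```

Run repeatedly, the best match climbs toward a perfect fit, which is the Darwinian dynamic the passage describes.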
Economies are complex systems in which the “simple, microscopic” components consist of people (or companies) buying and selling goods, and the collective behavior is the complex, hard-to-predict behavior of markets as a whole, such as changes in the price of housing in different areas of the country or fluctuations in stock prices (figure 1.4). Economies are thought by some economists to be adaptive on both the microscopic and macroscopic level. (Location 279)
The eighteenth-century economist Adam Smith called this self-organizing behavior of markets the “invisible hand”: it arises from the myriad microscopic actions of individual buyers and sellers. Economists are interested in how markets become efficient, (Location 286)
economists involved in the field of complex systems have tried to explain market behavior in terms similar to those used previously in the descriptions of other complex systems: dynamic hard-to-predict patterns in global behavior, such as patterns of market bubbles and crashes; processing of signals and information, such as the decision-making processes of individual buyers and sellers, and the resulting “information processing” ability of the market as a whole to “calculate” efficient prices; and adaptation and learning, such as individual sellers adjusting their production to adapt to changes in buyers’ needs, and the market as a whole adjusting global prices. (Location 289)
The World Wide Web came on the world scene in the early 1990s and has experienced exponential growth ever since. Like the systems described above, the Web can be thought of as a self-organizing social system: individuals, with little or no central oversight, perform simple tasks: posting Web pages and linking to other Web pages. However, complex systems scientists have discovered that the network as a whole has many unexpected large-scale properties involving its overall structure, the way in which it grows, how information propagates over its links, and the coevolutionary relationships between the behavior of search engines and the Web’s link structure, all of which lead to what could be called “adaptive” behavior for the system as a whole. (Location 295)
Complex collective behavior: All the systems I described above consist of large networks of individual components (ants, B cells, neurons, stock-buyers, Website creators), each typically following relatively simple rules with no central control or leader. It is the collective actions of vast numbers of components that give rise to the complex, hard-to-predict, and changing patterns of behavior that fascinate us.
Signaling and information processing: All these systems produce and use information and signals from both their internal and external environments.
Adaptation: All these systems adapt—that is, change their behavior to improve their chances of survival or success—through learning or evolutionary processes. (Location 312)
Now I can propose a definition of the term complex system: a system in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing, and adaptation via learning or evolution. (Location 317)
a system that exhibits nontrivial emergent and self-organizing behaviors. (Location 324)
the importance of testing the resulting theories by experiments is a more modern notion. The influence of Aristotle’s ideas was strong and continued to hold sway over most of Western science until the sixteenth century—the time of Galileo. (Location 371)
The most important person in the history of dynamics was Isaac Newton. Newton, who was born the year after Galileo died, can be said to have invented, on his own, the science of dynamics. Along the way he also had to invent calculus, the branch of mathematics that describes motion and change. (Location 385)
Newtonian mechanics produced a picture of a “clockwork universe,” one that is wound up with the three laws and then runs its mechanical course. The mathematician Pierre Simon Laplace saw the implication of this clockwork view for prediction: in 1814 he asserted that, given Newton’s laws and the current position and velocity of every particle in the universe, it was possible, in principle, to predict everything for all time. With the invention of electronic computers in the 1940s, the “in principle” might have seemed closer to “in practice.” (Location 408)
However, two major discoveries of the twentieth century showed that Laplace’s dream of complete prediction is not possible, even in principle. (Location 413)
It was the understanding of chaos that eventually laid to rest the hope of perfect prediction of all complex systems, quantum or otherwise. The defining idea of chaos is that there are some systems—chaotic systems—in which even minuscule uncertainties in measurements of initial position and momentum can result in huge errors in long-term predictions of these quantities. This is known as “sensitive dependence on initial conditions.” (Location 418)
Add examples of chaotic systems and how it is demonstrated that they are chaotic.
prediction chaos theory favorite
This kind of behavior is counterintuitive; in fact, for a long time many scientists denied it was possible. However, chaos in this sense has been observed in cardiac disorders, turbulence in fluids, electronic circuits, dripping faucets, and many other seemingly unrelated phenomena. These days, the existence of chaotic systems is an accepted fact of science. (Location 427)
Possibly the first clear example of a chaotic system was given in the late nineteenth century by the French mathematician Henri Poincaré. Poincaré was the founder of and probably the most influential contributor to the modern field of dynamical systems theory, which is a major outgrowth of Newton’s science of dynamics. Poincaré discovered sensitive dependence on initial conditions when attempting to solve a much simpler problem than predicting the motion of a hurricane. He more modestly tried to tackle the so-called three-body problem: to determine, using Newton’s laws, the long-term motions of three masses exerting gravitational forces on one another. (Location 434)
In other words, even if we know the laws of motion perfectly, two different sets of initial conditions (here, initial positions, masses, and velocities for objects), even if they differ in a minuscule way, can sometimes produce greatly different results in the subsequent motion of the system. Poincaré found an example of this in the three-body problem. (Location 452)
How, precisely, does the huge magnification of initial uncertainties come about in chaotic systems? The key property is nonlinearity. A linear system is one you can understand by understanding its parts individually and then putting them together. (Location 464)
A nonlinear system is one in which the whole is different from the sum of the parts. Jake puts in two cups of baking soda. Nicky puts in a cup of vinegar. The whole thing explodes. (You can try this at home.) The result? More than three cups of vinegar-and-baking-soda-and-carbon-dioxide fizz. (Location 468)
But what happens when, more realistically, we consider limits to population growth? This requires us to make the growth rule nonlinear. Suppose that, as before, each year every pair of rabbits has four offspring and then dies. But now suppose that some of the offspring die before they reproduce because of overcrowding. Population biologists sometimes use an equation called the logistic model as a description of population growth in the presence of overcrowding. (Location 490)
In order to use the logistic model to calculate the size of the next generation’s population, you need to input the current generation’s population size, the birth rate, the death rate (the probability that an individual will die due to overcrowding), and the maximum carrying capacity (the strict upper limit of the population that the habitat will support). (Location 497)
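After rescaling (writing the population as a fraction of the carrying capacity), the logistic model boils down to the logistic map x(t+1) = R * x(t) * (1 - x(t)). A minimal sketch of iterating it, with parameter values that are my own illustrative choices:

```python
def logistic_map(R, x0, steps):
    """Iterate x_{t+1} = R * x_t * (1 - x_t), the rescaled logistic model
    in which x is the population as a fraction of the carrying capacity."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = R * x * (1 - x)
        trajectory.append(x)
    return trajectory

# Illustrative parameter values (my choices): R = 2.5 settles to a fixed point,
# R = 3.2 oscillates with period two, R = 4.0 wanders chaotically.
for R in (2.5, 3.2, 4.0):
    print(R, [round(x, 3) for x in logistic_map(R, 0.2, 50)[-4:]])
```

For small R the population settles to a fixed point; around R = 3.2 it oscillates between two values; at R = 4.0 it never settles at all.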
for the values of R that produce chaos, if there is any uncertainty in the initial condition x0, there exists a time beyond which the future value cannot be predicted. (Location 606)
the presence of chaos in a system implies that perfect prediction à la Laplace is impossible not only in practice but also in principle, since we can never know x0 to infinitely many decimal places. This is a profound negative result that, along with quantum mechanics, helped wipe out the optimistic nineteenth-century view of a clockwork Newtonian universe that ticked along its predictable path. (Location 618)
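A quick numerical illustration of this point (my own sketch): iterate the chaotic R = 4.0 map from two starting values that differ by one part in ten billion and watch the trajectories separate.

```python
# Two initial conditions differing by 1e-10, iterated under the chaotic map R = 4.0.
R = 4.0
x, y = 0.2, 0.2 + 1e-10
for t in range(1, 61):
    x = R * x * (1 - x)
    y = R * y * (1 - y)
    if t % 10 == 0:
        print(f"step {t:2d}:  x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.1e}")
```

After a few dozen steps the two trajectories bear no resemblance to each other, so any rounding of x0 eventually swamps the prediction.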
In the mathematical explorations we performed above, we saw that as R was increased from 2.0 to 4.0, iterating the logistic map for a given value of R first yielded a fixed point, then a period-two oscillation, then period four, then eight, and so on, until chaos was reached. In dynamical systems theory, each of these abrupt period doublings is called a bifurcation. This succession of bifurcations culminating in chaos has been called the “period doubling route to chaos.” (Location 631)
Feigenbaum noticed that as the period increases, the R values get closer and closer together. This means that for each bifurcation, R has to be increased less than it had before to get to the next bifurcation. You can see this in the bifurcation diagram of Figure 2.11: as R increases, the bifurcations get closer and closer together. Using these numbers, Feigenbaum measured the rate at which the bifurcations get closer and closer; that is, the rate at which the R values converge. He discovered that the rate is (approximately) the constant value 4.6692016. What this means is that, as R increases, the interval of R between one period doubling and the next shrinks by a factor of about 4.6692016 each time. (Location 662)
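A quick check of this convergence (my own sketch; the bifurcation values below are approximate figures commonly quoted in the dynamical-systems literature, taken as given rather than computed here):

```python
# Approximate R values at which the logistic map's attractor doubles its period
# (onsets of periods 2, 4, 8, 16, 32), quoted to six digits.
R_bifurcations = [3.000000, 3.449490, 3.544090, 3.564407, 3.568759]

# Ratio of successive bifurcation intervals; it converges toward
# Feigenbaum's constant, roughly 4.6692016.
for i in range(len(R_bifurcations) - 2):
    d1 = R_bifurcations[i + 1] - R_bifurcations[i]
    d2 = R_bifurcations[i + 2] - R_bifurcations[i + 1]
    print(f"ratio {i + 1}: {d1 / d2:.4f}")
```

The ratios come out near 4.75, 4.66, 4.67, heading toward Feigenbaum's constant.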
As Feigenbaum’s colleague Leo Kadanoff said, this is “the best thing that can happen to a scientist, realizing that something that’s happened in his or her mind exactly corresponds to something that happens in nature.” (Location 685)
Seemingly random behavior can emerge from deterministic systems, with no external source of randomness. The behavior of some simple, deterministic systems can be impossible, even in principle, to predict in the long term, due to sensitive dependence on initial conditions. Although the detailed behavior of a chaotic system cannot be predicted, there is some “order in chaos” seen in universal properties common to large sets of chaotic systems, such as the period-doubling route to chaos and Feigenbaum’s constant. Thus even though “prediction becomes impossible” at the detailed level, there are some higher-level aspects of chaotic systems that are indeed predictable. (Location 702)
A complete account of how such entropy-defying self-organization takes place is the holy grail of complex systems science. But before this can be tackled, we need to understand what is meant by “order” and “disorder” and how people have thought about measuring such abstract qualities. (Location 725)
The different examples of complex systems I described in chapter 1 are all centrally concerned with the communication and processing of information in various forms. Since the beginning of the computer age, computer scientists have thought of information transmission and computation as something that takes place not only in electronic circuits but also in living systems. (Location 740)
The scientific study of information really begins with the science of thermodynamics, which describes energy and its interactions with matter. (Location 750)
Energy is roughly defined as a system’s potential to “do work,” (Location 752)
Entropy is a measure of the energy that cannot be converted into additional work. (Location 760)
As you’ve probably noticed, a room does not clean itself up, and Cheerios spilled on the floor, left to their own devices, will never find their way back into the cereal box. Someone or something has to do work to turn disorder into order. (Location 769)
The second law of thermodynamics is said to define the “arrow of time,” in that it proves there are processes that cannot be reversed in time (e.g., heat spontaneously returning to your refrigerator and converting to electrical energy to cool the inside). The “future” is defined as the direction of time in which entropy increases. Interestingly, the second law is the only fundamental law of physics that distinguishes between past and future. All other laws are reversible in time. (Location 775)
Szilard was the first to make a link between entropy and information, a link that later became the foundation of information theory and a key idea in complex systems. In a famous paper entitled “On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings,” Szilard argued that the measurement process, in which the demon acquires a single “bit” of information (i.e., the information as to whether an approaching molecule is a slow one or a fast one), requires energy and must produce at least as much entropy as is decreased by the sorting of that molecule into the left or right side of the box. Thus the entire system, comprising the box, the molecules, and the demon, obeys the second law of thermodynamics. In coming up with his solution, Szilard was perhaps the first to define the notion of a bit of information—the information obtained from the answer to a yes/no (or, in the demon’s case, “fast/slow”) question. (Location 812)
it is not the act of measurement, but rather the act of erasing memory that necessarily increases entropy. Erasing memory is not reversible; if there is true erasure, then once the information is gone, it cannot be restored without additional measurement. Bennett (Location 835)
Statistical mechanics proposes that large-scale properties (e.g., heat) emerge from microscopic properties (e.g., the motions of trillions of molecules). (Location 853)
In short, classical mechanics attempts to say something about every single microscopic entity (e.g., molecule) by using Newton’s laws. Thermodynamics gives laws of macroscopic entities—heat, energy, and entropy—without acknowledging that any microscopic molecules are the source of these macroscopic entities. Statistical mechanics is a bridge between these two extremes, in that it explains how the behavior of the macroscopic entities arises from statistics of large ensembles of microscopic entities. (Location 860)
The situation is analogous to a slot machine with three rotating pictures (figure 3.2). Suppose each of the three pictures can come up “apple,” “orange,” “cherry,” “pear,” or “lemon.” Imagine you put in a quarter, and pull the handle to spin the pictures. It is much more likely that the pictures will all be different (i.e., you lose your money) than that the pictures will all be the same (i.e., you win a jackpot). Now imagine such a slot machine with fifty quadrillion pictures, and you can see that the probability of all coming up the same is very close to zero, just like the probability of the air molecules ending up all clumped together in the same location. (Location 883)
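The three-reel arithmetic, spelled out (a small check of my own, not from the book):

```python
# Three independent reels, five equally likely pictures on each.
total = 5 ** 3                    # 125 equally likely outcomes
all_same = 5                      # five ways for the reels to match
all_different = 5 * 4 * 3         # ordered choices of three distinct pictures

print("P(all the same)  =", all_same / total)       # 0.04
print("P(all different) =", all_different / total)  # 0.48
```

Losing ("all different") is twelve times likelier than the jackpot; with fifty quadrillion reels instead of three, the all-the-same probability is (1/5) raised to a power of nearly fifty quadrillion, effectively zero.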
statistics probability entropy thermodynamics quote
Shannon’s definition of information completely ignores the meaning of the messages and takes into account only how often the source sends each of the possible different messages to the receiver. (Location 925)
Shannon’s definition of information content was nearly identical to Boltzmann’s more general definition of entropy. In his classic 1948 paper, Shannon defined the information content in terms of the entropy of the message source. (Location 937)
In general, in Shannon’s theory, a message can be any unit of communication, be it a letter, a word, a sentence, or even a single bit (a zero or a one). Once again, the entropy (and thus information content) of a source is defined in terms of message probabilities and is not concerned with the “meaning” of a message. (Location 944)
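Shannon's entropy of a message source depends only on the message probabilities, never on meaning. A minimal sketch (the example probabilities are my own):

```python
from math import log2

def shannon_entropy(probabilities):
    """Entropy H = -sum(p * log2(p)) in bits per message, computed from the
    probabilities of the messages alone, never from their meaning."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A source that always sends the same message carries no information;
# a source whose two messages are equally likely carries one bit per message.
print(shannon_entropy([1.0]))        # 0.0
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits
```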
Information, as narrowly defined by Shannon, concerns the predictability of a message source. In the real world, however, information is something that is analyzed for meaning, that is remembered and combined with other information, and that produces results or actions. In short, information is processed via computation. (Location 981)
Gödel’s proof is complicated. However, intuitively, it can be explained very easily. Gödel gave an example of a mathematical statement that can be translated into English as: “This statement is not provable.” Think about it for a minute. It’s a strange statement, since it talks about itself—in fact, it asserts that it is not provable. Let’s call this statement “Statement A.” Now, suppose Statement A could indeed be proved. But then it would be false (since it states that it cannot be proved). That would mean a false statement could be proved—arithmetic would be inconsistent. Okay, let’s assume the opposite, that Statement A cannot be proved. That would mean that Statement A is true (because it asserts that it cannot be proved), but then there is a true statement that cannot be proved—arithmetic would be incomplete. Ergo, arithmetic is either inconsistent or incomplete. (Location 1021)
At this point, I should summarize Turing’s momentous accomplishments. First, he rigorously defined the notion of “definite procedure.” Second, his definition, in the form of Turing machines, laid the groundwork for the invention of electronic programmable computers. Third, he showed what few people ever expected: there are limits to what can be computed. (Location 1185)
Just as quantum mechanics and chaos together quashed the hope of perfect prediction, Gödel’s and Turing’s results quashed the hope of the unlimited power of mathematics and computing. (Location 1194)
The discovery that DNA is the carrier of hereditary information did not take place until the 1940s. Many theories of heredity were proposed in the 1800s, but none was widely accepted until the “rediscovery” in 1900 of the work of Gregor Mendel. (Location 1372)
it has been surprisingly difficult to come up with a universally accepted definition of complexity (Location 1613)
The answer is that there is not yet a single science of complexity but rather several different sciences of complexity with different notions of what complexity means. Some of these notions are quite formal, and some are still very informal. If the sciences of complexity are to become a unified science of complexity, then people are going to have to figure out how these diverse notions—formal and informal—are related to one another, and how to most usefully refine the overly complex notion of complexity. (Location 1624)
the amoeba, another type of single-celled microorganism, has about 225 times as many base pairs as humans do, and a mustard plant called Arabidopsis has about the same number of genes that we do. (Location 1660)
The most complex entities are not the most ordered or random ones but somewhere in between. Simple Shannon entropy doesn’t capture our intuitive concept of complexity. (Location 1683)
Andrey Kolmogorov, and independently both Gregory Chaitin and Ray Solomonoff, proposed that the complexity of an object is the size of the shortest computer program that could generate a complete description of the object. This is called the algorithmic information content of the object. (Location 1686)
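Algorithmic information content is uncomputable in general, but compressed size is a popular rough proxy for the intuition; a sketch (the strings and the use of zlib are my own choices):

```python
import random
import zlib

# Compressed length as a crude stand-in for algorithmic information content:
# a highly ordered string has a short description, a random-looking string
# does not. (Only an illustration; true algorithmic information content
# cannot be computed exactly.)
ordered = b"AC" * 500                                    # 1000 bytes, very regular
random.seed(0)
disordered = bytes(random.randrange(256) for _ in range(1000))

print("ordered:   ", len(zlib.compress(ordered)), "compressed bytes")
print("disordered:", len(zlib.compress(disordered)), "compressed bytes")
```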
In short, as we would wish, both very ordered and very random entities have low effective complexity. (Location 1707)
given different proposed sets of regularities that fit an entity, we can determine which is best by using the test called Occam’s Razor. The best set of regularities is the smallest one that describes the entity in question and at the same time minimizes the remaining random component of that entity. (Location 1716)
Charles Bennett proposed the notion of logical depth. The logical depth of an object is a measure of how difficult that object is to construct. (Location 1725)
Logical depth has very nice theoretical properties that match our intuitions, but it does not give a practical way of measuring the complexity of any natural object of interest, since there is typically no practical way of finding the smallest Turing machine that could have generated a given object, not to mention determining how long that machine would take to generate it. And this doesn’t even take into account the difficulty, in general, of describing a given object as a string of 0s and 1s. (Location 1741)
These works of fiction both presage and celebrate a new, technological version of the “What is life?” question: Is it possible for computers or robots to be considered “alive”? This question links the previously separate topics of computation and of life and evolution. (Location 1924)
Von Neumann was also one of the first scientists who thought deeply about connections between computation and biology. He dedicated the last years of his life to solving the problem of how a machine might be able to reproduce itself; his solution was the first complete design for a self-reproducing machine. The self-copying computer program I will show you was inspired by his “self-reproducing automaton” and illustrates its fundamental principle in a simplified way. (Location 1965)
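The book presents its self-copying program in a simple invented language; as a stand-in, here is a minimal Python analogue (a so-called quine), my own example rather than the book's listing:

```python
# A minimal self-copying program ("quine") in Python: running it prints an exact
# copy of its own two code lines. (These comment lines are not part of the copy.)
s = 's = %r\nprint(s %% s)'
print(s % s)
```

As in von Neumann's design, the trick is that the same string serves both as instructions (the code that prints) and as data (the thing being printed).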
The dual use of information is also at the heart of Gödel’s paradox, embodied by his self-referential sentence “This statement is not provable.” This is a bit tricky to understand. First, let’s note that this sentence, like any other English sentence, can be looked at in two ways: (1) as the literal string of letters, spaces, and punctuation contained in the sentence, and (2) as the meaning of that literal string, as interpreted by an English speaker. To be very clear, let’s call the literal string of characters S. That is, S = “This statement is not provable.” We can now state facts about S: for example, it contains twenty-six letters, four spaces, and one period. Let’s call the meaning of the sentence M. We can rewrite M as follows: “Statement S is not provable.” In a way, you can think of M as a “command” and of S as the “data” for that command. The weird (and wonderful) thing is that the data S is the same thing as the command M. The chief reason Gödel was able to translate his English sentence into a paradox in mathematics was that he was able to phrase M as a mathematical statement and S as a number that encoded the string of characters of that mathematical statement. (Location 2020)
As I described in chapter 6, DNA is made up of strings of nucleotides. Certain substrings (genes) encode amino acids making up proteins, including the enzymes (special kinds of proteins) that effect the splitting of the double helix and the copying of each strand via messenger RNA, transfer RNA, ribosomes, et cetera. In a very crude analogy, the DNA strings encoding the enzymes that perform the copying roughly correspond to the lines of code in the self-copying program. These “lines of code” in DNA are executed when the enzymes are created and act on the DNA itself, interpreting it as data to be split up and copied. However, you may have noticed something I have so far swept under the rug. There is a major difference between my self-copying program and DNA self-reproduction. The self-copying program required an interpreter to execute it: an instruction pointer to move down the lines of computer code and a computer operating system to carry them out (e.g., actually perform the storing and retrieving of internal variables such as ip and L, actually print strings of characters, and so on). The interpreter is completely external to the program itself. In contrast, in the case of DNA, the instructions for building the “interpreter”—the messenger RNA, transfer RNA, ribosomes, and all the other machinery of protein synthesis—are encoded along with everything else in the DNA. That is, DNA not only contains the code for its self-replicating “program” (i.e., the enzymes that perform the splitting and copying of DNA) but also it encodes its own interpreter (the cellular machinery that translates DNA into those very enzymes). (Location 2039)
Von Neumann’s original self-reproducing automaton (described mathematically but not actually built by von Neumann) similarly contained not only a self-copying program but also the machinery needed for its own interpretation. Thus it was truly a self-reproducing machine. This explains why von Neumann’s construction was considerably more complicated than my simple self-copying program. That it was formulated in the 1950s, before the details of biological self-reproduction were well understood, is testament to von Neumann’s insight. (Location 2052)
Von Neumann was in many ways ahead of his time. His goal was, like Turing’s, to develop a general theory of information processing that would encompass both biology and technology. His work on self-reproducing automata was part of this program. (Location 2099)
Von Neumann also was closely linked to the so-called cybernetics community—an interdisciplinary group of scientists and engineers seeking commonalities among complex, adaptive systems in both natural and artificial realms. What we now call “complex systems” can trace its ancestry to cybernetics and the related field of systems science. I explore these connections further in the final chapter. (Location 2101)
The term algorithm is used these days to mean what Turing meant by definite procedure and what cooks mean by recipe: a series of steps by which an input is transformed to an output. (Location 2142)
The input to the GA has two parts: a population of candidate programs, and a fitness function that takes a candidate program and assigns to it a fitness value that measures how well that program works on the desired task. (Location 2147)
Here is the recipe for the GA. Repeat the following steps for some number of generations:
1. Generate an initial population of candidate solutions. The simplest way to create the initial population is just to generate a bunch of random programs (strings), called “individuals.”
2. Calculate the fitness of each individual in the current population.
3. Select some number of the individuals with highest fitness to be the parents of the next generation.
4. Pair up the selected parents. Each pair produces offspring by recombining parts of the parents, with some chance of random mutations, and the offspring enter the new population. The selected parents continue creating offspring until the new population is full (i.e., has the same number of individuals as the initial population).
5. The new population now becomes the current population. Go to step 2. (Location 2153)
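A bare-bones sketch of this recipe (my own; the bit-string individuals and the count-the-ones fitness function are illustrative stand-ins, not the book's task):

```python
import random

# A minimal genetic algorithm following the recipe above. Candidate "programs"
# are bit strings; fitness simply counts ones (an illustrative stand-in).
LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 50, 0.01

def fitness(individual):
    return sum(individual)

def crossover(mom, dad):
    point = random.randrange(1, LENGTH)          # single-point recombination
    return mom[:point] + dad[point:]

def mutate(individual):
    return [1 - g if random.random() < MUTATION_RATE else g for g in individual]

# Step 1: random initial population.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Steps 2-3: evaluate fitness and select the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Steps 4-5: pair up parents, produce mutated offspring until the new
    # population is full, then loop.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness found:", max(fitness(ind) for ind in population), "of", LENGTH)
```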
A recent article in Science magazine, called “Getting the Behavior of Social Insects to Compute,” described the work of a group of entomologists who characterize the behavior of ant colonies as “computer algorithms,” with each individual ant running a simple program that allows the colony as a whole to perform a complex computation, such as reaching a consensus on when and where to move the colony’s nest. (Location 2352)
a kind of computation very different from the kind our desktop computers perform with a central processing unit and random-access memory. (Location 2359)
Cellular automata were invented—like so many other good ideas—by John von Neumann, back in the 1940s, based on a suggestion by his colleague, the mathematician Stan Ulam. (This is a great irony of computer science, since cellular automata are often referred to as non-von-Neumann-style architectures, to contrast with the von-Neumann-style architectures that von Neumann also invented.) (Location 2421)
John Conway also sketched a proof (later refined by others) that Life could simulate a universal computer. This means that given an initial configuration of on and off states that encodes a program and the input data for that program, Life will run that program on that data, producing a pattern that represents the program’s output. Conway’s proof consisted of showing how glider guns, gliders, and other structures could be assembled so as to carry out the logical operations and, or, and not. It has long been known that any machine that has the capacity to put together all possible combinations of these logic operations is capable of universal computation. Conway’s proof demonstrated that, in principle, all such combinations of logical operations are possible in the Game of Life. (Location 2454)
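For concreteness, a minimal sketch of the Life update rule itself (my own implementation, representing the grid as a set of live-cell coordinates):

```python
from collections import Counter

def life_step(live_cells):
    """One synchronous update of Conway's Game of Life. live_cells is a set of
    (x, y) coordinates of 'on' cells; the return value is the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is on next step if it has exactly 3 on neighbors, or if it is
    # currently on and has exactly 2 on neighbors.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider": after four steps the same five-cell shape reappears, shifted
# diagonally by one cell.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))
```

Gliders like this one, emitted in streams by glider guns, are the moving structures Conway's construction assembles into the and, or, and not operations.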
What we really want from cellular automata is to harness their parallelism and ability to form complex patterns in order to achieve computations in a nontraditional way. The first step is to characterize the kinds of patterns that cellular automata can form. (Location 2467)
Note that particles are a description imposed by us (the scientists) rather than anything explicit taking place in a cellular automaton or used by the genetic algorithm to evolve cellular automata. (Location 2733)
it is still a mystery how high-level information about sensory data is encoded and processed in the brain. Perhaps the explanation will turn out to be something close to particle-like or, given the brain’s three dimensions, wave-like computation, where neurons are the scaffolding for information-carrying waves of activity and their information-processing interactions. (Location 2741)
In many people’s minds information has taken on an ontological status equal to that of mass and energy—namely, as a third primitive component of reality. (Location 2756)
the notion of computation was formalized in the 1930s by Alan Turing as the steps carried out by a Turing machine on a particular input. (Location 2776)
The meaning of the input and output information in a Turing machine comes from its interpretation by humans (programmers and users). The meaning of the information created in intermediate steps in the computation also comes from its interpretation (or design) by humans, who understand the steps in terms of commands in a high-level programming language. This higher level of description allows us to understand computations in a human-friendly way that is abstracted from particular details of machine code and hardware. (Location 2789)
In the previous chapter I proposed that particles and their interactions are one approach toward such a high-level language for describing how information processing is done in cellular automata. Information is communicated via the movement of particles, and information is processed via collisions between particles. In this way, the intermediate steps of information processing acquire “meaning” via the human interpretation of the actions of the particles. (Location 2803)
For cellular automata, no such compilers or decompilers exist, at least not yet, and there is still no practical and general way to design high-level “programs.” Relatively new ideas such as particles as high-level information-processing structures in cellular automata are still far from constituting a theoretical framework for computation in such systems. (Location 2810)
it is a tremendously important question for complex systems science, because a high-level description of information processing in living systems would allow us not only to understand in new and more comprehensive ways the mechanisms by which particular systems operate, but also to abstract general principles that transcend the overwhelming details of individual systems. In essence, such a description would provide a “high-level language” for biology. (Location 2815)
As I described in chapter 1, analogies often have been made between ant colonies and the brain. Both can be thought of as networks of relatively simple elements (neurons, ants) from which emerge larger-scale information-processing behaviors. Two examples of such behavior in ant colonies are the ability to optimally and adaptively forage for food, and the ability to adaptively allocate ants to different tasks as needed by the colony. Both types of behavior are accomplished with no central control, via mechanisms that are surprisingly similar to those described above for the immune system. (Location 2894)
As was the case for cellular automata, when I talk about information processing in these systems I am referring not to the actions of individual components such as cells, ants, or enzymes, but to the collective actions of large groups of these components. (Location 2950)
in this way, information is not, as in a traditional computer, precisely or statically located in any particular place in the system. Instead, it takes the form of statistics and dynamics of patterns over the system’s components. (Location 2952)
In the immune system the spatial distribution and temporal dynamics of lymphocytes can be interpreted as a dynamic representation of information about the continually changing population of pathogens in the body. (Location 2954)
One consequence of encoding information as statistical and time-varying patterns of low-level components is that no individual component of the system can perceive or communicate the “big picture” of the state of the system. Instead, information must be communicated via spatial and temporal sampling. (Location 2962)
It appears that such intrinsic random and probabilistic elements are needed in order for a comparatively small population of simple components (ants, cells, molecules) to explore an enormously larger space of possibilities, particularly when the information to be gained from such explorations is statistical in nature and there is little a priori knowledge about what will be encountered. (Location 2986)
In short, the system both explores to obtain information and exploits that information to successfully adapt. (Location 3032)
In my view, meaning is intimately tied up with survival and natural selection. (Location 3039)
In short, the meaning of an event is what tells one how to respond to it. (Location 3040)
But in a complex system such as those I’ve described above, in which simple components act without a central controller or leader, who or what actually perceives the meaning of situations so as to take appropriate actions? This is essentially the question of what constitutes consciousness or self-awareness in living systems. To me this is among the most profound mysteries in complex systems and in science in general. Although this mystery has been the subject of many books of science and philosophy, it has not yet been completely explained to anyone’s satisfaction. (Location 3045)
Marvin Minsky, a founder of the field of artificial intelligence, concisely described this paradox of AI as, “Easy things are hard.” Computers can do many things that we humans consider to require high intelligence, but at the same time they are unable to perform tasks that any three-year-old child could do with ease. (Location 3080)
analogy-making is the ability to perceive abstract similarity between two things in the face of superficial differences. This ability pervades almost every aspect of what we call intelligence. (Location 3085)
As the nineteenth-century philosopher Henry David Thoreau put it, “All perception of truth is the detection of an analogy.” Perceiving abstract similarities is something computers are notoriously bad at. That’s why I can’t simply show the computer a picture, say, of a dog swimming in a pool, and ask it to find “other pictures like this” in my online photo collection. (Location 3108)
Reading the book, written by Douglas Hofstadter, turned out to be one of those life-changing events that one can never anticipate. The title didn’t let on that the book was fundamentally about how thinking and consciousness emerge from the brain via the decentralized interactions of large numbers of simple neurons, analogous to the emergent behavior of systems such as cells, ant colonies, and the immune system. In short, the book was my introduction to some of the main ideas of complex systems. (Location 3118)
It should be clear by now that the key to analogy-making in this microworld (as well as in the real world) is what I am calling conceptual slippage. Finding appropriate conceptual slippages given the context at hand is the essence of finding a good analogy. (Location 3201)
The problem is how to allocate limited resources—be they ants, lymphocytes, enzymes, or thoughts—to different possibilities in a dynamic way that takes new information into account as it is obtained. Ant colonies have solved this problem by having large numbers of ants follow a combination of two strategies: continual random foraging combined with a simple feedback mechanism of preferentially following trails scented with pheromones and laying down additional pheromone while doing so. (Location 3240)
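A toy version of the two-strategy mechanism described here (my own sketch with made-up parameters): ants choose between two paths in proportion to pheromone, lay pheromone as they go, and pheromone evaporates.

```python
import random

# Toy model of pheromone feedback on two alternative paths to food (illustrative
# parameters only). The shorter path allows more round trips per unit time, so it
# accumulates pheromone faster and ends up attracting nearly all the ants.
pheromone = {"short": 1.0, "long": 1.0}
trips_per_tick = {"short": 2, "long": 1}     # shorter path = more round trips

for tick in range(200):
    for _ in range(20):                      # 20 ants choose a path each tick
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += 0.1 * trips_per_tick[path]   # lay pheromone while walking
    for path in pheromone:
        pheromone[path] *= 0.98              # evaporation keeps the choice adaptive

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"fraction of pheromone on the short path: {share:.2f}")
```

The positive feedback concentrates nearly all traffic on the better path, while evaporation keeps the colony able to switch if conditions change.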
Like ant colonies, the immune system combines randomness with highly directed behavior based on feedback. (Location 3249)
quote immune system ant colonies emergence complex systems
As more and more information is obtained, exploration gradually becomes more focused (increasing resources are concentrated on a smaller number of possibilities) and less random: possibilities that have already been identified as promising are exploited. (Location 3257)
Activation also spreads from a node to its conceptual neighbors and decays if not reinforced. (Location 3272)
in the last few decades, some biologists have proposed that the complexity of an organism largely arises from complexity in the interactions among its genes. (Location 3852)
in a regular network with 1,000 nodes, the average path length is 250; in the same network with 5% of the links randomly rewired, the average path length will typically fall to around 20. (Location 3928)
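A quick way to check figures like these (my own sketch; it assumes the third-party networkx library, and the exact numbers vary with the random rewiring):

```python
import networkx as nx

# A ring of 1,000 nodes in which each node links only to its two nearest neighbors.
regular = nx.watts_strogatz_graph(1000, 2, 0.0)
print("regular ring:", nx.average_shortest_path_length(regular))   # roughly 250

# The same ring with 5% of the links randomly rewired (retried until connected).
rewired = nx.connected_watts_strogatz_graph(1000, 2, 0.05, seed=42)
print("5% rewired:  ", nx.average_shortest_path_length(rewired))   # drops to a few tens
```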
Natural, social, and technological evolution seem to have produced organisms, communities, and artifacts with such structure. Why? It has been hypothesized that at least two conflicting evolutionary selective pressures are responsible: the need for information to travel quickly within the system, and the high cost of creating and maintaining reliable long-distance connections. Small-world networks solve both these problems by having short average path lengths between nodes in spite of having only a relatively small number of long-distance connections. (Location 3953)
Several different research groups have found that the Web’s in-degree distribution can be described by a very simple rule: the number of pages with a given in-degree is approximately proportional to 1 divided by the square of that in-degree. (Location 3993)
It turns out that this rule actually fits the data only for values of in-degree (k) in the thousands or greater. (Location 3998)
A distribution like this is called self-similar, because it has the same shape at any scale you plot it. In more technical terms, it is “invariant under rescaling.” This is what is meant by the term scale-free. The term self-similarity might be ringing a bell. We saw it back in chapter 7, in the discussion of fractals. There is indeed a connection to fractals here; more on this in chapter 17. (Location 4019)
A very important property of scale-free networks is their resilience to the deletion of nodes. (Location 4053)
The reason for this is simple: if nodes are deleted at random, they are overwhelmingly likely to be low-degree nodes, since these constitute nearly all nodes in the network. Deleting such nodes will have little effect on the overall degree distribution and path lengths. (Location 4056)
However, this resilience comes at a price: if one or more of the hubs is deleted, the network will be likely to lose all its scale-free properties and cease to function properly. For example, a blizzard in Chicago (a big airline hub) will probably cause flight delays or cancellations all over the country. A failure in Google will wreak havoc throughout the Web. (Location 4060)
If every neuron were connected to every other neuron, or all different functional areas were fully connected to one another, then the brain would use up a mammoth amount of energy in sending signals over the huge number of connections. Evolution presumably selected more energy-efficient structures. (Location 4089)
In short, what Brown, Enquist, and West are saying is that evolution structured our circulatory systems as fractal networks to approximate a “fourth dimension” so as to make our metabolisms more efficient. As West, Brown, and Enquist put it, “Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional … Fractal geometry has literally given life an added dimension.” (Location 4418)