Physics, Nature and Society: A Guide to Order and Complexity in Our World
About this ebook

This wide-ranging and accessible book serves as a fascinating guide to the strategies and concepts that help us understand the boundaries between physics, on the one hand, and sociology, economics, and biology on the other. From cooperation and criticality to flock dynamics and fractals, the author addresses many of the topics belonging to the broad theme of complexity. He chooses excellent examples (requiring no prior mathematical knowledge) to illuminate these ideas and their implications. The lively style and clear description of the relevant models will appeal both to novices and those with an existing knowledge of the field.

Language: English
Publisher: Springer
Release date: Nov 19, 2013
ISBN: 9783319020242
Length: 385 pages


    Book preview

    Physics, Nature and Society - Joaquín Marro

    Joaquín Marro, Physics, Nature and Society: A Guide to Order and Complexity in Our World, The Frontiers Collection, Springer, 2014. DOI: 10.1007/978-3-319-02024-2_1

    © Springer International Publishing Switzerland 2014

    1. A Glance at the Method

    Joaquín Marro, University of Granada, Granada, Spain. Email: jmarro@ugr.es

    Abstract

    The universe contains groups of galaxies that, on their characteristic timescales, change continually as they interact. The galaxies in each group collide, fuse, form quasars and display tidal effects, their shapes responding to but also conditioning these processes. Each galaxy is itself a cluster of stars which interact via gravity, generating complicated movements and fluxes. A star is, in the end, a mixture of electrons, neutrinos and quarks subjected to fundamental interactions. And these same elemental constituents that in a star cooperate to emit radiation relate to each other in another way within the brain, giving rise to the mind's extraordinary functions.


    This kind of approach is valid at many levels and in many contexts, which suggests looking at nature as a sum of systems or objects (galaxies, stars, organisms, macromolecules…) that, despite their diversity, have a lot in common. Each is made up of a great many parts—elements which are all the same or quite similar—that interact with each other and with the environment. Some questions then arise. Is the world we see a consequence of cooperation between the elements these objects are composed of? And if so, is there collaboration behind all natural phenomena, and can these phenomena be classified accordingly? Is there some unique principle that determines how the constituents of an object or system should be organised?

    Science does not yet have rigorous answers to these questions, but in reading this book you will discover that the intuition behind them is reasonable. Two ways of understanding nature seem to be converging, sketching out the answers we are looking for. On one side, physicists and mathematicians are proposing simple models in which many units cooperate. When simulated on a computer, these models turn out to emulate, often with fidelity, aspects of the high level of organisation and complexity we observe in nature. On the other side, many scientists, including biologists, sociologists and economists, are interpreting natural and social systems in terms of the interactions between their elements.

    This is clearly not the advance of some form of categorical reductionism; there are phenomena that cannot be explained in this way. If a system is simple, then a deterministic fundamental law might be enough, together with a little more information, to understand its behaviour. At other times it is difficult to associate an object with the phenomena of interest, or to identify the parts whose interaction gives rise to them, or the parts are essentially different from one another—which can make it impossible to apply this method. But one observes that science continues to reap success when studying the consequences of cooperation between elements in many different situations, and for the moment practice does not seem to place essential limits on this procedure. In any case, we have learned to take advantage of the feasible reductions and to change strategy when convenient.

    1.1 Cooperation

    With ever greater intensity, physics has been engaging in a systematic study of inanimate matter from the point of view of what its basic elements are like and how they interact. Knowing the behaviour of its atoms, a physicist is nowadays able to understand many of the properties of a lump of matter, for example. The success of this method is such that very few now question—at least as a working hypothesis—that cooperation between the parts of an object can determine its structure and functions, something that has encouraged incursions into other fields. The validity of concepts and techniques initially conceived for very restricted settings has been stretched so as to try to understand the dynamical diversity of living organisms, or the complex behaviour of social communities, whether animal or human. This hope is being rewarded, since today this seems to be a good path—if still somewhat imprecise on occasion—towards solving specific problems, from the design of new drugs and materials to the accurate prediction of weather, and even a deep understanding of the basis for intelligence and life¹.

    The suggestion to look at various enigmas in this way may have come from outside of physics. For example, the philosopher Auguste Comte (1798),² who spoke of inorganic physics, organic physics for biology, and social physics, had the intuition that there were substantial connections between the elements of a system, whatever its nature. The scientific basis for this programme is what is known today as statistical physics, a theory founded and developed by, among others, Ludwig Boltzmann (1844), Willard Gibbs (1839) and Albert Einstein (1879). The need for information and mathematical rigour in this area of physics—which requires knowledge of the laws by which the elements of the system interact (as well as an adequate manipulation of these data within a function that ultimately contains all the globally relevant information)—drastically limits the cases to which statistical physics can be applied. By crossing its borders, extending techniques and concepts, we move into terrain that belongs to other disciplines, including computational physics and the science of complexity. The ensemble of all this knowledge still lacks a coherent structure, but a unifying basis has begun to be sketched out, and I attempt to describe it in what follows.

    Nature nowadays is depicted as a hierarchical structure—that is, it can be described at many well defined levels, which occur in order, like the layers of an onion. Many objects can be observed and described at a microscopic level, with regard to their elemental constituents (molecules, atoms, electrons…), and at a macroscopic level, when we are interested in the whole (its shape, temperature, magnetization, elasticity, conductivities…). Often intermediate ones are also possible, such as the nanoscopic and the mesoscopic levels. We may use a powerful microscope to peer in on atomic components, or look at conglomerates such as magnetic regions or macromolecules, or stick to properties which are directly observable in the lab, like the viscosity or the colour of a sample. These levels can be distinguished precisely—to the point that they admit different theories—when length, time and energy have typical values which are different at each level; that is, when the scales for these magnitudes are sufficiently separate, a situation that arises frequently in practice, as shown in Table 1.1:³

    Table 1.1

    Typical orders of magnitude for lengths, intervals of time and numbers of constituent elements at different levels of observation mentioned in the text

    One Ångström (Å) is equivalent to one ten-millionth of a millimetre, written as 1 Å = $$ 10^{-7} $$ mm. Let us remember that $$ 10^{n} $$ is notation for a one followed by n zeros, and that $$ 10^{-n} $$ means $$ 1/10^{n} $$; for example, one thousand in this notation is $$ 10^{3} = 1{,}000 $$ and one thousandth is represented by $$ 10^{-3} = 1/1{,}000 = 0.001 $$

    We are interested in the universe at each of these scales, from the realm of elementary particles up to that of clusters of galaxies. Fortunately—otherwise, our world would be incomprehensible—the fundamental laws of nature are always the same. That is to say, Newton's equations of motion, Schrödinger's equation for the wave function and Maxwell's for electromagnetic fields are valid everywhere. Scientists, however, find in these laws an understanding which is sufficient only at the lowest levels of description. As one goes up through the onion layers, phenomena appear which require further analysis. The water in a river, for example, satisfies the laws of Newtonian mechanics at a microscopic scale, but other equations—those of hydrodynamics—are needed in order to describe the movements we can see from the shore.

    The idea of distinguishing between microcosm and macrocosm goes back to the Ancient Greeks and Arabs. Leonardo da Vinci (1452), who attempted to capture it in his homo ad circulum et quadratum (Fig. 1.1), was perhaps the first to give it a useful formulation, based on his belief that the human body was a miniature of the world. Interpreting the flow of blood through the heart in the light of his previous studies of rivers and currents, Leonardo described how blood passed through valves forming vortices, as has now been shown to occur using sophisticated imaging techniques. But the decisive step was taken by Boltzmann, who gave mathematical form to the intuition that this layered structure must be hierarchical. That is, he explicitly described what is observed at higher levels in terms of the fundamental laws whereby things happen at lower ones.


    Fig. 1.1

    The Vitruvian Man, a microcosm in whose harmony the visionary Leonardo da Vinci saw a reflection of the symmetries and proportions of the macrocosm

    Today that intuition is widely accepted, and statistical physics proves it true, sometimes with mathematical rigour, in the case of objects at thermodynamic equilibrium—a relatively general situation, but one which is much simpler than the conditions that interest us in this book. The physicist James Clerk Maxwell (1831) suggested an example of the relation between levels which is nowadays used as a paradigm in elementary physics textbooks. Making some modest microscopic hypotheses, namely concerning statistical properties of the velocities of the molecules involved, Maxwell showed that the pressure measured by a manometer in a tank of dilute gas depends on temperature and is simply related to the constant interchange of momenta that occurs as the molecules collide with the inner wall of the tank. The macroscopic pressure is merely a sum of microscopic pushes!
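
    To make that last sentence concrete, here is the standard kinetic-theory computation hinted at above (a textbook result; the chapter itself does not spell it out). For N molecules of mass m in a volume V, adding up the momentum transferred to the walls per unit area and time gives

    $$ P = \frac{1}{3}\,\frac{N}{V}\, m \langle v^{2} \rangle, \qquad \frac{1}{2} m \langle v^{2} \rangle = \frac{3}{2} k_{B} T \quad\Longrightarrow\quad PV = N k_{B} T $$

    where $$ \langle v^{2} \rangle $$ is the mean squared molecular speed and $$ k_{B} $$ is Boltzmann's constant: the macroscopic relation between pressure and temperature follows from purely microscopic hypotheses about molecular velocities.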

    1.2 Mathematical Metaphors

    Relating descriptions at different levels is difficult in practice. Seldom can it be done with the full mathematical rigour we would desire, but nowadays other quite precise connections, either analytical or numerical, can be found. It can often be done by emulating or imitating reality—that is, by designing an appropriate model and simulating its behaviour on a computer. Figure 1.2 illustrates this method. The upper panel is an example of turbulence, but a fake one, since it does not represent a natural observation but rather a behaviour that ensues from equations. The numerical treatment on a computer of the appropriate equations yields an image which is indistinguishable from what we would observe in a lab experiment performed under the same conditions. In fact, the simulation in this figure contains the characteristics that are known to be important for the turbulence we observe in nature.


    Fig. 1.2

    Numerical simulations (upper panel) are able to mimic turbulent situations that arise in nature (photograph in the lower panel)

    It is today a proven fact that, in a great many practical cases, natural phenomena can be imitated by working with a computer and using only the fundamental physical information about the system in question. For example, with only Newton's equations, knowledge of the forces between molecules, and an appropriate treatment of the boundary conditions—that is, of the size, dimensions and symmetries of the problem—some of the phenomena observed in gases and liquids can be reproduced.⁴ When this is achieved, one has shown numerically that these macroscopic phenomena follow from a Newtonian microscopic description. At other times we can relate closer levels of description, as is done for weather prediction, where, for want of microscopic information (where are all the molecules in the atmosphere and how are they moving at this precise moment?) and computing power, one starts from an elementary phenomenological description (for instance, what local currents are there?) to predict—quite successfully—the large-scale behaviour that matters to us.⁵
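
    As an illustration of the first procedure, the sketch below—my own minimal example, not taken from the book, written in reduced Lennard-Jones units with illustrative parameter values—integrates Newton's equations for a few molecules with an assumed intermolecular force and periodic boundary conditions:

        import numpy as np

        # Minimal molecular-dynamics sketch: N particles in a 2-D periodic box
        # interacting via a Lennard-Jones force, integrated with velocity Verlet.
        rng = np.random.default_rng(0)
        N, L, dt, steps = 16, 6.0, 0.005, 1000

        # Start the molecules on a lattice so that no pair overlaps initially.
        side = int(np.ceil(np.sqrt(N)))
        grid = [(i, j) for i in range(side) for j in range(side)][:N]
        pos = (np.array(grid, dtype=float) + 0.5) * (L / side)
        vel = rng.normal(0.0, 0.5, size=(N, 2))

        def forces(pos):
            """Pairwise Lennard-Jones forces with minimum-image boundaries."""
            f = np.zeros_like(pos)
            for i in range(N):
                for j in range(i + 1, N):
                    d = pos[i] - pos[j]
                    d -= L * np.round(d / L)      # periodic (minimum-image) convention
                    r2 = float(np.dot(d, d))
                    if r2 < 9.0:                  # ignore pairs beyond r = 3
                        inv6 = 1.0 / r2**3
                        fij = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * d
                        f[i] += fij
                        f[j] -= fij
            return f

        f = forces(pos)
        for step in range(steps):
            vel += 0.5 * dt * f                   # half kick (unit masses assumed)
            pos = (pos + dt * vel) % L            # drift, wrapped back into the box
            f = forces(pos)
            vel += 0.5 * dt * f                   # second half kick
            if step % 200 == 0:
                print(f"step {step:4d}  kinetic energy = {0.5 * np.sum(vel**2):.3f}")

    From trajectories like these one can already measure macroscopic quantities—temperature from the mean kinetic energy, pressure from the momentum exchanged at the boundaries—exactly in the spirit of the numerical experiments described next.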

    These computer simulations have come to play an important role in scientific research, mainly because they allow us to play the following game, the essence of the scientific method that I shall discuss later. Let us suppose that, at some high level of observation (the macroscopic level, say) we have come across a phenomenon we would like to understand. Our intuition, and the conviction that there is a relation between levels, might allow us to put forward a detailed model (for example, a microscopic one) for the phenomenon. We express this model as an appropriate code for simulating its behaviour in the memory of a computer, as someone did to get the picture in the upper panel of Fig. 1.2. We are then in a position to measure—in practice, to calculate from the data produced by the code—the magnitudes of interest. It is a numerical experiment, since it will provide similar information to that of a real experiment, except that it refers to our model instead of to the original system of interest (a gas, a liquid, a star…). It is obvious that the model will be an enormous simplification of reality but, if it captures the essential physical characteristics of the phenomenon in question (something we will have to make sure of by, for example, studying variations and comparing the output to measurements performed in a lab), it will prove remarkably useful. It will allow us to evaluate theories, which must be able to describe the relatively simple behaviour of the model before that of complex reality. Also, thanks to the flexibility of this procedure, it can sometimes help to identify the mechanisms or physical processes that a good theory should incorporate. There is no other method, analytical or experimental, which lets us look into and relate the different levels of observation in nature like this one does.

    A very simple model recently used by physicists serves to illustrate the procedure just described. The hypothesis of punctuated equilibrium says that the evolution of species is not gradual but intrinsically intermittent.⁶ The periods of smooth change are believed to be interrupted by large events involving the extinction of many species and the emergence of new ones. There is no need of external cataclysms (meteorite collisions, climatic changes or volcanic eruptions) to explain massive mutations or extinctions (Fig. 1.3) such as, for instance, the disappearance of the dinosaurs—along with nearly 70 % of species—tens of millions of years ago.⁷ According to this hypothesis, large events have a certain probability of happening, and the same causes as small ones.


    Fig. 1.3

    Fraction of genera that are observed in one temporal interval but not in the following one, according to marine fossils.⁸ (Horizontal axis is for millions of years past.) Note the consistency with the hypothesis that extinctions and mutations occur in bursts

    In an attempt to ascertain whether this hypothesis is plausible, and in order to better understand observations such as those in Fig. 1.3, the following has been proposed. We make N equidistant marks, or nodes, on a line that is closed at its ends to form a circumference (Fig. 1.4). At each node we imagine that there is a species, which we characterize with a single quality, such as its degree of adaptation to the environment or its reproductive success. For simplicity, we represent this quality with a random number—namely, a number chosen at random between 0 and 1. From this initial condition, we find the lowest number on the line and replace it with a new random number. We iterate this simple step many times, keeping track of the numbers that appear. The idea is thus to look into the consequences of random mutations.


    Fig. 1.4

    Toy model for the evolution of species. The marks on the horizontal line represent different species, and their height, between 0 and 1, is their degree of adaptation to the environment. The minimum value in this particular case and its neighbors are indicated

    The model assumes that the changes are associated with the probability that each species has of surviving in its environment, so we constantly eliminate the least well-adapted species and replace it with another with a new degree of adaptation. This is in agreement with Charles R. Darwin's (1809) principle that weak species are more likely to mutate. With only this premise, no interesting behaviour ensues from the model. When implemented on a computer, all we observe is that, after a sufficient number of iterations, all species tend to a degree of adaptation equal to 1—a trivial solution. We suspect that the model is leaving out something essential, and conjecture that it should take into account the conditioning that can occur between different species.

    Allowing for this, we can imagine that, once the lowest degree of adaptation has been found at each step, we substitute this number and that of both its neighbours, to the left and to the right (Fig. 1.4), by three new random numbers. The idea is to mimic the co-evolution of interdependent species—that is, the extinction of one species drags down all of those (two, in this case) which are most influenced by it. This model is so simple that it cannot be considered realistic. For example, not only does it over-simplify the interaction between species, but it represents a whole species with just one degree of adaptation, even though we know that selection mechanisms operate at the individual level. But we trust that replacing complexity with an average might not invalidate the usefulness of the model, since we are not attempting to make predictions about specific cases, but only to explain (statistically) a great many data—such as the generic bursting seen in Fig. 1.3.
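
    A minimal sketch of this toy model—known in the literature as the Bak–Sneppen model; the code below is my illustration, not the author's, with illustrative parameter values—takes only a few lines:

        import numpy as np

        # Toy co-evolution model on a ring: each step, mutate the least-adapted
        # species and (if NEIGHBOURS) its two neighbours. NEIGHBOURS = False
        # recovers the trivial variant in which all degrees drift towards 1.
        rng = np.random.default_rng(1)
        N, steps, NEIGHBOURS = 200, 200_000, True

        fitness = rng.random(N)                 # degrees of adaptation in [0, 1)
        for _ in range(steps):
            i = int(np.argmin(fitness))         # locate the least-adapted species
            fitness[i] = rng.random()           # ...and mutate it
            if NEIGHBOURS:                      # co-evolution drags the neighbours
                fitness[(i - 1) % N] = rng.random()
                fitness[(i + 1) % N] = rng.random()

        print("fraction above 0.67:", round(float(np.mean(fitness > 0.67)), 2))

    With NEIGHBOURS = False, the run reproduces the trivial drift towards 1 described above; with the neighbours included, something very different happens, as the next paragraph describes.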

    The fact is that this simplified model meets our expectations for it. Species begin with degrees of adaptation uniformly distributed between 0 and 1 but, with time (in practice, after many iterations), almost all of them adjust their degree so as to be above a certain threshold, which turns out to be 0.67, as in the simulation shown in Fig. 1.5. Specifically, when the model is implemented on a computer, one observes that, over time, each little symbol in this figure oscillates randomly along the vertical axis, mostly above the 0.67 threshold. However, as a consequence of a cooperative effect, species at some random point along the line fall suddenly to lower degrees, as for the case shown in Fig. 1.5. These sudden changes are consistent with the data shown in Fig. 1.3 and seem to cast doubt upon the hypothesis of gradual change that can only be perturbed by external catastrophes.


    Fig. 1.5

    Snapshot, taken from a computer screen, of an extinction event in the model discussed. Each black mark corresponds to the height on the bar that in Fig. 1.4 represents the degree of adaptation of a species. That is, the vertical axis goes from 0 (lower line) to 1 (upper line) and the horizontal axis represents position in the system, as in the previous figure. The dashed horizontal line shows the 0.67 threshold⁹

    This model is still too simple to describe the evolution of species. For instance, it does not allow us to relate discontinuities to biodiversity, so more realistic variations have since been put forward. I shall not go into these here, since what has been said is enough to motivate two important considerations. Firstly, the method is inseparable from the computer. In fact, even its simplest version represents such a formidable mathematical challenge that its properties are known only thanks to computer simulations. This circumstance recurs in many cases throughout the following chapters, and is a peculiarity of some areas of current research. Computation—or, to be more precise, the creative use of computers in science—has come to play an extremely important role in the development of theories, as an essential complement to lab-based experiments. As mentioned above and as the examples illustrate, it allows us to simulate impossible or very costly experiments, test theories, discover relevant mechanisms, and put forward—with the requisite degree of caution—the application to real-life systems of what can be learned from the simplified model.

    This example also suggests that, even if we knew all its fundamental laws, the world might still seem dark and mysterious to us. We need something more. We must go beyond the fundamental or basic level of observation, relating it to the levels that contain the phenomenology of interest in each case. As we shall see in examples, only by checking the relationship between levels can we aspire to a complete description. I believe that only in this way will we ever come to understand how a natural world governed by simple laws has such a predisposition to forming complex structures. Only in this way shall we be able to uncover the higher organizing principles that are assumed to underpin, say, the functioning of the immune system or the evolution of ecological systems.

    1.3 Natural Complexity

    In recognition of the fact that natural objects may exhibit complicated phenomenology, we sometimes call them complex systems. There is no single definition of complexity in this context, but an operational one can be derived from what I have said up to now. A complex system is made up of many elements which, as a consequence of mutual cooperation, exhibit a phenomenology that is difficult to predict. The elements and the rules by which they interact may be considered well known; however, it is far from easy to explain the emergent properties at a higher level of observation as a consequence of the properties of the elements at a lower one.

    To understand nature by following this procedure, we have to begin by identifying the elements that make up the complex system of interest, something which is not always easy. In a biological system, it might be convenient to consider the elements to be cells—or groups of these, or molecular agglomerates, or complex molecules—but it does not seem likely that we should find significant emergent properties arising from the collaboration between atoms or, even less, between quarks. These may, however, be relevant for the study of the properties of matter under extreme conditions. Researchers have to decide, in each case and through a detailed initial analysis, which elements of the system give rise to the cooperation behind the phenomena they wish to understand.

    At the same time, the tendency towards order in the formation of structures exhibited by the universe at various levels as a consequence of collaboration among elements is varied and peculiar. It involves singular concepts—such as criticality, scale invariance, fractal geometry and universality—that I shall explore with you in the following chapters. It is also characteristic of complex systems to have several possible ways of undergoing certain changes, and to be highly sensitive to differences in their initial conditions or perturbations—which makes predicting their future behaviour difficult.

    It is clear from what I have already said that a protein and the current financial system are complex systems.¹⁰ In fact, they are the norm in biology and sociology. However, the dilute gas we used earlier as an example is not a complex system as regards pressure. Pressure is a sum of pushes and, in this case, the principle of superposition is satisfied—the effects of the parts are simply added—which implies relative simplicity. The emergence of complexity requires this principle to fail. It occurs when the whole is not simply the sum of the parts, which is actually the most frequent case. For instance, we know that a large group of animals, whether rational or not, can behave, in a crisis with intense mutual interaction, in ways unimaginable even to someone well acquainted with their individual behaviour. And this might occur only with respect to some of the observed properties. The complexity of the problem depends on the aspect of interest. A gas that is simple as far as pressure goes may turn out to be complex as regards the phenomenon of diffusion. There is something more to this: as we shall see, diffusion is an irreversible process.

    The causes of complexity are invisible at a microscopic scale—which refers to properties of the individual constituents. As a consequence, complexity, although consistent with the most traditional physics, cannot be described by it. We still do not know whether we must search for a unique principle of organisation behind all the phenomena we are interested in. It is a difficult task, since we should have to explain phenomena as diverse as turbulence in a fluid and the interdependency of financial markets… as well as a multitude of emergent behaviours in biological systems, which are a limiting case of complexity. It has been argued that we should look for lessons rather than general laws—that is, find inspiration in the way wise people use their experience, learning in some situations and then, with intuition and dexterity, extrapolating to understand a different scenario—which again points to the desirability of simple models. As mentioned in the prologue, this is reminiscent of the postmodern attitude which has sought to highlight differences and to rebel against general principles and absolute truth.¹¹ The reflection in our context of the ideas of the philosopher Jean-François Lyotard (1924) and his
