"The fact that our subjective thought and the objective world are subject to the same laws, and hence, too, that in the final analysis they cannot contradict each other in their results, but must coincide, governs absolutely our whole theoretical thought." (Engels)
The content of "pure" mathematics is ultimately derived from the material world. The idea that the truths of mathematics are a special kind of knowledge that is inborn or of divine inspiration does not bear serious examination. Mathematics deals with the quantitative relations of the real world. Its so-called axioms only appear to be self-evident to us because they are the product of a long period of observation and experience of reality. Unfortunately, this fact seems to be lost on many present-day theoretical mathematicians who delude themselves into thinking that their "pure" subject has nothing to do with the crude world of material things. This is a clear example of the negative consequences of carrying the division of labour to the extreme.
From Pythagoras onwards, the most extravagant claims have been made on behalf of mathematics, which has been portrayed as the queen of the sciences, the magic key opening all doors of the universe. Breaking free from all contact with the physical world, mathematics appeared to soar into the heavens, where it acquired a god-like existence, obeying no rule but its own. Thus, the great mathematician Henri Poincaré, in the early years of this century, could claim that the laws of science did not relate to the real world at all, but represented arbitrary conventions destined to promote a more convenient and "useful" description of the corresponding phenomena. Certain theoretical physicists now openly state that the validity of their mathematical models does not depend upon empirical verification, but on the aesthetic qualities of their equations.
The theories of mathematics have been, on the one side, the source of tremendous scientific advance, and, on the other, the origin of numerous errors and misconceptions which have had, and are still having, profoundly negative consequences. The central error is to attempt to reduce the complex, dynamic and contradictory workings of nature to static, orderly quantitative formulae. Nature is presented in a formalistic manner, as a single-dimensional point, which becomes a line, which becomes a plane, a cube, a sphere, and so on. However, the idea that pure mathematics is absolute thought, unsullied by contact with material things, is far from the truth. We use the decimal system, not because of logical deduction or "free will," but because we have ten fingers. The word "digital" comes from the Latin word for fingers. And to this day, a schoolboy will secretly count his material fingers beneath a material desk, before arriving at the answer to an abstract mathematical problem. In so doing, the child is unconsciously retracing the way in which early humans learned to count.
The material origins of the abstractions of mathematics were no secret to Aristotle: "The mathematician," he wrote, "investigates abstractions. He eliminates all sensible qualities like weight, density, temperature, etc., leaving only the quantitative and continuous (in one, two or three dimensions) and its essential attributes." Elsewhere he says: "Mathematical objects cannot exist apart from sensible (i.e., material) things." And "We have no experience of anything which consists of lines or planes or points, as we should have if these things were material substances. Lines, etc., may be prior in definition to body, but they are not on that account prior in substance." (1)
The development of mathematics is the result of very material human needs. Early man at first had only ten number sounds, precisely because he counted, like a small child, on his fingers. The exception was the Mayas of Central America, who had a numerical system based on twenty instead of ten, probably because they counted their toes as well as their fingers. Living in a simple hunter-gatherer society, without money or private property, our ancestors had no need of large numbers. To convey a number larger than ten, they merely combined some of the ten sounds connected with the fingers. Thus, one more than ten is expressed by "one-ten" (undecim, in Latin, or ein-lifon—"one over"—in early Teutonic, which becomes eleven in modern English). All the other numbers are only combinations of the original ten sounds, with the exception of five additions—hundred, thousand, million, billion and trillion.
The real origin of numbers was already understood by the great English materialist philosopher of the 17th century Thomas Hobbes: "And it seems, there was a time when those names of number were not in use; and men were fayn to apply their fingers of one or both hands, to those things they desired to keep account of; and that thence it proceeded, that now our numerall words are but ten, in any Nation, and in some but five, and then they begin again." (2)
Alfred Hooper explains: "Just because primitive man invented the same number of number-sounds as he had fingers, our number-scale today is a decimal one, that is, a scale based on ten, and consisting of endless repetitions of the first ten basic number-sounds…Had men been given twelve fingers instead of ten, we should doubtless have a duo-decimal number-scale today, one based on twelve, consisting of endless repetitions of twelve basic number-sounds." (3) In fact, a duodecimal system has certain advantages in comparison to the decimal one. Whereas ten can only be exactly divided by two and five, twelve can be divided exactly by two, three, four and six.
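Hooper's point about the divisibility of the two bases can be checked in a few lines of Python (a modern illustration, of course, not anything the pioneers of counting had in mind):

```python
# Compare the proper divisors of the decimal base (10) and a duodecimal base (12).
def proper_divisors(n):
    """Divisors of n strictly between 1 and n."""
    return [d for d in range(2, n) if n % d == 0]

print(proper_divisors(10))  # [2, 5]
print(proper_divisors(12))  # [2, 3, 4, 6]
```

The richer set of divisors is arguably why dozens and the gross survived so long in trade and why twelve still rules the clock face.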
The Roman numerals are pictorial representations of fingers. Probably the symbol for five represented the gap between thumb and fingers. The word "calculus" (from which we derive "calculate") means "pebble" in Latin, connected with the method of counting stone beads on an abacus. These, and countless other examples, serve to illustrate how mathematics did not arise from the free operation of the human mind, but is the product of a lengthy process of social evolution, trial and error, observation and experiment, which gradually becomes separated out as a body of knowledge of an apparently abstract character. Similarly, our present systems of weights and measures have been derived from material objects. The origin of the English unit of measurement, the foot, is self-evident, as is the Spanish word for an inch, "pulgada," which means a thumb. The origin of the most basic mathematical symbols + and – has nothing to do with mathematics. They were the signs used in the Middle Ages by merchants to calculate excess or deficiency of quantities of goods in warehouses.
The need to build dwellings to protect themselves from the elements forced early humans to find the best and most practical way of cutting wood so that the ends of the pieces fitted closely together. This meant the discovery of the right angle and the carpenter's square. The need to build a house on level ground led to the invention of the kind of levelling instrument depicted in Egyptian and Roman tombs, consisting of three pieces of wood joined together in an isosceles triangle, with a cord fastened at the apex. Such simple practical tools were used in the construction of the pyramids. The Egyptian priests accumulated a huge body of mathematical knowledge derived ultimately from such practical activity.
The very word "geometry" betrays its practical origins. It means simply "earth-measurement." The virtue of the Greeks was to give a finished theoretical expression to these discoveries. However, in presenting their theorems as the pure product of logical deduction, they were misleading themselves and future generations. Ultimately, mathematics derives from material reality, and, indeed, could have no application if this were not the case. Even the famous theorem of Pythagoras, known to every school pupil, that a square drawn on the longest side of a right triangle is equal to the sum of the squares drawn on the other two sides, had been already worked out in practice by the Egyptians.
Engels, and before him Hegel, pointed to the numerous contradictions that abound in mathematics. This was always the case, despite the claims of perfection and almost papal infallibility made by mathematicians for their "sublime science." This fashion was started by the Pythagoreans, with their mystical conception of Number, and the harmony of the universe. Very soon, however, they found out that their harmonious and orderly mathematical universe was plagued with contradictions, the solution of which drove them to despair. For example, they found that it was impossible to express the length of the diagonal of a square in numbers.
The later Pythagoreans discovered that there were many numbers, like the square root of two, which could not be expressed as a ratio of whole numbers. Such a number is called an "irrational number." But although the square root of two cannot be expressed as a fraction, it is needed to find the length of the diagonal of a square, or the hypotenuse of a right triangle. Present-day mathematics contains a veritable menagerie of such strange animals, still untamed, despite all efforts to domesticate them, but which, once accepted for what they are, render valuable services. Thus we have irrational numbers, imaginary numbers, transcendental numbers, transfinite numbers, all displaying strange and contradictory features, and all indispensable to the workings of modern science.
The mysterious π (pi) was well known to the ancient Greeks, and generations of schoolchildren have learned to identify it as the ratio between the circumference and diameter of a circle. Yet, strangely, its exact value cannot be found. Archimedes calculated its approximate value by a method known as "exhaustion." It was between 3.14085 and 3.14286. But if we try to write down the exact value, we get a strange result: π = 3.14159265358979323846264338327950…and so on ad infinitum. Pi (π), which is now known as a transcendental number, is absolutely necessary to find the circumference of a circle, but cannot be expressed as the solution to an algebraic equation. Then we have the square root of minus one, which is not an arithmetical number at all. Mathematicians refer to it as an "imaginary number," since no real number, when multiplied by itself, can give the result of minus one, because two minuses give a plus. A most peculiar creature, this—but not a figment of the imagination, despite its name. In Anti-Dühring, Engels points out that:
"It is a contradiction that a negative magnitude should be the square of anything, for every negative magnitude multiplied by itself gives a positive square. The square root of minus one is therefore not only a contradiction, but even an absurd contradiction, a real absurdity. And yet Ã–1 is in many cases a necessary result of correct mathematical operations. Furthermore, where would mathematics—lower or higher—be, if it were prohibited from operating with Ã–1?" (4) Engels’ remark is even more true today. This contradictory combination of plus and minus plays an absolutely crucial role in quantum mechanics, where it appears in a whole host of equations, which are fundamental to modern science.
That this mathematics involves startling contradictions is not open to doubt. Here is what Banesh Hoffmann has to say about it:
"That such a formula should have any connection with that world of strict experiment which is the world of physics is in itself difficult to believe. That it was to be the deep foundation of the new physics, and that it should actually probe more profoundly than anything before towards the very core of science and metaphysics is as incredible as must once have seemed the doctrine that the earth is round." (5)
Nowadays, the use of the so-called "imaginary" numbers is taken for granted. The square root of minus one is used for a whole range of necessary operations, such as the construction of electrical circuits. Transfinite numbers, in turn, are needed to understand the nature of time and space. Modern science, and particularly quantum mechanics, could not manage without the use of mathematical concepts which are frankly contradictory in character. Paul Dirac, one of the founders of quantum mechanics, discovered the "Q" numbers, which defy the laws of ordinary mathematics which state that a multiplied by b is the same thing as b multiplied by a.
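Dirac's non-commuting quantities behave like matrices, for which the product in general depends on the order of the factors. A minimal sketch (the particular matrices here are arbitrary illustrations, not Dirac's own operators):

```python
# Two 2x2 matrices whose product depends on the order of the factors,
# just as Dirac's q-numbers defy the rule that a times b equals b times a.
def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]]
```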
The idea of the infinite seems difficult to grasp, because, at first sight, it is beyond all human experience. The human mind is accustomed to dealing with finite things, reflected in finite ideas. Everything has a beginning and an end. This is a familiar thought. But what is familiar is not necessarily true. The history of mathematical thought has some highly instructive lessons on this score. For a long time, mathematicians, at least in Europe, sought to banish the concept of infinity. Their reasons for so doing are obvious enough. Apart from the evident difficulty in conceptualising infinity, in purely mathematical terms it involves a contradiction. Mathematics deals with definite magnitudes. Infinity by its very nature cannot be counted or measured. This means that there is a real conflict between the two. For that reason, the great mathematicians of ancient Greece avoided infinity like the plague. Despite this, from the beginnings of philosophy, men speculated about infinity. Anaximander (610-547 B.C.) took it as the basis of his philosophy.
The paradoxes of Zeno (c. 450 B.C.) point to the difficulty inherent in the idea of infinitesimal quantity as a constituent of continuous magnitudes by attempting to prove that movement is an illusion. Zeno "disproved" motion in different ways. He argued that a body in motion, before reaching a given point, must first have travelled half the distance. But before this, it must have travelled half of that half, and so on ad infinitum. Thus, when two bodies are moving in the same direction, and the one behind at a fixed distance from the one in front is moving faster, we assume that it will overtake the other. Not so, says Zeno. "The slower one can never be overtaken by the quicker." This is the famous paradox of Achilles the Swift. Imagine a race between Achilles and a tortoise. Suppose that Achilles can run ten times faster than the tortoise, which has a 1000-metre start. By the time Achilles has covered 1000 metres, the tortoise will be 100 metres ahead; when Achilles has covered that 100 metres, the tortoise will be ten metres ahead; when he covers those ten metres, it will be one metre ahead; when he covers that metre, it will be one tenth of a metre ahead, and so on to infinity.
Zeno’s paradoxes do not prove that movement is an illusion, or that Achilles, in practice, will not overtake the tortoise, but they do reveal brilliantly the limitations of the kind of thinking now known as formal logic. The attempt to eliminate all contradiction from reality, as the Eleatics did, inevitably leads to this kind of insoluble paradox, or antinomy, as Kant later called it. In order to prove that a line could not consist of an infinite number of points, Zeno claimed that, if it were really so, then Achilles would never overtake the tortoise. There really is a logical problem here. As Alfred Hooper explains:
"This paradox still perplexes even those who know that it is possible to find the sum of an infinite series of numbers forming a geometrical progression whose common ratio is less than 1, and whose terms consequently become smaller and smaller and thus ‘converge’ on some limiting value." (6)
In fact, Zeno had uncovered a contradiction in mathematical thought which would have to wait two thousand years for a solution. The contradiction relates to the use of the infinite. From Pythagoras right up to the discovery of the differential and integral calculus in the 17th century, mathematicians went to great lengths to avoid the use of the concept of infinity. Only the great genius Archimedes approached the subject, but still avoided it by using a roundabout method. The early atomists, starting with Leucippus, who may have been a pupil of Zeno, stated that the atoms, "indivisible and infinite in number, move about ceaselessly in empty space, of infinite extent."
Modern physics accepts that the number of instants between two seconds is infinite, just as is the number of instants in a span of time with neither beginning nor end. The universe itself consists of an infinite chain of cause and effect, ceaselessly changing, moving and developing. This has nothing in common with the crude and one-sided notion of infinity contained in the infinite series of numbers in simple arithmetic, in which "infinity" always "starts" with the number one! This is what Hegel called "Bad Infinity."
The greatest of Greek mathematicians, Archimedes (287-212 B.C.) made effective use of indivisibles in geometry, but considered the idea of infinitely large and small as without logical foundation. Likewise, Aristotle argued that, since a body must have form, it must be bounded, and therefore cannot be infinite. While accepting that there were two kinds of "potential" infinities—successive addition in arithmetic (infinitely large), and successive subdivision in geometry (infinitely small)—he nevertheless polemicised against geometers who held that a line segment is composed of infinitely many fixed infinitesimals, or indivisibles.
This denial of the infinite constituted a real barrier to the development of classical Greek mathematics. By contrast, the Indian mathematicians had no such scruples and made great advances, which, via the Arabs, later entered Europe. The attempt to banish contradiction from thought, in accordance with the rigid schemas of formal logic, held back the development of mathematics. But the adventurous spirit of the Renaissance opened men’s minds to new possibilities which were, in truth, infinite. In his book Two New Sciences (1638), Galileo pointed out that every positive integer (whole number) has only one perfect square, and every perfect square is the square of only one positive integer. Thus, in a sense, there are just as many perfect squares as there are positive integers. This immediately leads us into a logical contradiction. It contradicts the axiom that the whole is greater than any of its parts, inasmuch as not all the positive integers are perfect squares, and all the perfect squares form part of all the positive integers.
This is only one of the numerous paradoxes which have plagued mathematics ever since the Renaissance when men began to subject their thoughts and assumptions to a critical analysis. As a result of this, slowly, and in the teeth of stubborn resistance from conservative minds, one by one the supposedly unassailable axioms and "eternal truths" of mathematics have been overthrown. We arrive at the point where the entire edifice has been shown to be unsound and in need of a thoroughgoing reconstruction on more solid, yet more flexible foundations, which are already in the process of being laid, and which will inevitably have a dialectical character.
Many of the so-called axioms of classical Greek mathematics were already undermined by the discovery of the differential and integral calculus, the greatest breakthrough in mathematics since the Middle Ages. It is an axiom of geometry that straight and curved are absolute opposites, and that the two are incommensurable, that is, the one cannot be expressed in terms of the other. Yet, in the last analysis, straight and curved in the differential calculus are regarded as equal. As Engels points out, the basis for this was laid a long time before it was elaborated by Leibniz and Newton: "The turning-point in mathematics was Descartes’ variable magnitude. With that came motion and hence dialectics in mathematics, and at once, too, of necessity the differential and integral calculus, which moreover immediately begins, and which on the whole was completed by Newton and Leibniz, not discovered by them." (7)
The discovery of the calculus opened up a whole new horizon for mathematics and science in general. Once the old taboos and prohibitions were lifted, mathematicians were free to investigate entirely new areas. But they made use of infinitely large and small numbers uncritically, without considering their logical and conceptual implications. The use of infinitely small and great quantities was regarded as a kind of "useful fiction," which, for some reason which was not at all clear, always gave the correct result. In the section on Quantity in the first volume of The Science of Logic, Hegel points out that, while the introduction of the mathematical infinite opened up new horizons for mathematics, and led to important results, it remained unexplained, because it clashed with the existing traditions and methods:
"But in the method of the mathematical infinite mathematics finds a radical contradiction to that very method which is characteristic of itself, and on which it rests as a science. For the calculation of the infinite admits of, and demands, modes of procedure which mathematics, when it operates with finite magnitudes, must altogether reject, and at the same time it treats these infinite magnitudes as finite Quanta, seeking to apply to the former those same methods which are valid for the latter." (8)
The result was a long period of controversy concerning the validity of the calculus. Berkeley denounced it as being in open contradiction to the laws of logic. Newton, who made use of the new method in his Principia, felt obliged to conceal the fact from the public, for fear of an adverse reaction. In the early 18th century, Bernard Fontenelle finally had the courage to state categorically that, inasmuch as there are infinitely many natural numbers, an infinite number exists as truly as do finite numbers, and that the reciprocal of infinity is an infinitesimal. However, he was contradicted by Georges de Buffon, who rejected infinity as an illusion. Even the great intellect of d’Alembert was incapable of accepting this idea. In his Encyclopédie article on the differential, he denied the existence of infinity, except in the negative sense of a limit on finite quantities.
The concept of "limit" was in fact introduced in an attempt to get round the contradiction inherent in infinity. This was especially popular in the 19th century, when mathematicians were no longer prepared simply to accept the calculus unthinkingly, as the earlier generation had been content to do. The differential calculus postulated the existence of infinitesimally small magnitudes of varying orders—a first differential, a second differential, and so on to infinity. By introducing the concept of "limit" they at least created the appearance that an actual infinity was not involved. The intention was to make the idea of infinity seem subjective, to deny it objectivity. The variables were said to be potentially infinitely small, in that they become less than any given quantity, as potentially infinite, in that they become larger than any preassigned magnitude. In other words, "as big or small as you like!" This sleight of hand did not remove the difficulty, but only provided a fig-leaf to cover up the logical contradictions involved in the calculus.
The great German mathematician Carl Friedrich Gauss (1777-1855) was prepared to accept the mathematical infinite, but expressed horror at the idea of real infinity. However, his contemporary Bernard Bolzano, setting out from Galileo’s paradox, began a serious study of the paradoxes implicit in the idea of a "completed infinite." This work was further developed by Richard Dedekind (1831-1916), who characterised the infinite as something positive, and pointed out that, in fact, a finite set can be regarded as negative (that is, as one that is not infinite). Finally, Georg Cantor (1845-1918) went far beyond the definition of infinite sets and developed an entirely new arithmetic of "transfinite numbers." Cantor’s papers, beginning in 1870, are a review of the whole history of the infinite, beginning with Democritus. Out of this, there developed a whole new branch of mathematics, based on the theory of sets.
Cantor showed that the points in an area, however large, or in a volume or a continuum of still higher dimension, can be matched against the points on a line or a segment, no matter how small it may be. Just as there can be no last finite number, so there can be no last transfinite number. Thus, after Cantor, there can be no argument about the central place of the infinite in mathematics. Moreover, his work revealed a series of paradoxes which have plagued modern mathematics, and have yet to be resolved.
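The flavour of Cantor's matching can be conveyed by interleaving digits: a point (x, y) of the unit square is paired with a single point of the unit segment. This is only a sketch of the idea; a rigorous proof has to deal with ambiguities such as 0.0999… = 0.1, which this toy version ignores:

```python
# Interleave the digit expansions of a point's two coordinates:
# (0.x1x2x3..., 0.y1y2y3...) in the unit square maps to the single
# point 0.x1y1x2y2x3y3... of the unit segment.
def interleave(x_digits, y_digits):
    """Merge two equal-length digit strings, alternating their digits."""
    return "".join(a + b for a, b in zip(x_digits, y_digits))

print(interleave("25", "73"))    # 2753    (0.25, 0.73) -> 0.2753
print(interleave("141", "718"))  # 174118  (0.141, 0.718) -> 0.174118
```

Since the process can be reversed, taking the odd and even digits apart again, the two-dimensional square has "no more" points than the one-dimensional segment.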
All modern scientific analysis relies on the concept of continuity, that is to say, that between two points in space, there is an infinite number of other points, and also that, between any two points in time there is an infinite number of other moments. Without making these assumptions, modern mathematics simply could not function. Yet such contradictory concepts would have been indignantly rejected, or at least regarded with suspicion, by earlier generations. Only the dialectical genius of Hegel (a great mathematician incidentally) was capable of anticipating all this in his analysis of finite and infinite, space, time and motion.
Yet despite all the evidence, many modern mathematicians persist in denying the objectivity of infinity, while accepting its validity as a phenomenon of "pure" mathematics. Such a division makes no sense at all. For unless mathematics were able to reflect the real, objective world, what use would it be? There is a certain tendency in modern mathematics (and, by extension, incredibly, in theoretical physics) to revert to idealism in its most mystical form, alleging that the validity of an equation is purely a question of its aesthetic value, with no reference to the material world.
The very fact that mathematical operations can be applied to the real world and get meaningful results indicates that there is an affinity between the two. Otherwise, mathematics would have no practical application, which is clearly not the case. The reason why infinity can be used, and must be used, in modern mathematics is because it corresponds to the existence of infinity in nature itself, which has imposed itself upon mathematics, like an uninvited guest, despite all the attempts to bar the door against it.
The reason why it took so long for mathematics to accept infinity was explained very well by Engels:
"It is clear that an infinity which has an end but no beginning is neither more or less infinite than one with a beginning but no end. The slightest dialectical insight should have told Herr Dühring that beginning and end necessarily belong together, like the North Pole and the South Pole, and that if the end is left out, the beginning just becomes the end—the one end which the series has; and vice versa. The whole deception would be impossible but for the mathematical usage of working with infinite series. Because in mathematics it is necessary to start from determinate, finite terms in order to reach the indeterminate, the infinite, all mathematical series, positive and negative, must start with 1, or they cannot be used for calculation. But the logical need of the mathematician is far from being a compulsory law for the real world." (9)
From our school days we are taught to look upon mathematics, with its self-evident truths ("axioms") and its rigorous logical deductions, as the last word in scientific exactitude. In 1900, all this seemed certain, although at the International Congress of Mathematicians held that year, David Hilbert set forth a list of the 23 most significant unsolved mathematical problems. From that point things have got steadily more complicated, to the point where it is possible to talk of a real crisis in theoretical mathematics. In his widely-read book Mathematics: The Loss of Certainty, published in 1980, Morris Kline describes the situation thus:
"Creations of the early 19th century, strange geometries and strange algebras, forced mathematicians, reluctantly and grudgingly, to realise that mathematics proper and the mathematical laws of science were not truths. They found, for example, that several differing geometries fit spatial experience equally well. All could not be truths. Apparently mathematical design was not inherent in nature, or if it was, man’s mathematics was not necessarily the account of that design. The key to reality had been lost. This realisation was the first of the calamities to befall mathematics.
"The creation of these new geometries and algebras caused mathematicians to experience a shock of another nature. The conviction that they were obtaining truths had entranced them so much that they had rushed impetuously to secure these seeming truths at the cost of sound reasoning. The realisation that mathematics was not a body of truths shook their confidence in what they had created, and they undertook to reexamine their creations. They were dismayed to find that the logic of mathematics was in sad shape."
At the beginning of the 20th century, they set about trying to solve the unsolved problems, remove the contradictions, and elaborate a new and foolproof system of mathematics. As Kline explains:
"By 1900 the mathematicians believed they had achieved their goal. Though they had to be content with mathematics as an approximate description of nature and many even abandoned the belief in the mathematical design of nature, they did gloat over their reconstruction of the logical structure of mathematics. But before they had finished toasting their presumed success, contradictions were discovered in the reconstructed mathematics. Commonly these contradictions were referred to as paradoxes, a euphemism that avoids facing the fact that contradictions vitiate the logic of mathematics.
"The resolution of the contradictions was undertaken almost immediately by the leading mathematicians and philosophers of the times. In effect four different approaches to mathematics were conceived, formulated, and advanced, each of which gathered many adherents. These foundational schools all attempted not only to resolve the known contradictions but to ensure that no new ones could ever arise, that is, to establish the consistency of mathematics. Other issues arose in the foundational efforts. The acceptability of some axioms and some principles of deductive logic also became bones of contention on which the several schools took differing positions."
The attempt to eliminate contradictions from mathematics only led to new and insoluble contradictions. The final blow was struck in 1931, when Kurt Gödel published his famous theorems, which provoked a crisis, even calling into question the fundamental methods of classical mathematics:
"As late as 1930 a mathematician might perhaps have been content with accepting one or another of the several foundations of mathematics and declared that his mathematical proofs were at least in accord with the tenets of that school. But disaster struck again in the form of a famous paper by Kurt Gödel in which he proved, among other significant and disturbing results, that the logical principles accepted by the several schools could not prove the consistency of mathematics. This, Gödel showed, cannot be done without involving logical principles so dubious as to question what is accomplished. Gödel’s theorems produced a debacle. Subsequent developments brought further complications. For example, even the axiomatic-deductive method so highly regarded in the past as the approach to exact knowledge was seen to be flawed. The net effect of these newer developments was to add to the variety of possible approaches to mathematics and to divide mathematicians into an even greater number of differing factions." (10)
The impasse of mathematics has produced a number of different factions and schools, none of which accept the theories of the others. There are the Platonists (yes, that’s right), who regard mathematics as an absolute truth ("God is a mathematician"). There are the Conceptualists, whose conception of mathematics is entirely different to that of the Platonists, but it is merely the difference between objective and subjective idealism. They see mathematics as a series of structures, patterns and symmetries which people have invented for their own purposes—in other words, mathematics has no objective basis, but is purely the product of the human mind! This theory is apparently popular in Britain.
Then we have the Formalist school, which was formed at the beginning of the 20th century, with the specific aim of eliminating contradictions from mathematics. David Hilbert, one of the founders of this school, saw mathematics as nothing more than the manipulation of symbols according to specific rules to produce a system of tautological statements, which have inner consistency, but otherwise no meaning whatsoever. Here mathematics is reduced to an intellectual game, like chess—again a completely subjective approach. The Intuitionist school is equally determined to separate mathematics from objective reality. A mathematical formula, according to these people, is not supposed to represent anything existing independently of the act of computation itself. This has been compared to the attempt of Bohr to use the discoveries of quantum mechanics to introduce new views of physical and mathematical quantities as divorced from objective reality.
All these schools have in common an entirely idealist approach to mathematics. The only difference is that the neo-Platonists are objective idealists, who think that mathematics originated in the mind of God, while the rest—intuitionists, formalists and conceptualists—believe that mathematics is a subjective creation of the human mind, devoid of any objective significance. This, then, is the sorry spectacle presented by the main schools of mathematics in the last decade of the 20th century. But it is not the end of the story.
In recent years, the limitations of mathematical models in expressing the real workings of nature have been the subject of intense discussion. Differential equations, for example, represent reality as a continuum, in which changes in time and place occur smoothly and uninterruptedly. There is no room here for sudden breaks and qualitative changes. Yet these actually take place in nature. The discovery of the differential and integral calculus in the second half of the 17th century represented a great advance. But even the most advanced mathematical models are only a rough approximation to reality, valid only within certain limits. The recent debate on chaos and anti-chaos has centred on those areas involving breaks in continuity, sudden "chaotic" changes which cannot be adequately conveyed by classical mathematical formulae.
The difference between order and chaos has to do with linear and non-linear relationships. A linear relationship is one that is easy to describe mathematically: it can be expressed in one form or another as a straight line on a graph. The mathematics may be complex, but the answers can be calculated and predicted. A non-linear relationship, however, is one that cannot easily be resolved mathematically; no straight-line graph will describe it. Non-linear relationships have historically been difficult or impossible to resolve, and their effects have often been dismissed as experimental error. Referring to the famous experiment with the pendulum, James Gleick writes that the regularity Galileo saw was only an approximation. The changing angle of the body’s motion creates a slight non-linearity in the equations. At low amplitudes, the error is almost non-existent. But it is there. To get his neat results, Galileo also had to disregard non-linearities that he knew of: friction and air resistance.
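The point can be checked directly. The exact pendulum equation involves the sine of the angle of swing, while Galileo's regularity corresponds to replacing that sine with the angle itself. A minimal sketch in Python (the amplitudes chosen are illustrative, not taken from Gleick):

```python
import math

def small_angle_error(theta):
    """Relative error of the linear approximation sin(theta) ~ theta."""
    return abs(theta - math.sin(theta)) / math.sin(theta)

# At a small amplitude (5 degrees) the non-linear term is almost invisible...
small = small_angle_error(math.radians(5))
# ...but at a large amplitude (60 degrees) it can no longer be ignored.
large = small_angle_error(math.radians(60))
print(f"5 degrees:  {small:.4%}")
print(f"60 degrees: {large:.4%}")
```

At five degrees the linear approximation is wrong by roughly a tenth of a per cent; at sixty degrees the error is of the order of twenty per cent, and the non-linearity can no longer be set aside.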
Much of classical mechanics is built around linear relationships which are abstracted from real life as scientific laws. Because the real world is governed by non-linear relationships, these laws are often no more than approximations which are constantly refined through the discovery of "new" laws. These laws are mathematical models, theoretical constructions whose only justification lies in the insight they give and their usefulness in controlling natural forces. In the last twenty years the revolution in computer technology has transformed the situation by making non-linear mathematics accessible. It is for this reason that mathematicians and other scientists, in a number of quite separate faculties and research establishments, have been able to do the sums for "chaotic" systems that could not be done in the past.
James Gleick’s book Chaos: Making a New Science describes how chaotic systems have been examined by different researchers using widely different mathematical models, and yet with all the studies pointing to the same conclusion: that there is "order" in what was previously thought of as pure "disorder." The story begins with studies of weather patterns, in a computer simulation, by an American meteorologist, Edward Lorenz. Using at first twelve and then later only three variables in non-linear relationships, Lorenz was able to produce in his computer a continuous series of conditions constantly changing, but literally never repeating the same conditions twice. Using relatively simple mathematical rules, he had created "chaos."
Beginning with whatever parameters Lorenz chose himself, his computer would mechanically repeat the same calculations over and over again, yet never get the same result. This "aperiodicity" (i.e., the absence of regular cycles) is characteristic of all chaotic systems. At the same time, Lorenz noticed that although his results were perpetually different, there was at least the suggestion of "patterns" that frequently cropped up: conditions that approximated to those previously observed, although they were never exactly the same. That corresponds, of course, to everyone’s experience of the real, as opposed to computer-simulated weather: there are "patterns," but no two days or two weeks are ever the same.
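Lorenz's three equations are simple enough to restate, and the behaviour just described can be reproduced in a few lines. A minimal sketch in Python, using the parameter values standardly associated with Lorenz's system and a crude fixed-step integration (the starting points are chosen purely for illustration):

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One crude Euler step of Lorenz's three non-linear equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)        # one starting condition...
b = (1.000001, 1.0, 1.0)   # ...and another, differing by one part in a million
max_gap = 0.0
for _ in range(3000):      # thirty units of model time
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))
print(max_gap)  # the microscopic difference has grown to macroscopic size
```

Two starting conditions differing by one part in a million end up on entirely different trajectories, while both remain within the same bounded region: this is the aperiodicity, and the sensitivity to initial conditions, that made Lorenz's simulated weather unrepeatable.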
Other scientists also discovered "patterns" in apparently chaotic systems, as widely different as in the study of galactic orbits and in mathematical modelling of electronic oscillators. In these and other cases, Gleick notes, there were "suggestions of structure amid seemingly random behaviour." It became increasingly obvious that chaotic systems were not necessarily unstable, but could endure for an indefinite period. The well-known "red-spot" visible on the surface of the planet Jupiter is an example of a continuously chaotic system that is stable. Moreover, it has been simulated in computer studies and in laboratory models. Thus, "a complex system can give rise to turbulence and cohesion at the same time." Meanwhile, other scientists used different mathematical models to study apparently chaotic phenomena in biology. One in particular made a mathematical study of population changes under a variety of conditions. Standard variables familiar to biologists were used, with some of the computed relationships being, as they would be in nature, non-linear. This non-linearity could correspond, for example, to a characteristic unique to the species, such as its propensity to propagate, its "survivability."
These results were expressed on a graph plotting the population size, on the vertical axis, against the value of non-linear components, on the horizontal. It was found that as the non-linearity became more important—by increasing that particular parameter—so the projected population went through a number of distinct phases. Below a certain crucial level, there would be no viable population and, whatever the starting point, extinction would be the result. The line on the graph simply followed a horizontal path corresponding to zero population. The next phase was a steady state, represented graphically as a single line in a rising curve. This is equivalent to a stable population, at a level that depended on the initial conditions. In the next phase there were two different but fixed populations, two steady states. This was shown as a branching on the graph, or a "bifurcation." It would be equivalent in real populations to a regular periodic oscillation, in a two-year cycle. As the degree of non-linearity increased again, there was a rapid increase in bifurcations, first to a condition corresponding to four steady states (meaning a regular cycle of four years), then, very quickly afterwards, to 8, 16, 32, and so on.
Hence, within a short spread of values of the non-linear parameter, a situation had developed which, for all practical purposes, had no steady state or recognisable periodicity—the population had become "chaotic." It was also found that if the non-linearity was increased further throughout the "chaotic" phase, there would be periods when apparent steady states returned, based on a cycle of 3 or 7 years, but in each case giving way, as non-linearity increased, to further bifurcations representing 6, 12, and 24 year cycles in the first case, or 14, 28, and 56 year cycles in the second. Thus, with mathematical precision, it was possible to model a change from stability with either a single steady state or regular, periodic behaviour, to one that was, for all measurable purposes, random or aperiodic.
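The population model described here is generally identified with the logistic map, x → rx(1 − x), where r plays the part of the non-linear parameter. A minimal sketch in Python of the phases just described (the particular values of r are illustrative):

```python
def settled_cycle(r, n_transient=1000, n_sample=64):
    """Iterate the logistic map x -> r*x*(1-x) and return the distinct
    values the population settles into (rounded, so a cycle can be counted)."""
    x = 0.5
    for _ in range(n_transient):   # let the transient behaviour die away
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(len(settled_cycle(2.9)))   # one steady state
print(len(settled_cycle(3.2)))   # period 2: the first bifurcation
print(len(settled_cycle(3.5)))   # period 4
print(len(settled_cycle(3.9)))   # "chaos": no short cycle at all
```

As r rises, the settled behaviour passes from one steady state, to two, to four, and finally to a regime with no short cycle at all: the period-doubling route to chaos.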
This may indicate a possible resolution to debates within the field of population science between those theorists who believe that unpredictable population variations are an aberration from a "steady state norm," and others who believe that steady state is the aberration from a "chaotic norm." These different interpretations may arise because different researchers have effectively taken a single vertical "slice" of the rising graph, corresponding to only one particular value for non-linearity. Thus, one species could have a norm of a steady or a periodically oscillating population and another could exhibit chaotic variability. These developments in biology are another indication, as Gleick explains, that "chaos is stable; it is structured." Similar results began to be discovered in a wide variety of different phenomena. "Deterministic chaos was found in the records of New York measles epidemics and in 200 years of fluctuations of the Canadian lynx population, as recorded by the trappers of the Hudson’s Bay Company." All these chaotic processes exhibit the "period-doubling" that is characteristic of this particular mathematical model.
Another of the pioneers of chaos theory, Benoit Mandelbrot, a mathematician at IBM, used yet another mathematical technique. In his research there, he looked for—and found—"patterns" in a wide variety of natural "random" processes. He found, for instance, that the background "noise" that is always present in telephone transmissions follows a pattern that is completely unpredictable, or chaotic, but is nevertheless mathematically definable. Using a computer at IBM, Mandelbrot was able to produce chaotic systems graphically, using only the simplest mathematical rules. These pictures, known as "Mandelbrot sets," showed an infinite complexity: when a computer drawing was "blown up" to show finer detail, the vast, seemingly limitless variety continued.
The Mandelbrot set has been described as possibly the most complex mathematical object or model ever seen. Yet within its structure, there were still patterns. By repeatedly "magnifying" the scale and looking at finer and finer detail (something the computer could do indefinitely because the whole structure was based on a given set of mathematical rules) it could be seen that there were regular repetitions—similarities—at different scales. "The degree of irregularity" was the same at different scales. Mandelbrot used the expression "fractal" to describe the patterns evident within the irregularity. He was able to construct a variety of fractal shapes, by slightly altering the mathematical rules. Thus he was able to produce a computer simulation of a coast line which, at any scale (at any magnification) always exhibited the same degree of "irregularity" or "crinkliness."
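The "simplest mathematical rules" in question amount to repeating z → z² + c and asking whether the result stays bounded. A minimal sketch in Python (the resolution and iteration limit are arbitrary choices):

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return how many steps it takes
    |z| to exceed 2 (points that never escape belong to the Mandelbrot set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # treated as "inside" at this resolution

# A coarse text rendering of the set: '#' marks points that do not escape.
for im in range(-10, 11):
    row = ""
    for re in range(-20, 6):
        row += "#" if escape_time(complex(re / 10, im / 10)) == 100 else " "
    print(row)
```

Points for which the iteration never escapes belong to the set; it is by magnifying the boundary between escape and capture, at ever finer scales, that the endless detail described above appears.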
Mandelbrot compared his computer-generated systems to examples of geometries that were also fractal shapes, repeating the same pattern over and over again on different scales. In the so-called Menger Sponge, for example, the surface area within it approaches infinity, while the actual volume of the solid approaches zero. Here, it is as if the degree of irregularity corresponds to the "efficiency" of the sponge in taking up space. That may not be as far fetched as it may sound because, as Mandelbrot showed, there are many examples of fractal geometry in nature. The branching of the windpipe into two bronchi, and their repeated branching right down to the level of the tiny air passages in the lungs, follows a pattern that can be shown to be fractal. In the same way it can be shown that the branching of blood vessels is fractal. In other words, there is a "self-similarity," a repeating geometric pattern of branching, at whatever scale is examined.
The examples of fractal geometry in nature are almost limitless and in his book, The Fractal Geometry of Nature, Mandelbrot sought to demonstrate just that. It has been found that the spectrum of the timing of a normal heart beat follows fractal laws, perhaps due to the fractal arrangement of nerve fibres in the heart muscle. The same is true of the rapid involuntary eye movements that are a feature of schizophrenia. Thus, fractal mathematics is now routinely used in a variety of scientific fields, including physiology and disciplines as widely separated as earthquake studies and metallurgy.
Yet another indication of the deterministic basis of chaos has been shown in studies of phase transitions and by the use of what mathematical modellers call "attractors." There are many examples of phase transitions. It can mean the change from the smooth "laminar" flow of a fluid to turbulent flow, the transition from solid to liquid or liquid to gas, or the change within a system from conductivity to "superconductivity." These phase transitions may have crucial consequences in technological design and construction. An aircraft, for example, would lose lift if the laminar air flow over the wing became turbulent; likewise, the pressure needed to pump water will depend on whether or not the flow in the pipe is turbulent.
The use of phase-space diagrams and attractors represents yet another mathematical device that has found a wide variety of applications in apparently random systems. As in the case of other chaos studies, there has been the discovery of common patterns, in this case "strange attractors," in a variety of research programmes, including electrical oscillators, fluid dynamics and even the distribution of stars in globular clusters. All these various mathematical devices—period-doubling, fractal geometry, strange attractors—were developed at different times by different researchers to examine chaotic dynamics. But all their results point in the same direction: that there is an underlying mathematical lawfulness in what was always considered to be random.
A mathematician, Mitchell Feigenbaum, pulling a number of threads together, has developed what he has called a "universal theory" of chaos. As Gleick says, "he believed that his theory expressed a natural law about systems at the point of transition between order and turbulence…his universality was not just qualitative, it was quantitative…it extended not just to patterns but to precise numbers."
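The "precise numbers" are real: the values of the non-linear parameter at which successive period-doublings occur draw closer together in a fixed ratio, Feigenbaum's constant δ ≈ 4.669, whatever the system. A minimal sketch in Python, using published bifurcation values for the logistic map (figures from the standard literature, rounded):

```python
# Parameter values at which the logistic map's period doubles
# (standard published figures, rounded): periods 2, 4, 8, 16.
r = [3.0, 3.449490, 3.544090, 3.564407]

# Ratios of successive bifurcation intervals approach Feigenbaum's delta.
for n in range(1, len(r) - 1):
    delta = (r[n] - r[n - 1]) / (r[n + 1] - r[n])
    print(f"{delta:.3f}")
# Even these first few ratios already hover near the universal value 4.669...
```

Already at these low periods the ratios cluster around the universal constant, and the same number emerges from entirely different non-linear systems: this is the quantitative universality Gleick describes.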
Marxists would recognise here the similarity with the dialectical law known as the law of transformation of quantity to quality. This idea describes the transition between one period of more or less gradual development, when change can be measured or "quantified," and the next when change has been so "revolutionary," there has been such a "leap," that the entire "quality" of the system has been altered. Gleick’s use of the terms in a similar sense here is yet another indication of the way modern scientific theory is stumbling towards materialist dialectics.
The central point about the new science is that it deals with the world as it really is: as a constantly shifting dynamic system. Classical linear mathematics is like formal logic which deals with fixed and unchanging categories. These are good enough as approximations, but do not reflect reality. Dialectics, however, is the logic of change, of processes and as such it represents an advance on formalism. In the same way, chaos mathematics is a step forward from the rather "unreal" science that ignored uncomfortable irregularities of life.
The idea of the transformation of quantity into quality is implicit in modern mathematics in the study of continuity and discontinuity. This was already present in the new branch of geometry, topology, developed at the end of the 19th century by the great French mathematician Jules Henri Poincaré (1854-1912). Topology is the mathematics of continuity. As Ian Stewart explains it: "Continuity is the study of smooth, gradual changes, the science of the unbroken. Discontinuities are sudden, dramatic: places where a tiny change in cause produces an enormous change in effect." (11)
The standard text-book mathematics gives a wrong impression of how the world actually is, how nature really works. "The mathematical intuition so developed," wrote Robert May, "ill equips the student to confront the bizarre behaviour exhibited by the simplest non-linear systems." (12) Whereas elementary school geometry teaches us to regard squares, circles, triangles and parallelograms as entirely separate things, in topology ("rubber-sheet geometry") they are treated as the same. The rigid lines of demarcation are broken down: a square can be continuously "deformed" into a circle. Despite the spectacular advances of 20th century science, it is surprising to note that a large number of what would seem to be quite simple phenomena are not properly understood and cannot be expressed in mathematical terms, for example, the weather, the flow of liquids, turbulence. The shapes of classical geometry are inadequate to express the extremely complex and irregular surfaces found in nature, as Gleick points out:
"Topology studies the properties that remain unchanged when shapes are deformed by twisting or stretching or squeezing. Whether a shape is square or round, large or small, is irrelevant in topology, because stretching can change those properties. Topologists ask whether a shape is connected, whether it has holes, whether it is knotted. They imagine surfaces not just in the one-, two-, and three-dimensional universes of Euclid, but in spaces of many dimensions, impossible to visualise. Topology is geometry on rubber sheets. It concerns the qualitative rather than the quantitative." (13)
Differential equations deal with rates of change. This is more difficult and complex than may appear at first sight. Many differential equations cannot be solved at all. These equations are able to describe motion, but only as a smooth change of position, from one point to another, with no sudden leaps or interruptions. However, in nature, change does not only occur in this way. Periods of slow, gradual, uninterrupted change are punctuated by sharp turns, breaks in continuity, explosions, catastrophes. This fact can be illustrated by innumerable examples from organic and inorganic nature, the history of society and of human thought. When a differential equation is solved numerically, time is treated as a series of very small "time-steps." This gives an approximation of reality, but in fact there are no such "steps." As Heraclitus expressed it, "everything flows."
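The "time-steps" can be made concrete. To solve even the simplest equation of growth, dx/dt = x, a computer must chop continuous time into discrete steps. A minimal sketch in Python showing that the approximation improves as the steps shrink, yet never becomes the continuous flow itself:

```python
import math

def euler_growth(dt):
    """Approximate dx/dt = x, x(0) = 1, on [0, 1] with fixed time-steps dt."""
    x = 1.0
    for _ in range(round(1.0 / dt)):
        x += dt * x   # each step pretends the change is smooth and linear
    return x

exact = math.e  # the true continuous answer, x(1) = e
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: error {abs(euler_growth(dt) - exact):.5f}")
```

Each tenfold reduction in the step size reduces the error roughly tenfold; but a finite step remains a finite step, and the "flow" of Heraclitus is only ever approached, never reached.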
The inability of traditional mathematics to deal with qualitative as opposed to merely quantitative change represents a severe limitation. Within certain limits, it can suffice. But when gradual quantitative change suddenly breaks down, and becomes "chaotic," to use the current expression, the linear equations of classical mathematics no longer suffice. This is the starting point for the new non-linear mathematics, pioneered by Benoit Mandelbrot, Edward Lorenz and Mitchell Feigenbaum. Without realising it, they were following in the footsteps of Hegel, whose nodal line of measure relations expresses the very same idea, which is central to dialectics.
The new attitude to mathematics developed as a reaction against the dead end of the existing schools of mathematics. Mandelbrot had been a member of the French school of mathematical Formalism known as the Bourbaki group, which advocated a purely abstract approach, proceeding from first principles and deducing everything from them. They were actually proud of the fact that their work had nothing to do with science or the real world. But the advent of the computer introduced an entirely new element into the situation. This is yet another example of how the development of technique conditions that of science. The vast number of computations which could be made at the press of a button made it possible to discover patterns and lawfulness where previously only random and chaotic phenomena appeared to exist.
Mandelbrot began by investigating unexplained phenomena of the natural world, like apparently random bursts of interference in radio transmissions, the flooding of the Nile, and crises of the stock exchange. He realised that traditional mathematics could not deal adequately with such phenomena. In investigating infinity in the 19th century, Georg Cantor invented the set which is named after him: a line segment from which the middle third is removed, then the middle third of each remaining piece, and so on without end, leaving an infinite number of points (Cantor "dust") whose total length is zero. Such a manifest contradiction disturbed many 19th century mathematicians, yet it served as the starting point for Mandelbrot’s new theory of fractal mathematics, which played a key role in chaos theory:
"Discontinuity, bursts of noise, Cantor dusts," Gleick explains, "—phenomena like these had no place in the geometries of the past 2,000 years. The shapes of classical geometry are lines and planes, circles and spheres, triangles and cones. They represent a powerful abstraction of reality, and they inspired a powerful philosophy of Platonic harmony. Euclid made of them a geometry that lasted two millennia, the only geometry still that most people ever learn. Aristotle found an ideal beauty in them. But for understanding complexity, they turn out to be the wrong kind of abstraction." (14)
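Cantor's construction can be followed step by step: remove the open middle third of a segment, then of each remaining piece, and so on. A minimal sketch in Python (twenty levels is an arbitrary stopping point):

```python
def cantor_level(intervals):
    """Remove the open middle third of every interval in the list."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3.0
        out.append((a, a + third))       # keep the left third...
        out.append((b - third, b))       # ...and the right third
    return out

pieces = [(0.0, 1.0)]
for n in range(20):
    pieces = cantor_level(pieces)

count = len(pieces)                      # 2**20 separate pieces...
length = sum(b - a for a, b in pieces)   # ...of vanishing total length
print(count, length)
```

After twenty steps there are over a million separate pieces, yet their combined length is less than a thousandth of the original segment: an ever greater number of points occupying an ever smaller length, the contradiction that so disturbed Cantor's contemporaries.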
All science involves a degree of abstraction from the world of reality. The problem with classical Euclidean measurement, dealing with length, depth and thickness, is that it failed to capture the essence of irregular shapes that are found in the real world. The science of mathematics is the science of magnitude. The abstractions of Euclidean geometry therefore leave aside all but the quantitative side of things. Reality is reduced to planes, lines and points. However, the abstractions of mathematics, despite the exaggerated claims made for them, remain only a rough approximation to the real world, with its irregular shapes and constant and abrupt changes. In the words of the Roman poet Horace, "You may drive out nature with a pitch-fork, yet she’ll be constantly running back." James Gleick describes the difference between classical mathematics and chaos theory in the following way:
"Clouds are not spheres, Mandelbrot is fond of saying. Mountains are not cones. Lightning does not travel in a straight line. The new geometry mirrors a universe that is rough, not rounded, scabrous, not smooth. It is a geometry of the pitted, pocked, and broken up, the twisted, tangled, and intertwined. The understanding of nature’s complexity awaited a suspicion that the complexity was not just random, not just accident. It required a faith that the interesting feature of a lightning bolt’s path, for example, was not its direction, but rather the distribution of zigs and zags. Mandelbrot’s work made a claim about the world, and the claim was that such odd shapes carry meaning. The pits and tangles are more than blemishes distorting the classic shapes of Euclidean geometry. They are often the keys to the essence of a thing." (15)
These things were seen as monstrous aberrations by traditional mathematicians. But to a dialectician, they suggest that the unity of finite and infinite, as in the infinite divisibility of matter, can also be expressed in mathematical terms. Infinity exists in nature. The universe is infinitely large. Matter can be divided into infinitely small particles. Thus, all talk about the "beginning of the universe" and the search for the "bricks of matter" and the "ultimate particle" is based on entirely wrong assumptions. The existence of the mathematical infinite is merely a reflection of this fact. At the same time, it is a dialectical contradiction that this infinite universe consists of finite bodies. Thus, finite and infinite form a dialectical unity of opposites. The one cannot exist without the other. The question is therefore not whether the universe is finite or infinite. It is both finite and infinite, as Hegel explained long ago.
The advances of modern science have permitted us to penetrate deeper and deeper into the world of matter. At each stage, an attempt has been made to "call a halt," to erect a barrier, beyond which it was allegedly impossible to go. But at each stage, the limit was overcome, revealing startling new phenomena. Each new and more powerful particle accelerator has uncovered new and smaller particles, existing on ever tinier timescales. There is no reason to suppose that the situation will be any different in relation to the quarks, which at present are being represented as the last of the particles.
Similarly, the attempt to establish the beginning of the universe and "time" will turn out to be a wild goose chase. There is no limit to the material universe, and all efforts to impose one will inevitably fail. The most encouraging thing about the new mathematics of chaos theory is that it represents a rejection of sterile abstractions and ivory-tower reductionism, and an attempt to move back towards nature and the world of everyday experience. And to the degree that mathematics reflects nature, it must begin to lose its one-sided character and acquire a whole new dimension which expresses the dynamic, contradictory, in a word, dialectical character of the real world.