Empirical Verification of Evolutionary and Biblical Cosmological Models – Microcosmos
Why empirical verification is necessary in science
In 1931 the Austrian mathematician Kurt Gödel proved the incompleteness theorems, according to which any sufficiently powerful formal system of logic and mathematics contains true statements that can neither be strictly proved nor refuted within the system. As a simple illustration we could point to Zeno's paradoxes: Achilles, the fastest runner in the world, can supposedly never catch up with a tortoise that starts only a few strides ahead of him. To this day no one has refuted the assertions of the philosopher of Elea in theory, yet in practice even a small child easily copes with such a problem. Therefore, today it is not enough to create a scientific formulation; it must also yield definite corollaries that allow an empirical verification of its genuineness.
The most significant weakness of the Christian view on the Creation remains the circumstance that this view rests primarily on criticism of the evolutionary theory and does not develop a theory of its own that could be subjected to verification. In this article we will try for the first time to present a Biblical model whose corollaries can be deduced theoretically with the aid of physics and mathematics, as well as verified by observation, experiment, computer simulation, etc.
The main question of philosophy is: “Which one is primary – matter or consciousness?”
Friedrich Engels divided the great diversity of mankind's systems of ideas into two main groups. "Has God created the world, or has the world existed from eternity?", asked Engels. "Depending on how they answered this question, philosophers split into two great camps. Those who asserted that the Spirit is primary to nature and, consequently, accepted some kind of creation of the world, formed the camp of idealism. The others, who regarded nature as primary [to consciousness/spirit], belong to the various schools of materialism." Below we will discuss how dialectical materialism and Christianity relate to these two diametrically opposite world-view systems, in order to see to what extent each of them is supported by scientific evidence.
- A) Dialectical Materialism
According to this philosophical position, matter is eternal, uncreatable and indestructible, and consciousness came into existence only at a later stage in the development of matter. Vladimir Lenin asserted that "there is nothing in the world except moving matter", and many contemporary scientists completely agree with him. The American astronomer Carl Sagan opened his famous television series with the words: "The Cosmos is all that is or ever was or ever will be." Sterling Lamprecht gives a more extended definition of naturalism: "a philosophical position, empirical in method, that regards everything that exists or occurs to be conditioned in its existence or occurrence by causal factors within one all-encompassing system of nature."
According to Charles Darwin, the factors of biological evolution reduce to variability, heredity and natural selection. If we consider things in a strictly naturalistic manner, however, we could apply his doctrine to the evolution of inanimate nature as well. The Russian physicist Andrei Linde (currently working at Stanford University) proposes the idea of "chaotic inflation". It states that quantum fluctuations of the vacuum permanently lead to the origination of mini-universes. These evolve in isolation: initially they are blown up by inflationary processes, and later they evolve according to the classical Big Bang scenario (fig. 1).
Fig. 1 Linde's model of chaotic inflation, illustrated as a tree-like structure consisting of an infinite number of multiplying "bubbles" (inflationary universes). Each newly formed universe can "sprout", producing new mini-universes. (The change in colour represents "mutations" in the physical laws relative to the parent universes.)
With each appearance of a new world, variability in the laws and constants of matter is observed. The accidental recurrence of some of them can be regarded as a kind of heredity. Natural selection is also in force, since it preserves those physical structures – atoms, molecules, celestial systems – that, given a combination of appropriate parameters, are stable. Further, on planets with suitable conditions, evolution is supposed to lead inevitably to the emergence of living, and in some places conscious, creatures.
But if Darwinism could be applied to animate and inanimate nature alike, then we would have to accept it as a universal dialectical-materialistic concept, one that underpins the self-organization of the universe. It is curious to note that exactly this philosophy is one of the pillars on which authoritarian communist regimes were, and continue to be, founded. Therefore, the fact that this philosophy is now accepted in a New Europe without criticism, and is used as a foundation for scientific paradigms and for norms of morality and ethics, prompts legitimate concern.
Since in contemporary scientific theories the vacuum appears as a kind of proto-matter giving rise to everything else, we will say a few words about it. What physics understands by vacuum is a region of space devoid of all matter, i.e. of any atoms, molecules, protons, neutrons, etc., as well as of the particles that convey interactions – photons, gravitons, etc. Nevertheless, the vacuum possesses a certain zero-point energy, physical fields retain their strength within it, so-called virtual particles constantly come into and go out of existence, fluctuations occur, and so on. Quantum theory allows for the existence of multiple vacuum states, and cosmic (cosmological) inflation is taken to be due to transitions between such states.
The American physicist of Ukrainian origin Alexander Vilenkin equates the initial vacuum with "nothing", stating further: "Nothing is the condition of non-classical space-time … a realm of unrestrained quantum gravity; a rather bizarre state in which all our notions of space, time, energy, entropy, etc. lose their meaning". Elsewhere, however, he acknowledges: "The nature of the initial state is too speculative a subject even by cosmological standards".
And Chris Isham, a researcher of philosophical ideas in cosmology, adds: "Conceptual issues in quantum cosmology are so serious that many professional physicists assume that its entire agenda may turn out to be entirely erroneous".
The Big Bang Theory
It is well known that the Big Bang theory rests on three observational pillars – the expansion of the Universe, the cosmic background radiation, and the abundance of the light elements. The classic formulation of this concept, however, could not cope with a number of challenges, for instance the problems of the cosmic horizon, the flatness of space, magnetic monopoles, etc. At the end of 1979 Alan Guth and Henry Tye, in one of their articles, developed so-called inflationary cosmology, which eliminates these difficulties facing the standard cosmological model. According to them, shortly after the Big Bang the energy of the Universe was carried by an inflaton field with negative pressure. Thanks to this field, over a period from about 10⁻³⁶ to 10⁻³² seconds, the Universe inflated exponentially by a factor of more than 10³⁰. The field then gradually released the energy it contained in the form of an almost homogeneous sea of particles and radiation, and afterwards the Universe evolved according to the conventional scenario (see Table 1).
Table 1

| Time elapsed after the Big Bang | Event | Years before our age |
|---|---|---|
| Big Bang (singularity) | | 13.7 (13.82) billion years |
| 10⁻³⁶ to 10⁻³² sec | Inflation | |
| up to ~1 microsecond | Quark–gluon plasma; quarks unite into protons and neutrons | |
| 1 to 3 min | Formation of light elements up to boron | |
| 380 thousand years | Synthesis of hydrogen and helium atoms; the Universe becomes transparent and the cosmic microwave background radiation (CMBR) is released | |
| 200–500 million years | Birth of the first stars and protogalaxies | 13.5–13.2 billion years |
| 3.3 billion years | Formation of mature galaxies, quasars and of the oldest stars in the Milky Way | 10.4 billion years |
| 8.1 billion years | The Solar System, including the Earth, originates | 5.6 billion years |
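A small numerical aside: the inflationary expansion factor of more than 10³⁰ quoted above is often restated in terms of "e-folds" (expansions by a factor of e). The sketch below converts between the two, using only the factor given in the text; the exact value is model-dependent and varies between inflationary scenarios.

```python
import math

# Expansion factor during inflation as quoted in the text (model-dependent).
expansion_factor = 1e30

# Number of e-folds N such that e**N equals the expansion factor.
n_efolds = math.log(expansion_factor)

# Equivalent number of doublings (factor-of-2 expansions).
n_doublings = math.log2(expansion_factor)

print(f"e-folds:   {n_efolds:.1f}")    # ~69
print(f"doublings: {n_doublings:.1f}") # ~100
```

So a factor of 10³⁰ corresponds to roughly 69 e-folds, close to the 60-plus e-folds usually cited as the minimum needed to solve the horizon and flatness problems.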
Since this part deals with the coming into existence of the microcosmos, we will limit ourselves to the period up to around 380 thousand years after the Big Bang.
Let us recall Hawking's conclusion at the end of The Grand Design: "Because there is a law like gravity, the universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going."
In the latter statement however there are several unsustainable premises:
In the first place, the vacuum is equated to "nothing" simply because it does not contain matter, which is rather misleading, especially for people who do not deal with physics. As we noted, the vacuum has a certain energy, virtual particles constantly come into and go out of existence in it, etc., which means that the vacuum, strictly speaking, is definitely "something", as opposed to absolutely "nothing". And if the vacuum is something, then it too requires an explanation of how it came into existence.
Secondly, we are told of "a law such as the law of gravity" which brings the Universe into existence. We will shortly see how problematic is the position that there exists any kind of firmly established "law" at all. Here we will only cite Heinz Pagels, who asks a similar question: "Yet this unthinkable void converts itself into the plenum of existence – a necessary consequence of physical laws. Where are these laws written into the void?"
Thirdly, it also remains unclear where space and time come from. In his book A Universe from Nothing, Lawrence Krauss tries to develop Hawking's premise further by overcoming the difficulties pointed out, but, in our opinion, he is not particularly convincing.
To put it differently, Hawking and Mlodinow have not at all answered the question "Why is there something rather than nothing?", as they try to persuade their readers, but have only shifted the problem back to the unfathomable primary vacuum.
Now let us look at those three premises in greater detail.
The first of them is that matter has come into existence out of vacuum, and in the beginning there was a process of inflation.
(To visualize things better we will return to Table 1).
A singular cosmic beginning requires a rigorous treatment, which inflation is still in no position to give: it is not yet well integrated into string theory, and is therefore not part of the sought merger of quantum mechanics with the general theory of relativity.
No one can say how the inflaton field arises with a form of the potential energy suitable for inflation to emerge. We do not know the exact parameters of the primordial explosion – when it happened, how long it took, what quantity of energy was converted into particles and radiation, etc. It is therefore hard to avoid the impression that physicists simply tailor their concepts to fit the astronomical observations.
But the most significant problem here is the following. Theory states that the initial Universe consisted entirely of high-energy radiation, spontaneously creating particles and antiparticles. Around one microsecond after the explosion the temperature dropped below 10¹³ K. Quarks and antiquarks decelerated and were caught by the strong interaction, which stuck them together in groups of three – forming baryons and antibaryons, respectively. According to statistical law, however, their numbers should have been equal, and the inevitable collisions between them would have led to complete annihilation. The energy of the resulting radiation would have gradually been diluted by the expansion of the Universe, so that no new pairs of particles would have been born – i.e., no substance could exist today.
The Russian physicist Andrei Sakharov assumed that in that epoch there was a violation of so-called CP-symmetry (charge–parity symmetry), which resulted in an imbalance: for every billion antibaryons there were a billion and one baryons. After the great firework burned out, the surviving baryons turned into protons and neutrons, from which all atomic nuclei were later built.
The point is that there must also have been a lepton asymmetry in which the number of surviving electrons was exactly equal to the number of protons produced (so that atoms are electrically neutral), which is a statistical absurdity.
(A clarification is necessary here: if there existed even a slight predominance of either positive or negative charges, they would repel one another with a force that exceeds gravity by a factor of about 10³⁶ and would rupture all structures in the world we know, except atomic nuclei, because within nuclei the strong interaction is about a hundred times greater than the electromagnetic one.)
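The factor of about 10³⁶ can be checked with a back-of-the-envelope computation. For two protons, both the electrostatic and the gravitational force fall off as 1/r², so their ratio is independent of the separation. A minimal sketch, using standard CODATA constants (hard-coded here; they are not taken from the article):

```python
import math

# CODATA values (SI units), treated here as given constants.
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192369e-27 # proton mass, kg

# For two protons at any separation r, both forces scale as 1/r^2,
# so their ratio does not depend on r:
#   F_em / F_grav = e^2 / (4*pi*eps0 * G * m_p^2)
ratio = e**2 / (4 * math.pi * eps0 * G * m_p**2)

print(f"F_em / F_grav ~ {ratio:.2e}")  # on the order of 10^36
```

The result comes out near 1.2 × 10³⁶, consistent with the order of magnitude cited in the text.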
To put it differently, at least for now, the concept of the birth of the world out of the vacuum is unconvincing, because it fails to offer a sufficiently good rational explanation of the coming into existence of matter.
Secondly, the issue of laws arises.
In this respect, two sub-questions are brought up:
a) Is it possible for matter in a state of absolute chaos to arrive by chance at its contemporary level of order?
What will happen, though, if the so-called "undetermined variability" (in Darwin's sense) acts at the level of the fundamental constants, laws and interactions? Let us try to imagine a world in which everything changes in a totally chaotic way. In such a world some of the characteristics of the elementary particles might be constant, while others transform continually. For example, if the electric charge changed arbitrarily, it could take absolutely random values: +1; –1; +7/8; +14/3; –112/27, etc. The same is assumed for the mass, the spin, the magnetic moment, etc.; we should even allow a qualitative (evolutionary?) transformation of particles into something different from what they are in reality. The law of gravity could then at one moment read, say,

F = G·m₁·m₂ / r²

and a short while later, say,

F = G·m₁·m₂ / r³

(the second expression being, of course, an arbitrary example), then change into yet another form, and so on. (Due to the lack of permanence, in that case we could not speak of laws at all.) Bearing in mind the delicate balance of all forces in nature, it becomes absolutely clear that under any such metamorphosis of the interactions everything would fall apart "before our eyes". In such a world no stationary or dynamic structures could be created, nor could they remain stable in time. If a similar "undetermined variability" existed in the primary matter that builds our world, it would lead to an absolute chaos, incapable of producing any arrangement or organization whatsoever.
Here we will make one clarification. Some scientists state that the new string theory offers a powerful conceptual paradigm with the potential to answer the question of why the elementary particles possess exactly the observed properties. Let us therefore say a few words about it. Strings can perform an infinite number of resonant wave oscillations, which means that they should generate an infinite series of elementary particles with all sorts of properties. In that case we may ask why only a limited number of particles are observed – particles which, as we noted in chapter IV, resemble the elements of a perfect meccano (construction set) allowing our world to be assembled. The answer given by string theory is that there are at least six (or seven) additional dimensions of space, which at microscopic scales are rolled up into so-called Calabi–Yau shapes (fig. 2). (Named after Eugenio Calabi and Shing-Tung Yau, who studied them mathematically before their significance for string theory was known.)
Fig. 2 a) One of the possible Calabi–Yau shapes. b) A greatly magnified region of space, with the additional dimensions in the form of miniature Calabi–Yau shapes.
The additional dimensions have a great influence on the way the strings oscillate and, as a result, on the properties of the particles. But the equations show that there is an infinite number of Calabi–Yau shapes, each of them as valid as all the rest. So we reach a dead end again: how were precisely those shapes selected that generate exactly the necessary elementary particles? The question is only shifted, not solved.
We should recall, however, that string theory fails to meet the criteria of verification and falsification and remains a purely speculative research field, which has not yet achieved scientific status. In his work on the history of quantum gravity Carlo Rovelli points out: "So, where are we, after 70 years of research? There are well-developed tentative theories, in particular strings and loops, and several other intriguing ideas. There is no consensus, no established theory, and no theory that has yet received any direct or indirect experimental support. In the course of 70 years, many ideas have been explored, fashions have come and gone, the discovery of the Holy Grail has been several times announced, with much later scorn."
Hawking and Mlodinow, relying on this highly problematic string theory, say that it predicts the possible existence of 10⁵⁰⁰ universes. But even if this hypothesis turned out to be true, it would still not mean that those universes actually exist. And, as we shall see in a moment, the number of universes cited is utterly insufficient to "save" Hawking and Mlodinow's theory of the emergence of an orderly world.
- b) The second sub-question on the issue of laws is: what is the statistical probability of the accidental coming into existence of a stable and well-ordered universe?
The fundamental constants, the characteristics of the elementary particles, etc., are measured by analogue (continuous) quantities, and therefore allow an infinite (∞) number of possible values. Let us assume that a system of n elements is required for the existence of such a world. In general, the probability for each member of the system to have exactly the appropriate parameters is 1/∞, and for all n elements – 1/∞ⁿ. Even if the system has an infinite number of stable configurations, the probability that any one of them forms by chance is:
∞/∞ⁿ = 1/∞ⁿ⁻¹
(Here n is an integer greater than one. For our world n is around 100, because at present there are at least a hundred parameters whose values have to be precisely adjusted.)
If we denote n − 1 = k, the expression takes the form 1/∞ᵏ, i.e. this probability is the infinitely small raised to the power k (fig. 3).
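The formula above can be made tangible by replacing ∞ with a finite number N of possible values per parameter and letting N grow: the probability N/Nⁿ = 1/Nⁿ⁻¹ only shrinks. A minimal sketch, using the text's estimate of n ≈ 100 tuned parameters (the choice of sample values for N is arbitrary):

```python
import math

n = 100  # number of parameters to be tuned (the text's estimate)

# Replace the infinite number of possible values per parameter with a
# finite N.  The probability that all n parameters land by chance in
# one of the N "working" configurations is
#     P = N / N**n = N**(1 - n),
# which shrinks without bound as N increases.
for N in (10, 100, 1000, 10**6):
    log10_P = (1 - n) * math.log10(N)
    print(f"N = {N:>7}: P = 10^{log10_P:.0f}")
```

Already for N = 10 the probability is 10⁻⁹⁹, and each increase of N only drives it further toward zero, which is the limiting behaviour the formula expresses.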
Fig. 3 Possible configurations of parameter values that ensure working (stable and functioning) states, numbered from 1 to ∞. Systems I, II, III and so on may be other worlds, with physical structures similar to those formed within them.
To put it differently, for systems that allow an endless number of values for their parameters, a peculiar paradox occurs. Although they could possess countless working conditions, even then the probability to reach any one of them by chance is smaller than infinitely small or, in practice, it could never be fulfilled.
We will once again make a short detour, to give readers who are not experts in mathematics some clarification. In theory, statistical laws allow the realization of events with insignificantly small probability. Practical experience, however, shows that such events never happen. Therefore some scientists assume that for every event there is a certain "probability threshold" below which its realization is unlikely. But however inconceivably small such ratios may be – for example 1/10⁵⁰⁰, 1/10⁶⁵⁷²⁰, etc. – there are still people who argue that such probabilities could be realized. When we arrive at a probability of 1/∞, however, it is infinitely smaller than the smallest probability we could write down or even think of. That is why we hope that even to such "optimists" a probability of 1/∞ will surely signify an absolute "prohibition" on the event ever being realized in practice.
The question arises whether it makes sense to raise 1/∞ to any power, since the ratio 1/∞ already tends to zero and shows a total impossibility of the event happening. We should follow, however, the rules of probability theory, according to which the total probability of two or more independent events is equal to the product of the probabilities of each of them occurring separately. When a total probability of 1/∞ raised to some power is obtained, this, in our view, shows more than an absolute impossibility of something being realized.
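The multiplication rule just mentioned, and the way such products behave numerically, can be sketched in a few lines (the probabilities are arbitrary illustrative values, chosen so that the product equals the 1/10⁵⁰⁰ figure discussed in connection with the string-theory universes):

```python
import math

# Independent events: total probability is the product of the parts.
probs = [1e-5] * 100  # 100 independent events, each with probability 1/10^5

# Naive multiplication underflows IEEE double precision (min ~5e-324):
naive = 1.0
for p in probs:
    naive *= p
print(naive)  # 0.0 -- the true value 1/10^500 is below floating-point range

# Working in log space keeps the result representable:
log10_total = sum(math.log10(p) for p in probs)
print(f"total = 10^{log10_total:.0f}")  # total = 10^-500
```

Even 1/10⁵⁰⁰ is too small for ordinary machine arithmetic to represent directly, which gives some feel for how far below any practical "probability threshold" such numbers lie.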
The above shows that 10⁵⁰⁰ universes are a vanishingly small number. But even if their number were increased to an infinite value, the calculations cited here make it clear that there would still be no chance for a world like ours to come into existence.
With living organisms the variations are limited, since their components (DNA, proteins, etc.) are built from a strictly determined number of discrete units (nucleotides, amino acids, etc.). Yet even so, the probability of an accidental formation of a protocell capable of carrying out all the processes of life turns out to be negligibly small and practically unrealizable. To put it differently, in the areas highlighted, the existing dynamical and statistical laws forbid the self-organization of matter.
Figure 3 allows us to make another important conclusion, namely, that no evolutionary processes are possible either in dead or in living nature.
By definition, a system is a multitude of elements that stand in relations and connections with each other and form a certain unity, a whole. All elements of the system are interdependent: each of them affects the rest, and vice versa. The structure of the system determines its internal form of arrangement, i.e. it is an expression of the order existing within it. The full description of order in complexly organized systems is studied by a comparatively new science called taxiology (the logic of order), which is being developed as one of the most fundamental and important logical theories. Its basic principles and categories, however, are studied by means of rather complicated extensional mathematical logic and theoretical computational methods. We shall therefore not examine them here, but will apply an extremely simplified approach, which will still allow us to draw conclusions about the possibility of evolution in hierarchically structured systems.
A principle known as "all or nothing" applies to such systems: the structure must be composed of suitable elements, arranged in the correct order, so that the action of the system is not disturbed. If we change the parameters of even a single element, remove it altogether, or swap the places of some elements, a disturbance occurs in the functioning of the system that destroys it or puts it out of use. Either everything is fine and the system functions normally, or else it is as if nothing is fine and the system ceases to work.
This principle forbids the gradual "evolution" of one structure into another. Could a small wristwatch gradually transform itself into a large clock? Suppose one of its gears grew to clock size. It would then be incompatible with all the other mechanisms of the small watch, which would no longer keep time correctly, or would not work at all. Now let the other parts also transform to clock size one by one. While part of the mechanism is still watch-sized and the rest clock-sized, its function will be considerably disturbed or impossible altogether. The timepiece will work normally only when either all its parts are small or all are big.

And what would happen if one part of the watch were replaced by a computer part – say, a transistor put in place of a gear? The watch would certainly stop working. Conversely, the computer will not perform its function either if we have assembled all the computer parts but one part still remains from the watch.

Based on the above we can draw the following conclusion: when one object is gradually transformed into another object of the same type (but differing in some way – in size, in model, etc.), the function is hampered or even ceases; and when an object of one type is transformed into an object of another type, the function cannot be realized at all. Therefore, either "everything" is in order and the system functions normally, or, if even one thing is out of order, it is as if "nothing" is in order and the function is broken.
Of course, the relations between the elements of systems in nature are significantly more complicated; we have taken these examples only to illustrate the "all or nothing" principle. Analysing Fig. 3, we can draw the following conclusion about the possibility of evolution in systems whose parameters admit an infinite number of values: neither a gradual nor a saltatory ("quantum") transition from one working system to another is possible.
In the first case, that of a gradual transition, if one parameter changes its value it will no longer be in accord with the other parameters, and the system will break down. The other system, in turn, will not be fit to work until all its necessary parameters are completely in place. As we have explained, the "all or nothing" principle applies here.
The second case, that of a "quantum" (sudden) transformation, is also impossible to realize. The probability that all parameters of the system suddenly change and acquire exactly the values needed for some other functioning system is smaller than infinitely small (according to the calculations above – 1/∞ᵏ).
At the beginning we already mentioned that any metamorphosis in the parameters of the micro-world (the characteristics of the particles, the strengths of the interactions, etc.) makes atoms unstable and leads to their destruction. In other words, atoms and the other chemical elements are discrete structures which cannot pass into one another through a series of intermediate forms, but require a strictly calculated design. We may think of the celestial formations – planetary, stellar, galactic – in a similar way.
As is well known, proteins play a very important role in living creatures. They build the cell structures, perform catalytic functions, participate in the realization of the genome, etc. Some of them, however, are species-specific. Therefore, if a mutation occurs that leads to the formation of a different protein, its action will not be in unison with the work of the other proteins. In this way genetic mutations impede the synchronization of the systems in the organism and thus in fact prove harmful for the individual, i.e. they do not assist it in the struggle for existence. In other words, the "all or nothing" principle does not aid the gradual evolution of organisms, and there are no new data for a "quantum" (i.e. sudden) emergence of new species.
Theoreticians propose two different explanations for the course of the evolutionary process in biology. The first is called "phyletic gradualism". According to this view, present-day living creatures have gradually evolved from earlier and simpler organisms. In that case, however, we should observe continuous lines of transitional forms among the species, as well as among the higher taxa. It is inexplicable why this line of intermediate links is missing not only among contemporary organisms but also among the fossils. A very telling statement in this respect was made by N. Heribert Nilsson, director of the Botanical Institute at Lund University, Sweden. After 40 years of investigations in palaeontology and botany, he was finally forced to say: "It is not even possible to make a caricature of an evolution out of palaeobiological facts. The fossil material is now so complete that … the lack of transitional series cannot be explained as due to the scarcity of the material. The deficiencies are real; they will never be filled."
The second view is known as "punctuated equilibrium". The term denotes a hypothetical process in which species change saltationally, undergoing rapid evolution in small populations. S. Stanley calls this the "quantum" (here meaning "sudden") emergence of new types. Such an imaginary process could explain the universal absence of transitional forms, but there is no genetic evidence for it whatsoever.
Here is the evaluation that two famous evolutionists, J. Valentine and D. Erwin, give of these concepts: "We conclude that … neither of the contending theories of evolutionary change at the species level, phyletic gradualism or punctuated equilibrium, seems applicable to the origin of new body plans".
From these considerations we reach the following conclusion: intermediate states are (a) unstable – in atomic and celestial structures, and (b) non-functional – in living organisms. This means that the concept of a universal Darwinian evolution of systems in non-living and living nature alike is definitively unacceptable.
The third issue in Hawking and Mlodinow’s book pertains to the emergence of space and time.
Chris Isham devotes considerable attention to the fact that, as of now, it remains unclear what a theory of quantum gravity should look like and on what data it should be based. According to Isham, the main difficulties in building a quantum theory of gravity, and hence a quantum cosmology, stem from the fact that "general relativity is not just a theory of the gravitational field—in an appropriate sense, it is also a theory of spacetime itself; and hence a theory of quantum gravity must have something to say about the quantum nature of space and time."
We will discuss this issue in more detail in the last article of our series.
- B) Christian Theism
A few words on God
According to the Bible, God is spirit. His nature (essence, substance) is a spiritual one, understood as something entirely different from matter. From the postulate that God is not material it follows that His presence cannot be established by any scientific or technological methods (unlike, for example, establishing the existence of a physical field). To put it differently, His existence cannot be verified directly; we can only judge indirectly, by certain things, that He actually is.
God is transcendent, which means that He is outside the material space-time continuum, and at the same time immanent (in the sense of omnipresent): His presence is everywhere and at all times, yet separate and independent from everything else. He is not an anonymous power or energy but a personal God with a mind, emotions, a will and a strongly expressed character – One who has not only wisely created but also permanently governs the entire world order. Finally, God holds everything under His control and has the sovereign right to govern the world in different ways: at times He lets events unfold in accordance with the natural laws, and at any moment He can perform a miracle through which He helps people or achieves some other purpose.
The Bible presents God as three Persons, distinct but equal in essence – the Father, the Son and the Holy Spirit; precisely in this way God is Unity in Trinity. This truth is revealed to us in the Holy Scriptures, but it is a mystery that transcends the mind and is unparalleled in human experience. Therefore neither the intellect is capable of reaching it, nor language of narrating it in an accessible manner. Our dilemma, seeking to define the undefinable in order to provide argumentation for our faith, is reflected in the words of Blessed Augustine: "Three Persons – not because it is possible to speak of it, but because one should not remain silent."
God exists eternally (keeping in mind that He is outside time), and there is no cause or reason that brought Him into existence. To put it concisely, God is eternally self-existent. In this way the question "Who created God?", constantly asked by some scientists, becomes totally irrelevant.
According to Christian teaching, the Triune God has called into existence all things material and spiritual, initially out of nothing (lat. ex nihilo). Though the phrase "from nothing" is not found verbatim in the Bible, the idea behind it is represented many times in both the Old and the New Testament. In theology, the expression "from nothing" is used to explain that God did not use any initial substance. We must clarify, however, that "nothing" has no existence, which means that before the "beginning" only God existed and nothing else. Of course, after God created matter, He subsequently formed plants, animals and humans out of its components, i.e. from the chemical elements contained in the seas and in the ground.
The Creation is an act of His sovereign will; it is not an action to which God was driven by any reason outside Him or by any immanent necessity. In calling reality into existence He used only His Word. In the first chapter of the Book of Genesis we read that God spoke and His words immediately turned into reality. It is enough that He says "Let there be light!" and light immediately arises and shines over the depths of the heavens. According to St. Augustine and St. Maximus the Confessor, the logoi (ideas) of God's mind are the archetypes (prototypes) of every detail of the Universe. They are embodied as tropoi expressing the creative design in the nature of things; therefore light, for example, exhibits wave-particle duality, polarizes, moves at a certain speed, etc.
Bruce Milne writes: "The biblical view of the world contains the truth that God constantly maintains and renews the world… This is also hinted at by the Hebrew participles used of God's creative work, a nuance which translations are unable to convey properly. According to the 'Hebrew Grammar' textbook, written by Gesenius and (later) Kautzsch, the Hebrew participle in the active voice refers to a person or thing conceived as engaged in a continuous, permanent activity… To put it more philosophically, God has called the Universe into existence out of nothing, and for this reason it (the Universe) 'hangs' at every moment, as if above the abyss of non-existence. Were God to withdraw His supporting Word, every existence – spiritual and material – would instantly collapse into nothing and cease to exist."
In this line of reasoning, we can reassure Stephen Hawking with the assurance given to us in the Epistle to the Hebrews that the Son "upholds all things by His powerful word" (Hebrews 1:3), which means that the Higgs boson will not be able to destroy the Universe at least until His Second Coming!
We have to point out, however, that there are also other viewpoints. For example, John B. Cobb, Jr. and David Ray Griffin advance the idea that God did not create things from absolute nothing. The so-called "process theology" asserts the doctrine of Creation from Chaos. According to its proponents, more texts in the Old Testament support this view than support creation from nothing. They maintain that in a state of absolute chaos there would be only random, low-level "useful (utilitarian) occasions", which certainly could not order themselves into "sustainable individuals". God, however, creates permanently, and as a result an indefinite number of "experimental occasions" occur at every moment. In this way God contributes to the realization of each useful occasion. As can be seen, this theory almost entirely overlaps with the hypothesis of the Universe arising out of the chaotic bubbling of the vacuum, with the difference that here God definitely assists the realization and sifting of the "useful occasions" in a manner which eventually leads to the construction of an orderly and sustainable world such as ours.
Other scholars interpret the very formula "creation out of nothing" to imply that the "nothing" was something real: an original negative which God overcame in His work of creation. According to the existentialist Martin Heidegger, for example, non-existence has its own being and is therefore capable of confronting existence. Such a concept bears the traces of Greek philosophy.
In the latter two concepts the "nothing" is actually "something", i.e. some proto-matter; therefore they are subject to both theological and scientific criticism. If we assume that a highly fluctuating primary matter existed, there is no guarantee that it had exactly the right properties for the formation of atoms, for example. If the elementary particles had some unsuitable properties, those could upset the equilibrium of atoms; in that case the inappropriate properties would have to be destroyed, i.e. turned into nothing. On the other hand, if a particle lacked any of the main characteristics (i.e. those that cannot emerge as combinations of others) needed for assembling atoms, such characteristics would have to be created out of nothing. To put it differently, God must be able to create from nothing the matter most suitable for building all things in the Universe; otherwise He would not be omnipotent either.
God’s intelligent activity revealed in the structure of the microcosm
Space has exactly three dimensions, which seems to be the best choice. When quantum theory is applied to one-dimensional universes, particles pass through one another without interacting; therefore they cannot bind together to form more complex substances. In a two-dimensional space, gravitating bodies can under no conditions form a bound system. There would be nowhere for amino acids, let alone protein molecules, to develop.
As early as 1917, Paul Ehrenfest analyzed the Poisson and Laplace equations and showed that a space of four or more dimensions would be unstable: the electrons within atoms, as well as the celestial bodies, would either spiral into the center or fly away, leading to chaos and to the collapse of the orderly organization of the material world. To put it differently, three dimensions are something really special.
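Ehrenfest's conclusion can be illustrated numerically. The toy Python sketch below (our own illustration, not Ehrenfest's original calculation) integrates a slightly perturbed circular orbit under an attractive central force F = 1/r^p, where p = n − 1 mimics gravity in n spatial dimensions: for p = 2 (three dimensions) the perturbation stays bounded, while for p = 3 (four dimensions) the orbit runs away.

```python
import math

def simulate(p, steps=30000, dt=1e-3):
    """Integrate planar motion under an attractive central force F = 1/r**p
    (semi-implicit Euler).  p = n - 1 mimics gravity in n spatial dimensions.
    Returns the maximum deviation of the radius from the initial circle r = 1."""
    x, y = 1.01, 0.0      # start 1 % outside the circular orbit...
    vx, vy = 0.0, 1.0     # ...with the circular-orbit speed for r = 1
    max_dev = 0.0
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -x / r**(p + 1), -y / r**(p + 1)  # acceleration toward center
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
        max_dev = max(max_dev, abs(math.hypot(x, y) - 1.0))
    return max_dev

print(simulate(2))  # "3-D" case: the orbit merely wobbles by a few percent
print(simulate(3))  # "4-D" case: the deviation grows without bound
```

The effective-potential analysis behind this is standard: circular orbits under F ∝ 1/r^p are stable only for p < 3, which singles out three spatial dimensions for inverse-square gravity.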
Time, in Newton's classical physics, is like an arrow flying at constant speed in one direction – from the past through the present into the future. According to Einstein's theory, time turns into a river which accelerates or slows down as it follows its meanders through the distorted space of the universe. In 1937, van Stockum discovered a solution of the equations of General Relativity which allows for time travel. But a number of paradoxes then arise which disturb the cause-and-effect link.
In 1992, Stephen Hawking proposed the "chronology protection conjecture", according to which travel through time is impossible because it contradicts certain principles of physics. His arguments were, however, countered by some scientists with the antithesis that "there is no law of physics which prohibits the appearance of closed time-like curves". Still, even today Hawking's question, "If time travel is possible, then where are the tourists from the future?", remains relevant.
The atom is the main building block of nature because it is integral to substances, objects, living organisms, etc. Let us therefore follow how the values of the various physical parameters combine so that the atom is built as a stable structure. We should note that a wide range of parameters in the microcosm must lie within very strictly defined limits, or else the world would break "into pieces".
Stability of the atomic nucleus. The atomic nucleus is made of protons and neutrons, subatomic particles referred to as nucleons. The protons building it are impressively resilient: it is estimated that they do not disintegrate over a period of at least 10^35 years. Neutrons in a free state decay within about 15 minutes into other particles; while in the nucleus, however, they constantly transform into protons (and vice versa) and are thereby preserved. The strong nuclear interaction has exactly the intensity necessary to overcome the Coulomb repulsion between the positively charged protons. Had it been weaker, it would not have allowed them to bind into atomic nuclei, and the only element in the universe would have been hydrogen. Had its value been higher, the nucleons (i.e. the protons and neutrons) would have had such affinity that they would have constantly linked to one another, and the only nuclei in nature would have been those of heavy elements. The mass of the neutron is just a little greater than that of the proton. If it were the opposite, i.e. if the proton were heavier by only 0.1%, it would decay into a neutron, a positron and a neutrino. There would have been no chemical elements, and all stars would have turned into neutron stars or black holes.
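The quoted 15-minute lifetime of the free neutron can be turned into a quick estimate. The short sketch below (assuming the standard exponential decay law and a mean lifetime of about 880 seconds) shows how rapidly a population of free neutrons would disappear:

```python
import math

TAU = 880.0  # approximate mean lifetime of a free neutron, in seconds

def surviving_fraction(t_seconds):
    """Fraction of free neutrons not yet decayed after t seconds,
    from the exponential decay law N/N0 = exp(-t/tau)."""
    return math.exp(-t_seconds / TAU)

print(surviving_fraction(880))   # after one mean lifetime: about 37 % remain
print(surviving_fraction(3600))  # after one hour: under 2 % remain
```

Inside a nucleus, as the text notes, this decay is suppressed by the constant interconversion of neutrons and protons, which is what keeps stable matter possible.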
It seems quite improbable that, given the huge density of nuclear matter and the strong forces of nuclear interaction, independent movement of the separate particles in the nucleus could exist. However, research has revealed that the nucleons occupy only about 1/50 of the volume of the nucleus and, furthermore, owing to the Pauli exclusion principle, the mean free path of each nucleon is rather long (practically unlimited), i.e. each nucleon moves independently of the others, which makes it clear that the nucleus is a dynamic system of particles.
Dynamic nucleon systems differ from the Solar system, for example, in some of their characteristics, and here we come across additional difficulties: firstly, there is no central object around which the particles move; secondly, the forces acting between the nucleons, besides being extremely intense, are also much more variable and complex.
To resolve the first difficulty we can reason as follows. Each nucleon in the nucleus creates around itself a nuclear field. As all nucleons move, the field they create at any particular point changes constantly. It can be assumed, however, that this field, averaged over some small time interval, remains practically unchanged. In its turn it will then determine the motion of each separate nucleon. This leads to the requirement that the average potential of the field be self-consistent: it must produce such motion of the nucleons that their distribution in space reproduces the same average potential we started from.
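The self-consistency requirement described above has the shape of a fixed-point problem, and in practice such problems are solved iteratively (this is the idea behind Hartree-style mean-field methods). The following deliberately simplified Python sketch, with an invented one-number "density" and "potential", shows only the loop structure, not real nuclear physics:

```python
import math

def mean_field(U=2.0, tol=1e-10, max_iter=1000):
    """Toy self-consistent-field loop: the potential v felt by one particle
    is generated by the average density n of the others, while n in turn
    depends on v.  Iterate until the potential reproduces itself."""
    v = 0.0
    for _ in range(max_iter):
        n = 1.0 / (1.0 + math.exp(v))  # density produced by potential v (toy formula)
        v_new = U * n                  # potential produced by density n (toy formula)
        if abs(v_new - v) < tol:
            return v_new
        v = 0.5 * (v + v_new)          # damped update helps convergence
    raise RuntimeError("SCF iteration did not converge")

v = mean_field()
print(v)  # the converged, self-consistent potential
```

At the fixed point the potential regenerates itself, which is exactly the "self-consistency" demanded of the averaged nuclear field in the paragraph above.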
Besides the self-consistent average potential, other forces which change over time, distance and/or orientation in space also act on the nucleons in the atomic nucleus. The study of nuclear potentials has shown that these forces depend in a rather complex way on the distance between the nucleons, but also on their velocity, the direction of their spin, etc. Furthermore, in the nucleus (as well as in the electron shell) there is the so-called spin-orbit interaction between particles: their energy is substantially influenced by the angle between their spin vector and their orbital-moment vector, etc.
Stability of the electron shell. The value of the electromagnetic interaction is also chosen with remarkable precision. Had it been smaller, the electrons could not bind to the nucleus and would disperse into space. Conversely, had it been stronger, the nucleus would bind the electrons so tightly that their sharing in covalent bonds with other atoms (and hence the formation of molecules, let alone ionic compounds) would be impossible. The size and stability of the electron orbits depend on the ratio between the masses of the electron and the proton. Had the mass of the proton been a few times greater than the observed one, atoms would have collapsed. At a three times higher electric charge of the electron we would know no nuclei heavier than that of carbon, i.e. Mendeleev's Table would consist of only six elements. Opposite spins, in turn, allow the electrons to group in pairs in atomic orbitals, etc.
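The sensitivity of atomic sizes and binding energies to these parameters can be illustrated with the elementary Bohr model (a rough sketch only; the scaling factors below assume the elementary charge and the electron mass are rescaled together everywhere they appear):

```python
A0 = 5.29177e-11   # Bohr radius of hydrogen, in meters
E0 = 13.6057       # ionization energy of hydrogen, in eV

def scaled_atom(mass_factor=1.0, charge_factor=1.0):
    """Bohr-model scaling: the ground-state radius goes as 1/(m*q**2)
    and the binding energy as m*q**4, so even modest changes in the
    electron's mass or in the elementary charge reshape every atom."""
    radius = A0 / (mass_factor * charge_factor**2)
    energy = E0 * mass_factor * charge_factor**4
    return radius, energy

r, e = scaled_atom(charge_factor=3.0)  # elementary charge tripled
print(e / E0)  # binding energy grows 81-fold
print(A0 / r)  # the atom shrinks 9-fold
```

The point of the sketch is qualitative: a factor-of-three change in one constant does not change atoms by a factor of three, but by one or two orders of magnitude.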
The English physicist and mathematician Douglas Hartree illustrates the many-particle problem by pointing to the technical difficulties of calculations for multi-electron systems: "Iron contains 26 electrons, each of them described by three variables. Therefore we have 26 × 3 = 78 independent variables. If each variable takes only 10 discrete values, there will be 10^78 equations whose solution is necessary to calculate the ground state of iron."
Sir Arthur Eddington estimated the number of atoms in the Universe at about 10^80. Thus, if we were to write one equation in place of each atom, we would arrive at roughly the system of equations necessary to calculate the ground state of iron (as a chemical element). For lead (Pb), however, the system would contain 10^(82×3) = 10^246 equations. In other words, we would need 10^166 more universes in order to write down all the equations necessary to describe the electron shell of lead (Pb).
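Hartree's and Eddington's numbers are easy to reproduce. The short sketch below simply redoes the arithmetic of the two preceding paragraphs (26 and 82 electrons, three coordinates each, ten discrete values per coordinate):

```python
def configurations(n_electrons, values_per_variable=10):
    """Number of grid points if each electron carries three coordinates,
    each coordinate sampled at the given number of discrete values."""
    n_variables = 3 * n_electrons
    return values_per_variable ** n_variables

iron = configurations(26)    # Hartree's estimate for iron
lead = configurations(82)    # the analogous count for lead
atoms_in_universe = 10**80   # Eddington's estimate

print(len(str(iron)) - 1)                       # 78  -> 10^78
print(len(str(lead)) - 1)                       # 246 -> 10^246
print(len(str(lead // atoms_in_universe)) - 1)  # 166 -> short by 10^166 "universes"
```

The exponent counting (length of the decimal string minus one) works here because every quantity involved is an exact power of ten.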
We must also add the following: even if this colossal number of equations, related to achieving dynamic equilibrium within the nucleus and the electron shell, could be solved, it would not be enough. They give only an approximate treatment of the task, since they fail to take fully into account all interactions and their constant change. Such a solution is therefore still not good enough, as it carries some inaccuracy (imbalance) which constantly grows and in the end would lead to the destruction of the system. The atom, however, remains stable over time, i.e. it has been designed in a magnificent manner, well beyond the capabilities of our mind. The great German physicist Max Born said: "I used to see the atom as the key to all the deepest secrets of nature, but it opened before me the greatness of the entire creation and of the Creator."
In what way are the principles of conscious creation discussed in the first part displayed in the microcosm?
- Choosing and maintaining optimal initial conditions.
It is striking that a wide range of parameters must lie within certain tightly precise limits and remain strictly fixed: constants, laws, interactions, the appropriate space and time, etc. In the words of the Russian physicist and mathematician Iosif Rozental: "The equilibrium is so fragile that the smallest variation in the existing laws leads to catastrophic consequences."
- Hierarchical Order
Within the atom, the protons, neutrons and electrons do not behave as independent particles but as an interactive whole. The unique characteristic of each chemical element is not its material content but rather the way in which its components are arranged. However, as Michael Dodds from the University of Notre Dame points out: "Here we mean not simply their structural or artificial position; rather we mean the dynamic unity which determines the behavior of each component of the atom." Therefore the simple substance of a given chemical element possesses properties of a different quality, which cannot be reduced to the mechanical sum of the properties of the particles building its atom.
- Using complex mathematics.
The reader has probably figured out that it was not necessary for God to make any calculations, because He possesses full and supreme knowledge of things. Logic and mathematics are necessary only for us, to enable us to penetrate God's supreme wisdom. We can add that these sciences actually give us the code through which we translate God's thoughts written in the Book of Creation. And if the Universe is comprehensible to us, this speaks in favor of the position that we are created "in God's likeness". The mathematician, physicist and astronomer Johannes Kepler in this sense spoke of "thinking God's thoughts after Him", and craved that through his scientific studies he "would acquire the ability to taste the pleasure of God the Creator in His work and to share His joy".
- The impossibility for the structure of the atom to have arisen through an evolutionary scheme.
We have already pointed out that even the smallest change in the parameters of matter would lead to the destruction of atoms, which means that no "intermediate forms" are conceivable. Each addition of particles leads to a new configuration of the system's components, which requires an impossibly complicated mathematical calculation in order to achieve a stable dynamic equilibrium of the nucleus and the electron shell. If we define this postulate as the "dynamic aspect of the anthropic principle", it follows that it cannot be explained away through the "multiple universes" argument.
Moreover, in order for any kind of self-organization in cosmic systems and living organisms to be accomplished, millions of years are necessary. But even the naturalists acknowledge that atoms appear instantly in the first moments of the Bang or within the depths of the stars; in this case, then, the "time" factor, without which evolution would not be possible, is lacking.
That is, the atoms of the chemical elements are discrete structures, calculated and designed strictly individually in physical and mathematical terms, and no evolutionary theory is applicable to their origin.
One of the founders of the quantum theory, the German physicist Max Planck, said: "After all my studies of the atom I can say the following: All matter originates and exists only through a single Force, which spurs the atomic particles into vibration and keeps them in motion… we have to accept that behind this Force there is a conscious, intelligent Spirit. It is this Spirit that is the primary cause of all matter."
- Only intellect is capable of accomplishing events the probability of which is 1/∞ to a certain power.
In the first part we already pointed out that the probability of putting together by chance even such a simple system as a piston and a cylinder is (1/∞)^4, yet for an intelligent engineer this is not a problem. To put it differently, a probability of 1 to infinity raised to some power draws the demarcation line between consciousness and blind chance (the "blind watchmaker", in Dawkins' words), a line which the latter could under no circumstances cross.
Fig. 3 Possible configurations of the values of the parameters which ensure from 1 to ∞ working (stable and functioning) states. Systems І, ІІ, ІІІ, etc. may be either other worlds or the physical structures that form within them.
Returning to fig. 3 above let us remember the paradox related to that simple calculation:
∞/∞^n = 1/∞^(n−1) = 1/∞^k,
from which it is understood that God could create an infinite variety of orderly and stable worlds, but each of them is highly improbable (1/∞ to a certain power), which makes its accidental coming into existence impossible.
This answers Einstein's question, "Did God have a choice when creating the Universe?", the same question being repeated by Hawking and Mlodinow in their book "The Grand Design".
Richard Dawkins put forward the following thought: "A universe with a supernaturally intelligent creator is a very different kind of universe from one without. The difference between the two hypothetical universes could hardly be more fundamental in principle, even if it is not easy to test in practice. And it undermines the complacently seductive dictum that science must be completely silent about religion's central existence claim. The presence or absence of a creative super-intelligence is unequivocally a scientific question, even if it is not in practice – or not yet – a decided one."
We fully agree with his proposition that we need to find a way to empirically verify the evolutionary and the Biblical cosmological models of the origin of the Universe. Furthermore, as Dawkins is an ardent atheist, we can hardly suspect that such a criterion has been invented for the benefit of Christianity. Shortly we will propose an approach which will allow us to delineate the consequences of the two models, but, as the problem is rather complex (fortunately not an impossible one!), it is necessary first to clarify some general positions.
Namely: The Bible is not a scientific reference book! Biblical authors and scientists speak different languages. The language of science is strictly specialized and technical; it has evolved to serve very specific purposes and often can be understood only by professionals. The language of the Bible is phenomenological – it represents things as they appear. In other words, the language of the Bible with respect to nature is a popular one, the kind of language spoken by ordinary people of the relevant age and place. The first chapters of the book of Genesis do not argue for or against the theories of Aristotle, Ptolemy, Copernicus, Newton or Einstein. The Bible fundamentally declines to engage with such theories in any area, be it astronomy, geology, physics, chemistry or biology, and it contains no scientific postulates of any kind.
As many theologians rightly point out, one should not try to speculatively reason how the Universe originated because God is capable of creating the world around us in a manner which defies the capacity of our reason, mind and imagination, i.e. we need not place any limitations on the omnipotence of God. As archimandrite Iustin Popovic said: “The way of the creation is so complex and unfathomable in its very foundation that it is inaccessible to the human mind”.
The founder of the empirical method and father of all modern science, Galileo Galilei, assures us: "The main purpose of the Bible is the adoration of God and the salvation of souls… But hundreds of texts in it teach us that the glory and greatness of almighty God are perfectly visible in all of His works and can be read in the open Book of Nature."
The authors of "The Grand Design", citing Richard Feynman, hold that the Universe may have come into existence in every possible way. We would add that God's creation presupposes that the universe has come into existence not only in all "possible" but also in all "impossible" ways. Still, we hope that the scientific method, which reveals to us one by one the secrets of the miracles of nature, will soon demystify the secret of Creation as well.
Through contemporary means and devices – elementary particle accelerators, telescopes observing the depths of the cosmos, gravitational-wave detectors, computer simulations, etc. – we are able to look even into the first moments after the birth of the world. Therefore we will try to answer Dawkins' question: "Is our Universe the work of a conscious Creator, or has it evolved by the principle of pure chance?" We can distinguish the two viewpoints in the following manner:
A) Darwin was right if it is proven that matter evolves on the basis of multiple variations born out of mutations, with natural selection singling out the most appropriate forms and in this way perfecting the physical, chemical and biological structures. But here a great number of unsuccessful combinations, the so-called "intermediate forms" rejected by the selection process, should also be observed; it is exactly those forms that would constitute the most important evidence that things have developed through a sequence of entirely chance events. As implied, the direction of the processes is from chaos to order.
B) The all-wise God should have created a perfectly ordered and harmonious world in which everything is provided for and from the beginning everything proceeds according to a certain plan. An additional complication, however, comes from the fact that after the Fall, according to the Apostle Paul, "the creation was subjected to frustration (corruption, i.e. decay)" (Romans 8:20-21), which presupposes that disintegration will also be seen in it. But the direction of the processes here is exactly the opposite: from order to chaos.
Experimental verification of the theories of Microcosm
For this purpose the most significant role is played by elementary particle accelerators, the most powerful of which, the Large Hadron Collider, was officially launched in 2008 at the European Organization for Nuclear Research (CERN). It is located in a tunnel with a circumference of 27 km, 50 to 175 meters underground at the French-Swiss border close to Geneva. It is designed to collide high-energy protons with a total energy of 14 TeV, as well as lead nuclei with an energy of 5.5 TeV per nucleon pair. The objective of the project was above all to discover the Higgs boson, theoretically predicted by the Standard Model of elementary particle physics. It also focuses on studying the properties of the W and Z bosons, nuclear interactions at super-high energies, and the processes of production and decay of heavy quarks.
On 4 July 2012 CERN announced that a particle with a mass of about 126 GeV had been observed, but that much work remained before it could be proved that it was the sought-after Higgs boson. Nearly a year later, on 14 March 2013, it was confirmed that the discovered particle was indeed the Higgs boson, which led to Peter Higgs and François Englert being awarded the Nobel Prize. In the view of many scientists this was the greatest scientific discovery of the 21st century.
The Large Hadron Collider stopped operation in February 2013 for a planned upgrade and, upon resuming operation in 2015, started working at close to its full design capacity.
Fundamental questions regarding the beginning of the universe also remain unanswered. Antimatter, the "mirror image" of matter, supposedly should have made up one half of the substance produced in the Big Bang, which means that immediate annihilation should have followed and only radiation should have remained. Obviously, that is not the case! Upcoming experiments at CERN will search for small differences – inaccuracies in the mirror image – which could explain the domination of matter over antimatter in the present Universe.
Astronomical observations have shown that the matter we know accounts for only about 4% of the critical density of the cosmos, the majority being made up of components unknown until now, such as dark matter and dark energy. Therefore scientists will look for evidence of "new" particles which could help us understand the mysterious and still unexplored 96% of the fabric of the Universe.
Another challenge will be the discovery of a way to merge quantum physics with the General Theory of Relativity. These and other similar "mysteries" will give direction to our experimental study of the Universe, but the door is always open for surprises.
Predictions of the Evolutionary Model
In the first place, it is necessary to devise an experiment through which the quantum fluctuations of the vacuum can be examined, in order to verify whether the vacuum is really able to produce various parameters of matter: laws, constants, characteristics of subatomic particles, etc. For if the vacuum always produces the same result, this would rather support the position that the vacuum had originally been programmed to manufacture our world.
Secondly, where are the "intermediate forms", i.e. physical and chemical structures which are non-functional and irrelevant to the building of matter, and which for those reasons have supposedly been discarded by the selection process? They should outnumber the functional structures, because by definition selection singles out the most appropriate variants from a multiplicity. Of course, one can always object that those structures are unstable, i.e. they disintegrate fast, and that is why we do not observe them. However, some of them should be stable enough to remain in the microcosm even though they have no practical purpose or application, i.e. even though they are absolutely useless. Only in this manner could materialists convince us that their concept of universal Darwinian evolution is acceptable to some extent. Otherwise it is very difficult to believe that the sophisticated organization we observe at all levels of order in the world resulted from a huge chance, at the first attempt, without any trials and errors.
Thirdly, the conclusion that matter came out of the vacuum because the number of positive charges equals the number of negative ones seems incorrect. Let us remember that protons, for example, which are positively charged, are baryons, while electrons, negatively charged, are leptons. This leads to a few inconsistencies:
a) According to the standard "scenario", baryons and leptons are produced at different times and at different temperatures.
b) Baryons are made of quarks, while leptons have no constituent subparticles.
c) Baryons participate in the strong nuclear interaction, while leptons are not subject to it.
For the above reasons baryons and leptons are, by origin and structure, different and independent of each other. Therefore, even if baryon asymmetry could be discovered experimentally, it could not be linked to, or result in, the "same" lepton asymmetry. Consequently, it seems a statistical absurdity that exactly the same numbers of protons and electrons would be produced for the atoms to be assembled, and that equal numbers of positive and negative charges would be produced in all other particles, so as to avoid the breakdown of all structures in the Universe.
In the fourth place, for the same reason the assertion seems unacceptable that, if the sum of the positive energy of matter and the negative energy of gravity is close to, or even equal to, zero, we may assume that the world came out of the vacuum. The sum of one positive and one negative charge, for example, is also zero, yet they are two distinctly different charges. In a similar way, the energies of matter and gravity are of different natures, and we should therefore consider these energies by their absolute values, i.e. as existing separately and independently of each other.
To put it concisely, the naturalists do not offer any convincing argument that the world came into existence out of the vacuum, and they will have to produce at least one more experiment proving lepton asymmetry.
Fifth, from the premise mentioned in point four, one more conclusion follows: that the vacuum is locally stable but globally unstable. The first fact rather supports the anthropic principle, as it protects the Solar system, for example, from the sudden appearance of a black hole that could quickly "swallow" us. The global instability of the vacuum, however, should result in the spontaneous coming into existence of an infinite number of universes. Most interestingly, we could check the multiverse hypothesis through reasoning fully analogous to that used in Olbers' paradox. Recall that if an infinite number of stars existed, their rays would reach the Earth from all directions and the sky would be unbearably bright day and night. By analogy, quantum fluctuations of the vacuum should also have led to bridges between universes (the so-called wormholes), and through such bridges all kinds of unimaginable creatures would constantly be arriving on Earth. To paraphrase Hawking, we should ask ourselves: "Then where are the tourists from the other worlds?" (Jokingly, we could call this situation "Hawking's double paradox", since at the same time it actually refutes Hawking's position instead of supporting it.)
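The Olbers-type reasoning invoked here rests on a simple cancellation: the number of stars in a thin shell grows as r², while each star's apparent brightness falls as 1/r², so every shell contributes the same flux. A minimal numerical sketch (idealized uniform, transparent, static universe) makes the divergence explicit:

```python
import math

def total_flux(n_shells, star_density=1.0, luminosity=1.0, thickness=1.0):
    """Summed flux at the observer from concentric shells of stars.
    A shell at radius r holds ~ density * 4*pi*r**2 * thickness stars,
    each contributing luminosity / (4*pi*r**2), so every shell adds the
    same amount and the total grows without bound as shells are added."""
    flux = 0.0
    for i in range(1, n_shells + 1):
        r = i * thickness
        stars = star_density * 4 * math.pi * r**2 * thickness
        flux += stars * luminosity / (4 * math.pi * r**2)
    return flux

print(total_flux(10))    # 10 shells  -> flux 10
print(total_flux(1000))  # 1000 shells -> flux 1000: brightness diverges with depth
```

The same linear-in-depth accumulation is what the analogy above transfers from stars to hypothetical inter-universe "bridges".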
Predictions by Christianity
Christian theism is a concept according to which God, through His direct intervention, created the Universe, the living organisms and man in a few stages (creative days). Upon completion of His creative act, the Universe was completely ordered, in perfect harmony, and fully accomplished. This view also admits chaos and self-organization of matter, but let us clarify what exactly we have in mind.
Scientists consider as “chaotic” those systems whose behaviour is subject to radical change caused by even negligibly small events, which makes long-term forecasts impossible. The discovery of how to measure the parameters of chaos is often cited as the third biggest achievement of 20th-century physics, along with the theory of relativity and quantum mechanics. Chaos theory possesses a mathematical apparatus describing the behaviour of non-linear dynamical equations that are sensitive to the initial conditions. If the initial data change even by insignificantly small quantities, comparable, for instance, to the reciprocal of Avogadro’s number (of the order of 10⁻²⁴), the final state of the system will turn out to have completely different values. Examples of such chaotic systems are turbulent streams in the atmosphere, the churning flow of water, biological populations, etc. Chaos in this case, however, is deterministic, i.e. it obeys specific laws, though finding them sometimes proves rather complex and time-consuming.
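This sensitivity to initial conditions can be illustrated with a minimal numerical sketch using the logistic map x → 4x(1 − x), a standard textbook example of deterministic chaos. (Double-precision arithmetic cannot represent a perturbation as small as 10⁻²⁴, so a perturbation of 10⁻¹² is used instead; the function name is ours.)

```python
# Two trajectories of the chaotic logistic map x -> 4x(1 - x),
# starting from initial conditions that differ by only 1e-12.

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 100)
b = trajectory(0.2 + 1e-12, 100)

# The tiny initial difference roughly doubles at every step, so after
# a few dozen iterations the two trajectories are completely different,
# even though each step is strictly deterministic.
max_gap = max(abs(x - y) for x, y in zip(a[50:], b[50:]))
print(max_gap)  # typically of order one
```

Each step is a fixed, simple rule, yet no long-term forecast survives an error of one part in a trillion.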
There exists, however, a field of physics called quantum chaos theory, which studies non-deterministic systems behaving according to the laws of quantum mechanics. Heisenberg’s uncertainty principle plays a significant role in this field: according to it, the position and momentum of a particle cannot both be measured simultaneously with arbitrary precision, but are described by a probability wave. Nevertheless, quantum theories are also deterministic in the sense that they provide laws for the change of the wave with time. Let us recall, for instance, that electrons moving around the nucleus form the beautiful atomic orbitals, which suggests that here again a wonderful and perfect order reigns. When we speak about chaos in our world, it should be understood only in a relative sense.
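The two statements in this paragraph correspond to two standard formulas: Heisenberg’s uncertainty relation for position x and momentum p, and the Schrödinger equation, which evolves the probability wave ψ deterministically in time:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} ,
\qquad
i\hbar \, \frac{\partial \psi}{\partial t} = \hat{H}\,\psi .
```

Individual measurement outcomes are probabilistic, but the wave itself obeys a strict law of motion.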
During the 1970s the German theoretical physicist Hermann Haken laid the foundations of a new interdisciplinary science, which he himself named synergetics. Synergetics studies self-organization phenomena, i.e. the mechanisms leading to the spontaneous generation of spatial and/or temporal structures in inanimate as well as in animate nature. A number of scientists think that the world came into being and evolved along an endless chain of such processes – from the formation of atoms, stars and galaxies to biological and social structures.
A truly staged ordering of matter can be observed in the formation of the electron shells of atoms, the beautiful spatial lattices of crystalline bodies, Bénard cells, the self-assembly of viruses, and a number of other phenomena in nature. For example, if we release a stream of electrons close to a “bare” nucleus of some chemical element, part of those electrons will remain around the nucleus and will automatically be distributed among the s, p, d orbitals, etc.
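The automatic distribution of electrons among the s, p, d, … subshells follows, to a good approximation, the empirical Madelung (n + l) rule: subshells fill in order of increasing n + l, and, for equal n + l, in order of increasing n. A minimal sketch (the function name is ours; real atoms such as chromium and copper show a handful of exceptions):

```python
# Madelung (n + l) rule for the filling order of atomic subshells:
# sort by n + l, breaking ties by smaller principal quantum number n.
LETTERS = "spdfg"  # orbital letters for l = 0, 1, 2, 3, 4

def filling_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{LETTERS[l]}" for n, l in subshells]

print(filling_order(4))
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note how 4s precedes 3d: the ordering is staged, not a simple count up through the shells.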
By analogy, some assume that there might exist still undiscovered laws which likewise assist in the structuring of the cosmos. If that were really so, we could establish their existence quite easily. It would be enough to launch spacecraft in arbitrary directions and at arbitrary speeds; if they succeeded every time in becoming satellites of the Sun or of some planet, we could assume that the celestial systems are self-organizing. But experience shows that such an ordering, alas, does not happen. Likewise, even if we mix in a suitable solution all the chemical elements that build up cells, in the necessary quantities and proportions, they will not join together into a living organism. And in the genetic program no possibility has been discovered for a saltatory, ascending transmutation of species, for instance for chickens to hatch from snakes’ eggs.
We do not agree with the position that evolution results from such processes of self-organization, for the following reasons:
a) They are not manifested in all spheres of reality.
b) They always lead to the formation of a certain amount of structures which are characteristic of a given phenomenon.
c) They do not allow a qualitative leap from one level of order to another higher one – for instance, a transition from chemical to biological level is not possible.
Synergetic processes to a great extent presuppose the existence of a conscious Creator, as the ordering relationships in them testify to planning and to the existence of purpose.
In principle, we can define the original creation as a world of harmony and organization and we can make certain scientific predictions regarding its genesis:
1) Upon the emergence of the material space-time continuum, its properties must resemble the components of an ingenious construction set, allowing the “assembly” of all the hierarchically built structures of the Universe and of living organisms. There are no arbitrary properties of matter irrelevant to the organization of the Universe; the proportions between them are extremely delicately balanced and remain fixed, i.e. they do not vary randomly, otherwise the result would be absolute chaos and disintegration.
2) Vacuum is an integral part of the system – it is the ideal medium in which objects and fields are set and in which all interactions take place. In this case it is hardly the vacuum out of which space, time and particles were born. Still, there is some probability that it was so, if we assume that God first created the vacuum and then implanted in it synergetic processes of self-organization, through which everything else came into existence.
As has become clear, in the microcosm the facts can, purely theoretically, be interpreted decisively in favor of Christianity and not of naturalism. Unfortunately, it is still impossible to delineate the predictions of the two models through empirical verification. Things will acquire more distinct contours, however, when we examine the organization of the macrocosm.
References and Notes
Engels, F., Ludwig Feuerbach, International Publishers, New York, 1974, p. 21.
Lenin, V., Materialism and Empirio-Criticism, ch. 5, “Time and Space”.
Lamprecht, Sterling Power, The Metaphysics of Naturalism, Appleton-Century-Crofts, New York, 1960, p. 160.
Vilenkin, A., “Birth of Inflationary Universes”, Physical Review D, 27, 12, 1983, p. 2851.
Vilenkin, A., Shellard, E. P. S., Cosmic Strings and Other Topological Defects, Cambridge, 1994, p. 49.
Isham, C., Quantum Cosmology and the Laws of Nature, Berkeley, 1993, p. 77.
A year earlier (1978) the Russian physicists Gennady Chibisov and Andrei Linde had arrived at the idea of inflation, but upon detailed analysis they realized that it had many problems and decided not to publish their research.
Hawking, S., Mlodinow, L., The Grand Design, Bard Publishing Ltd, Sofia, 2012, p. 214.
Op. cit.: Vaas, R., “Time before Time: Classifications of Universes in Contemporary Cosmology, and How to Avoid the Antinomy of the Beginning and Eternity of the World”, p. 11.
Rovelli, C., “Notes for a Brief History of Quantum Gravity”, arXiv: gr-qc/0006061.
Hawking, S., Mlodinow, L., The Grand Design, Bard Publishing Ltd, Sofia, 2012, p. 143.
Moody, Paul A., Introduction to Evolution, Harper and Row, New York, 1962, p. 503 (Synthetische Artbildung, 1953).
Valentine, James W., Erwin, Douglas H., “Interpreting Great Developmental Experiments: The Fossil Record”, in Development as an Evolutionary Process, Alan R. Liss, Inc., 1987, p. 96.
Butterfield, J., Isham, C., Spacetime and the Philosophical Challenge of Quantum Gravity.
Milne, Bruce, Know the Truth: A Handbook of Christian Belief, New Man, Sofia, 1996, p. 80.
“Stephen Hawking: Higgs boson may destroy the universe”.
Cobb, J. B. Jr., Griffin, D. R., Process Theology: An Introductory Exposition, Westminster, Philadelphia, 1976, p. 65.
Heidegger, M., Being and Time, Harper and Row, New York, 1962.
Hartree, Douglas, The Calculation of Atomic Structures, IIL, Moscow, 1960.
Born, M., My Life and My Views, Charles Scribner’s Sons, New York, 1968, p. 88.
Dodds, M., “Top Down, Bottom Up or Inside Out? Retrieving Aristotelian Causality in Contemporary Science” – accessed at: (as of 20.12.2002).
“Scientists and their Gods”.
“50 Nobel Laureates and other great scientists on their faith in God”.
Dawkins, Richard, The God Delusion, East-West Publishing, Sofia, 2008, p. 73.
“50 Nobel Laureates and other great scientists on their faith in God”.
Hawking, S., Mlodinow, L., The Grand Design, Bard Publishing Ltd, Sofia, pp. 167–173.
Some holy fathers and church theoreticians suppose that “the earth was formless and void…” (Gen. 1:2) is to be understood in the sense of “amorphous pre-matter”, in which things existed in a state of possibility and from which all material objects in the Universe eventually came into being.
See “Cosmology agrees with the first verse from the Book of Genesis”.
This position may be supported by the words of the Apostle Paul: “By faith we understand that the worlds have been framed by the word of God, so that what is seen hath not been made out of things which appear” (Hebrews 11:3). We think, however, that, judging from the context of the rest of the first chapter of the Book of Genesis, such an interpretation is unlikely to be correct.
Philosophia 11/2016, pp. 117-150.