Richard Taye Oyelakin
Department of Philosophy, Obafemi Awolowo University,
email@example.com , +2347033712734
Abstract. Putnam’s labelling of the human computer as “probabilistic” and the digital computer as “deterministic” has hitherto passed as given. It is, however, well worth critical re-examination, and that is what this paper undertakes. It is observed that since both human and digital computers are strict rule followers, nothing precludes describing both of them as deterministic. But, more importantly, how could Putnam have arrived at the “probabilistic” label? Either the human-machine is self-programmed, or it gathered the information from a higher machine. Neither option proves plausible. (1) The human computer does not possess a complete understanding of itself that would enable it to confirm this; metaphysical determinism also fails to rescue the label. (2) The possibility of a causal chain of programming nullifies the plausibility of an absolute higher programmed machine providing this information. The paper therefore argues that Putnam’s labelling of the human-machine as “probabilistic” and the digital machine as “deterministic” lacks sufficient grounds. Hence, it is an arbitrary stipulation.
Key Words: Deterministic, Probabilistic, Automaton, Human-computer, Nature Computer, God Computer
Putnam established an analogy between the individuation conditions of mental states and those of Turing machine states. He explained that the states of Turing machines are individuated in terms of the way they affect and are affected by other Turing machine states, stimulus inputs, and motor outputs. By the same process, he thought, mental states are individuated by the way they affect and are affected by other mental states, stimuli, and behaviour. Levine asserts that machine functionalism is the position that mentality depends on programming while physiology depends on physical structure. For him,
In terms of the computer metaphor, which is behind many functionalist views, our mentality is a matter of the way we are “programmed,” our “software,” whereas our physiology is a matter of our “hardware.”
This is to say that the nature of mental states is nothing over and above the way they are programmed. This might mean that a sufficient account of the nature of mental states is contained either in the program or in the nature of the programmer. Of course, Levine may not wish to disagree that if the nature of mental states is the way they are programmed, then an adequate account of the nature of mental states may not be complete without an understanding of the nature of the program and/or programmer. For instance, Aristotelian logic differs in some material content from Hegelian logic, and this difference consists in the difference in the thought patterns of each scholar.
Computing mechanisms manipulate complex strings of digits. The structures and processes in question are complex in the sense that, in the interesting cases, there are rules and instructions describing the structure of the inputs and outputs. It follows that the properties which make up the description of a machine state, such as the Machine Table, the Description of a State, the Algorithm, etc., are also thought to be necessarily and sufficiently applicable to the description and study of mental states. Putnam, however, concluded that mental states are (probabilistic) Turing machine states.
Putnam’s Computational Functionalism and the Adoption of Turing Machine
To describe the nature of the mental state, Putnam adopted the description of a Turing machine. A Turing machine is a discrete automaton which, given appropriate ‘sensory inputs’ and some rules or instructions, eventually produces ‘motor outputs.’ These rules serve as the operational manual of the machine; they are sometimes referred to as the ‘Algorithm’, Internal Instructions, functional operations, the program, or the Machine Table. For Putnam:
I shall assume the notion of a probabilistic automaton has been generalized to allow for “sensory inputs” and “motor outputs” – that is, the Machine table specifies, for every possible combination of a ‘state’ and a complete set of ‘sensory inputs’, an ‘instruction’ which determines the probability of the next ‘state’, and also the probabilities of the ‘motor outputs’. (This replaces the idea of the machine as printing on a tape.) I shall also assume that the physical realization of the sense organs responsible for the various inputs, and of the motor organs, is specified, but that the ‘states’ and the ‘inputs’ themselves are, as usual, specified only ‘implicitly’ – i.e. by the set of transition probabilities given by the Machine Table.
What is important in this excerpt is the understanding of the machine table. What does the Machine Table have to do with the entire nature of the machine system? Apart from the machine table, there are some other notions involved in this hypothesis.
There are some significant notions which are necessary to the functioning of the Turing machine; without these notions the machine would not be what it is. Some of these notions are: the Machine Table, the Description of a State, and the Total State of the Machine. It is important to clarify the importance of some of these notions to the overall function of the machine.
The Machine Table
The machine table is the rule of operation for the machine; it specifies the operation of the system. The machine table is the program upon which the machine runs and performs all its tasks. It specifies two strands of causal relationship: the first is between the appropriate input, the machine state, and the behavioural output; the second is between the input, the initial machine state, and the subsequent state of the machine. Putnam specifies that instead of the machine printing the output on a tape, for every combination of a state and the given inputs, the machine table specifies the next state as well as the output. This means that given an appropriate input and the instructional rules of the machine, the next state of the machine, as well as the next action the machine is likely to perform, is specified. Precisely, the machine table is the algorithm which specifies the state-functional description of the machine. For instance, the rule may be: if you read 0, erase what you read, print 1, then move to the next state.
There are various versions of a machine table. A machine table may be stated using letters, as in Hein:
We’ll agree to let the letters L, S, and R mean “move left one cell,” “stay at the current cell,” and “move right one cell,” respectively. For example, suppose we have the following instruction: (I, a, b, L, j). The instruction is interpreted as follows: If the current state of the machine is I, and if the symbol in the current tape cell is a, then write b into the current tape cell, move left one cell and go to state j.
This is an example of a machine table. The machine is programmed using letters: it is programmed to identify the initial state, the current input, and the expected output, and the next state of the machine is also stated. This description satisfies the two strands of relationship in the description of a machine state. The programming language or pattern does not matter; all that matters is whether the instructions are well written and the machine is appropriately programmed to carry out specific tasks. There is also Block’s structure of the machine table.
For Putnam, the machine table is a necessary description of any Turing machine. The machine table is what instructs the machine on what to do when a particular input is received. The machine is constrained by this machine table. For instance, if the instruction says, “if you read or scan 1 as input, print 11, proceed to scan the next square to your left, then shift to state B”, the machine is constrained by this instruction and cannot perform otherwise.
These instructions are read as follows: ‘s5L A’ means ‘print the symbol s5 on the square you are now scanning (after erasing whatever symbol it now contains), and proceed to scan the square immediately to the left of the one you have just been scanning; also, shift into state A.’
This is an example of an instruction contained in the machine table. The machine described to implement this instruction cannot do otherwise; this is the sense in which such a machine is referred to as a deterministic automaton. The machine table also performs other functions apart from instructing the machine on what to do.
The ‘machine table’ describes a machine if the machine has internal states corresponding to the columns of the table, and if it ‘obeys’ the instruction in the table in the following sense: when it is scanning a square on which a symbol s1 appears and it is in, say, state B, that it carries out the ‘instruction’ in the appropriate row and column of the table (in this case, column B and row s1). Any machine that is described by a machine table of the sort just exemplified is a Turing machine.
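The quintuple format discussed above can be given a minimal executable sketch. The function, table, and symbols below are illustrative inventions (they are not from Putnam, Hein, or Block); the point is only that a machine table in Hein’s (I, a, b, M, j) style fully determines what the machine does:

```python
# A minimal sketch of a deterministic Turing machine driven by a machine
# table of quintuples in Hein's (I, a, b, M, j) format: in state I reading
# symbol a, write b, move M ('L', 'S', or 'R'), and go to state j.
# The table, states, and symbols below are illustrative, not from the paper.

def run_turing_machine(table, tape, state, halt_states, max_steps=1000):
    """Execute the machine table until a halt state (or a step limit)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state in halt_states:
            break
        symbol = cells.get(pos, '0')           # unwritten cells read as '0'
        write, move, state = table[(state, symbol)]
        cells[pos] = write                     # overwrite the scanned cell
        pos += {'L': -1, 'S': 0, 'R': 1}[move]
    return ''.join(cells[i] for i in sorted(cells))

# Example table: starting in state 'A', replace every symbol with '1' while
# moving right, and halt upon reading the end marker '#'.
table = {
    ('A', '0'): ('1', 'R', 'A'),
    ('A', '1'): ('1', 'R', 'A'),
    ('A', '#'): ('#', 'S', 'HALT'),
}

print(run_turing_machine(table, '0010#', 'A', {'HALT'}))  # -> 1111#
```

The machine is “constrained by this instruction and cannot perform otherwise”: every (state, symbol) pair maps to exactly one action, which is what makes this automaton deterministic in the paper’s sense.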
Logical State versus Physical State
In the functionalist project, the idea of a logical state refers to the class of all true statements used to describe an abstract causal relationship between the stimulus input, the initial state, the behavioural output, and the subsequent state. This class of statements is true of any relationship to be classified as functional. From the relationship among these true statements, some rules might be generated; the machine table or program is structured using the class of such true statements. These statements must stand in an appropriate relationship to constitute a logical state. For instance, the necessity in the relationship between the statements “it rains” and “the ground is wet” makes it possible to generate a conditional rule: “if it rains, then the ground is wet”. This might be symbolically represented as “if P then Q”. The logical rule “if P then Q” is not a description of some particular physical structure. Rather, it is an abstract description of a relationship which, if true, is true in all cases. It is then possible for this abstract logical state to be instantiated by individual experimental cases.
Machine states and mental states are described as logical states; they are abstractly described with these true statements. It is then possible for a particular functional software or program to have different physical realizations or implementations. Searle’s description of clocks as a mechanism which enables us to tell the time is a good example of an abstract/logical state. This definition is an abstract one, and one of the features of a logical or abstract definition is that it can be multiply realized by different physical structures. The notion of “multiple realizability” is strongly tied to the abstractness or logicality of a machine or mental state. Searle describes “multiple realizability” when he says of clocks that “A clock can be made out of gears and wheel, it can be made out of an hourglass with sand in it, it can be made of quartz oscillators, it can even be made out of any number of physical materials.” In this sense, gears and wheels, an hourglass with sand, and quartz oscillators are examples of different physical substrates or hardware which may implement a particular abstract/logical state. This means that the “mechanism which enables us to tell the time” is an abstract description which could be realized by any of these different physical substrates. The mechanism itself is an abstract functional system, not identical to any of its physical realizations, and its functional description is independent of any particular physical substrate.
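Searle’s clock example can be sketched in code. The classes, names, and numbers below are illustrative inventions for the purpose of the sketch: one abstract functional description (“a mechanism that tells the time”) is realized by two different “hardware” implementations:

```python
# A sketch of "multiple realizability": one abstract/logical description
# (a mechanism that tells the time) realized by different physical substrates.
# The classes and parameters below are illustrative, not from the paper.
from abc import ABC, abstractmethod

class Clock(ABC):
    """Abstract functional description: any mechanism that tells the time."""
    @abstractmethod
    def tell_time(self) -> str: ...

class QuartzClock(Clock):
    """Realization 1: counts quartz-oscillator ticks (one tick per second)."""
    def __init__(self, ticks):
        self.ticks = ticks
    def tell_time(self):
        return f"{self.ticks // 3600:02d}:{(self.ticks % 3600) // 60:02d}"

class Hourglass(Clock):
    """Realization 2: measures fallen sand grains against a known flow rate."""
    def __init__(self, grains_fallen, grains_per_hour=3600):
        self.hours = grains_fallen / grains_per_hour
    def tell_time(self):
        h = int(self.hours)
        m = int((self.hours - h) * 60)
        return f"{h:02d}:{m:02d}"

# Two distinct physical substrates realize the same functional state:
print(QuartzClock(5400).tell_time())   # -> 01:30
print(Hourglass(5400).tell_time())     # -> 01:30
```

The abstract class `Clock` plays the role of the logical state: it is not identical to either implementation, and its description is independent of any particular substrate.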
Deterministic versus Probabilistic Automaton
Putnam states his hypothesis in four theses: (1) All organisms capable of feeling pain are Probabilistic Automata. (2) Every organism capable of feeling pain possesses at least one Description of a certain kind (i.e. being capable of feeling pain is possessing an appropriate kind of Functional Organization). (3) No organism capable of feeling pain possesses a decomposition into parts which separately possess Description of the kind referred to in (2). (4) For every Description of the kind referred to in (2), there exists a subset of the sensory inputs such that an organism with that description is in pain when and only when some of its sensory inputs are in that subset.
Basically, determinism may be represented from Nagel’s point of view: for any state that the system is in at any given time, “the necessary and sufficient condition for the occurrence of that state at that time is that the system was in a certain state at a certain previous time.” From a strict point of view, determinism holds that “behaviour is completely caused by a combination of genetics, past experiences, and current circumstances.” For Fraley, scientific determinism “is the belief that whatever happens has physically determinate causes and is the predictable result of these causes,” whereas scientific probability is “a science of ignorance management that yields nonspecific predictions for dependent variables that are partly dependent on unmeasured and often undetected antecedents.” An important note from all these views is that the machine table serves as the antecedent and determinate cause of the functional operation of both automata. Both deterministic and probabilistic automata are described by a machine table.
These machines must have internal states, and these internal states must be in a suitable condition to carry out the instructions appropriately. A particular system is deterministic if there is strict adherence to the postulates specified in the machine table. In other words, in the case of the deterministic automaton, the transition probabilities relating the machine table to the functional organization are fixed at 1 (or 0). The stipulated rule concerning the relationship among the input, the initial state, subsequent states, and the behavioural output must hold without any possibility of alteration. If it is programmed that, given 2 as an input, the machine should utter “ouch” as output and then remain in the current state, the machine cannot function otherwise. This is a strictly rule-obeying machine, and any machine in this kind of condition is a deterministic automaton. To make the difference between deterministic and probabilistic automata clear, Putnam argues that:
The notion of a Probabilistic Automaton is defined similarly to a Turing Machine, except that the transitions between “states” are allowed to be with various probabilities rather than being “deterministic”. (Of course, a Turing Machine is simply a special kind of Probabilistic Automaton, one with transition probabilities 0, 1)
In the case of the probabilistic automaton, the relationship between the machine table and the functional organization allows for some alteration. While the transition probabilities are 0 or 1 in the case of the deterministic automaton, the probabilistic automaton allows transition probabilities anywhere between 0 and 1. This is with a view to allowing for complexities in the nature of some functional organizations, such as the human-machine.
In the probabilistic automaton, it is programmed in the machine table that, given an appropriate set of stimulus inputs and the initial state, a particular behavioural output is a matter of probability, just as a particular subsequent state is also a matter of probability. On the one hand, this may mean that though a probabilistic automaton is a rule-obeying machine, it does not follow the rule so strictly; in other words, the causal relationship between the input, the output, and the subsequent state, as stipulated in the machine table, is not one of necessity.
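The contrast drawn above can be sketched directly. In the sketch below the state names, stimuli, and probabilities are illustrative inventions; the one point taken from Putnam is that a deterministic machine is just the special case of a probabilistic automaton whose transition probabilities are all 0 or 1:

```python
# A sketch of Putnam's contrast: the machine table of a probabilistic
# automaton assigns probabilities to (next state, output) pairs; a
# deterministic automaton is the special case where every probability
# is 0 or 1. States, stimuli, and numbers below are illustrative.
import random

# table[(state, stimulus)] -> list of (probability, next_state, output)
probabilistic_table = {
    ('S1', 'pinprick'): [(0.8, 'S2', 'ouch'), (0.2, 'S1', 'wince')],
}
deterministic_table = {
    ('S1', 'pinprick'): [(1.0, 'S2', 'ouch')],   # probabilities only 0/1
}

def step(table, state, stimulus, rng=random):
    """Pick the next state and output according to the table's probabilities."""
    r, cumulative = rng.random(), 0.0
    for p, next_state, output in table[(state, stimulus)]:
        cumulative += p
        if r < cumulative:
            return next_state, output
    return next_state, output    # guard against floating-point rounding

# The deterministic table always yields the same transition:
print(step(deterministic_table, 'S1', 'pinprick'))  # -> ('S2', 'ouch')

# The probabilistic table yields 'ouch' about 80% of the time:
print(step(probabilistic_table, 'S1', 'pinprick'))
```

Note that both machines are driven by the same kind of table and the same `step` function; the only difference is the numbers in the table, which is precisely the point pressed below that the “probabilistic” behaviour is itself fixed by the machine table.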
I shall assume the notion of a probabilistic automaton has been generalized to allow for “sensory inputs” and “motor outputs” – that is, the Machine table specifies, for every possible combination of a ‘state’ and a complete set of ‘sensory inputs’, an ‘instruction’ which determines the probability of the next ‘state’, and also the probabilities of the ‘motor outputs’.
On the other hand, however, it might also be argued that both deterministic and probabilistic automata are actually deterministic in the nature of their relationships. Consider: “The notion of a Probabilistic Automaton is defined similarly to a Turing Machine, except that the transitions between “states” are allowed to be with various probabilities rather than being “deterministic”.” Being “defined similarly to a Turing machine” suggests a deterministic machine, and the range of transition probabilities which defines it as probabilistic is also part of its deterministic nature. It appears trivially true that a digital machine is determined to be deterministic in nature while a probabilistic automaton is determined to be probabilistic. It follows that both are deterministic in the sense that both are strictly determined by the specifications of their respective machine tables. To probe the issue further, I think it needs to be explained how one machine table is deterministic and the other probabilistic. The issue is then purely about the nature of machine tables, and it may become necessary to question the rationale behind the nature of any particular machine table.
Besides, if it is true, as Puddefoot noted, that “Created life is not God, and therefore (at least as far as God is concerned), the life we enjoy is ‘artificial’, less perfect than the divine life, just as computer life is (or so we suppose) less perfect than our own,” then it may follow consistently that the human-machine is also a deterministic automaton. “The deist god of the Enlightenment …created the universe with completely deterministic natural laws …” This is upon the assumption that man was artificially created for some specific purposes. These purposes explain the nature of the program which the human-machine was designed to implement. This is because “…a computer does nothing unless it is programmed. But what it does depends on how it is programmed.” This appears plausible in the sense that, much as man wills to overcome the observed limitations and deficiencies in his powers and cognitive ability, it appears impossible to edit the program designed for man to implement. Of course, this does not mean that God could not have created man as a probabilistic automaton. Even then, since man was created for some specific purposes, there must be a set of programs to ensure that those purposes are fulfilled. The other option is to argue that man was not created, or was not created for any specific purpose. This, again, raises a metaphysical question, and human nature appears to betray this option. Man coordinates his life with purposes; purposive agency is part of the description of being alive. This is aptly and strongly demonstrated in the nature of the computer machines created by man: each is created for some specific purposes. The purpose a computer is to serve determines the program to be structured for it, and the program necessarily sets the limits of the machine. If nothing says that humans were created to serve some purpose, then nothing is against describing the human-machine as strictly deterministic.
Does Deterministic Machine Imply Probabilistic Machine?
What the deterministic and the probabilistic automaton eventually do is strictly determined by the specifications of the machine table. It is the instruction of the machine table, in the case of the deterministic automaton as well as the probabilistic automaton, which the functional system implements. Furthermore, it is noted that no machine can implement any instruction which is not contained in its program. This may further suggest, and perhaps strictly so, that it is what is programmed that the machine implements. Each program is then a program to be implemented by a different physical machine. Therefore, calling one probabilistic and the other deterministic may just be a convenient stipulative description. What determines what is implemented is the program, otherwise known as the machine table. We know that neither of the machines has any power to implement anything outside of and beyond its program; both are strictly bound by the content and specifications of the program. Be that as it may, it then suggests that each of them is a deterministic automaton, that is, a machine whose functional operations are strictly determined by the specifications of its program. The rest is just convenient labelling. It is not clear what could suggest such a differentiation in the face of this seeming evidence. It may be no more than an intention to show that the human-machine is different in degree from a digital machine. But structurally, as has been argued, this is nowhere supported.
For such support, the only available explanation lies in the nature of the machine tables. It may need to be explained how one machine table is probabilistic and the other deterministic, and it may further need to be shown why one machine table has to be probabilistic. Put more directly, why is the digital automaton programmed as deterministic whereas the human-machine is probabilistic? A noted point to this extent is that, by and large, digital machines are creations of human intelligence or cognition. (1) But why did man create a deterministic automaton instead of, perhaps, duplicating himself? (2) Is it possible for man to create a property which is not part of his own nature? (3) Can a phenomenon create another phenomenon with properties outright independent of its own properties? There are two possible answers to the first question.
First, it may be that the human-machine created the deterministic automaton because man’s cognition (program) is itself fixed. This does not imply a presumption of an understanding of the source and entire nature of the human-machine. Being “fixed” is used to mean being naturally finite. By being fixed, then, I mean that human intelligence can only create from the content of human cognition, not outside of it, and this content is limited. In other words, and addressing the second question, it appears impossible for human intelligence to create anything outside of its own characteristic properties. The human-machine is supposed to be a natural machine, and in as much as the properties of the human-machine are properties of a natural machine, it is naturally necessary that these composing properties are fixed or finite. If they are fixed, then there is a sense in which to assume they are determined.
For instance, water on earth is fixed as H2O not by itself but either by nature or by some otherworldly power. In a way, water is determined to be H2O on earth and not XYZ; otherwise it could have been possible to edit it at will. Though it is probable in the sense that it may change, such change is obviously not within its own power or man’s. However, nothing limited could have determined itself except by something higher in degree than it. This may explain the point that the human-machine assumes a higher degree, which enables it to create and program the digital computer, an inexorably rule-following deterministic automaton, though it is being argued that the human-machine fares no better. The point which is apparent, if this line of reasoning is valid, is that the probabilistic automaton (human-machine) also consists of deterministic properties; without these, the human automaton would not have been able to program or create a deterministic automaton. By the same token, it follows that the deterministic automaton also consists of probabilistic properties, the properties of its creator, just as a fetus is a combination of male and female properties.
Does Probabilistic Automaton Possess Deterministic Properties?
The pertinent question is: how is it possible for a probabilistic automaton to program a deterministic automaton? This question becomes necessary on the assumptions that the human computer is a probabilistic automaton and that there has never been a self-programmed deterministic automaton (digital computer); they have all been programmed by the human computer. The problem identified here is more pronounced because it tends to portend some inconsistency in Putnam’s hypothesis.
For instance, (1) if the human computer is a probabilistic automaton, it may mean that it does not possess the properties of a deterministic automaton. This may not be viewed as merely a difference in degree but as a difference in kind. This may occur when phenomenon A possesses certain properties which make it somewhat (radically) different from phenomenon B in some respects, even though both still belong to some generic category. For example, whereas bats and man both belong to the category of mammals and are similar in some notable properties, they are radically different in others, such that it may be quite impossible for man to program bats. Now, (2) if, as is supposed, the human automaton does not possess the properties of a deterministic automaton, then any possibility of transferring that property to another automaton by way of programming is precluded. This, apparently, leads to the conclusion that it is impossible for the human automaton to program a deterministic digital automaton.
Some possibilities may be identified. First, it is probable that the human probabilistic automaton is only able to program another probabilistic automaton, that is, to replicate its own kind in another automaton. This may mean that the human automaton may only be able to program other probabilistic automata and no others. But, for Putnam, the human automaton is able to program deterministic automata; that is, the human computer is able to program digital computers. How is this to be understood from the purview of Putnam’s hypothesis? If the human automaton is able to program the digital automaton, two possibilities arise. First, it may imply, as earlier argued, that the human automaton is deterministic after all; this appears more likely to be the case. Second, it may also mean that being a probabilistic automaton includes the possession of some inherent deterministic properties which make it possible for the human automaton to program or create a deterministic automaton.
Granted that the human-machine created and programmed the digital automaton, it is important to raise a question about the nature of the human automaton, with a view to arguing that it may be difficult to assert that the human automaton is actually probabilistic whereas the digital automaton is deterministic. To pursue this, there is a question which must be answered: how does it come to be known that the nature of the human computer is such that the relationship between the machine table and the functional system ranges over probabilities from 0 to 1? This is, obviously, not known by any means other than human ingenuity, resulting from a close observation of the nature and functioning of the human computer compared to its own creation, the digital computer. But have we bothered to double-check this assumption, since it is not impossible that we are incorrect? In fact, what actually suggests to us that we are correct? To be correct, we must be prepared to prove that there are some automata which are self-programmed. But how can we go about proving this? We know that the digital automaton is a creation of the ingenuity of the human-machine, hence it is not self-programmed. But then, is the human-machine a self-propelled machine?
Man appears to be in charge of the world. This goes with the pre-Socratic idea that man occupies the center of the world, evident in Protagoras’ popular dictum: “Man is the measure of all things, of the things that are, that they are, and of the things that are not, that they are not.” An energizing belief, as it were, but is it true that the human-machine is self-created and/or self-programmed? This question may not be safely addressed unless the question of the origin of the human-machine is addressed first. We may therefore reframe our question to properly reflect our intention: what is the source of the human-machine?
The Question of the Source of the Human-machine
The question of the source of the human-machine is, of course, a perennial metaphysical question, and there have been several attempts to address it. A scientific alternative answer came to counteract the overarching religious provisions for this question. Religion, in its different variants, simply traced man to a supreme being, variously labelled according to the religious belief in question. Religious attempts rather left more questions unanswered; for instance, if God created man and the world, then who created God? Because the religious answer failed to satisfy the curiosity of human intellectualism, a more sufficiently objective solution was sought through observation of the workings of nature.
Both the evolutionary and big bang theories, as accounts of the origin of the universe, fare no better. In the language of machine and programming, every event is computationally and functionally describable, meaning that no sudden big bang or evolution could occur without antecedent programming; evolution and the big bang, if correct, are programs being implemented. Every explosion, thunder and lightning, rainfall, wind, pressure, change in weather, and in fact nature at large, are all implementations of some abstract computation. This obviously, and correctly, construes Nature as a machine.
The big bang and evolution theories have therefore provided two answers to our question. First, the human-machine, as it were, is a product of Nature. But, obviously, this may not make any meaningful sense unless Nature itself is conjectured to be a programmed machine; otherwise, it will be unclear how the human-machine could come forth from a non-machine source. Is there any reason to doubt this? The accuracy with which natural phenomena unfold leaves not much doubt. It then means that if the human-machine is a creation of the Nature machine, then one question is settled: the human-machine is not self-programmed. Religion diversely holds that man is a creation of some supernatural powers. Again, howsoever it is viewed, there is no doubting the fact, if it is a fact, that whatever programmed the human-machine must itself be a programmable or programmed machine. This obviously connects with the view that for supernatural powers, whatever nomenclature they are given, to qualify as creators of the human-machine, they must all be programmed machines. However, since no machine is self-programmed, it follows that these must also have some higher machines which programmed them. This may be the case for both the religious and scientific approaches, and the chain may continue. Let us discontinue this pursuit to address our main question.
Putnam’s Probabilistic Label
The question is: what is Putnam’s motivation for describing the human computer as a probabilistic automaton whereas the digital computer is a deterministic automaton? In other words, what is the urge behind Putnam’s probabilistic label? One way to show that the human-machine is probabilistic is if it is true that there is X, a self-programmed computer, and the human computer is an example of X. This is upon the assumption that it is only if the human computer is a self-programmed machine that man may acquire complete insight into its nature, from which it could be understood that human nature is probabilistic. That such complete understanding may be elusive is based on the following grounds.
First, there is not yet a self-programmed computer machine, “self-programmed” in the sense of complete understanding and independent existence of a machine without any external input. The belief that an external creator of humans is unnecessary has raged since the earliest philosophers, but the issue became clearer with the advent of digital computers. Digital computers are products of human ingenuity, and largely they serve human purposes. Up to this moment, no digital machine is self-created or self-programmed; at present there are only self-improving computers. This is because “…once it is self-aware, it will go to great lengths to fulfil whatever goals it’s programmed to fulfil.” But self-improving is not the same as self-creating, and digital computers still largely depend upon human computers for their existence. Do human computers fare better? Clearly, this may not have proven the point that human computers are not self-programmed, as it is not impossible that humans possess some higher properties of differentiation from digital machines. Second, it may be true that human computers are complex entities yet to be fully understood. For Patricia Churchland and Searle, the complete nature of the functioning of the human brain is not yet understood. For Churchland, “If we can figure out how the brains do it, we might figure out how to get a computer to mimic how brains do it.” For Searle:
The short answer to that question is that we just do not know at present. Since we do not know how the brain does it, we do not know what sorts of chemical devices are necessary for its production.
This is also the view implied by Kurzweil as quoted by Barrat: “Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions.” Puddefoot’s view is “…that we understand so little of the workings of the brain that it would be premature to say at this stage in the history of computers and neuro-science that anything is or is not possible.” This means that a complete understanding of the workings of the human brain is needed before we can ascertain whether or not the human machine is self-programmed. Of what importance, then, is the Blue Brain Project of “reconstructing the brain piece by piece”? There is a difference between a mechanical reconstruction and the biological brain. To successfully construct a brain requires a full understanding of what it does and how it does it. Anderson’s analogy of sand dunes makes a great point in this context.
But there is a question yet to be settled: the complexity of the human machine is to be understood by whom? If we mean understood by man, then we would have affirmed some assumptions. (1) That the human machine is self-created; otherwise there would be no reason to suppose that man can nurse an ambition to acquire a complete understanding of something whose origin he does not know. (2) That man has not fully understood the complexities of the human machine because humans did not self-create; one only possesses a full understanding of what one creates. On the other hand, (3) the human machine fully understands the complexities of the digital machine because it created it. It may also presuppose, though debatably, (4) that the digital machine cannot fully understand its own complexities because it does not self-program or self-create. However, nothing in these assumptions forecloses the possibility of such understanding. But the fact that there is yet no complete understanding of the nature of the human brain precludes the assertion that the human machine is self-programmed.
However, the ignorance arising from the fact that we do not yet have a complete understanding of how the human brain works does not actually tilt the balance towards confirming the point. If it did, then anthropocentrism would have been sufficiently justified and nature would have been entirely dependent upon the human machine. But humanity’s lack of power and control over most natural occurrences affirms that man is not really or fully in charge; indeed, it appears truer that humans are the more dependent on nature. Third, and equally important, is the problem of what may actually constitute an adequate understanding. Again, talk of “complete understanding” rests on the assumption that man is self-created. Otherwise, the belief that man does not self-create precludes any talk of complete understanding: it is impossible to acquire a complete understanding of what one did not create. This is evidenced by the importance of the manufacturer’s manual (MM) for complex products.
The second way to acquire the understanding of the human machine as probabilistic is for the human computer to enquire of the higher machine which supposedly created it. This is apparently impossible with the Nature machine: how and where can the human machine engage the Nature machine with such a question? On the religious view, however, it is believed that the supernatural might be approached and questioned, and an understanding that the human machine is a probabilistic automaton might be acquired from it. This, however, appears impossible as well. Suppose, in a bid to find out about its nature, the human computer approaches the God machine and by whatever means asks, “What kind of machine did you program me to be?” If the God machine can or will truly answer, theistically we expect it to say, “I programmed you as a probabilistic automaton.” From that, we may suppose that the knowledge that the human computer is a probabilistic automaton is acquired from the God machine. However, this is questionable for the following reasons.
First, the medium of the supposed communication is unclear. Though it is assumed that God must equally be a computational machine, the human machine is natural while, naively, the God machine is supposed not to be. How a natural machine and a non-natural machine can communicate is a main problem. Critically speaking, there is a further question to be dealt with: what sort of thing could a non-natural machine be? The term “machine” carries with it a sense of naturalness, so the notion of a non-natural machine does not sit well. It appears more of a contradiction, like saying that something X which is natural is at the same time not natural. The only consistent argument this yields is to assume that since (1) all machines are natural, and (2) God is a machine, then (3) God is natural. Pantheism is the only position which may accommodate this claim.
But is pantheism a dominantly sustainable view on the existence of God? What about theism or deism? It appears that there is no better position than this at present; further agonising over the possibility of a non-natural machine will surely slip into argumentum ad ignorantiam. It may be summed up that should there be an external force which created the human machine, it must also consist of computational properties and hence be a programmed machine. Second, even if it were possible to acquire such understanding from the non-natural machine, the understanding would be acquired or imposed, not reached by self-conviction. One of the reasons some people change their names is a lack of conviction in the reasons their parents gave for the christening; the grouse usually is, why use your own experience to impose a name on me?
But, again, why do we suppose the God machine will simply answer, “I programmed you as a probabilistic automaton”? Such a supposition can only proceed from a naive belief about the existence of a supernatural deity. If we are consistent about our construal, God is understood as a programmed computer machine, and it is agreed that no computer is yet self-programmed; we may then understand how such a God machine is likely to answer. It is most likely going to say, “I was also programmed to program you as a probabilistic automaton.” Even if the Nature machine were able to answer us, given our premises this is the consistent answer. It appears, then, that we would embark on an endless pursuit of a chain of higher-order machines. What promise of success do we have from this? Now, what clue does this suggest about the description of the human computer as probabilistic? The human computer could not have known that it is probabilistic, since it does not possess complete knowledge of itself; approaching the supposed higher machines proves abortive and only leads to a chain of supernatural machines. To our question, then: how did it happen that Putnam labels the human computer probabilistic and the digital computer deterministic? At this point we appear hopeless and can only conclude that the label is deficient of sufficient grounds. At best, we may claim that we do not know what those automata are.
Putnam’s labelling of the human computer as probabilistic and the digital computer as deterministic has engaged some critical questions and analyses. It is observed that since both human and digital computers are strict rule-followers, nothing precludes describing both of them as deterministic. But how could Putnam, being a human machine, have come about the label? Either it is self-programmed, or it gathered the information from the higher machines. Neither of these was proved possible. The human computer does not possess a complete understanding of itself to enable it to know this. The next port of call is metaphysical determinism. However, the possibility of a causal chain of programming precludes the possibility of any higher programmed machine providing this information. It then follows that Putnam’s label of the human machine as probabilistic and the digital machine as deterministic is deficient of sufficient grounds and hence can be regarded as an arbitrary stipulation.
Anderson, B. (2014). Computational Neuroscience and Cognitive Modelling: A Student’s Introduction to Methods and Procedures. Los Angeles: Sage Publications Ltd.
Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: St Martin’s Press.
Bechtel, W. and Mundale, J. (1999). Multiple Realisability Revisited: Linking Cognitive and Neural States. Philosophy of Science, Vol. 66, No. 2, 175-207.
Boden M. A. (1990) (ed) The Philosophy of Artificial Intelligence, Oxford: Oxford University Press.
Block N. (1980). “Functionalism” In Ned Block (ed) Readings in the Philosophy of Psychology. Cambridge: MIT.
Churchland, P. (1993). The Co-evolutionary Research Ideology. In Alvin Goldman (ed.), Readings in Philosophy and Cognitive Science. Cambridge, Mass: MIT, pp 745-767.
Fraley, L. E. (1994). Uncertainty about Determinism: A Critical Review of Challenges to the Determinism of Modern Science. Behavior and Philosophy. Vol. 22, No 2, 71-83.
Gregg, John (2003). Functionalism: Can’t we just say that consciousness depends on the higher-level organization of a given system? http://www.jrgz.net/mind/functionalism.html. Accessed on 10 October, 2013 at 8:00 am.
Hein J. L. (1995). Discrete Structures, Logic, and Computability. Boston: Jones and Bartlett Publishers.
Levine J. (2002). “Materialism and Qualia: The Explanatory Gap.” In Chalmers D. J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press.
Nagel, E. (1960). Determinism in History, Philosophy and Phenomenological Research, vol XX No 3, 296-317.
Nielsen, K. (1967). Is to Abandon Determinism to Withdraw from the Enterprise of Science? Philosophy and Phenomenological Research, vol 28, No 1, 117-121.
Ogletree, S. M. and Oberle, C. D. (2008). The Nature, Common Usage, and Implications of Free Will and Determinism. Behavior and Philosophy, vol. 36, 97-111.
Oyelakin, R. T. (2019). “Why Did the Machine Think: A Functional Theistic Interpretation from Computational Functionalism” Philosophy & Theology 31, 1 & 2, 79-95.
Piccinini G. (2010). The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism, Philosophy and Phenomenological Research, Volume 81, Issue 2, 269-311.
Puddefoot, J. (1996). God and the Machine: Computers, Artificial Intelligence and the Human Soul. London: SPCK; Holy Trinity Church
Putnam H. (2002). “The Nature of Mental States”. In Chalmers D. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press
Searle J. (1990) “Minds, Brains, and Programs”. In Boden M. A. (ed) The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.
Searle, J. (2008). Philosophy in a New Century: Selected Essays. Cambridge: Cambridge University Press.
Shagrir, O. (2006). Why We View the Brain as a Computer, Synthese, Vol. 153, No 3, 393-416.
Sloman, A. (1978). The Computer Revolution in Philosophy: Philosophy Science and Models of Mind. Sussex: The Harvester Press Limited.
Stenger, V. J. (2007). God: The Failed Hypothesis; How Science Shows that God does not Exist. New York: Prometheus Books.
Stumpf, S. E. and Fieser, J. (2003). Socrates to Sartre and Beyond: A History of Philosophy. New York: McGraw-Hill Companies, Inc.
Turing A.M. (1950) “Computing Machinery and Intelligence”. Mind 59, No. 236, 433-460.
Putnam H. (1975) “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers,
Piccinini G. (2010). The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism, Philosophy and Phenomenological Research, Volume 81, Issue 2, 269-311.
Levine J. “Materialism and Qualia: The Explanatory Gap.” In Chalmers D. J. Philosophy of Mind: Classical and Contemporary Readings. (New York: Oxford University Press, 2002), 355.
Putnam H. (1975) “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers, 365.
 See Shagrir, O. (2006). Why We View the Brain as a Computer, Synthese, Vol. 153, No 3, 393-416.
Putnam H. (1975) “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers, 433-434. Modern computationalism was formulated by Warren McCulloch in the 1930s, and published for the first time by him and his collaborator Walter Pitts in the 1940s (McCulloch & Pitts, 1943). Roughly speaking, McCulloch and Pitts held that the functional relations between mental inputs, outputs, and internal states were computational, in the sense rigorously defined a few years earlier by Alan Turing in terms of his Turing Machines (Turing, 1965; first published 1936–1937). McCulloch and Pitts also held that specific mental phenomena could be explained by hypothesizing specific computations that could bring them about. According to McCulloch and Pitts, the computations postulated by their theory of mind were performed by specific neural mechanisms. McCulloch and Pitts offered rigorous mathematical techniques for designing neural circuits that performed those computations. Finally, they held that by explaining mental phenomena in terms of neural mechanisms, their theory solved the mind–body problem, but they did not formulate an explicit solution to the mind–body problem.
Putnam H. (1975) “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers, 433-434.
Alan Turing prefers to call it a ‘book of rules’: Turing A.M. (1950) “Computing Machinery and Intelligence”, Mind 59, No. 236, 433-460; reprinted in Boden M. A. (1990) (ed) The Philosophy of Artificial Intelligence, Oxford: Oxford University Press, p 44. John Searle calls it ‘instruction or rules’: Searle J. (1990) “Minds, Brains, and Programs”, in Boden M. A. (1990) (ed) The Philosophy of Artificial Intelligence, Oxford: Oxford University Press, p 69. The machine table determines the functioning of the machine; without it the machine could in fact not function. Some scholars even take the machine table to be equivalent to the machine.
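The role of the machine table can be illustrated with a minimal sketch (the table, tape contents, and state names here are invented for illustration, not taken from Turing or Searle): a Turing-style ‘book of rules’ maps each (state, scanned symbol) pair to a symbol to write, a head move, and a next state, and the machine’s entire behaviour is read off from that table.

```python
# A toy Turing-style machine table ("book of rules"): it maps
# (state, scanned symbol) -> (symbol to write, head move, next state).
# This particular table flips every bit on the tape, then halts at the blank.
TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),
}

def run(tape, state="flip", head=0):
    tape = list(tape) + ["_"]          # "_" marks the blank end of the tape
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write             # the table alone dictates each step
        head += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # prints "1001"
```

Changing a single entry in TABLE changes what the machine does, which is the sense in which the table determines, and for some scholars just is, the machine.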
Hein J. L. Discrete Structures, Logic, and Computability. (Boston: Jones and Bartlett Publishers, 1995), 699.
Block N. “Functionalism” In Ned Block (ed) Readings in the Philosophy of Psychology. Cambridge: MIT, 1980, 171-184.
 Putnam H. (1975) Mind and Machines. In Putnam H. (ed) Mind, Language and Reality, 365.
 Gregg, John (2003). Functionalism: Can’t we just say that consciousness depends on the higher-level organization of a given system? www.jrgz.net/mind/functionalism.html. Accessed on 10 October, 2013 at 8:00 am.
 See Bechtel, W. and Mundale, J. (1999). Multiple Realisability Revisited: Linking Cognitive and Neural States. Philosophy of Science, Vol. 66, No. 2, 176.
Searle (2004) Mind: A Brief Introduction, 64.
 Putnam H. (1975) “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers, 434.
 Nagel, E. (1960). Determinism in History, Philosophy and Phenomenological Research, vol XX No 3, p 294. See also Nielsen, K. (1967). Is to Abandon Determinism to Withdraw from the Enterprise of Science? Philosophy and Phenomenological Research, vol 28, No 1, 117.
 Ogletree, S. M. and Oberle, C. D. (2008). The Nature, Common Usage, and Implications of Free Will and Determinism. Behavior and Philosophy, vol. 36, 97.
 Fraley, L. E. (1994). Uncertainty about Determinism: A Critical Review of Challenges to the Determinism of Modern Science. Behavior and Philosophy. Vol. 22, No 2, 71.
 Fraley, L. E. op. cit., 81.
Putnam H. “The Nature of Mental States”. In Chalmers D. Philosophy of Mind: Classical and Contemporary Readings (New York: Oxford University Press, 2002), p 75; also in Putnam, H. “The Nature of Mental States”, In Putnam H. (ed) Mind, Language and Reality: Philosophical Papers, 433.
Putnam H. “The Nature of Mental States”. In Chalmers D. Philosophy of Mind: Classical and Contemporary Readings (New York: Oxford University Press, 2002), 75.
 Puddefoot, J. (1996). God and the Machine: Computers, Artificial Intelligence and the Human Soul. London: SPCK; Holy Trinity Church, 95.
 Stenger, W. J. (2007). God: The Failed Hypothesis; How Science Shows that God does not Exist. New York: Prometheus Books, 234.
 Sloman, A. (1978). The Computer Revolution in Philosophy: Philosophy Science and Models of Mind. Sussex: The Harvester Press Limited, 64.
 Stumpf, S. E. and Fieser, J. (2003). Socrates to Sartre and Beyond: A History of Philosophy. New York: McGraw-Hill Companies, Inc., 32.
 See Oyelakin, R. T. (2019). “Why Did the Machine Think: A Functional Theistic Interpretation from Computational Functionalism” Philosophy & Theology 31, 1 & 2, 90 ff .
 Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: St Martin’s Press, 7, 99, and 176.
 Ibid, 8.
 Churchland, P. (1993). “The Co-evolutionary Research Ideology”. In Alvin Goldman (ed), Readings in Philosophy and Cognitive Science. Cambridge, Mass: MIT, 745.
 Searle, J. (2008). Philosophy in a New Century: Selected Essays. Cambridge: Cambridge University Press.
 Barrat, J. (2013). Op. cit, 218.
 Puddefoot, J. (1996) Op. cit, 79.
 Anderson, B. (2014). Computational Neuroscience and Cognitive Modelling: A Student’s Introduction to Methods and Procedures. Los Angeles: Sage Publications Ltd., 3.
Philosophia 26/2020, pp. 126-147