Conversations On-Line
Journal of Consciousness Studies open forum
Subject: Paul Duggan/Turing test
Date: 09/17/96
'Pretending some systems are closed'
Pat Hayes wrote on this subject on 9/7/96. I appreciate his commentary, which took my observations to task, and I would like to return the courtesy by evaluating some of his specific remarks and word choices. His language is very telling when set against his basic philosophical assertions concerning consciousness and AI. His word choices pinpoint the Godel set-theory issues I have raised here several times - issues as obviously important for discussions about consciousness as the simple phrase, "point of view".
That phrase represents not just a piece of linguistic syntax; it is a product of conscious experience ... shared experiences translated into shared correlates of communication. But primally, it 'maps' the relationship of 'subjectivity' vs. 'objectivity' (according to Cartesian standards) ... the floating, relative loci of observation from which the rest of existence is "experienced". The enaction of qualia.
Without needing to "define" the moment or experience of qualia, we rely on the supposition that we share the capacity to have the "same" types of experiences, even as we recognize factors that make our individual experiences "unique". One of those factors is that we each proceed from unique physical structures (bodies) which cannot be in the same place at the same time as any other {completely or perfectly}.
Now, if we each are physical enactments of Godel structures, having temporal and spatial limits with concomitant boundaries which "include" and "exclude" potential information, then there is a plurality of such Godel spaces.
It is my observation that each unique sentient - which can convey to us a sense of its being "conscious" - is capable of some degree of "agency". That is, it does not *need* information from beyond its "Godel space" in order to function; it is not simply a "reactionary" existent. In this light, any Turing device *requires* additional information from outside its transient Godel limit (defined and established by its composition and structure) in order to "continue functioning" or "change functional behaviors". This is a "point of view" criterion.
Not only must a Turing system be able to react to *new* information (originating from beyond its Godel boundary), it *requires* information from beyond its Godel boundary at crucial junctures ... such as those precipitated by the need to break a program loop that leads to recursive dysfunction. That is not the design of "conscious systems".
I turn the issue back to Pat Hayes. I suggest that he is intuitively aware of these notions simply from the responses he gave on 9/7/96. He had to refer to the sentience of a 'programmer' on the far side of a computer's Godel boundary, external to any potentially innate AI sentience or consciousness available within the AI structure. He had to fall back and say that some external consciousness is necessary to come to the rescue of fundamentally Turing-system AI constructions. Thus, as adept as they become at data handling, their "machine consciousness" will never (at least under current hardware/software organizations) be anything like organic sentience - quite contrary to his proposition that they are, or can be, under current methodology.
Thus:
(PH)The halting problem is meaningless for any computer (as opposed to a calculator) that can handle interrupts, since we can always stop it by telling it to give up.
"by telling it" destroys any initial Godel-limit boundary. You have to
constantly pump external information into any hard- architectured computer until you cover
all un-planned for contingencies. The independence of an AI to coordinate and extrapolate
data is missing. You may get it to hone its reflexive responses by "if-thening"
enough times to retain the viable
information and discard the rest, but that is not orchestrated sentience.
"Meaning" is still resident in the external programmer and not in the machine.
(PH)Same applies to a human who decided to stop reading when told to do so. It's easy enough to program a computer to disobey instructions.
"when told to so so". Again, external moderation. (A counterpart to the infinite-regress or Chinese room.) A falling back on multiple "boundaries" ....alternate frames of reference.
"to program ... to disobey". Nothing "emerging" *there* ! No spontaneous development of "new" and unprecedented behaviors or internal coding self-alterations.
>(JR)From the outset then, we can deduce that all Turing-type linear systems are *not* conscious in the range of organic sentience ....
(PH)Rubbish. Nothing follows about sentience from any such silly example.
The suggestion and example are no more "silly" than a gedanken-experiment is "silly", even though it may not be physically doable. To give a more concrete example of the relationships and dynamics involved, I would refer you to Lawrence Crowell's 9/4/96 post on Quantum-D. He discusses the nucleotide structures of DNA and RNA. With a slight adjustment in stereo-configuration and a few other atoms, the same basic structures also function in AMP-ADP-ATP energy cycles. Innate energy and information can be utilized in several different ways.
There is no similarly devised "component" of any AI system that is free to be used in multiple ways. In AIs, data is always distinct from substrate; in sentient systems that is not the case. Turing systems are "data vs. substrate" systems, insufficiently integrated to accommodate self-sentience comparable to organic sentience and agency. In organically sentient systems the structures themselves can be program-codes. Dextro- and levo-rotatory forms of a given "singular" molecule can function as a Boolean (0,1) gate in the presence of a specific metabolic environment. The "information" and the "instruction of what to do next" reside in whether a molecule enables or restricts electron flow through a metabolism.
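As a rough sketch of the analogy (my own illustration, with entirely hypothetical environments and rules - not real biochemistry): the "bit" is not stored apart from the structure. The molecule's handedness is the datum, and the gate's output depends on that handedness together with the metabolic environment it sits in.

    from enum import Enum

    class Chirality(Enum):
        DEXTRO = "D"  # dextrorotatory form
        LEVO = "L"    # levorotatory form

    def electron_flow_enabled(form, environment):
        # Hypothetical rule table: which structural form conducts electrons
        # depends on the (made-up) metabolic environment surrounding it.
        table = {
            ("environment_A", Chirality.DEXTRO): True,
            ("environment_A", Chirality.LEVO): False,
            ("environment_B", Chirality.DEXTRO): False,
            ("environment_B", Chirality.LEVO): True,
        }
        return table[(environment, form)]

    # The same physical structure reads as 1 or 0 depending on context:
    print(electron_flow_enabled(Chirality.DEXTRO, "environment_A"))  # True  (1)
    print(electron_flow_enabled(Chirality.DEXTRO, "environment_B"))  # False (0)

Here structure and 'code' coincide, which is precisely the contrast with a conventional computer's strict separation of data from substrate.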
(PH)Turing ... discusses at some length that the computer will probably have to deliberately mislead the judge by pretending to hesitate in a human-like way, not be too fast at arithmetic, etc.
"to deliberately mislead ... by pretending". Exactly where do these capacities
spontaneously arise from rigid architecture? What qualities of architecture lead to the
emergence of those capacities? The only source for such "implanted prerogatives"
comes from outside an AI computer's Godel limitations.
>(JR)"Buffered Time Response"...indicative of some other organization of linear components *with information storage capacity*,
(PH)Yes, exactly what computers have. More information storage capacity than just about anything else in the known universe, in fact.
Or rather, more than their proponents choose to consider. Every phase-state of an electron shell can be considered a distinct data-storage state. Here I refer you to Rudolph Marcus's 1992 Nobel Prize-winning work in chemistry dealing with the substantial variability of electron-transfer rates between molecules. There is more interactive information, and interaction potential, produced and encountered in "simple" bond construction and breaking than any computer can handle in real time.
If AI wants to accomplish something of significance, don't waste time trying to have it mimic a 10th-grade education. Build one that unfailingly constructs or acquires new energy sources for itself, without human intervention of any sort (and self-maintains its energy-utilization structures ... repairs itself as necessary without human help). These last functions would be more representative of "consciousness" than rapid co-processing of basically "linear" information functions.
INTEGRITY PARADIGM