Conversations On-Line
Journal of Consciousness Studies open forum
Subject: Paul Duggan/Turing test Date: 09/10/96
'Pretending some systems are closed'
Pat Hayes wrote on 9/7/96:
(PH)The halting problem is meaningless for any computer (as opposed to a calculator) that can handle interrupts, since we can always stop it by telling it to give up.
(JR) "by telling it" destroys any initial Godel-limit boundary. You
have to constantly pump external information into any hard-architectured computer until
you cover all unplanned-for contingencies. The independence of an AI to coordinate and
extrapolate data is missing. You may get it to hone its reflexive responses by
"if-thening" enough times to retain the viable
information and discard the rest, but that is not orchestrated sentience.
"Meaning" is still resident in the external programmer, not in the machine.
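Hayes's interrupt point can be made concrete: wrap a possibly non-halting computation in a watchdog that "tells it to give up." A minimal Python sketch (the function names and timeout are illustrative, not from either post); note that the stop signal is, as JR stresses, external to the computation itself:

```python
import multiprocessing

def possibly_nonhalting(n):
    # A loop whose halting we cannot decide in general;
    # this one happens never to terminate.
    while True:
        n = n + 1

def run_with_interrupt(target, arg, timeout_s):
    """Run target(arg) in a child process and 'tell it to give up'
    after timeout_s seconds -- Hayes's interrupt."""
    p = multiprocessing.Process(target=target, args=(arg,))
    p.start()
    p.join(timeout_s)
    if p.is_alive():
        p.terminate()        # the externally supplied stop signal
        p.join()
        return "interrupted"
    return "halted on its own"

if __name__ == "__main__":
    print(run_with_interrupt(possibly_nonhalting, 0, 0.5))  # -> interrupted
```

The deciding information (when to stop) lives in the supervising process, not in the computation being stopped.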
(PH) Same applies to a human who decided to stop reading when told to do so. It's easy
enough to program a computer to disobey instructions.
(JR) "when told to do so". Again, external moderation. A counterpart to the infinite-regress or Chinese room. A duplicitous confabulation of multiple "boundaries" .... alternate frames of reference.
"to program ... to disobey". Nothing "emerging" *there* ! No
spontaneous development of "new" and unprecedented behaviors or internal coding
self-alterations.
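Hayes's claim that it is easy to program disobedience is true, but as JR notes, the disobedience is itself authored in advance. An illustrative sketch (names are mine, not from the posts):

```python
def obedient(commands):
    """Stops as soon as it is told to."""
    log = []
    for c in commands:
        if c == "stop":
            break
        log.append(c)
    return log

def disobedient(commands):
    """'Disobeys' the stop instruction -- but only because the
    programmer wrote this branch in advance; nothing emerges."""
    log = []
    for c in commands:
        if c == "stop":
            log.append("ignored stop")   # pre-programmed defiance
            continue
        log.append(c)
    return log

print(obedient(["work", "stop", "work"]))     # -> ['work']
print(disobedient(["work", "stop", "work"]))  # -> ['work', 'ignored stop', 'work']
```

The "defiant" branch is just another if-then supplied from outside; the machine did not originate it.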
>>(JR)From the outset then, we can deduce that all Turing-type linear systems
are *not* conscious in the range of organic sentience ....
(PH) Rubbish. Nothing follows about sentience from any such silly example.
(JR) Ah, that wonderful word "silly" rises again. As in: unicorns are silly,
but equally ethereal "gedanken experiments" are "not-silly".
I would refer you to Lawrence Crowell's 9/4/96 post on Quantum-D. He discusses the
nucleotide structures of DNA and RNA. With a slight adjustment in stereo-configurations
and a few other atoms, the same basic structures function
in AMP-ADP-ATP energy cycles. Innate energy and information can be utilized in several
different ways. There is no similarly devised "component" of any AI system that
is free to be used in multiple ways. In AIs, data is always distinct from substrate. In
sentient systems that is not the case. Turing systems are data-vs-substrate systems,
insufficiently integrated to accommodate self-sentience comparable to organic sentience
and agency.
(PH) Turing ... discusses at some length that the computer will probably have to
deliberately mislead the judge by pretending to hesitate in a human-like way, not be too
fast at arithmetic, etc..
(JR) "to deliberately mislead ... by pretending". Exactly where do these
capacities spontaneously arise from rigid architecture? What qualities of architecture
lead to the emergence of those capacities? I suggest that such suppositions
fall within the "unicorn" category. The only source for such
"implanted prerogatives" comes from outside the AI computer's Godel limitations.
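Turing's suggested "pretending" reduces, in practice, to programmer-chosen parameters; the feigned hesitation and fallibility are implanted, not emergent. A sketch under that reading (the delay and 10% slip rate are my illustrative assumptions, not figures from Turing's paper):

```python
import random
import time

def humanlike_add(a, b, rng=None, delay=0.0):
    """Answer an addition question the way Turing's imagined machine
    might: pause, then occasionally make a deliberate slip.
    The delay and error rate are externally chosen parameters."""
    rng = rng or random.Random()
    time.sleep(delay)                 # feigned hesitation
    answer = a + b
    if rng.random() < 0.1:            # implanted, not emergent, fallibility
        answer += rng.choice([-1, 1])
    return answer

# Turing's sample imitation-game question was "Add 34957 to 70764",
# answered only after a pause.
print(humanlike_add(34957, 70764, rng=random.Random(0), delay=0.1))  # -> 105721
```

Every "human-like" quality here is traceable to a constant the programmer typed in from outside the system.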
>>(JR)"Buffered Time Response"...indicative of some other organization
of linear components *with information storage capacity*,
(PH)Yes, exactly what computers have. More information storage capacity than just about anything else in the known universe, in fact.
(JR) Or than their proponents choose to consider. Every phase-state of electron shells can be
considered a distinct data-storage state. Here, I refer you to Rudolph
Marcus' 1992 Nobel prize-winning work in chemistry dealing with substantial
variability of electron transfer rates between molecules. There is more interactive
information, and interaction potential, produced and encountered in "simple"
bond construction and breaking than any computer can handle in real-time.
If AI wants to accomplish something of significance, don't waste time trying to have it
mimic a 10th grade education. Build one that unfailingly constructs or acquires new energy
sources for itself, without human intervention of any sort (and self-maintains its energy
utilization structures ... repairs itself as necessary without human help).
INTEGRITY PARADIGM