(urth) Re: urth-urth.net Digest, Vol 5, Issue 41

Dan'l Danehy-Oakes danldo at gmail.com
Fri Jan 28 15:19:04 PST 2005


My apologies to both Turin and Civet: I had missed the latter's
earlier mention of Searle's "Chinese Room."

I do think that the "Chinese Room" is a quibble, and a bad one 
at that: it involves a particular kind of essentialism that I don't 
have any use for, and I perceive it as a desperate attempt to 
defend the epistemologically unique status of the human "mind," 
in advance of any serious assault made on that status by (for 
example) a machine capable of winning the Imitation Game
with any regularity.
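
For concreteness, the Room can be caricatured in a few lines of
code. This is my own sketch, not Searle's formulation; the two-entry
rulebook is a hypothetical stand-in for his stacks of paper:

    # Caricature of Searle's Chinese Room: the operator matches each
    # incoming string of symbols against a rulebook and copies out
    # the prescribed reply. Nothing in the loop understands Chinese.
    # The rulebook entries are hypothetical placeholders.
    RULEBOOK = {
        "你好吗?": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
        "你会说中文吗?": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(message):
        # Pure lookup; a default reply covers unmatched input.
        return RULEBOOK.get(message, "请再说一遍。")  # "Say that again, please."

    print(chinese_room("你好吗?"))

The quibble is that the whole system answers "correctly" while
understanding nothing; whether that intuition survives being scaled
up to actual open-ended conversation is exactly what I dispute.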


> Basically by endorsing the test you are making an ought-claim 
> (possibly of the ethical variety, although the ethical isn't the only
> type of "ought"): that, regardless of the truth behind the currently 
> impenetrable curtain, we *should* make our judgments concerning
> intelligence based on evidence of behavior. 

I don't think it's an ethical "ought" but an epistemological one,
and I agree with your interpretation here. But then, what evidence -
other than _some_ kind of behavior - is available for or against
the existence of _any_ intelligence, machine, animal, human,
or alien?

I don't think I'm "endorsing" the test, though: I'm simply
challenging those who think it's invalid to come up with something
better, ideally something that doesn't rely upon intuition. (The
Turing test relies on a statistical summation of multiple people's
intuitions over many runs; the Searle-Penrose axis relies entirely on
the intuition that intelligence is "something more" than massively
parallel information processing.)
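
To make "statistical summation" concrete, here is a sketch of my own
devising. One verdict per judge per run; the 30% threshold echoes
Turing's 1950 prediction about fooling the "average interrogator,"
and is an illustrative convention, not part of the test's definition:

    # Aggregate judges' intuitive verdicts over many runs of the
    # Imitation Game. Each verdict is what one judge concluded about
    # the hidden machine contestant. All figures are illustrative.
    def passes_turing_test(verdicts, threshold=0.3):
        # Pass if the machine was mistaken for a human in at least
        # `threshold` of the runs.
        fooled = sum(1 for v in verdicts if v == "human")
        return fooled / len(verdicts) >= threshold

    # E.g., 50 runs in which 18 judges guessed "human":
    sample = ["human"] * 18 + ["machine"] * 32
    print(passes_turing_test(sample))  # True: 36% >= 30%

No single judge's intuition settles anything; the test's verdict is
the summation.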


> It is, granted, a short step from there to saying that the
> evidence of behavior *is* the only truth of the matter there is, 

It is in fact a huge leap: one that, I gather, some behaviorists
have made, but very few. I certainly have never encountered
a behaviorist who would deny his or her own consciousness.


> I have often thought about the particular flaw that Dan'l identifies. 
> It seems to me that the class of sentient machines that you're saying 
> would be excluded would really be comparable, all in all, to 
> madmen.

H'mmm. No. I'm suggesting that its experiential world might be far
more different from thine and mine than that of, say, a sociopath,
or even a catatonic: at least as different as the experiential world
of one of the "higher" animals. As far as I can tell, a sentient
machine would operate from at least some entirely different
_kinds_ of sensory input than humans do, for example; nor would it
likely be bashed about by hormonally-induced "moods."


> Labeling a machine as "insane" might be quite unfair, but pragmatically
> would have to be done. A machine intelligence with perceptions and
> motivations sufficiently different from our admittedly arbitrary norm would
> be far more dangerous than any homicidal madman.

Only if it also had the ability to act upon its motivations. I see no reason
why such a machine should be granted, for example, arms or any other
manipulating appendage.

--Blattid

-- 
"We're going to sit on Scorsese's head"
     -- The Goodfeathers


