(urth) Re: urth-urth.net Digest, Vol 5, Issue 41

Chris rasputin_ at hotmail.com
Fri Jan 28 16:29:34 PST 2005


Dan'l said:
>I do think that the "Chinese Room" is a quibble, and a bad one
>at that: it involves a particular kind of essentialism that I don't
>have any use for, and I perceive it as a desperate attempt to
>defend the epistemologically unique status of the human "mind,"
>in advance of any serious assault made on that status by (for
>example) a machine capable of winning the Imitation Game
>with any regularity.

I don't entirely disagree, although I think, to be fair to Searle, he didn't 
intend it to be quite the desperation-position it seems to be. As for the 
rhetorical question you ask later on, "what other evidence is available?": 
well, there's empirical evidence about the processes that we presume produce 
our consciousness, which interestingly *would* catch some subset of possible 
"weird" intelligences that would never be picked up by a Turing test, so you 
could say there is a return to Searle's angle there. The obvious problem 
with this is that it is difficult, perhaps impossible, to quantify which 
*aspects* of those processes are the ones that make them essential to 
intelligence. And nobody is seriously going to fall back on Searle's 
apparent presumption that, for lack of knowing any better, basically *all* 
of them are essential.

> > Basically by endorsing the test you are making an ought-claim
> > (possibly of the ethical variety, although the ethical isn't the only
> > type of "ought"): that, regardless of the truth behind the currently
> > impenetrable curtain, we *should* make our judgments concerning
> > intelligence based on evidence of behavior.
>
>I don't think it's an ethical but an epistemological "ought," but I
>agree with your interpretation here. But then, what evidence -
>other than _some_ kind of behavior - is available for or against
>the existence of _any_ intelligence, machine, animal, human,
>or alien?

I'm not really certain it's either of the above, which is why I'm pretty 
wishy-washy here. I think the "ought" can be framed any number of ways, 
which pretty much determines how you'll then choose to characterize it. I 
just have a vague suspicion that if you tried to get down to one essential 
formulation that sums up all the rest, you'd end up with either an aesthetic 
or ethical claim.

> > It is, granted, a short step from there to saying that the
> > evidence of behavior *is* the only truth of the matter there is,
>
>It is in fact a huge leap: one, I gather, that some behaviorists
>have made, but very few. I certainly have never encountered
>a behaviorist who would deny his or her own consciousness.

Point taken, although I still think it is a relatively easy step to take 
(not that I find it desirable). The way has been prepared ahead of time by 
other epistemological arguments.

>H'mmm. No. I'm suggesting that its experiential world might be far
>more different from thine and mine than that of, say, a sociopath,
>or even a catatonic: at least as different as the experiential world
>of one of the "higher" animals. As far as I can tell, a sentient
>machine would operate from, at least some, entirely different
>_kinds_ of sensory input from humans, for example; nor would it
>likely be bashed about by hormonally-induced "moods."

I was using, or trying to use, "mad" in a sort of abstract sense: "madness" 
as that which lies outside the realm of intelligibility, incapable of being 
reduced or expressed logically in human terms. So your garden-variety crazy 
person is only a little mad in this sense. A machine could do it much 
better.

> > Labeling a machine as "insane" might be quite unfair, but pragmatically
> > would have to be done. A machine intelligence with perceptions and
> > motivations sufficiently different from our admittedly arbitrary norm
> > would be far more dangerous than any homicidal madman.
>
>Only if it also had the ability to act upon its motivations. I see no 
>reason
>why such a machine should be granted, for example, arms or any other
>manipulating appendage.

The problem is in knowing just how many limitations would be enough to 
render such a machine harmless. It is, potentially, much, much smarter than 
any human meatbag.
