(urth) 5HC : Chinese boxes or tea chests?
Chris
rasputin_ at hotmail.com
Tue Feb 1 17:32:45 PST 2005
Dan'l said:
> >Searle has not, to the best of my knowledge, proposed
> >either a definition of "knowing Chinese" or an empirical
> >test for determining whether the Room actually knows
> >Chinese. He has instead relied on his readers' paralogical
> >emotional response to assure us that it does not. But that
> >does not mean that it does not.
Well, hold up for a second. Empirical tests aren't necessary, because this
is a thought problem, and in thought problems you can actually *stipulate*
such things. Of course, this means that what you end up with is a conclusion
that a hypothesis is either logically consistent or inconsistent; it doesn't
mean that an actual instance of what you describe in the thought problem can
or will happen.
I think it was Searle's intention to stipulate the layout of the Room such
that it does not "know Chinese" but nonetheless *does appear* to "know
Chinese". Unfortunately, the layout he describes doesn't really fit the
bill, for one thing. But really there are a multitude of problems here.
Granting that he does have a definition of what it is to understand a
language (presumably laid out somewhere in the large body of work he's done
in the philosophy of language), for example, doesn't necessarily compel you
to agree with that definition. And so on.
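(To make the stipulation concrete, here is a minimal sketch of the sort of
layout Searle needs: pure rule-book lookup, with no step in the loop that
assigns the symbols any meaning. The phrases and the table are my own
invented placeholders, not anything from Searle.)

    # A toy "Chinese Room": the operator (this program) matches an
    # incoming symbol string against a rule book and copies out the
    # scripted reply. Nothing here understands Chinese.
    RULE_BOOK = {
        # input symbols -> scripted reply (invented placeholder entries;
        # a real rule book would have to be vastly larger)
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "Nice today."
    }

    def room_reply(symbols: str) -> str:
        # Pure lookup: emit whatever the book lists, or a stock
        # deflection ("Sorry, please say that again.") for unknown input.
        return RULE_BOOK.get(symbols, "对不起，请再说一遍。")

    print(room_reply("你好吗？"))  # fluent-looking output, zero understanding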
Crush said:
>Yes, but to be fair Turing's criterion is just as subjective. Roughly, "can
>a computer succeed in impersonating a human person well enough to fool
>humans." Where is the "control" here? Mr. Raylor has brought up an
>excellent point: What if a computer can succeed in fooling someone that it
>is something we KNOW it is not? That would be a useful control, I think.
And I think this was Searle's intention in trying to lay out the Chinese
Room the way he did: to raise this as a logical possibility. Personally, I
don't think he succeeded with his example, but even if he did, what are the
implications? Well, to start with, it would mean that you obviously can't
*know* that something is intelligent just because it behaves
"intelligently". And if that is the case, then a strict behaviorist would
have a problem. (It is unclear to me, but I get the general impression that
Searle thinks Turing's position is necessarily behaviorist, and I don't
think it really is.)
>Is successful mimicry really the final criterion for personhood for a
>machine?
This is something I have been trying to get at here. There are two questions
to be answered: an "is" question and an "ought" question. If Searle is
correct, the "is" question (is the machine intelligent?) can't be answered
by examination of behavior alone (or possibly can't be answered at all).
"Personhood", however, is really an "ought" question: how ought a machine
that appears to behave intelligently be treated, what rights should it have,
and so on?
I would say that Turing is really answering this "ought" question: a machine
that appears to be conscious should be granted personhood even if you're
unsure whether it really "is" intelligent. And I think it is even clearer
that this is the position Wolfe holds, given the example of Rose.