(urth) Re: urth-urth.net Digest, Vol 5, Issue 41

Dan'l Danehy-Oakes danldo at gmail.com
Sun Jan 30 13:54:32 PST 2005


Turin wrote...

> I don't entirely disagree, although I think, to be fair to Searle, he didn't
> intend it to be quite the desperation-position it seems to be. In answer to
> a rhetorical question you ask later on, "what other evidence is available?"
> Well, there's empirical evidence about the processes that we presume produce
> our consciousness, which interestingly *would* catch some subset of possible
> "weird" intelligences that would never be picked up by a Turing test, so you
> could say there is a return to Searle's angle there. The very obvious
> problem with this is that it is difficult/impossible to quantify which
> *aspects* of these processes are the ones that make them essential to
> intelligence.

First of all, to say this involves a set of a priori assumptions, not
least of which is that there are some "aspects of the processes" that
make them "essential" (note this key word) to intelligence. This in
turn involves an interesting (to me, at any rate) set of as-yet
unverified (and, to my thinking, unlikely ever to be verified)
assumptions about the _nature_ of "the processes that we presume
produce our consciousness": specifically, that there exists some
empirically unique set of processes that etc. But as of now
(30 Jan 05 @ 1:30 PM PST) I know of no evidence that supports this
hypothesis; in fact, I conclude from the evidence available to me at
this time that the (empirically observable) "processes that produce
our consciousness" occur in many contexts that do _not_ produce
consciousness, at least so far as we can tell. Rather, consciousness
appears to occur as an emergent phenomenon, and there is no a priori
reason to assume that other physical processes cannot produce similar
emergent phenomena. The only "reason" I can see to make such an
assumption is the desire to protect the ontologically privileged
status of the human nervous system or "soul."

(Incidentally, I brainfarted in my previous post and repeatedly used 
"epistemological" where I meant "ontological." Sorry for any confusion 
this may have caused.)

This (in brief) is why I regard the "Chinese room" thought experiment
as a desperation move taken by essentialists.

>  And nobody's seriously going to fall back on Searle's apparent
> presumption that, for lack of knowing any better, basically *all* of them
> are essential.

Searle does. And, for all I can tell, so does Penrose. Both seem to me
to be engaged in a holding action on behalf of the ontologically
unique essence of the "human mind." (In fact, I am inclined to agree
with their position, but on purely "emotional" grounds: I see not one
bit of empirical evidence to justify it, and so believe it has no
proper place in any scientific debate.)

 
> >I don't think it's an ethical but an epistemological "ought," but I

Again, this should have been "ontological."

> >agree with your interpretation here. But then, what evidence -
> >other than _some_ kind of behavior - is available for or against
> >the existence of _any_ intelligence, machine, animal, human,
> >or alien?
> 
> I'm not really certain it's either of the above, which is why I'm pretty
> wishy-washy here. I think the "ought" can be framed any number of ways,
> which pretty much determines how you'll then choose to characterize it. I
> just have a vague suspicion that if you tried to get down to one essential
> formulation that sums up all the rest, you'd end up with either an aesthetic
> or ethical claim.

H'mmmm. An interesting point. I definitely do agree that the
ontological "ought" here results in ethical issues - i.e., if we are
compelled to conclude, based on empirical evidence, that an artificial
system is sentient-sapient-intelligent-conscious (pick your favorite
magick word), we have to consider our ethical stance toward it
differently than if it is "just" an artifact: we have to consider what
might allow or oblige us to regard something other than a human as a
"person" when resolving ethical questions. (Which in turn is one of
the great values of SF for us today: it doesn't resolve the questions,
but it at least gives us a sandbox in which to play with them until a
real situation arises, so that we are better prepared to deal with
them.)

> I was using, or was trying to use, "mad" in a sort of abstract sense.
> "Madness" as that which lies outside the realm of intelligibility; incapable
> of being reduced or expressed logically in human terms. So your
> garden-variety crazy person is only a little mad in this sense. A machine
> could do it much better.

Point taken and accepted.

 
> >Only if it also had the ability to act upon its motivations. I see no
> >reason
> >why such a machine should be granted, for example, arms or any other
> >manipulating appendage.
> 
> The problem is in knowing just how many limitations would be enough to
> render such a machine harmless. It is, potentially, much much much smarter
> than any human meatbag.

That seems to me another presupposition. I don't assume that it
_couldn't_ be, but as long as we don't know how consciousness and
intelligence actually arise in physical systems like brains, we don't
know whether any _c_-like limit - an upper bound analogous to the
speed of light - constrains what a consciousness can do. There is a
reasonable (but not, at this time, empirically testable) argument to
the effect that a single consciousness cannot handle more than some
(undetermined) amount of "stuff" at a time without splitting into
multiple consciousnesses.

--Dan'l

-- 
"We're going to sit on Scorsese's head"
     -- The Goodfeathers


