(urth) Re: urth-urth.net Digest, Vol 5, Issue 41

Chris rasputin_ at hotmail.com
Sun Jan 30 22:07:36 PST 2005


Whoa, whoa, hold up. I am definitely not Turin. Civet or Chris will do.

>Turin wrote...
>
> > I don't entirely disagree, although I think, to be fair to Searle,
> > he didn't intend it to be quite the desperation-position it seems
> > to be. In answer to a rhetorical question you ask later on, "what
> > other evidence is available?" Well, there's empirical evidence
> > about the processes that we presume produce our consciousness,
> > which interestingly *would* catch some subset of possible "weird"
> > intelligences that would never be picked up by a Turing test, so
> > you could say there is a return to Searle's angle there. The very
> > obvious problem with this is that it is difficult/impossible to
> > quantify which *aspects* of these processes are the ones that make
> > them essential to intelligence.
>
>First of all, to say this involves a set of a priori assumptions, not
>least of which is that there are some "aspects of the processes" that
>make them "essential (note this key word) to intelligence."

I'll grant that there are a priori assumptions, although these need not
be as extensive as you suggest. In the end not much is required beyond
the same general assumptions necessary for any causal inquiry. For
example, we know that certain creatures (humans) seem to behave as if
conscious, but that if certain processes centered on the brain stop
functioning they cease to behave as if conscious. This doesn't mean
there's no other way to be conscious, but it does suggest that the
functions of those processes contribute to human consciousness, and it
allows the suggestion that "relevantly similar" processes might fulfill
the same role.

This line of reasoning can lead to hasty generalization, depending on
the depth of your knowledge of what's really going on with
consciousness, which in our case is pitifully small. But my point was
that this avenue could conceivably open up, and there's every reason to
believe it will, at least to some extent.

>This in turn involves an interesting (to me, at any rate) set of
>as-of-now unverified (and to my thinking unlikely ever to be verified)
>assumptions about the _nature_ of "the processes that we presume
>produce our consciousness": specifically, that there exists some
>empirically-unique set of processes that etc. But as of now (30 Jan 05
>@1.30PM PST) I know of no evidence that supports this hypothesis.

There is actually a great deal of empirical evidence surrounding you
every day: how often do you see something you believe is conscious that
doesn't have a brain? Of course, it takes little imagination to reason
hypothetically that the weight of this empirical evidence is
systematically skewed by whatever you want to call it - the
evolutionary path of this planet, etc. The possibility, or even
likelihood, that this is so is what would lead you to disbelieve the
hypothesis as you are thinking of it.

But really the a priori assumption here isn't so grandiose. The
"aspects of processes" and such are functional descriptions rather than
physical ones, and are fixed more or less by definition (although,
again, we have a very poor definition of consciousness; we may get a
better one later). As a loose example, to describe something as a "car"
you need not specify the exact workings of its engine, but you do need
something that *functions* as an "engine" and makes it go.

>The only "reason" I can see to make such an assumption is the desire
>to protect the ontologically privileged status of the human nervous system
>or "soul."

I guess the best way to sum up here is that I wasn't talking about
this; I was simply suggesting that advances in cognitive science could
make it possible to identify other physical processes capable of
producing consciousness, and that this might help identify
consciousnesses that wouldn't necessarily be picked up by a test of
intelligent behaviour. Nothing logically precludes it, though no test
based on such a method would ever be any more certain of its results
than the Turing test is.

>(Incidentally, I brainfarted in my previous post and repeatedly used
>"epistemological" where I meant "ontological." Sorry for any confusion
>this may have caused.)

It sort of hung together as "epistemological"; it didn't seem
unreasonable.
I don't know about "ontological", though. An ontological claim is a claim 
about what "is", and "is" claims are distinguished from "ought" claims 
essentially by definition. I don't think that there can actually *be* such a 
thing as an ontological "ought".

>Searle does. And, for all I can tell, so does Penrose. Both seem to me
>to be engaged in a holding battle for the ontologically unique essence
>of "human mind." (In fact, I am inclined to agree with their position, but
>on purely "emotional" grounds: I see not one bit of empirical evidence
>to justify the position, and so believe it has no proper place in any
>scientific debate.)

I am not familiar with Penrose. Searle is not quite so bald about it.
There's a whole aspect of the Chinese Room thought experiment that we
generally strip out (and thank god we do, because thinking about it
makes me want to scream) where, supposedly, the Chinese Room is a
purely syntactical calculator *with no semantics*; if you accept this
supposition then, to make a long story short, it gives Searle some
ground to stand on. But there are a hundred pretty good reasons not to
accept it, and if you reject it then Searle appears to be standing in
the rain shouting "Intelligence is made by brains!" for no particular
reason.
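If it helps, that supposition is easy to caricature in a few lines of
code. A minimal sketch (the "rulebook" entries here are invented purely
for illustration) of symbol manipulation by shape alone:

    # A toy Chinese Room: incoming symbols are matched by shape
    # against a rulebook, and the paired symbols are handed back out.
    # Nothing in here attaches meaning to any of the strings.
    RULEBOOK = {
        "你好吗": "我很好",          # "how are you" -> "I'm fine"
        "你叫什么名字": "我叫小明",  # "what's your name" -> "I'm Xiaoming"
    }

    def chinese_room(symbols: str) -> str:
        # Pure syntax: a lookup, with a stock reply for unknown shapes.
        return RULEBOOK.get(symbols, "请再说一遍")  # "please say it again"

    print(chinese_room("你好吗"))  # prints 我很好, with zero comprehension

If you grant that this is all the Room ever is - a rulebook, however
enormous - then the "no semantics" charge has teeth; the hundred good
reasons come in when you doubt that a rulebook rich enough to pass the
test could really stay this simple.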

But yeah, he was probably motivated by the same emotional reasons you 
suggest. He's just not *quite* as sloppy a thinker as all that, or at least 
not sloppy in quite the same way.

> > The problem is in knowing just how many limitations would be enough
> > to render such a machine harmless. It is, potentially, much much
> > much smarter than any human meatbag.
>
>That seems to me another presupposition. I don't assume that it
>_couldn't_ be, but, as long as we don't know how consciousness and
>intelligence actually arise in physical systems like brains, we don't
>know whether any _c_-like limit to what a consciousness can do exists.
>There is a reasonable (but not empirically testable at this time)
>argument to the effect that a single consciousness cannot handle more
>than some (undetermined) amount of "stuff" at a time without splitting
>into multiple consciousnesses.
>
>--Dan'l

I'm willing to accept this as a hypothesis, but I wouldn't be willing to 
wager on it.

In any event, here we are, nicely back in the realm of sci-fi. I am
thinking of cautionary tales along the lines of the beginning of
Vinge's "A Fire Upon the Deep". There's probably a better example, and
I know I've read multiple stories that pick up on the idea, but that's
the only one I can think of.

Chris