Friday, January 24, 2014

overflow

quick note on something unimportant:

my qualia clearly overflow my behavioral access to them.

say there's a thing here. a can of beer (cans I think are more prosaic), with its characteristic physical attributes.

when i look at it, i have an experience of it. much of that experience is strongly, closely correlated with the physical attributes of the can. you can take this for granted, or you can confirm it by asking me questions and carefully collecting my responses. the can's geometric properties, its shape, its albedo and texture, things like that. other parts of my experience are not correlated with attributes of the can, but are quirks of my own systems. colors, a/modally completed contours, illusory depth from shading, meanings of symbols, etc.

all of this you can, in principle, recover from me by making certain types of measurements - basically, you prompt me with questions or decisions, and i give you responses. these can be words, numbers, button presses, ratings, slider adjustments, essays, etc.

let's say i give you all the time in the world. you have time to run every test you can think of. you can run every task until performance asymptotes, and you can estimate any parameter that you can dream of. every aspect of this can of beer that i have any ability to respond to, to access behaviorally, is your data.

is there anything left to my experience that you have not collected, that you cannot find in your data and models?

yes, i think there is: my qualia are overflowing!

(you can make this same sort of argument for physics - i measure the physical attributes of an object until i can't find any more to measure. you can then point out, well, isn't there something left? the thing itself? but then i can ask you, what is there, about that thing, that is not described or captured in my measurements and models? what can you point to? that the thing is *there*? well, I have its thereness perfectly specified in a coordinate space. that the thing is *substantial*? well, i've got every aspect of its substantiality described by my equations of quantum electrodynamics. what is left? i think that, ultimately, there's nothing for you to point to, because in every case, i can show you how i've measured or modeled whatever it is. i don't see how the same move works for phenomenal experience.)

Tuesday, January 07, 2014

idealism 2

The days are counting down, just weeks now until the Big Shift. This evening, ideas have been swirling through my head, especially a reiteration of the first version of this post. I wanted to resketch those ideas, so here we go, in less detail but more formally:

1. There is a real world that exists in some form that we can perceive, accurately or not.
1.1. The substance of this real world is not physical or objective or dualistic.
1.2. The substance of the world is subjective and phenomenal.
2. All us humans (and many other creatures) experience phenomenal consciousness.
2.1. Phenomenal consciousness is a substructure or subfunction of a brain.
2.1.1. Consciousness is not the only type of phenomenal substance (reiterating 1.2.).
2.2. Experience of phenomenal consciousness is analogous to a space with things in it.
2.2.1. Things that are 'in the space' of consciousness are things that one is 'conscious of'.
2.3. Objects are neural parsings of stuff in the real world.
2.3.1. Objects can be informatively (yet redundantly) labeled 'neural objects'.
2.3.2. A substructure of a neural object that is present in consciousness is an 'object-in-consciousness'.
2.4. The stuff in the world that is parsed into objects is also subjective and phenomenal.
2.4.1. Generally this stuff is not conscious.
2.4.2. An exception is when the stuff is a living brain.
2.5. We generally recognize that stuff in the world is not conscious.
2.5.1. We come to this conclusion because objects-in-consciousness are within the space of consciousness, but do not themselves contain conscious spaces (except for brains, and we only know they do because they say so).
2.5.2. For 2.5.1. to be true, one consciousness would need to be able to emulate another.
2.5.3. Despite the truth of 2.5., it is arrived at for the wrong reasons.
2.5.3.1. We mistake objects-in-consciousness for stuff in the world.
2.5.3.2. Since we make this mistake, and since objects-in-consciousness are not themselves conscious, we believe (correctly) that (most) stuff in the world is not conscious.
2.5.4. We are perplexed that brains are conscious, yet do not appear to be.
2.5.4.1. This is because we are mistaking brains, which are conscious, for objects-in-consciousness, which we have already mistaken for stuff in the world.
2.5.4.1.1. This is a subtle error, because if the middle step is left out, it seems not to be an error (we are confusing brains for stuff-in-the-world, which they are).
3. The hard problem of consciousness is the apparently uncrossable gulf between phenomenal subjective experience as-a-brain, and the non-phenomenal objective status of-a-brain.
3.1. Items 1. and 2. show how this gulf is a consequence of a sequence of mistakes about the status of stuff-in-the-world and of objects-in-consciousness.

This is a type of idealism - which of the many subtypes I'm not sure - that, while not popular (as far as I can tell), is at least tolerable in philosophical circles. I'm liking it more and more!