
Friday, May 24, 2013

Binding Problem

I am still evolving, as I read all this NCC stuff, but in testing myself and my thinking, I find that I produce something very similar to what I have produced several times in the past year or so (also under the Vision tag):


My view has been that the phenomenal visual scene can be likened to a stack of qualia or phenomenal properties, all simultaneously experienced or bound together in such a way that it is often difficult to see the bound parts as distinct from one another, although they are distinguishable in principle. The root of this stack is the set of phenomenal properties that I believe are most often identified with ‘qualia’, i.e. properties that have scalar magnitudes or intensities: brightness and darkness, color, contrast, and then at a slightly higher order, orientation, scale, direction, speed. These are familiar as physical objects of study either in the psychophysical field of spatial vision, or as determinants of sensitivity in the neurophysiology of the first few synapses of the initial retinocortical pathway for visual encoding. But they are not the only phenomenal properties of visual scenes, and in fact they are not the properties of scenes that we spend most of our ordinary visual time analyzing. Instead, we spend most of our visual effort attending to more fuzzily inferred properties of the scene: identities, utilities, depths, valences, affordances. These are the properties of a scene that are immediately apparent to us, but they are also the ones that require the most inference: the shape and meaning of a word, not so much its contrast or color, which we can easily adapt to and forget, although they remain in our phenomenal consciousness. I am reminded of what Foucault said regarding the multiple layers of a calligram: “As a sign, the letter permits us to fix words; as line, it lets us give shape to things.” All these things are simultaneously present and part of the seen scene, but we tend to attend selectively to certain levels.
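To make the stack idea a little more concrete, here is a minimal sketch in Python of the layering I have in mind: a scalar ‘root’ of low-level qualia bound, at each point of the scene, to higher-level inferred properties. All of the class and field names here are illustrative placeholders, not a claim about how the properties are actually organized or computed.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class LowLevelQualia:
    brightness: float
    color: Tuple[float, float, float]   # any convenient color coordinates
    contrast: float
    orientation: float                  # radians
    scale: float                        # e.g. cycles per degree
    direction: float                    # radians
    speed: float

@dataclass
class InferredProperties:
    identity: Optional[str] = None      # e.g. "word", "face", "tree"
    depth: Optional[float] = None
    valence: Optional[float] = None
    affordances: List[str] = field(default_factory=list)

@dataclass
class PhenomenalPoint:
    root: LowLevelQualia                # the scalar 'root' of the stack
    inferred: InferredProperties        # the fuzzily inferred upper layers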

I think it is clear from this conception of the phenomenal scene that indicating the presence of phenomenal properties, i.e. that something is present in consciousness, requires the presence of the higher-level inferences, but not necessarily of the lower-level ‘root’. I can daydream or close my eyes and continue to experience visual phenomena, although they are indistinct and insubstantial, and I can tell you about what I experienced, and then we can argue over whether or not visual imagery constitutes visual phenomena. However, if all I have is the spatial scene, but I am unable to make any inferences about it, then I cannot report anything about it – reporting presumes context, or cause, or object, and these all require higher-level inferences. Or rather, perhaps I could report, but my reports would be nearly meaningless, not least because objective meaning is tied to subjective meaning, which is what we have removed in this example. My reports would, at best, perhaps with some minimal inferences, allow me to transmit information about the perceptual magnitude of local, ‘low-level’ features. I would then be performing in a psychophysics experiment, and you would probably be using signal detection theory to interpret my responses. Norma Graham noted the strange convenience of this situation more than 20 years ago, writing, “It is (or we can hope it is) as if the simplicity of the experimental situation has made all the higher level stages practically transparent.”
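For that last scenario, a small sketch of the standard signal-detection arithmetic an experimenter would apply to my stripped-down reports; the hit and false-alarm rates below are only example numbers.

from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = Z(hit rate) - Z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# e.g. 80% hits and 20% false alarms give d' of about 1.68
print(d_prime(0.80, 0.20))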

Friday, April 12, 2013

syncope

quick note:

went for the mri tonight; radiologist said everything looks ok, but wait to see what the neurologist says; and i can come get a copy of the pictures next week. mri was interesting, hypnotic, staring at a blank plastic surface inches from your face, keeping absolutely still, listening and feeling these musical, super loud rhythms coming from the machine.

interesting in a different way was what happened first: the nurse tried to put a contrast agent into my blood through a vein in my arm; she failed the first time, sticking the needle into the vein and out through the other side; on the second try, she hit a nerve, and i went into vasovagal syncope.

everything started to tingle, my field of view started to fade, i broke into a sweat, i felt nauseated, and then

then, everything was black, and i didn't know anything or sense anything - yet i had some sort of minimal awareness. i had a vague feeling of waking up from a deep sleep in a place i didn't know. i remember feelings that i associate in some way with sunlight, trees, and mountains. i felt confused.

then, i started to feel my body - i was in a chair, but i couldn't move. why am i in a chair? where am i?

then i started to hear a shuffling sound, loud and abrasive, felt my body being rifled back and forth - the confusion was growing.

then my vision came back - when i asked her later, the nurse said my eyes were open all along, deviated down and leftward - and only then did i remember i was in the MRI clinic. at first i thought, when did i go to sleep? i wasn't sleepy... and then i realized that i must have passed out. everything started to come back.

the nurse was calling for the doctor and others to come, and struggling to put a blood pressure meter on my arm. the sounds were all muffled for about 30 seconds or so, as though i had earplugs in. at the same time, there was intense tinnitus.

after a few minutes i felt normal again. the shuffling noise, i think, was blood rushing back into my ears, and maybe also the agitated movements of the nurse. i was soaked with sweat. they gave me a can of juice and an oxygen tube in my nose. i didn't notice anything interesting about the oxygen. i talked with the amused radiology resident, Amad, and we decided not to do the 'GAT', the contrast, unless the scan turned up something worrying, which it didn't.

this was the first time i've ever passed out, but i often get woozy from needles, getting blood drawn etc, and i stopped giving blood in college because each time the wooziness got worse - the last time i couldn't walk out of the clinic, had to lie down for 20 minutes. point is, this wasn't important, just weird.

so, the quick note is: order of losing consciousness - all at once. order of regaining consciousness - awareness of self, body, hearing, and vision. glad i've been reading those tononi papers - i would estimate my phi went something like this:

Thursday, November 15, 2012

stack puzzle

Okay, I’ve been wondering for a while whether or not something is a valid question – a good question or a bad question. It is related to a few entries I’ve written here in the past year (esp. this and this), and to a paper that I’m about to get ready for submission.

The question: are the percepts contributed by different layers or modules of visual processing perceived as embedded within one another, or as layered in front of or behind one another?

Such percepts could include brightness, location and sharpness of an edge, its color, its boundary association; color and shape and texture of a face, its identity, its emotional valence, its association with concurrent speech sounds; scale of a texture, its orientation, its angle relative to the frontal plane, its stereoscopic properties.

All of these, and more, are separately computed properties of images as they are perceived, separate in that they are computed by different bits of neural machinery at different parts of the visual system hierarchy. Yet they are all seen together, simultaneously, and the presence of one implies another. That is, to see an edge implies that it must have some contrast, some color, some orientation, some blur; but this implication is not trivial: a mechanism that senses an edge does not need to signal contrast or color or orientation or scale; the decoder could simply interpret the responses of the mechanism as saying ‘there is an edge here’. To decode the orientation of an edge requires that many such mechanisms exist, each preferring a different orientation, and that some subsequent mechanism exists which can discriminate the responses of one from another; i.e. the fact that the two properties are both discriminable (edge or no edge; orientation) means that there must be a hierarchy, or that there must be different mechanisms.
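A toy sketch of that last point, in Python: a single mechanism is enough to signal ‘there is an edge here’, but reading out its orientation takes a bank of mechanisms with different preferred orientations plus a later stage that compares their responses. The tuning function and the population-vector readout below are illustrative choices, not a model of the actual circuitry.

import numpy as np

prefs = np.linspace(0, np.pi, 8, endpoint=False)   # preferred orientations of the bank

def responses(theta, kappa=2.0):
    """Responses of orientation-tuned mechanisms (180-degree-periodic tuning)."""
    return np.exp(kappa * (np.cos(2 * (theta - prefs)) - 1))

def decode_orientation(r):
    """A later stage: population-vector readout in doubled-angle space."""
    z = np.sum(r * np.exp(2j * prefs))
    return (np.angle(z) / 2) % np.pi

theta_true = np.deg2rad(30)
print(np.rad2deg(decode_orientation(responses(theta_true))))   # approximately 30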

So, whenever something is seen, the seeing of the thing is the encoding of the thing by many, many different mechanisms, each of which has a special place in the visual system, a devoted job – discriminate orientation, discriminate luminance gradients, discriminate direction of motion, or color, etc.

So, although we know empirically and logically that there must be different mechanisms encoding these different properties, there is no direct perceptual evidence for such differences: the experience is simultaneous and whole. In other words, the different properties are bound together; this is the famous binding problem, and it is the fundamental problem of the study of perception, and of all study of subjective psychology or conscious experience.

This brings us to the question, reworded: how is the simultaneity arranged? From here, it is necessary to adopt a frame of reference to continue discussion, so I will adopt a spatial frame of reference, which I am sure is a severe error, and which is at the root of my attempts so far to understand this problem; it will be necessary to rework what comes below from different points of view, using different framing metaphors.

Say that the arrangement of the simultaneous elements of visual experience is analogous to a spatial arrangement. This is natural if we think of the visual system as a branching series of layers. As far as subjective experience goes, are ‘higher’ layers in front of or behind the ‘lower’ layers? Are they above or below? Do they interlock like... it is hard to think of a metaphor here. When do layers, as such, interlock so that they form a single variegated layer? D* suggested color printing as something similar, though this doesn’t quite satisfy me. I imagine a jigsaw puzzle where the solution is a solid block, and where every layer has the same extent as the solution but is mostly empty space. D* also mentioned layers of transparencies where on each layer a portion of the final image – which perhaps occludes lower parts – is printed; like the pages in the encyclopedia entry on the human body, where the skin, muscles, organs, and bones were printed on separate sheets.

But after some thought, I don't think these can work. An image as a metaphor for the perceptual image? A useful metaphor would have some explanatory degrees of freedom: one set of things that can be understood in one way, used to understand something different in a similar way. Where do we get by trying to understand one type of image as another type of image? Not very far, I think. The visual field is a sort of tensor: at every point in the field, multiple things are true at the same time; they are combined according to deterministic rules, and a unitary percept results. Trying to understand this problem in terms of a simpler type of image seems doomed to fail.
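One crude way to write down the ‘tensor’ remark, purely as a sketch: treat the visual field as a height-by-width-by-channel array, with one channel per bound property, so that everything true at a point can be read out together. The channel names below are illustrative, not a claim about how the brain actually organizes these properties.

import numpy as np

H, W = 128, 128
channels = ["luminance", "contrast", "orientation", "motion_dx", "motion_dy",
            "depth", "identity_code"]
visual_field = np.zeros((H, W, len(channels)))

# everything that is simultaneously true at one point of the field:
point = dict(zip(channels, visual_field[64, 64]))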

So, whether or not there is a convenient metaphor, I think the idea of the question should be clear: how are the different components of the percept simultaneously present? A prominent part of psychophysics studies how different components interact (color and luminance contrast, or motion and orientation), but my understanding is that, for the most part, the different components are independently encoded; i.e. nothing really affects the perceived orientation of an edge, except perhaps the orientations of other proximal (in space or time) edges.

Masking, i.e. making one thing harder to see by laying another thing in proximity to it, is also usually within-layer, i.e. motion-to-motion, or contrast-to-contrast. Here I am revealing that my thinking is still stuck in the lowest levels: color, motion, contrast, and orientation are all encoded together, in overlapping ensembles. So it may well be that a single mechanism can encode a feature with multiple perceptual elements.

Anyways, the reason I have been wondering about these things lately is this study, in which I had subjects judge the contrast of photographic images and related those judgments to the contrasts of individual scales within the images. This is related to the bigger question because there is no obvious reason why the perceived contrast of a complex, broadband image should correspond to the perceived contrast of a simple spatial pattern like a narrowband wavelet of one type or another. This is where we converge with what I wrote a few months ago: the idea of doing psychophysics with simple stimuli is that a subject’s judgments can be correlated with the physical properties of the stimuli, which can be completely described because they are simple. When the stimuli are complex and natural, there is a hierarchy of physical properties which the visual system, with its own hierarchy, is specifically designed to analyze. Simple stimuli target components of this system; complex stimuli activate the entire thing.
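For concreteness, here is a rough sketch in Python of the kind of scale-by-scale analysis I mean: take the contrast of each scale of an image to be the RMS of an octave-spaced band-pass (difference-of-Gaussians) layer of the mean-normalized luminance image. This is just one common way to define per-scale contrast, not necessarily the definition used in the study.

import numpy as np
from scipy.ndimage import gaussian_filter

def band_contrasts(luminance, n_bands=5, sigma0=1.0):
    """RMS contrast within octave-spaced difference-of-Gaussians bands."""
    img = luminance / luminance.mean()          # normalize by mean luminance
    contrasts = []
    for k in range(n_bands):
        lo, hi = sigma0 * 2 ** k, sigma0 * 2 ** (k + 1)
        band = gaussian_filter(img, lo) - gaussian_filter(img, hi)
        contrasts.append(band.std())
    return contrasts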

It is possible that when I ask you to identify the contrast – the luminance amplitude – of a Gabor patch, you are able to do so by looking, from your behavioral perch, at the response amplitude of a small number of neural mechanisms which are themselves stimulated directly by luminance gradients – exactly what I am controlling by controlling the contrast of the Gabor. This is not only possible; it is the standard assumption in most contrast psychophysics (though I suspect that the Perceptual Template people have fuzzier ideas than this; I am not yet clear on their thinking – is the noisiness of a response also part of apparent magnitude?).
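To be explicit about what ‘controlling the contrast of the Gabor’ means, a minimal sketch: the Gabor is a windowed luminance grating, and the contrast parameter below is its Michelson contrast about the mean luminance. The particular parameter values are arbitrary.

import numpy as np

def gabor(size=256, contrast=0.5, cycles=8, sigma_frac=0.15, theta=0.0,
          mean_lum=0.5):
    """Gabor patch as a luminance image; 'contrast' is Michelson contrast."""
    x = np.linspace(-0.5, 0.5, size)
    X, Y = np.meshgrid(x, x)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    carrier = np.cos(2 * np.pi * cycles * Xr)
    envelope = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma_frac ** 2))
    return mean_lum * (1 + contrast * carrier * envelope)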

It is also possible that when I ask you to identify the contrast of a complex image, like a typical sort of image you look at every day (outside of spatial vision experiments), you are able to respond by doing the same thing: you pool together the responses of lots of neural mechanisms whose responses are determined by the amplitude of luminance gradients of matched shape. This is the assumption I set out to test in my experiment, that contrast is more or less the same, perceptually, whatever the stimulus is.
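If forced to write that pooling assumption down, it might look something like the sketch below: the judged contrast of a complex image is some pooled summary of the responses of many contrast-tuned mechanisms. Minkowski pooling is a conventional placeholder for ‘some pooled summary’; the exponent is arbitrary.

import numpy as np

def pooled_contrast_judgment(mechanism_responses, beta=3.0):
    """Minkowski pooling: (sum over mechanisms of |r|^beta)^(1/beta)."""
    r = np.abs(np.asarray(mechanism_responses, dtype=float))
    return (r ** beta).sum() ** (1.0 / beta)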

But this does not need to be so. This assumption means that in judging the contrast of the complex image, you are able to ignore the responses of all the other mechanisms that are being stimulated by the image: mechanisms that respond to edges, texture gradients, trees, buildings, depth, occlusions, etc. Why should you be able to do this? Do these other responses not get in the way of ‘seeing’ those more basic responses? We know that responses later in the visual hierarchy are not so sensitive to the strength of a stimulus; rather, they are sensitive to its spatial configuration. If you vary how well the configuration fits the neuron’s preferences, you will vary its response, but if you vary the stimulus contrast you will, across some threshold, simply turn the neuron on or off.
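That on/off character can be captured, as a hedged illustration only, by a steep saturating contrast-response function of the Naka-Rushton form: the response is nearly zero below the semi-saturation contrast and nearly saturated above it, while the overall gain is free to depend on how well the configuration fits. The parameter values below are illustrative.

import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.05, n=4.0):
    """Contrast-response function: R(c) = r_max * c^n / (c^n + c50^n)."""
    c = np.asarray(c, dtype=float)
    return r_max * c ** n / (c ** n + c50 ** n)

# below, at, and above the semi-saturation contrast:
print(naka_rushton([0.01, 0.05, 0.20]))   # roughly 0, half-max, near saturation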

I don’t have a solution; the question is not answered by my experiment. I don’t doubt that you can see the luminance contrast of the elements in a complex scene, but I am not convinced that what you think is the contrast is entirely the contrast. In fact, we know for certain that it is not, because we have a plethora of lightness/brightness illusions.

No progress here, and I'm still not sure of the quality of the question. But maybe this way of thinking can make for an interesting pitch at the outset of the paper's introduction.