
Friday, May 24, 2013

Binding Problem

My thinking is still evolving as I read all this NCC stuff, but when I test myself and my thinking, I find that I produce something very similar to what I have produced several times in the past year or so (also under the Vision tag):


My view has been that the phenomenal visual scene can be likened to a stack of qualia or phenomenal properties, all simultaneously experienced or bound together in such a way that it is often difficult to see the bound parts as distinct from one another, although they are distinguishable in principle. The root of this stack is the set of phenomenal properties that I believe are most often identified with ‘qualia’, i.e. properties that have scalar magnitudes or intensities. Brightness and darkness, color, contrast, and then at a slightly higher order, orientation, scale, direction, speed. These are familiar as physical objects of study either in the psychophysical field of spatial vision, or as determinants of sensitivity in the neurophysiology of the first few synapses of the initial retinocortical pathway for visual encoding. But they are not the only phenomenal properties of visual scenes, and in fact they are not the properties of scenes that we spend most of our ordinary visual time analyzing. Instead, we spend most of our visual effort attending to more fuzzily inferred properties of the scene: identities, utilities, depths, valences, affordances. These are the properties of a scene that are immediately apparent to us, but they are the ones that require the most inference: the shape and meaning of a word; not so much its contrast or color, which we can easily adapt to and forget, although they remain in our phenomenal consciousness. I am reminded of what Foucault said regarding the multiple layers of a calligram: “As a sign, the letter permits us to fix words; as line, it lets us give shape to things.” All these things are simultaneously present and part of the seen scene, but we tend to attend selectively to certain levels.

I think it is clear from this conception of the phenomenal scene that indicating the presence of phenomenal properties, i.e. that something is present in consciousness, requires the presence of the higher level inferences, but not necessarily of the lower level ‘root’. I can daydream or close my eyes and continue to experience visual phenomena, although they are indistinct and insubstantial, and I can tell you about what I experienced, and then we can argue over whether or not visual imagery constitutes visual phenomena. However, if all I have is the spatial scene, but I am unable to make any inferences about it, then I cannot report anything about it – reporting presumes context, or cause, or object, and these all require higher level inferences. Or rather, perhaps I could report, but my reports would be nearly meaningless, not least because objective meaning is tied to subjective meaning, which is what we have removed in this example. My reports would, at best, maybe with some minimal inferences, allow me to transmit information about the perceptual magnitude of local, ‘low-level’ features. I would then be performing in a psychophysics experiment, and you would probably be using signal detection theory to interpret my responses. Norma Graham pointed out the strange convenience of this situation more than 20 years ago, when she noted, “It is (or we can hope it is) as if the simplicity of the experimental situation has made all the higher level stages practically transparent.”
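(To make the signal detection point concrete, here is a minimal sketch - nothing from the experiment itself, just textbook equal-variance SDT with invented trial counts - of how an experimenter would turn those stripped-down, nearly meaningless reports into a sensitivity estimate.)

```python
import numpy as np
from scipy.stats import norm

# Hypothetical yes/no detection data; the counts are invented for illustration.
n_signal, n_noise = 200, 200      # trials with / without the low-level feature
hits, false_alarms = 142, 38      # "yes" responses in each condition

# Equal-variance Gaussian SDT: d' = z(hit rate) - z(false alarm rate).
hit_rate = hits / n_signal
fa_rate = false_alarms / n_noise
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

print(f"hit rate = {hit_rate:.2f}, false alarm rate = {fa_rate:.2f}")
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```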

Saturday, April 27, 2013

midnight aura

Olive the Cat wakes me up every morning sometime between 2 and 5, and I go put food in her bowl. This morning, at 2:30, I'm awakened by her scratching at the baseboard, and as I wake up I think I see a fortification spectrum... it turns out I'm about halfway through an aura. Left visual field, about 15 minutes in (noting for reference that the pain is on the right side, supraorbital nerve). I debated turning on the computer so I could record the remainder, but I think it was too far along to be worthwhile. The spectrum extended from the fovea straight left, then arced downward. I watched it for a while - I'm still impressed at how straight it is at that point; I wonder if the CSD (cortical spreading depression) wave somehow gets caught up in the base of the calcarine sulcus.

The scotoma seemed very small, even when the wave was well into the periphery the blind region didn't seem thicker than the scintillations. The scintillations were very clear, whereas usually I don't see them very clearly - maybe because I was dark adapted the whole time, or my brain was in a sleepytime state, or maybe it was just a random thing. I did notice that closing my eyes, even though it didn't change the apparent luminance of the scene very much, made the phosphenes completely disappear for several seconds, and they would fade back into view only weakly, slowly. I couldn't go back to sleep for ~45 minutes. Ears were almost ringing, headache started. Minor, 5/10.

Yesterday, and maybe Thursday, several times, I noticed flashes, spots, in my periphery, and thought, 'something is up'. Yesterday afternoon, I'm sitting at my computer, reading text near the lower bezel, and I feel like I see a phosphene or blind spot just below fixation, where the aura usually starts - it lasts ~10 seconds and disappears. Maybe that represented a false start? The cortex is weakly susceptible, and maybe there are false starts, and then it kicks off - or doesn't. Also, after the syncope episode, I've started to wonder if the tinnitus I get now and then is, at least sometimes, an aura - I had an episode yesterday.

I was dreaming about something as I woke up, and it seemed relevant somehow, but of course I've completely forgotten it now.

Tuesday, April 09, 2013

visual cortex is weird

Migraine weirdness:
1. The weekend after the AF (the amaurosis fugax episode, below), my neck was constantly sore in a weird way. No sign of headache. The soreness disappeared yesterday (Monday).
2. Monday saw waves of photophobia, but it never lasted more than twenty minutes or so, or it was low-level enough that I could adapt to it, and wouldn't notice until the ambient light changed.
3. Today, a bit of photophobia and a faint headache, slightly nauseated. Am I just hyper-sensitive? Also, this morning when I awoke, I saw the m-scaled lattice that I've mentioned before; it flashes on for just a few hundred milliseconds, and fades as the morning-light bedroom scene comes into view. I would say it looks most like Form Constant III as described by Bressloff et al. (no, I am not on any drugs, though Bressloff prefers to refer to drug-induced hallucinations):

(This image is from Bressloff et al.'s 2002 Neural Computation paper. Note the coincidental opposite symmetry between this kind of m-scaled 'spiral' pattern and the ancestor map in the previous post, which can be seen as another kind of spiral lattice - something like what you'd get if you plotted a flat lattice in the visual field and looked at it in cortical space.)
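For what it's worth, here is a minimal sketch of the kind of mapping I mean: a flat lattice of points in the visual field pushed through a standard monopole log map. The parameters k and a below are placeholders, not fitted to anyone's cortex; the point is just that a flat lattice in the field turns into a curved, spiral-ish lattice in cortical coordinates, and vice versa.

```python
import numpy as np
import matplotlib.pyplot as plt

# Monopole approximation of the retino-cortical map: w = k * log(z + a),
# where z = eccentricity * exp(i * polar angle) in degrees of visual field.
# k (mm) and a (deg) are placeholder values, not fitted parameters.
k, a = 15.0, 0.7

# A flat square lattice of points covering part of the right visual field (deg).
x, y = np.meshgrid(np.linspace(0.5, 20, 20), np.linspace(-15, 15, 20))
z = x + 1j * y
w = k * np.log(z + a)            # cortical coordinates in mm (complex)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.plot(x, y, 'k.', ms=2)
ax1.set_title('flat lattice, visual field (deg)')
ax2.plot(w.real, w.imag, 'k.', ms=2)
ax2.set_title('same lattice, cortical coordinates (mm)')
plt.tight_layout()
plt.show()
```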

Also, to reiterate an observation I made in the AF post, in light of reading all this stuff about integrated information and consciousness over the past few days: migraine scotomata are invisible, unlike the disturbingly visible grayness I saw when my left retina stopped working. The normal explanation for the invisibility of cortical scotomata is that they are "filled in", which I've always felt was fishy... I know it's well-studied, and now I'll have to read about it.

My feeling is that there is no filling in, at least not in the way it's usually talked about, but rather that the scotoma is a scotoma in visual space period - if you don't see the space, you don't see any blankness, and you see the scene continue directly from one side to the other, not knowing any better. Maybe it's hard to justify this intuition, but I think it's similar to noting (as hemianopia patients do) that beyond the edges of the visual field, there's not an expanse of nothingness, but rather no expanse at all. If there is no expanse, there is no edge, so you get the strange condition of not being able to see the boundaries of your own visual field, because the boundary would have to be defined as between two expanses. With a proper mapping between visual direction and field location, you can be properly aware of the geometry of the visible field, without any need for it to be bounded. (Put another way, topologically, the space behind my head is equivalent to a hole in the visual field - if I can't perceive that space as being bounded by the same boundary as the visible field, why should I be able to see the boundaries, and the invisible expanse, of a cortical scotoma?)

Thursday, April 04, 2013

amaurosis fugax

Yesterday, about 5 o'clock, sitting at my desk. I go to stretch, arms behind me, pulling on my shoulders, and start to grey out (I don't know a better term for this - whatever you call the visual consequence of ocular hypotension) just a bit - which is normal for me when I stretch after having sat still for too long - but instead of resolving, the greyout continues. My field of view starts to fade in blotches, but I can still see - I realize that it's just in one eye. I close one, then the other, and now I know that the view from my left eye is fading.

I jump up and run to Eli's office, and by this time, my left eye view is almost completely blank, except for a space around the fovea, maybe 5° wide and 2° high. This makes sense - the foveal blood supply comes from the choroid, not the apparently blocked central retinal artery. The blankness is plainly visible as a flat gray. This is different from the scotoma of the migraine aura, which is as visible as the space behind my head. The boundary between the visible center and the blankness is shimmering, flickering, like the smoldering edge of slowly burning paper. Eli gets his ophthalmoscope to try and see what's happening, and the superior field starts to fade back into view.

It's then stable for about a minute, the inferior field is blank gray, and there's a smoldering horizontal boundary between the superior and inferior fields. I see some strange parafoveal phosphenes, like super-high contrast arcs. Eli is shining a light in my eye, and I'm shocked to realize that this bright light is totally failing to punch through the grayness. I wave my hand in the scotoma and though I can't see it, I feel like I can sense the motion.

The inferior nasal field returns, very subtly, so that I just realize it's back without noticing much about how it returns. It's patchy but quick - then the inferior temporal field returns. After this point, I can't find any other blind areas; everything has returned. In fact, I can't find any obvious differences between the two eyes, though at this time Eli is urging me to go to the ER to get examined. My heart is pounding and my head is starting to hurt. For the next 10-15 minutes, as I'm on my way to the hospital, I can see my pulse with the left eye, but then no more, and everything is back to normal.

Eli gets me in to see someone at MEEI, and I'm examined by an ophthalmology resident. The doctor pronounces this a case of ocular migraine, which as far as I understand means "we don't know, but everything looks ok".

Typical Wednesday afternoon. Hey, April is here!

**edit @ 16:21**

While I don't like the 'migraine' label, I guess I can't deny that there might be something to it. The proximal cause of a migraine is unknown, though it's definitely associated with cortical spreading depression (CSD), the physiological correlate of the migraine aura; the current consensus seems to be that the CSD produces substances that inflame tissues in the brain, which is then perceived as pain - which fits with the experience of the headache beginning partway through the aura.

But what causes the CSD? One sure way to cause it is to deprive an area of cortex of blood - stroke causes CSD even in areas of cortex that still have blood supply but happen to be near the ischemic areas. So it could be that the aura/CSD is caused by a very local, transient ischemia. The ischemia can't be very large or long-lived, because there don't tend to be other symptoms accompanying the well-described auras.

I would be happy if I could confirm that this episode is somehow related to the migraines, i.e. that I experienced a spasm of the ophthalmic artery of the same sort that I usually experience on a much smaller scale, and deeper in my brain, immediately preceding a migraine aura. I also do not feel much affection for this experience, in contrast to the fascinating auras - I hope this does not happen repeatedly, because I don't think it can be good for the retina to be periodically starved of oxygen.

Thursday, March 28, 2013

boiling

1. Standing by the stove the other night, waiting for a kettle of water to boil - as it does, starting from the near-silent rattle through the increasing racket, and the whistle starting, and all the other noises that accompany that moment, I had the distinct feeling that I was hearing the big chromatic crescendo at the end of Prokofiev's great D-minor Toccata, one of my favorite piano pieces. It's not that I was fooled - this was not quite an auditory deja trompé, but something similar - and yet I've never felt a piece of familiar music so strongly evoked by some random physical event. It was definitely primed by having listened to that piece something like 10 times in the past week. Now whenever I hear that piece, when it gets to the silence at the end before the crescendo, I will think of a boiling kettle.

2. Horrible problems with the paper I've been working on and hoping to have submitted in a matter of days. A big part of the paper - the way that I interpret the data, basically - is a set of relatively simple contrast perception models which I run through the experiment as tests of different hypotheses. I had calibrated these to a set of human thresholds, which I was never quite comfortable with for various reasons, but that's the way I had done it; as a final touch to a figure, I decide to go and generate threshold estimates for the 'best' model, to plot against the human data, just to show how similar they are, and when I go to do this, the model starts giving me imaginary numbers, which is bad.

By the time I figured out what was wrong - it wasn't really a problem, I was just not using my code properly - I had decided that calibrating the model to the thresholds for my humans was probably a bad idea, because the way I measured the human thresholds was kind of weird, and I couldn't be sure of simulating them properly, so I should just use some standard thresholds. Why not? Nobody is going to argue with a standard CSF. So I plug a standard in and - and I'm going to note here that every time I do something with this model, I have to go and recompute the simulations, which takes hours - and it all goes haywire. The model that 'works', and that's consistent with all these nice facts that I've lined up and made a nice case out of, still works, but depending on how I implement the change in sensitivity, the alternatives either perform horribly - which you'd think is okay, but really doesn't look plausible, just makes it look like I haven't given them a fair chance - or they come out reasonably similar to the favored model.

So, I have to be fair, at the same time that I don't want everything to fall apart. I am certain that things work the way I think they do, and I'm prepared to be wrong, but if I'm wrong then I don't understand how I'm wrong. And building evidence either way progresses in these multi-hour steps in between which I'm sitting here with a stomach ache because I'm afraid that I'm going to wind up with evidence that my experiment isn't actually that good at discriminating these different models.

The problem seems to be in the low-frequency filters; the lowest frequency filter is basically four points in the Fourier domain, and it happens to take up a disproportionately huge amount of image contrast, so the 'not working' models tend to be uniformly low-pass in the simulation, which I know is not fair, because it's all because of that low frequency channel (a stripped-down sketch of this sort of channel bank is at the end of this post). So I figured that, since these are 'sustained' stimuli, I would be justified in just taking out the lowest few channels and leaving the top 5 or 6 band-pass channels - one thing here being that I'm not willing to go back and redesign everything to the point where we have low-frequency DC-sensitive channels. But then when I just have the mid-to-high frequency channels, the three models are too similar, which I don't like either, and which I know is just because I'm no longer letting the low s.f. through. And I also know that this version, even though it has the 'standard' CSF, doesn't really have it, because the lowest channels are shut off. So I turned them back on and changed the gain to the CSF, which I realized I had wrong the first time because....

Anyways, you see what I'm doing - changing more than one thing at a time, and making mistakes because I'm rushing it. This just prolongs everything, because every change, or every attempt to figure out what the effects of a change are, and every mistake, takes many hours to evaluate.

Anyways, high irritation and anxiety.
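For my own reference, here is a stripped-down sketch of the sort of channel decomposition I keep re-running - not the actual model code; the filter spacing, bandwidth, and the 1/f stand-in image are all placeholders. It just shows why the lowest channel, which covers only a handful of frequency samples, can still soak up a disproportionate share of the contrast of a 1/f-ish image.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# A synthetic 1/f "natural image" stand-in (random phase, 1/f amplitude).
fx = np.fft.fftfreq(N)[:, None]
fy = np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx**2 + fy**2)
amp = np.zeros_like(f)
amp[f > 0] = 1.0 / f[f > 0]
img = np.real(np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((N, N)))))

# Log-spaced band-pass channels, defined as log-Gaussians in radial frequency.
centers = 2.0 ** np.arange(1, 8) / N    # cycles/pixel; placeholder spacing
bw = 0.5                                # log-frequency bandwidth; placeholder

F = np.fft.fft2(img)
for fc in centers:
    filt = np.exp(-np.log((f + 1e-9) / fc) ** 2 / (2 * bw ** 2))
    band = np.real(np.fft.ifft2(F * filt))
    # "channel contrast" here is just the RMS of the band-limited image, and
    # n_samples counts how many frequency samples the filter really covers.
    print(f"fc = {fc:.4f} c/px   n_samples ~ {int((filt > 0.5).sum())}"
          f"   rms = {band.std():.4f}")
```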

Sunday, January 27, 2013

TKD CSD

So, 4 days ago I dreamed of seeing a scintillating scotoma, and then became vaguely paranoid of how inconvenient it would be to have one on Sunday morning before my black belt test; then, today, after I'm home from the test and from grocery shopping, I for some reason start a conversation with j* about how migraines can be associated with relaxation after stress, and tell her the story of VanValkenburgh's Vacation; and this afternoon, coming in with the groceries, I get distracted for a little while by a foveal afterimage, probably from the sun glinting off a car windshield - but maybe it was something else. Yesterday, or a couple of days ago, I was distracted for maybe a minute by the noisiness of my visual field, having momentarily noticed how snowy everything looked. I think that was Friday...

So finally, about 9pm tonight, in the kitchen about to explain to j* why my shoulder hurts (because I briefly dislocated it by swinging my arm wildly at an odd angle towards an 18-year old), I realize that my foveal vision feels scotoma-like (if there is a word for 'feels like a scotoma', I don't know what it is. Specific feels, like 'rough', or 'bright', or 'salty', have specific, ancient, singular morphemes attached to them, so it's hard to invent a new word for a new feeling, or for one that is rare or obscure enough that it hasn't been named). I pick up a knife and start trying to use the sharp tip, at arm's length, to find a blind spot, but can't find it; the start is always odd, since I think the scotoma is very small and maybe discontinuous, and maybe even not binocular. For whatever reason, I can tell that it's there, but it's often hard to find. Then, as I've mentioned many times here, it seems to disappear; then it reappears.

This one was right field; the headache is very slight, I felt it start midway through the aura, as a little jolt of pain, then it disappeared. I have to shake my head to feel it; it possibly would be a little worse if I hadn't taken an ibuprofen soon after the scotoma was over, though I took it for my shoulder...

So I got another recording with scotmap, which I think is good, but all my code from last summer is written implicitly for a left-field scotoma, and my code is complicated and uncommented, so it will take me a little while to straighten it out and make another good animation to post up here. I do have the data transformed and fitted to the wave model I came up with, and the result is very similar to the last measurement: exactly 3mm/min, starting a few millimeters on the V2 side of the inferior V1/V2 border, 10 or so millimeters from the foveal confluence. This is consistent with my feeling that there is a scotoma, and yet being unable to see it directly; the scotoma begins in V2.

There's something weird at the end, a bunch of data at a much smaller eccentricity; this may be an error, I don't think it's the 'rough spot' that I have mentioned before. Will work it out this week.
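For the record, the fit itself is nothing fancy; something like the sketch below, with the scotoma's leading edge converted from visual-field eccentricity to cortical distance through a log map and regressed against time. The (time, eccentricity) samples and the map parameters here are invented placeholders, not my actual data or fitted values.

```python
import numpy as np

# Invented (time, eccentricity) samples for the scotoma's leading edge:
# minutes since onset, degrees of eccentricity along one meridian.
t_min = np.array([2, 5, 8, 11, 14, 17, 20], dtype=float)
ecc_deg = np.array([0.8, 1.6, 3.0, 5.2, 8.5, 13.0, 19.0])

# Monopole log map with placeholder parameters (k in mm, a in deg).
k, a = 15.0, 0.7
d_mm = k * np.log((ecc_deg + a) / a)   # cortical distance from the foveal confluence

# Linear fit of cortical distance against time gives the wave speed in mm/min.
speed, intercept = np.polyfit(t_min, d_mm, 1)
print(f"estimated wave speed ~ {speed:.2f} mm/min, offset {intercept:.1f} mm")
```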

Meanwhile, here's a general migraine data plot, relating my estimates of headache intensity on a 10-point scale (notice what a fortunate migraineur I am) to the time elapsed since the previous headache. Most of these ratings are retrospective, based on these journal entries. I see a relationship: more frequent means more intense. The outlier at zero is the night in China last month, where there was no headache at all, which I attributed to the alcohol intake coinciding with the aura.
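(The plot itself is nothing more elaborate than this sort of thing - the arrays below are dummies, not my actual ratings:)

```python
import numpy as np
import matplotlib.pyplot as plt

# Dummy data standing in for the journal-derived ratings.
days_since_last = np.array([10, 20, 35, 50, 70, 90, 120, 150])
intensity = np.array([6, 5, 5, 4, 3, 3, 2, 2])   # 10-point scale

r = np.corrcoef(days_since_last, intensity)[0, 1]
plt.scatter(days_since_last, intensity)
plt.xlabel('days since previous headache')
plt.ylabel('headache intensity (0-10)')
plt.title(f'r = {r:.2f}')
plt.show()
```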


In other news, I have failed to make progress this weekend on E*'s presentation.

Also, saw a nice show at the BSO last night: Hindemith, Liszt, and Prokofiev.

Thursday, December 27, 2012

random observations:

(rambling chinese vacation edition):

1. due to jet lag, woke up at about 5:30am yesterday, lay in bed for ~1.5hrs. of course a thousand random thoughts ran through my head, but for a while i lay there watching the augenlicht. long, long ago i noticed how it cycles: against the dark, reddish-black background, a brighter cloud coalesces around the fovea, then fades, then coalesces again. the cycle is somewhere between 5-10 seconds, the cloud is a very low-frequency modulation (maybe ~5degrees across) of the high-frequency noise grain.

what i noticed yesterday was that as the cloud fades, some parts of it seem to 'stick'; this is hard to describe. imagine that the cloud was displayed on a screen, and that its brightest parts, around the peak, were 'clipped'; then, as the cloud fades, the clipped parts persist, then brighten noticeably, then dissipate as the cycle continues. the impression is similar to a very bright afterimage floating in front of a fixated object, except that my eyes were closed, and i was certainly dark adapted. the clipped portions are sharp-edged, small (half or a quarter degree across), with the spatial appearance of little interconnected droplets of a liquid. i wasn't able to tell if they had the same structure on each cycle, but it seemed that they did.

i cannot guess meaningfully what this is. some sort of pattern formation machinery being stimulated by the structure of the cloud cycle, which has a slower decay constant? it seems familiar, so i might have noticed it at some other time in the past when i found myself lying in bed, unable to go to sleep. when i was in college, that happened a lot, because i would have classes in the morning and force myself to bed, despite wanting to stay up until 2 or 3, and so i'd lay in bed for hours sometimes, waiting to sleep.

i also noticed that i could very clearly see the 'eye crank lines', especially when looking down, whereas usually i can't see them when my eyes are closed.

2. when we finally got out of bed yesterday morning, discovered it was snowing. it eventually stopped snowing and started raining, so the weather yesterday was miserable. still, we drove down south to visit family. we went to visit j*'s father's older sister, who i'd never met before, in a village in another corner of fanchang; her home was like something out of a fairy tale, not surrounded by garbage and chaos like some of the other villages (which are still nice to visit, don't get me wrong). i had jingping take some pictures. there was a mountain running up directly on the side of the village, with a bamboo forest; spread out away from the mountain and the village was a large expanse of vegetable gardens. we had lunch cooked on a wood stove (and with some electricity). i hope that china is able to keep from totally losing this world as it moves on into the future... all they really need is to find a way to deal with the garbage.

on the way down there, we drove on a new highway which took us through several tunnels beneath the mountains. at some point, to the right, in the distance, maybe a mile or so distant (in the south of wuhu, there are mountains and there are flat plains, and stark, sudden transitions between them), through the snowy, rainy, smoggy haze, i saw a massive building, seemingly in the middle of nowhere. it looked like something in DC; the size of the pentagon, ten or twenty stories tall, sixty stories wide. then, a little further south, a gigantic factory or processing plant, like a refinery or the biggest concrete plant you've ever seen. then, a mountain. i didn't bring the GPS to track where we went this time, but i can probably figure it out from memory. this reminds me of last year, something i never wrote down; on the bus back to shanghai, in the distance i could see a glowing tower, probably a hotel, surrounded by nothing else. it was probably fifty stories high, and surrounded by what looked like a 4th or 5th-tier town. maybe we'll see it again this time, since we're probably taking the same bus back.

also on the wuhu note, i've noticed lots of songbirds here in the subdivision, first time in four winters. maybe whatever drove them away is getting better?

3. dinner at uncle's restaurant. dog meat tastes weird. it was worth a try.

4. still on the roman history kick, been reading Tacitus' history of the 'year of four emperors', on the civil war that commenced with the death of Nero. it really is great reading. in the section on Otho's last stand and suicide, i paused for a while and thought about how all this had happened. i still don't know much about roman history, but i've read livy, so i know something about the beginnings of the republic and how it came to be; and i've read plutarch's lives of marius, sulla, crassus, pompey, and caesar, so i kind of understand how the republic cascaded into the empire.

i thought, the romans had all these lawful institutions for separating power, trading offices more-or-less peacefully and agreeably, avoiding autocracy and civil wars. they kept this up for hundreds of years, but only because to have faltered would have probably meant the end of rome, because there were still so many other powerful players in the vicinity. only after those players - the etruscans, the gauls, carthage - were subjugated, only then could the internal struggles really commence. the rise of the emperors, through the disruptions of marius to caesar, put an end to those struggles by ending all the power sharing. but that meant that once an emperor had failed, the struggles would flare again, and there would be civil war. the situation described - and witnessed first-hand - by Tacitus was the first of several times that this would happen, and it would eventually bring the end of the empire.

so i thought all of that, putting together the pieces that so many others have put together so many times, and then i turned the page, and Tacitus himself begins a digression where he outlines the same reflections on the same reasoning, and again i was impressed at the immediacy of reading the thoughts of a person who lived and died more than 1800 years ago.

5. despite the preceding item on how great Tacitus is, i switched yesterday (at the beginning of the next book of Tacitus, on Vespasian's rebellion) to reading Darwin's 'on expressions of man and animals', or whatever the title is. i've wanted to read this for years, never got around to it until there it was, Free on Ibooks. reading Darwin is great because of the way he makes his thinking so transparent; he explains everything iteratively, first in broad terms, then more and more specific, each time tacking on anecdotes or examples with more and more density. origin of species and the descent of man were written similarly, spiraling down from general statements to specific demonstrations, with examples at every level, but there was less anecdote; here, Darwin is on every page noting a story from some friend or acquaintance, or describing the behavior of his own dogs or farm animals. so, the story is solidly anecdotal, but still convincing, because you can see how he is being led at each stage to a question; if such-and-such is true, we should observe this, and here is an example that we all know, or an anecdote that i'm sure you'll recognize (e.g. how a dog acts when in anticipation of something he likes).

i also like all the talk about "nerve-force". the idea that this nerve-force overflows from the channels of immediate use, into channels of frequent or necessarily convenient use, and only later into less frequently used channels, is important in a lot of his examples. also, his 'principle of antithesis' in explaining some expressions is, i think, an interesting example of something more general than an adaptation aftereffect. for example, the excited dog, when it finds that it will not get what it expects, will look dejected - the 'hot-house face' - with this expression explained as, essentially, the aftereffect of adaptation to an excited manner. i think i will look more into this idea of antithesis in behavior..

Sunday, December 02, 2012

visual phenomena at the two edges of sleep

1. going to sleep last night, and saw that high t.f. flicker, though i didn't have a headache at the time. actually, haven't had one in almost 2 months, i think. woke up this morning feeling like i had a hangover, but no headache per se, so maybe i had a migraine in my sleep? or, it was an overdose on thai food. there were definitely abdominal repercussions.

2. been meaning to write this down: jingping usually gets up before me what with the school and all, and usually when she gets up to leave it's still dark. if she turns on the bedroom light and i'm sufficiently conscious but still with eyes closed (and maybe also if my face is pointing in the right direction), i will see a quick red flash. nothing interesting, right? but the flash has a geometric structure, a hexagonal lattice, like an M-scaled honeycomb. a typical sort of visual field hallucination, but i only started noticing it in recent months.

that is all.

Monday, November 26, 2012

blur or no blur?

Some notes on the aftereffects of a paper revision I just submitted (not coincidentally linked to the rambling at the end of the previous entry):

The big problem I have left over after the last revision of the blur adapt paper is this: does it mean anything? I've wound up half convinced that while I have a good explanation for a complex and strange phenomenon, it may be seen as boiling down just to a measurement, by visual system proxy, of the stimuli themselves. That is, all the stuff about selectivity, slope changing, symmetry of adaptation, etc., might all just be an artifact of the wholly unnatural way of blurring/sharpening images that we've used.

What's left? The method is good. There are also questions about the spatial selectivity of the phenomenon, and, most importantly I think, about its timecourse. If blur adaptation is something real and not just a spandrel interaction between contrast adaptation and strange stimuli, it doesn't make a lot of sense that it would manifest in everyone in the same way unless it did have some sort of perceptual utility. What that utility might be is a good question. Let's make a list:

1. Changes in fixation across depths. Most of the people who do these experiments are young and have good accommodation. Blur is one of the things that helps to drive accommodation, to the point where if everything is working correctly, within a few hundred (fewer?) milliseconds of changing fixation in depth, the image should be focused. So, blur adaptation would not be useful in this situation. Maybe it's useful when you're older, and for this reason it sits there, functional and lying in wait, for the lens to freeze up? Seems implausible, but possible. When you get old, and look at different depths, the sharpness of the image will change, and it would be nice to have some dynamic means of clawing back whatever high s.f. contrasts might still be recoverable in a blurred image.

2. This raises the question of how much can be recovered from a locally blurred image. That is, the slope-change method is basically using an image-sized psf, which is what makes it so weird. Blur doesn't usually occur this way; instead it occurs via a spatially local psf applied to the image, like a Gaussian filter. If an image is Gaussian blurred, how much can it be sharpened? (A sketch contrasting the two kinds of blur follows this list.)

3. Viewing through diffusive media, like gooey corneas or fog or rain, or muddy water. The latter phenomena, if I'm not mistaken, affect contrast at all frequencies, while stuff-in-the-eyes produces optical blur, i.e. more attenuation at high than at low frequencies. It would be nice to know, in detail, what types of blur or contrast reduction (it might be nice to reserve 'blur' for the familiar sense of high s.f. reduction) occur ecologically. We also have dark adaptation, where the image is sampled at a lower rate but is also noisier. The noise is effectively a physical part of the retinal image (photon, photochemical, neural), meaning that it's local like an optical defect and not diffusive like fog. Maybe blur adaptation is mostly good for night vision?

4. Television. CRTs. Maybe we're all adapted, long-term and dynamically, to blurred media. All captured and reproduced media are blurred. CRTs were worse than current technology, resulting in displayed images that were considerably blurrier than the transmitted images, which themselves were blurred on collection and analog transmission. Digital images are blurred on collection, although light field cameras seem to be getting around this, and digital displays are physically much less blurred. Maybe those of us who grew up watching CRT images, and accepting them as a special sort of normal, adapt more than the young people who are growing up with high-resolution LCD images?

5. Texture adaptation, i.e. adaptation to the local slope of the amplitude spectrum, i.e. exactly what is being manipulated in the experiments. This would be fine. Testing it would be a bit different; subjects would need to identify the grain or scale of a texture, something like that. I think that the materials perception people have done things like this. Anyways, this sort of adaptation makes sense. You might look at an object at a distance and barely be able to tell that its surface has a fine-grain texture, so a bit of local adaptation would allow you, after a few seconds, to see those small details. On the other hand, if you get in really close to the object so that the texture is loud and clear, and you can even see the texture of the elements of the larger texture, especially if there's a lot of light and the texture elements are opaque, this is effectively a much sharper texture than what you were seeing before, even within the same visual angle. The 1/f property of natural images is an average characteristic. Locally, images are lumpy in that objects represent discontinuities; textures on surfaces usually have a dominant scale, e.g. print on a page has a scale measured in points, and that will show up as a peak in the amplitude spectrum. So, texture adaptation, where the system wants to represent detail, seems like a plausible function for what we're calling blur adaptation. Maybe the system should work better somehow if images are classed in this way?

6. Parafoveal or 'off-attention' defocus. We almost always fixate things that are sharp, but if the fixated object is small, whatever is behind it will be blurred optically. Similarly, if the fixated object is viewed through an aperture, the aperture itself will be blurred. Whatever adaptation occurs in this situation must be passive, just contrast adaptation, as I can't imagine that there's much utility to the small gain in detail with adaptation to a Gaussian blur.
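Regarding the question in item 2: here is a minimal sketch of the two kinds of blur, just to make the contrast explicit. The image is a synthetic 1/f stand-in, the slope exponent and psf width are placeholders, and scipy's gaussian_filter stands in for a generic local psf.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
N = 256

# 1/f-ish stand-in for a natural image (random phase, 1/f amplitude spectrum).
fx, fy = np.fft.fftfreq(N)[:, None], np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx**2 + fy**2)
amp = np.zeros_like(f)
amp[f > 0] = 1.0 / f[f > 0]
img = np.real(np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((N, N)))))

# (a) Slope-change "blur": steepen the whole amplitude spectrum by f**(-delta).
#     A global, image-sized operation, not a local psf.
delta = 0.5
gain = np.ones_like(f)
gain[f > 0] = f[f > 0] ** -delta
slope_blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

# (b) Ordinary local blur: convolve with a Gaussian psf (sigma in pixels).
gauss_blurred = gaussian_filter(img, sigma=2.0)

for name, im in [('original', img), ('slope-change', slope_blurred),
                 ('gaussian psf', gauss_blurred)]:
    print(f"{name:12s} rms contrast = {im.std():.4f}")
```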

For all of these situations, spatial selectivity makes sense but is not necessary. Even if you're viewing a scene through fog, nearby objects will be less fogged than faraway objects, but it all depends on where you're fixating; other objects at different depths will be more or less fogged. At any rate, foveal or parafoveal adaptation is most important, as peripherally viewed details are, as far as I can understand, subordinate. If the process is spatially localized, as it should be if it is what it seems to be, then global adaptation is just a subset of all possible adaptation configurations. Temporal selectivity is more questionable. If the process is genuine, and not just broadband contrast adaptation (though this raises the question of what the timecourse of contrast adaptation should be), how fast should we expect it to be? If it's mostly used for long-term (minutes) activities (fixating muddy water, looking for fish; other veiling glare situations; gooey eyeball; accommodation failure), maybe it could stand to be slower, with a time constant measured in seconds, or tens of seconds. If it's mostly used for moment-to-moment changes in fixated structure, i.e. texture adaptation or depth (off-attention), it should be fast, with a time constant measured in hundreds of milliseconds.

Actually measuring the temporal properties of the adaptation might therefore help to some degree in understanding what the process is used for.

Sunday, November 18, 2012

finally a post about those eye crank lines

has anyone ever tested basic visual psychophysics as a function of gaze direction? i don't think so. would it be interesting or important to do so? i think so.

1. when i crank my eyes out as far as i can, i see weird phosphene patterns around my foveae (below). nobody has given me a good explanation for what these phosphenes are, except that they are probably produced by some sort of tension or torsion on the optic nerve. this isn't much of an explanation, because the phosphenes are so local and fine that if it was torsion i would expect them to be everywhere. it could be the correct explanation, but then i need an explanation for why they aren't everywhere, or what is special about foveal optic nerve fibers etc etc in their placement in the optic nerve. the sort of thing i guess i could figure out from reading.
whatever the cause of this effect, it means that in the extreme, direction of gaze has an effect on low-level perception, i.e. i am seeing spatial phosphenes - which, really, look like band-pass patterns - and not hallucinating faces or whatever. so, it stands to reason that less extreme directions might also have effects that are more subtle.

anyways, i hope i am not tearing apart my optic nerves by doing this experiment. i try not to do it too often, but it's like thinking about reciting pi. when you think about reciting pi, you have to recite as many digits as you can remember. you can't stop. give me a second.

2. if e.g. contrast sensitivity is entirely determined by retinotopically coordinated visual mechanisms - i.e. retina, LGN, V1, and the other retinotopic visual areas - direction of gaze shouldn't make any difference, because these areas don't know anything about direction of gaze. but visual areas in the parietal cortex do know about direction of gaze - areas like LIP and VIP combine input from the visual system, of such quality that it is used to plan eye movements, with proprioceptive, vestibular, motor, and other inputs.

it's implicit in the theory of psychophysics - the theory that physical stimuli are translatable into perceptual states, which are then behaviorally accessible - that the last stage of vision is motor, since no psychophysics can be done without motor responses. this is one reason why neuroimaging is not psychophysics.

so, if vision interacts with non-visual inputs, and if these same inputs mediate behavioral measurement of visual ability - i.e. psychophysics - then is it reasonable to suppose that direction of gaze should affect basic visual abilities? a good hypothetical mechanism for producing an effect would be the internal noise source. no one should suppose that the noise limiting performance is entirely visual, because this assumes that the rest of the system is deterministic, which it is not. since the rest of the system is not deterministic, the portion of the random variation that is contributed by the parietal cortex might well vary with the tonic motor state of the system; the part of the brain that is guiding or maintaining the motor aspects of the task, and mediating the responses according to the experiment design, might be better adapted or trained in one gaze state than in others (a toy version of this is sketched at the end of this post).

3. visual neglect. i guess this is a higher-level thing, but from what i've heard, it's independent of basic sensitivity; how could this have been confirmed? how can basic testing be carried out with the same quality in the neglect region as in the unaffected region? this sounds like something that's been tried over and over, and that i could go read about. a quick survey of some titles, abstracts, and a couple of the most relevant-sounding papers suggests that when such sensitivity has been measured, it's in the non-neglect areas, but that the researchers are nonetheless looking for a connection. there's a paper where they suggest there's no difference in contrast sensitivity or s.f. discrimination between two groups of stroke patients, some with neglect symptoms, some without; that could mean that even a stroke big enough to cause neglect, while sparing early visual cortex, won't bother basic sensitivity, or that any serious enough stroke will impair sensitivity on basic tasks. hm...
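to make the noise idea in item 2 a bit more concrete, here's a toy calculation - all the numbers are made up - showing how a late, gaze-dependent noise source would raise a measured contrast threshold even with identical early, retinotopic encoding.

```python
import numpy as np

def contrast_threshold(early_sd, late_sd, target_dprime=1.0):
    # Independent Gaussian noise sources add in quadrature; the contrast needed
    # to reach a fixed d' scales with the total noise SD.
    return target_dprime * np.sqrt(early_sd**2 + late_sd**2)

early = 1.0   # "retinotopic" noise, assumed identical across gaze states
for gaze, late in [('straight ahead', 0.3), ('eyes cranked hard', 0.9)]:
    print(f"{gaze:18s} threshold ~ {contrast_threshold(early, late):.2f} (arb. units)")
```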

Thursday, November 15, 2012

stack puzzle

Okay, I’ve been wondering for a while whether or not something is a valid question – a good question or a bad question. It is related to a few entries I’ve written here in the past year (esp. this and this), and to a paper that I’m about to get ready for submission.

The question: are the percepts contributed by different layers or modules of visual processing perceived as embedded within one another, or as layered in front of or behind one another?

Such percepts could include brightness, location and sharpness of an edge, its color, its boundary association; color and shape and texture of a face, its identity, its emotional valence, its association with concurrent speech sounds; scale of a texture, its orientation, its angle relative to the frontal plane, its stereoscopic properties.

All of these, and more, are separately computed properties of images as they are perceived, separate in that they are computed by different bits of neural machinery at different parts of the visual system hierarchy. Yet, they are all seen together, simultaneously, and the presence of one implies the others. That is, to see an edge implies that it must have some contrast, some color, some orientation, some blur; but this implication is not trivial. A mechanism that senses an edge does not need to signal contrast or color or orientation or scale; the decoder could simply interpret the responses of the mechanism as saying ‘there is an edge here’. To decode the orientation of an edge requires that many such mechanisms exist, each preferring different orientations, and that some subsequent mechanism exists which can discriminate the responses of one from another, i.e. the fact that the two properties are both discriminable (edge or no; orientation) means that there must be a hierarchy, or that there must be different mechanisms.
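A toy version of that argument, with made-up tuning parameters: a single edge-sensing unit only licenses 'there is an edge here', but a bank of orientation-tuned units plus a decoder that compares their responses can recover the orientation as well.

```python
import numpy as np

rng = np.random.default_rng(3)

# A bank of orientation-tuned mechanisms, preferred orientations 0..180 deg.
prefs = np.arange(0, 180, 15)                      # 12 mechanisms

def responses(theta, bw=30.0):
    # Circular Gaussian tuning (orientation is 180-deg periodic), plus a little noise.
    d = np.angle(np.exp(2j * np.deg2rad(theta - prefs))) / 2
    return np.exp(-np.rad2deg(d) ** 2 / (2 * bw ** 2)) + 0.05 * rng.standard_normal(prefs.size)

def decode(r):
    # Population-vector decoder on the doubled angle.
    vec = np.sum(r * np.exp(2j * np.deg2rad(prefs)))
    return np.rad2deg(np.angle(vec)) / 2 % 180

theta_true = 37.0
r = responses(theta_true)
print(f"'edge here' signal: max single-unit response = {r.max():.2f}")
print(f"decoded orientation ~ {decode(r):.1f} deg (true {theta_true} deg)")
```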

So, whenever something is seen, the seeing of the thing is the encoding of the thing by many, many different mechanisms, each of which has a special place in the visual system, a devoted job – discriminate orientation, discriminate luminance gradients, discriminate direction of motion, or color, etc.

So, although we know empirically and logically that there must be different mechanisms encoding these different properties, there is no direct perceptual evidence for such differences: the experience is simultaneous and whole. In other words, the different properties are bound together; this is the famous binding problem, and it is the fundamental problem of the study of perception, and of all study of subjective psychology or conscious experience.

This brings us to the question, reworded: how is the simultaneity arranged? From here, it is necessary to adopt a frame of reference to continue discussion, so I will adopt a spatial frame of reference, which I am sure is a severe error, and which is at the root of my attempts so far to understand this problem; it will be necessary to rework what comes below from different points of view, using different framing metaphors.

Say that the arrangement of the simultaneous elements of visual experience is analogous to a spatial arrangement. This is natural if we think of the visual system as a branching series of layers. As far as subjective experience goes, are ‘higher’ layers in front of or behind the ‘lower’ layers? Are they above or below? Do they interlock like... it is hard to think of a metaphor here. When do layers, as such, interlock so that they form a single variegated layer? D* suggested color printing as something similar, though this doesn’t quite satisfy me. I imagine a jigsaw puzzle where the solution is a solid block, and where every layer has the same extent as the solution but is mostly empty space. D* also mentioned layers of transparencies where on each layer a portion of the final image – which perhaps occludes lower parts – is printed; like the pages in the encyclopedia entry on the human body, where the skin, muscles, organs, bones, were printed on separate sheets.

But after some thought, I don't think these can work. An image as a metaphor for the perceptual image? A useful metaphor would have some explanatory degrees of freedom; one set of things that can be understood in one way, used to understand something different in a similar way. Where do we get by trying to understand one type of image as another type of image? Not very far, I think. The visual field is a sort of tensor: at every point in the field, multiple things are true at the same time, they are combined according to deterministic rules, and a unitary percept results. Trying to understand this problem in terms of a simpler type of image seems doomed to fail.

So, whether or not there is a convenient metaphor, I think that the idea of the question should be clear: how are the different components of the percept simultaneously present? A prominent part of psychophysics studies how different components interact: color and luminance contrast, or motion and orientation, but my understanding is that for the most part different components are independently encoded; i.e. nothing really affects the perceived orientation of an edge, except perhaps the orientations of other proximal (in space or time) edges.

Masking, i.e. making one thing harder to see by laying another thing in proximity to it, is also usually within-layer, i.e. motion-to-motion, or contrast-to-contrast. Here, I am revealing that my thinking is still stuck in the lowest levels: color, motion, contrast, orientation, are all encoded together, in overlapping ensembles. So, it may well be that a single mechanism can encode a feature with multiple perceptual elements.

Anyways, the reason why I wonder about these things is, lately, because of this study where I had subjects judge the contrast of photographic images and related these judgments to the contrasts of individual scales within the images. This is related to the bigger question because there is no obvious reason why the perceived contrast of a complex, broadband image should correspond to the perceived contrast of a simple spatial pattern like a narrowband wavelet of one type or another. This is where we converge with what I wrote a few months ago: the idea of doing psychophysics with simple stimuli is that a subject’s judgments can be correlated with the physical properties of the stimuli, which can be completely described because they are simple. When the stimuli are complex and natural, there is a hierarchy of physical properties that the visual system, with its own hierarchy, is specifically designed to analyze. Simple stimuli target components of this system; complex stimuli activate the entire thing.

It is possible that when I ask you to identify the contrast – the luminance amplitude – of a Gabor patch, you are able to do so by looking, from your behavioral perch, at the response amplitude of a small number of neural mechanisms which are themselves stimulated directly by luminance gradients, which are exactly what I am controlling by controlling the contrast of the Gabor. Not only is this possible; it is the standard assumption in most contrast psychophysics (though I suspect that the Perceptual Template people have fuzzier ideas than this; I am not yet clear on their thinking – is the noisiness of a response also part of apparent magnitude?).

It is also possible that when I ask you to identify the contrast of a complex image, like a typical sort of image you look at every day (outside of spatial vision experiments), you are able to respond by doing the same thing: you pool together the responses of lots of neural mechanisms whose responses are determined by the amplitude of luminance gradients of matched shape. This is the assumption I set out to test in my experiment, that contrast is more or less the same, perceptually, whatever the stimulus is.
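Written out as a toy computation, this is the assumption, not a result: perceived contrast of any stimulus = some pooling, here a simple CSF-weighted Minkowski sum, over the RMS contrasts of band-pass channel responses. The band edges, CSF weights, and pooling exponent below are all placeholders.

```python
import numpy as np

def channel_contrasts(img, n_channels=6):
    # RMS contrast in crude octave-wide radial-frequency bands.
    N = img.shape[0]
    fx, fy = np.fft.fftfreq(N)[:, None], np.fft.fftfreq(N)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    F = np.fft.fft2(img - img.mean())
    edges = 0.5 / 2.0 ** np.arange(n_channels + 1)    # 0.5, 0.25, ... cycles/pixel
    out = []
    for hi, lo in zip(edges[:-1], edges[1:]):
        band = np.real(np.fft.ifft2(F * ((f >= lo) & (f < hi))))
        out.append(band.std())
    return np.array(out)

def perceived_contrast(img, csf, beta=2.0):
    # Placeholder pooling rule: Minkowski sum of CSF-weighted channel contrasts.
    c = channel_contrasts(img, n_channels=csf.size)
    return np.sum((csf * c) ** beta) ** (1 / beta)

rng = np.random.default_rng(4)
img = rng.standard_normal((128, 128))                 # stand-in "image"
csf = np.array([1.0, 1.5, 2.0, 1.5, 0.8, 0.3])        # made-up CSF weights
print(f"pooled 'perceived contrast' = {perceived_contrast(img, csf):.3f}")
```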

But, this does not need to be so. This assumption means that in judging the contrast of the complex image, you are able to ignore the responses of all the other mechanisms that are being stimulated by the image: mechanisms that respond to edges, texture gradients, trees, buildings, depth, occlusions, etc. Why should you be able to do this? Do these other responses not get in the way of ‘seeing’ those more basic responses? We know that responses later in the visual hierarchy are not so sensitive to the strength of a stimulus; rather, they are sensitive to its spatial configuration. If you vary how well the stimulus fits the preferred configuration, you will vary the response of the neuron, but if you vary its contrast you will, beyond some threshold, just turn the neuron on or off.

I don’t have a solution; the question is not answered by my experiment. I don’t doubt that you can see the luminance contrast of the elements in a complex scene, but I am not convinced that what you think is the contrast is entirely the contrast. In fact, we know for certain that it is not, because we have a plethora of lightness/brightness illusions.

No progress here, and I'm still not sure of the quality of the question. But, maybe this way of thinking can make for an interesting pitch at the outset of the introduction of the paper.

Friday, September 21, 2012

grant, presentation, paper, model

Been trying to skip between several jobs: grant proposal with a looming deadline, modeling experiments for a paper revision with a looming deadline, looming conference presentation... well, the conference is over, and the grant is coming along, though I still do not believe I will make it.

The paper... okay, another paper: I poked an editor yesterday, and he came back with a 'minor revision' request, which I fulfilled by late afternoon today. So, finally, we have a journal article - in a 1.0 impact factor journal - to show for a 3-year postdoc. Sigh. Another is in revision, in a better journal, but that's the big problem: I'm doing all these model tests, but I can't get any real momentum because I keep flipping back to the grant. Sigh. I keep complaining about the same thing. I need to set a deadline - 3 more years? - after which, if I'm still making the same complaint, something needs to change.

Let's talk about the model stuff. I've talked about it already in the past few posts: in the original paper, I proposed a modification to an existing model, a minor modification, which was able to closely fit our data, but which was a bit overcomplicated: it was difficult to explain exactly why it worked as well as it did, and it couldn't show how varying its parameters explained the variance in our data, etc. So, it "worked", but that's about all it did. It didn't explain much.

The existing model we call the "simple model". The simple model is indeed simple. It's so simple that it's almost meaningless, which is what frustrates me. Of course it's not that simple; you can interpret its components in very simplified, but real, visual system terms. And, it basically can describe our data, even when I complexify it just a bit to handle the extra complexity of our stimuli. And this complexification is fine, because it works best if I remove an odd hand-waving component that the original author had found it necessary to include to explain his data. Only... it doesn't quite work. The matching functions that make up the main set of data have slopes that are different in a pattern that is replicated by the simple model, but overall the model slopes are too shallow. I spent last week trying to find a dimension of the model that I could vary in order to shift the slopes up and down without destroying other aspects of its performance... no dice... fail fail fail.

So, I'm thinking that I can present a 'near miss': the model gets a lot of things right, and it fails to get everything right for reasons that I haven't thought hard enough about just yet. I really need to sit some afternoon and really think it out. Why, for the normal adaptor, is the matching function slope steeper than the identity line, but never steep enough? What is missing? Is it really the curvature of the CSF? How do I prove it?

Now, out of some horrible masochistic urge, I'm running the big image-based version of the "simple model". This version doesn't collapse the input and adaptation terms into single vectors until the 'blur decoding' stage. It seems like, really, some version of this has to work, but it hasn't come close yet. Looking at it now, though, I see that I did some strange things that are kind of hard to explain... Gonna give it another chance overnight.

Thursday, September 13, 2012

Deja Trompé

When I was in graduate school, I lived in Old Louisville, and walked, most days, down 3rd street to campus. Whenever I crossed the big road separating the neighborhood from campus, Cardinal Avenue, at a certain spot, I would see something up in my right peripheral visual field, and think, "Starlings!"

It was never starlings. It was always the tattered insulation hanging off a bunch of power lines strung over Cardinal. I remember this because even though I learned, pretty quickly, that I wasn't seeing starlings in that instant whenever it occurred, the fastest part of me - whatever part just automatically identifies salient stuff in the vast periphery - always thought that I was.

"Starlings!"

It's not that I was hallucinating starlings. A bunch of speckly black stuff fluttering against the sky kind of looks like birds, even when you know it isn't. You can't blame me. I don't blame my visual system. It's an honest mistake. The interesting thing is that I kept making it, over and over again, with apparently no control over it. An inconsequential and incessant perceptual mistake.

I've noticed similar situations over the years, but right now I can't remember the others. I should start making a list. I bring this up because recently someone cleaned out the shared kitchen on this side of the institute, and because I always turn the lights out in the same kitchen.

I think that, because I always turn out the light when I leave the little kitchen, other people have started following my example, and now, often, when I go to the kitchen to get hot water for my tea, the light is already out. This makes me happy. It's happened very gradually. Change is slow, usually.

Usually. Recently, the development office got a new temp who is apparently a complete OCD clean freak. It's great. She cleaned this kitchen and the other one. She put up little signs everywhere telling people not to be such pigs. I love her.

Anyways, now, when I go into the little kitchen to get my water, I stand at the dispenser, watching it to make sure my hand doesn't stray and I don't get scalded, and the microwave with its little sign sits down in my lower left field. Often, lately, the light is out when I get there. I leave it that way, because there's enough light trickling in from the hallway. Every time I am in this configuration, with the light out, it looks for all the world as if there is light coming out of the microwave window.

This happens over and over again. It's very robust; I can stand there and look straight at the microwave and its little paper sign, and that's what I see; then I look away, and the sign becomes an emission of lamp light from within the microwave. I can turn the mistake on and off by moving my eyes back and forth.

Again, I don't blame my visual system. It's doing the best it can. I've seen so many microwaves, and when they're cooking, they usually have little lamps inside, so you can see your whatever rotating on the little turntable. If the room is dark, the image is basically of a luminous rectangle in the front door of a microwave. Not many microwaves that I have known have worn little paper signs on their doors. To their disgusting, disgusting peril.

There must be a name for this, but I can't find it. So for now I'm going to invent a term: deja trompé, "fooled again". Deja as in deja vu, "already seen"; trompé as in trompe l'oeil, "deceives the eye". Seems like the right flavor for this sort of thing. I'll start keeping track of these, however rare they are. I'll inaugurate the list with a new entry label.

BACK TO WORK

Friday, September 07, 2012

talk: 97%


did a dry run today for my FVM talk. i think it went well, but there was a good amount of feedback. (incidentally, earlier this week i came to the lab, and passed my preceptor e* talking with a familiar old guy in the hall; a few minutes later, e* brings the guy to my office and asks me to show him my work. the old guy was l.s., one of the elder statesmen of european psychophysics. turns out he had been a postdoc at the institute more than 40 years ago, was in town, and had just dropped in to see old friends... i took him through my presentation at quarter speed, and he was very enthusiastic. made some suggestions about controlling for the 'knowledge' aspect of my stimuli and experiment design. took notes. had a good talk with him; he seems to know my grad school mentor well, knows all his students. so i didn't go to ECVP this week, but i got to spend a morning with one of its founders...)

anyways, the dry run: p* was the only one, as i guess i expected, to make real comments on the substance of the talk. he had two points/questions:

1. what happens if the two images are different, i.e. if they have different phase spectra? i have not tried to do this experiment, or to predict the result. i guess that technically, the model that i am evaluating would make clear predictions in such an experiment, and the perceptual process i am claiming to occur would be equally applicable. but, really, i am tacitly assuming that the similarity of the two images tamps down noise - somewhere in the spatial summation - that isn't actually reflected in the model but that would be there for the humans. still, it might work just fine. i should really try it out, just to see what happens... (a sketch of the image manipulation itself is below, after point 2.) (*edit: i tested it in the afternoon, and the result is exactly the same. the experiment is harder, and the normalization is wacky, but it seems clear that it works...)

2. don't the weighting functions look just like CSFs? isn't this what would happen if perceived contrasts were just CSF-weighted image contrasts? yeah, sure, but there's no reason to believe that this is how perceived contrast is computed. the flat-GC model is close to this. i wonder if i shouldn't just show a family of flat-GC models instead of a single one, with one of them having 0-weighted GC...
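
re: point 1, the stimulus manipulation itself is easy enough. here's a rough sketch of one way to give two images identical amplitude spectra but different phase spectra - the noise 'image' is just a stand-in, and none of this is my actual experiment code:

```python
import numpy as np

def phase_scrambled(img, rng):
    """Return an image with img's amplitude spectrum but a new, random phase spectrum."""
    amplitude = np.abs(np.fft.fft2(img))
    # taking the phase from the fft of a real noise array keeps the hermitian
    # symmetry, so the inverse transform comes back (numerically) real
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

rng = np.random.default_rng(0)
img_a = rng.standard_normal((256, 256))   # stand-in for a real test image
img_b = phase_scrambled(img_a, rng)       # same amplitude spectrum, different phase spectrum
```

the two outputs have exactly the same component contrasts, so the contrast-randomization machinery should drop on top of them unchanged.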

the other main criticism was of the slide with all the equations. this is the main thing i think i need to address. i need to remake that slide so it more naturally presents the system of processes that the equations represent. some sort of flow or wiring diagram, showing how the equations are nested...

also need to modify the explanation of the contrast randomization; not add information, but make clearer that the two contrast weighting vectors are indeed random and (basically) independent.

Wednesday, August 29, 2012

dream science?

Too many entries - let this be the last one for August.

Fantastically, incredibly, unbelievably, implausibly, Monday night I had a dream that directly relates to Monday's entry. I'll leave out irrelevant details: I dreamed that I was experiencing a migraine aura.

In the dream, I noticed the phosphene-like foveal scotoma, and at first had the "is it an afterimage? what bright light did I look at?" reaction, and then realized what it really was. It was upsetting, actually, because the last one was just 1 week previous, and I felt like once a week is a bit too frequent.

I then set about trying to record the aura with my perimetry program, except that my computer was now a large, flat panel lying on the floor, like a giant i-pad. The layout was of course different - not a blank gray screen, but a thin-line black grid, like a Go board, on a wood-brown background. Jingping was there, and kept trying to move the grid around, and I kept telling her to stop.

Once I was trying to record it, the scotoma was no longer foveal, but extended 10-20 degrees out, straight to the left and then arcing downward towards the inferior vertical meridian. This makes me think that I wasn't actually experiencing an aura in my sleep - to get from the foveal scotoma to 10-20 degrees should take 15-20 minutes, and I don't think that much time actually passed in the dream - it seemed like less than a minute. Of course, time and space are both funny in dreams, so who knows. There was no headache on Tuesday, anyways.

It was very frustrating trying to set the fixation point in the dream perimetry program. I just couldn't fixate - I would set it in one place, and then felt that it should be somewhere else. I think I finally gave up and started sticking my hand in the scotoma to probe its size.

So, whether or not I was really experiencing an aura, or just dreaming that I was experiencing one, is an interesting question. It seemed like a real one, and I noted lots of spatial details: the tiny phosphenic bead of the foveal scotoma, the fuzzy noisiness of the peripheral scotoma arc (though the periphery seemed clearer somehow than in true peripheral vision), the thin black lines of the perimetry grid, the unfixable fixation spot... If visual experience includes V1 activity, and if the visual aura occurs in V1, and if V1 is quiet or suppressed during dreaming, how could I have seen what I did, unless spatial vision includes a good deal of higher-level inference?

It seems that I proposed an experiment on Monday afternoon, and then did the experiment in my sleep that night. I have never been so efficient!

Monday, August 27, 2012

summation or conclusion?

So, I'm realizing now that this note from a few days ago is touching on this entry from several months back (if only I could keep everything in my head at once...). In the latter, I was talking about the idea that visual experience is a stack of phenomena, extending all the way down to the optical image, even to the light field, and all the way up to cognition and emotion. In the former, I realized that my standing, computational interpretation of the classification image experiment involves an assumption that estimates of a particular psychological construct - perceived contrast - are mediated by the same processes whether the stimulus is simple or complex.

This stance doesn't conflict with the 'stack' idea, but when you think of both together it seems dubious. With simple stimuli, there isn't much else elicited by the visual pattern, so estimates of its properties can be localized to a small set of possible mechanisms, which is the point of using simple stimuli in the first place. So, there are multiple layers to the stack, but most of them are relatively empty or inactive. When the stimulus is complex, all those other layers are now active, filled with activity which is ostensibly more important and interesting to the observer. Is it reasonable to continue to assume that the observer can make use of the same information in that 'spatial vision' layer that he could when there was nothing to distract him elsewhere?

I realized this connection because I was thinking through the implications of two alternatives: either complex visual qualia are the result of highly nonlinear summation of simple visual qualia, or complex qualia are inferences drawn from 'basis' qualia, and could possibly exist - as perhaps in a dream - independently of those bases. How do you tell the difference? Take away the spatial vision level, and see what is left. How to do this? Lesions, maybe, but the first thing that comes to mind is to compare what imagery looks like when you're seeing it versus when you're dreaming it.

Thursday, August 23, 2012

vacation report

Spent the last 5 days (Sat-Wed) down in Tennessee/Alabama, visiting family. Monday morning, Jingping woke up about 8 and went looking around and came back saying that my parents were still home, when she thought they should be out taking a walk somewhere. I was barely awake, still hadn't opened my eyes. When I finally did a few minutes later, I found that I was halfway through a scintillating scotoma, maybe around the 15-20 minute point. It looked a lot like last time, left field, relatively straight scotoma from above fixation leftward, arcing downward and below. I got out of bed and went to sit in the sunroom to watch the rest of it. The scintillation was rather weak, but still noticeable - I knew what was happening within a second or two of opening my eyes. The headache started soon after I got out of bed, and was kind of a bad one. Above-behind my eyes, focused on the right side. Nauseated and dizzy for a day, which sucked because I had to drive down to Huntsville Monday afternoon (in my parents' Prius with an expired Kentucky driver's license, don't tell my mother). Still hurt a bit Tuesday night.

Maybe the slight headache I described on the 16th was part of the prodrome for this one; otherwise I didn't notice any signs.

**

Last night on the way home I had an insight into how to explain the low-pass gain control that I'm proposing. A basic Barlow-Földiák-type anti-Hebbian learning rule should develop low-pass weights if a set of scaled filters is repeatedly exposed to low-pass input, or maybe even if it's just exposed to white noise. Gonna try this later today!
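
What I have in mind is something like the toy simulation below. Everything in it - the number of channels, the log-Gaussian tuning, the exponential low-pass input spectrum, the one-step stand-in for recurrent inhibition - is invented for illustration and is not the actual model; the point is just to check whether anti-Hebbian lateral weights, learned from channel responses to low-pass input, end up low-pass themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical channel bank: octave-spaced, log-Gaussian tuning curves over
# spatial frequency. All of these numbers are invented for illustration.
freqs = np.geomspace(0.25, 64, 256)                  # frequency axis (arbitrary units)
peaks = 2.0 ** np.arange(6)                          # channel peak frequencies: 1, 2, 4, 8, 16, 32
tuning = np.exp(-0.5 * (np.log2(freqs[None, :] / peaks[:, None]) / 0.7) ** 2)

W = np.zeros((6, 6))                                 # lateral (anti-Hebbian) inhibitory weights
eta = 2e-4

for _ in range(20000):
    # Low-pass input: component amplitudes fall off steeply with frequency.
    amps = rng.standard_normal(freqs.size) * np.exp(-freqs / 4.0)
    x = tuning @ amps                                # channel responses; tuning overlap makes them correlated
    y = x - W @ x                                    # crude one-step stand-in for recurrent lateral inhibition
    dW = eta * np.outer(y, y)                        # anti-Hebbian: inhibition grows with output correlation
    np.fill_diagonal(dW, 0.0)                        # no self-inhibition
    W += dW

# Total inhibition each channel receives from the rest of the bank. If the
# intuition is right, this falls off with peak frequency, i.e. the learned
# gain-control pool is low-pass.
print(np.round(W.sum(axis=1), 3))
```

The one-step inhibition is a stand-in for the full recurrent Barlow/Földiák scheme; both should settle, for small learning rates, where the channel outputs are decorrelated, and the high-frequency channels, being barely driven by low-pass input, should end up with next to no inhibitory weight.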

Friday, August 17, 2012

contrast or inference?

Norma Graham makes an interesting point, which I've seen quoted many times, in her book on spatial vision. She notes that, with simple enough stimuli, it is as if the higher stages of the brain become transparent to what is happening in the early levels of visual processing, and that this is curious. It's curious, but it's a typical stance for someone who studies spatial vision; we assume that discrimination or identification or detection of signal strength is mediated almost entirely by the filters that transduce the signal, not by those that understand it, or respond (overtly) to it.

Whether or not this transparency holds when complex images are viewed is, I think, totally unknown. It may be that the percept elicited by a complex scene really is the sum of its parts, and that it simply provokes additional sensations of meaning, identity, extension, etc., which are tied to spatial locations within the scene. So, the visible scene that we are conscious of is indeed an object of spatial vision. This is the point of view I generally adopt, and I think it is common.

Another view is that the percept is entirely inference. Boundaries, surfaces, colors, textures, etc., are qualities in themselves, inferred from particular organizations of spatial structure, and these are then organized in such a way that objects and identities and meanings can be inferred in successive stages. These inferences are what is seen consciously; perhaps inferences, and the evidence for them (the matter of spatial vision), are experienced simultaneously, but the substantive inferences, being the important elements of experience, are what dominate consciousness. So, only a small part of the phenomenal scene is actually constituted by e.g. luminance contrasts, and much more of it is constituted by higher-level inferences. I think this point of view is also common, maybe especially in the current generation of visual neuroscientists.

The latter view is not exclusive of Graham's observation. If the patterns that are viewed are simple enough, they will not form objects, and will not have meaning. Or, they will be interpreted only as what they are, which requires little inference, or only circular inference (which isn't necessarily a bad thing, when you really do want to conclude that a thing is itself - e.g. that a gaussian blob of light is a blob of light; usually, though, you want to infer that there is a letter on a page on the basis of a particular arrangement of blobs of light).

So, in the experiment that I'm currently analyzing to death, I am clearly taking the first view, in which case I think my conclusions are solid. If the second view is more accurate, what does the result mean? It could mean that inferences about image strength are based on higher frequencies just because they are the most susceptible to loss in a weak signal. If I'm asking subjects to judge image contrast, they could easily interpret this as judging image strength, and then their judgments would be biased towards the most delicate parts of the image, but they would still take everything into account.

This latter interpretation is still interesting, but it doesn't require "suppression". It is worth mentioning and I should at least include it in the manuscript, although the FVM talk probably will not have space... already there's barely space for the default story.
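
To make the 'susceptible to loss' point above concrete: with a roughly 1/f image spectrum and any standard CSF shape, the high-frequency components are the first to fall below threshold as overall contrast is scaled down. A throwaway calculation - the component contrasts and the log-parabola CSF here are generic, made-up numbers, not anything fit to my stimuli or observers:

```python
import numpy as np

# Frequencies of the image components (c/deg), with a 1/f amplitude spectrum
# normalized so the lowest component has contrast 0.5 at "full" image contrast.
f = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
c = 0.5 * (f[0] / f)                       # component contrasts ~ 1/f

# A made-up log-parabola CSF (peak sensitivity 200 at 3 c/deg); generic shape only.
S = 200.0 * np.exp(-0.5 * (np.log2(f / 3.0) / 1.5) ** 2)

# Scale the whole image by k < 1. Component f stays above its own threshold
# while k * c(f) * S(f) >= 1, so it disappears once k drops below 1 / (c * S).
k_lost = 1.0 / (c * S)
for fi, ki in zip(f, k_lost):
    print(f"{fi:5.1f} c/deg lost when image contrast falls below {ki:.4f}")
```

In this toy version the components above the CSF peak drop out from high frequency downward, and the top one or two are below threshold even at full contrast - which is the sense in which they are the 'delicate' parts that a strength judgment might lean on.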

Thursday, August 16, 2012

quick note

Something I've been meaning to write a note on for a while. Even as I write this, I don't have the idea quite in my head.

What is it about apparent contrast that is interesting, as opposed to sensitivity? By most measures, they're the same thing. If I ask someone to do an experiment where they have to discriminate between one contrast and another, I'm assuming that they are doing some mental comparison between two apparent contrasts, or memories of two apparent contrasts. By varying the physical difference between the two contrasts, I can then quantify the person's performance on this task, e.g. the difference at which they can no longer measurably discriminate. These are the sorts of measurements that are usually made in psychophysics.

These performance measures are understood to reflect the subjective, phenomenal properties of interest. But they involve a nearly unsolvable confound: we can't tell what part of the discrimination is due to relative internal response strength and what part is due to internal noise.
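
The confound is easy to state in signal detection terms: performance only gives you the ratio of internal response difference to internal noise, so an observer with twice the response gain and twice the noise is behaviorally identical. A toy 2AFC simulation, with made-up gain and noise numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

def percent_correct(gain, sigma, c1=0.20, c2=0.22, n_trials=200_000):
    """2AFC contrast discrimination: linear internal response plus Gaussian noise."""
    r1 = gain * c1 + sigma * rng.standard_normal(n_trials)
    r2 = gain * c2 + sigma * rng.standard_normal(n_trials)
    return np.mean(r2 > r1)      # choose the interval with the bigger internal response

# Two hypothetical observers: the second has twice the internal response AND twice
# the internal noise. Performance alone cannot tell them apart.
print(percent_correct(gain=1.0, sigma=0.02))
print(percent_correct(gain=2.0, sigma=0.04))
```

Both calls print essentially the same proportion correct (about 0.76), which is the whole problem.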

So to me, measuring apparent contrast is a way of getting around this problem. You get to measure, directly, the internal response to the stimulus. The new problem, then, is in quantifying what you've measured. The reverse correlation experiment that I did was, I realize now, cognizant of all this, though only subliminally for me at the time. The experiment is not measuring performance, but it is very similar to an experiment that would be measuring discrimination performance. In this experiment, the stimuli are always easily discriminable, so there are no limits to measure. The subject is asked to discriminate between the strengths of the two stimuli, but I measure no interval or reliability of this discrimination. This is because there is no objective stimulus strength.

The purpose of the experiment is to find out what constitutes stimulus strength when strength is defined, to an observer, as luminance contrast. What I get back is (no matter how I measure it) a description of which components count more or less than others in making decisions about stimulus strength. I then test a bunch of plausible models to see whether they might also count components in similar ways. Lucky for me, only a particular type of model works, so I can draw a sort of conclusion from the study.
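
Stripped of everything specific to my stimuli and models, the logic of the analysis looks something like this - the 'observer', the weights, and all the numbers are invented, and this is not my analysis code, just the shape of it:

```python
import numpy as np

rng = np.random.default_rng(3)

n_components, n_trials = 8, 20_000
true_w = np.linspace(1.0, 0.3, n_components)        # invented observer weights

# Each trial: two stimuli whose component contrasts are random and (basically) independent.
cA = rng.uniform(0.2, 0.8, (n_trials, n_components))
cB = rng.uniform(0.2, 0.8, (n_trials, n_components))

# The pretend observer computes "apparent contrast" as a weighted sum of component
# contrasts plus decision noise, and reports which stimulus looks stronger.
noise = 0.3 * rng.standard_normal(n_trials)
chose_A = (cA @ true_w + noise) > (cB @ true_w)

# Reverse-correlation-style readout: the average component-contrast difference
# between the chosen and unchosen stimulus tracks (roughly in proportion to)
# that component's weight in the decision.
diff = np.where(chose_A[:, None], cA - cB, cB - cA)
w_hat = diff.mean(axis=0)
print(np.round(w_hat / w_hat.max(), 2))             # recovered relative weights
print(np.round(true_w / true_w.max(), 2))           # ground truth, for comparison
```

The recovered profile tracks the invented weights, which is the sense in which the experiment returns 'what counts' in the strength judgment without ever measuring a threshold or a discrimination limit.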

So, apparent contrast is the way things look, and then behaviors can be carried out on the basis of how things look. Most visual psychophysics directly analyzes the behaviors that are based on appearances. I've tried to directly analyze the appearances themselves. Did I succeed?