Bo Xilai rides my train. He's usually there when I get on at Reservoir on the 9:45. He always has a seat in the rear car, where I ride in the morning. He sits facing the rear, which I figure he does so that fewer people have a chance to recognize him. A lot of Chinese people ride the D train, but I've never noticed anyone seeming to recognize him. Maybe they do and just ignore him.
He wears Nikes and blue jeans. He doesn't look wealthy or powerful. Sometimes I see him reading a Chinese newspaper, but usually he's just sitting there looking around kind of nervously, or napping with his eyes closed. He rides to the Chinatown stop and gets off. His son went to graduate school at Harvard, so he must have some connections to some Chinese people in town.
But still, why is Bo Xilai riding the train in Boston? Isn't he afraid of being recognized, especially in Chinatown? He's supposed to be under house arrest in China, not riding around on public transit in America. He can't assume that everyone will be friendly and understanding. You'd think it would be excellent tabloid material: "Bo Xilai Escapes to Boston". And what's he doing in Chinatown? Maybe he has a job in a store or a restaurant to pass the time, trying to start a new life, or maybe he's going to some kind of a meeting of exiles.
He always looks a little confused and uncomfortable. I get the feeling that something isn't right with him. Maybe he's homesick? I saw the pictures of his wife in the dock. Does he think she really did what they say she did? I wonder if she's here too, in Boston. I haven't seen her. Maybe he's just lonely. Maybe he doesn't know anyone here, and he goes to Chinatown to remind himself of China.
I wonder what will happen when it's time for Bo Xilai's trial. Will they use a look-alike? Maybe they'll cancel it, or hold it in secret. Maybe they'll announce that he's died. I can't believe that they'll announce that he's escaped. We'll see what happens; it will all be in the news. I won't tell anyone what I know, though, whatever happens. If Bo Xilai wants to stay in Boston and ride the D train, it's really none of my business.
Monday, December 10, 2012
Monday, December 03, 2012
train headache
slightly excruciating headache. developed on the train. may or may not be a migraine; it's a fuzzy cloud of pain centered somewhere between the backs of my eyes and my palate. maybe a sinus thing instead, or maybe it's an interaction between the winter air and the train heating. nauseated and photophobic. i keep holding my breath.
Sunday, December 02, 2012
visual phenomena at the two edges of sleep
1. going to sleep last night, saw that high t.f. flicker, though i didn't have a headache at the time. actually, i haven't had one in almost 2 months, i think. woke up this morning feeling like i had a hangover, but no headache per se, so maybe i had a migraine in my sleep? or it was an overdose of thai food. there were definitely abdominal repercussions.
2. been meaning to write this down: jingping usually gets up before me what with the school and all, and usually when she gets up to leave it's still dark. if she turns on the bedroom light and i'm sufficiently conscious but still with eyes closed (and maybe also if my face is pointing in the right direction), i will see a quick red flash. nothing interesting, right? but the flash has a geometric structure, a hexagonal lattice, like an M-scaled honeycomb. a typical sort of visual field hallucination, but i only started noticing it in recent months.
that is all.
Monday, November 26, 2012
blur or no blur?
Some notes on the aftereffects of a paper revision I just submitted (not coincidentally linked to the rambling at the end of the previous entry):
The big problem left over after the last revision of the blur adapt paper is this: does it mean anything? I've wound up half convinced that while I have a good explanation for a complex and strange phenomenon, it may boil down to just a measurement, by visual system proxy, of the stimuli themselves. That is, all the stuff about selectivity, slope changing, symmetry of adaptation, etc., might just be a figment of the wholly unnatural way of blurring/sharpening images that we've used.
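For concreteness, the kind of whole-image slope manipulation in question can be sketched in a few lines of numpy. This is a generic reconstruction, not the actual stimulus code from the paper: it rescales the entire amplitude spectrum by f^(-delta), which is exactly the image-sized-psf weirdness at issue.

```python
import numpy as np

def change_spectral_slope(img, delta):
    """Steepen (delta > 0, blur-like) or flatten (delta < 0, sharpen-like)
    the amplitude spectrum of a grayscale image by multiplying each
    frequency component by f**(-delta). Note this is a whole-image
    operation: the effective psf is as big as the image, unlike an
    ordinary local blur."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                      # leave the DC term untouched
    filt = f ** (-delta)
    filt[0, 0] = 1.0
    spectrum = np.fft.fft2(img)
    # filter is real and even-symmetric, so the result is real
    return np.real(np.fft.ifft2(spectrum * filt))
```

Positive delta attenuates the high frequencies relative to the lows (blur-like); negative delta does the reverse (sharpen-like), with no spatially local psf anywhere in sight.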
What's left? The method is good. There are also questions about the spatial selectivity of the phenomenon and, most importantly I think, about its timecourse. If blur adaptation is something real, and not just a spandrel interaction between contrast adaptation and strange stimuli, it doesn't make much sense that it would manifest in everyone in the same way unless it had some sort of perceptual utility. What that utility might be is a good question. Let's make a list:
1. Changes in fixation across depths. Most of the people who do these experiments are young and have good accommodation. Blur is one of the things that helps drive accommodation, to the point where, if everything is working correctly, within a few hundred (less?) milliseconds of changing fixation in depth, the image should be focused. So, blur adaptation would not be useful in this situation. Maybe it's useful when you're older, and for this reason it sits there, functional and lying in wait for the lens to freeze up? Seems implausible, but possible. When you get old and look at different depths, the sharpness of the image will change, and it would be nice to have some dynamic means of clawing back whatever high s.f. contrasts might still be recoverable in a blurred image.
2. This raises the question of how much can be recovered from an image blurred locally. That is, the slope-change method is basically using an image-sized psf, which is what makes it so weird. Blur doesn't usually occur this way; instead it comes from a spatially local psf applied to the image, like a gaussian filter. If an image is gaussian blurred, how much can it be sharpened?
3. Viewing through diffusive media, like gooey corneas, fog, rain, or muddy water. The latter phenomena, if I'm not mistaken, affect contrast at all frequencies, while stuff-in-the-eyes causes optical blur, i.e. more attenuation at high than at low frequencies. It would be nice to know, in detail, what types of blur or contrast reduction (it might be nice to reserve 'blur' for the familiar sense of high s.f. reduction) occur ecologically. We also have dark adaptation, where the image is sampled at a lower rate but is also noisier. The noise is effectively a physical part of the retinal image (photon, photochemical, neural), meaning that it's local like an optical defect and not diffusive like fog. Maybe blur adaptation is mostly good for night vision?
4. Television. CRTs. Maybe we're all adapted, long-term and dynamically, to blurred media. All captured and reproduced media are blurred. CRTs were worse than current technology, resulting in displayed images that were considerably blurrier than the transmitted images, which themselves were blurred on collection and analog transmission. Digital images are blurred on collection, although light field cameras seem to be getting around this, and digital displays are physically much less blurred. Maybe those of us who grew up watching CRT images, and accepting them as a special sort of normal, adapt more than the young people who are growing up with high-resolution LCD images?
5. Texture adaptation, i.e. adaptation to the local slope of the amplitude spectrum, i.e. exactly what is being manipulated in the experiments. This would be fine. Testing it would be a bit different; subjects would need to identify the grain or scale of a texture, something like that. I think the materials perception people have done things like this. Anyway, this sort of adaptation makes sense. You might look at an object at a distance and barely be able to tell that its surface has a fine-grain texture, so a bit of local adaptation would allow you, after a few seconds, to see those small details. On the other hand, if you get in really close to the object so that the texture is loud and clear, and you can even see the texture of the elements of the larger texture (especially if there's a lot of light and the texture elements are opaque), this is effectively a much sharper texture than what you were seeing before, even within the same visual angle. The 1/f property of natural images is an average characteristic. Locally, images are lumpy in that objects represent discontinuities; textures on surfaces usually have a dominant scale, e.g. print on a page has a scale measured in points, and that will show up as a peak in the amplitude spectrum. So, texture adaptation, where the system wants to represent detail, seems like a plausible function for what we're calling blur adaptation. Maybe the system works better somehow if images are classed in this way?
6. Parafoveal or 'off-attention' defocus. We almost always fixate things that are sharp, but if the fixated object is small, whatever is behind it will be blurred optically. The situation is similar if the fixated object is viewed through an aperture: the aperture itself will be blurred. Whatever adaptation occurs in this situation must be passive, just contrast adaptation, as I can't imagine there's much utility to the small gain in detail from adapting to a gaussian blur.
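The question in item 2 (how much can a gaussian-blurred image be sharpened?) can be poked at numerically. A toy numpy sketch, with function names and the regularizer constant mine, for illustration only: the gaussian transfer function never quite reaches zero, so in principle everything is recoverable, but the gain needed at high s.f. amplifies noise, and a Wiener-style regularizer caps it.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Blur with a local gaussian psf, applied in the frequency domain.
    The transfer function of a gaussian psf is itself a gaussian:
    H(f) = exp(-2 * pi^2 * sigma^2 * f^2)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    return blurred, H

def wiener_sharpen(blurred, H, k=1e-3):
    """Try to undo the blur. A plain inverse filter (1/H) explodes where
    H ~ 0; the regularizer k caps the gain at roughly 1/(2*sqrt(k)).
    That cap is the 'how much can be recovered' limit: contrast pushed
    below the noise floor by the psf is gone for good."""
    G = H / (H**2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

With sigma around a pixel, most of the spectrum comes back; as sigma grows, more of the high s.f. band falls below the cap and stays lost, which is presumably the regime where any neural compensation would earn its keep.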
For all of these situations, spatial selectivity makes sense but is not necessary. Even if you're viewing a scene through fog, nearby objects will be less fogged than faraway objects, but it all depends on where you're fixating; other objects at different depths will be more or less fogged. At any rate, foveal or parafoveal adaptation is most important, as peripherally viewed details are, as far as I can understand, subordinate. If the process is spatially localized, as it should be if it is what it seems to be, then global adaptation is just a subset of all possible adaptation configurations. Temporal selectivity is more questionable. If the process is genuine, and not just broadband contrast adaptation (though this raises the question of what the timecourse of contrast adaptation should be), how fast should we expect it to be? If it's mostly used for long-term (minutes) activities (fixating muddy water, looking for fish; other veiling glare situations; gooey eyeball; accommodation failure), maybe it could stand to be slower, with a time constant measured in seconds, or tens of seconds. If it's mostly used for moment-to-moment changes in fixated structure, i.e. texture adaptation or depth (off-attention), it should be fast, with a time constant measured in hundreds of milliseconds.
Actually measuring the temporal properties of the adaptation might therefore help to some degree in understanding what the process is used for.
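If and when the timecourse data exist, pulling a time constant out of them is the easy part. A hypothetical sketch, assuming settings that decay exponentially toward a plateau (with the plateau already subtracted; real settings would be far noisier than this log-linear fit pretends):

```python
import numpy as np

def fit_time_constant(t, y):
    """Least-squares fit of y(t) = A * exp(-t / tau) to adaptation data,
    e.g. perceived-focus settings decaying toward their adapted plateau.
    Done as a linear fit on log(y): slope = -1/tau, intercept = log(A)."""
    slope, intercept = np.polyfit(t, np.log(y), 1)
    tau = -1.0 / slope
    A = np.exp(intercept)
    return A, tau
```

Seconds-scale vs. hundreds-of-milliseconds tau out of a fit like this would discriminate between the slow (fog, failing lens) and fast (texture, depth) stories above.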