Sunday, September 30, 2012


quick notes for the end of september:

week 1 of bring-your-laptop-to-work was a success; worked steadily in the lab every day, and came home each night to do particular jobs by hand, with pen and paper. extremely effective. laptop came back home friday night; going to continue this for the foreseeable future. should make the next MS revision and the following MS submission much easier.

headache last night, gradual onset; eventually focused pain above right eye socket; photophobia; went to bed, closed eyes, weird eigenlicht flicker, maybe 40-50Hz; what is that? slight headache remnant now, indistinct.

recent weirdness with reading text, usually noticed in the morning; right now, the region left of fixation feels scotoma-like, but i can see there...


also, a story: when i sit at the kitchen table, in the chair by the window, i have a view of the pantry area, with the fridge and the back door. my leather sandals are wedged between the fridge and the wall, by the door, so i can wear them outside when i go to throw trash out.

i regularly mistake the sandals, peripherally, for Olive the Cat, sitting by the back door, wanting to go out. then i foveate them, and see that they are my sandals. this has happened repeatedly, maybe dozens of times: deja trompé!

Tuesday, September 25, 2012

why do i keep writing poems

Batten down the hatches!
In this electric squall
Or else we'll be sent to the deep -
The web will drown us all.

So home I'll go! To printed word,
With pen and paper work.
No opportunity to drift
Through forums or to lurk

In hiding from my calling,
I'll forge ideas by thought
And stare down syntax, words reform
To make all logic-wrought.

So batten down the hatches!
And keep the ship afloat
For though I'll try to steer us,
The net may wreck this boat.

Friday, September 21, 2012

grant, presentation, paper, model

Been trying to switch between several jobs: a grant proposal with a looming deadline, modeling experiments for a paper revision with a looming deadline, a looming conference presentation... well, the conference is over, and the grant is coming along, though I still do not believe I will make the deadline.

The paper... okay, another paper: poked an editor yesterday, and he came back with a 'minor revision' request, which I fulfilled by late afternoon today. So, finally, we have a journal article - in a 1.0 impact factor journal - to show for a 3-year postdoc. Sigh. Another is in revision, in a better journal, but that's the big problem: I'm doing all these model tests, but I can't get any real momentum because I keep flipping back to the grant. Sigh. I keep complaining about the same thing. Need to set a deadline - 3 more years? - after which, if I'm still making the same complaint, something needs to change.

Let's talk about the model stuff. I've talked about it already in the past few posts: in the original paper, I proposed a minor modification to an existing model, which was able to closely fit our data, but which was a bit complexified: it was difficult to explain exactly why it worked as well as it did, and it couldn't show how varying its parameters explained the variance in our data. So, it "worked", but that's about all it did. It didn't explain much.

The existing model we call the "simple model". The simple model is indeed simple. It's so simple that it's almost meaningless, which is what frustrates me. Of course it's not that simple; you can interpret its components in very simplified, but real, visual system terms. And it can basically describe our data, even when I complexify it just a bit to handle the extra complexity of our stimuli. This complexification is fine, because the model works best if I remove an odd hand-waving component that the original author had found necessary to explain his data. Only... it doesn't quite work. The matching functions that make up the main set of data have slopes that differ in a pattern the simple model replicates, but overall the model slopes are too shallow. I spent last week trying to find a dimension of the model that I could vary to shift the slopes up and down without destroying other aspects of its performance... no dice... fail fail fail.

So, I'm thinking that I can present a 'near miss': the model gets a lot of things right, and it fails to get everything right for reasons that I haven't thought hard enough about just yet. I really need to sit some afternoon and really think it out. Why, for the normal adaptor, is the matching function slope steeper than the identity line, but never steep enough? What is missing? Is it really the curvature of the CSF? How do I prove it?

Now, out of some horrible masochistic urge, I'm running the big image-based version of the "simple model". This version doesn't collapse the input and adaptation terms into single vectors until the 'blur decoding' stage. It seems like, really, some version of this has to work, but it hasn't come close yet. Looking at it now, though, I see that I did some strange things that are kind of hard to explain... Gonna give it another chance overnight.

Sunday, September 16, 2012

too far, too far

one hundred words
in haiku form
while waiting for
my flight on a
sunday evening
in september:

Rochester airport
September Sunday evening
me and three women

80s pop radio
electric piano solo
fluorescent lighting

now another man
sneakers, backwards baseball cap
the sun is setting

PA announcement
the guy's voice croaks like Stallone
a fine disco beat

two smartphones, a book
two pair boots, one pair flip flops
not a conjunction

what will our plane be?
CRJ, Boeing, Airbus?
another man comes

three women, three men
the humans are trickling in
going to Boston

the sun sets slowly
slower than it usually does
suspicious liquids

dinner of junk food
reflection of ceiling lights
in my laptop screen

Saturday, September 15, 2012

morning aura

in rochester for the OSA vision meeting.

woke up this morning about 6:30ish, with terry yelling at me to wake up. went to take a shower, and while there, realized i couldn't see my fingertips as i was washing, grabbing soap, etc. got out of the shower, got dressed, left the room and went to the lobby. got there a little after seven, and the scotoma was well into the periphery, flickering etc. it was just like the last three: left field, straight right-left through the upper field and arcing downward into the lower field.

the last one i managed to record, back in june, had a time gap between the foveal scotoma and the peripheral arcs, which i had post hoc explained as me needing to recalibrate some part of the perimeter or something. but during the last two, i noticed that the scotoma actually does seem to disappear between the foveal appearance and the peripheral arcs. wonder what is going on with that...

didn't notice any peripheral rough spot this time, but it was so early that i might just have been too dazed... these morning scotomas that i've experienced - the last one a few weeks ago, and the one last year in the winter - they seem to have started just as i awoke. may be coincidental, since it's just a sample size of three, but i haven't had one start a half hour after waking, and i haven't woken up halfway through one (though the first time, i think i lay there with my eyes closed for the first 10 minutes or so). might be interesting to look up what sort of neurochemical changes occur in cortex, esp. visual cortex, during waking.

headache was ok, took some tylenol this time. nauseated all day long.


yesterday (or maybe thursday night, not sure), i remember feeling suspicious that something might be about to happen: i had the thought, i should keep track of these suspicions, to see if they're actually correlated. it is possible that i am suspicious very frequently, and just notice the coincidences...

Thursday, September 13, 2012

Deja Trompé

When I was in graduate school, I lived in Old Louisville, and walked, most days, down 3rd street to campus. Whenever I crossed the big road separating the neighborhood from campus, Cardinal Avenue, at a certain spot, I would see something up in my right peripheral visual field, and think, "Starlings!"

It was never starlings. It was always the tattered insulation hanging off a bunch of power lines strung over Cardinal. I remember this because even though I learned, pretty quickly, that I wasn't seeing starlings in that instant whenever it occurred, the fastest part of me - whatever part just automatically identifies salient stuff in the vast periphery - always thought that I was.


It's not that I was hallucinating starlings. A bunch of speckly black stuff fluttering against the sky kind of looks like birds, even when you know it isn't. You can't blame me. I don't blame my visual system. It's an honest mistake. The interesting thing is that I kept making it, over and over again, with apparently no control over it. An inconsequential and incessant perceptual mistake.

I've noticed similar situations over the years, but right now I can't remember the others. I should start making a list. I bring this up because recently someone cleaned out the shared kitchen on this side of the institute, and because I always turn the lights out in the same kitchen.

I think that, because I always turn out the light when I leave the little kitchen, other people have started following my example, and now, often, when I go to the kitchen to get hot water for my tea, the light is already out. This makes me happy. It's happened very gradually. Change is slow, usually.

Usually. Recently, the development office got a new temp who is apparently a complete OCD clean freak. It's great. She cleaned this kitchen and the other one. She put up little signs everywhere telling people not to be such pigs. I love her.

Anyways, now, when I go into the little kitchen to get my water, I stand at the dispenser, watching it to make sure my hand doesn't stray and I don't get scalded, and the microwave with its little sign sits down in my lower left field. Often, lately, the light is out when I get there. I leave it that way, because there's enough light trickling in from the hallway. Every time I am in this configuration, with the light out, it looks for all the world that there is light coming out of the microwave window.

This happens over and over again. It's very robust; I can stand there and look straight at the microwave and its little paper sign, and that's what I see; then I look away, and the sign becomes an emission of lamp light from within the microwave. I can turn the mistake on and off by moving my eyes back and forth.

Again, I don't blame my visual system. It's doing the best it can. I've seen so many microwaves, and when they're cooking, they usually have little lamps inside, so you can see your whatever rotating on the little turntable. If the room is dark, the image is basically of a luminous rectangle in the front door of a microwave. Not many microwaves that I have known have worn little paper signs on their doors. To their disgusting, disgusting peril.

There must be a name for this, but I can't find it. So for now I'm going to invent a term: deja trompé, "fooled again". Deja as in deja vu, "again seen"; trompé as in trompe l'oeil, "deceives the eye". Seems like the right flavor for this sort of thing. I'll start keeping track of these, however rare they are. I'll inaugurate the list with a new entry label.


Wednesday, September 12, 2012


Trying to figure out how to proceed with this adaptation paper, and I retreat here.

Minor problem is the rewrite: this will get done, not too worried about it. May be the last thing that gets done, since the major problem needs to be solved materially first.

Major problem is the modeling. The original paper details a complexified version of the model proposed by the authors of a paper that our paper basically replicates, accidentally. We were scooped, and so I thought that to novelify our paper, I would take their model and try to push it a little further, and do some extra analysis of it.

What I didn't do was what I should have done, which was to also test the simple model and show that it is somehow inadequate, and that complexification is therefore justified or necessary. I am actually ambivalent about this. My main idea was that we should take a model which has generalizable features and use it to explain the data; but, it's true that the more sophisticated version can't really be credited with achieving anything unless the simple one can also be shown to fail.

So the problem is that I have to do a lot of testing of the simple model. I decided that I would scrap the section that was already in the paper and replace it with an evaluation of the simple model, but make up for the lack of 'advance' by employing the simple model in a more realistic simulation of the actual experiments. This is what I've been trying to do, and basically failing at, for several weeks now.

The first idea was to use the simplest form of the model, but the most complete form of the stimuli: videos, played frame by frame and decomposed into the relevant stimulus bands, with adaptation developing according to a simple differential equation with the same dimensions as the stimulus. This didn't work. Or, it almost worked. The problem is that adaptation just won't build up in the high frequency channels unless it's way overpowered, which goes against any evidence I can think of. If high frequency adaptation were so strong, everything would be blurry all the time. I think it should be the weakest, or the slipperiest.
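In case I forget what that dynamic simulation boils down to: the core is just a leaky integrator per channel. A minimal sketch, assuming first-order dynamics, a rectified channel response as the drive, and Euler steps - none of which is necessarily the exact form used in the paper:

```python
import numpy as np

def step_adaptation(a, response, tau, dt=1.0):
    """One Euler step of first-order (leaky integrator) adaptation:
    da/dt = (|response| - a) / tau.
    Works for any shape that broadcasts: one value per frequency
    channel, or full channel x pixel arrays."""
    return a + dt * (np.abs(response) - a) / tau

# toy run: three frequency channels viewing a constant-contrast stimulus,
# with slower adaptation at higher spatial frequency (an assumption)
tau = np.array([10.0, 40.0, 160.0])
drive = np.array([0.5, 0.5, 0.5])
a = np.zeros(3)
for _ in range(200):
    a = step_adaptation(a, drive, tau)
# fast (low SF) channel is near its asymptote; slow (high SF) channel lags
```

With per-channel time constants like this, a high frequency channel under-adapts only because it's slow; if the real problem is weak drive, the fix has to be in the response term, not in tau.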

Soon after that, I gave up and retreated to the 'global sum' model, where instead of using 2d inputs, I use 0d inputs - i.e. the stimulus is treated as a scalar. I get the scalars from the real stimuli, and the same dynamic simulation is run. It's tons faster, of course, which makes it easier to play around with. I figured I would have found a solution by now.

See, it's so close. It's easy to get a solution, by adjusting the time constants, how they vary with frequency, and the masking strength, and get a set of simulated matching functions that look a lot like the human data. But I figure this is uninteresting. I have a set of data for 10 subjects, and they seem to vary in particular ways - but I can't get the simulated data to vary in the same way. If I can't do that, what is the point of the variability data?

Also, last night I spent some time looking closely at the statistics of the original test videos. There's something suspicious about them. Not wrong - I don't doubt that the slope change that was imposed was imposed correctly. But the way contrast changes with frequency and slope is not linear - it flattens out, at different frequencies, at the extreme slope changes. In the middle range, around zero, all contrasts change. Suspiciously like the gain peak, which I'm starting to wonder might somehow be an artifact of this sort of image manipulation.
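The frequency dependence is easy to probe without the full videos, on a toy 1D radial amplitude spectrum. A sketch, assuming the slope change just multiplies A(f) ~ 1/f by f^-Δ and that total RMS contrast is renormalized afterwards - my guess at the procedure, not necessarily what was done for the original stimuli:

```python
import numpy as np

def band_contrasts(slope_change, base_slope=1.0, n_bands=5):
    """RMS contrast per octave band of a 1D amplitude spectrum
    A(f) ~ f**-(base_slope + slope_change), with total RMS
    contrast renormalized to 1 after the slope change."""
    f = np.arange(1.0, 257.0)                 # cycles/image
    amp = f ** -(base_slope + slope_change)
    amp /= np.sqrt(np.sum(amp ** 2))          # fix overall RMS contrast
    edges = 2.0 ** np.arange(n_bands + 1)     # octave band edges: 1, 2, 4, ...
    return np.array([np.sqrt(np.sum(amp[(f >= lo) & (f < hi)] ** 2))
                     for lo, hi in zip(edges[:-1], edges[1:])])

normal = band_contrasts(0.0)     # unmodified 1/f spectrum
sharp = band_contrasts(-0.5)     # shallower slope (sharpened)
blurred = band_contrasts(+0.5)   # steeper slope (blurred)
```

Even in this toy version, the contrast change per band is frequency dependent, because renormalization trades contrast between low and high bands rather than scaling them uniformly.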

I don't expect to figure that last bit out before the revision is done. But, I'm thinking it might be a good idea to play down the gain peak business, since I might wind up figuring out that e.g. adaptation is much more linear than it appears, and that the apparent flattening out is really an artifact of the procedure. I don't think I'll find that, but - did I mention I'm going to write a model-only paper after this one? - seems a good idea not to go too far out on a limb when there are doubts.

I have a nagging feeling that I gave up too soon on the image-based model...

Friday, September 07, 2012

talk: 97%

did a dry run today for my FVM talk. i think it went well, but there was a good amount of feedback. (incidentally, earlier this week i came to the lab, and passed my preceptor e* talking with a familiar old guy in the hall; a few minutes later, e* brings the guy to my office and asks me to show him my work. the old guy was l.s., one of the elder statesmen of european psychophysics. turns out he had been a postdoc at the institute more than 40 years ago, and was in town, and had just dropped in to see old friends... i took him through my presentation at quarter speed, and he was very enthusiastic. made some suggestions about controlling for the 'knowledge' aspect of my stimuli and experiment design. took notes. had a good talk with him; he seems to know my grad school mentor well, knows all his students. so i didn't go to ECVP this week, but i got to spend a morning with one of its founders...)

anyways, the dry run: p* was the only one, as i guess i expected, to make real comments on the substance of the talk. he had two points/questions:

1. what happens if the two images are different, i.e. if they have different phase spectra? i have not tried to do this experiment, or to predict the result. i guess that technically, the model that i am evaluating would make clear predictions in such an experiment, and the perceptual process i am claiming to occur would be equally applicable. but, really, i am tacitly assuming that the similarity of the two images tamps down noise, somewhere in the spatial summation, that isn't actually reflected in the model but that would be there for the humans. but, it might work just fine. i should really try it out, just to see what happens... (*edit: i tested it in the afternoon, and the result is exactly the same. the experiment is harder, and the normalization is wacky, but it seems clear it works...)

2. don't the weighting functions look just like CSFs? isn't this what would happen if perceived contrasts were just CSF-weighted image contrasts? yeah, sure, but there's no reason to believe that this is how perceived contrast is computed. the flat-GC model is close to this. i wonder if i shouldn't just show a family of flat-GC models instead of a single one, with one of them having 0-weighted GC...
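re: point 1, generating the stimuli at least is easy: same amplitude spectrum, different phase spectra. a sketch, assuming grayscale images and that a phase-scrambled partner is what's wanted (taking the random phase from the FFT of a white noise image keeps the Hermitian symmetry right, so the result stays real):

```python
import numpy as np

def random_phase_version(img, rng):
    """Return an image with img's amplitude spectrum but a random
    phase spectrum, borrowed from the FFT of white noise (which is
    Hermitian-symmetric, so the inverse transform is real)."""
    amplitude = np.abs(np.fft.fft2(img))
    phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))       # stand-in for a real image
scrambled = random_phase_version(img, rng)
```

two calls with different noise draws would give a pair of images with identical band contrasts but unrelated structure, which is exactly the "different images" condition p* asked about.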

the other main criticism was of the slide with all the equations. this is the main thing i think i need to address. i need to remake that slide so it more naturally presents the system of processes that the equations represent. some sort of flow or wiring diagram, showing how the equations are nested...

also need to modify the explanation of the contrast randomization; not add information, but make clearer that the two contrast weighting vectors are indeed random and (basically) independent.

Monday, September 03, 2012

two out of three ain't enough

okay, so, really, i spent the labor day weekend watching youtube videos, looking at funny gifs, reading the news, and other random things, while running half-baked model simulations for the blur adaptation revision.

first thing i did was to run the video-based model through the experiment on the same three adaptation levels used in the original experiment. it worked at an operational level, i.e. it matched sharper things with sharper things and blurrier things with blurrier things, and the effects of the adaptors were correctly ordered - it didn't do anything crazy. on an empirical level, though, it was wrong.

for the original subjects, and most of the replication subjects, the perceived normal after blank adaptation should be matched to a slightly sharpened normal-video-adapted test; the simulation did the opposite. not a huge problem, but like i said, against the trend.

bigger problem is that the simulation failed to get the 'gain' peak for the normal adaptation condition; instead, gain just increased with sharpness of the adaptor. now i'm rerunning the simulation with some basic changes (adding white noise to the spatial inputs, which i don't think will work - might make it worse by increasing the effective sharpness of all inputs - but might have something of a CSF effect; and windowing the edges, which i should have done from the start).

one funny thing: even though the gain for the sharp adaptor is too high (being higher than for the normal adaptor), the gains for the normal and blurred adaptors are *exactly* the same as the means for the original three subjects: enough to make me think i was doing something horribly weirdly wrong in the spreadsheet, but there it is.

weird, but too good to be true. undoubtedly, every change to the model will change all of the simulation measurements, and the sim is definitely as noisy as the humans - even the same one run again would not get the same values.

Sunday, September 02, 2012


I seem to have gotten into treating this thing as a migraine journal, so here: headache last night (Saturday). Strange one, came on slowly, from mid-afternoon, increased gradually until 10 or so, when it was actually pretty irritating. May be something else. It's kind of still here, vaguely. Front of the head, above-behind the eyes, but something about it is different. Dunno.

As for work, I should have done more this weekend. I have 3 current main foci: FVM presentation, blur adaptation revision, and R01 application.

The presentation is >90% done. I'm leaving it for a few days.

The blur adapt revision is 0% done. I'm trying to figure out what "simple" model to replace the section in the paper with. If I can't get it to work by the end of the week, I think I'll have to stick with the original "complicated" model, and *add* material (thus making it *more* complicated) to explain why the simple version can't be easily adapted to work. What this entails is about an hour of programming and 24 hours of running the simulations/measurements; then I look at the results, decide what isn't working, make changes, and repeat. In the meantime, I do nothing productive. So:

R01 application is... well... I don't want to do it. It's futile, but it's my job. Will start soon. Should have started this weekend.