Thursday, September 13, 2012

Deja Trompé

When I was in graduate school, I lived in Old Louisville, and walked, most days, down 3rd street to campus. Whenever I crossed the big road separating the neighborhood from campus, Cardinal Avenue, at a certain spot, I would see something up in my right peripheral visual field, and think, "Starlings!"

It was never starlings. It was always the tattered insulation hanging off a bunch of power lines strung over Cardinal. I remember this because even though I learned, pretty quickly, that I wasn't seeing starlings in that instant whenever it occurred, the fastest part of me - whatever part just automatically identifies salient stuff in the vast periphery - always thought that I was.

"Starlings!"

It's not that I was hallucinating starlings. A bunch of speckly black stuff fluttering against the sky kind of looks like birds, even when you know it isn't. You can't blame me. I don't blame my visual system. It's an honest mistake. The interesting thing is that I kept making it, over and over again, with apparently no control over it. An inconsequential and incessant perceptual mistake.

I've noticed similar situations over the years, but right now I can't remember the others. I should start making a list. I bring this up because recently someone cleaned out the shared kitchen on this side of the institute, and because I always turn the lights out in the same kitchen.

I think that, because I always turn out the light when I leave the little kitchen, other people have started following my example, and now, often, when I go to the kitchen to get hot water for my tea, the light is already out. This makes me happy. It's happened very gradually. Change is slow, usually.

Usually. Recently, the development office got a new temp who is apparently a complete OCD clean freak. It's great. She cleaned this kitchen and the other one. She put up little signs everywhere telling people not to be such pigs. I love her.

Anyways, now, when I go into the little kitchen to get my water, I stand at the dispenser, watching it to make sure my hand doesn't stray and I don't get scalded, and the microwave with its little sign sits down in my lower left field. Often, lately, the light is out when I get there. I leave it that way, because there's enough light trickling in from the hallway. Every time I am in this configuration, with the light out, it looks for all the world as if there is light coming out of the microwave window.

This happens over and over again. It's very robust; I can stand there and look straight at the microwave and its little paper sign, and that's what I see; then I look away, and the sign becomes an emission of lamp light from within the microwave. I can turn the mistake on and off by moving my eyes back and forth.

Again, I don't blame my visual system. It's doing the best it can. I've seen so many microwaves, and when they're cooking, they usually have little lamps inside, so you can see your whatever rotating on the little turntable. If the room is dark, the image is basically of a luminous rectangle in the front door of a microwave. Not many microwaves that I have known have worn little paper signs on their doors. To their disgusting, disgusting peril.

There must be a name for this, but I can't find it. So for now I'm going to invent a term: deja trompé, "fooled again". Deja as in deja vu, "again seen"; trompé as in trompe l'oeil, "deceives the eye". Seems like the right flavor for this sort of thing. I'll start keeping track of these, however rare they are. I'll inaugurate the list with a new entry label.

BACK TO WORK

Wednesday, September 12, 2012

adaptomatic

Trying to figure out how to proceed with this adaptation paper, and I retreat here.

Minor problem is the rewrite: this will get done, not too worried about it. May be the last thing that gets done, since the major problem needs to be solved materially first.

Major problem is the modeling. The original paper details a complexified version of the model proposed by the authors of a paper that our paper basically replicates, accidentally. We were scooped, and so I thought that to novelify our paper, I would take their model and try to push it a little further, and do some extra analysis of it.

What I didn't do was what I should have done, which was to also test the simple model and show that it is somehow inadequate, and that complexification is therefore justified or necessary. I am actually ambivalent about this. My main idea was that we should take a model which has generalizable features and use it to explain the data; but, it's true that the more sophisticated version can't really be credited with achieving anything unless the simple one can also be shown to fail.

So the problem is that I have to do a lot of testing of the simple model. So, I decided that I would scrap the section that was already in the paper and replace it with an evaluation of the simple model, but make up for the lack of 'advance' by employing the simple model in a more realistic simulation of the actual experiments. This is what I've been trying to do, and basically failing at, for several weeks now.

The first idea was to use the simplest form of the model, but the most complete form of the stimuli: videos, played frame by frame and decomposed into the relevant stimulus bands, adaptation developing according to a simple differential equation with the same dimensions as the stimulus. This didn't work. Or, it almost worked. The problem is that adaptation just won't build up in the high frequency channels, unless it's way overpowered, which is against any bit of evidence I can think of. If high frequency adaptation were so strong, everything would be blurry all the time. I think it should be the weakest, or the slipperiest.

Soon after that, I gave up and retreated to the 'global sum' model, where instead of using 2d inputs, I use 0d inputs - i.e. the stimulus is treated as a scalar. I get the scalars from the real stimuli, and the same dynamic simulation is run. It's tons faster, of course, which makes it easier to play around with. I figured I would have found a solution by now.
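To be concrete about what I mean by the 0d version, here is a minimal sketch; the parameter values, and how the time constants scale with frequency, are placeholders rather than the fitted values:

```python
import numpy as np

# All values below are placeholders, not the fitted parameters.
n_bands = 6                         # spatial-frequency bands
freqs = 2.0 ** np.arange(n_bands)   # band center frequencies (cpd)
tau = 5.0 / np.sqrt(freqs)          # per-band time constants (s); a guess at the scaling
k_mask = 1.0                        # masking / gain-control strength
dt = 1.0 / 30                       # one video frame at 30 Hz

def run_global_sum(contrast, tau=tau, k_mask=k_mask, dt=dt):
    """contrast: (n_frames, n_bands) scalar band contrasts taken from the real stimuli.
    Returns the gain-controlled band responses, same shape."""
    a = np.zeros(contrast.shape[1])        # adaptation state per band
    out = np.zeros_like(contrast)
    for t, c in enumerate(contrast):
        a += dt * (c - a) / tau            # first-order drift toward the current input
        out[t] = c / (1.0 + k_mask * a)    # divisive gain set by the adapted state
    return out

# e.g. 60 s of a steady adaptor at band contrast 0.3
responses = run_global_sum(np.full((1800, n_bands), 0.3))
```

The video-based version is the same idea with `c` and `a` carrying the full spatial dimensions of the stimulus instead of one number per band.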

See, it's so close. It's easy to get a solution by adjusting the time constants, how they vary with frequency, and the masking strength, and thereby produce a set of simulated matching functions that look a lot like the human data. But I figure this is uninteresting. I have a set of data for 10 subjects, and they seem to vary in particular ways - but I can't get the simulated data to vary in the same way. If I can't do that, what is the point of the variability data?

Also, last night I spent some time looking closely at the statistics of the original test videos. There's something suspicious about them. Not wrong - I don't doubt that the slope change that was imposed was imposed correctly. But the way contrast changes with frequency and slope is not linear - it flattens out, at different frequencies, at the extreme slope changes. In the middle range, around zero, all contrasts change. Suspiciously like the gain peak, which makes me wonder whether it isn't somehow an artifact of this sort of image manipulation.
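This is the sort of measurement I mean - impose a slope change on a frame's amplitude spectrum, then read off the contrast in each frequency band. A sketch, not the actual analysis code:

```python
import numpy as np

def band_contrast_after_slope_change(img, delta_slope, n_bands=6):
    """Impose an amplitude-spectrum slope change of `delta_slope` on `img`,
    then return the RMS amplitude in octave-wide frequency bands
    (contrast, up to normalization by the mean luminance)."""
    F = np.fft.fft2(img - img.mean())
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.hypot(fy, fx)                                  # radial frequency, cycles/pixel
    F = F * np.where(f > 0, f, 1.0) ** delta_slope        # the imposed slope change
    edges = 0.5 ** np.arange(n_bands, -1, -1.0) * 0.5     # octave band edges up to Nyquist
    return [np.sqrt((np.abs(F[(f >= lo) & (f < hi)]) ** 2).sum()) / img.size
            for lo, hi in zip(edges[:-1], edges[1:])]
```

Plotting these values against the imposed slope change, band by band, is where the flattening does or doesn't show up.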

I don't expect to figure that last bit out before the revision is done. But, I'm thinking it might be a good idea to play down the gain peak business, since I might wind up figuring out that e.g. adaptation is much more linear than it appears, and that the apparent flattening out is really an artifact of the procedure. I don't think I'll find that, but - did I mention I'm going to write a model-only paper after this one? - seems a good idea not to go too far out on a limb when there are doubts.

I have a nagging feeling that I gave up too soon on the image-based model...

Friday, September 07, 2012

talk: 97%


did a dry run today for my FVM talk. i think it went well, but there was a good amount of feedback. (incidentally, earlier this week i came to the lab, and passed my preceptor e* talking with a familiar old guy in the hall; a few minutes later, e* brings the guy to my office and asks me to show him my work. the old guy was l.s., one of the elder statesmen of european psychophysics. turns out he had been a postdoc at the institute more than 40 years ago, and was in town, and had just dropped in to see old friends... i took him through my presentation at quarter speed, and he was very enthusiastic. made some suggestions about controlling for the 'knowledge' aspect of my stimuli and experiment design. took notes. had a good talk with him, he seems to know my grad school mentor well, knows all his students. so i didn't go to ECVP this week, but i got to spend a morning with one of its founders...)

anyways, the dry run: p* was the only one, as i guess i expected, to make real comments on the substance of the talk. he had two points/questions:

1. what happens if the two images are different, i.e. if they have different phase spectra? i have not tried to do this experiment, or to predict the result. i guess that technically, the model that i am evaluating would make clear predictions in such an experiment, and the perceptual process i am claiming to occur would be equally applicable. but, really, i am tacitly assuming that the similarity of the two images is tamping down noise that would otherwise be there, somehow in the spatial summation, that isn't actually reflected in the model but that would be there for the humans. but, it might work just fine. i should really try it out, just to see what happens (a sketch of the sort of phase manipulation i have in mind is below, after point 2)... (*edit: i tested it in the afternoon, and the result is exactly the same. experiment is harder, and the normalization is wacky, but seems clear it works...)

2. don't the weighting functions look just like CSFs? isn't this what would happen if perceived contrasts were just CSF-weighted image contrasts? yeah, sure, but there's no reason to believe that this is how perceived contrast is computed. the flat-GC model is close to this. i wonder if i shouldn't just show a family of flat-GC models instead of a single one, with one of them having 0-weighted GC...
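re point 1, this is roughly what i mean by giving the two images independent phase spectra - a sketch, assuming a plain FFT phase-scramble rather than anything from the actual experiment code:

```python
import numpy as np

def phase_scrambled_pair(img, rng=np.random.default_rng(0)):
    """Two images with img's amplitude spectrum but independent random phase
    spectra (phases taken from white noise, so they stay conjugate-symmetric
    and the outputs are real)."""
    amp = np.abs(np.fft.fft2(img))
    out = []
    for _ in range(2):
        phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
        out.append(np.real(np.fft.ifft2(amp * np.exp(1j * phase))))
    return out
```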

the other main criticism was of the slide with all the equations. this is the main thing i think i need to address. i need to remake that slide so it more naturally presents the system of processes that the equations represent. some sort of flow or wiring diagram, showing how the equations are nested...

also need to modify the explanation of the contrast randomization; not add information, but make clearer that the two contrast weighting vectors are indeed random and (basically) independent.
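for the record, the randomization itself is no more complicated than this kind of thing - two per-band weight vectors drawn separately each trial (band count and range here are placeholders, not the experiment's values):

```python
import numpy as np

rng = np.random.default_rng()
n_bands = 6                              # placeholder, not the experiment's band count
w1 = rng.uniform(0.5, 1.5, n_bands)      # contrast weights for stimulus 1
w2 = rng.uniform(0.5, 1.5, n_bands)      # drawn separately, hence independent of w1
```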

Monday, September 03, 2012

two out of three ain't enough

okay, so, really, i spent the labor day weekend watching youtube videos, looking at funny gifs, reading the news, and other random things, while running half-baked model simulations for the blur adaptation revision.

first thing i did was to run the video-based model through the experiment on the same three adaptation levels used in the original experiment. it worked at an operational level, i.e. it matched sharper things with sharper things and blurrier things with blurrier things, and the effects of the adaptors were correctly ordered - it didn't do anything crazy. on an empirical level, though, it was wrong.

for the original subjects, and most of the replication subjects, the perceived normal after blank adaptation should be matched to a slightly sharpened normal-video-adapted test; the simulation did the opposite. not a huge problem, but like i said, against the trend.

bigger problem is that the simulation failed to get the 'gain' peak for the normal adaptation condition; instead, gain just increased with sharpness of the adaptor. now i'm rerunning the simulation with some basic changes (adding white noise to the spatial inputs, which i don't think will work - might make it worse by increasing the effective sharpness of all inputs - but might have something of a CSF effect; and windowing the edges, which i should have done from the start).
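for what it's worth, this is the sort of thing i mean by those two changes - white noise added to each frame, and the frame edges tapered with a raised-cosine (tukey-style) window before the band decomposition; the parameter values are placeholders:

```python
import numpy as np

def tukey_window_2d(shape, frac=0.2):
    """Separable raised-cosine taper over the outer `frac` of each dimension."""
    def taper(n):
        w = np.ones(n)
        edge = int(frac * n / 2)
        if edge > 0:
            ramp = 0.5 * (1 - np.cos(np.pi * np.arange(edge) / edge))
            w[:edge], w[-edge:] = ramp, ramp[::-1]
        return w
    return np.outer(taper(shape[0]), taper(shape[1]))

def preprocess(frame, noise_sd=0.02, rng=np.random.default_rng(0)):
    """Add white noise to the spatial input, then taper the frame edges."""
    return (frame + rng.standard_normal(frame.shape) * noise_sd) * tukey_window_2d(frame.shape)
```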

one funny thing: even though the gain for the sharp adaptor is too high (being higher than for the normal adaptor), the gains for the normal and blurred adaptors are *exactly* the same as the means for the original three subjects: enough to make me think i was doing something horribly weirdly wrong in the spreadsheet, but there it is:



weird, but too good to be true. undoubtedly, every change to the model will change all of the simulation measurements, and the sim is definitely as noisy as the humans - even the same one run again would not get the same values.

Sunday, September 02, 2012

random

I seem to have gotten into treating this thing as a migraine journal, so here: headache last night (Saturday). Strange one, came on slowly, from mid-afternoon, increased gradually until 10 or so, when it was actually pretty irritating. May be something else. It's kind of still here, vaguely. Front of the head, above-behind the eyes, but something about it is different. Dunno.

As for work, I should have done more this weekend. I have 3 current main foci: FVM presentation, blur adaptation revision, and R01 application.

The presentation is >90% done. I'm leaving it for a few days.

The blur adapt revision is 0% done. I'm trying to figure out what "simple" model to replace the section in the paper with. If I can't get it to work by the end of the week, I think I'll have to stick with the original "complicated" model, and *add* material (thus making it *more* complicated) to explain why the simple version can't be easily adapted to work. What this entails is about an hour of programming and 24 hours of running the simulations/measurements so I can see the results, decide on what isn't working, make changes, and repeat the process. In the meantime, I do nothing productive. So:

R01 application is... well... I don't want to do it. It's futile, but it's my job. Will start soon. Should have started this weekend.

Wednesday, August 29, 2012

dream science?

Too many entries - let this be the last one for August.

Fantastically, incredibly, unbelievably, implausibly, Monday night I had a dream that directly relates to Monday's entry. I'll leave out irrelevant details: I dreamed that I was experiencing a migraine aura.

In the dream, I noticed the phosphene-like foveal scotoma, and at first had the "is it an afterimage? what bright light did I look at?" reaction, and then realized what it really was. It was upsetting, actually, because the last one was just 1 week previous, and I felt like once a week is a bit too frequent.

I then set about trying to record the aura with my perimetry program, except that my computer was now a large, flat panel lying on the floor, like a giant i-pad. The layout was of course different - not a blank gray screen, but a thin-line black grid, like a Go board, on a wood-brown background. Jingping was there, and kept trying to move the grid around, and I kept telling her to stop.

Once I was trying to record it, the scotoma was no longer foveal, but extended 10-20 degrees out, straight to the left and then arcing downward towards the inferior vertical meridian. This makes me think that I wasn't actually experiencing an aura in my sleep - to get from the foveal scotoma to 10-20 degrees should take 15-20 minutes, and I don't think that much time actually passed in the dream - it seemed like less than a minute. Of course, time and space are both funny in dreams, so who knows. There was no headache on Tuesday, anyways.

It was very frustrating trying to set the fixation point in the dream perimetry program. I just couldn't fixate - I would set it in one place, and then felt that it should be somewhere else. I think I finally gave up and started sticking my hand in the scotoma to probe its size.

So, whether or not I was really experiencing an aura, or just dreaming that I was experiencing one, is an interesting question. It seemed like a real one, and I noted lots of spatial details: the tiny phosphenic bead of the foveal scotoma, the fuzzy noisiness of the peripheral scotoma arc (though the periphery seemed clearer somehow than in true peripheral vision), the thin black lines of the perimetry grid, the unfixable fixation spot... If visual experience includes V1 activity, and if the visual aura occurs in V1, and if V1 is quiet or suppressed during dreaming, how could I have seen what I did, unless spatial vision includes a good deal of higher-level inference?

It seems that I proposed an experiment on Monday afternoon, and then did the experiment in my sleep that night. I have never been so efficient!

Monday, August 27, 2012

summation or conclusion?

So, I'm realizing now that this note from a few days ago is touching on this entry from several months back (if only I could keep everything in my head at once...). In the latter, I was talking about the idea that visual experience is a stack of phenomena, extending all the way down to the optical image, even to the light field, and all the way up to cognition and emotion. In the former, I realized that my standing, computational interpretation of the classification image experiment involves an assumption that estimates of a particular psychological construct - perceived contrast - are mediated by the same processes whether the stimulus is simple or complex.

This stance doesn't conflict with the 'stack' idea, but when you think of both together it seems dubious. With simple stimuli, there isn't much else elicited by the visual pattern, so estimates of its properties can be localized to a small set of possible mechanisms, which is the point of using simple stimuli in the first place. So, there are multiple layers to the stack, but most of them are relatively empty or inactive. When the stimulus is complex, all those other layers are now active, and filled with activity which is ostensibly more important and interesting to the observer. Is it reasonable to continue to assume that the observer can make use of the same information in that 'spatial vision' layer that he could when there was nothing to distract him elsewhere?

I realized this connection because I was thinking of the implications of one alternative (complex visual qualia are the result of highly nonlinear summation of simple visual qualia) or the other (complex qualia may be inferences drawn from 'basis' qualia, that could possibly exist - as perhaps in a dream - independently of those bases). How do you tell the difference? Take away the spatial vision level, and see what is left. How to do this? Lesions maybe, but the first thing that comes to mind is to compare what imagery looks like when you're seeing it versus when you're dreaming it.

Friday, August 24, 2012

punched in the head

title says it all. punched a few times in the head, and now i have a headache. feels migrainish. it is possible that getting punched in the head in just the right way can trigger a migraine. or maybe i have a concussion? i do have a big red bruise on my forehead, so maybe the pain is on the outside, muscular, and i just can't tell the difference since it's all just front-of-the-head.

anyways.

**edit, 0:23, 8/27/12
the bruise is still there, fading, and the flesh is a bit tender - worse yesterday - but the headache was gone when i woke up saturday morning.

Thursday, August 23, 2012

vacation report

Spent the last 5 days (Sat-Wed) down in Tennessee/Alabama, visiting family. Monday morning, Jingping woke up about 8 and went looking around and came back saying that my parents were still home, when she thought they should be out taking a walk somewhere. I was barely awake, still hadn't opened my eyes. When I finally did a few minutes later, I found that I was halfway through a scintillating scotoma, maybe around the 15-20 minute point. It looked a lot like last time, left field, relatively straight scotoma from above fixation leftward, arcing downward and below. I got out of bed and went to sit in the sunroom to watch the rest of it. The scintillation was rather weak, but still noticeable - I knew what was happening within a second or two of opening my eyes. The headache started soon after I got out of bed, and was kind of a bad one. Above-behind my eyes, focused on the right side. Nauseated and dizzy for a day, which sucked because I had to drive down to Huntsville Monday afternoon (in my parents' Prius with an expired Kentucky driver's license, don't tell my mother). Still hurt a bit Tuesday night.

I think that maybe the slight headache I described on the 16th might have been part of the prodrome for this one, otherwise I didn't notice any signs.

**

Last night on the way home I had an insight into how to explain the low-pass gain control that I'm proposing. A basic Barlow-Foldiak type anti-Hebbian learning rule should develop low-pass weights if a set of scaled filters is repeatedly exposed to low-pass input, or maybe even if it's just exposed to white noise. Gonna try this later today!
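Roughly what I have in mind, as a toy: a bank of frequency-tuned units driven by 1/f-ish input, with Foldiak-style anti-Hebbian lateral weights that grow with output correlations. This is only a sketch of the rule (the decay term, learning rate, and input statistics are my placeholders), not a demonstration that it actually produces low-pass weights - that's the thing to test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # filter bank size
freqs = 2.0 ** np.arange(n)
alpha, decay = 0.01, 1.0                # learning rate and weight decay (placeholders)
W = np.zeros((n, n))                    # anti-Hebbian lateral (inhibitory) weights

for trial in range(5000):
    r = np.abs(rng.standard_normal(n)) / freqs      # low-pass (1/f-ish) band input
    y = np.linalg.solve(np.eye(n) + W, r)           # steady state with recurrent inhibition
    dW = alpha * (np.outer(y, y) - decay * W)       # grow inhibition with output correlations
    np.fill_diagonal(dW, 0.0)
    W = np.clip(W + dW, 0.0, None)                  # keep inhibitory weights non-negative

print(np.round(W, 3))   # how inhibition from low- vs high-frequency units ends up distributed
```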

Friday, August 17, 2012

contrast or inference?

Norma Graham makes an interesting point, which I've seen quoted many times, in her book on spatial vision. She notes that it is as if the brain is transparent to what is happening in the early levels of visual processing, and that this is curious. It's curious, but it's a typical stance for someone who studies spatial vision; we assume that discrimination or identification or detection of signal strength is mediated almost entirely by the filters that are transducing the signal, not those that understand it, or respond (overtly) to it.

Whether or not this transparency holds when complex images are viewed is, I think, totally unknown. It may be that the percept elicited by a complex scene really is the sum of its parts, and that it simply provokes additional sensations of meaning, identity, extension, etc., which are tied to spatial locations within the scene. So, the visible scene that we are conscious of is indeed an object of spatial vision. This is the point of view I generally adopt, and I think it is common.

Another view is that the percept is entirely inference. Boundaries, surfaces, colors, textures, etc., are qualities in themselves, inferred from particular organization of spatial structure, and these then are organized in such a way that objects and identities and meanings can be inferred in successive stages. These inferences are what is seen consciously; perhaps inferences, and the evidence for them (the matter of spatial vision), are experienced simultaneously, but the substantive inferences, being the important elements of experience, are what dominate consciousness. So, only a small part of the phenomenal scene is actually constituted by e.g. luminance contrasts, and much more of it is constituted by higher level inferences. I think this point of view is also common, maybe especially in the current generation of visual neuroscientists.

The latter view is not exclusive of Graham's observation. If the patterns that are viewed are simple enough, they will not form objects, and will not have meaning. Or, they will be interpreted only as what they are, which doesn't require much inference, or only circular inference (which isn't a bad thing necessarily, when you really do want to conclude that a thing is itself, e.g. a gaussian blob of light, on the basis of its being a blob of light; usually, you want to infer that there is a letter on a page on the basis of a particular arrangement of blobs of light).

So, in the experiment that I'm currently analyzing to death, I am clearly taking the first view, in which case I think my conclusions are solid. If the second view is more accurate, what does the result mean? It could mean that inferences about image strength are based on higher frequencies just because they are the most susceptible to loss in a weak signal. If I'm asking subjects to judge image contrast, they could easily interpret this as judging image strength, and then their judgments would be biased towards the most delicate parts of the image, but they would still take everything into account.

This latter interpretation is still interesting, but it doesn't require "suppression". It is worth mentioning and I should at least include it in the manuscript, although the FVM talk probably will not have space... already there's barely space for the default story.

Thursday, August 16, 2012

nausea, photophobia, headache

woke up slightly late today, started to feel sick on the train; slightly nauseated all day long (mainly manifesting as excessive salivation, gross), a bit of photophobia. might be eyestrain or due to my crooked (broken, $10) glasses. briefly on tuesday night inverted the contrast on firefox, but then switched it back. at the time it was because my glasses seemed smudgy and i just couldn't get them clear, thought, "ah, it's a sign!", but decided no, it's the glasses. right now definitely a slight headache. as usual in my forehead, just above my eyes, maybe a bit left of center.

quick note

Something I've been meaning to write a note on for a while. Even as I write this, I don't have the idea quite in my head.

What is it about apparent contrast that is interesting, as opposed to sensitivity? By most measures, they're the same thing. If I ask someone to do an experiment where they have to discriminate between one contrast and another, I'm assuming that they are doing some mental comparison between two apparent contrasts, or memories of two apparent contrasts. By varying the physical difference between the two contrasts, I can then quantify the person's performance on this task, e.g. the difference at which they can no longer measurably discriminate. These are the sorts of measurements that are usually made in psychophysics.

These performance measures are understood to reflect the subjective, phenomenal properties of interest. But they involve a nearly unsolvable confound: we can't tell what part of the discrimination is due to relative internal response strength and what part is due to internal noise.

So to me, measuring apparent contrast is a way of getting around this problem. You get to measure, directly, the internal response to the stimulus. The new problem, then, is in quantifying what you've measured. The reverse correlation experiment that I did, I realize now, was cognizant of all this, but it was subliminal for me. The experiment is not measuring performance, but it is very similar to an experiment that would be measuring discrimination performance. In this experiment, the stimuli are always easily discriminable, so there are no limits to measure. The subject is asked to discriminate between the strengths of the two stimuli, but I measure no interval or reliability of this discrimination. This is because there is no objective stimulus strength.

The purpose of the experiment is to find out what constitutes stimulus strength when strength is defined, to an observer, as luminance contrast. What I get back is (no matter how I measure it) a description of what components count more or less than other components in making decisions about stimulus strength. I then test a bunch of plausible models to see whether they might also count components in similar ways. Lucky for me, only a particular type of model works, so I can make a sort of conclusion from the study.
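In code, the kind of computation I mean looks roughly like this; the variable names are hypothetical, and the mean choice-triggered difference is just one of the several ways of measuring the weighting:

```python
import numpy as np

def band_weights(w_a, w_b, chose_a):
    """Classification-image-style estimate of how much each band counts
    toward judged contrast.
    w_a, w_b : (n_trials, n_bands) random per-band contrast weights of the
               two stimuli on each trial (hypothetical names).
    chose_a  : (n_trials,) bool, True where stimulus A was judged stronger.
    Returns one value per band: mean band-weight difference, chosen minus unchosen."""
    diff = np.where(chose_a[:, None], w_a - w_b, w_b - w_a)
    return diff.mean(axis=0)

# toy usage: a fake observer who weights low frequencies more
rng = np.random.default_rng(1)
w_a, w_b = rng.uniform(0.5, 1.5, (2, 2000, 6))
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
chose_a = (w_a @ true_w) > (w_b @ true_w)
print(band_weights(w_a, w_b, chose_a).round(3))
```

The model test is then whether the same computation, run on each candidate model's decisions, counts the bands the way the human observers do.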

So, apparent contrast is the way things look, and then behaviors can be carried out on the basis of how things look. Most visual psychophysics directly analyzes the behaviors that are based on appearances. I've tried to directly analyze the appearances themselves. Did I succeed?


Monday, August 13, 2012

unifications of china 1

Random idea from this weekend: create a set of spatiotemporal maps illustrating the unifying conquests of China. For fun. Let's make a list:

1. Qin: Ying Zheng and Guanzhong
Qin was one of many Warring States in the centuries leading up to the first true unification of China in 9780HE. Qin was based in the area around and to the west of Xi'an, which is protected by mountain ranges and accessible only through narrow passes (I traveled through the Hangu pass to visit Xi'an in 12010HE): hence the region's name of Guanzhong, "within the passes". The conquest has an ill-defined starting point, since the different states had been in contention for centuries. However, it was with Ying Zheng's rule that most of the work was done: between 9771 and 9780, China proper went from seven states to one. This period, the Qin Unification War, could be taken as the first.

2. Han: Liu Bang and Guanzhong
Qin didn't long outlive Ying Zheng, who died in 9791. Soon after his death, Qin was overthrown and broken up into a number of kingdoms, united in theory by the Emperor of Chu, who was in fact a puppet of the warlord Xiang Yu. Xiang Yu's confederation quickly disintegrated into civil war between Xiang's Chu state and Liu Bang's Han state, which lasted from 9795 to 9799, when Chu was finally defeated and absorbed into Han. Han, by the way, based its power in the mountain-protected cities of Hanzhong and Chang'an, as Qin had done. Xiang Yu had placed his capital in Pengcheng, in eastern China. This war, and its result, was messy: at any given point in time, even for a decade after the war ended, it was unclear just who was in charge of particular regions, and a lot rested on the proclaimed allegiances of one or another warlord. However, there are standard interpretations of who was with who and when, that could be used to clarify an illustration.

3. Wei/Jin: Cao Cao and Guandong (east of the passes)
Han lasted for 400 years. When it finally collapsed around 10190, there were more than 20 years of war, followed by a half-century period of fracture into the Three Kingdoms of Wei, Wu, and Shu. The Wei state, based in the western edges of the central plains, just east of the mountain strongholds favored by Qin and early Han, eventually conquered Shu in 10263, was replaced in a coup by Jin in 10265, and finally conquered Wu in 10280. After this, Jin slowly fell apart, and China wouldn't be put together under one government again for roughly another three centuries. The Wei/Jin unification was so slow, taking more than 80 years, that it can't really be considered a 'conquest'; it was a slow succession of local wars, with long spaces of quiet in between. I don't think this one would count.

I don't know much about the establishment of Sui - we'll wait until I've read a bit more on it before I continue.

Wednesday, August 08, 2012

20/100+

Yes, so my glasses broke about 2 weeks ago. I explained it already in the July 24 post.

I still don't have my new pair. Should be here any day now. I'm wearing a 10-year old pair, over-negative in both eyes, but only at night to watch the Olympics and "work" on my laptop. Otherwise I need to be within 12 inches or so to see clearly.

At distance, my acuity is no better than 20/200. I can tell just by looking into the distance and sticking a finger out: no detail that I can see is much smaller than maybe a quarter of the width of a fingernail (the fingernail itself subtending a little more than a degree). So, my acuity limit is probably not much more than 4cpd.

If this was the best I could do, I would be on the bad end of low vision. But it gets better the closer you get to my face (come on, get closer to my face) - like I said, everything is sharp and clear within a foot or so. But, at distance - which is how I spend a lot of my commuting time, at least, and a lot of time at taekwondo - I'm 20/200. How bad is that?

For one thing, 4cpd is about the acuity limit of a cat (or of some jumping spiders). So it's not bad on a basic vision standard, because cats and jumping spiders are very visual creatures. 4cpd is good enough to get by on vision. But for a human, in the world of humans, it's not too good. At distance I can't recognize faces, at all. Ten days' practice, and I just can't do it if I don't have some other information, and then I don't think it counts. I can't read signage - I can't tell what trains are coming into the station at Government Center. None of that is debilitating, but it makes sense to call it a handicap. High frequencies aren't just details, they're content.

20/200 is resolvable detail of 10 minutes of arc. At 35cm, about where my laptop screen is right now, 10ma is about 1mm, which sounds small. But the dot pitch (vertical/horizontal pixel separation) on my screen is about .227mm. With my corrected acuity being around 20/15, I should be able to see details at least as small as 1ma, about .1mm apart, so I can discriminate individual pixels with my normal acuity. At 20/200, I wouldn't be able to discriminate details smaller than 4 or 5 pixels across; I would definitely not be able to read this text (whose lines are 1 pixel thick, and which tend to about 10x5 pixels HxW).
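The arithmetic, spelled out (viewing distance and dot pitch as above; 20/20 taken as a 1-arcmin minimum angle of resolution):

```python
import math

def detail_mm(snellen_denominator, distance_cm):
    """Smallest resolvable detail, in mm, at a given viewing distance,
    taking 20/20 as a 1-arcmin minimum angle of resolution (MAR)."""
    mar_arcmin = snellen_denominator / 20.0          # 20/200 -> 10 arcmin
    mar_rad = math.radians(mar_arcmin / 60.0)
    return math.tan(mar_rad) * distance_cm * 10.0    # cm -> mm

dot_pitch_mm = 0.227
print(detail_mm(200, 35))                    # ~1.0 mm at the laptop screen
print(detail_mm(15, 35))                     # ~0.08 mm with corrected (20/15) acuity
print(detail_mm(200, 35) / dot_pitch_mm)     # ~4.5 pixels per resolvable detail
```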

Printed text, which I like to hold pretty close to my face, at least 20cm, would still be unreadable. I'd have to hold it much closer, where the shadow of my head would start to get in the way. I'd need large print. I wouldn't be able to read music. I'm wondering how much acuity you need to do the classic threading of a needle (I haven't even started to wonder about depth perception - I noticed that it was off for the first few days, but I seem to have adapted pretty well, and I'm not afraid to cross the street as I was at the start), or to slice meat and vegetables without slicing yourself - it's not the same as reading, since you're looking for centroids for those things, but I wouldn't really want to try...

Not that I'm going to try, but I could: visual acuity at about 20 degrees eccentricity is close to 20/200. There you have to contend with crowding, though, so you're effectively worse off.

So 20/200 isn't disabling, but it does prevent you from accessing all sorts of primate-relevant stuff. Faces, reading, music, fine finger-based activities. It's been interesting, but I'm just about ready for my new pair of ($10!!) glasses to arrive.

Thursday, August 02, 2012

post

very slight headache lately. nothing obvious beforehand - maybe some occasional false-alarm-foveal-scotoma feelings, but nothing ever showed up.

those false alarms: when reading, i get a feeling that the text is smudged or jumbled, but when i try to see what's wrong, even one eye at a time, i can't actually find any defects. at other times, it's less clear, because the non-text world is more complicated, but i still haven't actually caught and analyzed any of these defects. so, i still don't know if they're false starts, like a CSD bubble that stops soon after it starts, or just high-level false alarms, i.e. paranoia.

anyways, that's it. working on text for a talk next month. tiny bit of writing lately. somehow i have three people reviewing the classification image manuscript. need to get back to the blur adaptation revision sooner rather than later. not looking forward to it, but once i'm into it i think it should be okay.

Wednesday, August 01, 2012

i know this is stupid


I'm dusty,
I'm crusty,
I'm Augusty!
you're crusty,
you're lusty,
you're Augusty!
she's lusty,
she's busty,
she's Augusty!
you're busty,
you're trusty,
you're Augusty!
we're trusty,
we're gusty,
we're musty,
we're Augusty!
we're Augusty.
we're Augusty.

Thursday, July 26, 2012

cloud

Hey! Internet post!

I still do not have a "smart phone" (the quotes there represent my fingers doing the quote gesture as I say "smart phone"). So, you might think that I am behind on the internet times.

But no! If you think this you are wrong. I use three computers (home, office, and lab), and they are all linked together. They share a Dropbox, which has basically replaced FTP in my daily file-shuffling. From what I hear, once MEEI has finished eating SERI, they're going to do away with our FTP server anyways, so that's fine. My computers are also all equipped with LogMeIn, so I can use any of them from anywhere, so long as they remain connected. Effectively, all three computers can be used as one.

Those are both pretty basic, though. The thing I'm excited about is SVN: version control. D* just taught me about this last week, and I'm already using it to manage my manuscripts. You create a repository to store files for a project, and it contains all versions of the project over time, as you make changes. It is amazing.

Sophisticated users, e.g. software developers, will give SVN its own server so it can be accessed from multiple locations by different users. What I'm doing is leaving the repositories on Dropbox; so, wherever I am, so long as the files are synchronized (i.e. as long as the internet is working between the two relevant computers), I can always get to the current version of my files. This is great. I don't have to worry about whether I'm moving the right ones, or which were the most recent versions (on which computer) after a long pause in a project - the most recent versions are all contained in the central Dropbox location, and I don't have to think about it. I'm sure there's a pitfall there.

The Dropbox is backed up on the lab and office computers, but I haven't set up a backup on my laptop yet. Need to do that. Anyways, I feel this is a great advance in organization. We'll see what my files look like after a few years of this. Next big modeling project should definitely take advantage of this system!

Tuesday, July 24, 2012

height and a black belt

two posts in a day, this is bad.

just been meaning to write this down: saturday afternoon I leave the apt to go to tkd, walking down the sidewalk, take off my glasses to blow off some dust, then go to put them back on, and they snap in half. so my glasses broke.

i go to tkd, not wearing glasses, and not really able to precisely recognize faces beyond a couple of meters. in the distance i see a woman practicing something, a black belt, with mid-length black hair pulled back in a ponytail. at first i think, that's j*, just i guess because her style and general characteristics seemed right. but then i thought, no, that's not her, she's too small. i got a little closer, and yes, this woman was too small to be j* - a couple of inches shorter than me. a little closer and, then, yes - it was indeed j*.

i felt certain that j* was taller than me - if you had asked me before how tall i thought j* was, i would have guessed, oh, maybe 6' or so, she's a real amazon. apparently she's closer to 5'8". taller than the average woman, and still an amazon, okay, but not what i thought. in my mind, i guess, her high status (at tkd) had me convinced that she was actually larger than in reality. i've noticed this effect before (it's been studied for a long time, and probably known forever), but never in a context like this: certain that a person had a relatively unusual trait (woman taller than me), and then unable to recognize her because i don't see the trait, when it was never there in the first place.

"she's actual size, but she seems much bigger to me"

City System

A couple of years ago - no, oh God, nearly 3 years ago - I got talked into trying to write a Nanowrimo novel. I made it to the end of November with something like 15,000 words, about 35k short (of the official 50k target), but then I kept going, and finally stopped towards the end of the spring in 2010. There was other writing that was more important, and I had reached what seemed like a good stopping point, about halfway through the story - and, coincidentally, I had reached 50,000 words.

Anyways, I still think about the story sometimes. Maybe someday I'll finish it. Yesterday I was thinking about the world that I had set up, vaguely, in the background. It's a reflection of some of my political thinking, but I never made any of it really concrete, just alluding to details here and there. A lot more of it is put together in my head than there in the story. Since it's on my mind, though, I thought maybe I'd write some of it out here. First some background for the background:

The story is centered on a few characters over the course of two days of what I would call the Second NLE Crisis. The setting is the distant future (the year Akan Era 852, sometime during the fourth millennium of the Common Era), on the planet Akan, populated by hundreds of millions of humans. Soon after the initial phases of the colonization, interstellar ships stopped arriving from Earth, and there has been no contact between the worlds in more than 800 years. It is unknown what stopped the ships, but some sort of natural or man-made disaster is assumed to have severely disrupted Earth's civilization, which was already dangerously unstable.

The Akan civilization is based around something called the City System, where the only sovereign entities are city-states, of which there are hundreds, and where certain rules maintain the independence and adaptability of the units of the System. Different Cities (always capitalized in this context) operate under different rules, according to their own preferences. Some may be libertarian or anarchistic, some may be communistic or totalitarian, etc., but they all have to abide by certain City Laws, established long ago, which ensure the stability of the overall system. There must be a dozen or more of these Laws, but I have only really thought about a few of them:

The 1st City Law institutes an Intercity Congress, where revisions to the original laws are discussed and passed. Decisions can only be made through consensus of the entire body, so as new Cities are added, deliberations slow down more and more. Revisions to the Laws typically take decades. The more immediate responsibility of the IC is to monitor the City Law Enforcement Agency, which is discussed below.

The 2nd City Law is that no City may make administrative decisions for another: the Independence Law. Administrative control is measured quantitatively, and if more than half of a City's administration can be traced to other Cities, then it is the responsibility of demonstrably independent Cities to rectify the problem through mediation. Naturally, the methods for measuring control are controversial, and change with the times, but they have to be universally applied and agreed on through consensus. A result of the consensus requirement is that while the control measures do change with the times, they change very slowly, and the main problem with them is usually that they are seen as out of date. Consensus on new terms can take generations, and only a few dozen methods have been fully instituted over the 852 years of Akan history.

The 3rd City Law is that all citizens must be able to enter and leave Cities freely: the Open City Law. So, Cities cannot block entry by citizens, and cannot prevent citizens from leaving. The standard exception to this rule is that citizens may be arrested or imprisoned; immigration and emigration, however, are held to be out of the fundamental control of the Cities. Some cities may have such strange and insular cultures that few outsiders want to join, or that few insiders feel capable of disconnecting, but these situations are quantified as matters of individual choice and not City coercion. Cities may have requirements for City registration, and may tax insiders or outsiders differently, but requirements are quantified as prohibitive or not. Again, the methods by which such quantifications are made are instituted by consensus, and change but slowly. Before and after the comet Yandel-Yokum impact of AE832, the 3rd Law was widely suspended as Cities struggled to accommodate enormous population transfers from destroyed or abandoned Cities. This disruption of the System resulted in numerous crises across Akan, including the First NLE Crisis of AE835.

The 4th City Law is that intercity aggression and standing armies are jointly prohibited. Police forces, when they exist, are required to demonstrate that they cannot project force beyond the borders of their respective Cities. Intercity violence triggers intervention from other, non-involved Cities. Given that much of human life in AE852 is virtual, lived through machines and computer networks distributed, in some cases, over many Cities, just what constitutes 'force' is a recurring controversy. It is currently agreed that physical and virtual force should generally be treated as equivalent.

The Tensor Law, effectively the Last City Law (if I knew exactly how many there were, it would be the Nth), institutes the agency of the City Law Enforcers. These are a system of inspector-judges who quietly monitor and evaluate the legal performance of the Cities. Their power derives from the CLE Tensor, a mathematical instantiation of the City Laws, to which individual CLEs (in English, "see-el-eez", not "kleez") are neurologically bound - the Tensor acts as a key for the agents, giving them unrestricted access to City networks all over Akan, but also acts as a strict behavioral constraint: agents are prohibited from directly interfering in City affairs. All of their judgments and observations are reported through public channels to the Intercity Congress, although they strive to act in secrecy. The CLEs are not under the direct control of the IC, however: they act autonomously in accord with the Tensor. CLEs are widely seen as incorruptible and infallible. Changes to the Tensor, like changes to any individual City Law, require IC consensus, and happen only very, very slowly.

By the way, a story from a while back was told from the point of view of an individual CLE.

Other Laws establish the responsibilities and limits of Intercity governance of natural resources, of interplanetary space, and of inhabited areas outside City borders.

The Second NLE Crisis, the subject of the novel, is where this system is subjected to a serious and perhaps permanent breach. What happens when the best course of action is to abandon a deeply entrained system that has persisted for nearly a thousand years?

Friday, July 20, 2012

update

that last entry was kind of embarrassing. guess it's worthwhile to keep a record of peaks in frustration.

anyways, kind of better now. with the data from the new rivalry experiment, i was 1) making an error in the processing, and 2) doing a dumb analysis even with the error corrected. i did the 'better' analysis, which i had had in mind but thought would be more complicated than it was (and which did require that the error be fixed), and got basically what i was looking for: before a target is reported as seen, there is an increase in its strength.

i then tried to expand it out, looking for effects in non-target locations. this also seems to work; i'll have to figure out how to separate the effects of spatial correlation in target strength, i.e. a part (maybe the major part) of these peripheral effects will be non-interesting because they will be firmly tied to the central target effect.

i also will need to make the analysis more specific, since each time a target is reported, it matters whether the transition is from a report of a different target, a 'mixed' report, or an absence of report. this makes a difference in how the data are interpreted: 1) the increase in stimulus strength caused a dominance change (if there was an immediately previous report of a different target), 2) the increase in stimulus strength firmed up an indeterminate state (if the previous report was 'mixed'), or 3) the increase in stimulus strength made the current dominance state noticeable, i.e. made the target color visible.
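the core of the current analysis, for reference - a sketch with stand-in names (`strength`, `report_frames`), not the actual analysis code:

```python
import numpy as np

def report_triggered_average(strength, report_frames, window):
    """Mean of the target-strength time series over the `window` frames
    leading up to each report of that target.
    strength      : (n_frames,) stimulus strength at the target location
    report_frames : frame indices where the target was reported as seen
    window        : number of frames before the report to include"""
    segments = [strength[t - window:t] for t in report_frames if t >= window]
    return np.mean(segments, axis=0)      # shape (window,); look for a pre-report rise
```

splitting `report_frames` by what was being reported just beforehand (different target, 'mixed', or nothing) and running the same function on each subset gives the three cases above.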

so, i expect that i will need much more data to make these sorts of different relationships clear. i will collect another half-hour's worth of data today, then i'll have more than an hour total. may be able to get something interesting out of that...