Saturday, June 30, 2012

summary

Alright. It's been a lousy week overall. The migraine stuff was fun, but a distraction. Proposal failed. Barely managed to start work on what I was supposed to start at the beginning of the week. Actually, kind of a typical week. Also, my Diablo III hardcore character died, basically because I forgot to turn the video card on. So I am not playing that game anymore, ugh.

I'm downloading files for doing the driving-vid binocular rivalry experiment to my laptop now, so I can work on this at home. Must have data on this thing by the end of the week. I am optimistic, but I should have been here 3 days ago. Blame the migraine.

Really, the main reason for this entry is that by making it, I have 13 entries for the month of June. That's an all-time record for this journal: in April 2010, when I was making all those internet posts, I was at 12, and in May 2010, 11. Technically that was my peak posting activity, though most of those entries were short "look what i saw" sorts of things. Most of my entries for this year, since I decided to double down and regenerate my writing skills, have been semi-substantial. I'm trying to get fluent again, and I think it's kind of working. Hopefully we'll take off from here.




Yeah, that's a plot for this journal (I am strenuously avoiding using the word "blog", though I fail now and then. I am intermittently succeeding at replacing "post" with "entry"). I have plots for everything. Abscissa is year, ordinate is number of entries. The blue markers are just counts; the red line is a 'recent activity' smoothing, just an exponential decay function applied to the data. I'm not going to analyze it here. Combined with my memory of time and place, it speaks for itself. Too bad you aren't me.
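
(For the record, a line like that red one can be made with nothing fancier than this - a matlab sketch, where the counts and the decay constant are made up, not my real numbers:)

    n   = [3 1 0 2 5 4 7 2 1 0 6 13];         % hypothetical entries-per-year counts
    tau = 2;                                  % decay constant, in years (assumption)
    act = zeros(size(n));
    for t = 1:numel(n)
        w      = exp(-((t-1):-1:0) / tau);    % weights decaying into the past
        act(t) = sum(w .* n(1:t)) / sum(w);   % exponentially-weighted 'recent activity'
    end
    plot(1:numel(n), n, 'bo', 1:numel(n), act, 'r-')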

Thursday, June 28, 2012

priorities, summer 2012

augh... so bored... let's do a post on what i'm supposed to be doing right now, that i'm not doing: yes, it's time to catch up with the priority worksheet!

yes, i've been keeping it up to date every month or so.


there it is. the CI manuscript (MS_Class) is with E* right now, so i have a couple of days i could be devoting to the equivalent priority: ProjADI. but i don't want to do ProjADI. i am depressed about my failed fellowship application and all i want to do is work on modeling my stupid migraine auras, which is not what i am being paid to do, and which, as far as i can tell, does not promise to reveal anything sufficiently new or interesting to warrant spending my time on - i.e. unless i actually do start collection of real psychophysical data on that, it's not publishable. and, it's not on the priority list. migraine modeling has a priority of 0, do you hear that?

the next highest priority is ProjPrism, which i should get on with before all my subjects have moved on to other places. that's going to take some creative programming though, and i don't have a good idea yet of just how i'm going to do it. i really need to go and just sit down in the lab and figure it out. but i'm depressed, so that's my excuse. instead i'm here looking at the internet and writing a stupid journal entry on what i should be doing.

one more thing:


there you can see the evolution of my priorities over time. i do think this system is helpful in my evaluation of my projects. i do seem to be kicking off things with high priorities, though nothing has officially dropped off the list yet. waiting for reviews on two papers (MS_class and MS_blur); if those go okay, then maybe they can both be off the chart by end of the year. maybe MS_class, too.

Projs, though, need to get Projs moving. just sitting there. you're just sitting there. get up. go to the lab. go. go go go.

Tuesday, June 26, 2012

scintillating scotoma 3.a

I went ahead and figured out how to do the cortex transform. There are many papers describing the equations; some good recent ones (like this one) are basically reviews at the same time that they tweak one or another aspect of the basic model. It's not really that complicated; the log-polar transform is very similar, except that the angles are calculated outside the logarithm. The space-V1 transform is the logarithm of a complex number representing the spatial coordinates, plus a constant tied to the limit of the foveal confluence. The paper I linked above describes what further steps can be taken to get the transform more precise, accounting for meridional anisotropies. They go further, but I stopped there. The basic model was proposed by E.L. Schwartz in 1977, and hasn't changed much since then; I'm using Schira et al's version with their shear equation, and some parameters they cite in another paper.
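
(For concreteness, the core of the basic, unsheared map is just a one-liner. A matlab sketch, with placeholder values for the two constants rather than the Schira et al parameters I'm actually using:)

    k   = 15;                  % mm, cortical scaling constant (placeholder)
    a   = 0.7;                 % deg, foveal constant (placeholder)
    ecc = 5;                   % eccentricity, degrees of visual angle
    ang = 30 * pi/180;         % polar angle, radians
    z   = ecc * exp(1i*ang);   % visual-field position as a complex number
    w   = k * log(z + a);      % cortical position; real(w) and imag(w) are in mm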



This is similar to the second plot from the last post, but you will notice the geometry is different, as it gets narrower towards the fovea (lower part). Colors indicate time in minutes as shown by the colorbar. The grid drawn in the background isn't labeled, but it's easy to understand if you've seen these before. The lines going up and down are, from left to right, the superior vertical meridian, the left superior 45 degree meridian, the left horizontal meridian, and so on. From bottom to top, the left-right lines are spaced 5 degrees of visual angle apart. You don't see the first one until about 30mm up. The origin in this plot, (0,0), is where the foveal representation converges with V2 and V3, the foveal confluence.

This is interesting, the foveal confluence. I probably had heard of this before and forgotten it. I actually stated to E* yesterday that I didn't know what was on the other side of the foveal edge of V1, though I knew that the edges are flanked all the way around by V2. In fact, V1, V2, and V3 foveae all meet in the same place. This is apparently a relatively poorly understood region of visual cortex; imaging and physiology studies have focused on the more peripheral regions. The reason is that one can't be certain of what is being studied when looking closely at the confluence, since the three areas are mixed together in a fashion that is still not well understood. I'm going to read more about this (the main writing on it is by the same group as the paper I cited at top; this one explains things up front).

Okay, so that map. What can I do with it, now that I have the coordinates right (or as close to right as I can)? Yes: I can measure the rate of progression of the wave in cortical distance over time. Awesome. I don't have the best method worked out just yet, but here's my approximation, summarized in the last figure below:

On the left, we have the same coordinates as in the figure above. The plotted line is the mean, over time, of the recorded scotoma regions. This is not a great measure of the wave's position: as the scotomas got further out and larger, I couldn't trace them completely, and because tracing takes time, at a given epoch a trace might be in one place or another, which shows up here as a back-and-forth wobble, on top of whatever limiting bias is imposed by the screen size, etc. Still, it's okay. We know this because of the next plot: on the right, we have the distance of that waggly trace (from its starting point near the bottom of the left plot) as a function of time. A straight line. That's not why we know it's okay, though; it's the slope of this line: 2.76 mm/min. This is extremely slow, but exactly in the realm of cortical spreading depression. Not going to give references on that (need to save some work for an actual paper on this business), but they're there. Pretty sure I'm doing this right.
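
(The distance-vs-time fit itself is nothing exotic. A matlab sketch, where xy_mm and t_min are made-up names for the mean cortical position at each sample and the matching times in minutes:)

    d_mm  = sqrt(sum((xy_mm - repmat(xy_mm(1,:), size(xy_mm,1), 1)).^2, 2));  % distance from start
    p     = polyfit(t_min(:), d_mm(:), 1);     % straight-line fit of distance vs. time
    speed = p(1);                               % slope, mm per minute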




Monday, June 25, 2012

scintillating scotoma 3

another event is now shimmering its way out into my far left periphery. i used the dynamic perimetry program i had written; seems to work. some improvements can be made; need to include warnings or preventions for the cursor going out of screen. should implement cursor size info/adjustment. maybe cursor color or shape.

i mention the last two because definitely, there is a general sort of aftereffect. it's scotoma-like, but with no clear location of the scotoma; i.e., it feels like things are missing, e.g. if i put my hand out about 15-20 degrees left, stuff just feels kind of scrambled. actually, there is definitely a blind spot about 30 degrees left, i just found it. this is too far to measure with this screen; perimetry stops at ~20 degrees, i guess.

again, i don't know what the precursors were. been depressed all weekend (see previous post); saw some funny spots yesterday, and got preoccupied with some really visible floaters yesterday afternoon at tkd, which is probably unrelated. seems like the main indicator is just a superstitious feeling that "i wonder if it's going to happen again". maybe that's the CSD running through my frontal lobe somewhere.

note that i'm basically having a monthly period: the first of these (that i recorded here) was April 24, then May 28, and now it's June 25 (24).

i've been meaning for a while to list the other occasions that i can remember. i may be able to remember them all. before these past three, it happened twice earlier this year: once was during, uh, sex, which was weird, in China (within a few days of new years), and the second was on a sunday afternoon in late january or early february, when i was on my way to tkd, walking around cleveland circle.

before that, probably have to go back a year. it's happened several times with me just sitting here at my computer; probably half the times. i announced a few on facebook. i woke up once, early last year, with the SS starting right off. makes me wonder if it happens sometimes when i'm sleeping. it's happened after sex a couple of times. only once in the lab, i think, after j** checked my eyes. except for that morning one, i think it's almost always at night.

i would guess that, all together, this has happened 10-15 times in the last 2.5 years. until now, i don't think it had happened in summertime, only winter and spring. lots of headaches, maybe biweekly on average, without interesting symptoms.

i think these map data will be usable, and better quality than the paint drawings. not as pretty, but i can just generate post hoc pictures. i'll process them in the next couple of days, probably tomorrow. basically, it seemed very similar to the last two times, except that after the first minute or two of scotoma (again, i noticed it had started because, suddenly, i couldn't read), which i managed to record, the scotoma disappeared, or at least i couldn't find it. i thought maybe i had scared it away, but then it returned, right on track. anyways, update later when maps and stats are done.

*update*

some plots: sorry, didn't label any axes. descriptions accompany each:


This is the progression in spatial coordinates. color represents time in minutes, which you see in the colorbar. i also tied marker size to time, to help represent the thickening of the scotoma with time. 


This is progression in logpolar coordinates; x-axis is degrees from leftwards (in the last post, i was coding angle relative to 'up'), y-axis is log(degecc + 1). now it looks a lot more like a straight wave, though there is a bend to it, as if the wave has a 160° trailing angle. maybe that would be straighter with a more realistic cortical space transform? maybe not. i'll get around to finding out later this summer.
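
(matlab sketch of the coordinate change, assuming xy is an N-by-2 array of the drawn points in degrees, x rightward and y upward; the sign convention for 'degrees from leftwards' is my guess:)

    ecc  = sqrt(xy(:,1).^2 + xy(:,2).^2);      % eccentricity, degrees
    ang  = atan2(xy(:,2), xy(:,1)) * 180/pi;   % polar angle, 0 = rightward
    angL = mod(ang, 360) - 180;                % signed angle measured from leftwards
    L    = log(ecc + 1);                       % the 'logecc+1' axis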
 


This last one is just binning the previous plot into 10° strips, plotting logecc+1 against time (like in the previous headache post). if i assume that the speed of the wave is closest to the fastest estimate that i get from these kinds of fits (it must be faster, since i'm only measuring at an angle to the wave), then i estimate that for this event, the speed of the wave was at least 0.248 logecc+1 / min. this compares with estimates of 0.250 and 0.258 for the last two events (i cited a lower number last time; that was the median, this is the max). once i learn to transform logpolar into v1 coordinates, i'll bother to do the extra geometry to measure the true transverse speed of the wave.
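
(and the strip fits, continuing the sketch above; t is the time in minutes of each drawn point, and the bin edges and minimum-points cutoff are made up:)

    edges  = -180:10:180;                       % 10-degree strips of angL
    slopes = nan(numel(edges)-1, 1);
    for k = 1:numel(edges)-1
        in = angL >= edges(k) & angL < edges(k+1);
        if nnz(in) > 2
            p         = polyfit(t(in), L(in), 1);   % L vs. time within this strip
            slopes(k) = p(1);                       % log(ecc+1) per minute
        end
    end
    speed_est = max(slopes);    % fastest strip, the closest lower bound on the true speed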

Sunday, June 24, 2012

nope

okay, so, that grant i applied for? failed. not discussed. not.. even.. discussed.

so, that's disappointing. hit rate was just 10-15%, but i felt like i had something. i've seen others with the same fellowship, and i don't think what i was proposing was of any lower quality. maybe a bit further from the norm, proposing things two steps from what anyone had done before - probably better to go one step at a time. there's also the fact that i'm obviously an underachiever. i can't hide it anymore - CVs don't lie. an underachiever with an abnormal proposal.

hurts my feelings, i guess. how can it not? well.. like i've been telling everyone, as a preemptive defense, i knew i wouldn't get it. a long shot. but it wasn't some self-fulfilling bullshit. i did my best. there's good stuff in there, and i'll do it anyways. but not putting it in the top half, not putting it on par with the rest of the proposals. that does hurt. i was hoping for a rejection despite a good score.

i think i'll probably still get comments back on it. i think. d** got comments even when his wasn't discussed in the last round.

let's rephrase the bit about being an abnormal underachiever. how about.. outsider?

let's get romantic.
tell the truth.
you see yourself as an outsider,
don't you?

i don't do it on purpose.
i don't try to be on the outside
in order to satisfy some requirement
that i've set for myself.

it's just what happens.
it's what i'm drawn to.
i'm drawn away.

you make choices
that put you on the outside.
your mentor is an outsider, and
you are the outsider in the lab.

in groups of friends,
i am the one who isn't
part of the group,
who tagged along,
happily accepting all invitations.

the underachiever.
the one you don't know.
i reject what they accept.

always the quiet one.
the different one
who finds himself in strange places.

it shines through
even in an NIH fellowship proposal:
you are a risk.

yeah, screw you. i wrote all that. i wrote it, then edited it into a poem. it's because of my self consciousness, not in spite of it. i am afraid to confront what i am, but i just did it. fine, i'm mad, and my feelings are hurt. i'm a pretentious kid. i'm used to it.

have to get used to this, kid. i hear there's a whole career of this ahead. have to keep writing these things and sending them in. some will succeed, some won't. i'll keep doing what i want to do, this is my guarantee.

Monday, June 18, 2012

gain

why would a process vary with the square root of wavelength?

with constant bandwidth, e.g. receptive field area will vary with the square of wavelength (inversely with the square of frequency). the linear size (radius) of the r.f. will vary directly with wavelength. a process in volume would vary with the cube of wavelength. how do you go backwards from here?

okay, so the inhibitory inputs are all squared. i want the weights on these inputs to be proportional to the square root of filter wavelength. i could get a step closer by making the linear inputs proportional to wavelength before the squaring, which changes the question to:

why would a process vary with the inverse of spatial frequency? in my mind, the weights are still tied to the size of the r.f., so that the bigger it is, the more inhibitory connections it has. strictly speaking, this would make inhibition vary with the square of wavelength.

a bigger r.f. would have more inhibition, then. i am just making this up. so, an r.f. that's twice as big would have four times the inhibition. fine, but then why wouldn't it have four times the excitation? they would balance out. but maybe the excitation isn't balanced. maybe excitatory inputs are sparser and sparser for larger r.f.s. is that true?

if it's true, then effectively the gain for different wavelength r.f.s should increase with frequency, because the density of excitatory inputs should increase with frequency.
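
(spelling that guess out as arithmetic - the density-goes-as-frequency part is the assumption, the rest is just the scaling from above:

    N_inh ∝ area ∝ λ²
    N_exc ∝ area × density ∝ λ² × (1/λ) = λ
    N_exc / N_inh ∝ 1/λ = f

so the excitation/inhibition balance, i.e. the effective gain, goes up with frequency.)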


i feel like this is getting somewhere... atick and redlich, brady and field... somewhere in there...

Saturday, June 16, 2012

stupid lazy

Alright, two posts in a row of me admonishing myself. Publicly, in theory. In theory, this is more embarrassing than it actually is.

It's Saturday evening. I have done nothing all day. Nothing. Played a computer game all morning. Read the front page of the WSJ. Ate a bowl of noodles and drank a pot of coffee. Played some piano. Looked at lots of funny gifs. Tried again to get Endnote properly installed on this stupid computer, and failed. X.0.2 + Office 2007 + Windows 7 = not work.

I have that SID manuscript open. I need to clean it up, add in those two other references I found but haven't really read because they look really dull. They're just 'relevant', in a parallel sense, but nothing obviously consequent. That's what led me into that stupid Endnote cul-de-sac again. I can do it remotely, so what. Just now I opened up the For Authors page on the journal site.

The CI paper is fine. Adding the MTF into the calculations didn't have as big of an effect as I expected, or hoped. Scaled filters are pretty resilient.

I haven't studied Chinese much in a while. I could be doing that.

No, no. God dammit. SID paper. Finish the goddam paper and upload it. There is no excuse. The paper is finished. Send it in. Dammit. I hate you.


Thursday, June 14, 2012

ohhh...

morning, myself. heart.
end. voice. nightingale?

This is just a diversion. There are things in life, every day, that we want to reach out and touch, or interact with, or follow, or watch, but we can't, because there are other things that we have to do instead. Other things that we should do instead. Self control can be suppression of the self, but sometimes it is just being rational, maintaining normal, keeping things the way you want them. Your mind is made up of many different parts which, on their own, are not as intelligent as you are. They don't have the same priorities as you. They don't even have the same memories as you - some of them have only existed for a few days, or months, or years. Maybe, some of them, you can remember when they came into existence. You can remember, because you are the one governing the rest, corralling them. You have to choose, at these instances, what to do - even if these things in life are like lures, and you see that between you and this other possibility, even just what ultimately would be a fleeting bit of soon-to-be-nothing, is a transparent membrane of a single impulse.

It's just a diversion. Maybe don't go back there. Maybe come back here, and see what you did, to keep from going there. Remember what there is in other places. Keep things level. Life is hard.

Wednesday, June 13, 2012

monitor MTF? sure!

Okay, so I need to know the spatial frequency transfer function of the monitor I've used to do most of my experiments over the past couple of years. I've never done this before, so I go around asking if anyone else has done it. I expected that at least B** would have measured a monitor MTF before, but he hadn't. I was surprised.

Still, B**'s lab has lots of nice tools, so I go in there to look around, and lo, T** is working on something using exactly what I need, a high speed photometer with a slit aperture. So today I borrowed it and set to work doing something I had never done and didn't know how to do. It was great fun.

D** helped me get the photometer head fixed in position. We strapped it with rubber bands to an adjustable headrest. I've started by just measuring along the raster. The slit is (T** says) 1mm, which is about 2.67 pixels on my display. I drifted (slowly) squarewave gratings with different wavelengths past the aperture - this was more complicated than it sounds. The monitor is run at 100Hz, and CRTs flash frames very rapidly, just a millisecond, so getting the photometer settings just right (it runs at 18kHz) took a bit of adjustment, as did figuring out good settings for the gratings and a slow-enough speed to drift them at (I'm limited by the 10-second block limit imposed by the photometer)..

Anyways, I got back temporal waveforms which I treat as identical to the spatial waveforms. As expected, the power of these waveforms drops off as the gratings get finer. But, I know that it drops off too fast, because of the aperture. If the aperture were exactly 1 pixel across, and if it were aligned precisely with the raster, and if a bunch of other things were true, then I could know that each epoch recorded by the photometer reflected the luminance of a pixel, and my measurements would reflect the monitor MTF. But, like I said, the aperture is 1mm, so each 10ms epoch is an aliased average over >2 pixels. I'm not even thinking about the reflections from the photometer head (there's a metal rim to the aperture T** had taped on there).

My solution: code an ideal monitor, record from it with the same sized aperture, and divide it out of the measurements. I can then guess a blur function - Gaussian - and fit that to my (4) data points. That's what I did: here is my first estimate of the vertical MTF of my Dell p1130 Trinitron:

The Nyquist limit for this display, at the distance modeled here, is about 23cpd, so I guess this Gaussian is in about the right place. It's hard to believe, though, because horizontal 1-pixel gratings look so sharp on this display. I feel like these must be underestimates of the transfer. I am nervous about how awful the vertical will be...
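
(A matlab sketch of that divide-out-and-fit step; meas_amp, ideal_amp, and freq are made-up names for the measured grating amplitudes, the same thing simulated on a perfect monitor, and the grating frequencies, and least-squares via fminsearch is an assumption about the fit:)

    rel_mtf   = meas_amp ./ ideal_amp;                    % aperture/averaging divided out
    gauss     = @(s, f) exp(-(f.^2) ./ (2*s^2));          % guessed blur: a Gaussian in frequency
    err       = @(s) sum((gauss(s, freq) - rel_mtf).^2);  % squared error over the (4) points
    sigma_hat = fminsearch(err, 10);                      % best-fitting width, in cycles/deg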

*edit*
It wasn't too bad, just a bit blurrier than the horizontal. Still makes me suspicious that I'm underestimating the horizontal. Not going to bother putting up plots, but here's my estimate of the pixel spread function (you can just see that it's a little broader left-right, that's the vertical blur):
 

Thursday, June 07, 2012

zimbra

internet post!

So, earlier today, I got an email from the "Security Operations Lead" at NASA Ames, saying that a whole batch of people's passwords and account names had been accessed. I had an account there for a meeting I went to earlier this month; coincidentally, immediately after attending that meeting, I noticed that one of my peripheral email accounts had been accessed, and at the time I blamed it on the hotel.

Just now, I get an email from something called Zimbra, informing me that:
You requested your Email Account  on June 7, 2012 at 11:02 PM CS to be deactivated and deleted from a location in with this IP number; 201.130.47.33.
2. Click on  (https://secure.zimbra.com/verifyf?intl=us&.partner= cancelrequest) to cancel this request; else your email account will be deactivated and deleted within 24 hours
The sender's address was "bankofcard@yahoo.com". Yeah. Zimbra is apparently some sort of open source email server software for Linux machines. So this doesn't have anything to do with Zimbra.

The IP address leads to a machine in Mexico, with the URL niie2e.nextel.com.mx. This machine seems to have all ports open, i.e. it's either a totally open proxy server, or some sort of disguise for something else.

That URL to 'secure.zimbra.com' was actually an alias in the email (no I did not click on it, I am not stupid), for "http://www.contactme.com/4fcf723e2e22a2000103d1b6". From their website I can't tell what the hell contactme is, but it looks like their site was probably co-opted. I wonder what's there...

Anyways, the relationship to the NASA thing is just coincidental timing, but makes me a bit paranoid.

*edit 6-19-12*
Got called down to the network office this morning to change my password; apparently the NASA thing had gotten distributed to everyone whose IDs were leaked. The admin forwarded to me the info he'd gotten through the Harvard IT director, and based on that I found this:


http://pastebin.com/nSJ9Nn9Z

who knows how long that link will stay alive. anyways, it's a list of the email addresses, but no passwords, for everyone that attended that workshop.

the header on the document:

[HACKED] NASA.GOV - AMES RESEARCH CENTER - By ZYKLON B
 ...
Join me on twitter : https://twitter.com/#!/bzyklon

Author : ZYKLON B
Target : NASA Ames Research Center - Ocular Imaging Laboratory (ace.arc.nasa.gov)
Reason : Curiosity, Challenge.

IS THE TARGET COMPROMISED ? YES.
Note : NASA Glenn research center already hacked 5-6 weeks ago.

anyways, that's interesting. you look down the document, and there we all are! yeah, hackers have twitter accounts!

temporal friction

We came to see the problem as one of material friction, rather than of abstract entropy. Information was still the key, but it was finally clear that what we perceive as information is just the tip of an iceberg.

The resolution of the dark matter problem demonstrated that the deeper structure of reality is anisotropic, made up of tiny needles of entropy - a material structure, now, not a mathematical description. And these needles, they are all pointed towards the past. Whatever part of the universe that moves away from the past is brushed through those needles, which scrape and tear and lacerate the informational structure that we recognize as reality.

That reality we came to see as the product of a long process of selection - some informational structure is torn apart as soon as it forms, while some is hardier, even self-correcting. Life was a structure, we saw, that had adapted to a long path through the deeper universe, even making use of the dark matter anisotropies, to store and convert energy, to drive its own processes of selection and sub-adaptation. But no forms of life could navigate through the darkness. Not until the humans came, and then, for them, it became a prime concern.

We navigated the Earth's surface, its skies and its seas. We navigated the first darkness, of space, and the surfaces of other worlds. All the time, there was that constant abrasion, wearing off man after woman, nation after world, age after eon. All along, we were looking for the way through - the clear way through the darkness. We found it - as I said, we came to see the problem as one of material friction.

Moving against the anisotropy disrupts information. Moving with it smooths information, conforms it, puts it in neat rows and columns, but this has the strange effect of making life uninteresting. If you're adapted to something, then when you lose it you notice it's gone. It's the same with entropy - when you go through life with information conforming rather than disrupting, you go from senselessness to perfection. This can be beautiful, but too often it's just another level of senselessness, on top of the fact that conformative memory processes take a lot of time to install and master, and you find yourself missing the old order of things, no matter how far beyond human you've gone. To have your thoughts and actions gather energy, collect and bind heat, and deliver it into your body... a useful novelty, a tool or practice, but nothing fundamental.

What was interesting was when you could remove the anisotropy altogether. No arrow of time, except for what you choose to arrange. Laws of thermodynamics become adjustable, optional. When we learned not just to navigate through the darkness, but to engineer it to our own purposes, then...

That, more than any other development, was what changed us.

Tuesday, June 05, 2012

transit venus

SO, I had been planning to actually get my old reflector out and make a little projector to watch a bit of the transit, before the sun set today. but, it's been days since the sun was out, so I never had any chance to test anything; actually, with no way to test a setup, and with the forecast looking like it was going to be the way it was, I didn't wind up actually putting any of the parts together.

In other words, it was too cloudy to see it from Boston.

I've been watching the NASA feed of it, it's better than nothing. I think the transit will be over soon; I'll stay up as late as I can to see if I can't see the planet crossing the limb.

That picture is awesome, right? NASA again.

Sunday, June 03, 2012

sum.. sum.. summm... summation?



ok, so i'm working on this classification image paper, and it's going really well, and i'm pretty happy about it. i feel like i've got a good handle on it, i'm writing it in one big shot, the analyses are all good, the data are fine, it's all under control. i'm pretty happy about this one. i keep telling myself that, and then noticing that i keep telling myself that. i guess it's in contrast to the blur adaptation paper, which was such an ordeal (took 2 years, basically), and then the magnification paper, which just isn't much fun. i feel myself moving down that priority list - hey, i should do a post on the priority spreadsheet! i made some nice plots in there!

anyways, the CI paper, it's going well, but i'm constantly on the lookout for problems. so tonight, i finally thought of one. not a crucial, deep problem, but a problem with how i've calculated some of the modeling stuff, a serious enough problem that i'll probably have to redesign a bit of it before doing the final runthroughs. i'm writing this entry so i can just sort of kick off thinking of how to solve the problem. here it is, right plain as day in this little cluster of plots from last year's poster, which has become this fine little paper:

the problem is spatial summation - or, the problem is that you don't see anything about spatial summation in those plots. for the main models, i have a CSF that was measured using test-field-sized images. the thresholds measured must reflect a sort of spatial summation, then. the problem is, i've been using those thresholds to set the baseline thresholds for the models, and then summing over the spatial responses. i had kind of had an inkling that i was being lazy there, but had overlooked how obviously stupid that is. i haven't tested the models on the threshold tasks, but i think that they would necessarily get much lower thresholds than the humans; spatial summation should give you a lower overall threshold than you would get for any single location. i need to think of a quick way to solve this, because i don't want to wind up estimating the model CSF through simulations...

and the simulations then raise the problem of noise, and how many samples should there actually be, etc etc... i guess there are benefits to doing things the simple way first, but i think i've run myself into a weird little corner here. gonna need to talk to somebody about this, probably..
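
(to make the summation worry concrete - this is just the generic scaling under a Minkowski-type pooling rule with a made-up exponent, not my actual model:)

    beta = 3;               % pooling exponent (assumption)
    N    = 1:100;           % number of pooled spatial locations
    gain = N.^(1/beta);     % factor by which pooled sensitivity grows
    loglog(N, 1./gain)      % predicted threshold relative to a single location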

Monday, May 28, 2012

scintillating scotoma 2

another aura! (no post in a month, and now it is forced!)

right now it's flying out to the far left field, but there's also a weird 'rough patch', a bit nearer in that appeared late on. i got a good map. i didn't expect this one coming on at all, except for what i think is just sort of standard low-level paranoia which i've developed since this all started. it's hard to tell what's prodrome and what's just coincidental suspicion.

anyways, i was reading through a section of the classification image manuscript, and, yep, can't see the letters. i think that 8/10 of these things so far have begun while i was reading. not sure if that's because i spend the majority of my time working with text in one way or another, or if that actually pushes things over the edge.

***

it's been a few hours now. here's the map i drew of this last event:

similar to last time, except that it's in the left field this time. it started out below fixation, and did the slow arc outward, following a really similar path to the last one. below, i'll show you just how similar they are.

as for the rough patch: in the plot above, notice that in the superior field there's some green (and hard-to-see gray) scribbles over the neat arcs; that was a region that i noticed late which wasn't blind, and wasn't flickering, but was clearly..  unclear. it's at about 10 degrees eccentricity, so it's hard to see well out there anyways, but it was obvious that something was wrong. i could see the scribbles that had already been drawn, but it was all very indistinct and jumbled, and i couldn't see the motion of the cursor even though i could tell that it was laying down green/gray scribbles of its own. without any other explanation, i'm going to guess that this was the fabled extrastriate scotoma - my V2 was getting some CSD!

ok, now some analysis. first, log-polar maps. i haven't gone and gotten/worked out a cortical remapping scheme, but putting things in log-polar coordinates is almost as good. actually, what you do is put them in log(ecc+1)-polar space, so you can see where the fovea is (zero in case you're dull). here are logecc+1-polar-time plots for the last two events (let's say L = log(ecc+1), for easier reference):

time is measured here from start of recording. these data are smoothed versions of the drawn maps in 10 degree radial steps. the scales are the same except that these are for opposite hemifields. the z-axis is color coded.

i mean, you can just see that those two maps are almost identical. i got more data the second time because i wasn't occupied with working out a system (btw i was caught off-guard; i wrote a matlab script to record in real time, but it didn't work in matlab-64, and i put off doing the -32 install because... ah.. it's done now). the origin is similar - look at these plots:

sorry about the colors. these are the same data as in the scatter plots above, except collapsed over the x (angle) axis. actually they aren't quite the same: here i've used linear regression to estimate the earliest time (before recording began) that the visual event could have begun, and subtracted it out of the time axis, so these plots start when recording began, with respect to when i estimate the event began. so, basically, this aligns the data. 2 things: one, the rate of advance (remember that it's radial advance, so these technically are distorted plots; i assume that's why the slope changes with angle) is basically identical in the two cases, ~.16 L/min. and two, the origin, i.e. where it all begins, is ~180-170 degrees in both cases (that's directly below fixation).
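
(matlab sketch of the alignment step for one event; t and L are the time in minutes and log(ecc+1) of each point, and taking the smallest drawn L as the starting point is my assumption:)

    p       = polyfit(t(:), L(:), 1);     % L = p(1)*t + p(2)
    L0      = min(L);                     % where the wave is assumed to start
    t_onset = (L0 - p(2)) / p(1);         % estimated (negative) onset time
    t_al    = t - t_onset;                % time axis aligned to the estimated onset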

i've got other analyses, but the above sums up the interesting stuff. i know i've seen these things do weirder things, following more difficult-to-understand courses, and i hope i see one of those next time i'm able to record this business.

Tuesday, April 24, 2012

scintillating scotoma

a few minutes ago (~22:35), reading, and i notice that letters are hard to see. that sensation of having a bright afterimage at fixation; it's moving rightward, usually it's been leftward (I haven't taken notes on the past 2 occurrences, sadly..); this is the first of at least the last 4 to go through the right field, maybe more.

it begins with just a weird sense of scotoma-ness, very near fixation, but the blind areas are hard to pin down - they seem to change very rapidly, or else it's more a sensation of blindness than actual blindness. it's strange how it sticks at fixation even as it arcs out into the periphery; it seems it always arcs into the lower field, after arcing just a bit above fixation. i've not yet noticed one passing across the whole field; maybe it's restricted to one hemisphere?

it's almost gone at this point (almost 30 minutes after the first signs), and all that's left is a flickering at the very top of my visual field, as if there's a light flashing on my eyebrows; interestingly, if i look up, it disappears, which is strange because it should be attached to the field location. i can look up, it disappears, look down again and it reappears. maybe it was an interaction with the reflection of room/computer lights off my eyeglass frames? can't test, it's all gone now..

and i have a headache (actually it started a few minutes in; the light show was so slow to start, i thought we were skipping straight to the headache for a few minutes, a bit of disappointment, but it worked out!)

also, some hints: today and yesterday, i several times wondered if i wasn't going to get a headache soon, without understanding why. not sure what sets off those feelings.. this afternoon, i thought i saw some flashes at some point when i was walking down the hallway, and that really made me suspicious; and, all day, really tight, painful muscle spasm throughout my upper back, both sides trapezius.

23:05

map below: i have a few of these now, should get to processing them this summer..



Edit: look at what this guy has done: http://www.pvanvalkenburgh.com/MigraineAura/MigraineAuraMaps.html. pretty amazing.

also, i did wind up writing a script to analyze these plots; once I get some stuff settled, i'll post those in a new entry.

Friday, April 20, 2012

lazy friday dark adapt


Spent afternoon of Friday, 4-20-12, with a ~3 log unit (~.2%) ND filter over my right eye. Made the following observations:
(Took the filter off after about 4 hours. It wasn’t bothering me much anymore, but I think the plastic and rubber stuff in the goggles was irritating my eyes, which were starting to feel kind of dry and red. Light adaptation is really fast, it’s just been a minute and the (formerly dark adapted) right eye’s image only seems slightly brighter than the left’s.)
  1. Noise: the dark adapted eye’s view is noisy, and the noise intrudes into the dominant view. It’s irritating. The dark adapted view isn’t being suppressed, though, it’s there like a ghost. Double images, from depth, are strikingly noticeable, not sure why.
  2. Pulfrich effect: first time I’ve really seen this work. I put my index fingers tip-to-tip and move one from side to side, fixating on the still one, and the moving finger looks like it’s rotating. My hand even feels like it’s rotating.
  3. Pulfrich effect 2: Fusion isn’t always working, but I seem to be ortho a lot of the time. I just noticed, though, that if I make quick motions, e.g. a flick of a finger, there’s a delay in the motion between the two eyes; the dark adapted image is delayed by several hundred milliseconds! Especially obvious if I focus at distance so I have a double image of the finger. Explains the strong Pulfrich effect.
  4. Noise 2: Just looked at some high frequency gratings. With the dark-adapted eye, the noise was very interesting, looked like waves moving along the grating orientation, i.e. along the bright and dark bars there’s a sort of undulating, grainy fluctuation.
  5. I still have foveal vision and color vision, but both are very weak. Dim foveal details are invisible. High contrast details (text, the high frequency gratings) are low contrast, smudgy..
  6. Motion is kind of irritating, I think because it brings about lots of uncomfortable Pulfrich-type effects. Even eye movements over a page of text can be bothersome, because there is always an accompanying, delayed motion. I’m guessing that the saccade cancellation is being dominated by the light-adapted eye, and so I’m seeing the dark-adapted saccades. I don’t notice a depth effect, but walking around in the hallways I do feel kind of unsteady, maybe because of motion interfering with stereopsis. If objects are still, stereopsis seems to be okay.
  7. If I take vertical and horizontal gratings (64c/512px), add them together, then look at them at 25% contrast from about 30cm (here at my desk), I don’t see a compound grating – I see patches of vertical and patches of horizontal. I’ve never noticed this before; I wonder what differences there are with scotopic vision and cross-orientation suppression..
  8. I tried to watch my light-adapted eye move in a mirror, but the dark-adapted eye just couldn't see well enough. I think a weaker filter would make it possible.

Thursday, April 19, 2012

Memes and Pharmacies

That rivalry proposal went in last Monday with no problems, along with a long-festering manuscript, so it seems I took a week off from writing. Now I have progress reports to do, presentations to prepare, other manuscripts to complete, on and on.. I need to make sure I keep up with this journal, which seems to be helping in keeping my writing pace up.

Vacation is over!

About 12 years ago, I read The Meme Machine by Susan Blackmore. It was around the time that I decided to major in psychology, and I was reading all this Dan Dennett and Douglas Hofstadter stuff, but it was her popular science book that really had a big effect on me - I would say that it changed my worldview completely, to the extent that I would identify myself as a memeticist when discussions of religion or that sort of thing came up (it was still college, see) - I really felt like it was a great idea, that human culture and human psychology could be explained as essentially a type of evolutionary biology. I still believe it, and so I suppose this book still really sits near the base of my philosophical side, even though I don't think about these things so much anymore.

I bring this up because the other night Jingping and I were talking about being tricked, since this was the topic of a Chinese textbook lesson I had just read, and I recounted the story of being completely conned by a thief once when I was a clerk working at CVS. I wound up going on more generally about working there, and I remembered that I had worked out a memetics-inspired 'model' of that store, which I hadn't thought about in a long time. One of the things about the memetics idea that had really gotten to me was that you could see social organizations as living creatures with their own biological processes - not that this is an idea original to Blackmore, and I'm sure that's not the first place I had heard it (like I said, I was also reading Dennett and Hofstadter at the time), but she did work it into a larger sort of scientistic system which seemed to simplify and unify a lot of questions.

Now I realize that the criticisms of memetics that I heard from professors in college (when I questioned them about it) were mostly right, in that it mostly consists of making analogies between systems; only in the last few years, with the whole online social networking advent, has a real science of something like memetics actually gotten off the ground (this is a neat example from a few weeks ago), and it's very different from what had been imagined when the idea was first getting around.

Anyways, I thought I'd detail here my biology-inspired model of a CVS store, ca. 2001. I have a notebook somewhere where I had detailed a whole system, with functional syntax and everything, for describing social organizations in terms of cellular, metabolic systems. This might have been the first time that I had tried to put together a comprehensive model of a system, now that I think of it. The store, I thought, was itself a cell in a larger CVS system dispersed across the city, which itself was a system dispersed across the country. I was mainly interested in the store level, where you could see different components acting as reagents. It was a strongly, though not completely, analogical system to plant metabolism: a plant needs carbon, so it uses sunlight to break down carbon dioxide, releasing the unneeded oxygen back into the world. A store needs money, so it uses salable products to break down a money-human bond, releasing the human back into the world.

Of course, with plants the sunlight comes free from heaven - all the plant can do is spread out and try to catch it - while salable products must be delivered from another part of the CVS system: the distribution center. The distribution center emits reagents to the stores, where the money-human bond is broken down. The CVS system also emits catalytic agents into the world - advertisements - to facilitate the crucial reaction. The money absorbed by the system is energy which is used to drive the system through a reaction not unlike oxidation - the money-human bond is reformed systematically, with employees as the human component. This reformation is what really drives the system. Those new human-money bonds then go out into the world and fulfill the same function, breaking apart, as they interact with other businesses.

Looking at a business in this way totally changed the way I understood the world. Businesses, churches, governments, political parties, armies - all of them can be thought of as living creatures, or as organs of larger creatures, rather than as some sort of human means-to-an-end. By changing perspective between levels, we can see ourselves as means to the ends of these larger systems, just as our cells and organs are means to ours. Now I'm finally getting around to reading straight through Hofstadter's GEB, and so I can see that this general idea of shifting perspective across levels is an old one that has been astonishing people for a long time. But for me, coming to see human culture as being alive was a fundamental shift in my intellectual development, one that hasn't really been superseded since. I haven't become a real memeticist yet, but it's all still there, underneath... these tiny tendrils of memetics live yet...

Tuesday, April 03, 2012

dream post!

recurring dream:

jingping and i are trying to get to the train station. the city is like a cross between boston and chicago - it's boston but with lots of overhead walkways and more of that chicagoesque feeling of sharp-edged criss-crossedness.

lots of things happen as we're on our way, it's like we're being chased, but the recurring part is where we get into the station and have to start climbing a stairwell, up and up. i know what's going to happen as the dream progresses. there's a fear of falling down the stairwell, but what happens is that it gets narrower and narrower, less and less place to put your feet, and you're crawling finally up a spiral tunnel, until you can't go further because there's just not enough space - around this point i know it's a dream, because i'm thinking that it can't really be this way, and i'm trying to change it because it's so damn uncomfortable. even in the dream, i'm thinking, why does this happen, why can't i fix it?

once it got to that point, i realized that my eyes were closed, but i couldn't open them, and yet i could still kind of see the twisting stairwell tunnel ahead - and there was a confusing sensation of being able to see but not being able to see, at the same time (interesting relevance to the visual consciousness stuff i was wondering about earlier, which is really why i'm writing it down). i was feeling around for the gap ahead, to see if i would fit, and i knew jingping was behind me and i couldn't back up, but i also felt like i could see it all...

i think i woke up soon after. i figure that noticing my eyes were closed and not being able to open them, and yet still having a sense of vision, must have been REM atonia - sleep paralysis, the sort of thing that gives you the feeling of being trapped and immobile in a bad dream.

anyways, i'm pretty sure i've had this dream a few times, the "shrinking stairwell dream".

dream post, yeah!

Monday, April 02, 2012

model update

I'm working on other things lately, but I did finally get that multi-channel rivalry model working - main problem was that I had written the convolution equations out wrong. I had to do the convolution there in the code because the filter array is irregular - there's no function to call for 2d irregular-array convolution, much less for switching the convolution between different layers.

Here's what I had done:

Z´(x) = Z´(x) + F(x)·Z(x), where x is a vector of spatial indices, Z is the differential equation describing the change in excitation or adaptation over time, F is basically just a 2-d Gaussian representing spatial spread of activation for the inhibitory or excitatory unit, and Z´ is (supposed to be) the differential convolved with the spread function.

Now that doesn't make any sense at all. I don't know what that is. In the actual code that equation was actually 3 lines long, with lots and lots of indices going on because the system has something like five dimensions to it; so, I couldn't see what nonsense it was.

This is how it is now:

Z´(x) = Z´(x) + sum(F(x)·Z(x))*F(x)

THAT is convolution. I discovered what was going on by looking at the filter values as images rather than as time plots; Z and Z´ didn't look different at all! Z´ should look like a blurred version of Z. Such a waste of time...
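
(For reference, the skeleton of spreading a differential over an irregular 2-d array of units - a matlab sketch, not the actual five-dimensional code; pos, dZ, and sigma are made-up names for the unit positions, the raw differentials, and the spread width:)

    N   = size(pos, 1);                                      % pos: N-by-2 unit positions
    dZs = zeros(N, 1);
    for i = 1:N
        d2     = sum((pos - repmat(pos(i,:), N, 1)).^2, 2);  % squared distances to unit i
        F      = exp(-d2 / (2*sigma^2));                     % Gaussian spread function
        F      = F / sum(F);                                 % normalize
        dZs(i) = F' * dZ;                                    % weighted sum over neighbors
    end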

Anyways, it kind of works now. Different problems. Not working on it until later in April. The 'simple' single resolution model was used to generate some images for my NRSA application. Here's a sample simulation of strabismus (with eye movements):


Monday, March 26, 2012

standing at fenway station, thinking, as usual, "what's my problem", and i came up with a nice little self-referential, iterative statement of it: it's pablum, but i'm not usually this verbally clever, so let's write it down:

don't do what you don't believe
can't believe what you can't understand
won't understand what you won't do

so that's the problem; it's not exactly as i would normally say these things. if you asked me before i formulated this, i would probably say, "i don't like to do what i don't understand", and that's what i started out thinking. but then i asked, "why is that?", and decided that if i don't understand it, i can't really attach to it - then i saw the loop.

interestingly enough, the solution is the negation of the problem, literally:

do what you believe
believe what you understand
understand what you do

both of these statements have a sort of inertia; once you have one of the predicates, it starts rolling and keeps going. since they aren't specific, both statements are generative or productive - the referents don't need to be the same on each loop, but of course they should be logically linked.

(really, the middle statement isn't necessary in either one, with 'believe' replaced with 'understand' in the first line. i feel like the middle line adds some depth, though, so there it is.)

Wednesday, March 21, 2012

vision includes the world

I need to procrastinate slightly more productively, so here is a short essay relating some of my thoughts on visual consciousness.

For years now, I've understood visual experience or consciousness (experience is easier to say and write, and has less near-meaning baggage, so let's continue with that term) as having two components:

1. The image. A part of vision is direct, which means that when you see an object, it is true to say that what you see is the thing itself, or at least the light reflected/emitted by that object (this is similar to the idea of the 'optic array'). This is a difficult position to hold, but I think it is a necessary default. The alternative, which is definitely more popular these days, is to say that what you see is entirely a representation of the thing itself, instantiated in the brain. This sort of idealism is attractive because the brain is obviously a self-contained system, and because experience also seems to be self-contained, and because every aspect of experience seems to have a neural correlate. If I say that vision involves processes or structure outside the brain, I have to explain why we don't see what we don't see; why don't I see what you see, for example?

It seems to me that in placing the contents of consciousness somewhere in the physical world, there are two possible null hypotheses: either everything is absolutely centralized, completely contained within the brain, or everything is absolutely external, completely outside the brain. The second account is rare these days (see Gibson), as the only job it leaves for the brain is sorting out of responses to visual experiences. It seems clear that much of vision actually does occur within the brain, and I'll get to that in part 2, below. Now, these null hypotheses: that everything is internal is an objective hypothesis, based on e.g. a scientist's observations that the brain is correlated with experience; that everything is external is a subjective hypothesis, based on e.g. my observations that what seems to be in the world is actually there, i.e. that my sensations are always accurate.

Since visual experience is a subjective process which cannot be observed, I like to stick to the subjective null hypothesis: everything is external unless shown otherwise. Immediately on stating this hypothesis, we can start to make a list of the components of visual experience which are surely neural.

2. The brain. Let's start with the subjective null hypothesis: everything you see is there, in the world. Just a little thought proves that this can't be true: faces are a great example. Look at two faces, one of a person you know well - your sister or brother, maybe - and one of a stranger that you've never seen before. There, in the faces, you see a difference that you can't deny, because one seems to have an identity and the other does not. This difference isn't purely cognitive or emotional, either, because one will easily make the admission that the face of his sister is his sister. Seeing her face, he will say, "That is her!" Clearly, however, the identity is not in the face - it is in the observer.

If this isn't a satisfying example, color perception must be. Color is not a property of images, it is a construct of the brain - this is not difficult to show, either with the proof that identical wavelength distributions can yield different color percepts in different conditions ('color constancy'), or with the inverse proof that different wavelength distributions can yield identical color percepts ('metamers'). We understand color as a brain's capacity to discriminate consistently between different (simultaneous or asynchronous) distributions of visible radiation. It is something that exists only in the observer.

These are easy, but it does get harder. Consider depth perception. In a scene, some things are nearer or further from you, but there is nothing in the images you sense that labels a given point in the scene as being at a particular depth. There is information in the scene that can be used by the observer to infer depth. So, depth is another part of the brain's capacity to interpret the image, but it is not a part of the scene. This is a more difficult step than with faces or colors, and here's why: whereas a face's identity, or a light's color, is plainly not a property of the world itself, we know that the world is three dimensional, and that objects have spatial relationships; and, we know that what we see as depth in a scene informs us as to these spatial relationships. However, we then make the mistake of believing that visual depth is the same as space; on reflection, though, we can begin to understand that they are not the same. Depth is a neural estimate of space based on image information.

Let's keep going. Spatial orientation is another good one: 'up' and 'down' and 'left' and 'right' are, in fact, not part of space. I've already made my complaint about this one: spatial orientation is created by the brain.

If we keep going like this, what do we have left? What is there about visual experience that is not in some way created by the brain? How can I state that there is an 'external' component to vision?

The only feature of vision, it seems, that is not generated by the brain is the internal spatial organization of the image, the positional relationships between points in the image - what in visual neuroscience is recognized as retinotopy. Spatial relationships between points in the visual field do not need to be recovered, only preserved. A person's ability to use this information can be lost, certainly, through damage to the dorsal stream (simultanagnosia, optic ataxia, neglect, etc). This does not mean that the visual experience of these relationships is lost, only that it is unable to contribute to behavioral outputs. I think it is a mistake - commonly made - to assume that a patient with one of these disorders is unable to see the spatial relationships that they are unable to respond to. Assigning to the brain the generation of positional relationships needs evidence, and I know of none. A digital, raster image based system would be different, of course: a video camera detects images by reading them into a long, one-dimensional string of symbols. Positional relationships are lost, and can only be recovered by using internal information about how the image was encoded to recreate those positions. The visual system never needs to do this: it's all there, in the very structure of the system, starting at the pupil of the eye.

So, here is my understanding of vision: it is a stack of transformations, simultaneously experienced. The bottom of the stack is, at the very least, the retinal image (and if the image, why not the logically prior optic array?). Successive levels of the stack analyze the structure of the lower levels, discriminating colors, brightnesses, depths, and identities; this entire stack is experienced simultaneously, and is identical with visual consciousness. But, the entire thing is anchored in the reality of that bottom layer; take it away, and everything above disappears. Activity in the upper levels can be experienced independently - we can use visual imagination, or have visual dreams, but these are never substantial, and I mean this not in a figurative sense - the substance of vision is the retinal image.

This view has consequences. It means that it is impossible to completely reproduce visual experience by any brain-only simulation, i.e. a 'brain in a vat' could never have complete visual experience. Hallucinations must be mistakes in the upper levels of the stack, and cannot involve substantial features of visual experience - a hallucination is a mistaking of the spatial organization in the lowest levels for something that it is not. Having had very few hallucinations in my life, I can say this does not conflict with my experience. I can imagine that a hallucination of a pink elephant could actually involve seeing a pink elephant in exactly the same experiential terms as if one were there, in physical space, to be seen, but I don't believe it, and I don't think there's any evidence for vision working that way. Similarly, dreams are insubstantial, I claim, because there is nothing in that bottom layer to pin the stack to a particular state; memory, or even immediate experience, of a dream may seem like visual experience, but this is a mistake of association: we are so accustomed to experiencing activity in the upper levels as immediately consequent to the image that, when there is activity with no image, we fail to notice that it isn't there! I think, though, that on careful inspection (which is difficult in dreams!), we find that dream vision has indeterminate spatial organization.

Anyways, that's my thinking. This has gone on long enough, I need to work on this proposal...

Sunday, March 18, 2012

oscillate, explode, or stabilize

must learn about runge-kutta methods,
must learn about runge-kutta methods,
must learn about runge-kutta methods.

clearly this too-complicated model is suffering because of the temporal resolution. i've spent nights now trying to figure out why the thing wasn't working right - and did find a few errors along the way, which i don't think would have made or broken the thing anyways - and finally i conclude that the response time constant was too small. this is strange, because the same model works great with a 2d network, and perfectly with a single unit; apparently there's something about this network, which is essentially 3d, that effectively makes the time constants faster... it must be that compounding the differential during the convolution, over multiple filter layers, effectively speeds everything up.
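
for my own reference, here's a minimal sketch of what i mean about the step size - a plain forward-euler step versus a classic rk4 step for one leaky unit. the drive function and the constants are placeholders, not the real model; the point is just that euler starts oscillating once the step exceeds the time constant and blows up past twice it, while rk4 stays stable out to roughly 2.8 time constants.

```python
# minimal sketch, not the real model: one leaky unit, dy/dt = (-y + drive(t)) / tau.
# drive() and the constants are placeholders, just to show the step-size problem.

def drive(t):
    return 1.0  # constant input, purely illustrative

def euler_step(y, t, dt, tau):
    return y + dt * (-y + drive(t)) / tau

def rk4_step(y, t, dt, tau):
    f = lambda yy, tt: (-yy + drive(tt)) / tau
    k1 = f(y, t)
    k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(y + dt * k3, t + dt)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

tau, dt = 1.0, 2.4   # step size more than twice the time constant
y_e = y_r = 0.0
for i in range(20):
    y_e = euler_step(y_e, i * dt, dt, tau)
    y_r = rk4_step(y_r, i * dt, dt, tau)
print(y_e, y_r)      # euler has exploded (sign-flipping every step); rk4 has settled near 1.0
```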

it's not like i wasn't aware of this problem at first. i thought i had solved that by doing the global normalization, where the convolution stage would basically be treated as a single layer. last night, i decided that collapsing that stage to one layer was a mistake, because it resulted in the pools everywhere being overwhelmed by the finer-grain channels, since those filters are more numerous. that may actually be correct, with some sort of leveling factor, but at any rate i took out the collapse. it didn't change performance much, but that's when i was using a too-complex test case (two faces), instead of the current test case of two gratings. now i realize that the pooling was accelerating the responses, resulting in useless behavior by the network - turning up the interocular inhibition to any level that did anything tended to result in ms-to-ms oscillations.

so, the compounding of responses was doing it, i guess, and would be doing it even if i still had the pooling collapse worked in. but now i can't understand why i didn't get the same problem, apparently ever, with the fft-based version of the model. now i'm suspicious that maybe i *did* get it, and just never perceived it because i wasn't doing the same sorts of tests with that thing.

not quite back to the drawing board. i wish i could get away from the drawing board, for just a few nights, so i could work on this goddam proposal like i should have been doing for the past 2 months.

Tuesday, March 13, 2012

An Instantiation of a General Problem

(I wrote this, but never finished it, in China back around Christmastime. Randomly remembered it today, and thought this would be as good a place as any for it.)

The key was to be found across the city, in the old commercial district. We had tried simulations, implanted demos, viewed stereoscopic images through a haploscope we found in storage in the medical school. After all of these, we had tried hallucinogens to modulate the imagined presence of the key, but it was all to no avail. At least, we said to ourselves, when we finally approach the key we will be familiar with it. The front end of the process will not be a surprise.

The approach, however, to that front end, would be horrendous. First, our camp was protected from the feed. This kept the peace from finding us, but it also meant that our emergence into the feed would stand out like a tree in the desert. We had monitored the security cycles for days. Most would say that such monitoring was futile, since the cycle paths were random, generated with new seeds every minute give or take another random cycle. Any attempt, most would say, to predict gaps in the cycle would result in no better chance of unnoticed entry than no attempt at all, with the added hazard of false confidence to mask the creeping signs of detection.

It was possible, though, to closely estimate the number of cycles. We could detect the passes themselves, which gave us data for the estimation. The different cycles were unique, originating from different security servers, each assigned its own identification during its current generation. Given all these data, we had a method for estimating, at any given moment, the likelihood of a pass. The optimal estimate could be made using the previous twenty seconds of data. You could have pointed out that a likelihood is the opposite of a certainty, at least along a certain conceptual dimension. You could also have pointed out that the optimal estimate was lousy if those twenty seconds contained a generation update. We would have ignored you.

Once inside, we would have to obtain city ids from an admin, which was not trivial, but not a problem as long as we could quickly make contact with Tsai, our woman on the inside. We knew she was still online and that her admin was current, so as long as she wasn't in some unshakable stupor, she would tie us on and we'd be set for the rest of the trip. Anyways, persisting for a few minutes with unregistered cids wasn't as dangerous as suddenly emerging out of the void. An impulse is like that tree in the desert and the primary means of detecting aliens, while trouble finding a cid registration is a basic function of the feed servers, which would be checked in serial, assuming corruption or damage first and alien somewhere further down the line. Tsai could just tie us onto the oldest and most remote server, plot a false geographic history of intermittent reception and an outstanding service request, and there would be nothing in the feed to mark us out. The tree would dissolve into a puff of dust.

The next problem would be the actual emergence into the city. Feed presence can be smoothed over, anyone can appear to be anyone, fit into any group, assume any identity. The body, however, is much less convenient to modify. Their hair is long, but ours is short. Their skin is yellow, but ours is brown. We stand head and shoulders above them on the street, and we have no choice but to travel on the street for the most part, by foot, in the open, making stark and clear the comparison between foreigner and local. But, there are other foreigners in Haisheng. They are few and far between, but there are others, and though we draw attention it is natural, because who can ignore a brown spot among yellow? The noticing is in itself not a threat. But when others are looking for you, being easily noticed is a step away from being easily found. We did not want to be found, but there was no choice but to be noticed.

The final hazard was beyond any interaction with the first two. At the time I could not imagine how, but I was still cognizant that there was a possibility that the locked id had already been accessed by my competitors before I had retrieved it. If so, they may even have already decrypted it, outformed the important information inside, and restored the encryption. This was beyond any vital worry on my part, since the main danger was that knowing the key, and that I was looking to open the id, they might be waiting for me at the site. This meant I would have to move slowly through the streets, below them when possible, work quickly when it was time to get the key, and maintain vigilance on all channels at all times. There was nothing else we could do but be vigilant.

I can tell you more about the key without compromising the truth of the mission. Someday down the line, you may be able to put two and two together, but by that time whether or not you know such an obscure truth won't matter much, and you'll be occupied with obscuring your own. Anyways, it is an interesting detail, and may spark one or another interest in you.

The id I had retrieved was that of a neural engineer from a century or so earlier. We needed to query it regarding some interactions it had had at one time with our main objective, whose id at the time was missing and presumed destroyed. As it turns out this engineer had dabbled in id encryption, which was a new field in those days, specifically in encryption through perceptual experience. Though the field was active at the time, it was - and remains - completely unknown to the science that this particular engineer had worked on the problem. It was a private pastime, perhaps a paranoid fear that a great advance might be stolen, or maybe it was just a fear of inadequacy in an outsider bringing to the field such an idiosyncratic development. At any rate, this engineer had come up with something exquisite, which was probably unmatched by anything else produced by her generation. She may have meant it entirely for herself. Today, it's a work of art, but the tech is fundamentally outdated.

This is a digression, I'm sorry. Outdated or not, it was a good lock, and on site we still needed the key to open it. The encryption was applied to the id by taking the online state of some suite of perceptual systems, definitely including visual, possibly other - and by the way, don't take my ambiguity as indicating anything other than an intention to be ambiguous - and using this neural state as the key for the encrypted id. The entire state couldn't  be recorded, of course, since the subject would have to be standing out in the open at the location, i.e. a true state scan would be impractical, especially in those days. Instead, something was probably worn, perhaps obvious or perhaps hidden, instantaneously recording a blocked brain state amounting to just a few terabytes. It was a functional state, meaning that it could be reproduced in other human brains, but our initial estimate that a good visual simulation would suffice proved wrong. We needed to be there, unless someone could explain exactly what composed the key, and the only person who could tell us that, it appeared, was the one locked in that id.

Back to the problem. Being noticed, maybe being scooped, these were mostly outside our control. But skipping as an alien into a secure feed using random-cycle maintenance, that's something we can deal with. Look at the figure field. We used standard methods to monitor the cycles and establish their regeneration characteristics, how many there were, durations of the cycles, amplitude of the duration modulation - everything here is something you've seen before. You all have four minutes to generate the optimal estimate from these data, starting - now.

Monday, March 12, 2012

multi-channel M-scaled discrete filter convolution

Okay, so, I built this really neat discrete filter-based visual field model, planning to use it to measure binocular image statistics and to generate more realistic rivalry simulations. I hoped that doing the simulations would actually be quicker using the filters, since there would be far fewer filters than pixel images (I was using image-filter convolution to do the simulations I showed 2 posts ago), and the filters only needed to be represented by their scalar responses. Hoped, but did not believe...

So now, I just spent the weekend (wrote that first paragraph a week ago) staring at the code, trying to figure out how to do, essentially, convolution of a function with an irregular array. It is complicated! I wrote a function to get local neighborhood vectors for each filter within its own channel, and then stared at that for a couple of days, and then realized that I should have written it to get the neighborhood without regard to channel. It's a pretty gangly operation, but it does have a good structural resemblance to stuff I've been thinking about for years. Ed and Bruce's relatively abstract idea about the broadband gain control pools, well, I've built it. Not for the intended purposes, since there's not going to be any gain control here - the only suppression that will be involved is like an 'exit gate', the permission for information in the channel array to be moved out to the later stages ("consciousness", we'll call it).
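
To make that concrete, here's a minimal sketch of the cross-channel neighborhood lookup - toy data and made-up names, not the actual code; a k-d tree over the pooled filter positions is one straightforward way to avoid the per-channel bookkeeping, with the search radius scaled by each filter's own local scale (which is where the M-scaling would come in).

```python
# Minimal sketch, not the actual model code: given filter center positions pooled
# across all channels, find each filter's spatial neighborhood regardless of which
# channel the neighbors live in. Names and numbers are invented.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 500
pos = rng.uniform(-10, 10, size=(n, 2))            # irregular filter centers (deg)
channel = rng.integers(0, 4, size=n)               # which spatial-frequency channel
scale = 0.5 + 0.1 * np.linalg.norm(pos, axis=1)    # local scale grows with eccentricity

tree = cKDTree(pos)                                 # one tree over *all* filters

def neighborhood(i, radius_factor=3.0):
    """Indices of every filter (any channel) within a few local scales of filter i."""
    return [j for j in tree.query_ball_point(pos[i], radius_factor * scale[i]) if j != i]

nbrs = neighborhood(0)
print(len(nbrs), "neighbors spanning", len(set(channel[nbrs])), "channels")
```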

And, I say again, it's complicated. It's definitely not going to be faster than the rectangular filter convolution; in fact, it's likely to be 3 or 4 times slower, and it's going to produce rougher looking images on top of that. All this just to incorporate stupid M-scaling into these stupid rivalry waves. I swear, I can't think of a better way to do it. And the thing still isn't going to know anything about surfaces or faces or houses or any of that stuff, and it's going to take forever to debug and proof since it's going to be so slow...

But it's going to be cool.

Monday, March 05, 2012

retrograde inversion

Several times in your life you may hear it noted that the retinal image is reversed and upside-down. Fewer times than that, hopefully, you may then hear it noted with curiosity that the brain somehow undoes this retrograde inversion. When you do hear this, please interject with the following:

"The brain does not reverse the coordinates of the retinal image. The brain does not know or care about about the retinal image's orientation relative to the world; as far as the brain is concerned, the image is not upside-down, or upside-up, or flipped or double-flipped. It is not delivered to the brain with reversed coordinates, but with no coordinates at all. The brain assigns spatial coordinates to the visual information it obtains from the eyes. It does this by integrating information about body position, gravity, and other consistent sensory cues about the state of the world. There is no reversal or correction of coordinates, there is only assignment of coordinates."

You will promptly be thanked for clearing up the misunderstanding, and hopefully your interjection will serve to end one strain of a particularly irritating bit of pernicious nonsense.

Thank you.

Wednesday, February 22, 2012

Rivalry and Diplopia

A simulation of binocular rivalry and fusion with eye movements:

First, the input:

If you can cross-fuse, you want to fuse that white rectangle (and the matched noise background). It's hard to do, especially since there will be a strong urge to fuse the face, not the background. If you succeed, the girl's face will be diplopic (seen double). The video below is a simulation of what is happening in the parts of the visual field where the face is seen.

The photo of the girl is represented at two different ('disparate') locations for the two 'eyes' (just different filter streams in the simulation), while both eyes see the same background (noise with a little white block below the photos). At locations where the two eyes get different inputs (i.e. wherever the photo is seen), the two streams suppress one another and 'binocular rivalry' is induced. This rivalry is unstable, and results in periodic fluctuations where either one or the other eye's image is seen, but not both.

On the other hand, when both eyes get the same input, there is no suppression between streams (this isn't physiologically accurate, just convenient in this simulation). This results in 'fusion' of the two eyes images.

Every second, the filter streams - the eyes - shift to new, random coordinates (they are yoked together of course). You can see that by the shifts in position of the little black dot, which starts out near the white block.
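
Here's a toy sketch of the logic just described - not the simulation itself; the drives, adaptation constants, and inhibition weight are invented, and the yoked random shifts are left out. Where the streams match they fuse; where they differ, interocular inhibition plus slow adaptation makes dominance alternate.

```python
# Toy sketch of the logic described above, not the actual simulation: two 'eye'
# images on a grid; matching locations fuse, differing locations rival, with
# interocular inhibition (w) plus slow adaptation making dominance alternate.
# The yoked random shifts of both streams are omitted; all constants are invented.
import numpy as np

rng = np.random.default_rng(1)
h, w_px, steps = 64, 64, 300
left = rng.random((h, w_px))
right = left.copy()
right[20:44, 20:44] = rng.random((24, 24))    # a patch where the eyes disagree

same = left == right                          # True -> fusion, False -> rivalry
drive_l = drive_r = np.ones((h, w_px))        # placeholder drive strengths
adapt_l = np.zeros((h, w_px))
adapt_r = np.zeros((h, w_px))
dom_l = np.ones((h, w_px), dtype=bool)        # which stream is currently dominant
w = 0.3                                       # interocular inhibition (gives hysteresis)

percept = np.empty((steps, h, w_px))
for t in range(steps):
    # the suppressed stream also takes inhibition w from the dominant one
    s_l = drive_l - adapt_l - np.where(dom_l, 0.0, w)
    s_r = drive_r - adapt_r - np.where(dom_l, w, 0.0)
    dom_l = np.where(same, dom_l, s_l >= s_r)              # switch only in rivalry zones
    percept[t] = np.where(same, 0.5 * (left + right),      # fusion: average
                          np.where(dom_l, left, right))    # rivalry: dominant stream
    adapt_l = np.where(dom_l & ~same, adapt_l + 0.02, 0.98 * adapt_l)
    adapt_r = np.where(~dom_l & ~same, adapt_r + 0.02, 0.98 * adapt_r)

# fraction of time the left stream is 'seen' at a rivalrous location
print((percept[:, 30, 30] == left[30, 30]).mean())
```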

(Both of these videos look a lot better if magnified, i.e. hit that little box in the lower-right corner and look at them full-screen.)

 

To make a little clearer what's happening, here's a color-coded version:

 
Here, locations where one eye's image gets through to be 'seen' are colored red or green (depending on which eye - geometrically it only makes sense that green-is-left and red-is-right, which would mean that the photo is between the viewer and the gray background), while regions where there is fusion are colored yellow or brown. The stream marker is now a blue dot (not really; the googlevideo encoder seems to favor dumping small blue dots against red/green backgrounds, go figure!).
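
The color coding itself is trivial; continuing the toy sketch above (illustration only, not the actual rendering code):

```python
# Continuing the toy sketch above: map each location's outcome to a color,
# roughly as in the video. Illustration only, not the actual rendering code.
import numpy as np

def color_code(same, dom_l):
    rgb = np.zeros(same.shape + (3,))
    rgb[same] = (1.0, 1.0, 0.0)             # fused regions -> yellow
    rgb[~same & dom_l] = (0.0, 1.0, 0.0)    # left-eye dominance -> green
    rgb[~same & ~dom_l] = (1.0, 0.0, 0.0)   # right-eye dominance -> red
    return rgb
```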

Look at that mess of imbalanced fusion that builds up all over the scene. What a mess!