Saturday, June 16, 2012

stupid lazy

Alright, two posts in a row of me admonishing myself. Publicly, in theory. In theory, this is more embarrassing than it actually is.

It's Saturday evening. I have done nothing all day. Nothing. Played a computer game all morning. Read the front page of the WSJ. Ate a bowl of noodles and drank a pot of coffee. Played some piano. Looked at lots of funny gifs. Tried again to get Endnote properly installed on this stupid computer, and failed. X.0.2 + Office 2007 + Windows 7 = not work.

I have that SID manuscript open. I need to clean it up, add in those two other references I found but haven't really read, because they look really dull. They're just 'relevant', in a parallel sense, but nothing obviously consequential. That's what led me into that stupid Endnote cul-de-sac again. I can do it remotely, so what. Just now I opened up the For Authors page on the journal site.

The CI paper is fine. Adding the MTF into the calculations didn't have as big of an effect as I expected, or hoped. Scaled filters are pretty resilient.

I haven't studied Chinese much in a while. I could be doing that.

No, no. God dammit. SID paper. Finish the goddam paper and upload it. There is no excuse. The paper is finished. Send it in. Dammit. I hate you.


Thursday, June 14, 2012

ohhh...

morning, myself. heart.
end. voice. nightingale?

This is just a diversion. There are things in life, every day, that we want to reach out and touch, or interact with, or follow, or watch, but we can't, because there are other things that we have to do instead. Other things that we should do instead. Self control can be suppression of the self, but sometimes it is just being rational, maintaining normal, keeping things the way you want them. Your mind is made up of many different parts which, on their own, are not as intelligent as you are. They don't have the same priorities as you. They don't even have the same memories as you - some of them have only existed for a few days, or months, or years. Maybe, some of them, you can remember when they came into existence. You can remember, because you are the one governing the rest, corralling them. You have to choose, at these instances, what to do - even if these things in life are like lures, and you see that between you and this other possibility, even just what ultimately would be a fleeting bit of soon-to-be-nothing, is a transparent membrane of a single impulse.

It's just a diversion. Maybe don't go back there. Maybe come back here, and see what you did, to keep from going there. Remember what there is in other places. Keep things level. Life is hard.

Wednesday, June 13, 2012

monitor MTF? sure!

Okay, so I need to know the spatial frequency transfer function of the monitor I've used to do most of my experiments over the past couple of years. I've never done this before, so I go around asking if anyone else has done it. I expected that at least B** would have measured a monitor MTF before, but he hadn't. I was surprised.

Still, B**'s lab has lots of nice tools, so I go in there to look around, and lo, T** is working on something using exactly what I need, a high speed photometer with a slit aperture. So today I borrowed it and set to work doing something I had never done and didn't know how to do. It was great fun.

D** helped me get the photometer head fixed in position. We strapped it with rubber bands to an adjustable headrest. I've started by just measuring along the raster. The slit is (T** says) 1mm, which is about 2.67 pixels on my display. I drifted (slowly) squarewave gratings with different wavelengths past the aperture - this was more complicated than it sounds. The monitor runs at 100Hz, and CRTs flash each frame very rapidly, just a millisecond, so getting the photometer settings just right (it samples at 18kHz) took a bit of adjustment, as did figuring out good settings for the gratings and a slow enough drift speed (I'm limited by the 10 second block limit imposed by the photometer).
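
To keep the bookkeeping straight, here's roughly what turning the photometer's temporal record into a spatial profile looks like - the variable names and numbers below are made up for illustration, not the actual script:

  % lum: raw photometer trace of the slowly drifting grating, sampled at fs
  fs        = 18000;                          % photometer sampling rate, Hz
  v         = 10;                             % drift speed, pixels/s (placeholder)
  frame_len = fs / 100;                       % samples per 10ms monitor frame
  nframes   = floor(numel(lum) / frame_len);
  lum_f     = mean(reshape(lum(1:nframes*frame_len), frame_len, nframes), 1);
  x         = (0:nframes-1) * (v / 100);      % pixels the grating moves per frame
  % lum_f vs. x is then treated as the spatial luminance profile seen through the slit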

Anyways, I got back temporal waveforms which I treat as identical to the spatial waveforms. As expected, the power of these waveforms drops off as the gratings get finer. But, I know that it drops off too fast, because of the aperture. If the aperture were exactly 1 pixel across, and if it were aligned precisely with the raster, and if a bunch of other things were true, then I could know that each epoch recorded by the photometer reflected the luminance of a pixel, and my measurements would reflect the monitor MTF. But, like I said, the aperture is 1mm, so each 10ms epoch is an aliased average over >2 pixels. I'm not even thinking about the reflections from the photometer head (there's a metal rim to the aperture T** had taped on there).

My solution: code an ideal monitor, record from it with the same-sized aperture, and divide the result out of the measurements. I can then guess a blur function - Gaussian - and fit that to my (4) data points. That's what I did: here is my first estimate of the vertical MTF of my Dell p1130 Trinitron:

The Nyquist limit for this display, at the distance modeled here, is about 23cpd, so I guess this Gaussian is in about the right place. It's hard to believe, though, because horizontal 1-pixel gratings look so sharp on this display. I feel like these must be underestimates of the transfer. I am nervous about how awful the vertical will be...
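
In case I forget how this worked, here's a rough sketch of the correction, using the analytic boxcar (sinc) MTF of the slit as a stand-in for the simulated ideal monitor - it ignores the squarewave harmonics but captures the logic. The frequencies and amplitudes below are placeholders, not my actual data:

  w      = 2.67;                         % slit width in pixels
  f      = [0.05 0.1 0.2 0.4];           % grating frequencies, cycles/pixel (placeholders)
  meas   = [1.0 0.9 0.6 0.2];            % measured, normalized amplitudes (placeholders)
  ap_mtf = abs(sin(pi*w*f) ./ (pi*w*f)); % MTF of the slit itself (boxcar -> sinc)
  corr   = meas ./ ap_mtf;               % divide the aperture out of the measurements
  % fit a Gaussian MTF, mtf(f) = exp(-(f/s)^2), to the corrected points
  gauss  = @(s, fr) exp(-(fr ./ s).^2);
  s_hat  = fminsearch(@(s) sum((gauss(s, f) - corr).^2), 0.3);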

*edit*
It wasn't too bad, just a bit blurrier than the horizontal. Still makes me suspicious that I'm underestimating the horizontal. Not going to bother putting up plots, but here's my estimate of the pixel spread function (you can just see that it's a little broader left-right, that's the vertical blur):
 

Thursday, June 07, 2012

zimbra

internet post!

So, earlier today, I got an email from the "Security Operations Lead" at NASA Ames, saying that a whole batch of people's passwords and account names had been accessed. I had an account there for a meeting I went to earlier this month; coincidentally, immediately after attending that meeting, I noticed that one of my peripheral email accounts had been accessed, and at the time I blamed it on the hotel.

Just now, I get an email from something called Zimbra, informing me that:
You requested your Email Account  on June 7, 2012 at 11:02 PM CS to be deactivated and deleted from a location in with this IP number; 201.130.47.33.
2. Click on  (https://secure.zimbra.com/verifyf?intl=us&.partner= cancelrequest) to cancel this request; else your email account will be deactivated and deleted within 24 hours
The sender's address was "bankofcard@yahoo.com". Yeah. Zimbra is apparently some sort of open source email server software for Linux machines. So this doesn't have anything to do with Zimbra.

The IP address leads to a machine in Mexico, with the URL niie2e.nextel.com.mx. This machine seems to have all ports open, i.e. it's either a totally open proxy server, or some sort of disguise for something else.

That URL to 'secure.zimbra.com' was actually an alias in the email (no I did not click on it, I am not stupid), for "http://www.contactme.com/4fcf723e2e22a2000103d1b6". From their website I can't tell what the hell contactme is, but it looks like their site was probably co-opted. I wonder what's there...

Anyways, the relationship to the NASA thing is just coincidental timing, but makes me a bit paranoid.

*edit 6-19-12*
Got called down to the network office this morning to change my password; apparently the NASA thing had gotten distributed to everyone whose IDs were leaked. The admin forwarded to me the info he'd gotten through the Harvard IT director, and based on that I found this:


http://pastebin.com/nSJ9Nn9Z

who knows how long that link will stay alive. anyways, it's a list of the email addresses, but no passwords, for everyone that attended that workshop.

the header on the document:

[HACKED] NASA.GOV - AMES RESEARCH CENTER - By ZYKLON B
 ...
Join me on twitter : https://twitter.com/#!/bzyklon

Author : ZYKLON B
Target : NASA Ames Research Center - Ocular Imaging Laboratory (ace.arc.nasa.gov)
Reason : Curiosity, Challenge.

IS THE TARGET COMPROMISED ? YES.
Note : NASA Glenn research center already hacked 5-6 weeks ago.

anyways, that's interesting. you look down the document, and there we all are! yeah, hackers have twitter accounts!

temporal friction

We came to see the problem as one of material friction, rather than of abstract entropy. Information was still the key, but it was finally clear that what we perceive as information is just the tip of an iceberg.

The resolution of the dark matter problem demonstrated that the deeper structure of reality is anisotropic, made up of tiny needles of entropy - a material structure, now, not a mathematical description. And these needles, they are all pointed towards the past. Whatever part of the universe that moves away from the past is brushed through those needles, which scrape and tear and lacerate the informational structure that we recognize as reality.

That reality we came to see as the product of a long process of selection - some informational structure is torn apart as soon as it forms, while some is hardier, even self-correcting. Life was a structure, we saw, that had adapted to a long path through the deeper universe, even making use of the dark matter anisotropies, to store and convert energy, to drive its own processes of selection and sub-adaptation. But no forms of life could navigate through the darkness. Not until the humans came, and then, for them, it became a prime concern.

We navigated the Earth's surface, its skies and its seas. We navigated the first darkness, of space, and the surfaces of other worlds. All the time, there was that constant abrasion, wearing off man after woman, nation after world, age after eon. All along, we were looking for the way through - the clear way through the darkness. We found it - as I said, we came to see the problem as one of material friction.

Moving against the anisotropy disrupts information. Moving with it smooths information, conforms it, puts it in neat rows and columns, but this has the strange effect of making life uninteresting. If you're adapted to something, then when you lose it you notice it's gone. It's the same with entropy - when you go through life with information conforming rather than disrupting, you go from senselessness to perfection. This can be beautiful, but too often it's just another level of senselessness, on top of the fact that conformative memory processes take a lot of time to install and master, and you find yourself missing the old order of things, no matter how far beyond human you've gone. To have your thoughts and actions gather energy, collect and bind heat, and deliver it into your body... a useful novelty, a tool or practice, but nothing fundamental.

What was interesting was when you could remove the anisotropy altogether. No arrow of time, except for what you choose to arrange. Laws of thermodynamics become adjustable, optional. When we learned not just to navigate through the darkness, but to engineer it to our own purposes, then...

That, more than any other development, was what changed us.

Tuesday, June 05, 2012

transit venus

SO, I had been planning to actually get my old reflector out and make a little projector to watch a bit of the transit before the sun set today. but it's been days since the sun was out, so I never had any chance to test anything; actually, with no way to test a setup, and with the forecast looking the way it did, I didn't wind up actually putting any of the parts together.

In other words, it was too cloudy to see it from Boston.

I've been watching the NASA feed of it, it's better than nothing. I think the transit will be over soon; I'll stay up as late as I can to see if I can't see the planet crossing the limb.

That picture is awesome, right? NASA again.

Sunday, June 03, 2012

sum.. sum.. summm... summation?



ok, so i'm working on this classification image paper, and it's going really well, and i'm pretty happy about it. i feel like i've got a good handle on it, i'm writing it in one big shot, the analyses are all good, the data are fine, it's all under control. i'm pretty happy about this one. i keep telling myself that, and then noticing that i keep telling myself that. i guess it's in contrast to the blur adaptation paper, which was such an ordeal (took 2 years, basically), and then the magnification paper, which just isn't much fun. i feel myself moving down that priority list - hey, i should do a post on the priority spreadsheet! i made some nice plots in there!

anyways, the CI paper, it's going well, but i'm constantly on the lookout for problems. so tonight, i finally thought of one. not a crucial, deep problem, but a problem with how i've calculated some of the modeling stuff, a serious enough problem that i'll probably have to redesign a bit of it before doing the final runthroughs. i'm writing this entry so i can just sort of kick off thinking of how to solve the problem. here it is, right plain as day in this little cluster of plots from last year's poster, which has become this fine little paper:

the problem is spatial summation - or, the problem is that you don't see anything about spatial summation in those plots. for the main models, i have a CSF that was measured using test-field-sized images. the thresholds measured must reflect a sort of spatial summation, then. the problem is, i've been using those thresholds to set the baseline thresholds for the models, and then summing over the spatial responses. i had kind of had an inkling that i was being lazy there, but had overlooked how obviously stupid that is. i haven't tested the models on the threshold tasks, but i think that they would necessarily get much lower thresholds than the humans; spatial summation should give you a lower overall threshold than you would get for any single location. i need to think of a quick way to solve this, because i don't want to wind up estimating the model CSF through simulations...

and the simulations then raise the problem of noise, and how many samples should there actually be, etc etc... i guess there are benefits to doing things the simple way first, but i think i've run myself into a weird little corner here. gonna need to talk to somebody about this, probably..
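
one way i might patch it, at least to first order: if the model pools local responses with a Minkowski norm, then pooling N equal samples lowers the threshold by about a factor of N^(1/beta), so the per-location baselines should be raised by that same factor to keep the pooled model consistent with the measured CSF. a tiny sketch of that bookkeeping (the exponent, sample count, and threshold here are placeholders, not values from the paper):

  beta    = 3.5;                 % assumed Minkowski summation exponent
  N       = 256;                 % assumed number of pooled spatial samples
  T_csf   = 0.01;                % measured full-field threshold contrast (placeholder)
  % pooling N identical linear responses, r = (sum |r_i|^beta)^(1/beta), reaches
  % criterion at a contrast N^(-1/beta) lower than any single sample would, so:
  T_local = T_csf * N^(1/beta);  % per-location baseline that reproduces T_csf after pooling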

Monday, May 28, 2012

scintillating scotoma 2

another aura! (no post in a month, and now it is forced!)

right now it's flying out to the far left field, but there's also a weird 'rough patch', a bit nearer in, that appeared late on. i got a good map. i didn't feel this one coming on at all, except for what i think is just sort of standard low-level paranoia which i've developed since this all started. it's hard to tell what's prodrome and what's just coincidental suspicion.

anyways, i was reading through a section of the classification image manuscript, and, yep, can't see the letters. i think that 8/10 of these things so far have begun while i was reading. not sure if that's because i spend the majority of my time working with text in one way or another, or if that actually pushes things over the edge.

***

it's been a few hours now. here's the map i drew of this last event:

similar to last time, except that it's in the left field this time. it started out below fixation, and did the slow arc outward, following a really similar path to the last one. below, i'll show you just how similar they are.

as for the rough patch: in the plot above, notice that in the superior field there's some green (and hard-to-see gray) scribbles over the neat arcs; that was a region that i noticed late which wasn't blind, and wasn't flickering, but was clearly..  unclear. it's at about 10 degrees eccentricity, so it's hard to see well out there anyways, but it was obvious that something was wrong. i could see the scribbles that had already been drawn, but it was all very indistinct and jumbled, and i couldn't see the motion of the cursor even though i could tell that it was laying down green/gray scribbles of its own. without any other explanation, i'm going to guess that this was the fabled extrastriate scotoma - my V2 was getting some CSD!

ok, now some analysis. first, log-polar maps. i haven't gone and gotten/worked out a cortical remapping scheme, but putting things in log-polar coordinates is almost as good. actually, what you do is put them in log(ecc+1)-polar space, so you can see where the fovea is (zero, in case you're dull). here are log(ecc+1)-polar-time plots for the last two events (let's say L = log(ecc+1), for easier reference):

time is measured here from start of recording. these data are smoothed versions of the drawn maps in 10 degree radial steps. the scales are the same except that these are for opposite hemifields. the z-axis is color coded.

i mean, you can just see that those two maps are almost identical. i got more data the second time because i wasn't occupied with working out a system (btw i was caught off guard; i wrote a matlab script to record in real time, but it wouldn't work in matlab-64, and i had put off doing the -32 install because... ah.. it's done now). the origin is similar - look at these plots:

sorry about the colors. these are the same data as in the scatter plots above, except collapsed over the x (angle) axis. actually they aren't quite the same: here i've used linear regression to estimate the earliest time (before recording began) that the visual event could have begun, and subtracted it out of the time axis, so these plots start when recording began, relative to when i estimate the event began. so, basically, this aligns the data. 2 things: one, the rate of advance (remember that it's radial advance, so these technically are distorted plots; i assume that's why the slope changes with angle) is basically identical in the two cases, ~0.16 L/m. and two, the origin, i.e. where it all begins, is ~170-180 degrees in both cases (that's directly below fixation).
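
for my own reference, here's roughly what that transform and alignment look like in matlab - the variable names are made up, and it assumes the drawn map is just a list of (x, y) positions in degrees with a timestamp for each point:

  % xy: N x 2 drawn-map coordinates, degrees of visual angle; t: N x 1 times (s)
  ecc   = sqrt(sum(xy.^2, 2));       % eccentricity
  theta = atan2(xy(:,2), xy(:,1));   % polar angle
  L     = log(ecc + 1);              % the "L" coordinate used above
  % estimate when the event began by regressing L on time and extrapolating
  % back to L = 0 (the fovea), then shift the time axis so the events line up
  p       = polyfit(t, L, 1);        % L ~ p(1)*t + p(2)
  t_onset = -p(2) / p(1);            % extrapolated time at which L would be 0
  t_align = t - t_onset;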

i've got other analyses, but the above sums up the interesting stuff. i know i've seen these things do weirder things, following more difficult-to-understand courses, and i hope i see one of those next time i'm able to record this business.

Tuesday, April 24, 2012

scintillating scotoma

a few minutes ago (~22:35), reading, and i notice that letters are hard to see. that sensation of having a bright afterimage at fixation; it's moving rightward, usually it's been leftward (I haven't taken notes on the past 2 occurrences, sadly..); this is at least the first of the last 4 to go through the right field, if not longer.

it begins with just a weird sense of scotoma-ness, very near fixation, but the blind areas are hard to pin down - they seem to change very rapidly, or else it's more a sensation of blindness rather than actual blindness. it's strange how it sticks at fixation even as it arcs out into the periphery; it seems it always arcs into the lower field, after arcing just a bit above fixation. i haven't yet noticed one passing across the whole field - maybe it's restricted to one hemisphere?

it's almost gone at this point (almost 30 minutes after the first signs), and all that's left is a flickering at the very top of my visual field, as if there's a light flashing on my eyebrows; interestingly, if i look up, it disappears, which is strange because it should be attached to the field location. i can look up, it disappears, look down again and it reappears. maybe it was an interaction with the reflection of room/computer lights off my eyeglass frames? can't test, it's all gone now..

and i have a headache (actually it started a few minutes in; the light show was so slow to start, i thought we were skipping straight to the headache for a few minutes, a bit of disappointment, but it worked out!)

also, some hints: today and yesterday, i several times wondered if i wasn't going to get a headache soon, without understanding why. not sure what sets off those feelings.. this afternoon, i thought i saw some flashes at some point when i was walking down the hallway, and that really made me suspicious; and, all day, really tight, painful muscle spasms throughout my upper back, trapezius on both sides.

23:05

map below: i have a few of these now, should get to processing them this summer..



Edit: look at what this guy has done: http://www.pvanvalkenburgh.com/MigraineAura/MigraineAuraMaps.html. pretty amazing.

also, i did wind up writing a script to analyze these plots; once I get some stuff settled, i'll post those in a new entry.

Friday, April 20, 2012

lazy friday dark adapt


Spent afternoon of Friday, 4-20-12, with a ~3 log unit (~.2%) ND filter over my right eye. Made the following observations:
(Took the filter off after about 4 hours. It wasn’t bothering me much anymore, but I think the plastic and rubber stuff in the goggles was irritating my eyes, which were starting to feel kind of dry and red. Light adaptation is really fast, it’s just been a minute and the (formerly dark adapted) right eye’s image only seems slightly brighter than the left’s.)
  1. Noise: the dark adapted eye’s view is noisy, and the noise intrudes into the dominant view. It’s irritating. The dark adapted view isn’t being suppressed, though, it’s there like a ghost. Double images, from depth, are strikingly noticeable, not sure why.
  2. Pulfrich effect: first time I’ve really seen this work. I put my index fingers tip-to-tip and move one from side to side, fixating on the still one, and the moving finger looks like it’s rotating. My hand even feels like it’s rotating.
  3. Pulfrich effect 2: Fusion isn’t always working, but I seem to be ortho a lot of the time. I just noticed, though, that if I make quick motions, e.g. a flick of a finger, there’s a delay in the motion between the two eyes; the dark adapted image is delayed by several hundred milliseconds! Especially obvious if I focus at distance so I have a double image of the finger. Explains the strong Pulfrich effect.
  4. Noise 2: Just looked at some high frequency gratings. With the dark-adapted eye, the noise was very interesting, looked like waves moving along the grating orientation, i.e. along the bright and dark bars there’s a sort of undulating, grainy fluctuation.
  5. I still have foveal vision and color vision, but both are very weak. Dim foveal details are invisible. High contrast details (text, the high frequency gratings) are low contrast, smudgy..
  6. Motion is kind of irritating, I think because it brings about lots of uncomfortable Pulfrich-type effects. Even eye movements over a page of text can be bothersome, because there is always an accompanying, delayed motion. I’m guessing that the saccade cancellation is being dominated by the light-adapted eye, and so I’m seeing the dark-adapted saccades. I don’t notice a depth effect, but walking around in the hallways I do feel kind of unsteady, maybe because of motion interfering with stereopsis. If objects are still, stereopsis seems to be okay.
  7. If I take vertical and horizontal gratings (64c/512px), add them together, then look at them at 25% contrast from about 30cm (here at my desk), I don't see a compound grating – I see patches of vertical and patches of horizontal. I've never noticed this before; I wonder what differences there are with scotopic vision and cross-orientation suppression.. (a quick sketch for generating this stimulus is below, after the list.)
  8. I tried to watch my light-adapted eye move in a mirror, but the dark-adapted eye just couldn't see well enough. I think a weaker filter would make it possible.
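
Re: item 7, here's a quick sketch of how I'd generate that compound grating (an off-the-cuff version, ignoring gamma correction, not the exact stimulus code):

  n      = 512;                         % image size, pixels
  cyc    = 64;                          % cycles across the image
  [x, y] = meshgrid(0:n-1);
  v      = sin(2*pi*cyc*x/n);           % vertical grating (varies along x)
  h      = sin(2*pi*cyc*y/n);           % horizontal grating (varies along y)
  cmp    = (v + h) / 2;                 % compound, in [-1 1]
  img    = 0.5 * (1 + 0.25 * cmp);      % mean gray, ~25% contrast
  imagesc(img, [0 1]); colormap(gray); axis image; axis off;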

Thursday, April 19, 2012

Memes and Pharmacies

That rivalry proposal went in last Monday with no problems, along with a long-festering manuscript, so it seems I took a week off from writing. Now I have progress reports to do, presentations to prepare, other manuscripts to complete, on and on.. I need to make sure I keep up with this journal, which seems to be helping in keeping my writing pace up.

Vacation is over!

About 12 years ago, I read The Meme Machine by Susan Blackmore. It was around the time that I decided to major in psychology, and I was reading all this Dan Dennett and Douglas Hofstadter stuff, but it was her popular science book that really had a big effect on me - I would say that it changed my worldview completely, to the extent that I would identify myself as a memeticist when discussions of religion or that sort of thing came up (it was still college, see) - I really felt like it was a great idea, that human culture and human psychology could be explained as essentially a type of evolutionary biology. I still believe it, and so I suppose this book still really sits near the base of my philosophical side, even though I don't think about these things so much anymore.

I bring this up because the other night Jingping and I were talking about being tricked, since this was the topic of a Chinese textbook lesson I had just read, and I recounted the story of being completely conned by a thief once when I was a clerk working at CVS. I wound up going on more generally about working there, and I remembered that I had worked out a memetics-inspired 'model' of that store, which I hadn't thought about in a long time. One of the things about the memetics idea that had really gotten to me was that you could see social organizations as living creatures with their own biological processes - not that this is an idea original to Blackmore, and I'm sure that's not the first place I had heard it (like I said, I was also reading Dennett and Hofstadter at the time), but she did work it into a larger sort of scientistic system which seemed to simplify and unify a lot of questions.

Now I realize that the criticisms of memetics that I heard from professors in college (when I questioned them about it) were mostly right, in that it mostly consists of making analogies between systems; only in the last few years, with the advent of online social networking, has a real science of something like memetics actually gotten off the ground (this is a neat example from a few weeks ago), and it's very different from what had been imagined when the idea was first getting around.

Anyways, I thought I'd detail here my biology-inspired model of a CVS store, ca. 2001. I have a notebook somewhere where I had detailed a whole system, with functional syntax and everything, for describing social organizations in terms of cellular, metabolic systems. This might have been the first time that I had tried to put together a comprehensive model of a system, now that I think of it. The store, I thought, was itself a cell in a larger CVS system dispersed across the city, which itself was a system dispersed across the country. I was mainly interested in the store level, where you could see different components acting as reagents. It was a strongly, though not completely, analogical system, modeled on plant metabolism: a plant needs carbon, so it uses sunlight to break down carbon dioxide, releasing the unneeded oxygen back into the world. A store needs money, so it uses salable products to break down a money-human bond, releasing the human back into the world.

Of course, with plants the sunlight comes free from heaven - all the plant can do is spread out and try to catch it - while salable products must be delivered from another part of the CVS system: the distribution center. The distribution center emits reagents to the stores, where the money-human bond is broken down. The CVS system also emits catalytic agents into the world - advertisements - to facilitate the crucial reaction. The money absorbed by the system is energy which is used to drive the system through a reaction not unlike oxidation - the money-human bond is reformed systematically, with employees as the human component. This reformation is what really drives the system. Those new human-money bonds then go out into the world and fulfill the same function, breaking apart, as they interact with other businesses.

Looking at a business in this way totally changed the way I understood the world. Businesses, churches, governments, political parties, armies - all of them can be thought of as living creatures, or as organs of larger creatures, rather than as some sort of human means-to-an-end. By changing perspective between levels, we can see ourselves as means to the ends of these larger systems, just as our cells and organs are means to ours. Now I'm finally getting around to reading straight through Hofstadter's GEB, and so I can see that this general idea of shifting perspective across levels is an old one that has been astonishing people for a long time. But for me, coming to see human culture as being alive was a fundamental shift in my intellectual development, one that hasn't really been superseded since. I haven't become a real memeticist yet, but it's all still there, underneath... these tiny tendrils of memetics live yet...

Tuesday, April 03, 2012

dream post!

recurring dream:

jingping and i are trying to get to the train station. the city is like a cross between boston and chicago - it's boston but with lots of overhead walkways and more of that chicagoesque feeling of sharp-edged criss-crossedness.

lots of things happen as we're on our way, it's like we're being chased, but the recurring part is where we get into the station and have to start climbing a stairwell, up and up. i know what's going to happen as the dream progresses. there's a fear of falling down the stairwell, but what happens is that it gets narrower and narrower, less and less place to put your feet, and you're crawling finally up a spiral tunnel, until you can't go further because there's just not enough space - around this point i know it's a dream, because i'm thinking that it can't really be this way, and i'm trying to change it because it's so damn uncomfortable. even in the dream, i'm thinking, why does this happen, why can't i fix it?

once it got to that point, i realized that my eyes were closed, but i couldn't open them, and yet i could still kind of see the twisting stairwell tunnel ahead - and there was a confusing sensation of being able to see but not being able to see, at the same time (interesting relevance to the visual consciousness stuff i was wondering about earlier, which is really why i'm writing it down). i was feeling around for the gap ahead, to see if i would fit, and i knew jingping was behind me and i couldn't back up, but i also felt like i could see it all...

i think i woke up soon after. i figure that noticing my eyes were closed and not being able to open them, and yet still having a sense of vision, must have been REM atonia - sleep paralysis, the sort of thing that gives you the feeling of being trapped and immobile in a bad dream.

anyways, i'm pretty sure i've had this dream a few times, the "shrinking stairwell dream".

dream post, yeah!

Monday, April 02, 2012

model update

I'm working on other things lately, but I did finally get that multi-channel rivalry model working - main problem was that I had written the convolution equations out wrong. I had to do the convolution there in the code because the filter array is irregular - there's no function to call for 2d irregular-array convolution, much less for switching the convolution between different layers.

Here's what I had done:

Z´(x) = Z´(x) + F(x)·Z(x), where x is a vector of spatial indices, Z is the differential describing the change in excitation or adaptation over time, F is basically just a 2-d Gaussian representing the spatial spread of activation for the inhibitory or excitatory unit, and Z´ is (supposed to be) the differential convolved with the spread function.

Now that doesn't make any sense at all. I don't know what that is. In the actual code that equation was actually 3 lines long, with lots and lots of indices going on because the system has something like five dimensions to it; so, I couldn't see what nonsense it was.

This is how it is now:

Z´(x) = Z´(x) + sum(F(x)·Z(x))*F(x)

THAT is convolution. I discovered what was going on by looking at the filter values as images rather than as time plots; Z and Z´ didn't look different at all! Z´ should look like a blurred version of Z. Such a waste of time...
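
For my own future reference, here's a bare-bones version of the gather form of that operation, written out for an irregular array (the names and the row normalization are mine, simplified way down from the real five-dimensional mess):

  % xy: N x 2 positions of the units on the irregular array; Z: N x 1 values
  sigma = 1;                                  % spread of F, in the array's units
  D2    = bsxfun(@minus, xy(:,1), xy(:,1)').^2 + bsxfun(@minus, xy(:,2), xy(:,2)').^2;
  F     = exp(-D2 / (2*sigma^2));             % Gaussian weight from unit j to unit i
  F     = bsxfun(@rdivide, F, sum(F, 2));     % normalize each unit's incoming weights
  Zc    = F * Z;                              % Zc(i) = sum_j F(i,j)*Z(j): Z blurred by F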

Anyways, it kind of works now. Different problems. Not working on it until later in April. The 'simple' single resolution model was used to generate some images for my NRSA application. Here's a sample simulation of strabismus (with eye movements):


Monday, March 26, 2012

standing at fenway station, thinking, as usual, "what's my problem", and i came up with a nice little self-referential, iterative statement of it: it's pablum, but i'm not usually this verbally clever, so let's write it down:

don't do what you don't believe
can't believe what you can't understand
won't understand what you won't do

so that's the problem; it's not exactly as i would normally say these things. if you asked me before i formulated this, i would probably say, "i don't like to do what i don't understand", and that's what i started out thinking. but then i asked, "why is that?", and decided that if i don't understand it, i can't really attach to it - then i saw the loop.

interestingly enough, the solution is the negation of the problem, literally:

do what you believe
believe what you understand
understand what you do

both of these statements have a sort of inertia; once you have one of the predicates, it starts rolling and keeps going. since they aren't specific, both statements are generative or productive - the referents don't need to be the same on each loop, but of course they should be logically linked.

(really, the middle statement isn't necessary in either one, with 'believe' replaced with 'understand' in the first line. i feel like the middle line adds some depth, though, so there it is.)