Thursday, September 06, 2007

Introducing Elgar and Stern

Elgar: It's interesting that, still, no one is able to explain the nonlinearity of contrast detection for human observers.

Stern: Why is that so interesting?

Elgar: I mean, academically it's interesting. In an everyday sense, it's probably not as interesting as most things that-

Stern: I understand. So, why would you say it's so interesting?

Elgar: It's just something that people have been talking about for a long time. Very weak contrasts seem to be brought into visual awareness by an expansive nonlinearity.

Stern: What does that mean, exactly?

Elgar: Basically, it means that the input is being raised to a power greater than one as part of the detection process.

Stern: Input being contrast.

Elgar: Right. Specifically, it seems as if contrast is raised to a power of around 2.5. The thing is, your brain is not an equation. Even though we can write an equation that describes your perception of different signal intensities very well, we really don't have a good idea of what, physically at least, that equation is describing. There are several candidates.
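
(A quick numerical sketch of the kind of thing Elgar means, using the ~2.5 exponent he quotes; the contrast values and the function itself are purely illustrative, not a model anyone has fit.)

    # Toy sketch of an expansive transducer near detection threshold. The 2.5
    # exponent is the rough value quoted above; everything else is made up.
    def expansive_response(contrast, exponent=2.5):
        return contrast ** exponent

    for c in [0.005, 0.01, 0.02, 0.04]:
        print(f"contrast {c:.3f} -> internal response {expansive_response(c):.2e}")
    # Each doubling of contrast multiplies the response by 2**2.5, about 5.7.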

Stern: I can't wait for you to describe them to me.

Elgar: The simplest one is just to say that the transducer is built in such a way that it transforms input into output as a power function.

Stern: Like a neuron, maybe?

Elgar: Could be. Or maybe a networked population of neurons. Maybe for low signal intensities, a contrast-detection neuron just has an accelerating response to increasing input. Then, you still have to explain why that particular nonlinearity goes away for higher contrasts, but people love to suggest different sorts of gain control, so it's not really a problem.
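
(One common way gain control of this sort gets written down, purely as an illustration and with made-up parameter values, is a divisive equation like the sketch below: expansive at weak contrasts, compressive at strong ones.)

    import numpy as np

    # Divisive gain-control transducer: R(c) = c**p / (z + c**q). The exponents
    # p, q and the constant z here are placeholders, not fitted values. With
    # p > q the response grows roughly as c**p for weak contrasts and as
    # c**(p - q), a power less than one, for strong contrasts.
    def transducer(c, p=2.4, q=2.0, z=0.01):
        return c ** p / (z + c ** q)

    contrasts = np.array([0.005, 0.01, 0.05, 0.2, 0.8])
    print(np.round(transducer(contrasts), 4))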

Stern: Wait, it goes away? Are you talking about transducer saturation? Weber's law, that kind of stuff?

Elgar: Right. Once you've detected a signal and its intensity continues to increase, the internal response, and with it your perceived intensity, grows as a power less than one. So, for example, the stronger the signal is, the bigger the difference in intensity you're going to need to notice an increase. That's kind of like Weber's law.

Stern: I thought that was Weber's law.

Elgar: Strictly speaking, Weber's law is where you need a constant fraction of the current signal intensity in order to tell a difference. If I need to add 1 pound for you to notice a difference in a 10 pound load, and I also need to add 5 pounds for you to notice a difference in a 50 pound load, the fraction is constant, and that's Weber's law behavior.
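
(Elgar's example as a two-line calculation; the 10% fraction comes straight from his numbers and has no other significance.)

    # Weber's law: the just-noticeable increment is a constant fraction of the
    # base intensity (here 1 lb on 10 lb, i.e. 10%, as in the example above).
    weber_fraction = 0.1
    for base_lbs in [10, 50, 100]:
        print(f"{base_lbs} lb load -> need about {weber_fraction * base_lbs:.0f} lb more to notice")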

Stern: Okay, I get it. So, an accelerating transducer is one explanation for the detection nonlinearity. What else is there?

Elgar: Well, it could be that all of your neurons transduce linearly near the detection threshold. Plus, it's certainly true that you have lots and lots of neurons. If both of these are the case, and if you're monitoring lots and lots of neurons waiting for a signal to pop out against the background noise level, then uncertainty theory suggests that as intensity increases your sensitivity to the signal will increase rapidly as you become more and more certain as to which neurons are the best ones to monitor.

Stern: So why does uncertainty theory predict an accelerating increase in sensitivity? That's not exactly an intuitive idea.

Elgar: I know. It's a mathematical thing. 'Certainty' is kind of just an ad hoc way of describing an outcome. If you're making decisions based on the biggest responses you see over a set of neurons, you effectively have a variable noise source. When the signal is weak, the important noise is a combination of all those neurons that don't matter, and the ones that do. When the signal is strong, the only noise that matters is what's in the relevant neurons, because those will always have the largest responses. The transition between weak and strong signals, then, basically corresponds to a transition from high to low noise, which is equivalent to an increase in sensitivity. An increase in instantaneous sensitivity with increasing signal strength appears as an acceleration in overall sensitivity! For strong signals, the observer's behavior will just follow whatever the transduction function of the neuron is. In this case, maybe it saturates as a power less than one.
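
(A toy Monte Carlo version of that argument, assuming linear transduction, Gaussian channel noise, and a max-rule decision over the monitored channels; the channel count, noise level, and signal strengths are all arbitrary. With many channels, proportion correct rises much more steeply with signal strength than it does with a single channel, which is the accelerating behavior being described.)

    import numpy as np

    rng = np.random.default_rng(0)

    def percent_correct(signal_strength, n_channels, n_trials=5000, noise_sd=1.0):
        # Two-interval detection: one interval carries the (linearly
        # transduced) signal in a single channel, the other is noise alone.
        # The observer picks the interval with the larger maximum response.
        signal_interval = rng.normal(0.0, noise_sd, (n_trials, n_channels))
        signal_interval[:, 0] += signal_strength
        blank_interval = rng.normal(0.0, noise_sd, (n_trials, n_channels))
        return np.mean(signal_interval.max(axis=1) > blank_interval.max(axis=1))

    for s in [0.5, 1.0, 2.0, 4.0]:
        many = percent_correct(s, n_channels=1000)
        one = percent_correct(s, n_channels=1)
        print(f"signal {s}: {many:.2f} correct with 1000 channels, {one:.2f} with one")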

Stern: Man.

Elgar: There's one more explanation, one that I don't know much about.

Stern: So this will be a brief explanation.

Elgar: I hope so. The nonlinear transducer and uncertainty theories both abide by standard assumptions of signal detection theory. So, they assume that even below 'threshold', the neurons, or whatever, are actually responding to the signal; the response is just hopelessly buried in noise.

Stern: What if there is no noise? Why do you keep mentioning noise?

Elgar: All systems are noisy, and usually the noise has a number of different sources. In the visual system you have photon noise, metabolic variability, eye movements, thermal noise, and other things. All of these, we hope, combine to produce basically Gaussian noise. But there's no chance at all that there could be no noise, and in fact every model of signal detection, perceptual or otherwise, implicitly contains terms for performance-limiting noise.

Stern: I think I knew that already. I should have known that this wouldn't be a simple idea.

Elgar: Actually, noise isn't what I'm talking about. My point is that the first two theories assume, sort of, that the signal is always transduced, and that uncertainty or noise limits detection. The last option is that this isn't true; that there is a true 'hard threshold' that has to be exceeded before any transduction takes place.

Stern: I see. Kind of like overcoming friction to get something moving across a surface. Up to a point, you may push and get no result, but with enough force you'll get it moving.

Elgar: That's it! So, maybe the transducer is linear, but it has a real zero-point. Some intensities just fail to evoke a response, but at some point the neuron gets turned on and starts transducing. If it's a steep enough function, then depending on how the noise is implemented, something like this might appear from the outside to be a sudden, brief acceleration of the response to input.
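
(A sketch of that last idea, assuming a linear transducer with a true zero-point followed by late Gaussian noise; the threshold, noise level, and decision criterion are invented for illustration. Viewed from the outside, the detection rate jumps from near zero to near one over a narrow range of contrast, which is the sudden, steep rise Elgar is pointing at.)

    import numpy as np

    rng = np.random.default_rng(1)

    def threshold_linear(contrast, theta=0.02):
        # No response at all below the hard threshold, linear above it.
        return max(0.0, contrast - theta)

    def detection_rate(contrast, noise_sd=0.005, criterion=0.01, n_trials=20000):
        # Late noise is added after transduction; "detected" means the noisy
        # response exceeds a fixed criterion.
        responses = threshold_linear(contrast) + rng.normal(0.0, noise_sd, n_trials)
        return np.mean(responses > criterion)

    for c in [0.015, 0.02, 0.025, 0.03, 0.04]:
        print(f"contrast {c:.3f}: detected on {detection_rate(c):.2f} of trials")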

Stern: Okay, I agree with you that maybe this is kind of interesting. But if I had to hear it more than once, I don't think I could take it.

Elgar: That's understandable. So, aren't you going to ask about how the ways in which noise can be implemented in a hard-threshold theory are especially interesting?

Stern: We'll save that for later. Can I just have my hamburger now?

Elgar: Alright. Did you want fries? I can't remember.

Stern: No fries, just a burger.

1 comment:

  1. Hey kitty, I think you typed your prelim in the wrong place~~~
