Showing posts with label metaphors. Show all posts

Tuesday, June 03, 2014

Sorry May


Okay, so this picture illustrates why I am not a Tegmarkian. Tegmark, if you don't know, is a clever cosmologist at MIT who has put forward, in a book, the thesis that mathematics is the ultimate reality, and that all mathematics is in fact a kind of reality - that there is a mathematical multiverse, which we know exists on account of the mathematics existing.

So I don't buy this. I'm diametrically opposed to this idea. Not opposed, really - I don't care too much, but I am opposed in that I believe the complete opposite. Mathematics - and physics as a subset of mathematics - is an artifact of the human mind, that's all it is. The fact that the world exists in some form is curious, although it seems incoherent to me that we can actually know anything about its true nature - but to suppose that its true nature is mathematics seems so backwards that I just wanted to write some things down.

I get where he's coming from. The world does exist, there is a reality, and it is somehow regular and consistent - it has properties that repeat or sustain, and why should it? Its continuities and discontinuities are all so numerically describable, and why should they be? And the most basic elements that we know to exist - photons, quarks, magnetic fields - seem to be perfectly and completely described as systems of numbers. And why should this be?

My mind seems to have taken the easy way out, because it just screams: but numbers and math are things that human minds *do*! They describe the world because the brain is a description machine, that's what it *does*! If the curious thing is that the description is so perfect and complete, then I have two responses - the space of possible descriptions that the mind can form is so vast, so impossibly vast, that it would be surprising if we could *not* find consistent systems of description for the world; and no description of the world is by any means *complete*.

The completion point is worth going on about. The scope and complexity of the natural world is impossible to comprehend. It's absolutely impossible to describe it all - and I'm saying this as a scientist with full faith in science as an endeavor for helping us to understand the world. We might choose some very narrow sliver of reality and subject it to intensive study, and then, there, we can describe it in such detail that we feel that it's okay to say we've basically got it all down. But that's it - those little, tiny, infinitesimally small splinters, and we think we have a complete description? What we have is a consistent system - mathematical physics - that can be used to describe anything we come across, but each description will be new, different, from what has been seen before.

So no description is complete. Okay, maybe that's a straw man, but I don't think so. Tegmark wants to claim that not only is physics a (potentially) complete description of our reality - or no, not a description, but *the thing itself* - but that realities we haven't yet encountered, i.e. realities *outside our reality* are contained within it. He likes the example of the discovery of Neptune. Astronomers had noted disturbances in the orbit of Uranus, and finally realized that there must be another planet even further out - they realized this mathematically, in such detail that they knew where to point their telescopes to find Neptune, and they did so, successfully.

Tegmark wants to use this example to imply that mathematics is a kind of tapestry containing all reality, and that by following it out from what was known, an *entire planet* was discovered, first in the mathematics, and only later by human senses. But this doesn't prove any kind of point about the reality of mathematics, and it's not even true, strictly, that Neptune was first discovered in a mathematical form. It was first discovered in the form of its gravitational influence, which affected Uranus. It's just that at first, astronomers didn't understand what they were seeing - they had to *do some math* in order to understand. But the data were all there - the measurements of Neptune in the flesh were there already, before Galle saw it with his own eyes (and others had seen it before, all the way back to Galileo, albeit not knowing what they were looking at).

The point here is that, really, new knowledge about the world can only come from new data about the world. Mathematics based on reality that has been observed - i.e. physics - can then tell you how to understand those data, but it is only that, a tool, an activity of the human observers. It doesn't exist outside of human endeavor. I am dead set in this opinion.

Anyways, so I basically had that conversation with myself last night on my walk home, and then I made that figure. It should be self-explanatory, but just in case: the biggest circle, the purple one, is the realm of all possible human thought. The circles within are not drawn to scale, of course. There are many domains of human thought, and the next two that I've outlined are descriptions and axiomatic systems. Both of these I mean in the broadest sense you can imagine. Physics falls within the realm of axiomatic systems of description, or it should (Hilbert's sixth problem). Within axiomatic systems you have consistent axiomatic systems, which should contain a correct physics, if it exists - i.e. if the Standard Model and General Relativity could be united. Taken as separate systems, I think that each of these theories alone counts as a consistent system, but together, so far, they do not.

Tegmark's reality is the domain of consistent axiomatic systems of description, of which our physics is (presumably) just a tiny part. Any other consistent system of physics would also fall in this domain, and Tegmark believes that each of these systems must also correspond to its own universe, just as our physics corresponds to ours. I think it's a fantastic idea, which I might illustrate by putting a big 'fantasy' circle somewhere in there, in between human thought and physics.

Friday, January 24, 2014

overflow

quick note on something unimportant:

my qualia clearly overflow my behavioral access to them.

say there's a thing here. a can of beer (cans I think are more prosaic), with its characteristic physical attributes.

when i look at it, i have an experience of it. much of that experience is strongly, closely correlated with the physical attributes of the can. you can take this for granted, or you can confirm it by asking me questions and carefully collecting my responses. the can's geometric properties, its shape, its albedo and texture, things like that. other parts of my experience are not correlated with attributes of the can, but are quirks of my own systems. colors, a/modally completed contours, illusory depth from shading, meanings of symbols, etc.

all of this you can, in principle, recover from me by making certain types of measurements - basically, you prompt me with questions or decisions, and i give you responses. these can be words, numbers, button presses, ratings, slider adjustments, essays, etc.

let's say i give you all the time in the world. you have time to run every test you can think of. you can run every task until performance asymptotes, and you can estimate any parameter that you can dream of. every aspect of this can of beer that i have any ability to respond to, to access behaviorally, is your data.

is there anything left to my experience that you have not collected, that you cannot find in your data and models?

my qualia are overflowing!

(you can make this same sort of argument for physics - i measure the physical attributes of an object until i can't find any more to measure. you can then point out, well, isn't there something left? the thing itself? but then i can ask you, what is there, about that thing, that is not described or captured in my measurements and models? what can you point to? that the thing is *there*? well, I have its thereness perfectly specified in a coordinate space. that the thing is *substantial*? well, i've got every aspect of its substantiality described by my equations of quantum electrodynamics. what is left? i think that, ultimately, there's nothing for you to point to, because in every case, i can show you how i've measured or modeled whatever it is. i don't see how the case is the same with phenomenal experience.)

Tuesday, January 07, 2014

idealism 2

The days are counting down, just weeks now until the Big Shift. This evening, ideas are swirling through my head, especially a reiteration of the first version of this post. I wanted to resketch those ideas, so here we go, in less detail but more formally:

1. There is a real world that exists in some form that we can perceive, accurately or not.
1.1. The substance of this real world is not physical or objective or dualistic.
1.2. The substance of the world is subjective and phenomenal.
2. All us humans (and many other creatures) experience phenomenal consciousness.
2.1. Phenomenal consciousness is a substructure or subfunction of a brain.
2.1.1. Consciousness is not the only type of phenomenal substance (reiterating 1.).
2.2. Experience of phenomenal consciousness is analogous to a space with things in it.
2.2.1. Things that are 'in the space' of consciousness are things that one is 'conscious of'.
2.3. Objects are neural parsings of stuff in the real world.
2.3.1. Objects can be informatively (yet redundantly) labeled 'neural objects'.
2.3.2. A substructure of a neural object that is present in consciousness is an 'object-in-consciousness'.
2.4. The stuff in the world that is parsed into objects is also subjective and phenomenal.
2.4.1. Generally this stuff is not conscious.
2.4.2. An exception is when the stuff is a living brain.
2.5. We generally recognize that stuff in the world is not conscious.
2.5.1. We come to this conclusion because objects-in-consciousness are within the space of consciousness, but do not themselves contain conscious spaces (except for brains, and we only know they do because they say so).
2.5.2. For 2.5.1. to be true, one consciousness would need to be able to emulate another.
2.5.3. Despite the truth of 2.5., it is arrived at for the wrong reasons.
2.5.3.1. We mistake objects-in-consciousness for stuff in the world.
2.5.3.2. Since we make this mistake, and since objects-in-consciousness are not themselves conscious, we believe (correctly) that (most) stuff in the world is not conscious.
2.5.4. We are perplexed that brains are conscious, yet do not appear to be.
2.5.4.1. This is because we are mistaking brains, which are conscious, for objects-in-consciousness, which we have already mistaken for stuff in the world.
2.5.4.1.1. This is a subtle error, because if the middle step is left out, it seems not to be an error (we are confusing brains for stuff-in-the-world, which they are).
3. The hard problem of consciousness is the apparently uncrossable gulf between phenomenal subjective experience as-a-brain, and the non-phenomenal objective status of-a-brain.
3.1. Items 1. and 2. show how this gulf is a consequence of a sequence of mistakes about the status of stuff-in-the-world and of objects-in-consciousness.

This is a type of idealism - which of the many subtypes I'm not sure - that is, while not popular (as far as I can tell), at least tolerable in philosophical circles. I'm liking it more and more!

Monday, October 14, 2013

objectivity

I finished Chalmers' book - The Conscious Mind - this weekend. A funny thing was that the next-to-last chapter, basically just a set of musings on the relationship between his proto-theory and artificial intelligence arguments, didn't interest me at all. This is funny because if this was 2001, I probably would have skimmed the book up to that chapter and then read it over and over and over again.

It's an excellent, important book. I wish I'd read it back when, but now was good enough timing. As I mentioned in a previous entry, just about all of my thinking on philosophy of mind and consciousness is in this book; I think some of the ideas I developed naturally, like a lot of people do, but I've also read many of Chalmers' papers over the years, and a couple I've read many times, so he's undoubtedly responsible for straightening my thoughts on the subject.

But this book, it's one of those cases where reading is like sharpening your mind. You may have a set of knives, but you've let them clatter around in a drawer for a while, used one here and another there, and so they get banged up and dulled and maybe a bit tarnished, and so finally you sit down with the whetstone and a cloth and sharpen and clean, and there, a drawer full of shining, sharp knives. That's what it was like, reading this book.

In a way, it just sort of set me up with new vocabulary, or ways to structure my thinking about perception and experience, and why they are interesting, and what the alternatives are in thinking about how they are interesting. Sometimes, this is enough to take away from a book - it helps you organize, doesn't revolutionize your thought, but it helps you straighten things out, like putting the knives into categories, with the tips and blades all facing together.

But he also inspired me, and hopefully just at the right time (though I was asking for it, looking for it, so it's silly to bring up the notion of coincidence). He talks about psychophysics - although in more basic terms than the conventional science - and he presents it as a way of using subjective experience as evidence, as a thing to be explained. This was how I felt about it for a long time, but as the years and papers and experiments wheel on, you can't help but start to see things operationally, in terms of functions and moving parts, and you operationalize your subjects too, and they become black boxes that press buttons. This is so wrong!

It's wrong, and I used to know it was wrong, and I've maintained a sense that it's wrong - I recognize that this sense is part of what sets me against the West Coast internal noise crowd in modern psychophysics, and which allies me so much to the European tradition. But I'd kind of forgotten, explicitly, how it's something of a travesty against psychophysics to operationalize your subjects, especially if you're interested in psychophysics per se, and not in using it as a means to another worthy end.

What I'm rambling about is what we all know - when you have a subject in a psychophysics experiment, and you give them instructions on how to do the task, you are asking them to take hold of a phenomenal object, and to give you responses based on that object. Often the object is so ineffable that it can only be explained by example - 'this, you see this? when you see this, press this button; or, press this button when you think you see this, here'. The central object in the entire experiment is the thing that is seen. The instructions to the subject are the closest that the experimenter comes to the phenomenon of interest. But it's too easy, I see now, to slip into the mode of giving those instructions and then thinking that the phenomenon is in the data, and that by describing the data or understanding the data, you're understanding the phenomenon.

Ultimately, maybe, it's just semantics. Ultimately, all you have to analyze in any rigorous sense is the data. But I think that many psychophysicists forget, and start talking only about performance - I've done this many times now. I've gone long enough without enough inspiration, for years now, only seeing it peek through now and then, always having trouble circling back to the real object of fascination. But this book, Chalmers' book - or probably, just a few choice passages from the book - has renewed my clarity, and as I said, just in time, because I feel that the importance of these ideas, for my research and my writing and my very career, is swinging right into center stage.

Also, I have a headache right now; officially it's been 59 days since the last one - the longest gap since record keeping began (May 2012). I gave it a 3.5, but I'm going to go raise that to a 4.5 now, it's getting worse.

Thursday, September 26, 2013

not idealism

okay, after the epiphany of the last post, i did a little re-reading of some basic boilerplate, and i'm thinking that what i'm calling 'idealism' there is really a type of pure panpsychism, saying that everything is a subjective state. it's non-dualistic simply in that it flips the hard question around: what evidence is there for, or how do you conceive of or explain, non-subjective states? from the materialist/physicalist point of view, the subjective state seems impossible to understand, which leads to the dualist perspective. but then, on the other side, you do away with 'physical' completely. everything is a subjective state, analogous to consciousness, but usually (almost always) without the complex representational structure. so in this system, dualism is like physicalism + panpsychism. idealism is usually used to describe a point of view where everything that exists is a representation, which i still think is craziness.

now, another note on metaphors for thinking: i am now finally (after probably 10 years of delay) reading Chalmers' book 'The Conscious Mind'. i've read many of his papers, some of which are summaries of the more digestible ideas in this book, so in a way i'm prepared for him. but it's a real philosophy book, and it gets difficult. since i can't hope to understand it all, i do a lot of close skimming, reading words and getting some meaning but not all meaning. i guess that's always the case. i had the thought that this process is like looking at objects in water of varying degrees of clarity. when you understand what you're reading, the water is clear, you can see all the surfaces, each new point of fixation is visible and well defined, and you can see the whole structure. but when the water is muddy, you can't see the whole structure - you see parts of it poking into clear parts of the water (muddy water is never uniformly muddy, but the muddiness is in swirls, leaving 'open' spaces of clarity), and those maybe you can see clearly, but even they may be hazy. anyways, the visual metaphor for understanding - clarity, detail, focus, fogginess - really comes home when reading a philosophy text.

Sunday, September 22, 2013

idealism

read Koch's "confessions of a romantic reductionist" this weekend. nice book, and he's such an interesting character. i was slightly disappointed that there wasn't more to learn - it was thin (i don't normally finish a book in 2 days). not that it wasn't full of things to learn, but most of it wasn't new to me, i guess because i'm familiar enough with the literature.

it did spark one interesting thought that i'd never really had before, and it wasn't really in the book itself - you know how these things work, you might be primed in some way for something, and then someone says the right thing in the right way, and something new appears. here's what i thought:

(as a warning, the book is all about consciousness, the science of trying to explain what consciousness, i.e. subjective consciousness, is. all discussion of this topic needs to be prefaced with a warning that you're getting into something deep. so there it is.)

information theories of consciousness, like Tononi's or Chalmers' (such as it is), are basically dualist theories. they say that the stuff of reality has two aspects - one is the objective, measurable, interactive aspect, that we can measure in terms of physics (in the familiar sense of the word). the other aspect is subjective, intrinsic, and emergent - emergence in the sense of information, of a systematic quality that is real and not conceptual or based in observation - and it can be calculated or understood in theoretical terms (e.g. in the terms of Tononi's theory), but it cannot be measured in a relational sense.

this is not the new thought that occurred to me. i'm already on their side. i'm not a physicalist, which i think is a small-minded position, in that it shows that the person just hasn't gone far enough in thinking about the difficulty of the problem (i.e. the Hardness in Chalmers' terms). physicalism says there is only the objective stuff, and that whatever emergence there is, is a function of observation - i.e. a system is described by some agent, like a human scientist, and the scientist recognizes that the system has properties that are not included in the components of the system, and yet which flow from the combination of the component qualities - this kind of emergence is more a fact of higher-order recognition on the part of the observer. there is nothing actually there in the system that corresponds to the emergent quality.

still, this is not the new thought. here it is: idealism. not a new thought, but new for me. i've always been much more set against idealism than against physicalism - not agreeing with either, but mostly agreeing with physicalism, just that there's something missing there. but idealism, all wrong. but for the first time, on reading the Koch book, I got a reasonable picture of idealism in my head, and he wasn't even talking about it. the picture is this: say that all reality is subjective, and there is no objective reality at all. this is ultimate panpsychism, that everything is psyche of some level. but what makes a mind special? what is consciousness? why does there seem to be such a divide between the inner substance of our minds and the 'physical' character of the biological brain? i figured it out (in this system): physical qualities are just mental representations. the representations cannot be identical with the things they represent, of course. when i see a dog, the 'seeing' is a set of representations of various aspects of dogness. this seems fine, because i have no reason to be confused about the mismatch between my perception of a dog and the dog itself, because i am not a dog. same goes for rocks, clouds, houses, etc. but when i see a brain, or study a neural system or a neuron, knowing that i am a brain, the mismatch is so pronounced that i can't miss it. i am - my consciousness is - a brain, and yet, this representation of a brain is so fundamentally unlike my consciousness. where, in there, in that thing, does the consciousness emerge? what explains it?

in the idealist view, the explanation is purely psychological, cognitive. there is no actual distinction between mind and brain. the brain i am studying is just as much of a subjective entity as i am, but my representation is vastly inadequate. i can only 'understand' a bit of it at a time, and only in abstractions or formulations or approximations, no matter how clever i am. brains and neurons and other objective, physical, phenomena are only the limited psychological efforts of human consciousness to represent essentially unrepresentable other consciousnesses. the limitation might just be of design - the brain isn't evolved for the purpose of representing or emulating other brains. if it were, if it had equipment making such emulation possible, then observing other brains would be equivalent to observing their consciousnesses. but there may be computational limits - there must be - that make this impossible or very unfeasible. so, if a dualist theory of consciousness like Tononi's is perfected, it may be translatable into idealist terms as explaining the difficulty or intractability of emulating one idealist system on another.

a lot of this sounds very familiar - the idea that we are confused, naturally, into thinking that our percepts or our concepts, which are neural descriptions of the real world, are the real world in a direct sense. there are few direct realists out there, who believe that the dog is the dog - most of us realize that the dog is a mental representation of the dog out there. but we then invoke physicalism in noting that the dog out there is objective and 'real' and physical in a sense that is somehow different from the subjective world of qualia that interfaces between our minds and the world. this is different from what i'm getting at here - in the idealist view, out there and in here are qualitatively the same. qualia everywhere, within and without. the only difference is that our qualia are representational, while non-brain qualia aren't (usually).

this all sounds reminiscent of certain religious ideas, very buddhist or maybe hindu, the idea that everything is ultimately consciousness, and that the 'real' world is an illusion or a cognitive mistake. not saying i believe it, just that it was a sort of realization of a possibility that i had while reading an interesting book this weekend.

Thursday, April 11, 2013

physics and psychophysics

reading papers on "information integration theory" lately. up to this most recent one - barrett and seth, PLoS-CB 2011 - i had an okay grasp on the math, but now i'm considering skimming. the first author of this paper is a theoretical physicist by training, so i don't feel too bad that i can't quite take it. feeling a little bad led me to this train of thought:

in my field of psychophysics we use mathematics to describe human behaviors, with those behaviors driven by simple physical stimuli. some psychophysical models can be rather complicated, but the more complicated they get, the less realistic they get, because, so to speak, they inevitably start biting off more than they can chew. for example, channel theory is a bunch of mathematical objects, but they have to be fit to particular contexts. even a simple psychophysical rule or law involves constants that vary from person to person, from apparatus to apparatus.

no one in psychophysics should fool themselves into thinking that they can someday come down to a simultaneously correct and meaningful mathematical theory of whatever phenomenon they are studying, because every phenomenon is an artificially isolated part of a much more complicated whole, and the ways that the circumstances of the phenomenon can be varied are nearly infinite. but thankfully, no one in psychophysics does, i think, fool themselves this far; we recognize that mathematics is a good tool for getting a handle on what we are studying, at the same time that what we are studying is clearly variable in ways that we can only hope not to approximate too poorly. for us, mathematics is an operational description of what we're studying.
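to make 'mathematics as an operational description' concrete, here's a minimal sketch, in Python, of fitting a Weibull psychometric function to simulated data by maximum likelihood. everything in it - the parameter values, the grid search, the simulated observer - is invented for illustration, not any particular study's method:

```python
import math
import random

random.seed(1)

# a standard Weibull psychometric function (proportion correct at
# stimulus intensity x); gamma is the guess rate, lam the lapse rate -
# the observer-specific constants the text is talking about
def weibull(x, alpha, beta, gamma=0.5, lam=0.02):
    return gamma + (1 - gamma - lam) * (1 - math.exp(-(x / alpha) ** beta))

# simulate one observer with "true" threshold 1.0 and slope 3.0
intensities = [0.25, 0.5, 1.0, 2.0, 4.0]
trials_per_level = 100
data = [(x, sum(random.random() < weibull(x, 1.0, 3.0)
                for _ in range(trials_per_level)))
        for x in intensities]

# maximum-likelihood fit by brute-force grid search (no libraries)
def neg_log_lik(alpha, beta):
    nll = 0.0
    for x, n_correct in data:
        p = min(max(weibull(x, alpha, beta), 1e-9), 1 - 1e-9)
        nll -= (n_correct * math.log(p)
                + (trials_per_level - n_correct) * math.log(1 - p))
    return nll

grid = [(a / 20, b / 4) for a in range(4, 60) for b in range(4, 24)]
alpha_hat, beta_hat = min(grid, key=lambda ab: neg_log_lik(*ab))
print(f"fitted threshold ~ {alpha_hat:.2f}, slope ~ {beta_hat:.2f}")
```

the point of the sketch is the fitted constants: alpha and beta come out different for every observer and every apparatus, which is exactly the sense in which the mathematics here is a tool fit to circumstances, not the thing itself.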

in physics, on the other hand, they have things down to the level where you will hear physicists talk about mathematical objects and physical phenomena as more-or-less the same thing. quarks and bosons, gravity and magnetic fields, are things that are only really understood through mathematics. my knowledge of physics mostly comes from reading feynman and hawking, and watching random lectures (and once having been a physics undergrad, briefly), so it's not like i have anything like an up-close viewpoint of the physicist's perspective, but i think this viewpoint is plainly very popular. john wheeler talked about the root of all reality being information, which is only a mathematical construction as far as the human mind is concerned - obviously he felt the isomorphism was close enough to make this sort of claim.

apparently, for physicists to achieve this nearly perfect mathematical description of physical reality, the mathematics had to get pretty complicated. so, when you read a paper by a physicist on a topic that you feel like you should have a good handle on - you're a psychologist, the topic is consciousness - you have quite a bit of difficulty in parsing his descriptions, even though you realize that he's not talking about anything approaching the sophistication of string theory or QED.

so.. i'll go back and give it another 20 minutes, maybe make it another half-page. also, tomorrow night i'm having an MRI of my neck, to check for dissections in my carotid arteries. fun fun fun.

(only now did i realize that this topic would have been perfect for a new dialogue.. maybe i will recast it?)

Friday, February 15, 2013

how to build a psychophysics experiment

You wouldn't know it from my CV (unless you look at the conference presentations), but I've built dozens of psychophysics experiments in my nearly 10 years in the field. I've developed a routine:

1. First, design the planned trial algorithm; how will the stimulus vary from trial to trial? What kind of responses will be collected, and how will they drive the stimulus variation? Staircases and step sizes, interleaved and latticed. In my mind, I always imagine the algorithm as a gear train driving the stimulus presentation, like the mechanism behind the turning hands on a clock. Here, a model observer is usually set up, if I can figure out what one should look like, to help test the algorithm.

2. With the first step still in progress, set up the actual trial loop and use it to test the basics of the stimulus presentation and internal trial structure, with a dummy response. Usually this part is already there, in a previous experiment, so this step usually consists of taking an old experiment and stripping most of the parts out. The trial loop and its internal structure constitute another gear train, really the interface of the experiment: the stimulus and other intervals and features (sounds, fixation points, ISIs), response intervals and valid buttons, and proper timing etc.

3. The trial algorithm should be settling by now. I start to plug it into the trial loop like this: port over the algorithm, but don't connect it to the stimulus train just yet. Instead, start to work out the finer details of the stimulus presentation with the algorithm in mind. That is, if the algorithm is like a set of gears to transmit variation through the stimulus, we have to make sure that the teeth on the stimulus gear mesh with the teeth on the output gear of the algorithm. And, the input gear to the algorithm has to mesh with the response gear. It's easiest to do this, for me, if I first design the algorithm and set it in place so that I can look at it while I finish the design of the interface.

As the algorithm is being set in place, I'll usually simultaneously start setting up the method for writing the data down. This really constitutes a third little mechanism, an output mechanism outside the big algorithm-trial loop, which records everything about the completed trial up to the response, but before the response moves the algorithm train.

4. Finally, the algorithm and the trial structure get linked together, not without a few mistakes, and the whole machine can be tested. Usually it takes a few debugging runs to find all the gears that have been put on backwards, or connected in the wrong place, or left out completely.
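The four stages above can be sketched in miniature. This is a hypothetical toy in Python, with a 2-down/1-up staircase standing in for the trial algorithm, a logistic model observer for testing it, and the data record kept outside the loop; all names and numbers are invented for illustration, not taken from any of my actual experiments:

```python
import math
import random

random.seed(0)

# step 1: the trial algorithm - a 2-down/1-up staircase (one "gear
# train" in the clockwork metaphor)
class Staircase:
    def __init__(self, level, step):
        self.level = level
        self.step = step
        self.correct_run = 0

    def next_level(self):
        return self.level

    def update(self, correct):
        if correct:
            self.correct_run += 1
            if self.correct_run == 2:   # two correct in a row -> harder
                self.level -= self.step
                self.correct_run = 0
        else:                           # one error -> easier
            self.level += self.step
            self.correct_run = 0

# a model observer for testing the algorithm before a human sits down
def model_observer(level, threshold=5.0):
    p_correct = 1.0 / (1.0 + math.exp(-(level - threshold)))
    return random.random() < p_correct

# steps 2-4: the trial loop, with the data record kept as a separate
# output mechanism outside the algorithm-loop linkage
n_trials = 200
stair = Staircase(level=10.0, step=0.5)
data = []
for trial in range(n_trials):
    level = stair.next_level()
    correct = model_observer(level)        # dummy/model response stage
    data.append((trial, level, correct))   # record before the gears turn
    stair.update(correct)                  # the response drives the algorithm

# a 2-down/1-up staircase should hover near the ~71%-correct level
tail = [lvl for _, lvl, _ in data[-50:]]
print(f"staircase settled near {sum(tail) / len(tail):.1f}")
```

Linking the staircase's update() to the response at the bottom of the loop is the gear-meshing of step 3: the response gear turns the algorithm, and the algorithm's output gear sets the next stimulus.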

I think that these stages, in this order, are what I have been following for at least five years now, and it seems to work pretty well. There are parts of the skeletons of most of my experiments over the past 3 years that are nearly identical; I think that the 'for V = 1:Trials' statement, and the closeout statements at the end of that loop, have survived through a dozen completely different experiments. The other 99% changes, though some parts are common over many different scripts.

Another thing that's constant is the way I feel when I build these things: I really feel like I'm building a machine. It's the same feeling as when I was a kid and I'd take apart a clock or a motor or a fishing reel and try to put it back together, usually failing because I didn't understand at the beginning, when I took it apart, how it actually worked (I became a scientist, not an engineer). But now, since I'm designing it, I can see where the big wheels contain the little wheels, where there are little clusters of gears connected to other clusters, transmitting motion through the whole thing. I can see how exactly the same thing, the same device underlying the experiment, could be built with gear trains and springs and chains and switches and punched tape (for the random trial order). I should make an illustration one of these days...

Anyways, that's how you do it!

Thursday, November 15, 2012

stack puzzle

Okay, I’ve been wondering for a while whether or not something is a valid question – a good question or a bad question. It is related to a few entries I’ve written here in the past year (esp. this and this), and to a paper that I’m about to get ready for submission.

The question: are the percepts contributed by different layers or modules of visual processing perceived as embedded within one another, or as layered in front of or behind one another?

Such percepts could include brightness, location and sharpness of an edge, its color, its boundary association; color and shape and texture of a face, its identity, its emotional valence, its association with concurrent speech sounds; scale of a texture, its orientation, its angle relative to the frontal plane, its stereoscopic properties.

All of these, and more, are separately computed properties of images as they are perceived, separate in that they are computed by different bits of neural machinery at different parts of the visual system hierarchy. Yet they are all seen together, simultaneously, and the presence of one implies the others. That is, to see an edge implies that it must have some contrast, some color, some orientation, some blur; but this implication is not trivial. A mechanism that senses an edge does not need to signal contrast or color or orientation or scale; the decoder could simply interpret the responses of the mechanism as saying ‘there is an edge here’. To decode the orientation of an edge requires that many such mechanisms exist, each preferring a different orientation, and that some subsequent mechanism exists which can discriminate the responses of one from another; i.e. the fact that the two properties are both discriminable (edge or no edge; orientation) means that there must be different mechanisms, arranged in some kind of hierarchy.
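To make that last point concrete, here is a toy labeled-line model (the tuning function, bandwidth, and bank of preferred orientations are all invented for illustration): any single mechanism's response only says ‘edge here’, and orientation becomes decodable only by comparing responses across the whole bank.

```python
import math

preferred = [0, 30, 60, 90, 120, 150]   # a bank of orientation-tuned mechanisms

def tuned_response(pref, orientation, bandwidth=30.0):
    """One mechanism's response: Gaussian tuning over orientation (degrees)."""
    d = (orientation - pref + 90) % 180 - 90   # circular orientation difference
    return math.exp(-0.5 * (d / bandwidth) ** 2)

def decode(orientation):
    # No single response carries orientation; the decoder compares
    # responses across the whole bank and takes the best-matching label.
    responses = [tuned_response(p, orientation) for p in preferred]
    return preferred[responses.index(max(responses))]

decode(62)  # → 60, the nearest preferred orientation in the bank
```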

So, whenever something is seen, the seeing of the thing is the encoding of the thing by many, many different mechanisms, each of which has a special place in the visual system, a devoted job – discriminate orientation, discriminate luminance gradients, discriminate direction of motion, or color, etc.

So, although we know empirically and logically that there must be different mechanisms encoding these different properties, there is no direct perceptual evidence for such differences: the experience is simultaneous and whole. In other words, the different properties are bound together; this is the famous binding problem, and it is the fundamental problem of the study of perception, and of all study of subjective psychology or conscious experience.

This brings us to the question, reworded: how is the simultaneity arranged? From here, it is necessary to adopt a frame of reference to continue discussion, so I will adopt a spatial frame of reference, which I am sure is a severe error, and which is at the root of my attempts so far to understand this problem; it will be necessary to rework what comes below from different points of view, using different framing metaphors.

Say that the arrangement of the simultaneous elements of visual experience is analogous to a spatial arrangement. This is natural if we think of the visual system as a branching series of layers. As far as subjective experience goes, are ‘higher’ layers in front of or behind the ‘lower’ layers? Are they above or below? Do they interlock like... it is hard to think of a metaphor here. When do layers, as such, interlock so that they form a single variegated layer? D* suggested color printing as something similar, though this doesn’t quite satisfy me. I imagine a jigsaw puzzle where the solution is a solid block, and where every layer has the same extent as the solution but is mostly empty space. D* also mentioned layers of transparencies where on each layer a portion of the final image – which perhaps occludes lower parts – is printed; like the pages in the encyclopedia entry on the human body, where the skin, muscles, organs, and bones were printed on separate sheets.

But after some thought, I don't think these can work. An image as a metaphor for the perceptual image? A useful metaphor would have some explanatory degrees of freedom: one set of things that can be understood in one way, used to understand something different in a similar way. How far do we get by trying to understand one type of image as another type of image? Not very far, I think. The visual field is a sort of tensor: at every point in the field, multiple things are true at the same time, they are combined according to deterministic rules, and a unitary percept results. Trying to understand this problem in terms of a simpler type of image seems doomed to fail.
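A crude way to see why a flat image runs out of degrees of freedom is to sketch the visual field as an array where every point carries a whole vector of bound properties (the channel names below are purely illustrative, not a model):

```python
# A toy 'tensor' visual field: every point simultaneously carries several
# channel values.
width, height = 4, 3
field = [[{'luminance': 0.5,
           'orientation': 45.0,
           'motion': (0.0, 0.0),
           'color': (0.5, 0.5, 0.5)}
          for _ in range(width)]
         for _ in range(height)]

# A flat image holds one value per point; this field holds a whole vector
# of bound properties per point, combined into a unitary percept.
point = field[1][2]
```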

So, whether or not there is a convenient metaphor, I think the idea of the question should be clear: how are the different components of the percept simultaneously present? A prominent part of psychophysics studies how different components interact (color and luminance contrast, or motion and orientation), but my understanding is that for the most part different components are independently encoded; i.e. nothing really affects the perceived orientation of an edge, except perhaps the orientations of other proximal (in space or time) edges.

Masking, i.e. making one thing harder to see by laying another thing in proximity to it, is also usually within-layer, i.e. motion-to-motion, or contrast-to-contrast. Here, I am revealing that my thinking is still stuck in the lowest levels: color, motion, contrast, orientation, are all encoded together, in overlapping ensembles. So, it may well be that a single mechanism can encode a feature with multiple perceptual elements.

Anyways, the reason why I wonder about these things is, lately, because of this study where I had subjects judge the contrast of photographic images and related these judgments to the contrasts of individual scales within the images. This is related to the bigger question because there is no obvious reason why the perceived contrast of a complex, broadband image should correspond to the perceived contrast of a simple spatial pattern like a narrowband wavelet of one type or another. This is where we converge with what I wrote a few months ago: the idea of doing psychophysics with simple stimuli is that a subject’s judgments can be correlated with the physical properties of the stimuli, which can be completely described because they are simple. When the stimuli are complex and natural, there is a hierarchy of physical properties which the visual system, with its own matching hierarchy, is specifically designed to analyze. Simple stimuli target components of this system; complex stimuli activate the entire thing.
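For concreteness, here is roughly the kind of measurement I mean by the contrasts of individual scales - a hedged sketch, not the actual analysis from the study, using a simple difference-of-blurs decomposition of a 1-D luminance profile:

```python
def box_blur(signal, radius):
    """Simple moving-average blur (edge-clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def scale_contrasts(luminance, radii=(1, 2, 4)):
    """RMS contrast of successive difference-of-blur bands, fine to coarse.

    A stand-in for the scale decomposition described in the post: each
    band is the part of the image living at one spatial scale.
    """
    mean = sum(luminance) / len(luminance)
    contrasts = {}
    prev = luminance
    for r in radii:
        blurred = box_blur(prev, r)
        band = [a - b for a, b in zip(prev, blurred)]
        rms = (sum(x * x for x in band) / len(band)) ** 0.5
        contrasts[r] = rms / mean          # Weber-style normalization
        prev = blurred
    return contrasts

profile = [100, 120, 80, 140, 60, 130, 90, 110]
contrasts = scale_contrasts(profile)
```

The question in the study is then whether judged contrast tracks some pooling of these per-scale numbers, and if so, which pooling.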

It is possible that when I ask you to identify the contrast – the luminance amplitude – of a Gabor patch, you are able to do so by looking, from your behavioral perch, at the response amplitude of a small number of neural mechanisms which are themselves stimulated directly by luminance gradients, which are exactly what I am controlling by controlling the contrast of the Gabor. It is not only possible, but it is the standard assumption in most contrast psychophysics (though I suspect that the Perceptual Template people have fuzzier ideas than this; I am not yet clear on their thinking – is the noisiness of a response also part of apparent magnitude?).

It is also possible that when I ask you to identify the contrast of a complex image, like a typical sort of image you look at every day (outside of spatial vision experiments), you are able to respond by doing the same thing: you pool together the responses of lots of neural mechanisms whose responses are determined by the amplitude of luminance gradients of matched shape. This is the assumption I set out to test in my experiment, that contrast is more or less the same, perceptually, whatever the stimulus is.

But this does not need to be so. This assumption means that in judging the contrast of the complex image, you are able to ignore the responses of all the other mechanisms that are being stimulated by the image: mechanisms that respond to edges, texture gradients, trees, buildings, depth, occlusions, etc. Why should you be able to do this? Do these other responses not get in the way of ‘seeing’ those more basic responses? We know that responses later in the visual hierarchy are not so sensitive to the strength of a stimulus; rather, they are sensitive to its spatial configuration. If you vary how well the configuration fits, you will vary the response of the neuron, but if you vary its contrast you will, across some threshold, simply turn the neuron on and off.
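That contrast-threshold point can be cartooned like this (a toy sketch; the response functions and threshold are invented for illustration):

```python
def early_response(contrast):
    """Early mechanism: response grows with stimulus strength."""
    return contrast                    # linear, purely for illustration

def late_response(contrast, configuration_match, threshold=0.2):
    """Later mechanism: graded in configuration, all-or-none in contrast."""
    if contrast < threshold:
        return 0.0
    return configuration_match         # tracks fit, not strength

# Above threshold, contrast changes the early response but not the late one.
early = [early_response(c) for c in (0.3, 0.6, 0.9)]
late = [late_response(c, 0.8) for c in (0.3, 0.6, 0.9)]
```

If your contrast judgment had to pass through the late mechanism, most of the contrast information would already be gone.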

I don’t have a solution; the question is not answered by my experiment. I don’t doubt that you can see the luminance contrast of the elements in a complex scene, but I am not convinced that what you think is the contrast is entirely the contrast. In fact, we know for certain that it is not, because we have a plethora of lightness/brightness illusions.

No progress here, and I'm still not sure of the quality of the question. But, maybe this way of thinking can make for an interesting pitch at the outset of the introduction of the paper.

Thursday, April 19, 2012

Memes and Pharmacies

That rivalry proposal went in last Monday with no problems, along with a long-festering manuscript, so it seems I took a week off from writing. Now I have progress reports to do, presentations to prepare, other manuscripts to complete, on and on. I need to make sure I keep up with this journal, which seems to be helping in keeping my writing pace up.

Vacation is over!

About 12 years ago, I read The Meme Machine by Susan Blackmore. It was around the time that I decided to major in psychology, and I was reading all this Dan Dennett and Douglas Hofstadter stuff, but it was her popular science book that really had a big effect on me - I would say that it changed my worldview completely, to the extent that I would identify myself as a memeticist when discussions of religion or that sort of thing came up (it was still college, see) - I really felt like it was a great idea, that human culture and human psychology could be explained as essentially a type of evolutionary biology. I still believe it, and so I suppose this book still sits near the base of my philosophical side, even though I don't think about these things so much anymore.

I bring this up because the other night Jingping and I were talking about being tricked, since this was the topic of a Chinese textbook lesson I had just read, and I recounted the story of being completely conned by a thief once when I was a clerk working at CVS. I wound up going on more generally about working there, and I remembered that I had worked out a memetics-inspired 'model' of that store, which I hadn't thought about in a long time. One of the things about the memetics idea that had really gotten to me was that you could see social organizations as living creatures with their own biological processes - not that this is an idea original to Blackmore, and I'm sure that's not the first place I had heard it (like I said, I was also reading Dennett and Hofstadter at the time), but she did work it into a larger sort of scientistic system which seemed to simplify and unify a lot of questions.

Now I realize that the criticisms of memetics that I heard from professors in college (when I questioned them about it) were mostly right, in that memetics mostly consists of making analogies between systems; only in the last few years, with the advent of online social networking, has a real science of something like memetics actually gotten off the ground (this is a neat example from a few weeks ago), and it's very different from what had been imagined when the idea was first getting around.

Anyways, I thought I'd detail here my biology-inspired model of a CVS store, ca. 2001. I have a notebook somewhere where I had detailed a whole system, with functional syntax and everything, for describing social organizations in terms of cellular, metabolic systems. This might have been the first time that I had tried to put together a comprehensive model of a system, now that I think of it. The store, I thought, was itself a cell in a larger CVS system dispersed across the city, which itself was part of a system dispersed across the country. I was mainly interested in the store level, where you could see different components acting as reagents. It was a strongly, though not completely, analogical system, modeled on plant metabolism: a plant needs carbon, so it uses sunlight to break down carbon dioxide, releasing the unneeded oxygen back into the world. A store needs money, so it uses salable products to break down a money-human bond, releasing the human back into the world.

Of course, with plants the sunlight comes free from heaven - all the plant can do is spread out and try to catch it - while salable products must be delivered from another part of the CVS system: the distribution center. The distribution center emits reagents to the stores, where the money-human bond is broken down. The CVS system also emits catalytic agents into the world - advertisements - to facilitate the crucial reaction. The money absorbed by the system is energy which is used to drive the system through a reaction not unlike oxidation - the money-human bond is reformed systematically, with employees as the human component. This reformation is what really drives the system. Those new human-money bonds then go out into the world and fulfill the same function, breaking apart, as they interact with other businesses.

Looking at a business in this way totally changed the way I understood the world. Businesses, churches, governments, political parties, armies - all of them can be thought of as living creatures, or as organs of larger creatures, rather than as some sort of human means-to-an-end. By changing perspective between levels, we can see ourselves as means to the ends of these larger systems, just as our cells and organs are means to ours. Now I'm finally getting around to reading straight through Hofstadter's GEB, and so I can see that this general idea of shifting perspective across levels is an old one that has been astonishing people for a long time. But for me, coming to see human culture as being alive was a fundamental shift in my intellectual development, one that hasn't really been superseded since. I haven't become a real memeticist yet, but it's all still there, underneath... these tiny tendrils of memetics live yet...

Thursday, May 06, 2010

internet metaphors

okay, this is kind of dumb, but since i haven't learned anything new lately, it's all i've got.

actually, i thought of this a few days ago. i was at the taekwondojang, thinking about how the classes work. (almost) every class starts the same way, regardless of who the teacher is, with a set of warmup exercises. different teachers will count a little differently, faster or slower maybe, but everyone does the same exercises. next, we start going through techniques in the lineup, and the first is always "riding stance to the left, left-hand punch". next technique will usually be "step forward front stance, low-section guarding block".

so, up to this point things are the same no matter who the instructor is, no matter what the rest of the class is going to be about. from here, things are still predictable to a point - after the low-section blocks, we'll probably do mid and high section blocks, maybe with a punch after the mid-section blocks. next, we go to fighting stance and start doing kicks, with front kick and punch first, round kick and roundhouse punch next, then sidekick with knifehand strike. it gets less and less predictable after this.

by the end of the lineup, we've probably done a couple of techniques that haven't come up in at least a week or so in other lineups. then, the rest of class will focus on a few specific techniques in some permutation of the "find a partner" game.

what this has to do with the internet is that i realized that the course of a given class could be analogized directly to a traceroute, assuming a single start location. the first few steps away from the local host are the same every time, but depending on the destination eventually the paths will diverge. the warmup and starting techniques are like the local network path out, the later techniques are like the area network, where there are a few possible large routers to choose from, and the remainder of class is like the ultimate path and network destination. kind of.
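the analogy is easy to sketch in code - here's a toy python version (the class names and hop names are made up, obviously):

```python
# Toy 'traceroute' analogy: every class (route) leaves through the same
# first hops, then diverges toward its destination.
routes = {
    'sparring_class': ['warmup', 'riding_stance_punch', 'low_block',
                       'mid_block', 'kicks', 'partner_sparring'],
    'forms_class':    ['warmup', 'riding_stance_punch', 'low_block',
                       'mid_block', 'kicks', 'forms_practice'],
    'basics_class':   ['warmup', 'riding_stance_punch', 'low_block',
                       'line_drills'],
}

def shared_prefix(paths):
    """Number of leading hops common to all routes (the 'local network')."""
    n = 0
    for hops in zip(*paths):
        if len(set(hops)) != 1:
            break
        n += 1
    return n

prefix_len = shared_prefix(list(routes.values()))
```

the shared prefix is the local-network part - warmup and the first techniques - and everything after it is the diverging path out to the destination.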

really, you could apply this structure to all sorts of things, where the first few steps are the same, but eventually there's a divergence and then different paths to disparate destinations. in a lot of ways that's how the brain works, how distribution networks of all kinds operate, etc.

like i said, not too interesting, but it's all i've got for now.

Wednesday, May 23, 2007

I'm sure this is the best thing I could have done with the past hour.

Seeing as how there's a new Transformers movie coming in a few months, I felt it was important to have a discussion of the sociopolitical themes underlying the Transformers backstory. We are probably all aware that the story has seen many revisions, through many toy lines and cartoon series, several comic books, and a couple of movies. Some of the versions of the backstory have been stupid. I have to say, however, that the original, which I think went along with the first cartoon series, was the best. I'm not actually sure, though - I may have made up most of this.

The Transformers of today were originally, essentially, two product lines, produced on a factory world called Cybertron. They were designed and distributed by an alien race called the Quintessons, who were featured in the first Transformers movie, though none of this story was made apparent there. The two major products consisted of a line of military hardware and a line of industrial hardware. Apparently the Quintessons dealt mechanized arms and infrastructure all over the galaxy. Over time, their products improved in sophistication to the point where, in sci-fi language, we might say the robots became 'sentient'. This probably happened gradually, as new models and technologies were introduced. At any rate, the products of Cybertron began to acquire an awareness of the complexities of their existence, and they began to see themselves as slaves.

What happened next was likely a series of 'slave revolts', culminating in a Cybertronian Revolutionary War against the Quintessons. The Quintessons tried to pit the robots of Cybertron against each other, using their weaponized creations in an attempt to suppress the Revolution. They weren't successful in this, as even the military robots wanted their freedom. In the end, the Quintessons lost everything, having placed the whole of their civilization on the back of the Cybertron factory. We see them in the Transformers movie as a race of insane monsters, executing one another for nonsensical crimes, apparently forgotten by the Transformers themselves.


What followed was the Cybertronian Golden Age, where the robots of Cybertron worked to create a new, independent civilization. We don't know exactly how long this lasted, but it was thousands of years before the rift between the military and the workers opened up to armed conflict. Undoubtedly, the style of governance of the military robots and the industrial robots was different. Power sharing and compromise was long the rule, but eventually the leaders of an extremist faction of the military decided it was time to take power for themselves, and to redirect the resources of Cybertron into galactic expansion. We know this faction as the Decepticons, and they have been led from the beginning by a military robot named Megatron.

Megatron's coup destroyed the Cybertronian government, and he quickly instituted martial law. The split between the military and the industrials was not absolute, but was nearly so; most of the military robots accepted Megatron's rule as a positive evolution of Cybertronian society, while most of the 'civilian' robots now considered their way of life under siege. As a result, there was soon an industrial resistance movement, led by a sturdy pro-worker faction which we today know as the Autobots, and so began the Cybertronian Civil Wars.

The Autobots were by their nature unprepared for violent conflict, and at first there were disastrous setbacks. Over time, however, the Autobots were able to exploit their mastery of Cybertronian infrastructure to deprive the Decepticons of vital resources. Finally, an Autobot given the name Optimus Prime ("Best and First") emerged, and under his leadership the Decepticons were forced to retreat to the outlying Cybertronian satellite worlds.


These are interesting, and particularly modern, political themes. We have capitalists (the Quintessons) facing a slave revolt. This is a familiar theme, but the twist here is that these slaves were actually created by their masters. This must be an industrialist's worst nightmare: that not only will his workers revolt, but that his products and property will turn against him.

Next we have a revolution, where an alliance of the military (we can probably best think of these as the 'soldiers' rather than the 'establishment') and the workers overthrows the master class. This is an idealized version of a communist revolution, where the workers are aided by the army to overthrow the capitalists. In the communist revolutions of the 20th century, the military begins the war in the service of the ruling class, but over time it is gradually absorbed by the revolutionaries (see China, Russia, Cuba, etc.).

Finally, military coups often follow social revolutions when the army perceives that the government has become compromised in one way or another (e.g. China after 1911). This is then followed by asymmetric civil war, where a non-military socialist movement attempts to wear down a military dictatorship by using a sympathetic populace to its advantage (see China in the 40's, the Viet Cong in the 60's, etc.). Usually, however, this is not successful, and what actually happens is that after many years the military government sees its work as done, and allows a transition to a softer and more democratic system (see Spain, Chile, Taiwan, etc.).

Let the discussion of the sociopolitical themes underlying the Transformers backstory begin. Go!!!