
Tuesday, February 06, 2018

IIT and Star Trek

[I originally posted this to a Star Trek forum because I am a Star Trek nerd but here it is for better posterity:]

"Complex systems can sometimes behave in ways that are entirely unpredictable. The Human brain for example, might be described in terms of cellular functions and neurochemical interactions, but that description does not explain human consciousness, a capacity that far exceeds simple neural functions. Consciousness is an emergent property.” - Lt.Cmdr. Data

A post the other day about strong AI in ST provoked me to think about one of my pet theories, there in the title: Data is conscious, the Doctor is not, and other cases can be inferred from there. Sorry that this is super long, but if you guys don't read it I don't know who will, and my procrastination needs an outlet.

First, some definitions. Consciousness is a famously misunderstood term, defined differently from many perspectives; my perspective is that of a psychologist/neuroscientist (because that is what I am), and I would define consciousness to mean “subjective phenomenal experience”. That is, if X is conscious, then there is “something it is like to be X”.

There are several other properties that often get mixed up with consciousness as I have defined it. Three, in particular, are important for the current topic: cognition, intelligence, and autonomy. This is a bit involved, but it’s necessary to set the scene (just wait, we’ll get to Data and the Doctor eventually):

Cognition is a functional concept, i.e. it is a particular suite of things that an information processing system does; specifically, it is the type of generalized information processing that an intelligent autonomous organism does. Thinking, perceiving, planning, etc, all fall under the broad rubric of “cognition”. Humans are considered to have complex cognition, and they are conscious, and those two things tend to be strongly associated (your cognition ‘goes away’ when you lose consciousness, and so long as you are conscious, you seem to usually be ‘cognizing’ about things). But it is well known that there is unconscious cognition (for example, you are completely unaware of how you plan your movements through a room, how your visual system binds an object and presents it against a background, how you understand language, or how you retrieve memories, etc) - and some theorists even argue that cognition is entirely unconscious, and that we experience only the superficial perceptual qualities that are evoked by cognitive mechanisms (I am not sure about that). We might just summarize cognition as “animal-style information processing”, which is categorically different from “what it’s like to be an animal”.

Intelligence is another property that might get mixed up with consciousness; it is generally considered, rather crudely, as “how well” some information processing system handles a natural task. While cognition is a qualitative property, intelligence is more quantitative. If a system handles a ‘cognitive’ task better, it is more intelligent, regardless of how it achieved the result. Conceiving of intelligence in this way, we understand why intelligence tests usually measure multiple factors: an agent might be intelligent (or unintelligent) in many different ways, depending on just what kinds of demands are being assessed. “Strong AI” is the term usually used to refer to a system that has a general kind of intelligence comparable in level to human intelligence - it can do what a human mind can do, about as well (or better). No such thing exists in our time, but there is little doubt that such systems will eventually be constructed. Just like with cognition, there is an obvious association between consciousness and intelligence - your intelligence ‘goes away’ when you lose consciousness, etc. But it seems problematic to suppose that someone who is more intelligent is more conscious (does their experience consist of “more qualia”? What exactly does it have more of, then?), and more likely that they are simply better able to do certain types of tasks. And it is clear, to me at least, that conscious experience is possible in the absence of intelligent behavior: I might just lie down and stare at the sky, meditating with a clear mind - I’m not “doing” anything at all, making my intelligence irrelevant, but I’m still conscious.

Autonomy is the third property that might get mixed up with consciousness. We see a creature moving around in the environment, navigating obstacles, making choices, and we are inclined to see it as having a sort of inner life - until we learn that, no, it was remote-controlled all along, and then that apparent inner life vanishes. If a system makes its own decisions, if it is autonomous, then it has, for human observers at least, an intrinsic animacy (this tendency is the ultimate basis of many human religious practices), and many would identify this with consciousness. But this is clearly just an observer bias: we humans are autonomous, and we assume that we are all conscious (I am; you are like me in basic ways, so I assume you are too), and so we conflate autonomy with consciousness. But, again, we can conceive of counter-examples - a patient with locked-in syndrome has no autonomy, but they retain their consciousness; and an airplane on autopilot has real (if limited) autonomy, but maybe it’s really just a complex Kalman filter in action, and why should a Kalman filter be conscious (i.e. “autonomy as consciousness” just results in an endless regress of burden-shifting - it doesn’t explain anything)?

To reiterate, consciousness is “something it’s like to be” something - there’s something-it’s-like-to-be me, for example, and likewise for you. We can turn this property around and query objects in nature, and then it gets hard, and we come to our current problem (i.e. Data and the Doctor). Is there something-it’s-like-to-be a rock? Certainly not. A cabbage? Probably not. Your digestive system? Maybe, but probably not. A cat? Almost certainly. Another human? Definitely. An autonomous, intelligent, android with human-style cognition? Hmmm… What if it’s a hologram? Hmmm….

That list I just gave (rock; cabbage; etc) was an intuition pump: most of us will agree that a rock, or a cabbage, has no such thing as phenomenal consciousness; most of us will agree that animals and other humanoids do have such a thing. What makes an animal different from a rock? The answer is obvious: animals have brains. Natural science makes clear that human consciousness (as well as intelligence, etc) relies on the brain. Does this mean that there’s something special about neurons, or synapses, or neurotransmitters? Probably not, or, at least there’s no reason to suppose that those are the magic factors (The 24th century would agree with this; see Data’s quote at the top of this essay). Instead, neuroscientists believe that consciousness is a consequence of “the way the brain is put together”, i.e. the way its components are interconnected. This interconnection allows for dynamically flexible information processing, which gives the overt properties we have listed, but it also somehow permits existence of a subjective point of view - the conscious experience. Rocks and cabbages have no such system of dynamical interconnections, so they’re clearly out. Brains seem to be special in this regard: they are big masses of complex dynamical interconnection, and so they are conscious.

What I’m describing here is, roughly, something called the “dynamic core hypothesis”, which leads into my favored theory of consciousness: “integrated information theory”. You can read about these here: http://www.scholarpedia.org/article/Models_of_consciousness. The upshot of these theories is that consciousness arises in a system that is densely interconnected with itself. It is important to note here that computer systems do not have this property - a computer ultimately is largely a feed-forward system, with its feedback channels limited to long courses through its architecture, so that any particular component is strictly feed-forward. A brain, by contrast, is “feedback everywhere” - if a neuron gets inputs from some other neurons, then it is almost certainly sending inputs back their way, and this recurrent architecture seems implemented at just about every scale. It’s not until you get to sensorimotor channels (like the optic nerves, or the spinal cord) that you find mostly-feed-forward structures in the brain, which explains why consciousness doesn’t depend on the peripheral nervous system (it’s ‘inputs and outputs’). Anyways, this kind of densely interconnected structure is hypothesized to be the basis of conscious experience; the fact that the structure also ‘processes information’ means that such systems will also be intelligent, etc, but these capacities are orthogonal to the actual structure of the system’s implementation.
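(A toy illustration of that contrast, since I realize it can sound abstract - this little sketch is mine, not anything from IIT proper, and the numbers are arbitrary. It just counts how often connections are reciprocated in a strictly layered, feed-forward wiring diagram versus a recurrent one where most projections get a return projection, which is the brain-like case.)

% Toy sketch (MATLAB): reciprocity in feed-forward vs recurrent wiring. Illustration only.
N = 90;                                    % total units
layer = ceil((1:N)/30);                    % three layers of 30 units each
A_ff = zeros(N);                           % feed-forward: connections only go to the next layer
for i = 1:N
    for j = 1:N
        if layer(j) == layer(i) + 1 && rand < 0.2
            A_ff(i,j) = 1;
        end
    end
end

A_rc = double(rand(N) < 0.05);                    % recurrent: sparse random connections...
A_rc = double(A_rc | (A_rc' & rand(N) < 0.8));    % ...where most projections get a return projection
A_rc(1:N+1:end) = 0;                              % no self-connections

recip = @(A) nnz(A & A') / max(nnz(A), 1);        % fraction of connections that are reciprocated
fprintf('feed-forward reciprocity: %.2f\n', recip(A_ff));   % exactly 0 by construction
fprintf('recurrent reciprocity:    %.2f\n', recip(A_rc));   % close to 1: feedback everywhere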

So, Data. Maybe Data isn’t conscious, but just gives a great impression of a conscious being: he’s autonomous, he’s intelligent, he has a sophisticated cognitive apparatus. Maybe there’s nothing “inside” - ultimately, he’s just a bunch of software running on a robotic computer platform. People treat him like he’s conscious (Maddox excepted) just because of his convincing appearance and behavior. But I don’t think it’s an illusion - I think Data is indeed conscious. 

Data’s “positronic brain” is, in a sense, a computer; it’s artificial and made from artificial materials, it’s rated in operations per second, it easily interfaces with other more familiar kinds of computers. But these are really superficial properties, and Data’s brain is different from a computer in the ways that really matter. It is specifically designed to mimic the structure of a human brain; there are numerous references throughout TNG suggesting that Data’s brain consists critically of a massive network of interconnected fibers or filaments, intentionally comparable to the interconnected neurons of a biological brain (Data often refers to these structures as his “neural nets”). This is in contrast to the ‘isolinear chip-bound’ architecture of the Enterprise computer. Chips are complicated internally - presumably each one acts as a special module that is expert in some type of information processing task - but they must have a narrow array of input and output contacts, severely limiting the extent to which a chip can function as a unit in a recurrently connected network (a neuron in a brain is the opposite: internally it is simple, taking on just a few states like “firing” or “not firing”, but it makes tens of thousands of connections on both input and output sides, with other neurons). The computer on 1701-D seems, for all intents and purposes, like a huge motherboard with a ton of stuff plugged into it (we can get to the Intrepid class and its ‘bio-neural chips’ in just a bit).

Data, then, is conscious in virtue of his densely recurrently interconnected brain, which was exactly the intention of Dr Soong in constructing him – Soong didn’t want to create a simulation, he wanted to create a new being. I contrast Data at first with the Enterprise computer, which is clearly highly intelligent and capable of some degree of autonomy (as much as the captain will give it, if you believe Picard in ‘Remember Me’). I won’t surmise anything about “ship cognition”, however. Now, if the ship’s computer walked around the ship in a humanoid body (a la EDI of the Mass Effect series), we might be more inclined to see a ghost in the machine, but because of the ship’s relatively compartmentalized ‘chip-focused’ structure and its lack of a friendly face, I think it’s very easy to suppose that the computer is not conscious. But holographic programs running on that very same computer start to pull at our heartstrings - Moriarty, Minuet, but especially… the Doctor.

The Doctor is my favorite Voyager character (and Data is my favorite of TNG), because his nature is just so curious. Obviously the hologram “itself” is not conscious - it’s just a pattern of projected photons. The Doctor’s mind, such as it is, is in the ship’s medbay computer (or at times, we must assume, his mobile emitter) - he’s something of an instantiation of the famous ‘brain in a vat’ thought experiment, body in one place, mind in another. The Doctor himself admits that he is designed to simulate human behavior. The Voyager crew at first treats him impersonally, like a piece of technology - as though they do not believe he is “really there”, i.e. not conscious - but over time they warm to his character and he becomes something of an equal. I think, however, that the crew was ultimately mistaken as to the Doctor’s nature - he was autonomous, intelligent, and a fine simulation of human cognition and personality, but he was most likely not a conscious being (though he may have claimed that he was).

Over and over, we hear the Doctor refer to himself as a program, and he references movement of his program from one place to another; his program is labile and easily changed. This suggests that his mind, at any given moment, is not physically instantiated in a substrate. What I mean by this is that while a human mind (or Soong-type android mind) is immediately instantiated in a pattern of activity across trillions of synapses between physically-realized interconnected elements, the Doctor’s mind is not. His mind is a program stored in an array of memory buffers, cycling through a system of central processors – at any given moment, the Doctor’s mind is just those few bits that are flowing through a data bus between processor and memory (or input/output channel). The “rest of him”, so to speak, is inert, sitting in memory, waiting to flow through the processor. In other words, he is a simulation. Now, to be sure, in a lot of science fiction brains are treated as computers, as though they are programmable, downloadable or uploadable, but in general this is a very flawed perspective - brains and computers actually have very little in common. The Star Trek universe seems to recognize this, as I can’t think of any instances of outright abuse of this trope in a ST show. One important exception stands out: Ira Graves.

Ira Graves is a great cyberneticist, so let’s assume he knows his stuff (let’s forget about Maddox, who was a theoretically impoverished engineer). He believes that he can save his consciousness by moving it into Data’s brain. But Data’s brain is not a computer in any ordinary sense, as we detailed above: it’s a complex of interconnected elements made to emulate the physical structure of a human brain. (This is why his brain is such an incredible achievement: Data’s brain isn’t a miniaturized computer, it’s something unique and extraordinarily complex. This is why Lal couldn’t just be saved onto a disc for another attempt later on - Data impressed her memories onto himself, but her consciousness died with her brain.) Anyways, Ira Graves somehow impresses his own brain structure into Data’s positronic brain - apparently killing himself in the process - and seems happy with the result (though he could be deluded - having lost his consciousness, but failing to recognize it). In the end, he relinquishes Data’s brain back to Data’s own mind (apparently suppressed but not sufficiently to obliterate it), and downloads his knowledge into the Enterprise computer. Data believes, however, that Graves’ consciousness must have been lost in this maneuver, which is further support for the notion that a conscious mind cannot “run on a computer”: a human consciousness can exist in Data’s brain, but not on a network of isolinear chips.

The Doctor, in the end, is in the same situation. As a simulation of a human being, he has no inner life – although he is programmed at his core to behave as though he does. He will claim to be conscious because this makes his humanity, and thus his bedside manner, more effective and convincing. And he may autonomously believe that he is conscious – but, not being conscious, he could never know the difference, and so he cannot know if he’s making an error or not in this belief.

I think that here we can quickly bring up the bio-neural gel packs on Voyager. Aren’t they ‘brainlike’ in their constitution? If the Doctor’s program runs on this substrate, doesn’t that make him conscious? The answer is no – first, recall what Data had to say about neural function and biochemistry. Those aren’t the important factors – it’s the dense interconnectedness that instantiates an immediate conscious experience, and we have no reason to believe that the interconnection patterns of an array of bio-neural gel packs are fundamentally different from those of a network of isolinear chips. Bio-neural thingies are just supposed to be faster somehow, and implement ‘fuzzy logic’, but no one suggests they can serve as a substrate for conscious programs. And furthermore, the Doctor seems happy to move onto his mobile emitter, whose technology is mysterious, but certainly different from the gel packs. It seems that he is just a piece of software, and that he never really has any physical instantiation anywhere. In defense of his “sentience” (Voyager episode ‘Author, Author’), the Doctor’s crewmates only describe his behavioral capacities: he’s kind, he’s autonomous, he’s creative. No one offers any evidence that he actually possesses anything like phenomenal consciousness. (In the analogous scene in ‘Measure of a Man’, Picard at least waves his hand at the notion that, well, you can’t prove Data isn’t conscious, which I thought was pretty weak, but I guess it worked. I don’t know why they didn’t at least have a cyberneuroscientist or something testify.)

So that is my case: Data is conscious, and the Doctor is not. It’s a bit tragic, I think, to see the Doctor in this way – he’s an empty vessel, reacting to his situation and engendering real empathy in those he interacts with, but he has no pathos of his own. He becomes an ironically pathetic character – we feel for him, but he has no feelings. Data, meanwhile, in his misguided quest to become more human and gain access to emotional states (side note: emotion chip BLECH) is far more human, more real, than the most convincing holographic simulation can ever be.

Friday, September 16, 2016

IIT & Pacific Rim

I'm going to start posting short observations of how IIT would explain or be problematic for certain ideas in sci-fi movies or books.

To start: The film "Pacific Rim", a sci-fi action movie where the main characters are pilots controlling gigantic robots. The pilots control the robot through a direct brain-machine interface, but the job is apparently too much for one pilot so there are always at least two pilots. The two pilots have their minds joined by a "neural bridge" - basically an artificial corpus callosum. While joined, the pilots seem to have direct access to one another's experiences in a merged state called "the Drift" - it seems that their two consciousnesses become one.

This scenario is the predicted consequence, according to IIT, of sufficient causal linkage between two brains - at some point, the connection is sufficiently complex that the local maximum of integrated information is no longer within each pilot's brain, but now extends over both brains. What would be necessary to achieve this? The movie doesn't attempt to explain how the brain-machine interface works, but it must involve a very high-resolution, high-speed parallel system for both recording from and stimulating neurons in each pilot's brain.

One way of doing this would be cortical implants, where high-resolution electrode arrays are installed on the surface of each pilot's brain; this is at least plausible (if not possible) given existing technology. However, none of the pilots show signs of a brain implant, and the main character Mako Mori seems to become a pilot on pretty short notice, although she has apparently been training for a long time - maybe all trainees are implanted? A big commitment.

A more hand-wavy, Star Trek kind of technology would involve some kind of transcranial magnetic field system that is powerful, precise, and fast enough to both stimulate individual neurons (current TMS systems certainly cannot do this) and measure their activity on a millisecond timescale (current fMRI systems absolutely cannot do this). However, the pilots simply wear helmets while piloting the robots (although Dr Newton, who almost certainly does not have any brain implants since he is not a trained pilot, does use some kind of transcranial setup to drift with a piece of monster brain), which I think makes a transcranial system very unlikely.

If I had to guess, wireless cortical implants are the only plausible means of establishing the Pacific Rim neural bridge, but some sort of transcranial system hidden in the pilots' helmets and based on some unimaginable technology is not excluded.

Verdict: Pacific Rim's "drift" is IIT Compatible

Friday, February 15, 2013

how to build a psychophysics experiment

You wouldn't know it from my CV (unless you look at the conference presentations), but I've built dozens of psychophysics experiments in my nearly 10 years in the field. I've developed a routine:

1. First, design the planned trial algorithm: how will the stimulus vary from trial to trial? What kind of responses will be collected, and how will they drive the stimulus variation? Staircases and step sizes, interleaved and latticed. In my mind, I always imagine the algorithm as a gear train driving the stimulus presentation, like the mechanism behind the turning hands on a clock. Here, a model observer is usually set up, if I can figure out what one should look like, to help test the algorithm (there's a bare-bones sketch of this step just after the list below).

2. With the first step still in progress, set up the actual trial loop and use it to test the basics of the stimulus presentation and internal trial structure, with a dummy response. Usually this part is already there, in a previous experiment, so this step mostly consists of taking an old experiment and stripping most of the parts out. The trial loop and its internal structure constitute another gear train, really the interface of the experiment: the stimulus and other intervals and features (sounds, fixation points, ISIs), response intervals and valid buttons, proper timing, etc.

3. The trial algorithm should be settling by now. I start to plug it into the trial loop like this: port over the algorithm, but don't connect it to the stimulus train just yet. Instead, start to work out the finer details of the stimulus presentation with the algorithm in mind. That is, if the algorithm is like a set of gears to transmit variation through the stimulus, we have to make sure that the teeth on the stimulus gear mesh with the teeth on the output gear of the algorithm. And, the input gear to the algorithm has to mesh with the response gear. It's easiest to do this, for me, if I first design the algorithm and set it in place so that I can look at it while I finish the design of the interface.

As the algorithm is being set in place, I'll usually simultaneously start setting up the method for writing the data down. This really constitutes a third little mechanism, an output mechanism outside the big algorithm-trial loop, which records everything about the completed trial up to the response, but before the response moves the algorithm train.

4. Finally, the algorithm and the trial structure get linked together, not without a few mistakes, and the whole machine can be tested. Usually it takes a few debugging runs to find all the gears that have been put on backwards, or connected in the wrong place, or left out completely.
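To make step 1 less abstract, here's roughly the kind of thing I mean - a bare-bones sketch, with every number and name invented for illustration: a 2-down/1-up staircase driving a stimulus level, run against a fake model observer before any real display code exists.

% Sketch of step 1 (MATLAB): a 2-down/1-up staircase tested against a model observer.
% All values here are illustrative, not from any real experiment.
nTrials   = 200;
level     = 1.0;            % starting stimulus level (e.g. contrast)
stepSize  = 0.1;
nCorrect  = 0;              % counter for the 2-down rule
threshold = 0.4;            % the model observer's "true" threshold
levels    = zeros(1, nTrials);
resps     = zeros(1, nTrials);

for t = 1:nTrials
    levels(t) = level;

    % model observer: proportion correct rises with level (logistic, 50% guessing floor)
    pCorrect = 0.5 + 0.5 ./ (1 + exp(-(level - threshold) / 0.05));
    resps(t) = rand < pCorrect;

    % staircase rule: two correct in a row -> harder; one wrong -> easier
    if resps(t)
        nCorrect = nCorrect + 1;
        if nCorrect == 2
            level = max(level - stepSize, 0);
            nCorrect = 0;
        end
    else
        level = level + stepSize;
        nCorrect = 0;
    end
end

plot(levels); xlabel('trial'); ylabel('stimulus level');   % should settle near the ~71%-correct level

Running the model observer through the staircase like this is how I check that the gear train actually converges before it ever touches the stimulus code.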

I think that these stages, in this order, are what I have been following for at least five years now, and it seems to work pretty well. There are parts of the skeletons of most of my experiments over the past 3 years that are nearly identical; I think that the for V = 1:Trials statement, and the closeout statements at the end of that loop, have survived through a dozen completely different experiments. The other 99% changes, though some parts are common over many different scripts.
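For what it's worth, that surviving skeleton looks something like this - the names and placeholders below are made up, but the shape is the point: the algorithm hands a value to the trial, the trial writes itself down, and only then does the response turn the algorithm.

% Skeleton of the trial loop (MATLAB). Names are illustrative; only the shape survives between experiments.
Trials = 100;
data   = struct('level', cell(1, Trials), 'resp', [], 'rt', []);

for V = 1:Trials
    % 1. ask the algorithm gear train what to show on this trial
    level = 0.5;                        % placeholder: the staircase/algorithm call goes here

    % 2. present the trial: fixation, stimulus interval(s), ISIs, response cue
    %    (the actual drawing and timing code goes here)

    % 3. collect the response and reaction time
    resp = 1; rt = 0.5;                 % placeholder: the keyboard/button-box call goes here

    % 4. write the trial down before the response moves the algorithm train
    data(V).level = level;
    data(V).resp  = resp;
    data(V).rt    = rt;

    % 5. feed the response back into the algorithm for the next trial
end

% closeout: save the data, close the display, restore the keyboard
save('session_data.mat', 'data');       % filename is illustrative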

Another thing that's constant is the way I feel when I build these things: I really feel like I'm building a machine. It's the same feeling as when I was a kid and I'd take apart a clock or a motor or a fishing reel and try to put it back together, usually failing because I didn't understand at the beginning, when I took it apart, how it actually worked (I became a scientist, not an engineer). But now, since I'm designing it, I can see where the big wheels contain the little wheels, where there are little clusters of gears connected to other clusters, transmitting motion through the whole thing. I can see how exactly the same thing, the same device underlying the experiment, could be built with gear trains and springs and chains and switches and punched tape (for the random trial order). I should make an illustration one of these days...

Anyways, that's how you do it!

Wednesday, June 13, 2012

monitor MTF? sure!

Okay, so I need to know the spatial frequency transfer function of the monitor I've used to do most of my experiments over the past couple of years. I've never done this before, so I go around asking if anyone else has done it. I expected that at least B** would have measured a monitor MTF before, but he hadn't. I was surprised.

Still, B**'s lab has lots of nice tools, so I go in there to look around, and lo, T** is working on something using exactly what I need, a high speed photometer with a slit aperture. So today I borrowed it and set to work doing something I had never done and didn't know how to do. It was great fun.

D** helped me get the photometer head fixed in position. We strapped it with rubber bands to an adjustable headrest. I've started by just measuring along the raster. The slit is (T** says) 1mm, which is about 2.67 pixels on my display. I drifted (slowly) squarewave gratings with different wavelengths past the aperture - this was more complicated than it sounds. The monitor is run at 100Hz, and CRTs flash frames very rapidly, just a millisecond, so getting the photometer settings just right (it runs at 18kHz) took a bit of adjustment, as did figuring out good settings for the gratings and a slow-enough speed to drift them at (I'm limited by the 10 second block limit imposed by the photometer).

Anyways, I got back temporal waveforms which I treat as identical to the spatial waveforms. As expected, the power of these waveforms drops off as the gratings get finer. But, I know that it drops off too fast, because of the aperture. If the aperture were exactly 1 pixel across, and if it were aligned precisely with the raster, and if a bunch of other things were true, then I could know that each epoch recorded by the photometer reflected the luminance of a pixel, and my measurements would reflect the monitor MTF. But, like I said, the aperture is 1mm, so each 10ms epoch is an aliased average over >2 pixels. I'm not even thinking about the reflections from the photometer head (there's a metal rim to the aperture T** had taped on there).

My solution: code an ideal monitor, record from it with the same sized aperture, and divide it out of the measurements. I can then guess a blur function - Gaussian - and fit that to my (4) data points. That's what I did: here is my first estimate of the vertical MTF of my Dell p1130 Trinitron:

The Nyquist limit for this display, at the distance modeled here, is about 23cpd, so I guess this Gaussian is in about the right place. It's hard to believe, though, because horizontal 1-pixel gratings look so sharp on this display. I feel like these must be underestimates of the transfer. I am nervous about how awful the vertical will be...
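For the record, the correction itself is simple enough to sketch. The numbers below are placeholders, not my actual measurements - the point is just the shape of the calculation: simulate an ideal monitor seen through the same slit, divide its modulation out of the measured modulation, and fit a Gaussian MTF to what's left.

% Sketch of the aperture correction (MATLAB). Illustrative numbers, not the real data.
pxPerMM  = 2.67;                                 % the 1mm slit covers ~2.67 pixels
up       = 100;                                  % subpixel samples per pixel
aperture = ones(1, round(pxPerMM * up));         % boxcar model of the slit
aperture = aperture / sum(aperture);

wavelens = [2 4 8 16];                           % square-wave periods in pixels (made up)
idealMod = zeros(size(wavelens));
for k = 1:numel(wavelens)
    x  = (0:wavelens(k) * up * 4 - 1) / up;                 % a few cycles, in pixel units
    sq = double(mod(x, wavelens(k)) < wavelens(k) / 2);     % ideal monitor: perfect square wave
    r  = conv(sq, aperture, 'valid');                       % what the slit would record
    idealMod(k) = (max(r) - min(r)) / (max(r) + min(r));    % Michelson modulation
end

measMod = [0.07 0.37 0.93 0.98];                 % placeholder "measured" modulations
dispMod = measMod ./ idealMod;                   % divide out the aperture: the display's own transfer

% fit a Gaussian MTF, exp(-2*(pi*sigma*f)^2), to the corrected points
f        = 1 ./ wavelens;                        % spatial frequency in cycles per pixel
err      = @(sigma) sum((dispMod - exp(-2 * (pi * sigma * f).^2)).^2);
sigmaHat = fminsearch(err, 0.5);                 % sigma of the pixel spread function, in pixels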

*edit*
It wasn't too bad, just a bit blurrier than the horizontal. Still makes me suspicious that I'm underestimating the horizontal. Not going to bother putting up plots, but here's my estimate of the pixel spread function (you can just see that it's a little broader left-right, that's the vertical blur):
 

Saturday, August 06, 2011

zipper trucks

Yesterday I learned about the existence of zipper machines. We were driving back from Foxboro and there were all these little movable barriers on the interstate; I might have said something about them, and Matt explained that they were moved every day by big, slow-moving machines. I couldn't believe it, but you could see that the barriers were all linked together with spring-loaded metal joints. I came home and looked it up, and found that link, and some youtube videos. I have to see one of these things in action...

Also, my poem output was way up over the last few weeks (like, above zero per week). They're all in drains like google or weekly report or facebook, so for posterity let's put them here. For my birthday, I made a haiku:
thirty-two years down
won't pick up another bit
for as many more

I thought that was clever.

Then, Murf decided to institute a Thursday google+ rhyme circle. Murf started with:
"i'm a serial gangsta, so don't you be hatin';
these rhymes are coming at you - 9600 baud ratin'.
i'm a cereal gangsta, pouring as smooth as silk
all o'er these fruity pebbles some quality soy milk."

I responded with:
"
why's your baud so slow, must be messin' with your flow
don't hate, i got infinite bit rate,
symbols at the speed of light
like nuclear fission, constant information transmission,
a meltdown, crossin' synapses,
no lapses, my latency's good, it's understood 'cause i drink real milk
fortified, omega-3,
carbon chain, developin' my brain, got to sustain that spike train"

I was proud of that. Then one night this week I generated this for facebook:
"
no frogs,
no crickets,
 
no bugs at all. 
just air conditioners, 
and echoes of air conditioners. 
no, wait, 
i hear something- 
i hear a bug. 
what is it? 
what is it?"

That was kind of lame, but it was accurate. Then yesterday the Thursday rhyme circle was late. Murf began:
"
Subjective speckle, what do you say?
650nm class IIIA
Black spots moving as I'm delighted;
Not in my same direction means I'm near-sighted."

And I responded with this:
"
myopia, that's some shit
some negative lenses would fix it
stimulatin' those long wavelength cones
seein' red, thinkin' about homophones
"

I am such a genius.