Friday, April 12, 2024

How Long Ago It Feels

Still on this? 2024 is the year of posts on memory, I guess.

When I was a kid in the 80s, going on rides across town with my parents, the radio was typically tuned to an ‘Oldies’ station, playing songs from the 50s and 60s - stuff ranging from Chuck Berry to Buffalo Springfield. I understood that this was music from when my parents were kids, and I knew enough to know what years were what, and what happened when, what was older or more recent. It was a feeling as much as a knowing - Purple Haze sounds just as not-so-long-ago as A Hard Day’s Night, right? And Rockin’ Robin sounds older than any of them.

Even being, I don’t know, ten years old, I knew a basic historical outline of that time period, of the decades just prior to my existence. The presidents, the faces and names you came to recognize - the Civil Rights movement, the Vietnam War. Then the seventies, and the physical artifacts of that time that were all around: the cars, the clothes, the style and coloration of things that were old but not really old.

What I’m getting at is this: even back then, a kid, I was forming an outline of the recent past, and that involved a combination of knowing and feeling how long ago those things were. Forget the knowings for now - it’s trivial to say that something that happened in 1972 was longer ago than something that happened in 1978 - that 1965 was longer ago than both, that 1952 - the year my parents came into existence - was even longer back. But the feelings were in proper register. Thinking of 1952, about some artifact from that time, felt - it feels - further back than thinking of 1965. And so-on.

Those feelings are something I’ve been thinking about lately. As we get older we are all sometimes surprised by reminders of how much time has passed. In the last few days I’ve been reminded that thirty years have passed since Kurt Cobain’s suicide, a time I recall with some clarity - a few moments, people, associated with that event. Wow! It doesn’t feel like thirty years ago, does it? How time has flown!

But that’s getting close to what’s been bothering me. When I was a kid in the late 80s, thinking of the time when my parents were my age or younger, I would have a feeling of “long ago” that corresponded to thirty years in the past. That’s what thirty years ago felt like then. Honestly, it felt like distant, ancient history - pre-history, something I could only hear about or read about in books. But 1994, thirty years ago now, doesn’t feel distant in the same way. I’m aware of all that’s happened since - quite a lot, two thirds of my relatively full and varied life of changing places and people. But thirty years ago today certainly doesn’t feel like thirty years ago then.

I’ve had these kinds of ruminations lately, like we all do. Nothing special about it. But something more has occurred to me: I have the feeling that thirty years ago then feels quite similar, in fact, to sixty years ago now. The fifties feel about as long ago now as, I suspect, they did when I was younger. The feeling hasn’t updated. Those feelings of the past aren’t really about intervals extending into history. They’re feelings associated with clusters of landmarks. What the 50s, or the 30s, or the 1880s, or the 12th century, or whatever, insofar as they feel like a long time ago, or a very long time ago, feel like when I think about them, doesn’t really have to do with how long ago they were. It has to do with those periods specifically.

Then I start to think, is there really any feeling of “some time ago”? Is there anything left to consider, once I subtract out that cluster of landmarks?

So this line of thinking has actually led me to a new - to me - explanation of this illusion of time flying. It’s not that time-having-passed feels different in some way. It’s not that thirty years having passed up to today feels somehow like ten years having passed up to 1994. It’s more that I am, perhaps, not really sensitive to the interval itself - instead there is just the set of things I know about that time, the time long ago or proximal times, and thinking about that set, or of items in the set, has its own certain content, which varies relatively smoothly if I slide my window of reflection gradually back or forth. If there's anything like the feeling of an interval of passed time, of such a long interval (not the milliseconds and seconds and minutes that the psychologists study), it's really just the comparison of the feelings of now with the feelings of then.

It explains, in some ways, another phenomenon that rears its head once in a while, and which was the topic of the previous post. Sometimes in ruminating over some memory, it can almost feel as though I was just there. It was a decade ago, but if I think of it in a certain way, it could have been a moment ago. What I’m doing there, maybe, is just forgetting what now is, and becoming absorbed in then. There isn’t, there never really is, any feeling of the time elapsed, of the distance in times.

Monday, March 18, 2024

what i remember

My mind goes to strange places sometimes, for reasons I usually don't understand.

I don't mean strange in that the places are strange. I mean, I don't know why I am there. They repeat, like these little memory attractors, but there's nothing to find there. I get the feeling that there was something incomplete that happened, some kind of expectation that whatever was there might recur, and it never did, and that expectation was like a door left open, that can never close, and once in a while I just happen to wander by, and in I go.


I'll quickly note what brought me here, just now. Rearranging some computer code to plot some data. Wondering, how do I best approach this little model selection problem in Python? I am not a great Python programmer. I know how I'd do it in MATLAB, that's for sure. I only need to vary one parameter, the slope, since by design the means of the different conditions should be the same. Right? I should print them out just to be sure.
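For the curious, the little sanity check I was mulling over might look something like this in Python (the data and variable names here are entirely invented, just to illustrate the shape of the problem):

```python
import numpy as np

# Toy data: two conditions that, by design, should share the same mean,
# differing (at most) in slope.
rng = np.random.default_rng(0)
x = np.arange(10.0)
cond_a = 2.0 * x + rng.normal(0.0, 1.0, size=x.size)
cond_b = 2.0 * x + rng.normal(0.0, 1.0, size=x.size)

# "I should print them out just to be sure":
print("mean A:", cond_a.mean())
print("mean B:", cond_b.mean())

# If the means check out, the one-parameter question reduces to an
# ordinary least-squares slope over the pooled data.
slope, intercept = np.polyfit(np.tile(x, 2), np.concatenate([cond_a, cond_b]), 1)
print("fitted slope:", slope)
```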

As this process is working through my head, I feel myself wandering with my friend Ian along the wooden boardwalks at Montgomery Bell State Park (I had to look it up just now to recall the name), near my hometown. We were 11 or 12. Ian's mother had brought us there; she was perusing an art fair set up on the boardwalks, and Ian and I are just roaming, exploring the place. I remember the green-brown park service paint on the boards. It feels like it was autumn, maybe there were pine needles everywhere.

Why am I there? I have found myself back there over and over, just this brief recollection for no reason, no obvious connection to the current moment. Maybe because I've thought of Ian recently? A song stuck in my head that he suggested. I told the story, again, to some friend at Taekwondo of how I started as a kid - with Ian, who quit soon after I started, but I kept it up for many years, and still do, from decade to decade.

Why is that Saturday - I know it must have been a Saturday - at the park still lingering there, more than thirty years later? Why was that door left open?

Sunday, February 18, 2024

Hecatompylos

I did write a post in 2023, but never published it. It will become something later, I promise, and then maybe I will retroactively publish that post. For all of you who were so starved of content last year.

Now, here is something. A story about memory.


I woke on Thursday morning, about 7am, thinking: "Hecatompylos".

This word was repeating in my head. My wife had woken me up, sent me, as on most weekend mornings, to go sleep with the 3-year-old while she gets the almost-8-year-old ready to catch the bus. I kept thinking, "Hecatompylos - what is Hecatompylos? Somebody? Some place? Hecatompylos.. Hecatompylos.."

This word repeated in my mind until I slept, and then, I think, it kept repeating. I think I dreamed of wondering what "Hecatompylos" could be. "Hecatompylos, Hecatompylos"..

When I woke up for good, about 8am, it was repeating, repeating, a one-word earworm. I'm not sure I've had this experience before - I'm sure I've awoken with a tune in my head, but a word?

As I ate my breakfast, I looked up the word on Wikipedia, which told me that Hecatompylos was an ancient Greek name for the Persian city of Qumis, in northern Iran. What? The Wikipedia article mentioned that Alexander the Great had visited there.. I did skim the Alexander the Great page a couple of weeks ago. I had come across his name - mentioned not as "the Great", but as "Alexander III" - in an encyclopedia of ancient science I had been reading, and I had thought, who were Alexanders I and II? So I knew I had perused his Wikipedia entry. I rechecked it, to see if maybe I had come across his significant visit to Hecatompylos in Persia, but.. no mention of it.

It was still repeating! Like a word-beacon, repeating, "Hecatompylos, Hecatompylos". I think I spoke Hecatompylos under my breath a hundred times. I can still feel it in my tongue, I have to resist mouthing the word now. It's so strange.

I went to the lab and did some things, but pretty soon I was googling "Hecatompylos". I came back to the Wikipedia page, and now I saw that "Hecatompylos" could also refer to Thebes, the famous temple city of Egypt. This felt more right than Persia. And, I realized, just last night, before bed, I had read with the 5-year-old a chapter of the Buildings Book on the monumental temple and pyramid of the Pharaoh Djoser. Was it in Thebes? I couldn't remember.

Now I read about Djoser, but found that no, Djoser lived hundreds of miles north, close to Egyptian Memphis (as a Tennessean I have to distinguish Memphises), and that during his time Thebes was a mere fishing village, no Hecatompylos - no city of a "Hundred Gates". I gave up again and tried to work.

A little while later, I tried once again: "Hecatompylos", into Google - but this time, I felt, I should try "He*k*atompylos". This time, a web page comes up, a page from an online Encyclopedia of Borges - Hekatompylos was mentioned in Borges' "The Immortal".

That was it! I had indeed read The Immortal a few nights earlier, Sunday or Monday night. Maybe I had read two pages on Sunday, then the rest on Monday (I have to squeeze in Borges between bedtime reading for the little ones, who alas do not enjoy my reading The Immortal or The Library of Babel aloud). I don't recall lingering on "Thebes Hekatompylos". Maybe I did? But I've read the story dozens of times before. The last time must have refreshed some habitual circuits familiar with the words and phrases of the story, and then, as dozens of times before, they went dormant again.

Then, on Thursday morning, dreaming, one of those circuits was randomly touched - maybe it was the previous night's reading about Djoser and other Egyptian temples and tombs - and the word "Hecatompylos" popped into a dream, without any context or explanation. So free of context that it followed me from one dream into another, into waking, back into dreams, and back into waking life, until I reconnected it with its origin. Once I understood why the word was there, it dissolved back into unconscious memory and the compulsion to repeat it was ended.


***


After an introduction, Borges' story begins:

"As far as I can recall, my labors began in a garden in Thebes Hekatompylos, when Diocletian was emperor."

https://www.borges.pitt.edu/i/tebas-hekatompylos

Friday, August 19, 2022

Thoughts about truth

Something that's been on my mind since - coincidentally, I'm sure - early 2021, or the very tail end of 2020. This is about human society generally, and that was certainly a time for thinking hard about society.

Joe Biden is elected president, but Trump and his party refuse to accept it, and it becomes clear that something's being true is not as simple as it seems. Something clearly seems true to me, so why is it so powerfully opposed or contradicted by others? Of course this is a routine situation in human life, we are always disagreeing about some things - usually matters of belief or opinion or culture or whatever, but we generally should agree on apparently objective things like what happened or what is happening.

Another thing happened earlier in 2020. I read the new book of stories by Ted Chiang, "Exhalation". Many very good stories in there, just like his first book. The title story is the best - especially if you're a neuroscientist or psychologist of any sort - but it was "The Truth of Fact, the Truth of Feeling" that really stuck with me. The themes of the story are there in the title, having to do with what we think or feel is true, and what actually is true in some objective sense, and how these different kinds of truth matter to us in different ways, and how they can conflict.

During this year we have the pandemic, the conspiracy theories, the George Floyd riots, everything. And then the election happens. I'm glad Biden won, still am, but soon after the victory I and many others had a sinking feeling: this didn't fix anything, really. The problems are deep and not going to go away. Those problems could be listed and discussed somewhere else, but what I kept thinking about was truth.

I know that, from the beginning, I was taught to value truth and honesty. You're supposed to tell the truth, you're not supposed to lie. I know that most others are also taught the same thing. It's basic moral education, our parents teach us, we learn it at school, we learn it in life lessons - we tell a lie and are caught, and we suffer the consequences, and we don't do it again. We learn to be honest.

We learn all that, but few of us ever really think about it. I never did. We simply assume that truth is a basic good, some kind of fundamental virtue that we should all value and protect. And if there are disagreements about what is true, that can be difficult, but those are disagreements over fundamental things and so they are fundamental disagreements, and they really matter in that way. Debates about what is true are core, essential debates for a society. Et cetera et cetera. It sounds good, right?

What I realized in the winter of 2020-2021 is that none of that is really true. It's a myth. I mean, of course I do believe that truth matters. And I do value it, and I do want to be an honest person, and all that. But it's not fundamental. It's not a basic good that we all strive to protect or advance or expand. No, it's a luxury, it's a relatively highly-developed value that might be socially inevitable, but it is not fundamental. 

I keep talking about 'values'. By this I mean the parts of life, personal or social, that motivate our actions, especially in that we want to protect or advance them. If you value something, you want to defend it, keep it from harm; you may also want to expand it, to multiply it, to increase its share of the world. Obviously different people have different values, but it's reasonable to suppose, too, that there are some values that are common to all of us, or to the vast majority. We all value safety and security, and health. We all value our family and our friends, and our homes. We value our own sense of self and however we are significant to the world.

I had never really thought about the world in this way before, in terms of values - it's certainly not a new way of thinking, it's the kind of thing you hear about in passing, but I'd never given it much time. I probably had a naive idea of values in the sense of "universal values", the idea that there are things that we should all value even if we don't. And if you'd asked me, I would have said, "yeah, sure - truth or honesty are universal values, everyone should agree to that".

What I realize now is how naive this way of thinking is. There are certainly practical truths that no sane person will deny. The general: we need water and air to live. The incidental: the sun is shining on the courtyard outside. Those kinds of truths are obvious because we experience them firsthand. But most of the world is beyond our immediate experience, and the truth of it is therefore always provisional. We have to take someone's word for it. We can believe what we read or hear in the news, we can believe the readings on our instruments. We can believe our own memories of what happened earlier, yesterday, last year. We can believe those things, but we don't have to - or, we can believe different things than others. Unless there are immediate facts to resolve a difference, the vast portion of what we might believe in is simply that: provisional.

So, we might grow up believing that "the truth" about happenings in the world is something fundamental, but it's not. At almost all times, it's just something we choose to believe. Of course there is generally a state of affairs out there that, if we experienced it immediately, would tend to resolve disagreements - but we almost never actually are directly exposed to it.

And we might grow up believing that the truth is something we should all value, that what is true matters in a fundamental way - but it can't be, because - apart from our immediate experiences - it's not something we ever have access to. What we actually have are choices about what to believe. And that gets to the problem: the real values, having to do with life, family, self identity, are always fundamental, and they are what drive our choices. So while we can all give lip service to valuing "the truth", because we were all taught that it is good and right and moral to believe in and to tell the truth, that's not an accurate picture of the world.

What we believe is true, aside from that tiny slice of the world that we immediately experience, is a choice that is driven by our more fundamental values. What, if true, seems to protect and strengthen my identity, my family, my home, my health? Then I will believe that.

That's my picture now of our social situation. The problems with Trump, the election stuff, the conspiracy theories, etc etc, aren't a problem of truth and lies. The problem isn't with convincing people of the truth. The problem is in those fundamental values. Those have diverged in ways that are barely even touched on by public conversations. The divergence is probably growing larger all the time. It's probably a constant process in social evolution, but in current times I wonder if the internet, social media etc, is accelerating it, so that the divergences outrace our ability to spin stories to account for the contents of society. Those contents are all the elements of culture - the things that we spend our time on, circulating inside and between our minds, this fascination that humans are constantly engaging in. They obscure the driving forces beneath our behaviors, at the same time as they slowly, gradually change those forces, or modulate or recalibrate them. But how do you get down to those bottom levels, how do you begin to fix things? I have no ideas!

Beyond Biden and Trump, and the Chiang story, one more cultural fragment stuck in my mind for years seems very relevant. Nietzsche critiqued self-righteousness, especially in those so focused on enforcing and dictating moral rectitude, in terms that long confounded me; I would think them over and over, trying to understand his meaning. Eventually I did, long ago, and probably that primed me for the realizations I've just written out. I'll paraphrase him here, the critical phrase (from Beyond Good and Evil):

"... no one lies so boldly as the indignant man."

Since I realized his meaning in this passage, I've never forgotten it. You read it and think he's talking about someone else, and maybe he is - without enough self-awareness, which wouldn't be surprising. But really he's talking about all of us. Imagine that you're in an intense argument, and your personal integrity has been challenged - you must respond! You must protect yourself! Isn't this exactly the time that you might... inflate the truth? Stretch it, to prove your point? Invent some detail, to exaggerate, to hammer home your point? Doesn't the truth become soft when you are at stake?

Wednesday, October 20, 2021

mixed signals

The other day I walked into the kitchen to see the following mundane happenings:

1) Daughter sitting at the kitchen counter eating a snack out of a bowl.

2) Wife sitting on a kids chair with son in lap, clipping his toenails.

These were about 8 feet apart. I casually looked at one, then the other, then back again, and got a flash of revulsion at having just seen my daughter casually clipping and eating toenails!

This incorrect impression instantly resolved. The higher-level aspects of these two distinct happenings were briefly entangled. There are many ways to explain this; two come to mind:

1) On foveating an event, the higher-level contents (recognizing-what-is-happening) are present in my experience, but on looking away, they are reduced and only a vague 'pointer' is retained (so that I can look back at the interesting event to re-experience the full thing). In this case, the reduction was delayed or incomplete, so that when I looked at one event, the previously foveated one was still in-mind, and so they briefly overlapped. Since the higher-level contents are strongly enforced by the lower-level contents, which are completely forced by the retinal input, the intermingling was brief and the 'correct' contents survived.

2) Different events can simultaneously be in experience, but they are normally cordoned off from one another. In this case, the cordon was briefly broken and the two sets of high-level contents were mixed - maybe from one leaking into the other.

I think that 1) is the more likely alternative. I doubt that multiple sets of high-level contents can be simultaneously experienced, since they inevitably will sometimes involve common contents (in this case, both would have involved 'person/child/fingers/kitchen/etcetc'), and so would naturally be inextricable. Instead, my impression that I can simultaneously entertain different sets of high-level contents must instead be due to keeping one set in detail, the object of attention, while the other is reduced to something more like a pointer which can be quickly grabbed by attention to reconstitute the whole set.

Tuesday, August 10, 2021

Book Review: Fall (or, Dodge in Hell) by Neal Stephenson

Not really a book review. More like, some thoughts on this book. It was long and not so straightforward as most books I read, so there was something to think about rather than just being awed (like with the Three Body books I finished last year - awesome books, huge ideas, but it was all there on the page - nothing to figure out, or reflect on, as far as I could tell).

I read this because of a review I read when it came out in 2019. There's an early section that I think got most of the attention of reviewers, where the characters go on a road trip through a post-truth America, which sounded most interesting to me. In the world of the book (or at least of this section - the story seems to span a century, and this is the only discussion of the world outside the immediate experiences of the main characters) the American hinterlands have devolved into a confused tribal culture where all belief systems are focused on the noise-filled descendant of the social-media internet we have today, and every belief is a false conspiracy theory of one sort or another. The city-classes rely on special tools, namely human editors, to keep their information accurate. That's a neat idea, and it barely has anything to do with the rest of the book!

I wasn't disappointed, though. Because while the whole back half of the book seems unrelated to this road trip, I realized after finishing the story that there was a serious connection to make. In the world of the road trip, it's pretty clear that the populace's disconnection from reality is a Bad Thing: the lives of people are brutal and confused and social progress is made impossible. It's suggested that this is because they are all exposed constantly to these noise-filled media channels where all content is false, generated by algorithms with varying hidden motives. The obvious implication is that what keeps the lives of the city people better is their connection to reality and true information - it keeps them healthy, safe, and allows for progress.

The back half of the book isn't about the real world at all, though. It's about a digital afterlife where human consciousnesses* are uploaded after people die. This world begins when the first uploaded person (Dodge) finds himself exposed directly to a world of chaos, which he learns to shape into meaningful things, thereby becoming the creator god of a new plane of existence. It's never said outright, but the chaos is various cloud-based (i.e. internet-based) computational activity, which means nothing at all to a human mind that is connected to it without an interface; by molding it into meaningful stuff (from Dodge's perspective), this computational activity is being replaced with new computational activity (from the outside-world perspective).

At first I thought that the book would get into an actual connection between this cloud-world of noise and the garbage-information world of the hinterland. But that connection never happens, and after a while I stopped expecting it and forgot about it. But after finishing the story I realized a different connection: the people of the digital afterlife are themselves all exposed to a false world - it's a computer game, basically - but it's ok, a Good Thing, and the ultimate meaning of the story is found there. Is it an inversion of the situation revealed in the road trip?

Sort of, I think. But also it's a comment on curation: the world Dodge creates is real in that everyone that inhabits it experiences it the same way. There are actual truths there, though they are ultimately shared psychological truths - at one point Dodge discovers that he can't change the world if other people are watching. 

This sets up a very different situation, but with obvious parallels to and deviations from the road trip world. In the real world, truth is essentially a physical thing: things exist physically, or events happen physically, and that's what makes them real, and what makes ideas about them true. People in the hinterland have beliefs that are largely false, because they are about things and events that don't exist or didn't happen. 

In the afterlife, there is no physical existence in the normal sense - physically, all the objects and places that people experience are actually processes running on vast banks of quantum computers. So strictly speaking, the things people believe in are all false, or at least not true. Not only that, but people in the afterlife have no knowledge of the 'real' world on the outside. Truth in the normal sense is impossible. Yet, because everyone agrees on this reality, we can think of beliefs in this world as having a kind of second-order truth value. Beliefs are at least meaningful and grounded, even if they are grounded in something (seems-to-be rocks and trees) that is not what it actually is (seems-to-be computers in buildings).

So then I wonder - is this a comment on the earlier part of the story? Because we can see the degenerated people of the American hinterland in the same way: their beliefs might be shared delusions or hallucinations, or vast conspiracy theories, but if they are meaningful and grounded, isn't that as good as what people have in the digital afterlife? Or, are the afterlife people in just as dire and meaningless a situation as the hinterlanders?

The book doesn't try to make this equation, I'm the one making it. The digital world is portrayed as wonderful for some, and nightmarish for many others, while the hinterland is depicted (indirectly) as more uniformly nightmarish. But again, maybe there are those in the hinterland that are living well, living their best lives despite their detachment from the larger realities. Is Dodge equivalent to a benevolent pirate king living out in the Nebraska wasteland?

At any rate, I was surprised at these connections, how they started to creep up on me.

*Re consciousness. I think that Stephenson is sufficiently vague about exactly how consciousnesses are simulated that I could suspend disbelief, or fill in the blanks myself. It sounds like the idea is that, with the connectome of a human brain, you just simulate the network on a massive parallel computer, giving each neuron its own computer processor. Maybe in some version of that, IIT consciousness might be possible (though I think probably not - but maybe)?

A bigger problem with the idea as he describes it is that a connectome could not be enough to simulate a brain, even if you had the connectome right down to every synapse, and even if (as we think is true) neurons connected with synapses are actually the right substrate of consciousness. That's because you don't know the rule for each neuron. A neuron has all these inputs, thousands of synapses on its dendrites, but what does it do when the neurons on the other side of those synapses are in whatever state? The connections don't tell you anything about that - you have to know something about how each neuron works, and what makes it harder is that there are so many kinds of neurons, so many kinds of synapses, and they might all follow different rules.
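A toy way to see the problem (my own illustration, nothing from the book): take the exact same "connectome" - a fixed weight matrix - and run it under two different per-neuron rules. The trajectories diverge almost immediately, so the wiring alone doesn't pin down the dynamics.

```python
import numpy as np

# A fixed "connectome": one weight matrix for three toy neurons.
W = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  1.0],
              [-1.0, 1.0,  0.0]])

def step_linear(v):
    # Rule 1: each neuron just sums its weighted inputs.
    return W @ v

def step_threshold(v):
    # Rule 2: identical wiring, but neurons fire all-or-none.
    return (W @ v > 0.5).astype(float)

v0 = np.array([1.0, 0.0, 0.0])
a, b = v0.copy(), v0.copy()
for _ in range(5):
    a, b = step_linear(a), step_threshold(b)

print(a)  # grows without bound under the linear rule
print(b)  # settles into a binary oscillation under the threshold rule
```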

But those are details that could be filled in with enough imagination.

It was a great book, enjoyed it. 9/10. (Minus 1 for being too long or too short - there's a bit more that could have been explained about how the world worked, and the connections between the inside and outside of the afterlife, giving us a nice 1200-page book, or a part 1 and part 2 - or it could have been pared down to lessen expectations for those kinds of details. And the ending, considering how much we built up to it, was awfully abrupt, but good.)

Thursday, July 22, 2021

***

When I started grad school, one of the first things I did was to adopt a kitten that had been born earlier that summer in the apartment across the hall. Now, 18 years (!?!) later, here is a Eulogy for my Cat, in the form of a couple of cat-related anecdotes from my studies.

In my first year of school I was getting into the science of spatial vision - acuity, contrast, retina, V1, etc. I learned that a cat's visual acuity is like 5x worse than a human's. So, naturally, I figured that everything must look *blurry* to a cat.

See, I lived with this creature, and I would often think, how do things appear to her? I remember walking home late one night thinking about how stars appear to me as points - must they appear to a cat as blurry spots? And I realized that this couldn't be the case.

When I take off my glasses and see a point of light, it looks blurry in that I can see that what should be a point is actually smudged across a region of the visual field. And to see that smudge as a smudge, I must be seeing details within the region it fills.

If something appears as a point, that means that it has no apparent interior. So in fact, the acuity limit is telling us about the spatial resolution of appearance. Some visible thing that's smaller than the acuity limit must appear as a point. 

Without glasses, a point is imaged on my retina in a smudge that's a good fraction of a degree across. But my true visual acuity is in the range of 2 minutes of arc; so I see that smudge as an extended, 2d thing. My cat wouldn't see the smudge - she'd just see a point.
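To put rough numbers on the reasoning (everything below except the 2-arcmin figure is my own ballpark guess):

```python
def appears_as_point(image_size_arcmin, acuity_arcmin):
    """A retinal image smaller than the acuity limit has no visible interior."""
    return image_size_arcmin < acuity_arcmin

HUMAN_ACUITY = 2.0              # arcmin, as in the text
CAT_ACUITY = 5 * HUMAN_ACUITY   # ~5x worse

myopic_smudge = 30.0   # my uncorrected blur, "a good fraction of a degree"
focused_star = 1.0     # a well-focused star image, near the optical limit

print(appears_as_point(myopic_smudge, HUMAN_ACUITY))  # I see an extended smudge
print(appears_as_point(focused_star, CAT_ACUITY))     # the cat sees a point
```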

Now, years later I'm still making use of that insight, for example when I have argued that peripheral vision can and should look just as sharp as foveal vision - it's for the same reasons that a star should look like a point to my cat.

Another one, now about my perspective rather than the cat's. Some version of this is familiar to any of us: you see a dark shadow at night - is it the cat? or is it something else? There was one instance of this that I puzzled over for years.

I had a pair of shoes that I would sometimes leave by the wall in the hallway of my apartment in Boston - now we're in my first postdoc, 10 years ago. Time and again, I would see those shoes out of the corner of my eye and think, "Cat!" - then foveate and see, "shoes!".

What fascinated me was that after the first time, I knew what was going on. That is, the high-level visual part of me knew they were shoes. But over and over, my early visual system was fooled - that brown shape of a certain size, on the floor, was most likely "Cat".

I started to collect these cases, where peripherally-seen X makes me think I'm seeing Y, even once I know X is really X. I posted a good case here a while back: (https://twitter.com/AndrewHaun3/status/1261891022625398784). But the prototype will always be Shoes->Cat.

Anyways, before I was as theoretically rigid as I am today, I used to wonder: when I thought the shoes were my Cat, did they actually *appear* as a Cat would appear? Did my mistaken recognition subtly reshape the spatial patterns so as to make them a better fit for "Cat"?

It does seem like this sort of thing can happen to some degree, e.g. with Ryota Kanai's "healing grid" illusion. But wholesale, at the object level? How could we find out? It doesn't matter, because I don't think that's what's happening here.

What I think is happening here, rather, is that I am experiencing a spatial form that is shoes-shaped, shoes-textured, etc. And it's decidedly not cat-shaped, cat-textured, etc - it's not clear-cut pareidolia, as when I might see the Cat-shapedness of a Cat-shaped cloud.

With the shoes, they're the right size and location for a Cat, and my super-sensitive Cat-recognizers are activated, and I experience recognition-of-Cat at the same time I experience a shoes-shaped, shoes-textured spatial gestalt. It's not a wholly congruent experience.

Even now, this week, when my Cat is no more, I've already made the mistake of thinking that a shadow in the hall was "Cat"; that a light *thump* in the other room was "Cat". Now, not only is it a mistake, it's an impossible mistake. But the brain does what it does.

***

Tuesday, October 20, 2020

Procrastination

 I have reached peaks of procrastination that even I might once have thought were too great, too high. I've done it with the help of 'working from home', this monstrous situation that allows me to practice the piano, play video games, read nonsense, and any other disastrous activity, any time I like.

Back when I worked at the lab, in an office, I could only procrastinate for so long before I was cornered and had no choice but to do-the-thing. I couldn't play the piano or watch Netflix on a Tuesday afternoon in the lab. I might avoid work for a while by reading internet garbage, but eventually that runs out - it really does, it just takes a few days - and I have to do-the-thing.

But this...

Since the beginning of the pandemic, I've learned a Chopin nocturne - pretty well - laid groundwork for several Rachmaninoff preludes, memorized the 3rd movement of Beethoven's 'Moonlight' Sonata (biggest thing I've memorized in... decades, probably?), and now I'm getting the hang of the awesome last movement of his first Sonata. I'm getting pretty good at improvising random lines over 9 chords! It's great!

And it's worthless and stupid and dangerous, since I'm not a pianist and I'm not getting paid to play the piano. I have papers to write, finish, revise; experiments to plan; blah blah blah.

Wednesday, July 22, 2020

A point on interpreting inattentional blindness studies

On Inattentional Agnosia

In a recently published study, Cohen et al showed that, under very naturalistic conditions (viewing 3D natural scene videos in VR), observers often fail to notice that the entire periphery of the visual stimulus has been rendered colorless, i.e. completely desaturated. Cohen et al conclude that the visual periphery is far less colorful than one might have thought. They state this conclusion in several ways:

“these results demonstrate a surprising lack of awareness of peripheral color in everyday life”
-here qualifying the phenomenon as ‘lack of awareness of peripheral color’. Later, they say it less ambiguously: “If color perception in the real world is indeed as sparse as our findings suggest, the final question to consider is how this can be. Why does it intuitively feel like we see so much color when our data suggest we see so little?”.

So, Cohen et al believe that their data suggest we see very little color. This particular claim is logically absurd, however. I explain why in the following:

Cohen et al clearly believe that the phenomenon they present is a case of inattentional blindness. Inattentional blindness phenomena are frequently encountered in visual experience, and are not difficult to bring about in experimental scenarios. Typically, an observer is shown some stimuli connected to an explicit or implicit task; during the course of the task, some unexpected stimulus is inserted, and the observer may fail to notice. These failures to notice are often very retrospectively surprising, since once the observer knows what to look for (once they’ve been debriefed) it is easy to see the missed stimulus. Experimenters often conclude from these results – both the failures to notice and the retrospective surprise - that, in one way or another, the observers (and the rest of us) must see far less than they think they do. But this is not the kind of conclusion that Cohen et al are drawing.

The most famous example of an inattentional blindness phenomenon has got to be Simons and Chabris’ gorilla. In their experiment, observers watched a video of several people playing a ball-passing game. The players move around constantly, throwing the ball back and forth; the observer’s task is to count the number of times that certain players catch the ball. With no warning, halfway through the video, a person in a gorilla suit wanders into the middle of the ballgame, stops and waves at the camera, and then wanders back out of the frame – the ball game continues. Many observers do not notice the gorilla at all!

Simons and Chabris used this and similar results to advance a version of the “we see less than we think” argument. But what if we transplant Cohen et al’s conclusion to the gorilla experiment?

Here is a sentence in general terms that describes both studies:
“Observers viewed a complex stimulus that engaged their visual attention. After a while, a change was introduced to the stimulus that was retrospectively obvious. Many observers did not notice the change.”

Now, here is the reasonable, broad conclusion (a la Simons & Chabris):
“We must not notice as many things as we would expect based on what seems to be obviously noticeable.”

And here is the unreasonable, specific conclusion (a la Cohen et al):
“We must always be having the kinds of experiences evoked by the changed stimulus.”

In the Cohen et al study, they replaced a colorful scene with a colorless scene; observers didn’t notice; so, according to their reasoning, we must actually be having colorless experiences all the time (or more precisely, experiences of “so little” color). Otherwise, the reasoning seems to go, we would notice the change from colorful to colorless. We don’t notice it because it was colorless all along (they do include a caveat that maybe it’s the other way around, that even the grayscale scene evokes a colorful ‘filled-in’ experience, but that doesn’t seem to be their favored interpretation).

For the gorilla study, a gorilla was introduced incongruously into a ball game; observers didn’t notice; so, we must actually be having experiences of incongruous gorillas all the time (or, maybe more precisely, experiences of gorillas in ball games?). Otherwise, we would notice the change from a no-gorilla to an incongruous-gorilla scene. I don’t want to go on with this because it’s obviously absurd. But isn’t it the same logic as the color argument?

The absurdity comes in part from arguing from a complete lack of evidence: they are taking absence of evidence (failure to notice the change) to be evidence of absence (of color experiences). The experiments they are doing have no bearing, it seems to me, on whether or not their observers are actually experiencing color in their peripheral vision.

But more than this, the absurd conclusion comes from a lack of engagement with the important concepts at play. Color, most importantly. What does it mean to see color? That's for another time, I guess.

Before I finish here, an attempt at charity:

Perhaps the logic Cohen et al would derive from Simons & Chabris is somewhere in between the broad and reasonable, and the narrow and unreasonable:

"If we do not notice something, we are not experiencing it."

This is a strong claim, which I know that Cohen et al and many others would more-or-less endorse. But it does demand some engagement with some basic questions: if one's experience is colorless, what is it like? Is it like experience of a monochrome scene? Why are shades of gray excluded from 'color' status? What is special about 'chromatic hues'? Is there really less to seeing a monochrome scene than there is to seeing a color scene? Think about it: if you are seeing a spot as blue, that precludes your seeing it as any shade of gray, just as much as it precludes it from being yellow or red or whatever. Each part of the visual field always - it seems, at least - has some color in the broad sense.

In fact, Cohen et al did find that subjects always noticed if all color, in the broad sense, was removed from the periphery, i.e. if it were replaced with a flat gray field. Which would seem to defeat their basic conclusion that we are not seeing color, or much color. So, again, what is supposed to be so special about chromatic hues? Interesting questions, definitely.

Thursday, March 05, 2020

A dialogue! (old but good)

I ask you to point your eyes up towards the clear sky and to tell me what you see. “An expanse of blue,” you say. I nod and say, “Ah, so simple. Blue is just one thing – your visual experience is so simple. Is that surprising?” 

This doesn’t sound right to you. You shake your head. “No, it’s not simple. There’s blue everywhere. Every location is blue; every part of the space is blue. It’s clearly not simple – it’s a vast structure of blue spots. I’m not even sure how to describe it to you.”

“Well,” I say, “you’re most likely confabulating this description. Through your life experience with using vision, you know that the sky is extended spatially, and that if you move your eyes around you’ll still see blue, and you know from moment-to-moment that the last thing you saw was what you’re seeing right now – simple blue – so you are illuded into claiming you see an expanse of blue. But you actually see no such thing.”

“How can you claim this?” you ask. “Why should you doubt what I tell you?”

I shake my head sadly. “Subjective reports are known for their fallibility. People often claim to have seen things that they could not have seen; they claim their experiences have qualities that they cannot have. But I’ll suspend my disbelief for a bit. Can you convince me?”

You seem a little annoyed, but you nod. “Perhaps.”

“Okay. How many parts are there to this blue expanse?” I ask. You don’t know. We go through some basic tests and it seems that you can’t really tell me about more than a handful of spots at once – yet you persist in claiming that the actual number of blue ‘spots’ is vast.

“Are they all there at once?” You seem to think that they are. “Could it be that the parts are there only when you look for them?”

“No,” you say, “it feels like a big, continuous expanse of blue. It’s not a little searchlight.”

I proceed. “But I’m asking you to convince me of that, not just to tell me again and again. As far as I can tell, you can only report the color of a few spots at a time – a big ‘sky-sized’ spot, or a few little ‘point-sized’ spots. But your momentary capacity seems to be extremely small – where is this huge expanse? And how does it make any sense that you should experience such enormous complexity, but be able to interact with only a vanishingly small portion of it?”

You seem unsettled: “Why,” you ask, “would I claim to see an expanse of spots when I only see a few at a time? What do I gain by confabulation?”

“But it’s a meaningful confabulation – you are unaware of the limits or boundaries between your momentary visual experience, your memories of recent experiences, and your expectations of what future experiences will be like. The reports you generate are more a confusion of these different processes than a confabulation.” I concede a word, but little else.

“Well then,” you say slowly, “what does this confusion feel like? Might it feel like an expanse of blue? Or do you assume that only the perceptual process constitutes experience?”

I miss your point. “But we’re exactly talking about perceptual experience – of seeing blue – don’t try to shift the goal posts.”

“No, we aren’t talking strictly about perceptual experience, though I do think my experience is fairly categorized as perceptual rather than memorial or expectative. You’re the one who introduced other ‘processes’ into the conversation.”

“Well, this can’t work,” I say. “I can concede the process thing, but this doesn’t address the reportability issue at all, and it’s highly implausible, even worse than if you put everything in ‘perception’. Are you claiming that you experience all your memories or all your expectations, at once?”

“No no,” you dismiss the idea with a wave of your hand. “I was just asking – what do you think such confusion should feel like?”

“Like what you’re feeling now,” I say.

You roll your eyes. “Come on. Clearly the confusion you’re suggesting should feel very different from what I am feeling – or no, from what I claim to be feeling. Otherwise we wouldn’t be having this debate.”

“Well, I can’t say exactly. You are experiencing what you have access to, and you can report what you have access to; so your experience must be of a narrow set of blue spots. And you claim otherwise because whenever you check other spots, you immediately begin to experience them – so you mistakenly believe that they were there all along. Your experience isn’t what you think it is – and it isn’t what you claim.”

You seem perplexed. “Does that mean that unless I am queried about my experience, I am not under this illusion? I only become mistaken when asked to describe what I’m experiencing?”

“Maybe?”

You decide to change tack. “Okay. Can you tell me what substantive difference there is between this illusory or mistaken experience and an actual experience of a blue expanse?”

“Well, it would be a huge difference – the illusory experience is actually very limited and consists of very few parts, including the few blue spots and a particular set of expectations and memories that lead you to claim that you see an expanse of blue. The actual experience of a blue expanse would be just that – many many more spots, and no necessary memory or expectation aspects, though you’d probably also have those in addition.”

“I can’t help but think,” you say carefully, “that you’re doing something slippery here. You want to know why I claim to see a blue expanse, and your explanation has to do with these non-perceptual processes and how they seamlessly support my very limited perceptual process. And you reject my explanation for my claim – that I really am experiencing a blue expanse – because I can’t report the whole expanse to you. But can I report all my memories and expectations to you? Do you know how to collect such data?”

“I think that the fact you’re able to so quickly report on what you see at randomly cued locations suggests that those processes must be at work.”

“Surely they are, and I can tell you that I do indeed have experiences of memory and expectation. But I’m wondering why you think you must reject my explanation but are satisfied with your own.”

“Certainly I’m not satisfied – there is much to learn. We really understand perception etc rather poorly at this stage.”

You shake your head.

Thursday, December 05, 2019

EHS

Falling asleep two nights ago, I realized I was hearing a terrible noise, a roaring, screaming cacophony; but then, when I realized it, I also realized I wasn't hearing anything at all. For a while, maybe tens of seconds or a few minutes, I had been lying there listening to this noise, with a vaguely uncomfortable feeling, mind drifting - falling asleep, in other words. On attending to it, I seem to have broken myself out of a hypnagogic auditory hallucination - the experience faded after a few seconds of my concentrating on the fact that, actually, I wasn't hearing anything at all, and as far as I know it didn't come back, and I fell asleep.

The next day I told Jonathan about it, and he reminded me of "Exploding head syndrome", which is usually described as a sharper, more acute noise ("snapping of the brain" was a term coined by a doctor in 1920), but I think what I experienced fits. Maybe it's happened before - probably, I think - and either I proceeded to sleep without the lucid break, or I've forgotten the details. It doesn't happen often, at least.

The experience was basically like auditory imagery, similar in quality to hearing music or voices in your head; but whereas those kinds of experiences generally seem (to me) to be endogenous and even self-willed (even when it's a tune you can't get out of your head), this sound felt out of my control. It was frightening, but as I attended to it it was clearly a completely internal experience - less real than tinnitus, but in retrospect I think more than just a typical auditory image. I say this because now, when I try to imagine the sound, I can get a rough idea of it but I can't experience it as vividly as I did during the episode; and when I imagine a piece of music, or a voice, these also seem less substantial.

At any rate, once I attended to the experience, and reassured myself that in fact I was hearing nothing and that it was a hallucination, the cacophony dissolved into the regular silence of auditory imagining. Maybe it was an aftereffect of the long holiday weekend? Exhaustion?

Thursday, April 18, 2019

Parahypnic Hallucinosis?

Remember this episode:
http://xuexixs.blogspot.com/2018/08/blinking-plants.html

That was my bout with what I termed 'gardener's hallucinosis', where I spent all day pulling clearweed and the like from the gardens, and wound up with vivid blinking hallucinations. I likened it to the 'eyes open' geometric hallucinations I sometimes had back in my migraine days.

Well I've noticed a few times lately something similar happening under a specific circumstance. The circumstance is: I fall asleep in my daughter's room as I read her to sleep. I wake up a couple of hours later, stumble to the bathroom to brush my teeth, and I go get in my bed.

It doesn't happen every time, but sometimes - and last night very vividly - on waking in this way I have vivid and complex geometric hallucinations. Fine-grained, colorless - much of the content is just of very, very fine beads or dots, flickering and moving - but mixed into it are larger-scale features. Last night, the features were like a high-pass Kandinsky painting: discs and long, smoothly-curving lines, all moving and twisting around randomly, but no particular surface colors except for grayness, or darkness.

I could see it all fairly clearly with eyes opened, until I turned on the bathroom light and then the experience faded.

Maybe not coincidentally, I have been having minor headaches lately, and I think there was one yesterday. My brain must be in a state?

Tuesday, April 16, 2019

Two Things

Got two emails last night, one right after the other:

1. I get to give my 'color across the visual field' talk at ASSC 23. Gonna put my mouth where my money is! I wonder how they'll take it.

2. I was actually awarded an NEI travel award by the VSS! When's the last time I won something? Yeah, I don't know either!

Monday, October 15, 2018

Perfect Cadence

Just a few days ago, I realized something about my simple mind. It had been bubbling up for a few months, but suddenly it's crystallized:

Start your melody with the dominant tone on the upbeat, then the tonic on the downbeat ("5-1"), and I will love your melody and it will stick in my mind forever. Apparently.

I am a lifelong piano player, but I've only recently really started to get acquainted with the general classical piano repertoire. So while I've been playing everything by Erik Satie since I was a teenager, only in the last 5-6 years have I started to play a lot of Chopin, Scriabin, Grieg, etc. Only 2-3 years ago did I first start to learn Chopin's etudes.

One of the etudes I played for a while, maybe the most beautiful one and one of the most famous, is the Op.10 no.3 in E major. It starts on the 5 (B), then to the 1 (E).



That is a nice melody! Happy but a little sad, reflective, contemplative. Then this summer, I start looking for Schumann pieces to play, and of course I rediscover his "Traumerei" in F:


Very nice melody, easy to play, similar mood, meaning (in my mind) to the Chopin etude. And it also starts from the 5 to the 1. I notice this and think, how neat, two melodies I like that start the same way - and just a half-tone apart!

Then this summer, I start listening to Grieg's lyric pieces, and I get the book and they're all fun to play, mostly not too difficult. One of my favorites is one of the simplest, the "Watchman's Song".
Now you see it again, again in E major: 5-1 to start the melody. And again, similar mood. Now I start to realize something funny is going on. I'm not just cherry-picking here: these are three of my favorite piano pieces to play or hum to myself in the hallways in the last year or so. I also realize that my favorite Grieg melody, "Solveig's Song" from Peer Gynt, starts the same way (5 up to 1 on the downbeat), though it is in A minor.

Then, well, I forget about it - and last week, not sure why, I'm thinking about it again. My favorite tune from Satie's Gnossiennes: no. 5, in G major.


Not just the first two notes - all four of these pieces (Chopin, Schumann, Grieg, Satie) have the same first three notes, and they're all in nearby keys (Emaj, Fmaj, Emaj, Gmaj).

Once I noticed the Gnossienne, I start finding the pattern everywhere. First, many other great classical melodies, though mostly in minor keys (the major ones apparently stand out to me), and I realize I can just sit and they're so easy to find in my brain:

Beethoven's "Marmotte"; Smetana's "Moldau"; Grieg's "Solveig's song"; Faure's Sicilienne, Scriabin's Prelude 14 (which I learned 10 years ago, one of the first of his that I learned); the 'song without words' from Holst's folk song Suite, which was always my favorite part of that tune when the band played it in high school. Those are all in minor keys, but notice this too: whereas the major key ones all go down from the tonic (to the major seven, each one), the minor key tunes all go up! And except for the Sicilienne, they all go up to the 2 (it goes up to the 3).

And what do you think has always been my favorite leitmotif from Star Wars? The "Force Theme", in C minor, which goes up from the tonic to the 2!

Then we can do folk songs and pop songs; I would immediately have listed my childhood favorites as O Shenandoah (Major, but goes up from the tonic) and "My Grandfather's Clock" (Major, goes down to the 7 like the four initial examples). Everyone's favorite 'Simple Gifts' (the Shaker hymn used by Aaron Copland). My favorite Simon & Garfunkel song: El Condor Pasa (Minor, down to the seven from the tonic); a pretty good Queen song, though not my favorite, is "Who wants to live forever", in a minor key going up to the 2 from the tonic.

My two favorite Tom Waits tunes: "Downtown train" (outside another yellow moon) and "If I have to go", though those are so similar it might be that the former is a rewrite of the latter. And what Mozart melody, which I would always have said was his most beautiful, starts in exactly the same way (same first 4 notes) as those Tom Waits melodies?

It doesn't start on the upbeat, but it's the theme from the second movement of his clarinet concerto:



[score: the Adagio theme, written pitch for clarinet in A, F major, 3/4 time - C F A | A G F | C F A C | C Bb A]

***

Now, starting your melody by going from the dominant to the tonic doesn't guarantee I'll love it. I quickly thought of two examples I don't really care for - just two, while it was so easy to think of the two-dozen beloved ones above: I don't enjoy Schumann's "The Happy Farmer", maybe because my mother's a piano teacher and I heard it way, way, way too many times in my childhood. And Wagner's wedding march ("Treulich geführt") is just clichéd to death. But then, speaking of Wagner, there is the main theme of the Tannhauser overture, which I remember listening to over and over again, for its inspirational feeling, 2-3 years ago. Easily my favorite Wagner theme, though I had not thought of it in a while (and I have never listened to the actual opera, so I don't know the source of the theme - wikipedia says it's from the "Pilgrim's Chorus").

In sum it seems that there is a key in my brain, shaped like "5-1", and it lets you right into my eternal musical memory.
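For fun, the pattern is easy to state in pitch-class arithmetic: the dominant sits 7 semitones above the tonic, mod the octave. A little Python sketch, with the opening notes of the four major-key examples taken from the descriptions above:

```python
# Pitch classes: C = 0 ... B = 11.
PC = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11, 'Bb': 10}

def degree_semitones(note, tonic):
    """Semitones of `note` above the tonic (0 = tonic, 7 = dominant)."""
    return (PC[note] - PC[tonic]) % 12

def opens_5_1(first, second, tonic):
    """True if a melody opens dominant -> tonic ("5-1")."""
    return degree_semitones(first, tonic) == 7 and degree_semitones(second, tonic) == 0

# (piece, first note, second note, tonic), openings as described in the post
pieces = [
    ("Chopin Op.10 no.3, E maj",      "B", "E", "E"),
    ("Schumann Traumerei, F maj",     "C", "F", "F"),
    ("Grieg Watchman's Song, E maj",  "B", "E", "E"),
    ("Satie Gnossienne no.5, G maj",  "D", "G", "G"),
]
for name, first, second, tonic in pieces:
    print(name, opens_5_1(first, second, tonic))  # True for all four
```

The mod-12 arithmetic is what lets the upbeat dominant sit either below the tonic (as in these four) or above it (as in the minor-key tunes that go up).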

Monday, August 13, 2018

Gardener's Hallucinosis?

Interesting experience yesterday, Sunday.

Spent ~5hrs outside in the gardens, pulling weeds. Had done the same for a few hours on Saturday. When I came inside for good, about 5 or 6, I started to hallucinate during blinks - when I blinked my eyes, especially when not prepared for it, I would see images of the plants I had been pulling all day. Sometimes very clear, seeing leaves with their serrations and textures, and tendrils curling around - the images were coherent and (mutedly) colorful, seemingly randomly selected but each was a recognizable one of the real plants I had seen, mostly members of the 2 or 3 most common weeds of the day.

Sometimes the images were strong enough to be distracting, making it hard to see - or to recognize - what was actually before my eyes. But I think they were only actually visible during the blinks. I managed over time to notice some properties of the images - I could hold my eyes closed after an effective blink. It was still unclear to what extent I was *really* seeing the fine details, or whether the actual images were coarser and just 'suggesting', as in normal visual imagery, the fine details. Holding my eyes shut, it seemed that the form of the afterimages or noise, in the eyes-shut darkness, guided the structure of the hallucination: spots of afterimage seemed to appear as leaves, streaks as stems or tendrils. But it was not so clear as to be certain of this.

The experience lasted until I went to bed, 6 or 7 hours later, but it had attenuated by then. I slept and remembered several dreams that had nothing at all to do with plants (one I remember, now, was that my lab seemed to be based in the house I grew up in, and some newcomers were using space in the den - dream-Giulio warned me not to give them too much space, or they'll think they can take more). When I woke up this morning, the phenomenon returned for another hour or so, but is gone now.

The phenomenon resembled, to me, the kinds of hypnagogic hallucinations some people have after long, repeated activity ('the Tetris effect'), but I can't find reports of this in normal waking experience (albeit during blinks). I described it to Giulio and others; the physiological explanations are kind of clear, but as to why it happened to me and why it isn't much more common, that's an open, strange question.

Another thing it reminded me of, was back in the migraine days, seeing geometric web patterns after waking, sometimes during blinks. Similar kind of dynamic, but I don't think those experiences ever lasted more than minutes, definitely not many hours.

Tuesday, February 06, 2018

IIT and Star Trek

[I originally posted this to a Star Trek forum because I am a Star Trek nerd but here it is for better posterity:]

"Complex systems can sometimes behave in ways that are entirely unpredictable. The Human brain for example, might be described in terms of cellular functions and neurochemical interactions, but that description does not explain human consciousness, a capacity that far exceeds simple neural functions. Consciousness is an emergent property.” - Lt.Cmdr. Data

A post the other day about strong AI in ST provoked me to think about one of my pet theories, there in the title: Data is conscious, the Doctor is not, and other cases can be inferred from there. Sorry that this is super long, but if you guys don't read it I don't know who will, and my procrastination needs an outlet.

First, some definitions. Consciousness is a famously misunderstood term, defined differently from many different perspectives; my perspective is that of a psychologist/neuroscientist (because that is what I am), and I would define consciousness to mean “subjective phenomenal experience”. That is, if X is conscious, then there is “something it is like to be X”.

There are several other properties that often get mixed up with consciousness as I have defined it. Three, in particular, are important for the current topic: cognition, intelligence, and autonomy. This is a bit involved, but it’s necessary to set the scene (just wait, we’ll get to Data and the Doctor eventually):

Cognition is a functional concept, i.e. it is a particular suite of things that an information processing system does, specifically it is the type of generalized information processing that an intelligent autonomous organism does. Thinking, perceiving, planning, etc, all fall under the broad rubric of “cognition”. Humans are considered to have complex cognition, and they are conscious, and those two things tend to be strongly associated (your cognition ‘goes away’ when you lose consciousness, and so long as you are conscious, you seem to usually be ‘cognizing’ about things). But it is well known that there is unconscious cognition (for example, you are completely unaware of how you plan your movements through a room, or how your visual system binds an object and presents it as against a background, how you understand language, or how you retrieve memories, etc) - and some theorists even argue that cognition is entirely unconscious, and we experience only the superficial perceptual qualities that are evoked by cognitive mechanisms (I am not sure about that). We might just summarize cognition as “animal-style information processing”, which is categorically different from “what it’s like to be an animal”.

Intelligence is another property that might get mixed up with consciousness; it is generally considered, rather crudely, as “how well” some information processing system handles a natural task. While cognition is a qualitative property, intelligence is more quantitative. If a system handles a ‘cognitive' task better, it is more intelligent, regardless of how it achieved the result. Conceiving of intelligence in this way, we understand why intelligence tests usually measure multiple factors: an agent might be intelligent (or unintelligent) in many different ways, depending on just what kinds of demands are being assessed. “Strong AI” is the term usually used to refer to a system that has a general kind of intelligence that is of a level comparable to human intelligence - it can do what a human mind can do, about as well (or better). No such thing exists in our time, but there is little doubt that such systems will eventually be constructed. Just like with cognition, there is an obvious association between consciousness and intelligence - your intelligence ‘goes away’ when you lose consciousness, etc. But it seems problematic to suppose that someone who is more intelligent is more conscious (does their experience consist of “more qualia”? What exactly does it have more of, then?), and more likely that they are simply better-able to do certain types of tasks. And it is clear, to me at least, that conscious experience is possible in the absence of intelligent behavior: I might just lie down and stare at the sky, meditating with a clear mind - I’m not “doing” anything at all, making my intelligence irrelevant, but I’m still conscious.

Autonomy is the third property that might get mixed up with consciousness. We see a creature moving around in the environment, navigating obstacles, making choices, and we are inclined to see it as having a sort of inner life - until we learn that, no, it was remote-controlled all along, and then that apparent inner life vanishes. If a system makes its own decisions, if it is autonomous, then it has, for human observers at least, an intrinsic animacy (this tendency is the ultimate basis of many human religious practices), and many would identify this with consciousness. But this is clearly just an observer bias: we humans are autonomous, and we assume that we are all conscious (I am; you are like me in basic ways, so I assume you are too), and so we conflate autonomy with consciousness. But, again, we can conceive of counter-examples - a patient with locked-in syndrome has no autonomy, but they retain their consciousness; and an airplane on autopilot has real (if limited) autonomy, but maybe it’s really just a complex Kalman filter in action, and why should a Kalman filter be conscious (i.e. “autonomy as consciousness” just results in an endless regress of burden-shifting - it doesn’t explain anything)?

To reiterate, consciousness is “something it’s like to be” something - there’s something-it’s-like-to-be me, for example, and likewise for you. We can turn this property around and query objects in nature, and then it gets hard, and we come to our current problem (i.e. Data and the Doctor). Is there something-it’s-like-to-be a rock? Certainly not. A cabbage? Probably not. Your digestive system? Maybe, but probably not. A cat? Almost certainly. Another human? Definitely. An autonomous, intelligent, android with human-style cognition? Hmmm… What if it’s a hologram? Hmmm….

That list I just gave (rock; cabbage; etc) was an intuition pump: most of us will agree that a rock, or a cabbage, has no such thing as phenomenal consciousness; most of us will agree that animals and other humanoids do have such a thing. What makes an animal different from a rock? The answer is obvious: animals have brains. Natural science makes clear that human consciousness (as well as intelligence, etc) relies on the brain. Does this mean that there’s something special about neurons, or synapses, or neurotransmitters? Probably not, or, at least, there’s no reason to suppose that those are the magic factors (The 24th century would agree with this; see Data’s quote at the top of this essay). Instead, neuroscientists believe that consciousness is a consequence of “the way the brain is put together”, i.e. the way its components are interconnected. This interconnection allows for dynamically flexible information processing, which gives the overt properties we have listed, but it also somehow permits the existence of a subjective point of view - the conscious experience. Rocks and cabbages have no such system of dynamical interconnections, so they’re clearly out. Brains seem to be special in this regard: they are big masses of complex dynamical interconnection, and so they are conscious.

What I’m describing here is, roughly, something called the “dynamic core hypothesis”, which leads into my favored theory of consciousness: “integrated information theory”. You can read about these here: http://www.scholarpedia.org/article/Models_of_consciousness The upshot of these theories is that consciousness arises in a system that is densely interconnected with itself. It is important to note here that computer systems do not have this property - a computer ultimately is largely a feed-forward system, with its feedback channels limited to long courses through its architecture, so that any particular component is strictly feed-forward. A brain, by contrast, is “feedback everywhere” - if a neuron gets inputs from some other neurons, then it is almost certainly sending inputs back their way, and this recurrent architecture seems implemented at just about every scale. It’s not until you get to sensorimotor channels (like the optic nerves, or the spinal cord) that you find mostly-feed-forward structures in the brain, which explains why consciousness doesn’t depend on the peripheral nervous system (it’s just ‘inputs and outputs’). Anyways, this kind of densely interconnected structure is hypothesized to be the basis of conscious experience; the fact that the structure also ‘processes information’ means that such systems will also be intelligent, etc, but these capacities are orthogonal to the actual structure of the system’s implementation.
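The feed-forward/recurrent distinction I'm leaning on here has a simple graph-theoretic face: a strictly feed-forward system contains no directed cycles, while a "feedback everywhere" system is full of them. Here's a toy sketch of my own (the node names and networks are made up for illustration, not drawn from any IIT formalism) showing the difference:

```python
# Toy illustration: a strictly feed-forward system has no directed cycles,
# while a brain-like recurrent system has cycles at every turn.
def has_cycle(adj):
    """Detect a directed cycle via depth-first search with node coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in adj}

    def visit(node):
        color[node] = GRAY
        for nxt in adj[node]:
            if color[nxt] == GRAY:  # back edge found: a cycle exists
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in adj)

# A computer-like pipeline: input -> stage1 -> stage2 -> output, no feedback
pipeline = {"in": ["s1"], "s1": ["s2"], "s2": ["out"], "out": []}

# A brain-like network: if A projects to B, B projects back to A
cortex = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

print(has_cycle(pipeline))  # False: strictly feed-forward
print(has_cycle(cortex))    # True: recurrent everywhere
```

The point of the sketch is just that recurrence is a structural property you can check directly, independent of what the system computes - which is exactly the sense in which intelligence is orthogonal to implementation.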

So, Data. Maybe Data isn’t conscious, but just gives a great impression of a conscious being: he’s autonomous, he’s intelligent, he has a sophisticated cognitive apparatus. Maybe there’s nothing “inside” - ultimately, he’s just a bunch of software running on a robotic computer platform. People treat him like he’s conscious (Maddox excepted) just because of his convincing appearance and behavior. But I don’t think it’s an illusion - I think Data is indeed conscious. 

Data’s “positronic brain” is, in a sense, a computer; it’s artificial and made from artificial materials, it’s rated in operations per second, it easily interfaces with other more familiar kinds of computers. But these are really superficial properties, and Data’s brain is different from a computer in the ways that really matter. It is specifically designed to mimic the structure of a human brain; there are numerous references throughout TNG suggesting that Data’s brain consists critically of a massive network of interconnected fibers or filaments, intentionally comparable to the interconnected neurons of a biological brain (Data often refers to these structures as his “neural nets”). This is in contrast to the ‘isolinear chip-bound’ architecture of the Enterprise computer. Chips are complicated internally - presumably each one acts as a special module that is expert in some type of information processing task - but they must have a narrow array of input and output contacts, severely limiting the extent to which a chip can function as a unit in a recurrently connected network (a neuron in a brain is the opposite: internally it is simple, taking on just a few states like “firing” or “not firing”, but it makes tens of thousands of connections on both input and output sides, with other neurons). The computer on 1701-D seems, for all intents and purposes, like a huge motherboard with a ton of stuff plugged into it (we can get to the Intrepid class and its ‘bio-neural chips’ in just a bit).

Data, then, is conscious in virtue of his densely recurrently interconnected brain, which was exactly the intention of Dr Soong in constructing him – Soong didn’t want to create a simulation, he wanted to create a new being. I contrast Data first with the Enterprise computer, which is clearly highly intelligent and capable of some degree of autonomy (as much as the captain will give it, if you believe Picard in 'Remember Me’). I won’t surmise anything about “ship cognition”, however. Now, if the ship’s computer walked around the ship in a humanoid body (a la EDI of the Mass Effect series), we might be more inclined to see a ghost in the machine, but because of the ship’s relatively compartmentalized ‘chip focused’ structure and its lack of a friendly face, I think it’s very easy to suppose that the computer is not conscious. But holographic programs running on that very same computer start to pull at our heartstrings - Moriarty, Minuet, but especially… the Doctor.

The Doctor is my favorite Voyager character (and Data is my favorite of TNG), because his nature is just so curious. Obviously the hologram “itself” is not conscious - it’s just a pattern of projected photons. The Doctor’s mind, such as it is, is in the ship’s medbay computer (or at times, we must assume, his mobile emitter) - he’s something of an instantiation of the famous ‘brain in a vat’ thought experiment, body in one place, mind in another. The Doctor himself admits that he is designed to simulate human behavior. The Voyager crew at first treats him impersonally, like a piece of technology - as though they do not believe he is “really there”, i.e. not conscious - but over time they warm to his character and he becomes something of an equal. I think, however, that the crew was ultimately mistaken as to the Doctor’s nature - he was autonomous, intelligent, and a fine simulation of human cognition and personality, but he was most likely not a conscious being (though he may have claimed that he was).

Over and over, we hear the Doctor refer to himself as a program, and he references movement of his program from one place to another; his program is labile and easily changed. This suggests that his mind, at any given moment, is not physically instantiated in a substrate. What I mean by this is that while a human mind (or Soong-type android mind) is immediately instantiated in a pattern of activity across trillions of synapses between physically-realized interconnected elements, the Doctor’s mind is not. His mind is a program stored in an array of memory buffers, cycling through a system of central processors – at any given moment, the Doctor’s mind is just those few bits that are flowing through a data bus between processor and memory (or input/output channel). The “rest of him”, so to speak, is inert, sitting in memory, waiting to flow through the processor. In other words, he is a simulation. Now, to be sure, in a lot of science fiction brains are treated as computers, as though they are programmable, downloadable or uploadable, but in general this is a very flawed perspective - brains and computers actually have very little in common. The Star Trek universe seems to recognize this, as I can’t think of any instances of outright abuse of this trope in a ST show. One important exception stands out: Ira Graves.

Ira Graves is a great cyberneticist, so let’s assume he knows his stuff (let’s forget about Maddox, who was a theoretically impoverished engineer). He believes that he can save his consciousness by moving it into Data’s brain. But Data’s brain is not a computer in any ordinary sense, as we detailed above: it’s a complex of interconnected elements made to emulate the physical structure of a human brain. (This is why his brain is such an incredible achievement: Data’s brain isn’t a miniaturized computer, it’s something unique and extraordinarily complex. This is why Lal couldn’t just be saved onto a disc for another attempt later on - Data impressed himself with her memories, but her consciousness died with her brain.) Anyways, Ira Graves somehow impresses his own brain structure into Data’s positronic brain - apparently killing himself in the process - and seems happy with the result (though he could be deluded - having lost his consciousness, but failing to recognize it). In the end, he relinquishes Data’s brain back to Data’s own mind (apparently suppressed but not sufficiently to obliterate it), and downloads his knowledge into the Enterprise computer. Data believes, however, that Graves’ consciousness must have been lost in this maneuver, which is further support for the notion that a conscious mind cannot “run on a computer”: a human consciousness can exist in Data’s brain, but not on a network of isolinear chips.

The Doctor, in the end, is in the same situation. As a simulation of a human being, he has no inner life – although he is programmed at his core to behave as though he does. He will claim to be conscious because this makes his humanity, and thus his bedside manner, more effective and convincing. And he may autonomously believe that he is conscious – but, not being conscious, he could never know the difference, and so he cannot know if he’s making an error or not in this belief.

I think that here we can quickly bring up the bio-neural gel packs on Voyager. Aren’t they ‘brainlike’ in their constitution? If the Doctor’s program runs on this substrate, doesn’t that make him conscious? The answer is no – first, recall what Data had to say about neural function and biochemistry. Those aren’t the important factors – it’s the dense interconnectedness that instantiates an immediate conscious experience, and we have no reason to believe that the interconnection patterns of an array of bio-neural gel packs are fundamentally different from a network of isolinear chips. Bio-neural thingies are just supposed to be faster somehow, and implement ‘fuzzy logic’, but no one suggests they can serve as a substrate for conscious programs. And furthermore, the Doctor seems happy to move onto his mobile emitter, whose technology is mysterious, but certainly different from the gel packs. It seems that he is just a piece of software, and that he never really has any physical instantiation anywhere. In defense of his “sentience” (Voyager episode ‘Author, Author’), the Doctor’s crewmates only describe his behavioral capacities: he’s kind, he’s autonomous, he’s creative. No one offers any evidence that he actually possesses anything like phenomenal consciousness. (In the analogous scene in ‘Measure of a Man’, Picard at least waves his hand at the notion that, well, you can’t prove Data isn’t conscious, which I thought was pretty weak, but I guess it worked. I don’t know why they didn’t at least have a cyberneuroscientist or something testify.)

So that is my case: Data is conscious, and the Doctor is not. It’s a bit tragic, I think, to see the Doctor in this way – he’s an empty vessel, reacting to his situation and engendering real empathy in those he interacts with, but he has no pathos of his own. He becomes an ironically pathetic character – we feel for him, but he has no feelings. Data, meanwhile, in his misguided quest to become more human and gain access to emotional states (side note: emotion chip BLECH) is far more human, more real, than the most convincing holographic simulation can ever be.

Wednesday, November 15, 2017

IIT and Blade Runner 2049

Blade Runner 2049 is probably the best movie I've ever seen in a theater - definitely the best sci-fi movie I've seen. The movie is a detective story about a replicant - an artificial human - who uncovers a mystery that has personal implications for himself, and broader implications for the dystopia that he lives in.

Are replicants conscious? It's hard to argue that they wouldn't be, and the movie doesn't seem to suggest they aren't. Instead, the movie focuses on memory and how your memories make you real or not - if your memories are false, you are a kind of false person in the world of 2049, and this is how people in the movie justify their enslavement of the replicants. The main theme of the movie is memory - are my memories real? If they're real, are they really mine, or someone else's? Does it really matter?

That stuff is all interesting, but like I said, consciousness is not the question with the replicants. It is the core question regarding one of the main characters: Joi the holographic girlfriend. We can speculate now on whether or not Joi is conscious. The movie is ambiguous about this, but there seems to be a subtext that she is not conscious, but that the main character (K), and we the audience, are supposed to believe that she is. And while there is this ambiguity, the resolution of the ambiguity is deeply meaningful to the story (just like the original Blade Runner, such a significant ambiguity is left unresolved).

First, to be clear on IIT terms: replicants are conscious because they are basically humans with human brains (and humans are clearly conscious) - what makes replicants different is that they are constructed as adults, with memories implanted (or not) to give them a more natural psychology. In the original Blade Runner, replicants like Roy Batty were assumed by their masters to be essentially psychopathic by nature, and the subsequent implantation of false memories was instituted to make them more psychologically healthy. But for a 'system in a state', the truth or falsity of memory is an extrinsic fact - from the IIT point of view (and probably any other modern theory of consciousness, or of the brain) it doesn't matter for the system itself. So replicants are conscious.

Joi, on the other hand, is not a human with a human brain. She's a holographic projection generated by a computer. The hologram of course is nothing but an image; what matters is in the computer. Computers as we know them cannot be conscious in any meaningful sense: the system is a very small set of very very fast switches, entirely feedforward at the most complex, finest grain, and extremely simple at coarser grains where we might see something like feedback or lateral connections. If computers in 2049 are like computers we know, Joi is not conscious - but computers might be very different. Joi has some kind of dedicated local unit, mounted on the wall in K's apartment - perhaps the computer in that unit is a neuromorphic system that replicates the connectivity structure of a human brain. But the picture of technology in the movie doesn't suggest this level of sophistication - I think that if we want to argue that Joi is conscious (in order to counter-argue) we need to weaken some assumptions.

Maybe Joi is conscious, but her consciousness is absolutely different from a human consciousness. That still requires some kind of neuromorphic computer, though it doesn't have to reflect the structure of the human brain. But there's a problem there that boils down to unreliability: if you want a simulation of a human being, you probably want something that's utterly controllable, like a performance - and where it's uncontrollable, it should still fulfill the simulation's desiderata. But consciousness is exactly uncontrollable - it's a closed locus of causal power (according to IIT) - so if your machine is conscious and you want it to simulate a human being, then its consciousness ought to resemble a human consciousness. Joi seems to do a very lifelike impression of a human being, so we have two choices - either she is conscious and her consciousness is specified by a neuromorphic computer that reproduces human neural connectivity, or she is an unconscious simulation.

As I said above, an artificial human brain (in the sense of an electronic device) seems beyond the technology of 2049; but even if it is in reach, it is hard to reconcile with the way Joi is quickly cut and copied over to her mobile emitter. First, it would mean that not only are there artificial human brains in 2049, but they are tiny enough to fit in the palm of your hand; second, it would mean that this brain can be constructed (or connected) in seconds, since remember that according to IIT it is a causally-interacting physical substrate that specifies consciousness - a computer program stored in memory is not causally-interacting in any important sense. I just don't think either of these is plausible in context.

So that leaves us with Joi the unconscious, but highly convincing, holographic girlfriend. Seeing Joi this way is easy when she first appears in the movie, but rather quickly it becomes clear that she is a dramatic and interesting character. Just like any other character in a movie or a play, it is then very difficult to imagine that she is not conscious. We know that the actor playing her is conscious, which makes it even more difficult. But if we try, we can see her as an entirely mechanical projection, like Siri or a chatbot, something that emulates humanity down to little details like evincing emotions like love and hope and excitement, and insistence on her own choices. But evincing emotion is not the same as feeling emotion - while there was an actor (Ana de Armas) that performed the character, there is nothing there on the screen while we watch the movie, and whether or not the actor was ever even real (or whether the performance is entirely artificial) doesn't matter to the fact that the performance on the screen is just a mechanical, unconscious projection. K the Blade Runner is in a similar situation: Joi is convincing, and maybe K himself cannot recognize that she is not intrinsically real, but she is nonetheless unreal.

This seems to me to contribute important meaning to the story, and it does resonate with some clues that are given out bit-by-bit, i.e. it seems the filmmakers probably also thought that Joi is not really real (not conscious). We see ads in the background of 2049 LA for Joi, touting that she is everything you want, and this is exactly what K seems to get. And K, who seems to despair at losing her (and losing other things), seems to recognize this (or remember it) when one of those ads reaches out to him and calls him by the same name that 'his' Joi had given to him. His copy of Joi wasn't even customized (it had all the 'factory settings'), i.e. not only is she unreal, she isn't even unique. So as a conscious being himself, K really is alone - his only companion is just a performance without an actor.

I think that, then, we're left with a hard question that aligns with the main theme of the story, which is (as I understood it): does it matter if my memories are mine, or if they are real? If none of my memories are real, am I real, do I matter? That theme gets a resolution: it is clear that K is real, he matters in important ways, and that his memories, real or not, nevertheless guide him significantly. The hard question is: does it matter if you are real? I mean that from the first person: I know I am real, but are you? Does it matter if you are or not, at least, does it matter to me? Well... in a sense it is the same problem as the main theme - you are something I perceive, just as you might be something I remember. What matters, we might like to think, is whether or not I am real, whether or not I make my own choices of significance - and as pertains to you, whether or not you (real or not) have a significant role in my reality.

And, I think, K is left in a similar place with both versions of this problem; Joi had an effect on him, it seems clear to me, encouraging him and helping to destabilize him towards his ultimate fate, just as his memories did. But whereas (his) Joi is destroyed and lost to him, his memories - "all the best ones", at least - survive even when he dies, because they also belong to others. This happens to be an inversion of Roy Batty's famous observation that his memories will be lost with him.

Ok, enough!

Friday, September 16, 2016

IIT & Pacific Rim

I'm going to start posting short observations of how IIT would explain or be problematic for certain ideas in sci-fi movies or books.

To start: The film "Pacific Rim", a sci-fi action movie where the main characters are pilots controlling gigantic robots. The pilots control the robot through a direct brain-machine interface, but the job is apparently too much for one pilot so there are always at least two pilots. The two pilots have their minds joined by a "neural bridge" - basically an artificial corpus callosum. While joined, the pilots seem to have direct access to one another's experiences in a merged state called "the Drift" - it seems that their two consciousnesses become one.

This scenario is the predicted consequence, according to IIT, of sufficient causal linkage between two brains - at some point, the connection is sufficiently complex that the local maximum of integrated information is no longer within each pilot's brain, but now extends over both brains. What would be necessary to achieve this? The movie doesn't attempt to explain how the brain-machine interface works, but it must involve a very high-resolution, high-speed parallel system for both responding to and stimulating neurons in each pilot's brain.
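One crude way to picture that shift of the "local maximum" is in graph terms (my own sketch, not the actual phi calculus of IIT): treat a single consciousness as the whole network forming one strongly connected whole, where every unit can causally reach every other. Two isolated recurrent brains are then two separate complexes, and a bidirectional bridge fuses them into one:

```python
# Toy sketch: two recurrent "brains" merge into one integrated system only
# when causal links run between them in both directions (the neural bridge).
def reachable(adj, start):
    """All nodes reachable from `start` along directed edges."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def one_complex(adj):
    """True if every node can causally reach every other node."""
    nodes = set(adj)
    return all(reachable(adj, n) == nodes for n in nodes)

# Two pilot brains, each recurrent internally but causally isolated
brains = {"a1": ["a2"], "a2": ["a1"], "b1": ["b2"], "b2": ["b1"]}
print(one_complex(brains))   # False: two separate complexes

# Install the bridge: bidirectional links between the brains
brains["a1"].append("b1")
brains["b1"].append("a1")
print(one_complex(brains))   # True: a single merged "Drift" system
```

Real IIT would ask for much more than mutual reachability - the quantity of integrated information across every partition - but the qualitative prediction is the same: enough bidirectional causal linkage, and the boundary of the conscious system moves.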

One way of doing this would be cortical implants, where high-resolution electrode arrays are installed on the surface of each pilot's brain; this is at least plausible (if not yet possible) given existing technology. However, none of the pilots show signs of a brain implant, and the main character Mako Mori seems to become a pilot on pretty short notice, although she has apparently been training for a long time - maybe all trainees are implanted? A big commitment.

A more hand-wavy, Star Trek kind of technology would involve some kind of transcranial magnetic field system powerful, precise, and fast enough both to stimulate individual neurons (current TMS systems certainly cannot do this) and to measure their activity on a millisecond timescale (current fMRI systems absolutely cannot do this). However, the pilots simply wear helmets while piloting the robots (although Dr Newton, who almost certainly does not have any brain implants since he is not a trained pilot, does use some kind of transcranial setup to drift with a piece of monster brain), which I think makes a transcranial system very unlikely.

If I had to guess, wireless cortical implants are the only plausible means of establishing the Pacific Rim neural bridge, but some sort of transcranial system hidden in the pilots' helmets and based on some unimaginable technology is not excluded.

Verdict: Pacific Rim's "drift" is IIT Compatible