Responding to Carolyn on AI -- I hope this is getting published in the right place. I only have time to respond a bit right now, to the first entry (or the first part of the entry, before the comments on the article).
(1) General question: Why think we can learn anything about how humans think from how we make computers do things that are similar to thinking? There are striking differences between what we do and what computers do. We perceive in order to get initial data. Computers get it from us typing stuff in -- and, if Searle is right, and I think he is, the information they get and process is only related to the world through us (computers exhibit only "derivative intentionality"). We *use* data to direct ourselves in the world as well as "process" it; indeed, these two functions would seem to be intimately interrelated. On the other hand, to the extent that we, in trying to build artificial intelligences of various kinds, are facing the same "engineering problems" that evolution did, we will certainly learn something about ourselves from AI. But what were the engineering problems evolution faced? Are they similar to those we face in creating AI? Of course, regardless of these general considerations, if there is sufficient similarity between the way we operate and the way our AI systems operate, the comparison is worth pursuing. Conjecture and hypothesize away! I take it that these specific kinds of similarities are what motivate your view, and this is fine. But I think general considerations such as the above are worth addressing as well.
(2) Why is the sleep-deprived state of pilots supposed to tell us anything about dream-states? Or is the idea that the pilot's state of mind is similar to our own when we are waking up (in which case the dream-imagery is analogous to the visual stimulation caused by blood vessels in the pilots' eyes)?
(3) Why say that the pilots "make it up after the fact"? Maybe they really were having gremlin-experiences. In other words, this does not seem to be a clear case of the "construction" or "confabulation" taking place after the fact.
(4) Why do we need to sort through the stacks later, when we sleep? We certainly integrate things into concepts and observe their relations as we perceive. Apparently, there is another kind of integration that gets performed during sleep. What is it? (Side note: sorting through a stack is more like sorting through CDs in a jukebox or flipping through pages in a book. The Pringles analogy might suggest that, when we trace our way back through the stack following the chronology, we actually take things off the stack (and eat them ;-)). But I take it this is not the way it is supposed to go. The stack itself remains intact, but we "flip through" it. Correct?)
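To make sure I'm picturing the two readings correctly, here is a toy sketch of the difference, with invented memory-items; the destructive version is the "eat the Pringles" reading, the non-destructive one is the "flip through" reading:

```python
# Hypothetical illustration only; the item names are invented.
stack = ["breakfast", "commute", "meeting", "lunch"]  # oldest ... newest

# Destructive traversal: pop removes each item as we revisit it.
eaten = []
while stack:
    eaten.append(stack.pop())  # newest first; stack ends up empty

# Non-destructive traversal: walk the stack newest-first, leaving it intact.
stack = ["breakfast", "commute", "meeting", "lunch"]
flipped = list(reversed(stack))  # newest first; stack unchanged
```

On the second reading the stack survives the sleep-time "sorting," which seems to be what the view requires.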
(5) I'm lost regarding the hashes and soups and keys. Waiter, there's a key in my soup!
(6) The units of information in our short-term memory stacks, on your view, would be perceptual memory-experiences, correct? But what are the units of information in "the big hash" or "big database"? Concepts? Propositions? Both? How do the elements interrelate? What is going on when elements of the big database are "compared" with the perceptual-memory units from the stacks? The language of similarity and difference suggests that the perceptual memory experiences are being subsumed under concepts (hence my earlier question). Is this what is happening? Are propositions being "stored" in concepts a la Rand's view of concepts as "file folders"? Are additional propositions being formed from perceptual memory data (additional concepts applied and relations recognized), so that they can be brought to bear on other propositions when we wake up?