Last year I took about a third of a course on "blending," or cognitive integration, with Mark Turner at the U of Maryland. I dropped out because I was working too much at AnswerLogic (now Primus), but I got really interested in Turner and Fauconnier's theory of "mental spaces" and their dynamics. Having just read Turner's new book, Cognitive Dimensions of Social Science, I've gotten into it again. In fact, I'm hoping to write a paper about the integration of mental spaces for a conference next summer in Denmark.
Anyway, the idea is that each network of concepts defines a mental space and that combining mental spaces according to certain principles is an essential cognitive task. Look at Turner's Blending page for a better overview than you get here: http://www.wam.umd.edu/~mturn/WWW/blending.html
A space can be determined by a single concept, a frame or script (that is, an internal representation of a scheme of relations among kinds, roles and events--e.g., the 'restaurant' frame includes waiters, dinnerware, food, tables, a check, a tip, etc.), or a theory. Certain rules of projection allow thinkers to integrate distinct spaces into a new space. The output creates a new mental unit that may have features not found in the input spaces.
Among Turner's examples is the blend implicit in the exclamation, "That doctor is a butcher!" Here we've taken our notion of a doctor and our notion of a butcher and integrated them to create a blend: the butcher/doctor. The interesting thing is that the blend drives an inference, namely that the doctor is incompetent. However, incompetence is not a conventional feature of either doctors or butchers. It emerges in the blend from the practical incompatibility of the primary function of butchers (to chop stuff to pieces) with that of doctors (to nicely sew people up, and so forth). This is interesting.
A simpler example of a blend is, say, a talking cat in a children's book. Here we take the human space and the cat space (our mental models of humans and cats) and selectively project certain features of humans and certain features of cats, and we get a talking cat. We do this effortlessly, and kids certainly don't think it's weird.
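The talking-cat example can be caricatured in code. This is purely my own toy sketch, not part of Turner and Fauconnier's apparatus: the feature sets, the `blend` function, and its selection lists are all invented for illustration. The point is just that a blend copies some features from each input space rather than all of them.

```python
# Toy model of conceptual blending as selective projection.
# Everything here (feature names, the blend function) is an
# illustrative invention, not Turner/Fauconnier's formalism.

human = {"talks": True, "walks_upright": True, "pays_taxes": True}
cat = {"talks": False, "has_fur": True, "chases_mice": True}

def blend(space_a, space_b, project_a, project_b):
    """Build a blended space by selectively projecting the named
    features from each input space. Later projections win conflicts."""
    blended = {}
    for feature in project_a:
        blended[feature] = space_a[feature]
    for feature in project_b:
        blended[feature] = space_b[feature]
    return blended

# The talking cat: project speech from the human space,
# fur and mouse-chasing from the cat space.
talking_cat = blend(human, cat,
                    project_a=["talks"],
                    project_b=["has_fur", "chases_mice"])
print(talking_cat)  # {'talks': True, 'has_fur': True, 'chases_mice': True}
```

Note that the blend deliberately leaves out `pays_taxes` and the cat space's own `talks: False`; which features survive is exactly the selectivity the theory is about.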
Now, Turner argues that counterfactual reasoning relies on blending. We reason about a blended space that integrates the space of actuality with some other stuff. If I get him right (I'm being vague here), he argues that when political scientists argue about propositions like 'If the US hadn't entered WW2, the Nazis would have ruled all of Europe,' they aren't merely disagreeing about known facts; often the disagreement flows from the selectivity of projection. Even if two folks have the same basic beliefs about some domain or event, certain features of one guy's model of the domain or event may be more liable to be "activated" and projected into the counterfactual blend. Thus, the counterfactual spaces in the different folks' minds have different features and support different conclusions.
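The same-beliefs-different-blends point can be caricatured the same way. Again, this is my own toy sketch with invented feature names; nothing here is Turner's formalism. Two thinkers share one model of the domain, but different features get activated and projected into each one's counterfactual space.

```python
# Toy illustration: identical underlying beliefs, different
# activation, hence different counterfactual blends. The feature
# names and values are purely illustrative inventions.

shared_model = {
    "us_industrial_capacity": "decisive",
    "soviet_resilience": "high",
    "british_naval_control": "strong",
}

def counterfactual_blend(model, activated):
    """Project only the activated features into the blended space."""
    return {feature: model[feature] for feature in activated}

# Thinker A's blend activates US capacity; remove it counterfactually
# and a Nazi-dominated Europe looks plausible to A.
thinker_a = counterfactual_blend(shared_model, ["us_industrial_capacity"])

# Thinker B's blend activates the other factors, which survive the
# counterfactual, so B draws the opposite conclusion.
thinker_b = counterfactual_blend(shared_model,
                                 ["soviet_resilience",
                                  "british_naval_control"])

print(thinker_a)  # {'us_industrial_capacity': 'decisive'}
print(thinker_b)  # {'soviet_resilience': 'high', 'british_naval_control': 'strong'}
```

Same `shared_model`, two different blended spaces: the disagreement lives in the activation lists, not in the beliefs.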
This is the thing I'm interested in: the mechanisms that govern which features get activated and projected when we selectively project into target spaces.
The reason I'm interested is that I think persuading people of things largely has to do with getting them to create a new model of the domain in which you want to persuade them of something. And this comes down to their projecting certain features of their existing model, along with certain features of a different model that you suggest to them, into a new model (or space or whatever) that is more adequate to reality.
Now, the impediment is often that people insist on projecting certain features into the target model whether you want them to or not. So if you're trying to say something about justice, for instance, your listener might automatically project the notion of material equality into the negotiated model the discourse opens up EVEN IF YOU DON'T WANT THEM TO. And then you're stuck if your point is that justice doesn't have anything important to do with material equality.
I'm interested in what triggers or activates certain features of mental models, and what accounts for their suppression, in projection. Besides having to do with persuasion, I think this may be fertile ground for explaining our intuitions of analyticity and apriority. But that's for another day.