Monday, October 29, 2007

Does Artificial Intelligence Elegantly Worm its Way into the Study of Cognition? II (Pun Intended)

How can robotics and AI help us to understand neural and cognitive processes? At the simplest scale, some researchers have succeeded in modeling an organism, or part of its functional neural system, in great detail.
Cangelosi & Parisi (1997), for example, created a computational model that simulated the neural circuit for touch sensitivity in Caenorhabditis elegans, a roundworm about 1 mm in length.
The complete nervous system of adult C. elegans consists of only 302 neurons (compared to roughly 10^10 neurons in the adult human brain) (Chalfie et al. 1985). Interestingly, even in C. elegans there is a neuronal left/right functional asymmetry, suggesting the
“possibility that neural asymmetries observed across the animal kingdom are similarly established by very early embryonic interactions” (Poole & Hobert 2006: 2279).

Cangelosi and Parisi tried to “reproduce the nematode’s withdrawal response to touch in the head or tail regions” (Cangelosi & Parisi 1997), whose underlying neural circuit consists of 85 neurons (Chalfie et al. 1985). Not only did they succeed in creating a working computational model, but when they ‘knocked out’ single neurons of the model, the whole system behaved in a way similar to a nematode in which equivalent cells had been killed by laser beams (I hope Zombie-Aliens never do similar experiments when investigating human culture, like
Zombie-Scientist A: “What’s this 'Train Station'-thing good for, anyway?”
Zombie-Scientist B: “I dunno, let’s do some ‘laser ablation’ and find out what happens”)
The crucial point is, of course, the word ‘similar’, and Cangelosi and Parisi conclude that:
“The presence of some non-matching data between the real neural circuit and artificial neural networks indicate that the model needs adjustment.”
They throw in another caveat, namely that
“the fact a computational model replicates the behavior of a real organism is only a first proof of its validity. There must be agreement between the computational model and the real organism both in what the model/organism does and in how the model/organism does it. That is, a good computational model must reflect the same mechanisms and processes present in the real organism” (Cangelosi & Parisi 1997: 95)
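To make the ‘knock-out’ comparison a bit more concrete, here is a minimal sketch – my own toy example, not Cangelosi and Parisi’s actual network – of how lesioning single units in a small artificial neural network can be simulated and its effect on the model’s output measured. The architecture, the random weights, and the head-touch input pattern are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward network standing in for a sensory-to-motor circuit:
# 6 "sensory" inputs -> 10 "interneurons" -> 2 "motor" outputs
# (say, forward vs. backward drive). The weights are random placeholders,
# NOT the connectivity of the real touch circuit.
W_in = rng.normal(size=(10, 6))
W_out = rng.normal(size=(2, 10))

def respond(touch_input, lesioned=None):
    """Return the motor output; 'lesioned' silences one interneuron (a 'knock-out')."""
    hidden = np.tanh(W_in @ touch_input)
    if lesioned is not None:
        hidden[lesioned] = 0.0          # the in-silico analogue of laser ablation
    return W_out @ hidden

touch_head = np.array([1., 1., 0., 0., 0., 0.])   # hypothetical head-touch pattern

intact = respond(touch_head)
for neuron in range(10):
    lesioned = respond(touch_head, lesioned=neuron)
    print(f"knock out interneuron {neuron}: output change = {np.abs(intact - lesioned).sum():.3f}")
```

Comparing such output changes against the behavioral deficits of laser-ablated worms is, in essence, the model-versus-organism check that Cangelosi and Parisi describe.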
By now, C. elegans is one of the best-studied multicellular organisms. Due to its relatively simple structure it seems a very fitting candidate for computational studies, and several computer models of it have been proposed so far (Suzuki et al. 2005) (at Keio University in Japan, for example, there is even a Cybernetic Caenorhabditis elegans Project). At present, these computer models can be divided into three groups:
1. Those simulating mechanisms of stimulus-reception and -processing
2. Those generating motor-coordination patterns and movement
3. Those integrating these approaches to build a simplified model of “the flow series from reception of the stimulation to motion generation” in the worm (Suzuki et al. 2005)
An example of an advanced stage of the third group is the work of Michiyo Suzuki and his colleagues, who succeeded in building a virtual C. elegans consisting of “a neuronal circuit model for motor control that responds to touch stimuli and a kinematic model of the body for movement” integrated into a whole-body model (Suzuki et al. 2005).
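To give a rough idea of what such an integrated model looks like in outline, here is a deliberately crude sketch of the general architecture Suzuki et al. describe – a touch-driven motor circuit coupled to a kinematic body model – and not their actual equations; the gains, time step, and touch pattern below are invented placeholders.

```python
def motor_circuit(head_touch, tail_touch):
    """Toy 'neuronal circuit for motor control': turn touch stimuli into a
    forward (+) or backward (-) drive. The gains are arbitrary illustrative values."""
    return 1.0 * tail_touch - 1.0 * head_touch

def kinematic_step(position, drive, dt=0.1, speed=2.0):
    """Toy 'kinematic model of the body': integrate the drive into a new position."""
    return position + speed * drive * dt

# Integrated whole-body loop: stimulus reception -> neural drive -> movement.
position = 0.0
trajectory = []
for t in range(20):
    head_touch = 1.0 if 5 <= t <= 8 else 0.0   # hypothetical head touch at t = 5..8
    tail_touch = 0.0
    drive = motor_circuit(head_touch, tail_touch)
    position = kinematic_step(position, drive)
    trajectory.append(round(position, 2))

print(trajectory)   # the 'worm' backs away while its head is touched, then stays put
```

In the real model, of course, the circuit is a network of identified neurons and the kinematics describe the worm’s undulating body rather than a single point position.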
Although even studies at simple scales like that of C. elegans are still developing, it seems possible to achieve highly precise approximations between the behaviors of artificial and real organisms (Suzuki et al. 2005). Thus, in the future it may even become possible to simulate more complex neural or even cognitive mechanisms, which, for example, is the ultimate aim of the “Blue Brain Project”, “the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations” (Markram 2006).
In 2006, the project succeeded in building a model of the somatosensory neocortex of a 2-week-old rat at the cellular level (i.e. disregarding genetic and molecular levels), with about 10,000 neurons forming a neocortical column, that is, a recurring network unit of the brain. However, “Computational power needs to increase about 1-million-fold before we will be able to simulate the human brain, with 100 billion neurons” (Markram 2006).


References:

Cangelosi, Angelo and Domenico Parisi. 1997. “A Neural Network Model of Caenorhabditis elegans: The Circuit of Touch Sensitivity.” Neural Processing Letters 6: 91–98.

Chalfie, M., J.E. Sulston, J.C. White, E. Southgate, J.N. Thomson and S. Brenner. 1985. “The neural circuit for touch sensitivity in Caenorhabditis elegans.” Journal of Neuroscience 5: 959–964.

Poole, Richard J. and Oliver Hobert. 2006. “Early Embryonic Programming of Neuronal Left/Right Asymmetry in C. elegans.” Current Biology 16: 2279–2292.

Suzuki, Michiyo, Toshio Tsuji and Hisao Ohtake. 2005. “A model of motor control of the nematode C. elegans with neuronal circuits.” Artificial Intelligence in Medicine 35: 75–86.

Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7: 153-160.

Friday, October 26, 2007

Does Artificial Intelligence Worm its Way into the Study of Cognition? (Pun Intended)

Citing evidence from AI/AL/Robotics to gain insight into cognitive mechanisms, Poirier et al. (2005) clearly share the sentiment that
“a measure of understanding will be gained by studying simple and superficial models of complete agents.” (p. 762)
Although they state that to fully understand the principles governing cognition, complete models of situated embodied agents engaged in brain-body-world-interaction (or their AI-counterparts) will prove essential, they hold that
“simple models like these can help us understand some general principles governing categorization” (p. 762).
There is, of course, a bigger question looming behind this assertion:
"Can robots make good models of biological behaviour?“ (Webb 2001a)
Barbara Webb (whom you can hear talk about her work on robotic crickets here) proposes that models can indeed tell us a lot about biological systems, provided the dimensions of the simulation are made explicit. She proposes the following dimensions (see the sketch below the list):
“1. Relevance: whether the model tests and generates hypotheses applicable to biology.
2. Level: the elemental units of the model in the hierarchy from atoms to societies.
3. Generality: the range of biological systems the model can represent.
4. Abstraction: the complexity, relative to the target, or amount of detail included in the model.
5. Structural accuracy: how well the model represents the actual mechanisms underlying the behaviour.
6. Performance match: to what extent the model behaviour matches the target behaviour.
7. Medium: the physical basis by which the model is implemented.” (p. 1033)
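One convenient way of keeping these seven dimensions apart is to treat them as fields of a simple record that can be filled in for any given model. The sketch below, including the example values I chose for a robot-cricket phonotaxis model of the kind Webb works on, is my own illustrative (and debatable) reading, not Webb’s own classification.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """A model of biological behaviour characterized along Webb's (2001a) seven dimensions."""
    relevance: str            # does it test/generate hypotheses applicable to biology?
    level: str                # elemental units, from atoms to societies
    generality: str           # range of biological systems it can represent
    abstraction: str          # amount of detail relative to the target
    structural_accuracy: str  # fidelity to the actual underlying mechanisms
    performance_match: str    # how closely its behaviour matches the target's
    medium: str               # physical basis of the implementation

# Illustrative, hypothetical profile for a robot-cricket phonotaxis model;
# the values are my own guesses, not Webb's assessment of her own work.
robot_cricket = ModelProfile(
    relevance="generates testable hypotheses about how female crickets locate singing males",
    level="identified neurons plus peripheral auditory mechanics",
    generality="narrow: one behaviour in one (or a few) species",
    abstraction="high: most neural and bodily detail left out",
    structural_accuracy="partial: ear mechanism modelled closely, central circuitry much less so",
    performance_match="qualitative match to phonotaxis behaviour",
    medium="physical robot operating on real sound in the real world",
)
print(robot_cricket)
```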
Another problem is the confusion over the term model, which is defined in so many different ways that it is sometimes hard to find out whether two people mean the same thing when talking about models.
According to Webb (2001a)
“modelling aims to make the process of producing predictions from hypotheses more effective by enlisting the aid of an analogical mechanism” by symbolically simulating the properties assumed to perform certain functions. (p. 1035)
Biological systems and biorobotics share the property that they are
“physically instantiated and have unmediated contact with the external environment” (p. 1037).
Poirier et al. (2005) show that we can already draw much insight from this shared property, given that even simple models show how
"categorization capacities that are quite sophisticated can emerge from very simple embodied and situated systems” (p. 762).
These observations clearly speak for the importance of embodied properties when studying cognitive mechanisms. At the same time, they give a practical example of how robotics can support specific hypotheses regarding cognition. They also show that robots that are models of animal behavior can be seen “as a simulation technology to test hypotheses in biology” (Webb 2001a: 1049).
Many of the peer commentaries on Webb’s Behavioral and Brain Sciences article are not that optimistic. One general criticism is that of underdetermination,
“that is, having a robot behave like an animal is no guarantee that the animal works the same way” (Webb 2001b: 1083).
But, as Poirier et al. (2005) argue, artificial systems give us major clues about what kinds and quantities of structure are able to perform certain functions.
Another criticism aimed at biorobotics is that, although its systems are inspired by biology, they have as yet done little to inform it. But as Webb’s (2001a) impressive sample of biorobotics research – 78 articles from 1992–2001, ranging from bat sonar and frog snapping to simulations of insect wings, paper wasp nest construction, and ant/bee landmark homing – as well as Poirier et al.’s (2005) review show, this complaint is clearly mistaken.
Another interesting test case for the ability of artificial systems to simulate biological behavior is the use of neural networks to simulate properties of Caenorhabditis elegans, a roundworm about 1 mm in length. I will discuss some of these attempts in my next post.


References:

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Webb, Barbara. 2001a. “Can robots make good models of biological behaviour?” Behavioral and Brain Sciences 24.6: 1033–1050.

Webb, Barbara. 2001b. “Robots can be (good) models.” Behavioral and Brain Sciences 24.6: 1081-1087.

Monday, October 22, 2007

Shared Symbolic Storages

The last kind of category discussed in Pierre Poirier, Benoit Hardy-Vallée, and Jean-Frédéric Depasquale’s (2005) article about “Embodied Categorization” is ‘linguistic categorizers.’

Concepts
Poirier et al. call linguistic categories ‘concepts’, that is, first and foremost public objects whose usage is controlled by the linguistic community. Seen this way, the generally established system of concepts is the shared symbolic storage of a community. Jerry Fodor (1998) has a similar notion of concepts, stating that one requirement (no. 5 of 5, to be precise) for concepts is that they be
“public; they’re the sorts of things that lots of people can, and do, share” (p. 28).
Fodor’s other requirements for concepts are that they:
  1. are states of the mind/brain that function as mental causes or effects.
  2. are categories, that is, they function as mental operations “by which the brain classifies objects and events” (Cohen & Lefebvre 2005: 2).
  3. are compositional, that is, on the one hand they consist of constituents (other, hierarchically intertwined, ‘lower’ concepts), and on the other hand they are themselves the constituents of what Fodor calls ‘thoughts’ (i.e. his “cover term for the mental representations which […] express the propositions that are the objects of propositional attitudes” (p. 25) – as if that would make anything clearer, since ‘proposition’ and ‘propositional attitude’ are just as controversial terms).
  4. are – many of them, at least – learned. (Jesse Prinz (2005) even argues that all concepts are learned, a hypothesis Fodor definitely wouldn’t like. And of course, Prinz’s definition of concepts is different, too.)
Hurford (2007) further differentiates between ‘proto-concepts’, ‘pre-linguistic concepts’ and ‘linguistic concepts’ in order to account for neuropsychological (Barsalou 2005) and ethological (Cheney & Seyfarth 2007) evidence of mental and conceptual representations in other animals (= ‘proto-concepts’, and in higher mammals – and definitely in our ancestors – probably even ‘pre-linguistic concepts’). Thus, the shared/public requirement of Fodor (1998) only holds for linguistic concepts.

Concepts and Categories
Regarding the difference between concepts and categories: concepts can be said to represent categories, e.g. when we encounter a member of the category DOG, the DOG concept is activated (Prinz 2005), or, in Barsalou’s (2005) terms, a category corresponds to a component of experience, whereas the conceptual system consists of the collected representations of these categories.
Thus, on seeing a dog, the human conceptual system construes this perception as a category instance of DOG by binding the specific perceived token (i.e. the individual dog) “to knowledge for general types of things in memory (i.e., concepts)” (p. 581).
Poirier et al. argue that linguistic categorizers are “farthest removed from their basic sensorimotor counterparts” (p. 761), although, as argued by Lakoff and Johnson (1980, 1999), they still seem to be heavily influenced by embodied experience.

Linguistic Categories and Language Acquisition

Poirier et al. make an interesting analogy between the embodiment perspective they adopt throughout their paper and language acquisition. A child can be seen as simulating the arbitrary word-category/signifiant-signifié contingencies she is presented with through her linguistic environment, forming “internal models of her community’s lexicalized categories” (p. 761), which enable her to communicate with others. Evidence for the importance of linguistic/lexicalized categorization comes from the fact that words help us to acquire new categories, “that category labels play a role in the formation and shaping of concepts” (Lupyan 2006), and that they
“play an especially important role in shaping representations of entities whose perceptual features alone are insufficient for reliable classification” (Lupyan 2005).
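As a toy illustration of what ‘forming internal models of her community’s lexicalized categories’ might amount to computationally, here is a minimal sketch – my own, not Poirier et al.’s or Lupyan’s model – in which a learner keeps a running prototype for every word it hears paired with perceptual features, and then uses these prototypes to classify new items; the words and feature vectors are invented.

```python
import numpy as np

# Each observation pairs a word (the category label supplied by the linguistic
# community) with a perceptual feature vector, here invented as [size, furriness, barks].
observations = [
    ("dog", np.array([0.5, 0.9, 1.0])),
    ("dog", np.array([0.6, 0.8, 0.9])),
    ("cat", np.array([0.3, 0.9, 0.0])),
    ("cat", np.array([0.4, 0.8, 0.1])),
]

# The learner's "internal model": a running prototype (mean feature vector) per word.
prototypes, counts = {}, {}
for word, features in observations:
    counts[word] = counts.get(word, 0) + 1
    old = prototypes.get(word, np.zeros_like(features))
    prototypes[word] = old + (features - old) / counts[word]   # incremental mean

def categorize(features):
    """Assign a new item to the lexicalized category with the nearest prototype."""
    return min(prototypes, key=lambda w: np.linalg.norm(prototypes[w] - features))

print(categorize(np.array([0.55, 0.85, 0.95])))   # expected: "dog"
```

The point of the toy is simply that the label does real work: it tells the learner which experiences to pool into one category, which is one (very stripped-down) way of reading Lupyan’s claim that labels shape category representations.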

To conclude, it seems that all forms of categorization may in some way be present in human cognition, and that many of the feats that make us ‘uniquely human’ are augmented by sophisticated forms of categorization which can best be described from the perspective of embodied evolutionary-developmental computational cognitive neuroscience.

Next week I will try to discuss the implications of computational/AI/robotics research, as presented by Poirier et al., for the study of human cognition and behavior.


References:
Barsalou, Lawrence W. 2005. “Continuity of the conceptual system across species.” Trends in Cognitive Sciences 9.7: 309-311.

Cheney, Dorothy L. and Robert M. Seyfarth. 2007. Baboon Metaphysics: The Evolution of a Social Mind. Chicago: University of Chicago Press.

Cohen, Henri and Claire Lefebvre. 2005 “Bridging the Category Divide.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier. 1-15.

Fodor, Jerry A. 1998. Concepts. Where Cognitive Science Went Wrong. Oxford Cognitive Science Series. Oxford: Clarendon.

Hurford, James R. 2007. The Origins of Meaning: Language in the Light of Evolution 1. Oxford: OUP.

Lakoff, George, and Mark Johnson 1980. Metaphors we live by. Chicago: University of Chicago Press.

Lakoff, George and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought. New York: Basic Books.

Lupyan, Gary. 2005. “Carving Nature at its Joints and Carving Joints into Nature: How Labels Augment Category Representations.” Modelling Language, Cognition and Action: Proceedings of the 9th Neural Computation and Psychology Workshop. Eds. A. Cangelosi, G. Bugmann & R. Borisyuk. Singapore: World Scientific. 87-96.

Lupyan, Gary. 2006. “Labels Facilitate Learning of Novel Categories.” The Evolution of Language: Proceedings of the 6th International Conference. Eds. A. Cangelosi, A.D.M. Smith & K.R. Smith. Singapore: World Scientific. 190-197.

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Prinz, Jesse. 2005. “The Return of Concept Empiricism.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Friday, October 19, 2007

Life is a Journey

Analogy

In my last posts I wrote about how many of our categories stem from sensorimotor experience, embodiment, simulation, and emulation. But what about abstract concepts? Lakoff and Johnson (1980, 1999) argue that our conceptual system, which enables us to categorize and draw analogies, is also built on the basis of embodied experience. As Poirier et al. put it:
"An analogical inference is a “cut and paste” process: from a cognitive domain (the source), copy the structure of an object in the domain and paste it into another (the target), while replacing every variable from the source domain by a variable from the target domain" (Poirier et al. 2005: 759f.).
Lakoff and Johnson see embodied experience as the source domain for such processes, many of which can be found in everyday language. One example is the conceptual network of CONTAINER metaphors. As physical organisms separated from the outside world by our skin, we project our own experience of being a container with a demarcating surface and an inside-outside orientation onto other physical objects that are partitioned by surfaces (Lakoff & Johnson 1980). This results in many container metaphors in everyday language, such as “I’ve had a full life,” “Life is empty for him,” “Her life is crammed with activities,” “Get the most out of life,” etc. Other examples are metaphors of movement or spatial dimension, like getting ideas across, words reaching someone, etc. This cross-domain mapping ability, or ‘conceptual integration’, may be an essential evolutionary step in what makes us human (Turner 2006, Mithen 1996).
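To make the ‘cut and paste’ description a little more tangible, here is a minimal sketch of such a structure-copying inference, using the LIFE IS A JOURNEY mapping as a toy example; the relational triples and the variable substitution are invented for illustration and are not an implementation of Poirier et al.’s or Lakoff and Johnson’s account.

```python
# Source domain: JOURNEY, encoded as relations over variables.
source_structure = [
    ("traveller", "moves_along", "path"),
    ("traveller", "encounters", "obstacles"),
    ("traveller", "reaches", "destination"),
]

# Mapping from source-domain variables to target-domain (LIFE) variables.
substitution = {
    "traveller": "person",
    "path": "course_of_life",
    "obstacles": "difficulties",
    "destination": "life_goals",
}

def analogical_inference(structure, mapping):
    """'Cut and paste': copy the source structure, replacing every source variable
    with its counterpart from the target domain."""
    return [tuple(mapping.get(term, term) for term in triple) for triple in structure]

for triple in analogical_inference(source_structure, substitution):
    print(triple)
# ('person', 'moves_along', 'course_of_life'), ('person', 'encounters', 'difficulties'), ...
```

What the embodiment claim adds to this bare mechanism is that the source structures themselves – containers, paths, forces – are grounded in bodily experience rather than being arbitrary symbol sets.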

Dual Systems Theory

These observations can also be integrated into dual-system accounts of reasoning, which propose that human cognition basically consists of the interaction of:
  1. an evolutionarily older system (System 1), shared by humans and other animals, consisting of parallel and automatic modules mediated by domain-general learning mechanisms, and
  2. a uniquely human, central System 2, which enables hypothetical thinking and abstract reasoning and is therefore able to access and blend multiple domains (Evans 2003).
I’m not really sure what to think of the general-purpose claims that come with this idea, but cross-domain access is surely crucial for modern cognition. On the neuropsychological level, higher frontal control over other cognitive systems could offer some insight into how to think of a System 2, or more generally into the mechanisms enabling mental time-travel and displaced reasoning (Barsalou 2005, Deacon 1997), whether we call it general-purpose or not.

When making ‘analogizing categorization’ a key feature of human evolution, we have to keep in mind that analogy of a simple kind, the abstraction and mapping of common global structures, can be found in other animals as well: even fish have distinct areas for interpreting perceptual/sensory input and for motor coordination. Mammals, however, seem to have a much higher level of brain organization, i.e. a cerebral cortex with distinct ‘projection areas’ for the various sensory and motor systems, enabling a cat, for example, to play with a ball of yarn as a mouse analog (Sowa 2005). If so, System 1 and System 2 should probably not be seen as a discontinuous dichotomy but rather as different stages on a continuum, or as extended specializations of previous abilities (Sowa 2005, Turner 2006).

Still, positing a System 2 seems justified considering the importance of cross-modal, “large-scale neural integration” (Donald 2006) for human cognition. Even if we accept this dual account, it remains important to work out the underlying cognitive architecture; but as a heuristic tool it seems as fruitful for cognitive research as Dan Dennett’s (1987) tripartite account of ‘physical stance’, ‘design stance’, and ‘intentional stance’, or Hauser et al.’s (2002) division of the faculty of language into a broad sense (FLB) and a narrow sense (FLN). Our understanding of the levels of cognition could also be enriched by complementary approaches, for example from Artificial Life and Artificial Intelligence (Sowa 2005), or from cognitive ethology.
In my next post on Poirier et al.’s paper I will describe their account of “linguistic categorizers.”

References:

Barsalou, Lawrence W. 2005. “Continuity of the conceptual system across species.” Trends in Cognitive Sciences 9.7: 309-311.

Deacon, Terrence William 1997. The Symbolic Species. The Co-evolution of Language and the Brain. New York / London: W.W. Norton.

Donald, Merlin. 2006. “Art and Cognitive Evolution.” The Artful Mind: Cognitive Science and the Riddle of Human Creativity. Ed. Mark Turner. Oxford: OUP

Evans, Jonathan St. B.T. 2003. “In two minds: dual-process accounts of reasoning” Trends in Cognitive Sciences 7.10: 454-459.

Hauser, Marc D., Noam Chomsky and W. Tecumseh Fitch 2002. “The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?” Science 298, 1569-1579.

Lakoff, George, and Mark Johnson 1980. Metaphors we live by. Chicago: University of Chicago Press.

Lakoff, George and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought. New York: Basic Books.

Mithen, Steven J. 1996. The Prehistory of the Mind. London: Thames & Hudson.

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Sowa, John F. 2005. “Categorization in Cognitive Computer Science.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Turner, Mark. 2006. “The Art of Compression.” The Artful Mind: Cognitive Science and the Riddle of Human Creativity. Ed. Mark Turner. Oxford: OUP.

Monday, October 15, 2007

Simulation and Stances II: The Intentional Stance

How can we assess intentions? How does ‘folk psychology’, Theory of Mind, or ‘the intentional stance’ work?
Basically, there are two competing accounts of mind-reading: the Theory Theory (TT) and the Simulation Theory (ST).
The simulation theory proposes that, instead of developing a full-fledged theory about how to explain our own as well as other people’s behavior and experience, we mentally try to simulate and imagine the internal states of others (Gopnik 1999).
An embodied perspective on this phenomenon suggests that at least some features of mind-reading are accounted for by ST (Poirier et al. 2005: 758f.). According to neuropsychological evidence, for example, face-based emotion recognition (FaBER) is better supported by simulationist accounts than by TT accounts of mind-reading (Goldman & Sripada 2005). As Poirier et al. (2005: 759) argue, it may be that in some situations simulation is a more direct means of gaining insight into someone else’s mental states, especially emotional ones.

Mirror Neurons

Another case for ST comes from the existence of ‘mirror neurons’, which discharge during the observation of goal-directed movement and thus may be critical for understanding others’ intentional states (Rizzolatti & Craighero 2004). It seems possible that we simulate the behavior of others via our ‘mirror system’ and ascribe to them the resulting intentional states (Poirier et al. 2005: 759, Gallese et al. 2004). To interpret and integrate these intentional states, though, mirror neurons alone seem to be insufficient and need to be complemented by other social-cognitive mechanisms (Wheatley et al. 2007, Uddin et al. 2007, Gallagher 2007). On the other hand, mirror neurons may play a greater role in the coding of intentions than is sometimes acknowledged, albeit depending on what we call an ‘intention’ (Iacoboni et al. 2005).

The Intentional Stance

Poirier et al. conclude that:
“the intentional stance is clearly a predictive strategy, which could (but does not always) make use of categories to which we have access not by deriving them from a theory, but by simulating the internal doxastic and volitional states of others on the basis of their behavior, context, and facial expression. Language can give access to higher order intentionality: an agent represents its own mental states as they mentally represent another agent’s mental states, and so on” (p. 759)

In my next post on “Embodied Categorization”, I will write about Poirier et al.’s account of analogical categorizers.

References:

Gallagher, Shaun. 2007 “Simulation Trouble”. Social Neuroscience 2.3/4: 353-365.

Gallese, Vittorio, Christian Keysers and Giacomo Rizzolatti. 2004. “A Unifying View of the Basis of Social Cognition.” Trends in Cognitive Sciences 8: 396–403.

Goldman, Alvin I. and Chandra Sekhar Sripada. 2005. “Simulationist models of face-based emotion recognition.” Cognition 94: 193-213.

Gopnik, Alison. 1999. “Theory of Mind.” The MIT Encyclopedia of the Cognitive Sciences. Eds. Robert A. Wilson and Frank C. Keil. Cambridge, MA: MIT Press. 838-841.

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Iacoboni, M., I. Molnar-Szakacs, V. Gallese, G. Buccino, J.C. Mazziotta et al. 2005. “Grasping the intentions of others with one’s own mirror neuron system.” PLoS Biology 3.3: e79.

Rizzolatti, Giacomo and Laila Craighero. 2004. “The Mirror-Neuron System.” Annual Review of Neuroscience 27: 169–192.

Uddin, Lucina Q., Marco Iacoboni, Claudia Lange and Julian Paul Keenan. 2007. “The Self and Social Cognition: The Role of Cortical Midline Structures and Mirror Neurons.” Trends in Cognitive Sciences 11.4: 153-157.

Wheatley, Thalia, Shawn C. Milleville and Alex Martin. 2007. “Understanding Animate Agents: Distinct Roles for the Social Network and Mirror System.” Psychological Science 18.6 : 469-474.

Thursday, October 11, 2007

Simulation and Stances I: The Physical Stance and The Design Stance

What can the theory of embodied categorization tell us about how the intentional stance works? Dan Dennett (1987) speculates that the combinatorial, generative properties of language/the language of thought play a crucial role in our attempts to predict the behaviors of physical, designed, and intentional systems.
Combining data from various areas of research, Poirier et al. (2005), on the other hand, try to account for some aspects of these stances as internal simulations of possible external states.

The Physical Stance

Systems that are able to categorize physical systems, that is, those able to adopt the ‘physical stance’ or use ‘folk physics’, seem to do so by simulating geometrical relationships (Poirier et al. 2005). MetaToto, for example, is a robot able to build a map of its environment from sensory input, and its behavior is guided by simulations of movement on this internal map. Thus, the robot is able to categorize physical features, e.g. a wall, not by hitting it but by simulating it (Poirier et al. 2005: 757f.; Stein 1994).
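A minimal sketch of the general idea – my own toy example, not Stein’s actual MetaToto architecture – might look like this: the robot keeps an internal occupancy grid and ‘imagines’ a step on that grid before (or instead of) taking it in the world, categorizing a location as ‘wall’ or ‘free’ without any physical contact. The grid, coordinates, and labels are invented for illustration.

```python
# Internal map built from earlier sensory input: 0 = believed free space, 1 = believed wall.
internal_map = [
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]

def simulate_move(position, direction):
    """Imagine taking a step on the internal map instead of in the world, and
    categorize the outcome ('free' vs. 'wall') without physical contact."""
    dx, dy = {"north": (-1, 0), "south": (1, 0), "east": (0, 1), "west": (0, -1)}[direction]
    x, y = position[0] + dx, position[1] + dy
    if not (0 <= x < len(internal_map) and 0 <= y < len(internal_map[0])):
        return "unknown territory"
    return "wall" if internal_map[x][y] else "free"

robot_position = (1, 0)
for heading in ["north", "east", "south"]:
    print(heading, "->", simulate_move(robot_position, heading))
# 'east -> wall': the robot categorizes the wall by simulating the move, not by hitting it.
```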
People also seem to make physical inferences either by acting on objects (similar to the “we off-load cognitive work into the environment” view I described briefly here), or by simulating actions and visuomotor experience internally (as opposed to simply engaging in mental imagery, from which crucial aspects of action-oriented simulation and dynamics like gravity and other ‘abstract’ physical forces seem to be missing) (Schwartz and Black 1999).

The Design Stance

Systems able to categorize functional categories, i.e. those able to adopt the ‘design stance’, ‘folk biology’, or mechanics, simulate features of animals or artifacts (Poirier et al. 2005: 758). According to Hegarty (2004), design inferences work via the ad hoc simulation of ‘abstract’ functional features in a spatial dimension, which can, but need not, be complemented by visual simulation.
Of course, as complexity rises, Dan Dennett might be right in assuming a role for language here.
An interesting question concerns how behavior-reading works in other primates. Do they adopt the ‘design stance’, that is, do they simulate functional features in order to predict behavior, e.g. associating certain behavioral/gestural/facial/phonetic patterns with ‘do not come near me’ and others with ‘safe to approach’, etc. – or, “Does the chimpanzee have a theory of mind?” (Premack & Woodruff 1978).
Poirier et al. support the idea that other primates do not have a Theory of Mind, that they are not able to model the “’action level’, a rather detailed and linear specification of sequential acts”, but only the “’program level’, a broader description of subroutine structure and the hierarchical layout of a behavioural ‘program’” (Byrne and Russon 1998).
Whereas the action level invokes mental, unobservable, ‘intentional’ concepts, behavior-reading only invokes functional categories such as movement. The reason for this inability to adopt the ‘intentional stance’ may be that primates generally lack the concept of unobservable causes and thus are not able to
“posit hidden mental representations, assessable from the intentional stance.” (Poirier et al. 2005: 758, Povinelli 2000).
The evolution of such a concept may have enabled humans to have a ‘real’ Theory of Mind, and subsequently may have influenced our engagements of the physical stance and the design stance (Herrmann et al. 2007).

Next week I will write about how the intentional stance might work, given what we know about embodiment and simulation.

References:

Byrne, Richard W and Anne E. Russon. 1998. “Learning by Imitation: a Hierarchical Approach.” Behavioral and Brain Sciences 21: 667-684

Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: Bradford Books.

Herrmann, Esther, Josep Call, María Victoria Hernández-Lloreda, Brian Hare, and Michael Tomasello. 2007. “Humans Have Evolved Specialized Skills of Social Cognition: The Cultural Intelligence Hypothesis.” Science 317: 1360-1365.

Hegarty, Mary. 2004. “Mechanical Reasoning by Mental Simulation.” Trends in Cognitive Sciences 8: 280-285.

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Premack, David and Guy Woodruff. 1978. “Does the chimpanzee have a theory of mind?” Behavioral and Brain Sciences 1: 515-526.

Povinelli, Daniel J. 2000. Folk Physics for Apes: The Chimpanzee's Theory of How the World Works. Oxford: OUP.

Schwartz, Daniel L. and Tamara Black. 1999. “Inferences through imagined actions: Knowing by simulated doing.” Journal of Experimental Psychology. Learning, Memory, and Cognition. 25.1: 116-136.

Stein, Lynn Andrea. 1994. “Imagination and situated cognition.” Journal of Experimental and Theoretical Artificial Intelligence 5: 393-407.

Monday, October 8, 2007

The Intentional Stance

According to Dan Dennett (1987) there are different strategies for predicting the future behavior of systems. A successful strategy to predict the behavior of a physical system is the ‘physical stance’, which works like this:
“determine its physical constitution (perhaps all the way down to the microphysical level) and the physical nature of the impingements upon it, and use your knowledge of the laws of physics to predict the outcome to any input.” (Dennett 1987: 16).
For example, to predict that a stone I hold in my hand will fall down if I lose my grip on it, I use the physical stance (Dennett 1999).

Another strategy is the ‘design stance’, from which you assume that knowledge of a system’s design enables you to predict that it will “behave as it is designed to behave” (Dennett 1987: 17). Examples are alarm clocks, computers, or thermostats, where you can gain insight into their function by analyzing the mechanics behind them or by observing the way they work. The ‘design stance’ is riskier than the physical stance because, first, you only suppose that the artifact you encounter works the way you think it does, and second, the artifact can be misdesigned or fall victim to a malfunction, whereas the laws of physics simply never do that (Dennett 1999).

The riskiest stance is the ‘intentional stance’, and
“Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do” (Dennett 1987: 17).
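Read procedurally, Dennett’s recipe can be caricatured as a tiny prediction function: attribute the beliefs and desires the agent ought to have, then apply a bit of practical reasoning. The sketch below is just such a caricature – the attribution rules and the fridge scenario are absurdly simplified placeholders of my own, meant only to show the shape of the strategy, not a serious model of it.

```python
def intentional_stance(situation):
    """Predict an agent's action by (1) treating it as a rational agent, (2) attributing
    the beliefs and desires it *ought* to have, given its place in the world and its
    purposes, and (3) predicting that it will act to further its goals in the light of
    those beliefs (after Dennett 1987: 17)."""
    # Beliefs the agent ought to have, given its situation in the world.
    beliefs = {"food_location": "fridge"} if situation["fridge_stocked"] else {}
    # Desires it ought to have, given its purposes.
    desires = ["eat"] if situation["hungry"] else []
    # Practical reasoning: from the chosen beliefs and desires to a predicted action.
    if "eat" in desires and "food_location" in beliefs:
        return f"go to the {beliefs['food_location']} and eat"
    return "no prediction from this toy rule set"

print(intentional_stance({"fridge_stocked": True, "hungry": True}))   # -> "go to the fridge and eat"
```

Note how little the prediction depends on the agent’s physical make-up – which is precisely what the zombie thought-experiment below turns on: the stance is powerful because it abstracts away from physical detail, and risky for the same reason.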
In animate agents, the intentional stance comes very close to what we call “Theory of Mind”. Although risky, the ‘intentional stance’ is also incredibly powerful. To illustrate this, Dennett engages in a pretty interesting Gedankenexperiment: If there were Martians – to modify the idea a little, let’s say, zombies – able to predict every future state of the universe, and therefore every action of human beings, through a complete knowledge of physics, without treating humans as ‘intentional systems’, they would still miss something.
If one zombie – call him Pierre – engaged in a prediction contest with a human, he would need far more information to predict what would happen when someone goes to get cigarettes than a human who treats the cigarette-getter as an intentional system and takes the patterns of human behavior into account.
So why does this strategy work, and how? First, in the course of evolution, humans evolved to use these predictive strategies because they worked, or as Quine puts it
“creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die out before reproducing their kind.” (Quine 1953).
According to evolutionary epistemology, natural selection ensures a “fit” between our cognitive mechanisms and the world, at least asymptotically, because the closest approximation between epistemological mechanisms and reality has the greatest survival value. (Some aspects of these thoughts are also important in the “Social Brain Hypothesis”, which I will write about some time in the future.) This probably also holds true for the evolution of the intentional stance/theory of mind. But how does “the machinery which nature has provided us” (Dennett 1987: 33) work? Dennett himself (albeit cautiously) proposes that there may be a connection between the exploding combinatorial complexity of mind-reading/the prediction of complex behaviors and the generative, combinatorial properties of language/the language of thought.
Poirier et al. (2005) have an updated idea concerning how these predictive strategies might work, and present their speculations with considerations from an embodied evolutionary-developmental computational cognitive neuroscience (there, I said it again) viewpoint, which I will, finally, discuss in my next post.

References:

Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: Bradford Books.

Dennett, Daniel C. 1999. “The Intentional Stance.” The MIT Encyclopedia of the Cognitive Sciences. Eds. Robert A. Wilson and Frank C. Keil. Cambridge, MA: MIT Press.

Poirier, Pierre, Benoit Hardy-Vallée and Jean-Frédéric Depasquale. 2005. “Embodied Categorization.” Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Amsterdam: Elsevier.

Quine, Willard van Orman. 1953. From a Logical Point of View. Cambridge, MA: Harvard University Press.