Theory of mind has organized much research into social behavior:
The ability to interpret others’ mental states and intentions, called “Theory of Mind,” has been a key area of interest for those studying the evolution of primate and human social behavior. Theory of mind is often imagined to emerge as a correlate of self-awareness, the ability to reflect on one’s own mental states. On this model, a focal individual interprets another’s mental state by imagining herself “in the shoes of” the other individual.
Theory of mind pushes agents’ social and ecological circumstances, and their embodied nature, into the background. It therefore does not provide a propitious conceptual framework for understanding social behavior.
Theory of mind doesn’t provide much insight into how persons can appreciate and sympathize with the pain each of them feels. How do I know the feel of the pain you feel when your finger touches a hot stove or when someone breaks your heart? How do you know that I know how you feel? It seems to me that mutual recognition of a common human nature, including our social nature, is crucial for understanding how these shared feelings are possible. Theory of mind is both too specific (mind, rather than the fully embodied person) and too abstract (posing questions far removed from the problems of ordinary behavior) to provide much insight into others’ pain.
Theory of mind doesn’t provide a good description of an agent’s understanding. Consider an ordinary human interpretation of the behavior of a robot. The robot is programmed to store the ball’s location as one of two specific locations, A or B. Starting with a randomly initialized location state variable, the robot enters the room and goes to the stored location. Its sensors then detect the presence or absence of the ball. If the ball is present, the robot bounces the ball (plays with it), sets its location state variable to that position, and then leaves the room. If the ball is absent, the robot goes to the other location and behaves likewise.
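The robot’s program amounts to a simple two-state machine. Here is a minimal Python sketch of that program, assuming the two locations are named A and B; the class and method names are illustrative choices of mine, not taken from any actual implementation:

    import random

    LOCATIONS = ("A", "B")

    class BallSeekingRobot:
        """Two-location robot from the example: it stores one remembered ball location."""

        def __init__(self):
            # The stored location starts out random, as in the description above.
            self.remembered_location = random.choice(LOCATIONS)

        def enter_room(self, actual_location):
            """Go to the remembered location; if the ball isn't there, go to the other one."""
            visited = [self.remembered_location]
            if self.remembered_location != actual_location:
                other = "B" if self.remembered_location == "A" else "A"
                visited.append(other)
            # Play with the ball, then record where it was actually found before leaving.
            self.remembered_location = actual_location
            return visited  # the sequence of locations checked on this visit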
Suppose an unprimed human observer watches the robot enter the room and play with the ball many times. Occasionally, while the robot is out of the room, the observer shifts the ball to the other location. In those cases the robot first looks in the wrong location and then goes to the right one. If the ball isn’t moved, the robot goes directly to the location containing the ball. Humans readily anthropomorphize technology, even though they almost surely would not confuse it with a real human being. The observer would typically describe the robot’s behavior as looking for the ball, and in instances where the ball had been shifted, would typically describe the robot as not knowing the ball’s true location. Thus a researcher might describe the observer as having a theory of mind (for the robot).
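To make the observer’s experiment concrete, the sketch above can be run over many visits, with the ball occasionally moved while the robot is away; on those trials the robot’s first look is at the now-empty location. The number of trials and the probability of moving the ball are arbitrary values chosen only for illustration:

    robot = BallSeekingRobot()
    ball_location = random.choice(LOCATIONS)

    for trial in range(20):
        # Occasionally the observer moves the ball while the robot is out of the room.
        if random.random() < 0.3:
            ball_location = "B" if ball_location == "A" else "A"
        visited = robot.enter_room(ball_location)
        if visited[0] == ball_location:
            print(f"trial {trial:2d}: went straight to the ball at {ball_location}")
        else:
            print(f"trial {trial:2d}: looked in the wrong place first, visited {visited}")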
A theory of mind tied to mutual recognition of common human nature requires many orders of magnitude more processing power. A simple digital machine could readily learn and predict the state and behavior of the ball-seeking robot. Describing and interpreting a person’s thoughts and emotions from a representation of that person’s eyes or from her patterns of movement is a much more complex problem. More importantly, interpreting another’s mental states and intentions in historical ecologies has been predominantly a highly interactive task. How one person responds physically to another both reveals emotions and intentions and changes them. A theory of mind apart from the human being seems to me like a theory of time that addresses what happened before the beginning of time.