While there is a lot of what passes for evolutionary psychology "research" that fits this description, I think there are also some worthwhile ideas coming out of this field.
The things that most intrigue me in evolutionary psychology are the explorations of the constraints that might have shaped human cognition, perception and decision-making, and how these constraints might work to encourage certain biases in human cognition, making our minds imperfect tools for grasping objective reality while at the same time doing extremely well at solving problems relevant to survival.
Anthropomorphism --- the tendency to see events, natural phenomena, animals, objects, and other nonhuman things in humanlike terms, i.e. attributing consciousness, feelings, intentions and personalities to things that may or may not have them --- is one of these biases.
Here's anthropology professor Stewart Elliott Guthrie's explanation of anthropomorphism from his chapter in the 2001 book Religion in Mind: Cognitive Perspectives on Religious Belief, Ritual, and Experience:
Theories of religion as anthropomorphism so far have failed largely because they fail to consider anthropomorphism in general. This is crucial because anthropomorphism is general, spanning all cultures and all domains (Thompson 1955; Mitchell, Thompson and Miles 1997). It occurs spontaneously and regularly in the unconsidered experience of daily life, for instance when we hear a branch tapping at a window as someone attracting our attention, see a full garbage sack in an alley as a lurking figure, or look at a car from behind and see its headrests as the heads of occupants.
Anthropomorphism also occurs in the deliberate productions of the arts and sciences. In the sciences, it is anathema, yet as Nietzsche (1966, 316) writes, even scientists wrestle "for an understanding of the world as a humanlike thing." Indeed, they do so commonly (Liebert 1909; Kennedy 1992; Mitchell, Thompson and Miles 1997). Striking examples include Darwin's view of Nature as a stockbreeder (Young 1985); Lovelock's (1987) view of the Earth as Gaia, a living organism; and the "anthropic principle" of some physicists --- the idea that because a large number of physical properties of the universe must be just as they are for humans to exist, the existence of those properties, and of humans, can be no accident (Earman 1987).
[Art] combines human and non-human categories not spontaneously but self-consciously. But such art is only the tip of a cognitive iceberg, seen in the wake of conscious experience. It is a tiny fraction of an obscure whole, held up to the light of retrospect. Artists, like all of us, must encounter anthropomorphism before using it. This encounter is involuntary. It consists not in mixing cognitive categories, but in applying a template from one category to an ambiguity we later decide belongs to a different category. First we see an ambiguous shape in an alley as a person; then we think, "Oh! That's only a garbage can." It is only when we make this after-the-fact distinction that we speak of anthropomorphism. What powers religion, and what requires explanation, is this spontaneous tendency, not only in art but in all of life.
Scanning for signs of life, including signs of communication, begins early and appears intuitive and generalized. Carey and Spelke (1994, 176) note that infants "respond to objects that lack any clearly animate features (e.g., mobiles) as animate and social beings, if the behavior of those objects approximates the behavior of a responsive social agent." In a manner that parallels the Buddhist notion of karma, older children blame "illness on the victim himself or herself, rather than allow that it happened randomly, an explanation known as 'immanent justice'" (Gelman, Coley and Gottfried 1994, 345). Such interpretations apparently correspond to preconceptions that are early and powerful.
Animism is closely related to anthropomorphism and is equally widespread. Animism was first investigated experimentally by Piaget (1929), who found it universal among young children. Piaget's finding has been broadly replicated and, as Barrett notes in this volume, the dominant psychology of religion regarding concept development is still Piagetian. "A long tradition of work on animism shows that children extend psychological explanations to ... rivers, clouds and so forth" (Harris 1994, 308).
Other studies (Sheehan, Papalia-Finlay, and Hooper 1980-1981) find animism not only among children, but also among people of all ages. In addition to humans, nonhuman animals also seem to display animism. Birds peck at twigs resembling caterpillars (Hinton 1973), coyotes pounce on sticks resembling grasshoppers (Bekoff 1989), caribou avoid rockpiles resembling Inuit (American Museum of Natural History display), and chimpanzees direct threats against thunderstorms (Goodall 1992, 1994).
(I was especially intrigued by the last paragraph, about animals showing a bias towards treating ambiguous stimuli as if they were alive. That's interesting, and does add support to the idea that this is adaptive).
Several theorists have proposed evolutionary explanations for this phenomenon, most of them hinging on the relative risks and benefits of Type I (false positive) vs. Type II (false negative) errors. This reasoning is invoked to bolster claims that all sorts of quasi-related cognitive and perceptual phenomena (like patternicity, and its consciousness-seeking cousin, agenticity) might once have conferred evolutionary advantages on their possessors.
This review article (full text here) brings a long list of disparate-seeming phenomena --- among them "auditory looming," a perceptual bias in which approaching sounds seem closer than they actually are; superstitions; prejudices; phobias; anthropomorphism/animism; and believing that you are more in control of your fate than you really are --- together under the same vast explanatory umbrella: Error Management Theory.
Here's a description of error management theory written by two of the idea's originators, David Buss and Martie Haselton, in this 2000 article in the Journal of Personality and Social Psychology (full text here)*:
When judgments are made under uncertainty, two general types of errors are possible --- false positives (Type I errors) and false negatives (Type II errors). A decision maker cannot simultaneously minimize both errors because decreasing the likelihood of one error necessarily increases the likelihood of the other (Green & Swets, 1966).
The costs of these two types of errors are rarely symmetrical. In scientific hypothesis testing, Type I errors are usually considered more costly than Type II errors. Scientists, therefore, typically bias their decision-making systems (e.g., inferential statistics) toward making Type II errors. Errors are also asymmetrical in warning devices like fire alarms, which are biased in the opposite direction. Missed detections (Type II errors) are more costly; therefore, the bias is toward making false alarms (Type I errors). Whenever the costs of errors are asymmetrical, humanly engineered systems should be built to be biased toward making less costly errors (Green & Swets, 1966). This bias might increase overall error rates, but it decreases overall cost.
According to error management theory (EMT; Haselton, Buss, & DeKay, 1998), decision-making adaptations have evolved through natural or sexual selection to commit predictable errors. Whenever there exists a recurrent cost asymmetry between two types of errors over the period of time in which selection fashions adaptations, they should be biased toward committing errors that are less costly. Because it is exceedingly unlikely that the two types of errors are ever identical in the recurrent costs associated with them, EMT predicts that human psychology will contain decision rules biased toward committing one type of error over another (also see Cosmides & Tooby, 1996; Nesse & Williams, 1998; Schlager, 1995; Searcy & Brenowitz, 1988; Tomarken, Mineka & Cook, 1989).
The logic of EMT extends to benefit asymmetries as well as cost asymmetries. Consider two types of correct inferences, hits and correct rejections. If the benefits associated with these two different correct inferences differ recurrently over evolutionary time, other things being equal, then selection will favor the reasoning strategy that is biased toward the more beneficial inference, even if it results in more errors overall. In cases where the costs of two different errors are the same, but the benefits are asymmetrical, the benefit asymmetry will be the driving selective force. In cases where the benefits of correct inferences are the same but the costs of errors are asymmetrical, the cost asymmetry will be the driving selective force. The key point of EMT is that selection will favor biased decision rules that produce more beneficial or less costly outcomes (relative to alternative decision rules), even if those biased rules produce more frequent errors.
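The arithmetic behind this cost-asymmetry argument is simple enough to sketch. Here's a toy illustration (all numbers hypothetical, chosen only to make the asymmetry vivid): even a decision rule that is wrong almost all of the time can have a lower expected cost than a more accurate rule, if the errors it makes are the cheap kind.

```python
# Toy illustration of error management logic (all numbers hypothetical).
# An ambiguous rustle in the grass is an agent (say, a predator) with
# some small probability. Compare two crude decision rules.

p_agent = 0.05                # prior probability the stimulus is a real agent
cost_false_positive = 1.0     # fleeing from nothing: wasted energy (Type I)
cost_false_negative = 100.0   # ignoring a real predator: possibly fatal (Type II)

# Rule 1: always assume an agent is present.
# Wrong whenever there is no agent (probability 1 - p_agent).
cost_assume_agent = (1 - p_agent) * cost_false_positive

# Rule 2: always assume it's just noise.
# Wrong whenever there IS an agent (probability p_agent).
cost_assume_noise = p_agent * cost_false_negative

print(f"always flee:   expected cost = {cost_assume_agent:.2f}")
print(f"always ignore: expected cost = {cost_assume_noise:.2f}")

# The "paranoid" rule is wrong 95% of the time, yet its average cost
# (0.95) is far below the accurate-seeming rule's (5.00). This is the
# selective pressure EMT says biases minds toward false positives.
```

The point of the sketch is the one Haselton and Buss make abstractly: selection acts on expected costs, not on error rates, so a bias toward cheap errors can beat accuracy.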
And this (to go back to that review article I mentioned earlier) is how that general pattern plays out with respect to anthropomorphism in particular:
Guthrie (2001) [in the book chapter I quote an excerpt from above] used error management logic to explain one of the key features of religion --- animism. He proposed that in ambiguous circumstances to falsely assume that an intentional agent (e.g., another human) has caused some event is less costly than to miss this fact. Given that agents often have interests that compete with those of the perceiver, it is important to have a low threshold for inferring their presence. For example, if one encountered a collection of twigs arranged in an improbably neat array, Guthrie proposed that it would be better to entertain the thought that a human or other intentional agent was responsible for the arrangement --- and to increase one's vigilance to the possibility of the agent's presence --- than to casually ignore it. Guthrie (2001) and Atran and Norenzayan (in press) proposed that belief in gods may be a by-product of this adaptive bias. The proposed animacy bias is consistent with classic laboratory experiments conducted by Heider and Simmel (1944; see also Bloom & Veres, 1999). When participants view moving images of circles and squares, they find it difficult not to infer intentional states --- chasing, wanting and escaping. The tendency to infer intentional states in these stimulus arrays emerges early (age 4), and there is preliminary evidence of cross-cultural universality of the bias (in Germans and Amazonian Indians; Barrett, Todd, Miller, & Blythe, 2005), although its magnitude of expression may certainly be variable. Common features of religion across cultures (Atran & Norenzayan, 2004) are also consistent with a universal animacy bias.

So, according to this theory, our brains are biased in certain ways, not because these shortcuts were likeliest to conform to reality (they're not, and probably never were), but because the errors they tended to lead to were the least costly (and/or most beneficial) in terms of reproductive success.
We err on the side of seeing patterns --- inferring direct relationships between co-occurring phenomena, for example --- because ancestral humans did not have the luxury of adopting a lifestyle of radical experimentation. We err on the side of seeing agents --- assuming that things happen because someone wanted them to happen, and took steps to ensure that they did --- because missing signs that another being (maybe a human, who might be friend or foe, or maybe a large and dangerous animal) is in the area is a much deadlier mistake to make than thinking someone is there when you're really alone.
(It's worth pointing out that this hypothesis, like many others in the field of evolutionary psychology, isn't directly falsifiable**. It can't be tested. How could it be, when the conditions it makes predictions about only existed millions of years ago?)
It's particularly interesting for me to read about the apparent universality of these biases --- agenticity, anthropomorphism, animism --- because I don't think I share them. Or, if I do, I think I have a much weaker tendency to use them.
When I think about how I understood things as a child, what sticks out to me is how impersonal, how inhuman, my world was. For most of my childhood, I made the exact opposite assumption from the one described above --- I wouldn't even necessarily think about other people in the room with me, whom I could see and hear, as intentional agents; instead, I perceived them as a cluster of different, separately-experienced phenomena that all emanated from the same source: color, movement, lots of different sounds. I perceived, but did not often draw inferences from what I perceived. Much of the time, I didn't even sort my perceptions based on where they came from. I still devote a lot more of my brainpower at any given moment to raw perception, since I cannot filter sensory input or selectively direct my attention, than I do to interpreting what I perceive. Often I will notice something, but only react to it some moments later, because it has taken me that long to transmute the raw sensory data into information.
When I would think to draw conclusions from what I saw, my conclusions tended to run in the opposite direction from what you'd expect given the human biases towards patternicity and agenticity. The world I perceived was a random, self-sufficient system. It wasn't built; it grew. (When I was little, I thought houses and roads were some kind of large plant that grew out of the ground; if you had told me people made them I would've been thunderstruck).
*This article applies EMT in a way I'm not entirely sure is valid; they make some pretty boilerplate ev-psych assumptions about optimal reproductive strategies for men and for women (and, presumably, all male and female mammals) without any regard for social organization. They seem to assume, like Enlightenment philosophers, that everyone is an island, and that the only possible deviation from this atomistic state is a monogamous life partnership (however peppered with infidelities on the man's part) and nuclear family. I do not think this notion matches up very well with what we know about how modern-day hunter-gatherers, as well as our closest evolutionary kin, the great apes, live.
**Sometimes, even when a proposition isn't directly testable, it might have implications that are testable. I think this idea --- that people's minds are set up to see the world in terms of the actions of intentional agents, rather than in terms of random chance, because there is an evolutionary advantage to this bias --- is amenable to cross-species testing. If agenticity is such a useful cognitive shortcut, it would make sense for at least some other animals to have also developed it!