Field of Science

Which came first - temples or civilisation?

Smithsonian.com has an article on Gobekli Tepe, which is possibly the world's oldest temple. Dated to 11,000 years ago, just as the last ice age was coming to an end, it's one of a handful of sites in the region that seem to show the first evidence of organized religion. The question is: what sparked this thought revolution?
To Schmidt and others, these new findings suggest a novel theory of civilization. Scholars have long believed that only after people learned to farm and live in settled communities did they have the time, organization and resources to construct temples and support complicated social structures. But Schmidt argues it was the other way around: the extensive, coordinated effort to build the monoliths literally laid the groundwork for the development of complex societies.
The immensity of the undertaking at Gobekli Tepe reinforces that view. Schmidt says the monuments could not have been built by ragged bands of hunter-gatherers. To carve, erect and bury rings of seven-ton stone pillars would have required hundreds of workers, all needing to be fed and housed. Hence the eventual emergence of settled communities in the area around 10,000 years ago. "This shows sociocultural changes come first, agriculture comes later," says Stanford University archaeologist Ian Hodder, who excavated Catalhoyuk, a prehistoric settlement 300 miles from Gobekli Tepe. "You can make a good case this area is the real origin of complex Neolithic societies."
I guess that depends on how you define 'complex societies'. The site at Çatalhöyük, also in Turkey (although in the central rather than the south-eastern region), dates to a couple of thousand years after Gobekli Tepe. The earliest known town, it is remarkable because there does not appear to be any social stratification - all the houses are a similar size, and there is none of the differentiation you see in later towns.

The people of Çatalhöyük were remarkably egalitarian. Since stratification is a hallmark of complex societies, it seems that religion pre-dates them. Ian Hodder, the modern excavator of Çatalhöyük, makes this point:
"Everybody used to think only complex, hierarchical civilizations could build such monumental sites and that they only came about with the invention of agriculture," said Ian Hodder, a Stanford University anthropology professor who has directed digs at Catalhoyuk, Turkey's most-famous Neolithic site, since 1993.

"Gobekli changes everything. It's elaborate, it's complex, and it is pre-agricultural. That fact alone makes the site one of the most important archaeological finds in a very long time."
So if it wasn't complex societies, what was it that triggered the invention of religion? The Smithsonian article mentions the intriguing symbolism of the site, not seen in earlier art:
What was so important to these early people that they gathered to build (and bury) the stone rings? The gulf that separates us from Gobekli Tepe's builders is almost unimaginable. Indeed, though I stood among the looming megaliths eager to take in their meaning, they didn't speak to me. They were utterly foreign, placed there by people who saw the world in a way I will never comprehend. There are no sources to explain what the symbols might mean. Schmidt agrees. "We're 6,000 years before the invention of writing here," he says.
Trevor Watkins, a retired Professor of Archaeology at Edinburgh, reckons that it was the development of symbolic representation that was the final piece of the jigsaw, allowing organised religion to emerge. This is from a talk he gave back in 2001:
Throughout hominid evolution language had been evolving in ways that are little understood and certainly not agreed by linguists. In the paintings and modelled representations of the upper palaeolithic we can see, I would suggest, the first essays in another mode of symbolic representation – like a child’s first words. A few tens of thousands of years later, at the end of the epi-palaeolithic and the beginning of the neolithic periods in south-west Asia, we see much richer vocabularies of symbolic representation, and enough hints, I think, to indicate that these are material expressions within systems of symbolic representation.
<...>
As Steven Mithen (1998, 1999) has pointed out, it is no coincidence that the first uses to which this extension of symbolic representation were put included representations of beings that are half animal and half human, non-natural or super-natural beings. With the emergence of what Mithen calls ‘cognitive fluidity’, the human mind enjoyed the power for the first time to reflect on the nature of the world, to use its power to cross-reference across all the realms of experience and knowledge, to think analogically, to formulate vivid ideas in terms of metaphors.
In other words, it was not a material revolution - high population densities and agriculture - that gave rise to religion. And if it wasn't that, then the spark may have been cognitive.

How does religion prevent suicide?

Religious people are less likely to commit suicide. This is pretty much established as fact; the big question is how the effect is mediated. Is it that the spiritual beliefs of religious people help resolve existential anxieties, give them hope for the future, or perhaps even instil fear of punishment in the hereafter? Or is it that religion provides a framework for mutual support that helps people through tough times?

There's a new study from a Canadian team that sheds some light on this question. They used data from a massive survey of over 35,000 Canadians (the Canadian Community Health Survey) to take a look at some of the factors associated with thinking about suicide (suicide ideation) or even attempting suicide. It was a nationally representative sample - around two-thirds were spiritual and slightly more were religious - and around half were both religious and spiritual. Religious people were those who went to church (for reasons other than weddings or funerals) at least once a year.

What they found was that people who were either spiritual or attended religious services (or both) were significantly less likely to have attempted suicide. But the problem is that spiritual and religious people are different to atheists. For example, women are more likely to be spiritual, and also more likely to attempt suicide.

So they took this into account in their statistical analysis. Importantly (and uniquely, as far as I know), they also took account of differences in social supports. Religious people might simply have better access to information, more friends, and more emotional support. So their analysis takes this into account too, to try to provide an estimate of the 'pure' effects of spirituality and religion.
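For the statistically minded, this kind of adjustment is usually done with a regression model that includes the confounders as covariates. Here's a minimal sketch of what that looks like - to be clear, the file name and variable names below are hypothetical stand-ins, not the authors' actual code or the real CCHS variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-ins for the CCHS measures - not the real dataset.
df = pd.read_csv("cchs_subset.csv")

# Model 1: spirituality and religious attendance alone.
m1 = smf.logit("attempted_suicide ~ spiritual + attends_services", data=df).fit()

# Model 2: add demographics, then social support, as the authors did.
m2 = smf.logit(
    "attempted_suicide ~ spiritual + attends_services"
    " + age + sex + income + social_support",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: below 1 means a reduced risk.
print(np.exp(m1.params))
print(np.exp(m2.params))
```

If the odds ratio for attendance stays below 1 in the second model, the effect isn't simply explained away by demographics or social support.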

What they found was that, taking demographic factors and social support into account, spirituality no longer had any significant effect. Religion, however, did. A top-line interpretation of this is that spirituality doesn't offer any particular help above and beyond what can (in theory at least) be provided by other means. But religion seems to. That's certainly how the Christian Post sees it:
According to the data, the former category [spirituality] did not show a decreased inclination to take their lives, suggesting something more was involved that was related to the actual attendance at a religious event occurring in a church, mosque, temple or other spiritual gathering.
But I think there's something else going on here. Looking at the data closely, what it looks like to me is that spirituality is linked to fewer suicide attempts, even after accounting for demography and social support - it's just that their model becomes so statistically weak that it can't show this. And the data also seem to show that religious attendance reduces suicide despite being associated with less social support.

The explanation for this is a little bit complicated, but here goes! You'll need to refer to the graph on the right. The blue diamonds show the point estimate - the best guess from the statistical model. The light blue boxes show the 95% confidence intervals. They're an indication of how uncertain the point estimate is.

In short: if the 95% confidence interval crosses the 'zero' line, you can't be confident that what you're testing (spirituality, for example) has any effect - no matter where the point estimate lies.

You can see what's happening with spirituality. Once you expand the model to include demographics and social support, the 95% confidence interval balloons. Although the best estimate for the effect does go down a little (from a 43% reduction to a 37% reduction in attempted suicides), the main reason they don't find any effect is that the more complex model simply doesn't have the power to detect anything but a massive effect.
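To see why, here's a back-of-the-envelope check. The point estimates below are the ones from the text; the standard errors are hypothetical, chosen purely to illustrate how a ballooning standard error drags the confidence interval across the no-effect line (an odds ratio of 1, i.e. zero on the log scale):

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(or_estimate, se_log_or, alpha=0.05):
    """95% confidence interval for an odds ratio, via the usual
    normal approximation on the log scale."""
    z = stats.norm.ppf(1 - alpha / 2)
    log_or = np.log(or_estimate)
    return np.exp(log_or - z * se_log_or), np.exp(log_or + z * se_log_or)

# 43% reduction (OR 0.57) with a hypothetical small standard error, versus
# 37% reduction (OR 0.63) with a hypothetical large one.
for label, or_est, se in [("simple model", 0.57, 0.15), ("full model", 0.63, 0.35)]:
    lo, hi = odds_ratio_ci(or_est, se)
    verdict = "significant" if hi < 1 else "not significant: CI crosses 1"
    print(f"{label}: OR {or_est:.2f}, 95% CI ({lo:.2f}, {hi:.2f}) -> {verdict}")
```

The two point estimates are nearly the same; only the width of the interval changes the verdict.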

It's not that the effect goes away if we add demographics and social support - it's more that adding these factors makes the model so weak that we simply can't tell if there's any effect.

Look at the situation with religion, by comparison. Although the confidence interval widens, it's not by nearly so much. And the point estimate actually strengthens - from a 47% reduction to a massive 62% reduction in the risk of attempted suicide.

In fact the effect of social support is even larger than you would think from the graph. Compared with the model that controls for everything except social support, controlling for social support as well nearly doubles the effect of religion on the risk of attempting suicide.

This is a big surprise! Intuitively, you would expect that religion increases social support, and that this would explain part of the effect of religion on suicide rate. But if this were so, controlling for social support would reduce the estimated effect of religion, not increase it.

In other words, what this model seems to be showing is that religious people get less social support. And how do they define social support?
... informational support (offering of advice or guidance), tangible support (material aid or behavioral support), positive social interaction (available persons to do things with), affection (involving expressions of love and affection) and emotional support (expression of positive affect, understanding and encouragement).
Now, it's impossible to be sure about this without digging into the original data. I've contacted the authors, and will update this if they get back to me.

There are some other caveats here. First, this was a study on suicide ideation and suicide attempts, not completed suicides. Completed suicides have a very different epidemiology (they are more frequent among men than women, for example). So the conclusions of this study might not hold for actual suicides.

Second, over 40% of Canadians go to religious services less than once a year. That's quite low by global (although not UK) standards. In poorer countries, non-spiritual people are more likely to attend religious services - so the results might not hold elsewhere.

Bearing all that in mind, here's what I conclude from this study. First, spirituality might help reduce suicide attempts, but if it does the effect is small after accounting for other relevant factors.

Second, religion probably does help stop people going down the path towards suicide, but it does it in spite of seemingly reducing other forms of social support.


ResearchBlogging.org

D. Rasic, S. Belik, B. Elias, L. Katz, M. Enns, J. Sareen (2008). Spirituality, religion and suicidal behavior in a nationally representative sample. Journal of Affective Disorders. DOI: 10.1016/j.jad.2008.08.007

What is atheism?

The best articles are the ones that begin by making a statement that makes you splutter with indignation, and then go on to convince you that they're right. Paul Cliteur, Professor of Jurisprudence at Leiden University in The Netherlands, has done just that in an essay published recently in the Journal of Religion and Society (The definition of atheism). Here's the part that caught my attention:
Atheism is concerned with one specific concept of god: the theistic god. The theistic god has a name and this is written with a capital: God. At first sight it may be strange to limit atheism to the conception that is opposed to the theistic concept of god and not all the other gods that have been venerated by humans. Buddhists or Hindus subscribe to polytheistic approaches of the divine. Should they not be included in the atheist rejection of the divine[?] ... I think not
At first sight this seems bizarre, and even counter-productive. After all, it doesn't seem helpful to equate atheists and Hindus. No self-respecting atheist would have any truck with any kind of sky fairy, supernatural beings of any kind, or superstition. So what on earth is Cliteur on about?

His argument stems from the fact that atheism is a statement not of belief, but of what you do not believe. And to decide that you don't believe in something, first of all you have to know what 'it' is.

And this is Cliteur's complaint about the new religionists. They reject the idea that religion is about worshipping something that can be defined in any meaningful way. This, of course, is the stick they use to bash atheists. Here's Nicholas Lash on Dawkins:
My question to Richard Dawkins is this: given the centrality of this insistence, in Christian thought, for two millennia, on the near-impossibility of speaking appropriately of God, is it ignorance or sheer perversity that leads him wholly to ignore it, and to treat all statements about God as if they were characteristically taken, by their users, as straightforward and literal descriptions?
God, in his view, cannot be defined. And as a result, it's impossible to say that God does not exist. Note that Lash doesn't mean this in any circumstantial way. He follows up those comments in the next chapter by arguing that atheism does not exist. He's right, of course, in his own special way. You can't be an atheist about something that cannot be defined.

So to be an atheist you first have to have a definition of what a theist is. If you allow theologians to have their way, they will define theism in a way that's so impossibly vague that it is meaningless (i.e. of no practical value). And if you should find that their definition is sufficiently concrete to be meaningful, then they will shift it.

Cliteur says:
By defining atheism in this limited way we acknowledge that it is difficult, if not impossible, and also useless to develop an argument against all the different concepts of god and religion that are sometimes defended. The only thing an atheist can do is to oppose the kind of language that makes it impossible to discern under what circumstances one can legitimately say, “I am not religious.” If everybody is “religious” but only the content of that religion varies, the word “religion” has lost all meaning.
Therefore, it's up to atheists to define what it is that they do not believe in. It's too important to be left to woolly-minded theologians.

For the one life we have

Humanists believe that this life is all we have. Now, this is a very disturbing idea for a lot of people who feel that, without a hope of life after death, life itself becomes meaningless. Humanists, on the other hand, argue that the very fact that life is limited gives it greater meaning - because if you only have a short period to exist you'd better go out and make the most of it. Hence the motto of the British Humanist Association - For the one life we have.

So who's right? It's largely a philosophical question, of course, but there are a few snippets of evidence to bring to bear, one of which is a study of American college students that was published in December.

What the investigator, Jaime Kurtz of Pomona College in California, wanted to know was whether 'temporal scarcity' (the feeling that you have only a limited time left to enjoy something) makes you happier or sadder. So she gave her volunteers, who were final year students due to graduate in 6 weeks, a happiness questionnaire. She also asked them to write about their college experiences.

The clever bit was that she told a third of them that 'You have only a short amount of time left to spend at UVA. In fact, you have about 1,200 hours left before graduation.' And she told another third something subtly different: 'You have a significant amount of time left to spend at UVA. In fact, you have about 1/10th of a year left before graduation.' Both descriptions refer to roughly the same amount of time - only the framing differs. There was also a control condition, where volunteers were simply asked to write about a typical day.

Then, over the next two weeks, they were asked to fill in four similar surveys. At the end of the two weeks, they filled in the happiness scale again. The results are shown in the figure.

There was a clear and striking effect. Happiness went up in those who were told they didn't have much time left, and went down in those who were made to think they had lots of time.

Why did this happen? Well, the group who were made to feel time-short started participating much more in college activities. Kurtz concluded:
Although it may seem counterintuitive, the present study found that focusing on the impending ending of college promoted enhanced subjective well-being over the course of 2 weeks and resulted in greater participation in college-related activities ... A likely possibility for increased subjective well-being in the grad-soon condition is that these participants reaped the psychological benefits associated with social engagement and connectedness as they became more engaged in college life over the course of 2 weeks. After all, strong social support and meaningful relationships are among the strongest predictors of happiness.
What makes these results particularly interesting is that you would not intuitively expect that telling people that a happy part of their life would soon end, and asking them to reminisce about the good times they'd had, would make them more happy. But it did - because it made them determined to make the most of the time they had left.

Can these findings be directly extrapolated to the contrast between mortality and life everlasting? No, clearly not. But you can expect the same psychological mechanisms to be at play, which adds a little bit of spice to the debate. As Kurtz says:
According to Cialdini’s (1993) scarcity principle, when a resource becomes scarce, it increases in value. If one thinks of time as one such limited resource, an awareness of its potential unavailability can increase the value of an experience, making it more likely to be enjoyed (e.g., Carstensen, Isaacowitz, & Charles, 1999). There are some people whose lives are imbued with this sense of temporal scarcity, and there are anecdotes to suggest that they are especially appreciative.

ResearchBlogging.org

Jaime L. Kurtz (2008). Looking to the Future to Appreciate the Present: The Benefits of Perceived Temporal Scarcity. Psychological Science, 19 (12), 1238-1241. DOI: 10.1111/j.1467-9280.2008.02231.x

Why did science founder in the Islamic world?

The third and final part of the rather excellent BBC series, Science and Islam, presented by Jim Al-Khalili, is now available on YouTube (as well as on BBC iPlayer, for those in the UK). Most of the series has dealt with what is now uncontroversial - the enormous and diverse range of contributions to the sum of human knowledge made by medieval intellectuals in the Islamic world.

But the third programme touches on a rather more controversial topic: why is it that the scientific revolution occurred in Europe, and seems to have largely passed the Islamic world by? (Even today, scientific output in Islamic countries is dwarfed by that of other countries - Scimago Country Rank.)

According to Al-Khalili, the answer is simple - as he explains in Part 6. The discovery and colonisation of the New World in the 16th century led to an influx of wealth into Europe - and as he says, 'big science requires big money'.

But this idea, although attractive and probably true to some extent, is fundamentally flawed for three reasons.

Firstly, the New World brought enormous wealth to the dynasties of Spain and Portugal. But these two countries contributed little to the scientific revolution. Take two icons of modern science - Copernicus, who revolutionised astronomy, and Vesalius, who did the same for anatomy (and hence medicine). Compare, for example, Vesalius' 1543 drawing of the digestive system (below) with that of a Persian anatomist a century later (right). Copernicus was a native of Poland, while Vesalius was a Belgian who took up residence in Paris.

And the hotbed of the Renaissance was, of course, Italy. Although Italy was wealthy, it was wealthy not because of colonisation of the New World, but through trade with the old - and in particular the Islamic - world. In fact, the Italian city states sometimes found themselves siding with Islamic states in an effort to resist the growing power of Portugal over the spice trade (for example, at the Battle of Diu).

Secondly, the origins of the scientific revolution in Europe do not lie in the 16th century, but much earlier. The first murmurings of the scientific revolution can be dated to the Condemnation of 1277, issued by Tempier, Bishop of Paris, who tried - unsuccessfully - to keep the genie in the bottle.

And thirdly, the Islamic world did not do big science, even when it was fabulously wealthy (as the Ottomans were at the time of the Renaissance). Al-Khalili cites the example of the Maragha observatory, which was indeed a triumph of scientific endeavour - but only briefly. It lasted around 50 years before being abandoned, a sure sign that it lacked institutional support. In fact, there were no direct Islamic equivalents of the European universities.

And according to Toby Huff, a historian of science and Islam at the University of Massachusetts and author of The Rise of Early Modern Science, the explanation for this discrepancy lies at the heart of why science exploded in Europe and not in the Islamic world (nor, for that matter, in China).

The major factor seems not to have been religious, or monetary, but legal. It was the invention of the corporation.

The 12th and 13th centuries were a period of rapid innovation in European legal systems as, beginning with Gratian's Decretum, lawyers struggled to reconcile a mass of ad hoc laws derived from Roman, Germanic, and Church codes. One outcome was the formulation of the idea that institutions could be treated as legal entities, with their own special rules and regulations. Joseph Lynch, Professor of History at Ohio State University, explains it like this (The Medieval Church, p331):
In the twelfth and thirteenth centuries, the canon lawyers had created a body of legal opinions and laws dealing with the problems of corporate bodies, a very important issue since church's institutions, including monasteries, cathedral chapters, hospitals and even the college of cardinals, were corporations that chose members, elected officers, administered income, owned property and could sue and be sued. A key element of the canon law of corporations was that the interests of the corporate group should not be damaged by the actions of its officers. The officers held their authority for the good of the group and the corporate body could protect itself from incompetent or malicious officers, as a last resort by deposing them.
It was this legal framework that gave rise to the modern university ('university' was an early alternative legal term for 'corporation'). Among the first of these was the University of Paris, which received its statutes in 1215. Huff says:
The legal autonomy that existed in the European universities did not exist in the Muslim world because the legal concept of a corporation, a group of actors treated as a collective whole, did not exist. This legal defect had major implications for Islamic civilization, not least in the sphere of economic development, as Timur Kuran has made clear. (Huff: Reply to George Saliba)
This legal concept was critically important because it gave scholars in the West a legal status, as well as legal protection. It also meant that certifications were controlled by these corporate bodies, rather than being handed out by individual teachers. In the West, you got your degree from a university. In the Islamic world, you got your certificate from your teacher.

There were, of course, colleges (Madrasas) in the Islamic world, but these were constituted in a fundamentally different way. There was no separation from the Church or from the State. There were no formalised degree programmes. There were no formalised methods for judging standards. There was no faculty (who, in Western Universities, comprised the corporate body). In short, there was not the legal framework to support the sustained body of disputation and learning that could be achieved in a University.

To be sure, this is not a full answer. There were many other factors, especially philosophical, which are also discussed by Huff. But the essential lesson is that, for science to blossom, it is not enough to have a series of brilliant individuals and wealth enough to support their curiosity. You need to have a framework that enables those individuals to feed from each other. What Europe developed, and the Islamic world did not, is the idea of the scientist as a social actor. Huff again (Rise of Early Modern Science, p18):
...scientists - ancient and modern - are not isolated practitioners sequestered in laboratories but cultural actors whose very existence relies upon multiple others who (1) provide essential institutional support in the form of teaching and research opportunities, (2) provide vehicles for the publication of scientific results, and (3) provide tacit support both for the role of the scientist and for the values and worldview of the scientific enterprise.
In other words, for science to blossom it is not enough to have scientific individuals. You need a scientific culture.

If your subscriber feed is broken...

The Google fairies have been playing around behind the scenes. As a result, if you bookmarked the feed to this blog in your web browser, it might be broken. It might be broken even if you didn't.

Anyhow, if it's giving you an error message, you'll probably need to bookmark it again or resubscribe!

Now returning you to your usual service!

The shared genetic heritage of Jews and Palestinians

The Times recently carried this unusual report on an Israeli Jew (Tsvi Misinai, a retired computer expert) who's hoping to prove that Palestinians are descended from Jews. Apparently, he thinks that proving this will help to stop the bloodshed. His idea is that modern Jews are descended from those who emigrated in the first few centuries of the Christian era, while the Jews who stayed put in Palestine converted to Islam and became Palestinian Arabs. There's hope that genetic tests might be able to prove this.

Well, there is good news and bad news on that score.

The good news is that the genetics of Arabs and Jews have been pretty extensively researched. The classic study dates to 2000, from a team led by Michael Hammer of the University of Arizona. They looked at Y-chromosome haplotypes - this is the genetic material passed from father to son down the generations.

What they revealed was that Arabs and Jews are essentially a single population, and that Palestinians are slap bang in the middle of the different Jewish populations (as shown in this figure).

Another team, led by Almut Nebel at the Hebrew University, Jerusalem, took a closer look in 2001. They found that Jewish lineages essentially bracket Muslim Kurds, but they were also very closely related to Palestinians. In fact, what their analysis suggested was that Palestinians were essentially identical to Jews, but with a small mix of Arab genes - what you would expect if they were originally from the same stock, but the Palestinians had mixed a little with Arab immigrants. They conclude:
We propose that the Y chromosomes in Palestinian Arabs and Bedouin represent, to a large extent, early lineages derived from the Neolithic inhabitants of the area and additional lineages from more-recent population movements. The early lineages are part of the common chromosome pool shared with Jews (Nebel et al. 2000). According to our working model, the more-recent migrations were mostly from the Arabian Peninsula...
So, as far as male lineage goes, the genetic story is very clear. Palestinians and Jews are virtually indistinguishable.

Women are a bit more tricky...
Up until last year, the matrilineal heritage of Jews also seemed pretty clear. Analysis of elements in mitochondrial DNA (which is passed from mother to daughter) seemed to show that Jewish populations around Europe, North Africa, and the Middle East were derived from at least 8 unrelated 'founding mothers'.

Where they came from wasn't clear, but the most likely explanation was that they were from local populations that bred with immigrant Jewish males. Their offspring became absorbed into the Jewish community.

In 2008, a more sophisticated analysis was published that made use of whole mitochondrial DNA sequences. They found no evidence for the genetic bottlenecks that would indicate founding mothers in the large Jewish populations. Instead, they found a complicated picture, with a very diverse gene pool suggesting intermarriage both with local populations and with other Jewish groups.

The overall conclusion is that the female Jewish line deviates a lot more from the Palestinian heritage than the male line, but the heritage is still there.

So that's the good news. Jews and Palestinian Arabs are blood brothers - although this close genetic relationship probably stems from pre-Judaic times, rather than any more recent conversion of Palestinian Jews to Islam.

And the bad news? Well, this basic story has been known for the best part of a decade now. But, perhaps unsurprisingly, it hasn't led to the warring sides laying down their weapons and engaging in a group hug. This is a religious conflict, not a genetic one.

Mr Misinai is, sadly, on a hiding to nothing.

ResearchBlogging.org

M. F. Hammer (2000). Jewish and Middle Eastern non-Jewish populations share a common pool of Y-chromosome biallelic haplotypes. Proceedings of the National Academy of Sciences, 97 (12), 6769-6774. DOI: 10.1073/pnas.100115997

A. Nebel, D. Filon, B. Brinkmann, P. Majumder, M. Faerman, A. Oppenheim (2001). The Y Chromosome Pool of Jews as Part of the Genetic Landscape of the Middle East. The American Journal of Human Genetics, 69 (5), 1095-1112. DOI: 10.1086/324070

M. Thomas (2002). Founding Mothers of Jewish Communities: Geographically Separated Jewish Groups Were Independently Founded by Very Few Female Ancestors. The American Journal of Human Genetics, 70 (6), 1411-1420. DOI: 10.1086/340609

Doron M. Behar, Ene Metspalu, Toomas Kivisild, Saharon Rosset, Shay Tzur, Yarin Hadid, Guennady Yudkovsky, Dror Rosengarten, Luisa Pereira, Antonio Amorim, Ildus Kutuev, David Gurwitz, Batsheva Bonne-Tamir, Richard Villems, Karl Skorecki (2008). Counting the Founders: The Matrilineal Genetic Ancestry of the Jewish Diaspora. PLoS ONE, 3 (4). DOI: 10.1371/journal.pone.0002062

Church goers more likely to steal newspapers

In the province of Vorarlberg, in the west of Austria, newspapers are often sold through sales booths. It's an honesty system - anyone can take a newspaper, but the publisher asks for a payment of 60 Eurocents.

You can see the set-up in the picture. The newspapers go in the pouch at the bottom, and above it (on the black box) is a slot for you to put your money in. You're free to give as much (or as little) as you like.

Now this is a fantastic arrangement for a real-world test of honesty. Two Austrian economists (Gerald Pruckner at the University of Linz and Rupert Sausgruber at the University of Innsbruck) decided to take up the challenge (see here for the full report).

What they did was very simple. They emptied the cashbox and put one paper in the bag. Then they waited for someone to take the paper. They discreetly followed that person and, when there was enough distance, they popped up and pretended to be interviewing random passers-by on "social behaviour in society".

They also had another person open up the cashbox and see how much they'd paid (if anything). All through the process they were very careful to make sure none of the people they interviewed twigged that it was connected with their newspaper purchase. They didn't want people to massage their views (and the publisher didn't want his readers to think people were spying on them).

What they found was that many people don't pay anything, and that those who do pay usually don't cough up the full price. No surprises there! 39% paid nothing, and only 19% paid the full price. Of the people who stopped to be surveyed, the average payment among those who did pay was 26 cents - rather short of the 60-cent asking price. But what kinds of people pay up? Who are the honest Joes, and who are the thieving Toms?

It turns out that the thieving Toms are regular church goers!

Three factors had a big negative effect on the chance of paying anything at all: being over 50, cheating on your taxes, and attending church regularly all had about the same impact. Basically, all other things being equal, regular church goers are 20% less likely to pay anything at all.

But it doesn't end there. The amount that church goers pay was strikingly low. Among those who actually paid something, the effect of churchgoing was to reduce payout by 22 cents compared with the average. Remember that the average payment was only 26 cents, so this is a major effect.

You can see in this chart the most important factors that determined how much people would pay. Men and people who cheat on taxes are also tight-fisted, whereas people who are married or have a partner, or who volunteer or care about what others think of them, are likely to pay more.
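If you wanted to model data like this yourself, one natural approach (a sketch under assumed variable names, not necessarily the authors' exact specification) is a two-part model: first, who pays anything at all; second, how much the payers give:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical variable names for the field data - not the real dataset.
df = pd.read_csv("newspaper_honesty.csv")

# Part 1: who pays anything at all?
pays = smf.logit(
    "paid_anything ~ churchgoer + over_50 + tax_cheat + male + partnered + volunteer",
    data=df,
).fit()

# Part 2: among those who paid something, how much did they give?
amount = smf.ols(
    "cents_paid ~ churchgoer + over_50 + tax_cheat + male + partnered + volunteer",
    data=df[df["paid_anything"] == 1],
).fit()

print(pays.params)    # a negative churchgoer coefficient: less likely to pay at all
print(amount.params)  # a negative churchgoer coefficient: pays less, even when paying
```

The two churchgoing effects described above (20% less likely to pay at all, and 22 cents less when paying) correspond to the two parts of the model.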

Why are church goers so dishonest? It's a question that seems to have stumped the two researchers. The study was done on Sunday, and they suggest that maybe church goers had donated their loose change already. But this doesn't explain why they should feel justified in pinching a newspaper.

Perhaps we shouldn't be too surprised. Previous research has shown that people who have a sense of their own moral righteousness are in fact more likely to cheat. This is presumably because they convince themselves that they are cheating for the greater good. So maybe these churchgoers feel that they have donated money to the church, and that this somehow justifies their dishonesty.

But the real take-home from this study is that it's important to test what religious people do in the real world, rather than asking them what they do or studying them in a laboratory. Because when you do, the results can be quite surprising!

Faith healers sabotage vaccination efforts

According to the World Health Organization, the second most important thing that we can do to improve child health around the world is better vaccination (number one is clean water). So anything that blocks vaccine uptake, especially in low income countries, is a major healthcare problem.

A recent study has looked at the factors that affect vaccine uptake in Haiti, and found that the vaccination rate is an astonishing 50% lower in children whose mothers are frequent users of faith healers. This was the case even after controlling for all the other factors that might play a role - level of education, age, and distance from the health centre.

They even controlled for religion - many Haitians are practitioners of Voodoo, and that in itself reduces vaccination. But regardless of Voodoo beliefs, recourse to faith healers still slashed vaccination rates.

It has been argued that traditional healers have an important role to play in improving healthcare in developing countries. No doubt this is true, in those cases where traditional healers can be brought within the fold of evidence-based medicine. But this new study is a warning shot. Here is clear evidence of the destructive power of faith healing.

There is a real danger that faith healers cannot be reconciled to modern medical practice. The evidence from wealthy countries does not bode well. In the past 15 years, more than 200 children have died in the USA because their parents relied on faith healers (George Street Journal). Even in states where it is illegal, the practice still goes on (there are two cases currently in the courts in Oregon alone).

Many states in the USA still allow legal exemption for religious faith healers. In other words, these fake healers can get away with murder, simply because they are religious.

ResearchBlogging.org

Adamson S. Muula, Matine M. Polycarpe, Jayakaran Job, Seter Siziya, Emmanuel Rudatsikira (2009). Association between maternal use of traditional healer services and child vaccination coverage in Pont-Sonde, Haiti. International Journal for Equity in Health, 8 (1). DOI: 10.1186/1475-9276-8-1

Revenge is not so sweet

What to do if someone you know behaves badly? Turn the other cheek, or take your revenge? According to Martin Nowak's latest game-theory-based analysis, turning the other cheek is the strategy that's most likely to reward you in the long run.

Nowak (Professor of Biology and of Mathematics at Harvard University) is interested in something called the repeated prisoner's dilemma, a popular model of social interactions. In the game, you're paired up with another person and have a choice of either co-operating (in which case your partner benefits but you don't) or defecting (in which case you benefit but your partner loses). The payouts in the game are set such that it's best if you both co-operate, but there is a strong incentive for you as an individual to cheat - if you can get away with it!

In a study published in early 2008, Nowak and colleagues looked at what happened if you were also allowed to punish defectors – this is so-called 'costly punishment', which costs you a bit but inflicts a greater cost on your victim. Now, you might think that the ability to punish cheaters would bring them back into line, and so increase your payout.

In fact, that wasn't what happened. They showed that people who were quicker to use punishment tended to lose out overall, and that the best results were achieved by people who responded to defectors simply by defecting themselves (i.e. refusing to co-operate).
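To make the setup concrete, here's a minimal simulation sketch. The payoff values are illustrative assumptions, not the paper's exact parameters - what matters is the structure: punishing costs the punisher a little and the target a lot:

```python
import random

# C = cooperate, D = defect, P = costly punishment.
# Assumed payoffs: cooperating costs you 1 and gives your partner 3;
# punishing costs you 1 and costs the target 4.
def payoff(me, other):
    score = 0
    if me == "C":
        score -= 1
    if other == "C":
        score += 3
    if me == "P":
        score -= 1
    if other == "P":
        score -= 4
    return score

def play(strat_a, strat_b, rounds=1000):
    """Repeated game: each strategy reacts to the opponent's previous move."""
    total_a = total_b = 0
    last_a = last_b = "C"
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        total_a += payoff(a, b)
        total_b += payoff(b, a)
        last_a, last_b = a, b
    return total_a, total_b

tit_for_tat = lambda last: "D" if last in ("D", "P") else "C"     # meet defection with defection
punisher    = lambda last: "P" if last in ("D", "P") else "C"     # meet defection with punishment
noisy       = lambda last: "D" if random.random() < 0.2 else "C"  # defects 20% of the time

print("tit-for-tat vs noisy partner:", play(tit_for_tat, noisy))
print("punisher    vs noisy partner:", play(punisher, noisy))
```

Against the same occasionally-defecting partner, the punishing strategy reliably ends up with a lower total than plain tit-for-tat, because every act of punishment costs the punisher too - the flavour of 'winners don't punish'.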

In reality, life is a little more complicated than this simple two-way interaction. We've evolved to live in groups, and most of our interactions are with people who we've watched in action, and so we've formed an opinion of what they're like. Reputation is important.

In a new paper, Nowak looks at whether the reputation you earn can make punishment a more effective strategy. This time they used a mathematical model to explore all the options, rather than real people.

What the model assumes is that your actions are watched by a group of observers, who assign you a reputation according to your actions and whether the recipient of your action has either a good or a bad reputation (see figure above). They then treat you according to the opinion they've formed of you. It's called 'indirect reciprocity', as opposed to the direct reciprocity of the two-person game.

To make the simulation realistic, they also assumed that the watchers could make mistakes about your reputation, and also that they talked to each other (they gossip!).

What they found was that if the watchers were poor in assessing your reputation, then defection was the best strategy (you might as well defect, since the watchers are pretty clueless). If they were good at it, then you should co-operate with good people and defect with bad.

But there were hardly any circumstances in which you benefit from being in a group that believes in punishing those who are bad.

You can get a flavour for what this means in practice from this figure, which shows how varying two of the parameters affects the best strategy.

If individuals are poor at assessing reputation, or if co-operation is not that beneficial, then the best strategy is to always defect. But if reputations are meaningful and co-operation is valuable, then you should co-operate with agents with a good reputation, and defect with those with a bad reputation ('cooperate or defect').
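Here's a toy version of that parameter sweep - far simpler than the authors' model (it omits punishment entirely, and the assumption that 'good' partners reciprocate co-operation is mine), but it reproduces the flavour of the landscape: discriminating only beats always-defect when reputations are accurate enough relative to the benefit of co-operation:

```python
import random

def discriminator_payoff(accuracy, benefit, cost=1.0, p_good=0.5, trials=100_000):
    """Average payoff of 'cooperate with good, defect with bad' when the
    reputation label you see is correct only with probability `accuracy`.
    Toy assumption: genuinely good partners reciprocate your cooperation."""
    total = 0.0
    for _ in range(trials):
        partner_good = random.random() < p_good
        label_good = partner_good if random.random() < accuracy else not partner_good
        if label_good:            # we cooperate: pay the cost...
            total -= cost
            if partner_good:      # ...and a good partner returns the benefit
                total += benefit
    return total / trials

# Sweep the two parameters from the figure: assessment accuracy and the
# benefit of co-operation. Always-defect earns 0 in this toy model.
for benefit in (1.2, 1.5, 3.0):
    for accuracy in (0.5, 0.7, 0.9):
        p = discriminator_payoff(accuracy, benefit)
        best = "cooperate or defect" if p > 0 else "always defect"
        print(f"benefit={benefit}, accuracy={accuracy}: payoff={p:+.2f} -> {best}")
```

In this toy the crossover sits where accuracy stops covering the cost of co-operating with mislabelled partners; the full model adds punishment and gossip, but the qualitative picture - defection wins when reputations are noisy or co-operation pays little - matches the figure.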

There's only a very small patch ('cooperate or punish') when punishing those who are bad is the best strategy. It occurs when assigning reputation is pretty inaccurate, and the benefits of co-operation are very high.

Tweaking other parameters changes this landscape somewhat, but the general picture is always the same – there's only a small window where 'cooperate or punish' is a good strategy.

So why did punishment as a strategy ever evolve? Well, this is a model, not reality. The agents are simple, and the model assumes that everyone has the same attitudes to crime and punishment.

But perhaps the reason that punishment is popular is not that it increases overall good, but that it brings fewer negative effects to the punisher than to everyone else. For example, punishment could evolve if it is a way of establishing social dominance. Nowak explains in his 2008 paper:
We conclude that costly punishment might have evolved for reasons other than promoting cooperation, such as coercing individuals into submission and establishing dominance hierarchies. Punishment might enable a group to exert control over individual behaviour. A stronger individual could use punishment to dominate weaker ones. People engage in conflicts and know that conflicts can carry costs. Costly punishment serves to escalate conflicts, not to moderate them. Costly punishment might force people to submit, but not to cooperate. It could be that costly punishment is beneficial in these other games, but the use of costly punishment in games of cooperation seems to be maladaptive. We have shown that in the framework of direct reciprocity, winners do not use costly punishment, whereas losers punish and perish.
In other words, the winners in costly punishment games don't do well - they simply do less badly than everyone else.

ResearchBlogging.org

Hisashi Ohtsuki, Yoh Iwasa, Martin A. Nowak (2009). Indirect reciprocity provides only a narrow margin of efficiency for costly punishment. Nature, 457 (7225), 79-82. DOI: 10.1038/nature07601

Anna Dreber, David G. Rand, Drew Fudenberg, Martin A. Nowak (2008). Winners don't punish. Nature, 452 (7185), 348-351. DOI: 10.1038/nature06723

A cup of coffee to raise the dead

Badscience rips this one apart nicely!

In the news today: a study from psychologist Charles Fernyhough and PhD student Simon Jones at the University of Durham linking caffeine intake to the risk of hallucinations. Apparently high caffeine users - seven cups of instant coffee per day - have three times more hallucinations (including hearing voices) than low caffeine users. The Metro this morning provided a handy ready-reckoner - apparently that equates to just 1.7 cups of takeaway coffee (so watch out, all you caffeine fiends out there!)

Now for a reality check. This was an observational study not an interventional one. All they did was ask 200 students about their caffeine intake and hallucinatory experiences. They didn't actually dose them with caffeine to see what happens. Fernyhough points out:
“Our study shows an association between caffeine intake and hallucination-proneness in students. However, one interpretation may be that those students who were more prone to hallucinations used caffeine to help cope with their experiences. More work is needed to establish whether caffeine consumption, and nutrition in general, has an impact on those kinds of hallucination that cause distress.”
Also, you need to consider the possibility that students with a high caffeine intake may also indulge in other drugs...

Anyway, putting that to one side, they have an interesting hypothesis about how caffeine does its trick. They point out that caffeine increases cortisol, the stress hormone, and that cortisol is linked to some aspects of psychosis. So the increase in cortisol might be expected to trigger hallucinations. And this in turn ties in with other observations of a link between feeling out of control and seeing things that aren't there.

Religion and spiritual beliefs do not make happy children: friends and values do

Here's a study that's been reported badly in the press (e.g. the Washington Times), because what psychologists mean by the term 'spirituality' is not the same as what ordinary people mean by it.

So when a study reports that children's happiness is linked to their spirituality, the first question to ask is 'what exactly do they mean?' And here's where it gets interesting.

In the study in question, by Mark Holder and colleagues at the University of British Columbia, they used a standard questionnaire-based measure of spirituality, the Spiritual Well-Being Questionnaire (SWBQ). This has four components:
  • Personal (meaning and value in one’s own life)
  • Communal (quality and depth of inter-personal relationships)
  • Environmental (sense of awe for nature)
  • Transcendental (faith in and relationship with someone or something beyond the human level)
You can see that only the last one, Transcendental, means anything like the conventional meaning of the term 'spiritual'.

They gave this questionnaire to 320 children aged 8-12 from a mix of state and faith schools. They also gave them a questionnaire on their religiousness (which measured their beliefs and practices) and a battery of three questionnaires on their happiness.

What they found was that there was no correlation between happiness and religiousness, and that the correlation with transcendental spirituality was weak and inconsistent. The correlation with environmental spirituality was somewhat stronger, but the strongest correlations were with personal and communal spirituality.

But there is a problem. Happiness was also linked to personality. If personality and spirituality are connected, then that would skew the results (if, for example, shy children are more spiritual). So they did a hierarchical regression, which corrected for personality and also for sex and type of school.
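For the curious, a hierarchical regression of this kind enters the control variables first, then the spirituality scales, and asks how much extra variance the second block explains. A minimal sketch (the file and variable names are hypothetical stand-ins for the study's measures):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-ins for the study's measures - not the real dataset.
df = pd.read_csv("children_happiness.csv")

# Block 1: the control variables alone.
base = smf.ols("happiness ~ temperament + sex + school_type", data=df).fit()

# Block 2: add the four SWBQ components.
full = smf.ols(
    "happiness ~ temperament + sex + school_type"
    " + personal + communal + environmental + transcendental",
    data=df,
).fit()

# How much extra variation in happiness do the spirituality scales explain,
# over and above personality and demographics?
print(f"R-squared: {base.rsquared:.3f} -> {full.rsquared:.3f}")
print(full.params)  # the sign of each coefficient gives the direction of the effect
```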

The results for one of the happiness measures are shown in the figure. The others are pretty much the same - but this is the only one in which transcendental spirituality reached statistical significance.

You can see that personal and communal 'spirituality' are really important. They account for about 5% and 2% of the variation in happiness respectively - which may not sound much, but is actually quite a lot by the standards of these kinds of studies. Religion also has a positive effect, but it is small (<0.05%) and probably just a chance effect (it's not statistically significant).

But look at 'transcendental' spirituality! It is significant, and the effect is to reduce happiness! The exact opposite of what the news reports would leave you thinking!

So what to make of this? Well, the first lesson is not to trust news reports about studies on 'spirituality'. Journalists and psychologists mean different things by the term.

Secondly, we are starting to get a good grasp on what makes for happy children (all the factors together explain over 20% of the variation in happiness between children). And the answer won't be a surprise to humanists.
To make children happier, we may need to encourage them to develop a strong sense of personal worth, according to Dr. Mark Holder from the University of British Columbia in Canada and his colleagues Dr. Ben Coleman and Judi Wallace. Their research shows that children who feel that their lives have meaning and value and who develop deep, quality relationships - both measures of spirituality - are happier. It would appear, however, that their religious practices have little effect on their happiness. (ScienceDaily)

Happy children are ones who are loved and valued, who have a strong sense of community and friendship, and who are shielded from excessive spiritual mumbo jumbo.

ResearchBlogging.org

Mark D. Holder, Ben Coleman, Judi M. Wallace (2008). Spirituality, Religiousness, and Happiness in Children Aged 8–12 Years. Journal of Happiness Studies. DOI: 10.1007/s10902-008-9126-1

Religion and self-control: the junk science version

Seems like you wait forever for a review on religion and self-control, and then two come along at once. Last month two psychologists from the University of Miami put forward their hypothesis that religion can increase self-control. It was long on theory, but short on observational evidence.

Turns out there was another one published at the same time, in the form of a white paper (i.e. not peer reviewed) by four faculty at the Grove City College Center for Vision and Values, an evangelical Protestant college (motto: Advancing Freedom with Christian Scholarship).

Their essay (Social Organizations as a Path to Self-control: Does Religious Participation Promote Character Development?) is conceived as an attack on Dawkins's assertion that religion is pernicious. Now, there are an enormous number of flaws in this white paper, but it would be tedious to go through them all. So let's concentrate on the biggies.
“Theory and empirical research,” the authors conclude, “point to religious participation continuing to be important for character development in the lives of Americans in the 21st century.”
In fact, their empirical evidence does no such thing. Basically what they provide here is a few studies that have found a correlation between certain aspects of behaviour and attendance at religious services. All of these studies are fundamentally flawed for a couple of reasons.

The most important of these is self-selection. In a society like America, where most people are nominally religious, going to church every week is a strong indication of having a conscientious personality type. Conscientious people are by their nature less likely to have lives that get out of control. This fact is so blindingly obvious that they acknowledge it themselves:
An alternative explanation for the average benefits of religion that seems quite persuasive on the surface is that people who have their lives in order are more likely to go to church and participate in religious activities. Under this model, religious participation has no influence on character; it is simply that those who already have character are more likely to be active members of religious communities. According to this line of reasoning, the religious communities have no meaningful influence on people; the apparent effects of religion are illusory.
Now, the interesting thing about this paper is that they focus on religious behaviour, rather than religious beliefs - presumably because they could find a correlation with behaviour, but not with beliefs. And self-selection would explain that nicely.

They reject this, on the following grounds:
This claim that belonging to a close knit community would have no influence on people defies sound logic and fundamental psychological theory that we have reviewed.
But this simply reveals another fundamental flaw in their analysis. Nobody denies that belonging to a community, or giving kids the right kind of environment to grow up in, is beneficial. But the authors assume that the only (or perhaps the best) route to this is religion. In fact, secular institutions can be every bit as effective - if not more so. (This is Dawkins's point, of course: that the moral message from secular institutions is superior to that of religious ones.)

Another flaw is that the studies they look at tend to assess self-reported behaviour. But it's known that religious people are more likely to over-report their good deeds. When you do actual studies observing actual behaviour, these differences disappear (The Psychology of Religion, p422).

And of course we have the mother of all observational studies, in that there are countries with very low levels of religion. As Phil Zuckerman points out in his recent book Society Without God, Denmark is one of the least religious countries in the world. And although the Danes can be very weird, they are not noted for being a society on the verge of chaos.

What do people pray for?

When people pray, what do they think they will get out of it? It's an important but under-researched question, because it sheds light on the role of religious beliefs in society (as opposed to the role of religion, which is much larger).

For example, one of the criticisms that theologians make of The God Delusion is that Dawkins describes God in very concrete terms. This is not the real God, they complain - an entity that they describe in what seems to me painfully abstract and circumlocutory terms (see this, for example, or indeed any of the writings of the Oxford theologian Nicholas Lash).

But what do ordinary people actually believe in? An abstract, metaphorical god? Or a concrete, personal one?

Simply asking people is not necessarily going to give you a good answer, because people will often tell you what they think they ought to say. Wendy Cadge, a sociologist at Brandeis University in Massachusetts, hit upon an innovative approach to this problem when she found that people were recording their own, anonymous prayers in a public prayer book in the rotunda at Johns Hopkins University Hospital:
Although the statue of Jesus Christ has stood in the hospital since 1896, it was not until the early 1990s that people began to leave prayers written on napkins, scraps of paper, and the back of visitor’s badges and business cards at the statue’s base.

So that the prayers were not lost, hospital chaplains placed a blank book on a stand by the statue that is filled with prayers every two to three months. Anyone entering or leaving the hospital can write in the prayer book and/or read the prayers other people have written. People write prayers longhand, filling the pages with words and drawings. Some leave photographs, children’s drawings, flowers, and coins at the statue.

This is a valuable resource. Although the prayers recorded are public, and so might differ somewhat from private prayer, they are anonymous and also they weren't prompted by researchers - these are people's genuine, unprompted thoughts.

Cadge collected and analysed a total of 683 prayers, and what she found was strong evidence for belief in a personal god - a sort of comforting confidant. 22 percent of the prayers expressed thanks to God, while 28 percent were requests of God and another 28 percent were prayers to both thank and petition God.

Cadge said the information sheds light on the psychology of the people behind the prayers. Most prayer writers addressed God as they would a relative, friend or parent, preferring familiarity over deference, she said.

"Most prayers writers imagine a God who is accessible, listening and a source of emotional and psychological support, who, at least sometimes, answers back," Cadge said in a press release. [NB this press release is factually incorrect: Cadge's study gives no data on how often people pray, only on what they pray for].

So when these people pray (and these are Americans visiting a hospital, of course, and so it can't necessarily be extrapolated more widely), they imagine god very much as a person with whom you can have a conversation. Cadge writes:

As a group, these prayer writers conceive of God as accessible, as actively listening, and as a source of support. They begin prayers with Dear, Hello or Hey and sign them with their name or initials, almost like e-mails. Some make immediate requests and others thank God for listening; Sweet Jesus, Thank you for listening. The word love is common, We lift up N. to you, heal her heart and Help P. and her boys cope... I love you. Love, M.

Many of these prayers read as snippets of ongoing conversations between the writers and God.

But there's an important caveat. Although the writers imagine God to be a supernatural presence with magical powers, they are careful not to ask for anything that could be construed as direct evidence of this. They don't for example, ask God to heal the sick.

Rather than thanking God for specific outcomes or making detailed requests, writers frame their prayers broadly in emotional and psychological language. Prayer writers do not ask God to heal a broken leg but to give them the “strength” to get through this difficult time.

Rather than asking God for particular news at a doctor’s visit, a writer asks God to remember M. as we go to see his doctors today. Remember him in prayer and bless him always.

What these prayers rarely do is ask an all-powerful God to cure an incurable condition; they do not ask for miracles.

All of which puts me strangely in mind of this cartoon:

ResearchBlogging.org
Wendy Cadge, M. Daglian (2008). Blessings, strength, and guidance: Prayer frames in a hospital prayer book. Poetics, 36(5-6), 358-373. DOI: 10.1016/j.poetic.2008.06.011

Why Darwin is a poster child for atheism

Madeleine Bunting, writing in The Guardian, can't understand why atheists have the hots for Darwin:
The fear is that the anniversary will be hijacked by the New Atheism as the perfect battleground for another round of jousting over the absurdity of belief (a position that Darwin pointedly never took up). A poll for the BBC in 2006 found that less than half the British population accepted the theory of evolution as the best description for the development of life. Comparable figures in the US are attributed to its intense religiosity, but given the very low levels of regular worshippers in the UK, religious faith can't account entirely for the resistance to Darwinian evolution. So what is it?
Well, in fact, Bunting has missed a key point. Yes, many people in the UK don't accept evolution (although data from international social surveys, rather than a BBC poll, suggest that around 75% of British people accept it [Miller, 2006]). But acceptance here is still far higher than in many other countries: comparable polls suggest that acceptance of evolution in the UK is nearly twice that in the USA (Miller, 2006 again).

Miller's study also analysed the factors that lead people in Europe or the USA to accept or reject evolution. And wouldn't you know, in both places far and away the most important factor in rejecting evolution was religion - and that's after accounting for the other factors in Miller's model, such as education and attitudes to science. In other words, yes, even in Britain a lot of people reject evolution. But that's because, even in Britain, a lot of people are religious.
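For the statistically minded, the analysis is essentially a regression of this kind. Here's a minimal sketch with fabricated data - the variable names, effect sizes, and simple logistic model are all invented for illustration, and Miller's actual analysis (a two-group structural equation model) is more elaborate:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fabricated data: does religiosity predict rejection of evolution
    # once science knowledge is controlled for?
    rng = np.random.default_rng(0)
    n = 1000
    religiosity = rng.normal(size=n)        # stand-in for belief/prayer scales
    science_knowledge = rng.normal(size=n)  # stand-in for genetic literacy
    # Assumed effects: knowledge helps acceptance, religiosity hurts it.
    logit = 0.5 + 1.0 * science_knowledge - 1.5 * religiosity
    accepts = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([religiosity, science_knowledge])
    model = LogisticRegression().fit(X, accepts)
    print("coefficients [religiosity, knowledge]:", model.coef_[0])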

The effect of religion is not as big in Europe as it is in the USA. And that's because people who have religion in the USA are typically more conservative (read: fundamentalist) than in Europe. Miller concludes:
...individuals who hold a strong belief in a personal God and who pray frequently were significantly less likely to view evolution as probably or definitely true than adults with less conservative religious views.
So not only does religion stop you from accepting evolution, but the stronger your religion the worse the effect.

Perhaps this is just coincidence. We know that religious people are, on average, more poorly educated and of (slightly) lower IQ than atheists - especially in the USA. Perhaps religious people just don't accept science? Well no, it turns out that doesn't really explain it either.

In 2008 Tania Lombrozo and colleagues (University of California) quizzed 96 undergraduates (mostly psychology undergrads) about their attitudes to science and evolution. What they showed was that understanding of the nature of science was indeed very important for accepting evolution. But they also found that religion was just as important.

Now, in this admittedly rarefied group they found no meaningful relationship between religious belief and understanding of science. So what this shows is that religion directly and significantly reduces acceptance of evolution, even in people who understand science and would otherwise accept it.

Rejection of Darwinian evolution is a stark example of how religion can cause people to deny truths about the world around us. And that, Madeleine Bunting, is why Darwin has become a poster child of the atheist movement.

But perhaps this is all a misunderstanding. Perhaps there isn't really a conflict between religion and Darwinism? Bunting doesn't think so:
Many of the prominent voices in the New Atheism are lined up to reassert that it is simply impossible to believe in God and accept Darwin's theory of evolution; Richard Dawkins and the US philosopher Daniel Dennett are among those due to appear in Darwin200 events. It's a position that infuriates many scientists, not to mention philosophers and theologians.
And yet isn't it a rather extraordinary coincidence? Religious people don't deny fluid dynamics, for instance. Perhaps there is something about evolution that is, at heart, difficult for the religious to stomach. And I don't mean just the idea that god didn't create people using magic. I mean what evolution tells us about what kind of god could exist.

Despite Bunting's assertions, theologians at least are aware of the problem. Amy Frykholm, writing in The Christian Century, explains:
... knowledge of evolutionary history raises questions of theodicy in an especially disconcerting way. Evolution reveals a vast history of unfathomable waste, loss, extinction, suffering and death in the natural world. What has God been up to all these millennia? And what is God up to now? If we believe that God oversees creation, then God's way of doing it through evolution seems strange and even appalling.
Frykholm goes on to describe the modifications theologians have made to their ideas about god in order to reconcile them with evolution. In other words, to accept evolution you have to reinvent god. For many people, that's a step too far. For many others, acceptance of evolution is the first step on the path to atheism (and humanism).

ResearchBlogging.org

Jon D. Miller, Eugenie C. Scott, Shinji Okamoto (2006). Public acceptance of evolution. Science, 313(5788), 765-766. DOI: 10.1126/science.1126746

Tania Lombrozo, Anastasia Thanukos, Michael Weisberg (2008). The importance of understanding the nature of science for accepting evolution. Evolution: Education and Outreach, 1(3), 290-298. DOI: 10.1007/s12052-008-0061-8

Hello Humanists4Science and Epiphenom

The BHA Science Group has a new name, one chosen to better reflect the goals of the group and one that also puts a little space between us and the British Humanist Association. The new group, Humanists4Science, will continue to use the same Yahoo Group as the focal point for its activities.

As a result, this blog also has a new name and a makeover! It's no longer a group blog, but a personal one (although I'll still post group news up here as appropriate). And who am I? Well, you can find that info, along with the rationale for the choice of new blog name, over in the 'About' section on the right-hand side.

Make a logo for Humanists4Science - win a year's sub to the BHA!

The new group needs a new logo. Here's one proposal - can you do better? The best entry will win a prize, donated by Crabsallover (H4S committee member): one year's full subscription membership of the British Humanist Association - worth £35! The winning logo will be chosen by the Humanists4Science committee on 1st March 2009.

Email your entry to: crabsallover at btinternet.com
Latest entry date: 28th February 2009.

And please do join the group if you're interested in humanism and science. Simply sign up to the Yahoo group and you're in. That's all it takes!

The childish beliefs of Dr Justin Barrett

Justin Barrett is a Senior Researcher at the University of Oxford’s Centre for Anthropology and Mind and a lecturer in the Institute of Cognitive and Evolutionary Anthropology. He's also a devout Christian who believes that we have an inbuilt predisposition to believe not just in superstitious stuff, but also in a monotheistic god. And he takes this as evidence that god is real, and not invented.

Back in November, he gave a talk at the Faraday Institute for Science and Religion in Cambridge in which he said a few things (such as: "You have to indoctrinate someone into being an atheist") that got AC Grayling's blood boiling (Grayling is a Professor of Philosophy at the University of London). Here's what Grayling had to say about it:
Barrett and friends infer from the first half of these unexceptionable facts that children are hardwired to believe in a supreme being. Not only does this ignore the evidence from developmental psychology about the second stage of cognitive maturation, but is in itself a very big – and obviously hopeful – jump indeed. Moreover it ignores the fact that large tracts of humankind (the Chinese for a numerous example) have no beliefs in a supreme being, innate or learned, and that most primitive religion is animistic.
Barrett responded by complaining:
Had Grayling attended the seminar as Brown did (or read my book, Why Would Anyone Believe in God?), he would know that I do not say that religion is "hardwired" or "innate" – rather that children have propensities to believe in gods because of how their minds naturally work.
Well, luckily a video of the talk is archived at the Faraday Institute (listed under Brain/Psychology), so we can see for ourselves what all the fuss is about.

The first 20 minutes or so is stuff we've heard before from the likes of Scott Atran and Pascal Boyer - children seem to be programmed to see intention and design, rather than happenstance and random chance. Of course, this is true of adults too. Humans in general tend to err on the side of seeing patterns where there are none. But it's especially true of young children.

The more interesting bit starts at around 21 minutes, when Barrett starts discussing his own work on infant understanding of the mental states of other people.

What he shows is that 3-year-olds think their parents have god-like omniscience - they know everything that the child knows. And when you ask them about god, they think the same. It's not until kids reach 5 that they understand that their parents don't know what's going on inside their heads, and that there are things only they themselves know.

But when you ask 5-year-olds about god, well, they still think that god knows everything (as do adult believers, of course). Barrett takes this as evidence that kids come with a 'correct' knowledge of god built in. Young kids get god right, and mum wrong; as he says of 5-year-olds: "They are not dumbing down god, they are smarting up mom" (30 minutes).
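The pattern is easier to see laid out as data. A minimal sketch - the proportions below are invented, and only the qualitative pattern matches what Barrett describes:

    # Invented proportions of children answering "yes" when asked whether
    # an agent knows something only the child knows (e.g. a box's contents).
    knows_my_secret = {
        3: {"mum": 0.9, "god": 0.9},  # everyone seems omniscient
        5: {"mum": 0.2, "god": 0.9},  # mum loses omniscience, god keeps it
    }

    for age, answers in knows_my_secret.items():
        print(f"age {age}: {answers}")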

Now these experimental findings are not in dispute. It's the interpretation that's open to question. As Barrett says in the Q&A session: "What we do with the interpretation, depending on our worldview, is a completely different issue. But at least we can agree on what the science is starting to show."

So whose interpretation is right? Is Barrett right that belief in an omniscient god is built in, and that we have to be persuaded otherwise? Or is Grayling right that very young kids just aren't smart enough to figure out how minds work - and that, once they learn that others can't actually read their minds, they would extrapolate that to god too if it weren't for cultural indoctrination?

My money is on Grayling. I think it's pretty clear what's happening here. Young kids, like animals such as monkeys, simply don't have a theory of mind. They think that others know what's in their head, and of course when they are told about this invisible person called 'God' they extend these misconceptions to it.

As kids grow up they figure out that the people around them do not, in fact, know what's going on inside their heads. They have plenty of evidence from observing how people behave, and employ their increasing brain power to work out the truth.

Of course, they can't do this for God, because God is a fictional entity. All they have to go on is what adults are telling them. And so, following the lead of the adults around them, they continue to accept that God is omniscient. Many kids have similar beliefs about Santa Claus, and for the same reason.

Barrett says that young kids 'get god right'. But the only reason they do is that god is an extension into the adult world of a childish understanding of how the world works. This isn't too surprising, of course. The Judaeo-Christian concept of god is an unabashed imaginary father figure. Adults attribute to it the superhuman powers they once believed their own fathers had.

So the reason young kids 'get god right' is that their brains aren't fully developed. The reason older kids and adults 'get god right' is that god is their imaginary friend.

But what Barrett wants to know is: "Why did these beliefs and not others?" So, is there any evidence that these beliefs are what you get if you start to degrade the adult brain's ability to reason about the world? Well, yes there is.

Tania Lombrozo, a psychology professor at Berkeley who runs the Concepts and Cognition Lab, has done experiments similar to those described by Barrett, but in Alzheimer's patients rather than young kids. And it turns out that, just like kids, Alzheimer's patients tend to see design everywhere. This is an excerpt from an article on her work in the Berkeley Science Review:
Unlike children, most educated adults know that clouds form because water condenses, and that mountains exist because of plate tectonics. However, Lombrozo was interested in whether adults would fall back on teleological reasoning in the absence of background knowledge. To address this question, she and her colleagues Deborah Kelemen and Deborah Zaitchik examined a group of adults whose background beliefs were compromised, but who had otherwise developed normally: Alzheimer's patients.
"Alzheimer's patients have some characteristics of adults and some characteristics of children," says Lombrozo. "Like adults, they have undergone normal development and have presumably gotten rid of any reasoning strategies associated only with children. But like pre-school children, they might not have access to the kinds of rich causal beliefs that adults typically have access to."
In her study, subjects were asked to identify the most appropriate answers to a series of "why" questions. For example, for the question "Why does the earth have trees?" they could choose between "because they grow from tree seeds," or "so that animals can have shade." Lombrozo found that like young children, Alzheimer's patients were much more likely than age-matched control subjects to prefer teleological explanations, picking the teleological choice about twice as often as their healthy counterparts.
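To illustrate the kind of comparison that result rests on, here's a minimal sketch with invented counts - the real study's numbers differ, but the 'roughly twice as often' pattern is what matters:

    from scipy.stats import chi2_contingency

    # Invented counts of teleological vs mechanistic answer choices.
    #                 teleological  mechanistic
    alzheimers_row = [40, 60]
    controls_row   = [20, 80]   # healthy age-matched controls

    chi2, p, dof, expected = chi2_contingency([alzheimers_row, controls_row])
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")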
So kids are like adults but with an important bit of their brain missing. And that's why they 'get god right'!

The power of evolutionary psychology

The Economist has a very nice article on the power of evolutionary psychology to revolutionise our understanding of human behaviour. Many traditional explanations for some of the more puzzling things people do are based on little more than guesswork, and policy-making suffers as a result.

The article is a neat and concise demonstration of just why it is so important for ordinary people - policy makers and the people who vote for them - to understand and accept the reality of evolution. Religiously motivated denial of evolution is not simply an emblem of the problems of faith-based thinking. It has real world implications. Social problems cannot be effectively addressed unless we understand what causes them.

For example, many forms of crime are a logical response to low social status. Violent crime may seem illogical, since it often ends in your own death or a long stretch in prison. But competition for mates among men of low social status is so fierce that the gamble can be worth it. And this explains the social power of marriage:
Sexual success, by contrast, tends to dampen criminal behaviour down. Getting married and having children—in other words, achieving at least part of his Darwinian ambition—often terminates a criminal’s career. Again, that is a commonplace observation. However, it tends to be explained as “the calming influence of marriage”, which is not really an explanation at all. “Ambition fulfilled” is a better one.
The article goes on to discuss other social puzzles, such as pay differentials between men and women, racism, and obesity. Critics of evolutionary psychology often dismiss these ideas as 'just-so stories', because they are too often based on inference from prejudice and shaky theory. And yet even if this were true (as indeed it is in some cases), they would be no worse than our current approaches to understanding these problems.

But evolutionary psychology is different because it is, in principle, a science. It provides a rational framework for developing theories, which can be tested both in fieldwork and mathematically. Although it's still a science in its infancy, it's one that will totally revolutionise our understanding of humanity.

So what about religion? Such a fundamental part of human society, it's completely ignored by The Economist. Perhaps this is because the idea that religion is not divine revelation but rather just a by-product of our evolutionary history is still a pill that many cannot swallow.

Just as likely, however, it's because our understanding of the evolutionary roots of religion is still very poor. What's not in doubt is that we have a lot of inbuilt errors in the way we perceive and think about the world - shortcuts that helped our ancestors deal with a complex world, but which have unfortunate side effects - and that these lead to just the sorts of ideas that underpin a lot of superstition and religion. But these errors do not in themselves make a religion. So is there some special benefit to religion from a Darwinian perspective? Or is the whole thing just an accident of our attempts to form cohesive societies that got wrapped up in our faulty thinking? Is religion a by-product, or did belief in the supernatural actually give a survival benefit to our tribal ancestors?