Friday, 25 April 2014

Getting to grips with implicit bias

Implicit attitudes are one of the hottest topics in social psychology. Now a massive new study directly compares methods for changing them. The results are both good news and bad news for those who believe that some part of prejudice lies in our automatic, uncontrollable reactions to different social groups.

The implicit association test (IAT) is a simple task you can complete online at Project Implicit. It records the speed of your responses when sorting targets, such as white and black faces, into different categories, such as good and bad. Even people who disavow any prejudiced beliefs or feelings can have IAT scores which show, for example, that they find it easier to associate white faces with goodness and black faces with badness – a so-called 'implicit bias'.

The history of implicit bias research is controversial – with arguments over what exactly an implicit bias means, how it should be measured and whether such biases can be changed [see also this recent Digest item]. Now a new paper in the Journal of Experimental Psychology: General reports the results of a competition which challenged researchers to design brief interventions aimed at changing people's implicit biases. The interventions had to be completed online, via the Project Implicit website, and take less than five minutes. Samples of 300-400 people were then randomly assigned to each intervention, giving high statistical power to estimate the effect of each intervention on IAT scores.
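
For readers who want a feel for the numbers, here is a minimal sketch of the kind of power calculation implied by those sample sizes. It is not taken from the paper: the effect sizes and the figure of 350 people per condition are assumptions for illustration, and a simple two-group normal approximation stands in for whatever analysis the authors actually ran.

```python
from math import sqrt
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-group comparison of means
    (normal approximation) for a standardised effect size d."""
    z_crit = norm.ppf(1 - alpha / 2)        # critical value of the test
    ncp = d * sqrt(n_per_group / 2)         # expected size of the test statistic
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Assumed values, not figures from Lai et al.: roughly 350 people per
# condition and a range of plausible small-to-medium effects on IAT scores.
for d in (0.2, 0.3, 0.5):
    print(f"d = {d}: power ~ {two_sample_power(d, n_per_group=350):.2f}")
```

With groups this size, even a modest shift of d = 0.2 would be detected about three times out of four, which is why effect sizes near zero can be read as genuine failures rather than underpowered misses.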


Overall 17 interventions were tested: nine appeared to work, while eight had estimated effect sizes close to zero. The paper reports that interventions which focused on trying to shift participants' underlying attitudes fared badly. Interventions such as 'instilling a sense of common humanity', 'training empathetic responding', encouraging participants to take the perspective of the outgroup, or imagining positive interracial contact all seemed not to work.

These failures to shift IAT scores suggest that the IAT measures something relatively stable – a real feature of our cognitive makeup – and measures it in a way that can't be manipulated as easily as self-report.

The interventions which did work included some that targeted response strategies, including a straight 'Faking the IAT' intervention, a 'Practising the IAT' intervention and several other priming and training interventions. That these worked is also both good and bad news. That IAT scores can be shifted by faking and training is bad news for the reliability of the measure, but there is some comfort in knowing that the successful interventions all relied on sophisticated knowledge of how the IAT works – most participants in implicit bias studies wouldn't come up with these strategies on their own.

The big unknown is how long-lasting any of the effects are. It could turn out that sustained change in implicit biases requires more than a five-minute intervention, and that with more sustained interventions it really is possible to shift the underlying attitudes, and not just people's response strategies.

_________________________________ ResearchBlogging.org

Lai CK, Marini M, Lehr SA, Cerruti C, Shin JE, Joy-Gaba JA, Ho AK, Teachman BA, Wojcik SP, Koleva SP, Frazier RS, Heiphetz L, Chen EE, Turner RN, Haidt J, Kesebir S, Hawkins CB, Schaefer HS, Rubichi S, Sartori G, Dial CM, Sriram N, Banaji MR, & Nosek BA (2014). Reducing Implicit Racial Preferences: I. A Comparative Investigation of 17 Interventions. Journal of Experimental Psychology: General. PMID: 24661055

Post written by guest host Tom Stafford, a psychologist from the University of Sheffield who is a regular contributor to the Mind Hacks blog, for the BPS Research Digest.

Thursday, 24 April 2014

Alcohol could have cognitive benefits – depending on your genes

The cognitive cost or benefit of booze depends on your genes, suggests a new study which uses a unique longitudinal data set.

Inside the laboratory, psychologists use a control group to isolate the effects of specific variables. But many important real-world problems can't be captured in the lab. Ageing is a good example: if we want to know what predicts a healthy old age, running experiments is difficult, not least because it would take a lifetime to get the results. Questions about potentially harmful substances are another good example: if we suspect something may be harmful we can hardly give it to half of a group of volunteer participants. The question of the long-term effects of alcohol consumption on cognitive ability combines both of these difficulties.

If a problem can't be studied in the lab then everything you measure could be affected by other things you can't control. So, for example, if you find that older people who drink have a worse memory than older people who don't drink, you don't know if the drinkers had a worse memory to begin with, or if people from certain social groups are likely both to have a worse memory and to drink more. These potential confounds mean that you can't be sure whether drinking is having any effect on cognition, even if you find a difference between groups in both how much they drink and their cognitive abilities.

A new study from the University of Edinburgh uses a unique longitudinal dataset – the Lothian Birth Cohort – and an ingenious analysis technique called Mendelian randomisation to disentangle the causal influence of drinking alcohol on cognitive change in older age.


The Lothian Birth Cohort consists of Scots born in 1936 who were studied during the Scottish Mental Survey of 1947 and then followed up between 2004 and 2007 by Professor Ian Deary of the University of Edinburgh. The data these participants provided included an IQ test at around the age of 11 and another at around the age of 70. Importantly, possible confounds on IQ test scores, such as socioeconomic status and education, were also recorded, as well as a measure of alcohol consumption (taken at the age of 70, covering 'the past few months').

With this data you could assess how alcohol affected cognitive change – testing whether those who tended to drink more had suffered a greater drop in their cognitive ability compared to those who drank less (as some studies have found), or whether they stayed the same or even improved their cognitive function (as some other studies have found).

The neatest part of the study, however, was to look at the influence of genetics on the effects of alcohol. Certain rare genetic variants are known to be related to the body's ability to process alcohol. Individuals with more of these variants are worse at processing alcohol and so, for a given amount of alcohol consumption, will have higher exposure to its potentially damaging effects. The analysis, led by Dr Stuart Ritchie, showed that individuals with a poor ability to process alcohol did indeed suffer a cognitive decline if they drank more alcohol. But individuals with a good ability to process alcohol showed the reverse effect – for these individuals, higher alcohol consumption predicted improvements in cognitive ability.
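
In outline, the statistical logic is a gene × environment interaction in a regression on later-life cognition, adjusting for childhood IQ. The sketch below is a reconstruction under assumed variable names (iq_age70, iq_age11, weekly_alcohol_units, adh_risk_alleles, and so on), not the authors' actual code or dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names standing in for the Lothian Birth Cohort measures.
df = pd.read_csv("lothian_example.csv")  # placeholder file, not the real data

# Adjusting for age-11 IQ means the outcome effectively indexes lifetime
# cognitive change; the alcohol * genotype term carries the interaction:
# does the association between drinking and change depend on how many
# slow-metabolising (higher-exposure) variants a person carries?
model = smf.ols(
    "iq_age70 ~ iq_age11 + weekly_alcohol_units * adh_risk_alleles"
    " + social_class + years_education",
    data=df,
).fit()
print(model.summary())
```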

The authors warn that the study is blind to the other, known, detrimental effects of alcohol consumption (such as heart and liver disease), and speculate that the results may be due to an anti-inflammatory effect of alcohol which – only in those with the right genes – means the cognitively protective effects can outweigh the cognitively harmful ones.

_________________________________ ResearchBlogging.org
Ritchie SJ, Bates TC, Corley J, McNeill G, Davies G, Liewald DC, Starr JM, & Deary IJ (2014). Alcohol consumption and lifetime change in cognitive ability: a gene × environment interaction study. Age (Dordrecht, Netherlands) PMID: 24652602

Post written for the BPS Research Digest by guest host Tom Stafford, a psychologist from the University of Sheffield who is a regular contributor to the Mind Hacks blog.

Tuesday, 22 April 2014

A self-fulfilling fallacy?

Lady Luck is fickle, but many of us believe we can read her mood. A new study of one year's worth of bets made via an online betting site shows that gamblers' attempts to predict when their luck will turn have some unexpected consequences.

A common error in judging probabilities is known as the Gambler's Fallacy. This is the belief that independent chance events have an obligation to 'even themselves out' over the short term, so that a run of wins makes a loss more likely, and vice versa. An opposite error is the belief that a run of good luck predicts more good luck – when a basketball player succeeds in a number of successive shots, they are said to have a 'Hot Hand', meaning a better chance of succeeding with their next shot. While the hot hand might be possible in games of skill, it is a logical impossibility for truly chance events.

Juemin Xu and Nigel Harvey from University College London set out to study the role these fallacies might play in a highly relevant real-world setting: real bets made by people gambling online. Most psychology studies are carried out on undergraduate psychology students, who participate as an obligatory part of their course – hardly the strongest motivation for taking part in a study. With Xu and Harvey's sample there's no doubt the participants were sincerely motivated in their behaviour: they placed bets worth around £100 million during the 365 days of 2010. If any group had the incentive to judge their luck correctly, this is it.

People who had a run of wins had a higher probability of winning their next bet
The bets from this sample were analysed for runs of wins and losses, revealing a surprising pattern. Although the bets were on unrelated events, from football matches to horse racing, people who had a run of wins had a higher probability of winning their next bet. For example, gamblers who had a run of three wins had a probability of 0.67 of winning their next bet, compared to a probability of 0.45 for those who hadn't had such a winning streak. The researchers analysed streaks of up to six wins in a row and found that the probability of winning the next bet just went up and up. The effect also held for losing streaks, so that those who lost successive bets were also more likely to lose again. The effect didn't seem to be due to skill, since a control analysis showed that the average winnings for gamblers who had long streaks of luck were the same as, or perhaps even slightly lower than, those for gamblers who didn't have long streaks. The result seems to contradict the gambler's fallacy, and even our reasonable faith that bet outcomes should be independent.

The answer to the mystery was revealed when Xu and Harvey analysed the odds of the bets placed by gamblers in the middle of a streak, and the amount they staked. Gamblers who won tended to place their next bet at safer odds than the bet they had just won, with the reverse true for people on losing streaks. This, the researchers suggest, is because they believed in the gambler's fallacy and so expected their luck to turn. This had the paradoxical effect of creating luck for those who were already winning – because they then made bets they were more likely to win – and rubbing in the bad luck of those who were losing – because they made bets which they were less likely to win and so perpetuated their losing streak.
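
The mechanism is easy to reproduce in a toy simulation. The code below is not the authors' analysis: every bet outcome is generated independently, but after a win the simulated gambler moves to safer odds and after a loss to longer odds, with made-up parameter values. The result is a 'hot hand' pattern in the records even though no bet influences any other.

```python
import random

def simulate(n_gamblers=2000, n_bets=50, start_p=0.48, shift=0.07, seed=1):
    """Toy model of gambler's-fallacy-driven odds selection."""
    rng = random.Random(seed)
    after_win, after_loss = [], []
    for _ in range(n_gamblers):
        p, prev = start_p, None
        for _ in range(n_bets):
            won = rng.random() < p          # each outcome is independent
            if prev is not None:
                (after_win if prev else after_loss).append(won)
            prev = won
            # after a win, choose a safer bet; after a loss, a longer shot
            p = min(0.95, p + shift) if won else max(0.05, p - shift)
    return (sum(after_win) / len(after_win),
            sum(after_loss) / len(after_loss))

p_after_win, p_after_loss = simulate()
print(f"P(win | just won)  ~ {p_after_win:.2f}")
print(f"P(win | just lost) ~ {p_after_loss:.2f}")
```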

The study is a great example of how a simple phenomenon – the gambler's fallacy – can have unpredicted outcomes when studied in a complex real-world environment. Don't get too carried away by the rewards of online gambling, however: the paper contains the telling detail that, of all the bets analysed in the study, 178,947 were won and 192,359 were lost – an overall probability of winning of 0.48. Enough to ensure the betting site's profit margin, and to suggest that on average you're going to lose. Unless you're lucky.

_________________________________ ResearchBlogging.org
Xu J, & Harvey N (2014). Carry on winning: The gamblers' fallacy creates hot hand effects in online gambling. Cognition, 131 (2), 173-80 PMID: 24549140

Post written by guest host Tom Stafford, a psychologist from the University of Sheffield who is a regular contributor to the Mind Hacks blog, for the BPS Research Digest.

Thursday, 17 April 2014

A photograph can be worth a thousand words

There has long been a tradition of using photographs to capture, reveal, and expose. A photograph has the ability to arouse emotion – oftentimes, some would argue, more effectively than a verbal or written description.

In a recent article in Social Dynamics, Rory du Plessis of the University of Pretoria (South Africa) has brought to life a case example of the power photographs can hold. In an analysis of two sets of photographs produced by the Grahamstown Lunatic Asylum between 1890 and 1907, du Plessis has revealed two very different faces of the institution – especially regarding the racial makeup of the patient population.

The Grahamstown Lunatic Asylum opened in 1875 in what is now the Eastern Cape Province. Like other South African asylums of the period, Grahamstown adopted the moral treatment philosophy from Europe, which viewed all aspects of the institution and the activities in which the patients were engaged as therapeutic in nature. During the period of focus of this particular study, Grahamstown was working to rebuild its image after receiving heavy criticism regarding its success as a therapeutic institution. A new superintendent, Dr. Thomas Duncan Greenlees, arrived in 1890 and introduced a series of new recreational activities, including: “picnics at neighbouring seaside towns, dances, dramatic entertainments, concerts, magic lantern entertainments, social evenings, cricket, an instrumental band, and croquet and lawn tennis for the women.” At the same time, Greenlees also created a system of differential treatment for the patient population of Grahamstown, with only the White paying patients benefitting from these new activities and Black patients being engaged only in labour projects around the institution.

The photographs examined by du Plessis bring to light these two very different worlds of the Grahamstown Lunatic Asylum.

The first set of photographs were created explicitly for public consumption. These were published in annual reports and the institution’s own periodical (which was sometimes reprinted in medical journals). As du Plessis highlights, these images were carefully orchestrated in order to portray an image of a successfully curative environment. White patients were portrayed in decorated rooms, dressed respectfully, and engaged in recreational activities popular during the period. Black patients, conversely, were featured in drab environments, oftentimes engaged in manual work, in scenarios of passivity and docility. du Plessis describes this set of public photographs as a “marketing tool” intended to normalize the activities of a curative environment (and recruit paying patients). In this context, the success of the supposed curative environment for White paying patients was evaluated through representations of class and wealth whereas the curative environment for Black patients was evaluated through the level of compliance engendered.

Patients are seen being physically restrained by the hands of unseen attendants and nurses
The second set of photographs reveal a very different component of the institution’s history: its stories of resistance, fear, and anxiety. These were created for internal use as part of the patient casebooks. From 1890 onwards, all patients of the Grahamstown Lunatic Asylum were photographed upon their admission. Much like the mug shots used by police departments, the admission photos focused on the face and upper body of the patient. In these silent portraits, du Plessis uncovered a range of powerful emotions displayed by the Black patients that were in stark contrast to the passivity represented in the first set of photographs. In acts interpreted as resistance, patients are seen in the casebook photographs being physically restrained by the hands of unseen attendants and nurses. In others, their gaze is averted as an act of defiance. A few other painful examples reveal the fear or anxiety expressed on the faces of those admitted to the institution.

As du Plessis highlights in his article: “the taking of a photograph is never neutral.” And more so: a picture may speak louder than words.

_________________________________ ResearchBlogging.org
du Plessis, R. (2014). Photographs from the Grahamstown Lunatic Asylum, South Africa, 1890–1907. Social Dynamics, 1-31. DOI: 10.1080/02533952.2014.883784

Post written for the BPS Research Digest by Jennifer Bazar, a Postdoctoral Fellow at the University of Toronto/Waypoint Centre for Mental Health Care and an Occasional Contributor to the Advances in the History of Psychology blog.

Further reading from The Psychologist: 'The house of cure', and 'Such tender years'.

Wednesday, 16 April 2014

Have you exercised your memory lately?

We’ve often heard someone’s memory described as 'weak' or 'strong'. But with the majority of psychological memory models drawing on information processing analogies with terms like 'storage', 'retrieval', and 'input', where did the idea of memory’s strength come from?

In a recent article published in the Journal of the History of the Behavioral Sciences, Alan Collins of Lancaster University reviewed British and American texts dating between 1860 and 1910 that focused on improving human memory. By extending his analysis to include those texts aimed at popular audiences as well as those intended more specifically for academics, Collins noticed a trend during this period in which the importance of enhancing natural memory was emphasised over the creation of artificial memory systems.

The idea of 'artificial' memory was used to describe systems created with the intention of supporting or improving one’s memory capabilities – be they mnemonics or some other form of memory aid. The criticism of such systems at the time was that they required too much mental effort and had only limited practical value. 'Natural' memory, conversely, was used generally to describe our innate memory systems.

Collins explains that the increasing tendency towards discussions of natural memory in the latter decades of the 19th century paralleled a wider emphasis on understanding everything as being a part of nature and therefore subject to natural laws. Guidebooks of the period connected all aspects of one’s life to their general health: a healthy diet, good (moral) habits, pure air, and both a strong mind and a strong body were key to a good character. Natural memory became wrapped up in these recommendations, often described as similar to our bodily functions, especially our muscles. The argument put forth contended that just as our muscles require exercise, training, and discipline, so too does our memory.


But just how does one 'exercise' their memory? In short: repeated practice. The memory improvement texts examined by Collins advised readers to block out a period of time each day to actively exercise their memories. This time could be spent learning lists, reciting poetry, or recounting the events of the previous day. Focused attention on the chosen task was considered to be an especially critical component.

As Collins highlights in his conclusion, today we no longer draw on the muscle metaphor explicitly in discussions of memory, but the concept of 'strength' has remained. However, if we think about the advent of computer games and apps intended to strengthen (or, dare I say 'exercise'?) the mind, perhaps the idea of working out one’s memory is not quite as foreign as it may seem. Besides, learning a verse of poetry sounds a great deal more appealing than hitting the treadmill.

_________________________________ ResearchBlogging.org

Collins AF (2014). Advice for improving memory: exercising, strengthening, and cultivating natural memory, 1860-1910. Journal of the History of the Behavioral Sciences, 50 (1), 37-57. PMID: 24272820

Post written for the BPS Research Digest by guest host Jennifer Bazar, who is a Postdoctoral Fellow at the University of Toronto/Waypoint Centre for Mental Health Care and an Occasional Contributor to the Advances in the History of Psychology blog.

Monday, 14 April 2014

Does Psychology have its own vocabulary?

If you were to pick up the flagship journal from a discipline that is foreign to you and flip to an article at random, how much do you think you would understand? Put a different way: how much of the vocabulary employed in that article might you misinterpret?

The vocabularies used by any given discipline overlap with those of many other disciplines, although the specific meaning associated with a given term may be dissimilar from discipline to discipline. Anglophone psychology, for instance, has been previously shown to share much of its vocabulary with other disciplines, especially: biology, chemistry, computing, electricity, law, linguistics, mathematics, medicine, music, pathology, philosophy, and physics. But how much of psychology’s vocabulary may be said to be unique to itself?

In a recent article in History of Psychology, John G. Benjafield of the Department of Psychology at Brock University (Canada) compared the histories of the vocabularies of psychology and the 12 disciplines listed above. Constructing databases for each of the disciplines using entries in the Oxford English Dictionary, Benjafield examined the rate of primary vs. secondary words (i.e. how often a word was used for the first time by a discipline vs. how often a word was appropriated from the vocabulary of another discipline), along with the dates of first usage of these terms and the polysemy of the vocabularies (i.e. the number of different meanings held by a given word).
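
To make the terminology concrete, here is a toy version of the kind of tally involved. The records are invented and Benjafield's actual databases and coding rules were far richer, but the sketch shows what 'secondary word' (first used earlier by another discipline) and 'polysemy' (number of senses) mean operationally.

```python
from collections import defaultdict

# Invented records: (word, discipline, year of first use in that discipline,
# number of senses recorded for that discipline's usage).
records = [
    ("valence",    "chemistry",  1884, 3),
    ("valence",    "psychology", 1935, 2),
    ("repression", "psychology", 1908, 4),
    ("field",      "physics",    1845, 5),
    ("field",      "psychology", 1930, 2),
]

# The discipline with the earliest first-use date "owns" the primary sense.
first_use = {}
for word, disc, year, _ in sorted(records, key=lambda r: r[2]):
    first_use.setdefault(word, disc)

stats = defaultdict(lambda: {"primary": 0, "secondary": 0, "senses": []})
for word, disc, year, senses in records:
    kind = "primary" if first_use[word] == disc else "secondary"
    stats[disc][kind] += 1
    stats[disc]["senses"].append(senses)

for disc, s in sorted(stats.items()):
    total = s["primary"] + s["secondary"]
    mean_senses = sum(s["senses"]) / len(s["senses"])
    print(f"{disc}: {s['secondary'] / total:.0%} secondary, "
          f"mean polysemy {mean_senses:.1f}")
```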


So does psychology have its own vocabulary? The answer seems to be: somewhat. The majority of the vocabularies of all 13 disciplines were formed of secondary words; that is, the bulk of their vocabularies are formed of words that were first used in the English language by another discipline (often with another meaning). But, psychology was nonetheless found to have some unique characteristics with regards to its vocabulary that you may not have expected.

First, Benjafield found that computing and linguistics have the highest percentage of secondary words in their vocabularies (97 per cent and 94 per cent respectively), while psychology and chemistry had the lowest rates of the disciplines examined (65 per cent and 62 per cent). In light of these results, psychology’s vocabulary may be described as less metaphorical in nature than previously assumed (especially when compared to computing and linguistics).

Moreover, whereas the other subjects in this study showed a collective tendency over time to increasingly assign new meanings to existing words, psychology has followed the opposite pattern – that is to say, over time psychology has tended more and more to invent new words for its own purposes.

Finally – and perhaps the most surprising conclusion to come out of Benjafield’s study – the history of the vocabulary of psychology has been shown to be most characteristically similar to chemistry. Personally, this one caught me by surprise: I would have expected closer connections to philosophy and physics based on the way the discipline of psychology developed over time. But Benjafield’s vocabulary analysis paints a different picture in which psychology has been strongly influenced by the naming practices of chemistry.

_________________________________ ResearchBlogging.org
Benjafield JG (2014). Patterns of similarity and difference between the vocabularies of psychology and other subjects. History of Psychology, 17 (1), 19-35. PMID: 24548069

Post written for the BPS Research Digest by guest host Jennifer Bazar, who is a Postdoctoral Fellow at the University of Toronto/Waypoint Centre for Mental Health Care and an Occasional Contributor to the Advances in the History of Psychology blog.

Friday, 11 April 2014

Facial expressions as social camouflage

Can making faces mask your personality?

According to a group of University of Glasgow psychologists, Daniel Gill and colleagues, it can. Writing in the journal Psychological Science, these researchers say that human facial expressions can signal how dominant, trustworthy, or attractive we are – and that these ‘dynamic’ signals can mask or override the impression given off by the ‘static’ structure of the face.

In other words, someone might have a face that ‘seems untrustworthy’, but if they make the right face, they’ll still look like someone you’d trust with your housekeys.

To reach this conclusion, the researchers made use of software that allows them to generate realistic animated face images. These ‘faces’ are programmed with 42 different sets of muscles – called ‘action units’. Each one of these units could be switched on or off independently from the others, creating billions of possible animated facial expressions – only a tiny proportion of which are likely to be seen in real life. This software has been used before in studies of emotional expressions.

Gill and colleagues generated thousands of random expressions and got volunteers to rate each one for how dominant, trustworthy, and attractive it appeared. From all of these ratings they were able to determine the essence – or prototype – of, for example, a highly trustworthy look. Which, it turns out, involves the activation of the ‘Dimpler’, ‘Lip corner and cheek raiser’, and ‘Sharp lip puller’.
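
The logic of going from thousands of randomly generated expressions plus ratings to a 'prototype' is essentially reverse correlation. The sketch below is a simplified reconstruction, not the authors' pipeline: ratings are simulated from three arbitrarily chosen action units plus noise, and each unit's contribution is estimated by comparing ratings when it is on versus off.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_units = 5000, 42                # 42 'action units', as in the study

# Random expressions: each trial switches a random subset of units on (1) or off (0).
expressions = rng.integers(0, 2, size=(n_trials, n_units))

# Stand-in for human judgements: three hypothetical units (5, 11, 23) drive
# perceived trustworthiness, plus noise. In the real study these come from raters.
true_weights = np.zeros(n_units)
true_weights[[5, 11, 23]] = 1.0
ratings = expressions @ true_weights + rng.normal(0, 1, n_trials)

# Reverse correlation: a unit's contribution is the mean rating when it is
# active minus the mean rating when it is not.
contribution = np.array([
    ratings[expressions[:, u] == 1].mean() - ratings[expressions[:, u] == 0].mean()
    for u in range(n_units)
])
prototype_units = np.argsort(contribution)[-3:]   # units defining the 'prototype'
print(sorted(prototype_units.tolist()))           # recovers units 5, 11, 23
```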

Can a facial expression tell you whether somebody is a good egg or not?
Armed with these dynamic prototypes of dominance, trustworthiness and attractiveness, Gill et al then tested whether they could counteract the effects of static impressions of the same traits. They used the same software to generate thousands of static faces, got volunteers to rate them, and worked out what made someone just look trustworthy, for example.

Then, they overlaid the dynamic expressions on top of the static ones. This revealed that, in general, the dynamic expressions were more powerful than the static traits. Mathematically speaking, the effect of static structure was linear while the dynamic effect was nonlinear and larger in magnitude.

They dub this social camouflaging: ‘Even the most submissive face [was] transformed into a dominant face by social camouflaging and reaches the same level of dominance as the most dominant static facial morphology.’

The same effect worked for dominance and attractiveness as well as trustworthiness, although it wasn’t quite as effective in the case of attractiveness, suggesting that ‘facial attractiveness is more difficult to mask than are facial dominance and trustworthiness’.

This, they say, is no big surprise: ‘Casting directors are probably aware that not all social traits are equal. An attractive character will require an actor with attractive morphology; however, social camouflage can help an actor fake a dominant or trustworthy character.’

However, all of this research was based on computer-generated faces. This provided Gill and colleagues with the ability to examine a wider range of expressions than would have been possible using actual models, but it does mean that these results might need to be confirmed with real faces to verify the relationships between dynamic and static faces.

_________________________________ ResearchBlogging.org

Gill, D., Garrod, O., Jack, R., & Schyns, P. (2014). Facial Movements Strategically Camouflage Involuntary Social Signals of Face Morphology Psychological Science DOI: 10.1177/0956797614522274

Post written for the BPS Research Digest by guest host Neuroskeptic, a British neuroscientist who blogs for Discover Magazine.

Wednesday, 9 April 2014

You don't have to be well-educated to be an ‘aversive racist’, but it helps

Are you a racist?

Most likely, your answer is no – and perhaps you find the very notion offensive. But according to two Cardiff University psychologists, Kuppens and Spears, many educated people harbor prejudiced attitudes even though they deny it. Their research was published recently in Social Science Research.

Kuppens and Spears analysed data from a large survey of the general US population, the American National Election Studies (ANES) 2008-2009. They focused on over 2,600 individuals of white ethnicity, and investigated the relationship between their level of education and their attitudes towards African-Americans.

In common with many previous studies, Kuppens and Spears found that more educated people were less likely to endorse anti-black views on questionnaires. For example, they were less likely to endorse the explanation offered in items like: “Why do you think it is that in America today blacks tend to have worse jobs and lower income than whites do? Is it… because whites have more in-born ability to learn?”

However, while the educated participants reported less explicit prejudice, they did not show a corresponding tendency towards less implicit prejudice, as measured using the Implicit Association Test (IAT).

This method originated in cognitive psychology experiments and it has become widely used as a tool for probing people’s ‘unconscious’ attitudes.

As well as education, Kuppens and Spears explored IAT performance and explicit racial attitude measures across other demographics as well. They found that older white Americans reported less explicit prejudice than younger ones, yet they displayed more implicit bias. Women also endorsed less racist views, but were no different to men on the implicit measures.

Psychologists have long known that our ability to accurately perceive and self-report on our own behaviour is imperfect. If these results are anything to go by, being highly educated might not mean that we’re fully informed about our own implicit prejudices. Kuppens and Spears suggest that educated people were more likely to be ‘aversive racists’ – people who reject racism and consider themselves free of prejudice, yet still harbor implicit bias.


The researchers do note, however, that implicit measures like the IAT are open to several interpretations. In particular, they say, just because someone automatically associates a racial group with negative concepts doesn’t mean that they agree with that association. By itself, it only shows that they are familiar with it: ‘Because the nature of these measures prevents the influence of deliberative considerations on the measurement outcome, it is not clear to what extent they reflect attitudes that are endorsed by individuals, or result from information that individuals have been exposed to, but do not necessarily endorse.’

These concerns stem from the nature of the IAT procedure. In this test, the volunteers had to quickly press either a left button or a right button to categorise a target. In some cases the target was a word, and the object was to categorise its meaning as either good (e.g. ‘love’, ‘friend’) or bad (‘hate’, ‘enemy’). In other cases, the target was a picture of either a black or a white person’s face, and the task was to categorise their race.

The principle behind the IAT is that if someone mentally associates two concepts – say ‘black’ and ‘bad’ – they will find the task easier when they’re asked to use the same button for both. Someone for whom these concepts are linked will tend to respond faster when the button assignments match than when they’re asked to use the opposite arrangement (e.g. the same key for ‘black’ and ‘good’).
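
To make the reaction-time logic concrete, here is a simplified scoring sketch in the spirit of the commonly used IAT D measure. It is illustrative only: the latencies are invented, and published scoring algorithms add error penalties, trimming and block-specific standard deviations that are omitted here.

```python
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Simplified D-style score: mean latency in the 'incompatible' block
    (e.g. black + good share a key) minus the 'compatible' block
    (e.g. black + bad share a key), scaled by the pooled standard deviation.
    Larger positive values = faster responding when the stereotype-consistent
    pairing shares a key."""
    rt_c = np.asarray(rt_compatible, dtype=float)
    rt_i = np.asarray(rt_incompatible, dtype=float)
    pooled_sd = np.concatenate([rt_c, rt_i]).std(ddof=1)
    return (rt_i.mean() - rt_c.mean()) / pooled_sd

# Invented latencies in milliseconds, purely for illustration.
compatible = [612, 580, 655, 701, 590, 634]
incompatible = [698, 720, 671, 745, 688, 702]
print(round(iat_d_score(compatible, incompatible), 2))
```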

See also: a recently published study reported on an international contest to develop the best way to eliminate implicit racial bias on the IAT (paper, covered by the Neurocritic blog).

_________________________________ ResearchBlogging.org

Kuppens T, & Spears R (2014). You don't have to be well-educated to be an aversive racist, but it helps. Social science research, 45, 211-23 PMID: 24576637

Post written for the BPS Research Digest by Neuroskeptic, a British neuroscientist who blogs for Discover Magazine.

Monday, 7 April 2014

Around the world, things look better in hindsight

Human memory has a pervasive emotional bias – and it’s probably a good thing. That’s according to psychologists Timothy Ritchie and colleagues.

In a new study published in the journal Memory, the researchers say that people from diverse cultures experience the ‘fading affect bias’ (FAB), the tendency for negative emotions to fade away more quickly than positive ones in our memories.

The FAB has been studied previously, but most previous research looked at the memories of American college students. Therefore, it wasn’t clear whether the FAB was a universal phenomenon, or just a peculiarity of that group.

In the new study, the authors pooled together 10 samples from different groups of people around the world, ranging from Ghanaian students, to older German citizens (who were asked to recollect the fall of the Berlin Wall). In total, 562 people were included.

The participants were asked to recall a number of events in their lives, both positive and negative. For each incident, they rated the emotions that they felt at the time it happened, and then the emotions that they felt in the present when remembering that event.
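
In data terms the fading affect bias is a simple contrast: how much emotional intensity has dropped between 'then' and 'now', compared across positive and negative memories. A minimal sketch, with invented ratings and assumed column names, looks like this.

```python
import pandas as pd

# Invented rows: one remembered event per row, with affect intensity rated
# for the time of the event and again at recall (e.g. on a 0-7 scale).
events = pd.DataFrame({
    "valence":        ["positive", "negative", "positive", "negative"],
    "intensity_then": [6, 6, 5, 7],
    "intensity_now":  [5, 2, 4, 3],
})

# Fading = how much the feeling has weakened since the event.
events["fading"] = events["intensity_then"] - events["intensity_now"]

# The fading affect bias predicts larger fading for negative memories.
print(events.groupby("valence")["fading"].mean())
```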

Ritchie and colleagues found that every cultural group included in the study experienced the FAB. In all of these samples, negative emotions associated with remembered events faded to a greater degree than positive emotions did. Importantly, there was no evidence that this effect changed with people’s age: it seems to be a lifelong phenomenon.


The authors conclude that our ability to look back on events with rose-tinted spectacles might be important for our mental health, as it could help us to adapt and move on from adversity: ‘We believe that this phenomenon is part of a set of cognitive processes that foster emotion regulation and enable psychological resilience.’

However, the authors admit that their study had some limitations. While the participants were diverse geographically and culturally, they all had to speak fluent English, because all of the testing was carried out in that language. In order to confirm that the FAB is truly universal, it will be important to examine it in other languages. Ritchie and colleagues also note that despite this apparent universality of the phenomenon, ‘We do not intend to imply that the FAB occurs for the same reasons around the world.’

_________________________________ ResearchBlogging.org
Ritchie TD, Batteson TJ, Bohn A, Crawford MT, Ferguson GV, Schrauf RW, Vogl RJ, & Walker WR (2014). A pancultural perspective on the fading affect bias in autobiographical memory. Memory (Hove, England) PMID: 24524255

Post written for the BPS Research Digest by guest host Neuroskeptic, a British neuroscientist who blogs for Discover Magazine.

Friday, 4 April 2014

Do television and video games impact on the wellbeing of younger children?

We’re often bombarded with panicky stories in the news about the dangers of letting children watch too much television or play too many video games. The scientific reality is that we still know very little about how the use of electronic media affects childhood behaviour and development. A new study from a team of international researchers led by Trina Hinkley at Deakin University might help to provide us with new insights.

The study used data from 3,600 children from across Europe, collected as part of a larger study looking into the causes and potential prevention of childhood obesity. Parents were asked to fill out questionnaires about their children’s electronic media habits and various wellbeing measures – for example, whether the children had any emotional problems, issues with peers, or self-esteem problems – along with details about how well the family functioned. Hinkley and colleagues looked at the associations between television and computer/video game use at around the age of four, and these measures of wellbeing some two years later.

The results are nuanced. The researchers set up a model that controlled for various factors that might have an effect – things like the family’s socioeconomic status, parental income, unemployment levels and baseline measures of the wellbeing indicators. On the whole, after accounting for all of these factors, there were very few associations between electronic media use and wellbeing indicators. For girls, every additional hour they spent playing electronic games (either on consoles or on a computer) on weekdays was associated with a two-fold increase in the likelihood of being at risk for emotional problems – for example being unhappy or depressed, or worrying often. For both boys and girls, every extra hour of television watched on weekdays was associated with a small (1.2- to 1.3-fold) increase in the risk of having family problems – for example, not getting on well with parents, or being unhappy at home. A similar association was found for girls between weekend television viewing and being at risk of family problems. However, no associations were found between watching television or playing games and problems with peers, self-esteem or social functioning.
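
Those 'x-fold increase in risk' figures are odds ratios from adjusted models. The sketch below shows the general shape of such an analysis under assumed variable names; it is not the authors' model, which also handled the multi-country sampling design and a longer list of covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical variable names standing in for the study's measures.
df = pd.read_csv("media_wellbeing_example.csv")   # placeholder file

# Risk of emotional problems at follow-up as a function of weekday gaming
# hours at baseline, adjusting for a few of the confounders mentioned above.
model = smf.logit(
    "emotional_risk ~ weekday_game_hours + ses + parental_income"
    " + baseline_emotional_score",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: a value near 2 for
# weekday_game_hours would correspond to a 'two-fold increase' per extra hour.
print(np.exp(model.params))
```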


So it seems as if these types of media can potentially impact on childhood development by negatively affecting mental wellbeing. However, what we can’t tell from these data is whether watching television or playing games causes these sorts of problems. It may well be the case that families who watch lots of television are not providing as much support for young children’s wellbeing from an early stage – so the association with television or game use is more to do with poor family functioning than the media themselves. Furthermore, the results don’t tell us anything about what types of television or genres of games might have the strongest effects – presumably the content of such media is important, in that watching an hour of Postman Pat will have very different effects on a four-year-old’s wellbeing than watching an episode of Breaking Bad. And as the authors note, relying on subjective reports from parents alone might introduce some unknown biases in the data – “an objective measure of electronic media use or inclusion of teacher or child report of wellbeing may lead to different findings”, they note. So the results should be treated with a certain amount of caution, as they don’t tell us the whole story. Nevertheless, it’s a useful addition to a now-growing body of studies that are trying to provide a balanced, data-driven understanding of how modern technologies might affect childhood development.

- Post written by guest host Dr Pete Etchells, Lecturer in Psychology at Bath Spa University and Science Blog Co-ordinator for The Guardian. 

_________________________________ ResearchBlogging.org
Hinkley, T., Verbestel, V., Ahrens, W., Lissner, L., Molnár, D., Moreno, L., Pigeot, I., Pohlabeln, H., Reisch, L., Russo, P., Veidebaum, T., Tornaritis, M., Williams, G., De Henauw, S., & De Bourdeaudhuij, I. (2014). Early Childhood Electronic Media Use as a Predictor of Poorer Well-being JAMA Pediatrics DOI: 10.1001/jamapediatrics.2014.94