There have been many studies investigating whether some emotions expressed in the face are detected more quickly than others. Many researchers believe that negative emotions such as anger should be detected faster than positive emotions such as happiness, because negative expressions signal something disrupting the environment that could pose a threat to the person perceiving them. Most of the research has focussed on anger and happiness, with some research on fear.
Hansen and Hansen (1988) conducted a search task in which participants looked at displays of nine different individuals presented in black-and-white photographs. Participants had two keys: they pressed one key if all nine faces conveyed the same emotion, and a different key on the 54 trials in which one discrepant face (a face showing a different emotion) was present. The three emotions conveyed were anger, happiness and neutral. The results showed that anger ‘was detected relatively quickly and accurately when presented in the neutral or happy crowd’ (Hansen and Hansen, 1988). A neutral face in an angry crowd was not detected as efficiently, and the same was true for a happy face in an angry crowd, possibly because angry faces hold attention for longer. This supports the face-in-the-crowd effect, also known as the anger-superiority effect, whereby an angry face is comparatively easy to find among a neutral or happy crowd. It links to the idea that we have adapted to detect negative emotions quickly.
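The structure of such a search trial can be illustrated with a minimal sketch (hypothetical Python; the function names and the scoring are illustrative, not Hansen and Hansen’s actual materials): each display is a set of nine emotion labels, either uniform or containing one discrepant face, and a response is correct when it matches the trial type.

```python
import random

EMOTIONS = ["angry", "happy", "neutral"]

def make_trial(discrepant: bool):
    """Build a nine-face 'crowd' of emotion labels for one search trial.

    If `discrepant` is True, one randomly placed face shows a different
    emotion from the other eight; otherwise all nine faces match.
    Returns the list of labels and the target's position (or None).
    """
    crowd_emotion = random.choice(EMOTIONS)
    faces = [crowd_emotion] * 9
    target_pos = None
    if discrepant:
        target_emotion = random.choice(
            [e for e in EMOTIONS if e != crowd_emotion])
        target_pos = random.randrange(9)
        faces[target_pos] = target_emotion
    return faces, target_pos

def score_response(target_pos, pressed_same_key: bool) -> bool:
    """Correct when 'same' is pressed on a uniform display and
    'different' on a display containing a discrepant face."""
    return pressed_same_key == (target_pos is None)
```

Comparing reaction times for correct ‘different’ responses across target/crowd pairings (angry-in-happy versus happy-in-angry, and so on) is what yields the anger-superiority comparison described above.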
However, Hansen and Hansen (1988) admitted that the results were not as straightforward as they first appeared; for example, a discrepant neutral face was quite easy to find within a happy crowd, with no explanation as to why. Hansen and Hansen therefore conducted a second experiment, using the same individual in every picture. Every trial contained a discrepant face, which the participants had to locate. The crowd display was shown to the participants and then masked by scrambled letters. Overall, participants needed less time to detect an angry face in a happy crowd than a happy face in an angry crowd, supporting the original experiment.
Purcell, Stewart and Skov (1996) failed to replicate Hansen and Hansen’s (1988) results using the search task. They concluded that some irrelevant feature of Hansen and Hansen’s pictures, such as dark patches in the angry faces, may have caused anger to be detected more efficiently.
Fox et al (2000) conducted four experiments which show that anger is detected more quickly than a neutral or happy expression. Experiment one was conducted in a similar way to Hansen and Hansen (1988): participants pressed one of two keys, one when all the faces in the display conveyed the same emotion, the other when one discrepant face was present. ‘The comparison between a discrepant angry face in a neutral crowd and a happy face in a neutral crowd gives a direct measure of the speed of detection of angry and happy faces respectively’ (Fox et al, 2000). The trials with four angry faces were more error prone than those with four happy faces, suggesting that angry faces disrupt attentional processing more than happy faces.
A follow-up experiment was run because, in experiment one, 40% of the participants made errors (attributed to the short time for which the faces were shown). The exposure time of the stimulus was therefore increased, which improved accuracy. With the longer exposure, there was no difference between the ‘all angry’ and ‘all happy’ displays, suggesting that the extra time overcame the disruption to attentional processing caused by the angry faces. This highlights that when a person has only a short time to process emotion, anger may take priority over happiness. The second experiment also supported the first: on the discrepant displays, the discrepant face was detected faster when it was angry rather than happy in a neutral crowd.
However, the previous results could be due either to the emotional expressions or to low-level features of the faces; if the effect were driven by features rather than expression, then inverting the faces, which preserves the same features, should leave the results similar. Instead, ‘there was no difference between the three same displays in contrast when the faces were presented upright’ (Fox et al, 2000). They then removed the eyebrows (experiment four) to counter the criticism that the effect was due to the change in eyebrow shape. ‘Finding the sad/angry face in a neutral crowd was faster and more accurate than finding the happy face in a neutral crowd’ (Fox et al, 2000), therefore showing that anger is detected faster than happiness.
The previous evidence is supported by Eastwood, Smilek, and Merikle (2001), who embedded negative faces in displays of faces with a neutral expression. The number of neutral distractor faces varied widely, from 7 to 19. Participants had to indicate the spatial location of the target. The results showed that the ‘negative face guided focal attention better than did the positive face’ (Eastwood, Smilek, and Merikle, 2001). The experiment was repeated using inverted faces with no change in the results obtained, again pointing to faster detection of anger.
The research above is hard to generalise, as most of the participants were undergraduates. Ruffman and Jenkin (2009) therefore looked at differences between young and old people in identifying emotional faces. Participants viewed nine faces, either all identical with a neutral expression or containing one discrepant face. Both the older and the younger adults were faster at identifying an angry discrepant face than a happy one, and this held for both photographs of real people and schematic faces. This shows that, regardless of age, anger is processed more quickly than happiness or a neutral expression.
However, Juth et al (2005) found evidence conflicting with Ruffman and Jenkin (2009): happy faces were quicker to detect. Eight colour photographic facial images of different individuals were used, and the orientation of the head relative to the participant was varied, so the face was not always directly facing them. This increased ecological validity, creating a more realistic situation, as humans encounter facial expressions at different angles. The orientation manipulation also tests how strong the anger-superiority effect is, as it ought to be most prevalent when the angry face is looking directly at the participant. However, it was the happy faces that stood out, suggesting that the anger-superiority effect might not be as clear-cut as it first seemed.
Aside from the face-in-the-crowd research, other methods have been used to investigate expression processing, one being the flanker task. It originated with Eriksen and Eriksen (1974), who used letters: a central target is flanked on each side by a distractor. Fenske and Eastwood (2003) adapted the task to faces, with a central target flanked by faces showing negative or positive emotions. When the target showed a negative emotion, participants’ reactions were fast regardless of the expression of the flanker faces, supporting the original idea that negative emotions are processed faster. When the central target was a happy face, there was flanker interference. This is called the flanker asymmetry effect, whereby positive expressions flanked by negative expressions suffer more interference than negative expressions flanked by positive expressions. The results were interpreted as negative emotions holding attention, since they were unaffected by the flanker faces.
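The interference measure behind the flanker asymmetry can be sketched as follows (hypothetical Python; the reaction times below are made up to echo the asymmetry pattern, not Fenske and Eastwood’s data): interference is the slowing of responses when flanker valence conflicts with the target’s valence.

```python
def flanker_interference(rt_incongruent_ms, rt_congruent_ms):
    """Flanker interference: mean reaction time (RT) with
    valence-incongruent flankers minus mean RT with
    valence-congruent flankers. A larger positive value means
    the flankers disrupted the target judgement more."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# Illustrative (invented) RTs in milliseconds: a happy target
# suffers more interference from angry flankers than an angry
# target does from happy flankers.
happy_target = flanker_interference([620, 640, 630], [560, 570, 580])
angry_target = flanker_interference([575, 585, 580], [565, 570, 575])
```

With these invented numbers, `happy_target` comes out larger than `angry_target`, which is the asymmetry pattern the study reports: negative targets resist interference, positive targets do not.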
However, when Horstmann et al (2006) replicated the basic paradigm using more complicated schematic faces, making the task more about the perceptual characteristics of the faces than their emotional valence, Fenske and Eastwood’s (2003) results were not replicated. This adds a cautionary note to experiments on facial processing: the stimuli used can change the results dramatically. This is supported by Pessoa, Japee and Ungerleider (2005), who looked at identifying fearful faces. Participants were shown a target face, which was then masked by another face; the target was fearful, happy or neutral. Participants stated whether the target face showed fear or no fear and then rated their confidence in their answer. There was huge variability in the results, and some participants were consistently aware of the masked face, leading to the conclusion that claims involving masking and rapid emotion processing should be treated with caution.
Another strand of empirical evidence on the speed of detecting facial emotions concerns top-down goals. Hahn and Gronlund (2007) used a visual search paradigm to see how top-down processing modifies the attentional bias for threatening facial expressions; their evidence could help explain the findings mentioned above, such as Hansen and Hansen (1988). Two experiments were conducted. In the first, participants looked for a discrepant facial expression in a crowd of identical faces. The results supported the research above, as reaction time (RT) was quicker ‘when the target face was angry than when it was happy’ (Hahn and Gronlund, 2007). The second experiment required top-down processing: participants had to search for a particular type of facial expression. If the display included the target, RT was quicker for the angry than the happy face, again supporting the research above. However, Hahn and Gronlund (2007) found that ‘when an angry or happy face was present in the display in opposition to the task goal, the RT was equivalent.’ The presence of an angry face in the opposition condition therefore did not support the anger-superiority effect; an angry face might hold attention, but only when a specific target is not being sought. Rather, ‘in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics’ (Hahn and Gronlund, 2007).
Fear has also been researched, though not to such a great extent. Vuilleumier, Armony, Driver, and Dolan (2003) found that the amygdala processes LSF (low spatial frequency) images, which are processed very quickly but in coarse detail, and that it is consistently activated by fearful faces, even when they are not consciously perceived. Knowing that fear has its own detection system within the brain might suggest that it is detected more quickly than other emotions that lack an independent detector.
Overall, it can be concluded that there is evidence that some emotions are processed more quickly than others. It has also become apparent, however, that the evidence must be considered in light of the methods used: for example, schematic faces showed anger being detected more quickly, but this did not always carry over to photographs of real faces, where happiness was sometimes the quicker emotion to detect. Moreover, knowing that fear has its own detection system, with the amygdala subconsciously registering fearful faces, suggests that fear may in fact be detected quickest of all. Juth et al (2005) summed up the experiments on detecting emotions in the face by stating that ‘there are several aspects of visual search for emotional faces that are poorly understood’, which might explain why so much of the research is contradictory.