Face Processing: Connecting the Nodes Across the Brain

Since I have tended to focus on the hippocampus, I’ve decided to diverge a bit into the neural connectivity of face processing* — after all, this blog is mainly about neural connectivity, not only within a region but between multiple regions as well.
Angry face of a female intending to seriously hurt or scare someone or something. Blue points represent fixations from a person looking at the picture. (source)
Faces convey a wealth of information, including physical characteristics, gender, emotional state, trustworthiness, beauty, and intentions (especially through eye-gaze direction).  How do we glean all this information from a subtle glance at a face, though?  To understand this, we need to start with a basic model of face processing and how our brain recognizes a face.  Yet this question is not so simple to answer, as you will soon see with regard to the fusiform face area!
Beyond the fusiform face area, the rest of the post will be dedicated to the connectivity between the interacting brain areas involved in face processing.
First, I will discuss general visual processing.
Second, how does visual processing of faces differ in the fusiform face area?
Third, the face network — feedforward connectivity.
Fourth, feedback connectivity, with help from an example: emotions.
Read more for a description of the whole-brain connectivity of face processing and a video…

Fig. 1. Visual processing of ‘What’ along the inferotemporal (IT) cortex of the monkey (and human). IT cortex is largely divided into the caudal portion (TEO) and the more anterior portion (TE). (source)
Before starting, a brief description of visual processing is needed.  Passive visual processing of regular objects involves extracting multiple features, such as color, diagonals, shape, texture, and a multitude of other non-obvious features; these features are commonly referred to as being processed along the ‘what’ pathway.  There is a sister pathway, the ‘where’ pathway, that has traditionally been associated with the spatial characteristics of the object (though debate continues over whether these pathways are mutually exclusive, interact highly, are not even two distinct pathways, or just how many ‘streams’ there really are *oy vey; see Kravitz et al., 2011 for a good article).  These properties are decoded/processed in a sequential yet parallel fashion in striate and extrastriate areas; processing further moves along the inferotemporal cortex, namely TEO and TE of IT cortex (Fig. 1).
…on to faces…
The Modular Zone: FFA?

Fig. 2. Inferior view of the brain (i.e., the brain flipped upside down). Bilateral FFA activation along the ventral occipito-temporal cortex. (source)
One such area that has received considerable attention for appearing to process faces (or other ‘expert objects’) holistically is the Fusiform Face Area (FFA), a term coined, I believe, by Kanwisher et al.  The FFA is located along the ventrocaudal temporal cortex (Fig. 2).  In monkeys, face-responsive regions along the inferotemporal cortex, one of which is believed to be the homologue of the FFA, were reported as early as the mid-80s and are only now receiving considerable attention.  We see that the FFA is definitely ‘activated’ by faces, but there is debate as to whether faces represent a distinct object class that activates special circuitry or whether the area(s) in our brain that respond to faces are really just ‘expertise’ area(s) (as we have spent our whole lives processing countless faces) — for more information, see the bottom of this post (references to ‘The Debate’).  Suffice it to say, without settling whether faces are special, research has moved on and has largely accepted that the processing of a face is not done in a single module/brain area; rather, it appears to take place across a wide variety of distributed areas — a ‘face network’ would be more apt.
These areas, in humans and monkeys, have been validated by not only imaging techniques but also electrophysiological recordings.  In addition to the FFA in humans and monkeys, there exist body- and scene-selective regions near the fusiform and IT cortices.
Skipping along to the good stuff: connectivity….
Feedforward Connectivity
You can see how one can get bogged down in this one particular area, namely the FFA, and forget the ‘big’ picture.  Obviously, when we look at a face, massive numbers of connections are activated throughout the brain.  Personally, I tend to disagree with many claims that areas are ‘modular’, as I think processing rarely, if ever, takes place in one locale.  There are so many different dimensions, not only in visual appearance, that need to be processed.
As stated above, faces convey a wealth of information: gender, trustworthiness, beauty and/or attractiveness, intentions, emotions, etc.  If you just look over the figure below, you can start to appreciate the highly connected areas involved in the simple feat of processing a face.  Imagine what your brain is doing when you see a child throwing a snowball, or a dragon flying while breathing fire on TV…  countless more areas are activated.

In the diagram below, a picture is painted of face processing.  This model has been constructed from several studies performed by Haxby et al.  In brief, after the striate (occipital) and other nearby extrastriate cortices process features, the first place where facial features are ‘recognized’ as facial features is the inferior occipital gyrus (IOG; a face template area?).  From there, facial features are classified into two categories: ‘changeable features’ and ‘invariant features’.  The changeable features, processed by the superior temporal sulcus (STS), a sulcus below the lateral fissure of the temporal lobe, include eye gaze, lip movements, and emotional expressions.  The invariant features, or features that don’t change, such as identity, gender, and the location of specific features, are processed along the lateral fusiform gyrus (LFG).  These three locations, the IOG, STS, and LFG, are known as the ‘core system’ of facial processing.  From here, other areas are recruited for extended processing; these areas are known as the ‘extended system’.  As you can see from the diagram, the core regions interact with each other but have different projections, patterns that have been reproduced in several different studies.  The STS projects to the intraparietal sulcus, a known area involved in directed attention (part of the parieto-frontal attention path); to the auditory cortex for speech processing; and to the amygdala, insula, and other limbic structures for affective ‘tagging’.  This is then further processed in the orbitofrontal cortex (not shown).  The LFG projects largely to the anterior temporal region and other extended facial regions, such as the inferior frontal gyrus (not shown; this region, along with the orbitofrontal cortex, may be where it all comes together?).
source
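To make the core/extended distinction concrete, the model can be sketched as a small directed graph.  This is only a toy sketch: the region names follow the description above, but the edge list is my own reading of the Haxby diagram, not an authoritative connectome.

```python
# Toy sketch of the core/extended face-processing model (my reading
# of the Haxby et al. diagram described in the post; not a real connectome).
FACE_NETWORK = {
    # Core system
    "V1/extrastriate": ["IOG"],                # early feature extraction
    "IOG": ["STS", "LFG"],                     # face-template stage splits features
    "STS": ["IPS", "auditory cortex", "amygdala/insula"],  # changeable features
    "LFG": ["anterior temporal"],              # invariant features
    # Extended system
    "amygdala/insula": ["orbitofrontal"],      # affective 'tagging' path
}

def downstream(region, graph=FACE_NETWORK, seen=None):
    """Return every region reachable from `region` via feedforward edges."""
    seen = set() if seen is None else seen
    for target in graph.get(region, []):
        if target not in seen:
            seen.add(target)
            downstream(target, graph, seen)
    return seen
```

For example, `downstream("IOG")` walks both the changeable-feature (STS) and invariant-feature (LFG) branches, which is exactly the split the core system makes.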
Where do all these different ‘categories’ of features come together, or become bound to one another?  This remains a constant debate.  It may be that a modular region combines the two categories, somewhere in the anterior temporal region (less likely) or in the inferior frontal cortex (mentioned previously).  Another option, the one I prefer, is that these features are processed in parallel, and, as such, their processing is coupled at a specific frequency of neuronal firing.  The two categories may be bound into a coherent whole in the PFC, but I think they are coupled together as they are processed (e.g., carry a tag of ‘face X’).  This general question of coherent coupling versus modular processing is hardly being touched, as it is very difficult to answer; it spans all of behavioral and cognitive neuroscience.  One area of research that comes to mind where this kind of coherence is being addressed is spatial navigation and memory.
Feedback Connectivity: Monkey Face Patches & Emotion
What one must keep in mind, however, is that the areas mentioned are linked by both feedforward and feedback connections.

Monkey: red = face-selective regions; yellow = face-responsive regions. (source)

Before we begin, I’ll briefly describe face processing in the monkey.  There appear to be a posterior ‘face patch’ and an anterior ‘face patch’ located within IT cortex.  Face patches have also been found in the frontal cortex, but whether they are truly face-selective is not currently known.  In the figure to the left, regions of the monkey IT cortex appear to be homologous to human face areas (e.g., the FFA) in their face responsiveness.  The one believed to be most closely related to the human FFA is the posterior face patch in the temporal cortex, located in area TEO.  As you can see, an anterior face patch is detected in the monkey (in area TE), and this has recently been reported in the human as well (see Rajimehr et al., 2009).  Indeed, there is a difference between face-selective and face-responsive areas in the monkey, as alluded to earlier.  Face-selective areas are those that respond only to faces, as compared to objects, scenes, and body parts (red in the picture).  Face-responsive areas are those that respond to faces but may also respond to the other categories (yellow in the picture).
These scans show the emotional modulation of IT cortex, not the face-processing regions. This modulation is independent of face selectivity (regions circled in black). (source)
What I am most familiar with, in particular, is the question of how facial expressions modulate face processing.  This question inherently begs for an answer involving feedback connectivity.  As an example, I will use the monkey.  Emotional modulation (by showing facial expressions) of face processing appears to be independent of face-selective areas; this modulation ‘enhances’ processing along the entire inferotemporal cortex (see Hadj-Bouziane et al., 2008).  Since we know that affective facial expressions activate regions of the amygdala, particularly the basolateral nucleus, and the orbitofrontal cortex, the question that remains is where this feedback modulation comes from —
(A) Does the amygdala send out this modulation; 
(B) does this modulation come directly from the orbitofrontal cortex, or
(C) does the orbitofrontal cortex send feedback modulation to the amygdala, which sends this modulation to the temporal areas?  
These possibilities are currently being investigated.  This is only one example of how feedback projections play into not only face processing but visual processing more broadly, and processing in other modalities.
I’ll leave you with a video of a lady with prosopagnosia, the inability to recognize faces (‘face blindness’) despite otherwise normal object processing and recognition.  The question that, again, remains is: Is this a problem in processing faces or a facial-recognition problem in the broadest sense?  Nonetheless, here you are…
Video of woman with prosopagnosia.  Quite sad, really.
* Note: ‘processing’ is distinct from ‘perception,’ as we have no objective measure of perception, whether unconscious or conscious.  I was chastised for this once, so it has always stuck in the back of my head.
‘The Debate’ 
 a sample of some articles
Not Special!
Becoming a ‘Greeble’ expert: exploring mechanisms for face recognition (source)
FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise (source)
Expertise for cars and birds recruits brain areas involved in face recognition (source)
Special!
Domain specificity in face perception (source)
What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition (source)
What is “special” about face perception? (source)
Can generic expertise explain special processing for faces? (source)


References (in order with respect to post):
[my references to monkey literature are much greater than human lit – sorry]

Kravitz, D. J., Saleem, K. S., Baker, C. I., & Mishkin, M. (2011).  A new neural framework for visuospatial processing.  Nat Rev Neurosci, 12, 217-30.

Perrett, D. I., Rolls, E. T., & Caan, W. (1982).  Visual neurones responsive to faces in the monkey temporal cortex.  Exp Brain Res, 47(3), 329-42.

Desimone, R., Albright, T. D., Gross, C. G., & Bruce, C. (1984).  Stimulus-selective properties of inferior temporal neurons in the macaque.  J Neurosci, 4(8), 2051-62.

Gross, C. G. & Sergent, J. (1992).  Face recognition.  Curr Opin Neurobiol, 2(2), 156-61.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997).  The fusiform face area: a module in human extrastriate cortex specialized for face perception.  J Neurosci, 17(11), 4302-11.

Gauthier, I. & Bukach, C. (2007).  Should we reject the expertise hypothesis?  Cognition, 103, 322-330.

Tsao, D. Y., Freiwald, W. A., Tootell, R. B. H., & Livingstone, M. S. (2006).  A cortical region consisting entirely of face-selective cells.  Science, 311(5761), 670-4.

Tsao, D. Y., Moeller, S., & Freiwald, W. A. (2008).  Comparing face patch systems in macaques and humans.  PNAS, 105(49), 19514-9.

Bell, A. H., Hadj-Bouziane, F., Frihauf, J. B., Tootell, R. B., & Ungerleider, L. G. (2009).  Object representations in the temporal cortex of monkeys and humans as revealed by functional magnetic resonance imaging.  J Neurophysiol, 101(2), 688-700.

Bell, A. H., Malecek, N. J., Morin, E. L., Hadj-Bouziane, F., Tootell, R. B., & Ungerleider, L. G. (2011).  Relationship between functional magnetic resonance imaging-identified regions and neuronal category selectivity.  J Neurosci, 31(34), 12229-40.

Tsao, D. Y., Schweers, N., Moeller, S., & Freiwald, W. A. (2008). Patches of face-selective cortex in the macaque frontal lobe. Nat Neuro, 11, 877-9

Moeller, S., Freiwald, W. A., Tsao, D. Y. (2008). Patches with links: a unified system for processing faces in the macaque temporal lobe. Science, 320, 1355-59.

Ishai, A., Ungerleider, L. G., Martin, A., Schouten, J. L., & Haxby, J. V. (1999).  Distributed representation of objects in the human ventral visual pathway.  PNAS, 96, 9379-84.

Haxby, J. V., Ungerleider, L. G., Clark, V. P., Schouten, J. L., Hoffman, E. A., & Martin, A. (1999).  The effect of face inversion on activity in human neural systems for face and object perception.  Neuron, 22, 189-99.

Rajimehr, R., Young, J. C., & Tootell, R. B. H. (2009).  An anterior temporal face patch in human cortex, predicted by macaque maps.  PNAS, 106(6), 1995-2000.

Hadj-Bouziane, F., Bell, A. H., Knusten, T. A., Ungerleider, L. G., & Tootell, R. B. (2008).  Perception of emotional expressions is independent of face selectivity in monkey inferior temporal cortex.  PNAS, 105(14), 5591-6.
