On Identifying Emotions through Facial Recognition [Layperson’s Take]

Image: “FaceMachine screenshots collage.jpg”, Wikimedia Commons

I did some weekend browsing on facial recognition systems – particularly on systems designed to recognise emotions and personality traits from facial expressions. This was prompted by an article in the Financial Times, which claimed that employers like Unilever have begun using facial recognition systems to screen applicants for the emotional and personality traits they consider right for their job openings.

Having read a little, I have to say some of the claims sound doubtful. I should emphasise here that I’m not an expert and that I’ve only just begun reading up on this; I may realise later that I’m wrong. However, the initial signals I’m getting are mixed. Many facial recognition technologies seem to use an anatomical coding system called the Facial Action Coding System (FACS), first developed in the 1970s to classify facial movements into standardised units. While FACS itself seems sound, there are claims about its ability to judge emotions and personalities that I find quite difficult to accept.
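To give a sense of what those standardised units look like, here’s a tiny, purely illustrative list of a few well-known FACS “Action Units” written out as data (the full system defines dozens of them; this is just my shorthand for thinking about it, not anything taken from the FACS manual):

    # A few of the standardised "Action Units" (AUs) that FACS defines.
    # Each is a specific, anatomically grounded facial movement, not an emotion.
    # Illustrative subset only; the full system defines dozens of AUs.
    action_units = {
        "AU1": "Inner brow raiser",
        "AU2": "Outer brow raiser",
        "AU4": "Brow lowerer",
        "AU6": "Cheek raiser",
        "AU12": "Lip corner puller",
        "AU15": "Lip corner depressor",
        "AU26": "Jaw drop",
    }

    # An observed expression gets coded as a combination of AUs, e.g. a smile
    # is coded roughly as AU6 + AU12. The contentious step is the next one:
    # mapping such combinations onto named emotions.
    for code, description in action_units.items():
        print(code, "-", description)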

For instance, I found the websites of two companies (here and here) which claim that most sets of facial expressions classified under FACS fall within some defined, finite set of emotions such as anger, fear, disgust, joy, sadness, and surprise. I don’t quite understand how this isn’t a value judgement. I’m not sure there’s any objective rationale for identifying such broad emotional states, some of which may overlap with each other: someone could be angry and afraid at the same time, or joyful and surprised.

Even if this argument isn’t valid (as a layperson, I’m fully prepared to accept it may not be), I don’t see how one can correlate facial expressions with particular emotions without accounting for some kind of bias at the time of defining the emotions themselves. Whether the system treats emotions as mutually exclusive, or assigns each facial expression a probability of mapping onto each emotion, there are ethical problems in assuming that certain facial expressions are universally or objectively indicative of specific emotional states. The problem becomes even more complex when the system is built to recognise long-term personality traits and not just momentary emotions.
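To make that distinction concrete, here’s a rough sketch of how I picture the two approaches (purely illustrative – neither company publishes its internal representation):

    # Illustrative only. A "hard" label forces every expression into exactly
    # one category from a fixed list chosen by the system's builders...
    hard_label = "anger"

    # ...while a "soft" label admits mixtures, e.g. someone who is angry
    # *and* afraid at the same time.
    soft_label = {
        "anger": 0.45,
        "fear": 0.35,
        "disgust": 0.10,
        "joy": 0.00,
        "sadness": 0.05,
        "surprise": 0.05,
    }

    # Either way, the category list itself is a design choice made before any
    # face is analysed -- and that is where the value judgement creeps in.
    assert abs(sum(soft_label.values()) - 1.0) < 1e-9
    print(hard_label)
    print(max(soft_label, key=soft_label.get))  # most probable category: "anger"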

As an analogy, consider neural networks that use databases of millions of images of one particular object to “define” that object (say, a dumb-bell). If most of the dumb-bell pictures fed into the network by its builders show someone lifting the dumb-bell, the network may well end up defining a dumb-bell as including not just the object but also the arm lifting it (Google researchers reported exactly this in their DeepDream/Inceptionism visualisation work). Similarly, if the initial ‘training set’ fed to a facial recognition system is biased in some way its builders haven’t recognised, the final act of recognising a specific emotion may turn out to be deeply flawed.
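Here’s a toy back-of-the-envelope simulation of that kind of training-set bias, with made-up numbers (it has nothing to do with how any real system is built; it just shows how a spurious cue can look almost as predictive as the real thing):

    import random

    random.seed(0)

    # Invented numbers: in 95% of the "dumb-bell" photos an arm is also
    # visible, while arms appear in only 10% of the non-dumb-bell photos.
    dataset = []
    for _ in range(1000):
        is_dumbbell = random.random() < 0.5
        arm_visible = random.random() < (0.95 if is_dumbbell else 0.10)
        dataset.append((is_dumbbell, arm_visible))

    # How often does "arm visible" agree with the "dumb-bell" label?
    agreement = sum(label == arm for label, arm in dataset) / len(dataset)
    print(round(agreement, 2))  # roughly 0.92

    # A learner shown only this data has little reason to prefer the object
    # over the arm; the same can happen with whatever quirks are hiding in a
    # facial-expression training set.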

To be fair, Affectiva (one of the companies mentioned above, which offers emotion recognition technology) acknowledges this on its blog, saying it corrects for it by having at least three FACS-trained employees look at each annotated video and checking for consistency in labelling between viewers. I appreciate that – it’s a good move to counter bias. However, what I infer from this is that the primary advantage of an automated emotion recognition system is speed, with accuracy coming second (an automated system can still potentially be more accurate than a human one, but a lot depends on how it’s built). In fact, speed is the first advantage mentioned by iMotions, the other company mentioned above, for its Emotient Facial Expressions Analysis Engine (I believe Emotient itself is now owned by Apple).
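Affectiva’s post doesn’t spell out how the consistency check works, but a minimal version of the idea might be something as simple as a majority vote between the coders (this is my guess at the shape of it, not their actual pipeline):

    from collections import Counter

    def consolidate(labels, min_agreement=2):
        # Keep a video's label only if enough coders agree on it;
        # otherwise return None to flag the video for further review.
        label, count = Counter(labels).most_common(1)[0]
        return label if count >= min_agreement else None

    print(consolidate(["joy", "joy", "surprise"]))   # "joy"
    print(consolidate(["joy", "fear", "surprise"]))  # None -- no majority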

For an employer, this may mean being able to process a larger pool of applicants than usual. However, the probability of finding “the right person for the job” may or may not change. A well-built system may indeed be less biased and more accurate at detecting desired personality types, but a badly built one can commit too many Type 2 errors (falsely inferring the absence of competence in a candidate), which means good candidates lose out on job opportunities. On average, though, I’d expect most systems on the market to be no more and no less error-prone than manual screening.
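To put the Type 2 error worry into numbers, here’s some back-of-the-envelope arithmetic with entirely invented rates – the point is only that even a modest false-negative rate rejects a lot of good people once the applicant pool gets large:

    # Invented numbers, purely to illustrate the scale of the problem.
    applicants = 10_000
    truly_suitable_share = 0.20   # assume 20% of applicants are actually a good fit
    false_negative_rate = 0.15    # system wrongly screens out 15% of the good ones

    good_candidates = applicants * truly_suitable_share
    wrongly_rejected = good_candidates * false_negative_rate

    print(int(good_candidates))   # 2000 genuinely suitable applicants
    print(int(wrongly_rejected))  # 300 of them never reach a human interviewer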

Again, I’m writing all this strictly as a layperson, and I have to read a lot more before I can come to any strong conclusions. At the moment, though, I’m concerned that such systems are already being used to identify personality traits at interviews. What’s worse is that one of the people interviewed in the Financial Times article claimed that “An interviewer will have bias, but [with technology] they don’t judge the face but the personality of the applicant”. This is a claim to be cautious about. Of course human beings have bias. But that’s precisely why we shouldn’t assume an invention of our own genius to be unbiased.

 
