Startup Gets Computers to Read Faces, Seeks Purpose Beyond Ads
October 30th, 2013 by admin

A technology for reading emotions on faces can help companies sell candy. Now its creators hope it can also take on bigger problems.

Last year more than 1,000 people in four countries sat down and watched 115 television ads, such as one featuring anthropomorphized M&M candies boogying in a bar. All the while, webcams pointed at their faces and streamed images of their expressions to a server in Waltham, Massachusetts.

In Waltham, an algorithm developed by a startup called Affectiva performed what is known as facial coding: it tracked the panelists’ raised eyebrows, furrowed brows, smirks, half-smirks, frowns, and smiles. When this face data was later merged with real-world sales data, it turned out that the facial measurements could be used to predict with 75 percent accuracy whether sales of the advertised products would increase, decrease, or stay the same after the commercials aired. By comparison, surveys of panelists’ feelings about the ads could predict the products’ sales with 70 percent accuracy.
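For readers who want a concrete picture of that prediction step, here is a minimal sketch in Python. The feature names, the synthetic data, and the choice of a multinomial logistic-regression classifier are all assumptions made for illustration; the article does not disclose what model Affectiva actually uses.

```python
# Illustrative sketch only: a synthetic stand-in for the kind of pipeline the
# article describes. Feature names, data, and model choice are assumptions,
# not Affectiva's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per ad: facial-coding features averaged across all panelists,
# e.g. mean smile intensity, brow furrow, smirk rate (hypothetical names).
n_ads = 115
X = rng.random((n_ads, 3))  # columns: smile, brow_furrow, smirk

# Target: did sales decrease (0), stay flat (1), or increase (2) after airing?
# These labels are synthetic, loosely tied to the smile feature for the demo.
y = np.digitize(X[:, 0] + 0.2 * rng.standard_normal(n_ads), [0.33, 0.66])

# A simple multinomial classifier over the aggregated expression features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```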

Although this was an incremental improvement statistically, it marked a milestone in the field of affective computing. People notoriously have a hard time articulating how they feel, yet it is now clear that machines can not only read some of those feelings but also go a step further and predict the statistical likelihood of later behavior.

Given that the market for TV ads in the United States alone exceeds $70 billion, insights from facial coding are “a big deal to business people,” says Rosalind Picard, who heads the affective computing group at MIT’s Media Lab and cofounded the company; she left earlier this year but remains an investor.

Even so, facial coding has not yet delivered on the broader, more altruistic visions of its creators. Helping to sell more chocolate is great, but when will facial coding help people with autism read social cues, boost teachers’ ability to see which students are struggling, or make computers empathetic?

Answers may start to come next month, when Affectiva launches a software development kit that will let its platform be used in approved apps. The hope, says Rana el Kaliouby, the company’s chief science officer and the other cofounder (see “Innovators Under 35: Rana el Kaliouby”), is to spread the technology beyond marketing. While she would not name actual or potential partners, she says that “companies can use our technology for anything from gaming and entertainment to education and learning environments.”

Applications such as educational assistance—informing teachers when students are confused, or helping autistic kids read emotions on other people’s faces—figured strongly in the company’s conception. Affectiva, which launched four years ago and now has 35 employees and $20 million in venture funding, grew out of the Picard lab’s manifesto declaring that computers would do society a service if they could recognize and react to human emotions.

Over the years, the lab mocked up prototype technologies. These included a pressure-sensing mouse that could feel when your hand clenched in agitation; a robot called Kismet that could smile and raise its eyebrows; the “Galvactivator,” a skin-conductivity sensor that gauges arousal through sweating; and the facial coding system, developed and refined by el Kaliouby.

Affectiva bet on two initial products: a wrist-worn gadget called the Q sensor that could measure skin conductance, temperature, and activity levels (which can be indicators of stress, anxiety, sleep problems, seizures, and some other medical conditions); and Affdex, the facial coding software. But while the Q sensor seemed to show early promise (see “Wrist Sensor Tells You How Stressed Out You Are” and “Sensor Detects Emotions through the Skin”), in April the company discontinued the product, seeing little potential market beyond researchers working on applications such as measuring physiological signs that presage seizures. That leaves the company with Affdex, which is mainly being used by market research firms, including InsightExpress and Millward Brown, and consumer product companies like Unilever and Mars.

Now, as the company preps its development kit, the market research work may provide an indirect payoff. After spending three years convening webcam-based panels around the world, Affectiva has amassed a database of more than one billion facial reactions. The accuracy of the system could pave the way for applications that read the emotions on people’s faces using ordinary home computers and portable devices. “Affectiva is tackling a hugely difficult problem, facial expression analysis in difficult and unconstrained environments, that a large portion of the academic community has been avoiding,” says Tadas Baltrusaitis, a doctoral student at the University of Cambridge, who has written several papers on facial coding.

What’s more, by using panelists from 52 countries, Affectiva has been teasing out lessons specific to gender, culture, and topic. Facial coding has particular value when people are unwilling to self-report their feelings. For example, el Kaliouby says, when Indian women were shown an ad for skin lotion, every one of them smiled when a husband touched his wife’s midriff—but none of the women would later acknowledge or mention that scene, much less admit to having enjoyed it.

Education may be ripe for the technology. A host of studies have shown the potential; one, by researchers at the University of California, San Diego—who have founded a competing startup called Emotient—showed that facial expressions predicted the perceived difficulty of a video lecture and students’ preferred viewing speed. Another showed that facial coding could measure student engagement during an iPad-based tutoring session, and that these measures of engagement, in turn, predicted how the students would later perform on tests.
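The second study’s logic lends itself to a toy illustration. The sketch below uses entirely synthetic data and an invented scoring rule, not anything from the study itself; it simply shows how per-frame “engaged” judgments from a facial coder might be rolled up into one engagement score per student and correlated with later test scores.

```python
# Hypothetical sketch: summarize per-frame attention estimates into one
# engagement score per student, then correlate with later test results.
# All data here is synthetic; the names and scoring rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_frames = 30, 600  # e.g. a 10-minute session at 1 frame/sec

# Each student gets an underlying attentiveness level (invented for the demo).
base = rng.uniform(0.2, 0.9, n_students)

# Per-frame binary "looks engaged" calls, as a facial coder might emit them.
frames = rng.random((n_students, n_frames)) < base[:, None]

# Engagement score: fraction of the session spent looking engaged.
engagement = frames.mean(axis=1)

# Synthetic test scores, loosely driven by engagement plus noise.
test_scores = 60 + 40 * engagement + 5 * rng.standard_normal(n_students)

# Pearson correlation between measured engagement and test performance.
r = np.corrcoef(engagement, test_scores)[0, 1]
print(f"engagement vs. test score: r = {r:.2f}")
```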

Such technologies may be particularly helpful to students with learning disabilities, says Winslow Burleson, an assistant professor at Arizona State University and the author of a paper describing these potential uses of facial coding and other technologies. Similarly, the technology could help clinicians tell whether a patient understands instructions. Or it could improve computer games by detecting player emotions and using that feedback to change the game or enhance a virtual character.

Taken together, the insights from many such studies suggest a role for Affdex in online classrooms, says Picard. “In a real classroom you have a sense of whether the students are actively attentive,” she says. “As you go to online learning, you don’t even know if they are there. Now you can measure not just whether they are present and attentive, but if you are speaking—if you crack a joke, do they smile or smirk?”

Nonetheless, Baltrusaitis says many questions remain about which emotional states in students are relevant, and what should be done when those states are detected. “I think the field will need to develop a bit further before we see this being rolled out in classrooms or online courses,” he says.

The coming year should reveal a great deal about whether facial coding can have benefits beyond TV commercials. Affdex faces competition from other apps and startups, and even some marketers remain skeptical that facial coding is better than traditional methods of testing ads. Not all reactions are expressed on the face, and many other measurement tools claim to read people’s emotions, says Ilya Vedrashko, who heads a consumer intelligence research group at Hill Holliday, an ad agency in Boston.

Yet with every new face, the technology gets stronger. That’s why el Kaliouby believes it is poised to take on bigger problems. “We want to make facial coding technology ubiquitous,” she says.