Your Facial Expression Is the Next Frontier in Big Data

The human face is powered, depending on how you count them, by between 23 and 43 muscles, many of which attach to the skin, serving no obvious function for survival. An alien examining a human specimen in isolation wouldn’t know what to make of them. Tugging on the forehead, eyebrows, lips and cheeks, the muscles broadcast a wealth of information about our emotional state, level of interest and alertness. It is a remarkably efficient means of communication—almost instantaneous, usually accurate, transcending most language and cultural barriers. But sometimes the data is lost, ignored or misinterpreted. If a logger smiles in the forest with no one around to see him, was he actually happy?

Rana el Kaliouby hates to see that information go to waste. Meeting el Kaliouby in her small office in Waltham, Massachusetts, I see her contract her zygomaticus major muscle, raising the corners of her mouth, and her orbicularis oculi, crinkling the outer corners of her eyes. She is smiling, and I deduce that she is welcoming me, before she even gets out the word “hello.” But many social exchanges today take place without real-time face-to-face interaction. That’s where el Kaliouby and her company come in.

El Kaliouby, who is 37, smiles often. She has a round, pleasant, expressive face and a solicitous manner, belying her position as the co-founder of a fast-growing tech start-up—an anti-Bezos, an un-Zuckerberg. Her company, Affectiva, which she founded in 2009 with a then-colleague at the MIT Media Lab, Rosalind Picard, sits at the cutting edge of technology that uses computers to detect and interpret human facial expressions. This field, known as “affective computing,” seeks to close the communication gap between human beings and machines by adding a new mode of interaction: the nonverbal language of smiles, smirks and raised eyebrows. “The premise of what we do is that emotions are important,” says el Kaliouby. “Emotions don’t disrupt our rational thinking but guide and inform it. But they are missing from our digital experience. Your smartphone knows who you are and where you are, but it doesn’t know how you feel. We aim to fix that.”
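To make the idea concrete: the simplest machine smile-detectors work in two passes, first scanning an image for a face, then scanning the face region for the upturned-mouth pattern el Kaliouby demonstrates. The sketch below is purely illustrative and uses the stock Haar-cascade detectors that ship with the open-source OpenCV library; it is not Affectiva’s software, whose machine-learned models of facial expression are far more sophisticated. The input file name is hypothetical.

```python
# A minimal sketch of two-pass smile detection using OpenCV's stock
# Haar cascades. Illustrative only -- not Affectiva's technology.
import cv2

# Pre-trained face and smile detectors bundled with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smiles(image_path):
    """Return bounding boxes of smiles found inside detected faces."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    results = []
    # Pass 1: find faces. Pass 2: search each face region for a smile.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        for (sx, sy, sw, sh) in smile_cascade.detectMultiScale(face_roi, 1.8, 20):
            # Translate smile coordinates back into full-image space.
            results.append((x + sx, y + sy, sw, sh))
    return results

if __name__ == "__main__":
    boxes = detect_smiles("photo.jpg")  # hypothetical input file
    print(f"Found {len(boxes)} smile(s)")
```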

Why does your smartphone need to know how you feel? El Kaliouby has a host of answers, all predicated on the seemingly boundless integration of computers into the routines of our daily lives. She envisions “technology to control lighting, temperature and music in our homes in response to our moods,” or apps that can adapt the content of a movie based on your subconscious reactions to it while you watch. She imagines programs that can monitor your expression as you drive and warn of inattention, drowsiness or anger. She smiles at the mention of her favorite idea—“a refrigerator that senses when you are stressed out and locks up the ice cream.”

