Software That Listens for Lies


Interesting article explaining how algorithmic analysis of a person's speech (loudness, changes in pitch, pauses between words, ums and ahs, nervous laughs and dozens of other tiny signs) can identify a lie. Wide-reaching implications for video and audio analysis of professionals, executives, job applicants, criminals, and more.

http://www.nytimes.com/2011/12/04/business/lie-detection-software-parses-the-…


Dan Jurafsky of Stanford is among those who have been teaching computers how to spot the patterns of emotional speech — the kind that reflects deception, anger, friendliness and even flirtation.

By ANNE EISENBERG

SHE looks as innocuous as Miss Marple, Agatha Christie’s famous detective.


But also like Miss Marple, Julia Hirschberg, a professor of computer science at Columbia University, may spell trouble for a lot of liars.

That’s because Dr. Hirschberg is teaching computers how to spot deception — programming them to parse people’s speech for patterns that gauge whether they are being honest.

For this sort of lie detection, there’s no need to strap anyone into a machine. The person’s speech provides all the cues — loudness, changes in pitch, pauses between words, ums and ahs, nervous laughs and dozens of other tiny signs that can suggest a lie.
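To make those cues concrete, here is a minimal sketch of how a few of them (loudness, pitch variation and pauses) could be measured from a recording. It assumes the open-source librosa audio library; the function name, the 16 kHz sample rate, the 65-400 Hz pitch range and the 30 dB silence threshold are illustrative choices of mine, not details from the article or the researchers' pipeline.

import numpy as np
import librosa

def vocal_cues(path):
    """Compute a handful of simple acoustic features from a speech recording."""
    y, sr = librosa.load(path, sr=16000)            # mono audio at 16 kHz

    # Loudness: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Pitch: fundamental frequency estimated with the YIN algorithm.
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)   # typical speech range in Hz

    # Pauses: gaps between non-silent spans, found by energy thresholding.
    spans = librosa.effects.split(y, top_db=30)     # [(start, end), ...] in samples
    gaps = [(spans[i + 1][0] - spans[i][1]) / sr for i in range(len(spans) - 1)]

    return {
        "mean_loudness": float(rms.mean()),
        "loudness_variability": float(rms.std()),
        "mean_pitch_hz": float(np.mean(f0)),
        "pitch_variability_hz": float(np.std(f0)),
        "num_pauses": len(gaps),
        "mean_pause_sec": float(np.mean(gaps)) if gaps else 0.0,
    }

Features like these, computed over many labeled recordings, are the raw material a statistical model can learn from.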

Dr. Hirschberg is not the only researcher using algorithms to trawl our utterances for evidence of our inner lives. A small band of linguists, engineers and computer scientists, among others, are busy training computers to recognize hallmarks of what they call emotional speech — talk that reflects deception, anger, friendliness and even flirtation.

Programs that succeed at spotting these submerged emotions may someday have many practical uses: software that suggests when chief executives at public conferences may be straying from the truth; programs at call centers that alert operators to irate customers on the line; or software at computerized matchmaking services that adds descriptives like “friendly” to usual ones like “single” and “female.”

The technology is becoming more accurate as labs share new building blocks, said Dan Jurafsky, a professor at Stanford whose research focuses on the understanding of language by both machines and humans. Recently, Dr. Jurafsky has been studying the language that people use in four-minute speed-dating sessions, analyzing it for qualities like friendliness and flirtatiousness. He is a winner of a MacArthur Foundation fellowship commonly called a “genius” award, and a co-author of the textbook “Speech and Language Processing.”

“The scientific goal is to understand how our emotions are reflected in our speech,” Dr. Jurafsky said. “The engineering goal is to build better systems that understand these emotions.”

The programs that these researchers are developing aren’t likely to be used as evidence in a court of law. After all, even the use of polygraphs is highly contentious. But the new programs are already doing better than people at some kinds of mind-reading.

Algorithms developed by Dr. Hirschberg and colleagues have been able to spot a liar 70 percent of the time in test situations, while people confronted with the same evidence had only 57 percent accuracy, Dr. Hirschberg said. The algorithms are based on an analysis of the ways people spoke in a research project when they lied or told the truth. In interviews, for example, the participants were asked to press one pedal when they were lying about an activity, and another pedal when telling the truth. Afterward, the recordings were analyzed for vocal features that might signal deception.
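As a rough sketch of what that kind of experiment feeds into, the snippet below fits a simple classifier to feature vectors labeled truthful or deceptive and reports held-out accuracy. It uses scikit-learn on stand-in random data; the feature count, the model choice and every number here are placeholders of mine, not Dr. Hirschberg's method or results.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: 200 utterances, 6 acoustic features each; label 0 = truth, 1 = lie.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

With random labels the accuracy hovers around chance; the 70 percent figure quoted above comes from real recordings and far richer features.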

For her continuing research, Dr. Hirschberg and two colleagues recently received a grant from the Air Force for nearly $1.5 million to develop algorithms to analyze English speakers and those who speak Arabic and Mandarin Chinese.

Shrikanth Narayanan, an engineering professor at the University of Southern California who also uses computer methods to analyze emotional speech, notes that some aspects of irate language are easy to spot. In marital counseling arguments, for instance, the word “you” is a lot more common than “I” when spouses blame each other for problems.
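That particular cue is simple enough to check in a few lines of code. The sketch below counts the two pronouns in a transcript and compares them; the function name and the sample sentences are invented for illustration.

import re
from collections import Counter

def you_to_i_ratio(transcript):
    """Ratio of 'you' to 'I' in a transcript (one is added to the denominator to avoid dividing by zero)."""
    counts = Counter(re.findall(r"[a-z']+", transcript.lower()))
    return counts["you"] / (counts["i"] + 1)

blaming = "You never listen. You always do this. You said you would call."
owning = "I felt hurt when I waited, and I wanted to talk about it."
print(you_to_i_ratio(blaming), you_to_i_ratio(owning))   # prints 4.0 and 0.0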

But homing in on the finer signs of emotions is tougher. “We are constantly trying to calculate pitch very accurately” to capture minute variations, he said. His mathematical techniques use hundreds of cues from pitch, timing and intensity to distinguish between patterns of angry and non-angry speech.

His lab has also found ways to use vocal cues to spot inebriation, though it hasn’t yet had luck in making its computers detect humor — a hard task for the machines, he said.

Elsewhere, Eileen Fitzpatrick, a professor of linguistics at Montclair State University in New Jersey, and her colleague Joan Bachenko are using computers to automatically spot clusters of words and phrases that may signal deception. In their research, they have been drawing on statements in court cases that were later shown to be lies.

David F. Larcker, an accounting professor at the Stanford Graduate School of Business, audited a course in computer linguistics taught by Dr. Jurafsky and then applied its methods to analyze the words of financial executives who made statements that were later disproved.

These executives were, it turned out, big users of “clearly,” “very clearly” and other terms that Joseph Williams, the late University of Chicago professor who wrote the textbook “Style,” branded as “trust me, stupid” words.
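Counting such words in a transcript is straightforward, as in the sketch below. It applies equally to the word-cluster approach described above; the word list, the function name and the sample sentence are placeholders of mine, and the researchers' actual lexicons and statistical tests are more elaborate.

import re

# Placeholder list of "trust me" words; "very clearly" is counted via "clearly".
TRUST_ME_WORDS = ["clearly", "frankly", "honestly"]

def trust_me_rate(transcript):
    """Occurrences of trust-me words per 1,000 words of the transcript."""
    text = transcript.lower()
    total = len(re.findall(r"[a-z']+", text))
    hits = sum(len(re.findall(r"\b%s\b" % w, text)) for w in TRUST_ME_WORDS)
    return 1000.0 * hits / max(total, 1)

sample = "Clearly, our fundamentals are sound. Very clearly, the outlook is strong."
print(round(trust_me_rate(sample), 1), "trust-me words per 1,000 words")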

PROFESSOR LARCKER says he thinks computer linguistics may also be useful for shareholders and analysts, helping them mitigate risk by analyzing executives’ words.

“From a portfolio manager’s perspective looking at 60 to 80 stocks, maybe such software could lead to some smart pruning,” he said. “It’s a practical thing. In this environment, with people a bit queasy about investments, it could be a valuable tool.”