How Companies Are Using Artificial Intelligence to Tell if You’re Lying

Some data scientists believe AI can analyze voice and eye movements to help banks, insurers, and others detect fraud. Yet even among AI experts, many doubt such systems can consistently find the truth.

Illustration: Nash Weerasekera

Insurance company Lemonade requires customers making claims to upload a video explaining their loss. But when the upstart insurer earlier this year said on Twitter that artificial intelligence analyzes these videos for “non-verbal cues” that may suggest fraud, the company received a barrage of angry messages on social media.

“There’s a wide range of possible uses of digital technologies in the space,” says Luke Stark, who studies the ethical and social impacts of artificial intelligence and machine learning as an assistant professor at Western University in London, Ontario. “They range from the plausibly occasionally useful-but-problematic to the doesn’t-work-at-all and problematic.”

The Lemonade episode highlights both corporate America’s growing investment in detecting lying and deception and the potential backlash when details of such technology become public.

“They struggle with talking about fraud for a variety of reasons. It’s almost like there is no winning there,” says Vijay Balasubramaniyan, co-founder and CEO of Pindrop, who claims his company’s product is 99.5 percent effective in correctly identifying who is calling and 80 percent effective in detecting fraudsters. Pindrop says its clients include five of the top seven insurers and eight of the top 10 banks.

“You talk a lot about it, and the fraudsters know what’s happening and therefore start figuring out ways to beat the system. So, it doesn’t help,” he says. “And worse is if they get the wrong kind of press coverage, like Lemonade did.”

Behind the scenes, companies are spending ever-growing amounts on fraud detection. One recent study from Grand View Research, a market research company, estimated that the fraud detection and prevention market was worth almost $21 billion worldwide in 2020, with that figure expected to grow to more than $62 billion by 2028. 

In more than a dozen interviews for this article, top industry officials said that some limited lie detection works, such as by matching your voice to previous calls to prevent fraud. Such technology sorts out who you are—whether you are who you claim to be when you contact a company. Many experts doubt, however, that similar technology can make the next step after confirming your identity and evaluate whether what you are saying is true.

Predicting Loan Defaults

Despite such doubts, many firms are seeking to develop technology that can detect fraud, deception, or intent through clues from your voice, eye movements, body language, or even your “digital body language” from how you use your mouse and keyboard.

For example, Israel-based Voicesense markets several products, including a Loan Default Predictor, which studies vocal intonation, pace of speaking, and emphasis—but not the actual words—in what the company calls behavioral predictive voice analytics. “What if you could forecast your applicants’ default probability using only their voice? You can . . . with Voicesense,” the company advertises.

Another startup, ForMotiv, advertises a “Digital Polygraph” that “uses machine learning, artificial intelligence, and predictive behavioral analytics to measure a user’s ‘Digital Body Language’ and predict, in real-time, what their true ‘intent’ is.” The company, which has offices in Philadelphia and Ho Chi Minh City in Vietnam, targets its services to insurers, banks, and call centers.

Voicesense says that in analyzing voices for a major Israeli bank, Bank Leumi, it flagged 306 people as high risk, and 173 of them defaulted on their loans, a much higher default rate than among the 199 people deemed low risk, of whom 65 defaulted.

“We can predict, ‘What is the probability this person is likely to default,’ it could be two years forward,” founder Yoav Degani says about his company’s services. “Those that are marked as high risk, usually, the percentage of defaults will be two times, three times, four times higher than those we marked as low.”


Yet in the group identified as high risk, 43 percent paid back the loan, which means deploying this technology could turn away creditworthy clients. A spokesman for Bank Leumi says it was unaware of this study, and Degani says the official who authorized his research has since left the bank.
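To put those figures in perspective, here is a back-of-the-envelope check of the reported numbers (a simple recalculation for illustration, not Voicesense’s or Bank Leumi’s own analysis):

```python
# Back-of-the-envelope check of the Bank Leumi figures reported above
# (a recalculation for illustration, not Voicesense's own analysis).
high_risk_total, high_risk_defaults = 306, 173
low_risk_total, low_risk_defaults = 199, 65

high_rate = high_risk_defaults / high_risk_total   # about 57 percent defaulted
low_rate = low_risk_defaults / low_risk_total      # about 33 percent defaulted

print(f"High-risk default rate: {high_rate:.1%}")
print(f"Low-risk default rate:  {low_rate:.1%}")
print(f"High-risk applicants who repaid anyway: {1 - high_rate:.1%}")  # about 43 percent
```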

“The rate of false positives with detecting lying is so high that you’re turning away so many genuine people from a loan, even though you might be catching the occasional bad person,” says Balasubramaniyan, of Pindrop, speaking in general terms. “That’s worse for them, right? Their job is providing loans and that’s how they get business.”

The comment highlights the dilemma of fraud and lie detection technology. If it works well, companies save money and reduce risks in areas such as lending, insurance claims, and other transactions. Yet if the technology does not live up to its promises, it could deny honest people access to services they deserve.

Karthik Tadinada, director of data science at Featurespace, a British company that focuses on fraud and financial crime prevention, says companies can detect suspicious behavior by monitoring incoming telephone numbers that call many times a day, or for disposable burner phone numbers. But detecting lying or intent in the abstract is much more difficult. 

“I would cast a very skeptical eye on claims that this stuff is possible,” he says. “Those kinds of things working at that scale—I haven’t really seen anything that has even a hope of working.”

Interest From Upstarts

Startups may become the first to embrace new deception detection technology to screen customers for loans or insurance fraud because they seek an edge against conservative, well-established banks and insurers that rely on traditional credit scores and background checks. 

“We do not use such technology today,” says Guy Goldstein, co-founder and CEO of Next Insurance, which targets small businesses and the self-employed. “I would imagine we would use it in the future, and it could mitigate risk and have significant impact on a business like Next.”

The promise of machine learning to cut costs and speed up customer service has attracted strong investor interest in disruptive insurers such as Next, which has raised more than $800 million as of March 2021, and Lemonade, which has a $4 billion stock market valuation after going public last year.

After enduring public criticism in May, Lemonade stressed that it does not reject any insurance claims based on artificial intelligence alone. “The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities,” it wrote on the company website.

However, Lemonade approves or denies about a third of all claims without human intervention, according to the company’s January 2021 S-1 filing to the Securities and Exchange Commission. Lemonade’s two founders declined several requests for comment. “We do not use AI to automatically reject or accept claims,” said Lemonade spokeswoman Yael Wissner-Levy. “The ‘instant’ claims are handled by our system (you can call it automated algorithms, I guess) based on predetermined rules.”

Top banks and credit card companies routinely use related technology to detect whether you really are who you say you are when you call them. Their efforts have gotten more sophisticated in recent years, such as by comparing your voice with past conversations, identifying unusual background sounds, and even looking for unusual patterns from computer mice and keyboards, experts say.

Part of the reason for this new fraud technology is that simple passwords are often not secure enough.

“You’re probably still giving your mother’s maiden name, birthday, and PIN number,” says Edward Miller, CEO of LumenVox, whose voice technology is used to authenticate users at companies including Vanguard, IBM, and Air Canada. “Because of social engineering, you can get a lot of that information pretty easily.”

Despite the limitations of traditional passwords, using technology to analyze the sound of your voice, motion of your eyes, or overall behavior for loans, insurance, or other services is controversial. One important reason is that from old-fashioned police polygraph tests through today’s algorithms, humans have never invented anything near a flawless truth detection machine. 

John Kircher, chief scientist at Converus, whose product seeks to identify deception by measuring minute fluctuations in eye motion, is open about the limitations of lie-detection technology. “It’s not perfect. If it’s 90 percent accurate, one person out of 10, if you’re telling the truth, is going to fail it. So there’s a reason for people to be concerned.”
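To see why that error rate matters at scale, consider an illustration with assumed numbers (not data from Converus or any other vendor): most loan or claims applicants are honest, so even a modest false-positive rate flags far more truthful people than fraudsters.

```python
# Illustration with assumed numbers: a "90 percent accurate" screen applied to
# 1,000 honest applicants and 10 fraudsters (hypothetical pool, not vendor data).
honest, fraudsters = 1000, 10
accuracy = 0.90

false_alarms = honest * (1 - accuracy)  # truthful people wrongly flagged
caught = fraudsters * accuracy          # fraudsters correctly flagged

print(f"Honest applicants flagged: {false_alarms:.0f}")  # 100
print(f"Fraudsters flagged:        {caught:.0f}")        # 9
# Roughly 9 out of every 10 people the system flags are telling the truth.
```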

Can artificial intelligence analyze your facial expressions or eye movements to see if you're lying? Illustration: Nash Weerasekera

How the Technology Works

To detect potential fraud, researchers rely on machine learning, a method in which computers try to detect patterns by analyzing data from thousands or even millions of people. When you read about artificial intelligence today, it typically refers to such machine learning, not the sentient robots of sci-fi movies and novels.

Consider how a computer might estimate house prices from thousands of recent sales. A person would decide which variables to enter into a model, such as square footage, number of bedrooms, location, size of yard, swimming pools, and other extras, and the model would return a price. Now, “what the models can do is kind of sift through all of this stuff and figure out what the important variables are for itself,” says Tadinada of Featurespace.
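For a concrete picture of that idea, here is a minimal sketch in Python using invented house-sale data and an off-the-shelf model that ranks which variables matter; it illustrates the general machine-learning technique, not Featurespace’s or any other vendor’s actual system.

```python
# Minimal sketch, assuming invented data: let a model discover which variables
# matter for house prices, rather than a person specifying them up front.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000

sqft = rng.uniform(500, 4000, n)        # square footage
bedrooms = rng.integers(1, 6, n)        # number of bedrooms
yard = rng.uniform(0, 10000, n)         # yard size in square feet
has_pool = rng.integers(0, 2, n)        # swimming pool yes/no
noise = rng.normal(size=n)              # deliberately irrelevant column

# Invented pricing rule the model has to rediscover from the data.
price = (150 * sqft + 20_000 * bedrooms + 2 * yard + 30_000 * has_pool
         + rng.normal(0, 20_000, n))

X = np.column_stack([sqft, bedrooms, yard, has_pool, noise])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, price)

# The model "sifts through all of this stuff" and ranks the variables itself.
for name, score in zip(["sqft", "bedrooms", "yard", "has_pool", "noise"],
                       model.feature_importances_):
    print(f"{name:10s} {score:.3f}")
```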

The idea is the same when seeking correlations such as loan default probability. Yet drawing conclusions about humans is far more difficult than devising a formula for house prices. The lack of confidence you display when talking to the bank may suggest you are a shifty character, but in reality you may be someone who has just learned their child got sick at school. Or you may be on medication, tired, or distracted by any number of other things.

In recent years, some startups have made more confident claims about their ability to sort out the truth, sometimes going too far even for industry insiders. “Vendors are incredibly overzealous with their marketing,” says Kevin Gosschalk, founder and CEO of fraud prevention company Arkose Labs, whose clients include Microsoft, PayPal, and Singapore Airlines.

“You go to Black Hat and 50 percent of what people say is just thoughts and dreams as to where they hope they could be,” he says, referring to a big annual cybersecurity conference. “That is just the reality of the cybersecurity industry.”

Eyes and Micro-Expressions

Todd Mickelsen, president and CEO of Utah-based Converus, believes the eyes are more revealing than the voice, and more accurate for detecting fraud and deception. His company’s $4,800 EyeDetect camera system, which monitors subtle changes in the eyes, can determine whether someone is credible or deceptive more than 85 percent of the time, he says. “EyeDetect is the first viable, scalable, scientifically validated credibility assessment method invented since the polygraph debuted nearly 100 years ago,” the company advertises.

Mickelsen says U.S. banks and insurers do not use EyeDetect, but Converus has sold its services in more than 50 countries, including in Latin America, often for employment screening—a use largely barred by U.S. law. One client is a Peruvian financial services company, which Converus says saved more than half a million dollars in a year by screening loan applicants in a country that does not maintain comprehensive credit histories.

Some American defense attorneys and private investigators also have used the technology to test the truthfulness of clients or potential clients, Mickelsen says.

China provides a glimpse of what could come to the U.S. There, OneConnect Financial Technology, an offshoot of a Chinese financial services company, says it monitors facial micro-expressions for signs that loan applicants are lying.

“OneConnect’s remote credit review system can analyze facial expressions during credit interviews using smartphone cameras to flag potential fraudulent activity for further investigation,” the company said in a 2020 press release. “The most notable benefit for retail banking is the system’s decision-making engine, which has been shown to cut credit losses during the risk approval process.”

Traditional banks and insurers are often reluctant to discuss whether, or how, they might be employing deception detection technology. Top technology or other officials from AIG, Allstate, Bank of America, Citi, Liberty Mutual, The Loan Store, TD Bank, and Truist Bank did not respond to multiple requests for comment. A spokeswoman for Ford Credit declined to say whether it uses such tech in assessing loans or claims.

Risk of Discrimination by Algorithm

In addition to fears about bad publicity, traditional companies may be reluctant to discuss these issues because previous use of similar technology, such as facial recognition, raised questions surrounding potential discrimination. Some scholars and activists are concerned the same flaws also might plague deception detection technology. 

“As a Black man asking for a loan, I’m seen as a liar, no matter what the computer says,” says Talib Kweli, a rapper, entrepreneur, and activist who has written several songs that discuss lying.

“If your example is a bank officer or loan officer saying he’s going to deny me a loan based on the fact that he could tell (I’m) lying, that sounds a lot to me like just what being Black feels like.”

Analyzing voice, eyes, or computer mouse motion to make decisions regarding loans, insurance claims, or other business transactions could also create image or legal problems, says Danielle Warren, a Rutgers Business School professor who has studied insurance fraud.

“I can imagine the backlash from consumers if they’re being denied loans because of some unproven technology, suggesting that they could be deceptive. That sounds like an ethical nightmare, like a lawsuit waiting to happen,” she says. “In a courtroom, you’re not going to say: ‘I rejected their application for a loan because of the quickness of their clicking of the mouse.’”

Degani of Voicesense responds that his technology avoids discrimination by focusing only on speech patterns rather than words, making his company unaware of race, age, context, or even higher bank default rates in one part of town vs. another. “One of the advantages of our approach is that it has less prejudice than any other . . . unlike many other machine-learning approaches that are many times accused, that are to some degree exposed, to such biases and prejudice,” he says. 

All executives interviewed for this article agreed that companies should never use lie detection results alone to decide anything. Converus CEO Mickelsen gives the example of couples who turn to his company to prove infidelity—a service advertised on its website. “It’s a questionable area, because we’re not 100 percent accurate, we’re going to get it wrong on occasion. And then what does that do to that relationship?”

“So our recommendation is that it should be used to help make a decision,” he says. “But it’s not a definitive result.”

Adam Tanner, an associate at Harvard’s Institute for Quantitative Social Science, is a contributing editor at Consumer Reports. He is the author of “What Stays in Vegas: The World of Personal Data—Lifeblood of Big Business—and the End of Privacy as We Know It” and “Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.”

