On Using Generative AI As A Lie Detector Including Trying To Bust The Infamous Two Truths And A Lie Gambit Via ChatGPT
In today’s column, I am going to closely explore a stimulating question about whether modern-day generative AI might be successful as a lie detector.
Here’s the deal.
Do you think that ChatGPT, the widely and wildly popular generative AI app by AI maker OpenAI, could serve as a lie detector? Or, for that matter, would any of the existing major generative AI apps such as GPT-4, Bard, Gemini, Claude 2, etc. be able to sufficiently perform in ascertaining whether someone is lying?
This is more than idle consideration.
Perhaps generative AI could be added to the litany of means for determining when people are lying. Our courts depend upon trying to find the truth, sometimes doing so satisfactorily while other times missing the mark. In your everyday life, you are undoubtedly constantly trying to ferret out whether people around you are truth-telling or might be pulling a fast one on you.
Imagine that generative AI was able to readily discern whether a person was a truth-teller or a liar. You might have a generative AI app conveniently running on your smartphone and could use this handy-dandy mechanism to do an on-the-spot lie detection test. Voila, the person is either shown to be a dirty cheat or someone scrupulously telling the truth.
I don’t want to go too far off the deep end but contemplate what this might do to society all told. If people knew that you had at your fingertips a device to discern truth versus lies, they would realize that lying is not going to be the free-for-all that it is today. They would have a heightened risk of being instantly caught in a lie. Ergo, telling lies would almost certainly drop precipitously.
What kind of world would we live in if it contained hardly any lies?
Some might say this would be heavenly. Others might compellingly argue that without the ability to tell lies, society would seemingly collapse. Lies are needed to grease the skids. Lies help make the world go around. Does that comport with your view, or do you believe we would be better off with lie detectors in our pockets and ready at the get-go? You would only need to brandish your smartphone or smartwatch, and that alone would scare someone into realizing they aren’t going to be using blarney with you.
Before we go whole hog into zany predictions about the future, we ought to first take a serious here-and-now glimpse at what generative AI might be able to do when it comes to lie detection. The reality is that we cannot just wantonly pontificate about lie detection via the likes of generative AI; instead, we need to examine the precise practicalities. Only then should we veer into revelations about how the future might unfold.
Let’s agree to focus on one principal concern.
Is there a useful and prudent means of using generative AI for purposes of lie detection?
We shall endeavor to keep our eye on that ball and see where it rolls.
Historical Perspective On Lying
I assume we would all agree that lying has been part and parcel of humankind from the very beginning. There might have been times in history when there was less lying or more lying, but all in all, lying has always been squarely in the midst of human endeavors. I believe I am telling you the unfettered truth about that history of lying.
One line that many youngsters grow up hearing is the now-classic “liar, liar, pants on fire.” You almost certainly have heard that catchphrase. If not, I am sorry to introduce it to you. It is a line that one child might hurl at another. Adults could use the line too, though it is more the stuff of comedic actors in movies than something an adult would utter in a real-life setting (in terms of movies, I can highly recommend Jim Carrey’s comedic portrayal of a habitually lying lawyer in the 1997 movie “Liar Liar”).
A fascinating historical tracing of the popular phrase about being a liar and having one’s pants on fire was undertaken in an article entitled “All the Lies About the Origins of ‘Liar, Liar, Pants on Fire’” by Cara Giaimo, September 18, 2017, Atlas Obscura. According to the piece, the catchphrase might be linked to the statement “liar, liar, lick-dish” from the 1400s which metaphorically meant a person was lying as quickly as a canine could potentially lick a dish. This is vivid imagery especially if you’ve perchance watched a ravenous dog decide to devour a bowl of eagerly sought dog food.
There is also the oft-cited poem “The Liar,” popularly attributed to William Blake and dated to 1810 (the Atlas Obscura piece notes that this attribution is itself dubious and likely a modern invention), which has these memorable words (excerpted):
- “What infernal serpent”
- “Has lent you his forked tongue?”
- “From what pit of foul deceit”
- “Are all these whoppers sprung?”
Followed by:
- “Deceiver, dissembler”
- “Your trousers are alight”
- “From what pole or gallows”
- “Shall they dangle in the night?”
You can readily see in the latter lines that pants on fire come to the fore, namely trousers that are said to be alight.
I also note that the poem refers to lying as speaking with a forked tongue. Turns out that we have collectively devised lots of sayings to refer to the act of lying. It makes sense that we would culturally come up with a multitude of phrases associated with lying. Lying is pervasive. We might as well have a slew of catchphrases to highlight and illuminate something persistent and ubiquitous.
Here are just a few of the many such expressions:
- Spinning a yarn.
- Pulling a fast one.
- Shading the truth.
- Telling a fib.
- Pulling someone’s leg.
- Bamboozling.
- Stretching the truth.
- Hoodwinking.
- Being a fibber.
- Spinning a tale.
- Lying through your teeth.
- Slipping one past you.
- Pulling the wool over someone’s eyes.
- Bending the truth.
What is your favorite way to proclaim that someone is lying?
Maybe you prefer the simpler indication of “liar” or “liar, liar”. I suppose you can soften the blow of such an accusation by using the assertion that a person is spinning a yarn or bending the truth. Those are semantic variations that do not appear to be quite so in-your-face and might be less likely to spur an angered response.
Nature Of Lying And Significance In An AI “Liar” Context
I dare say that lying can at times be nearly acceptable. You might recoil at such a contention. One perspective is that lying is always wrong. Never can lying be tolerated. Lies are bad. Furthermore, lies are like a slippery slope. Once you start lying, it will never end. Lies snowball into more and increasingly endangering lies.
Let’s dip our toes into that conundrum.
Suppose that someone you love dearly has bought a new hat and they cherish it. They come upon you and ask for “your honest opinion” about the hat. Turns out that in your heart of hearts, you think it is the ugliest hat you’ve ever seen. You also believe that it makes your loved one look completely foolish.
Which are you going to be, the harsh truth-teller or the seemingly inconsequential soft-lie liar?
Most people would probably try to walk a delicate line of being pleasant about the hat, neither overly praising it nor denouncing it. Is the person lying to their loved one? You might assert that this is a lie. If the truth is that the person detests the hat and wants their loved one to know of their presumed foolishness, only the unvarnished truth is going to pass muster here.
To try and clarify the ins and outs of lies, let’s divide them into two major classifications:
- (1) Intentional lying – maliciousness versus being well-intended (i.e., a white lie)
- (2) Unintentional lying – sincere belief even though factually wrong, repeating an unaware falsehood, etc.
The first category is intentional lies. Let’s informally define this as when the person telling the lie knows it is a lie and they are intentionally opting to convey the lie. They might maliciously do this. They might straightforwardly do this. They might also do this with the best of intentions, something typically referred to as a white lie. Other phrasings about a best-intended lie are that they are construed as being a small lie, a harmless lie, etc.
The second category consists of unintentional lies. A person might believe something that isn’t true, but they nonetheless accept it as the truth. They proceed to tell this falsehood to someone else, portraying it as a truth, which is what they sincerely believe. There is no intention to deceive. Of course, an unintentional lie can do as much damage as an intentional one, particularly if the unintentional lie seems convincing due to the evident sincerity of the “liar” who has passed along a falsehood as though it is truth.
There is an old joke about a town that has two types of people. There is the stridently always truth-telling populace. In addition, there is the always lying portion of the populace. Each will stick solely to their predilection. If you meet a truth-teller, they will never lie to you. If you meet a liar, they will never tell you the truth.
You wander into this town. The first person you see, you ask them a simple question, namely are they a truth-teller or are they a liar?
The town member says that they are a truth-teller.
I ask you, have you indeed met a truth-teller of that town, or have you met a liar? Please ponder this. Note that either way the reply would be the same: a truth-teller truthfully says they are a truth-teller, while a liar falsely claims the very same thing, so the answer alone tells you nothing. Once you’ve reasoned that out, pat yourself on the back and continue reading along.
I’d like to insert into this discussion about intentional versus unintentional lies that intent is a substantive element of how society judges the impacts of lying and what might happen to someone who lies. You undoubtedly know that our justice system takes into account whether a lie was intentional versus unintentional. We make a big deal about the role of intent.
I wanted to bring up the notion of intent due to the implications associated with AI.
Time to start bringing the AI topic into this heady matter.
Generative AI is the type of AI that can quite fluently generate essays and carry on conversations. Anyone who has used ChatGPT, or a similar contemporary generative AI, knows how amazingly fluent the AI appears to be. Please know that this is not due to the AI possessing sentience. We do not have any AI that is as of yet sentient. We don’t know if we will arrive there, nor when we might arrive there. Those outsized headlines that proclaim AI is sentient are wrong, misleading, and outrightly shameful, see my analysis at the link here.
Can today’s generative AI tell lies?
Absolutely.
Indeed, an area of concern about generative AI is that so-called AI hallucinations can occur (I disfavor using the word “hallucinations” because it serves to anthropomorphize AI, see the link here). The mathematical and computational pattern matching underlying the AI can produce falsehoods or make up fictitious facts and figures.
I would characterize those as lies.
Some fervently disagree. They say that a lie must have intent. Only humans have intent. Therefore, the AI cannot be lying. Maybe, once we attain sentient AI, assuming we do, at that juncture AI might be a liar. Up until then, the AI cannot be a liar. It can emit falsehoods and make up fake stuff, but it cannot be declared as a liar per se.
This is quite a heatedly debated matter.
A contrary viewpoint emphasizes that humans have intentional and unintentional lies. The unintentional lies do not have an intention to lie. If AI doesn’t have intent or cannot form intent in the same manner as humans, we would say that AI produces unintentional lies. They are lies. They are unintentional lies, but still, they are lies.
More twists and turns enter into this morass.
Is it true that AI such as generative AI does not have “intent”?
When referring to intention as an element of human behavior, we find ourselves on squishy ground. How do you determine the intent of a person? Is it embodied in their soul? Is it in their mind? How can we know what was truly in their mind? Might they fabricate intent or shade what their intent really was? And so the questions arise.
With today’s AI, suppose that the AI is programmed to lie. A person instructs generative AI to lie. The generative AI proceeds to lie to anyone who uses the AI. Would this constitute intention by the AI? You see, the AI was told to lie, and it is now essentially intending to lie. A contrary argument would be that the AI is lacking in intent and only being mechanically manipulated by a human that programmed the AI to lie.
Okay, take this a step further.
We data-train generative AI on a vast swath of text found throughout the Internet. This consists of human writing of all kinds, including encyclopedias, the works of Shakespeare, blogs by everyday folks, etc. The AI-based mathematical and computational pattern matching seeks to mimic how humans write and express themselves.
During that mathematical and computational analysis, the pattern of human lying gets detected. This gets wrapped into the same pattern-producing capacities that allow the generative AI to produce fluent essays and carry on dialogues. From time to time, the generative AI, via probabilities and statistics, lands on producing lies, which, let’s say, are the so-called AI hallucinations.
Is the AI now formulating the lies via “intent” or how would you characterize the generating of these lies?
There isn’t a human or Wizard of Oz that has instructed the AI to lie. The lying was formulated as part of the pattern matching about how humans write and what they express. Some argue that this is a free-willing form of lying in that the AI makers didn’t go out of their way to get the AI to generate lies. The opposite in a sense is more likely. Most of the AI makers perform a fine-tuning of their generative AI whereby they attempt to dampen the emission of lies by their generative AI (a means often referred to as RLHF, reinforcement learning from human feedback, see my coverage at the link here).
Depending upon your definition of “intent”, you might contend that generative AI has, via pattern matching, arrived at exhibiting lying, doing so based on mathematically and computationally “intending” to do so. Even if this is different from the flesh-and-blood nature of human lying, the argument would be that it is still lying. The AI is a liar and does so with intention.
Grab yourself a nice glass of wine, take a sip or two, and go contemplate the weighty considerations.
For now, in this discussion, we don’t need to pin down the resolution to the intent question. Remember, our focus is to see if AI, especially generative AI, can do lie detection. Let’s stick with that thought.
Detecting Lies That People Tell
Now that we’ve discussed the nature of lies, we should consider how it is that we can potentially detect when someone is telling a lie.
Let’s meander through the terrain of lie-telling.
We all enjoy watching online videos of cops who ask people questions and are trying to figure out whether a person might be lying. Those videos of suspected drunk drivers get millions of views. The police officer pulls over the driver, usually due to speeding, weaving, or some other infraction. The officer asks whether the person has been drinking.
The usual initial answer is that they haven’t had a drink at all. Not one. They are shocked and dismayed that such a question is being asked of them. How dare the officer even bring up such an insulting or overbearing question? Outrageous.
The officer tells the person that they can smell alcohol on their breath. At that juncture, many of these drivers will then say that yes, they had a drink, but only one, and it was many hours ago. Aha, a lie was initially told. The person insisted they hadn’t had a drink at all. Now, they admit they did, but they attempt to make it seem inconsequential and unimportant.
Though these videos are no laughing matter, since the person is potentially driving drunk and could harm or kill others or themselves, there is something mind-boggling about watching such a person try to shift their lies and somehow remain a perceived truth-teller. For example, the officer tells the person that there is an open beer can on the seat next to the driver. The driver will attempt to deflect this by saying it isn’t their beer, or that it is a beer from two days ago that happens to be on the seat. In one video, the officer touches the beer can, which is ice cold, and asks how a supposedly days-old beer can still be icy cold.
The driver usually tries to shift to some other topic because they realize they have been pinned to the mat. Their house of cards of lies has been shaken and taken apart for what it is.
What might an officer do to try and ascertain lies, or shall we say in a more macroscopic sense what are the primary means of ferreting out lies overall?
I like to make things straightforward by aiming at three major methods of lie detection:
- (1) Physiological lie detection
- (2) Observational lie detection
- (3) Cognitive lie detection
In the first instance, we might use physiological lie detection. Think of biomechanical measuring machines that are lie detectors of the famous polygraph variety. You’ve seen those in many movies and TV shows. A person is hooked up to a lie detector. They are asked a series of questions. The machine is recording their physiological response such as their breathing rate, perhaps their blood pressure, and other physiological measures. I will say more about this in a moment.
For the second instance, observational lie detection can be used. You look at a person and examine their facial expressions. Are they grimacing or relaxed? Are they sweating? Do they make wild gestures? It almost seems that as a process of growing up in a modern society, we gradually learn how to observe people and seek to ascertain their lies versus truth-telling. Some people get good at observing others and making these kinds of judgments. Some are lousy at it. Likewise, there are people who get good at expressing their physical mannerisms to appear to be truth-telling when they are actually lying.
We live in a world of people who have tuned themselves to lie.
Whereas your ability to detect lies via observation might work on some people, other people can readily fool you. They have managed, by one means or another, to showcase signs that seem truthful and draw you into their lies by making you believe your lie-detection acumen is high. The reality is that they have found ways to turn your own beliefs about detecting lies into convincing you that they are telling the truth.
Makes your head spin to think about it.
The third instance is known as cognitive lie detection. This involves using logic and various cognitive means to ferret out what is a lie versus what is the truth. People in certain kinds of professions are trained in cognitive lie detection and/or via practice get pretty good at it. The same could be said of someone who perchance grows up in a setting surrounded by liars. They likely develop cognitive skills to try and dive into whether someone is telling the truth or not.
Much of the time, you probably would need to rely on all three of the major approaches rather than only one. You can try to use just one, but that’s a tough row to hoe in terms of detecting lies. The better odds usually come from using the three in combination, or at least two of the methods together.
Consider the police officer who stopped the suspected drunk driver. An observation was made of the driver at the wheel, and they were speeding or weaving the vehicle. That’s an observation about the person and their actions. The cop then asked the person whether they had been drinking, and the person said no. This has laid the foundation for cognitive lie detection. A potential cognitive or mental trap has been set.
An observation or perhaps physiological detection was made of the person’s breath. A cognitive lie detection then occurred by confronting the driver with their breath, for which the person then cognitively backed down from their lie that they hadn’t had anything to drink. You can see that all three of the major methods are being woven together.
Someone highly versed in lie detection can almost seamlessly interweave the three methods. They go back and forth, trying to undertake a kind of battle of wits. Try some physiological lie detection. Try some observational lie detection. Try some cognitive lie detection. Continue to repeat these. Use each method to aid in cross-checking the other used method.
We all do this on a daily basis, though we don’t necessarily call things out in a brassy manner. The loved one who bought the hat might be wondering whether you are going to tell the truth about the hat. They might be watching your face closely. They know that you might say nice things about the hat, but if your facial expression is stilted, they might suspect you are holding back your real thoughts. They might try to ask you questions about the hat to tease out your true feelings.
Back-and-forth things go, varying the use of the three major methods of lie detection.
One of the reasons that we seem to enjoy Sherlock Holmes stories and the like is that we get to watch a master of lie detection at work. We try to anticipate what we would do in their shoes. Being adept at lie detection is a crucial skill in life. At times, you can live or die based on your ability to detect lies.
You could assert that honing our personal abilities at lie detection is a lifelong process; no matter how much you try, there is always something new to be learned about how to figure out liars and lies.
Generative AI can play a demonstrative role in each of the three major methods of lie detection.
Here is where things are headed:
- (1) Generative AI and Physiological Lie Detection (GenAI-Physio LD)
- (2) Generative AI and Observational Lie Detection (GenAI-Observ LD)
- (3) Generative AI and Cognitive Lie Detection (GenAI-Cogno LD)
Yes, we can include generative AI in each of the three major methods.
In the first case, a conventional lie detector or polygraph machine can have generative AI either embedded into the machinery or used in conjunction with the device via a jointly established generative AI app.
In the second case, observational lie detection can be augmented by using a generative AI app that has been connected to a camera, microphone, or other “observational” sensory devices. This is not as space-age or farfetched as it might seem. Recall that your smartphone has a camera, a microphone, and other sensory capabilities built into it. This is the stuff of a normal life these days.
The third instance, cognitive lie detection, is the most direct means of leaning into generative AI. We can readily use generative AI to pose probing questions and keep track of logical contradictions and other cognitive slips that might reveal a lie, making it quite handy for this purpose.
I will be digging deeper into these matters shortly.
Before I do so, we should consider the somewhat startling topic of AI autonomy as it relates to this truth-telling or lie-detecting context.
Here’s the rub.
If people assigned to ferret out lies begin to use or rely upon AI to aid in this endeavor, they might fall into the mental trap of overly relying on the AI. The AI might, for example, state that someone is for sure lying; the person trying to detect the lie is unsure, but they are lulled into believing that the AI is right (the assumption of the “all-knowing” AI mystique). This could happen despite their own reservations about concluding that someone is lying. The AI becomes a crutch. This is dangerous.
Another grievous concern is opting to let the AI be fully autonomous in terms of performing lie detection. We perhaps take the human that is responsible for doing the lie-detecting out of the loop. Imagine this. You are accused of lying. You deny doing so. You are led to a room where an Alexa or Siri-like speaker is set up and a camera is on you too. For the next hour, you are grilled by an AI system that is alleged to be capable of doing lie detection. The AI at the end of the hour categorically states that you lied. No doubt about it.
Do we want AI to make autonomous decisions about whether a person has been truth-telling or lying?
Heady stuff. I’ve raised lots of these kinds of vexing AI Ethics and AI Law questions throughout my column postings, thus if that interests you, please take a look at the link here.
Take another sip of that wine and get prepared for more detailed unpacking on this worthy topic.
Research That Faithfully Informs About Lie Detection
There is a plethora of research on lie detection. The topic draws attention to human nature, the sociology and psychology of humankind, cultural norms, cognition, physiological responses, and so on. I’ve chosen a few notable picks to share with you.
In a research article entitled “Historical Techniques of Lie Detection”, by Martina Vicianova, European Journal of Psychology, 2015, the paper fascinatingly traces the history of lies and lie detection. A rather curious early-used method of lie detection purportedly made use of dry rice as mentioned in the third bulleted item that I provide here (excerpted):
- “Since time immemorial, lying has been a part of everyday life. For this reason, it has become a subject of interest in several disciplines, including psychology.”
- “Deception is a pervasive, and some would argue a necessary, phenomenon in human communication, yet its very action stirs up moral indignation and rage. As a result, for as long as there have been lies, there have been methods of lie detection.”
- “One of the first methods to prove the veracity of a statement uttered by the accused was described in China circa 1000 BC. The person suspected of lying was required to fill his/her mouth with a handful of dry rice. After a while, s/he was to spit out the rice. If the expectorated rice remained dry, the suspect was found guilty of fraud. This method was based on the physiological principle and the assumption that experiencing fear and anxiety is accompanied by decreased salivation and a dry mouth.”
Please do not make use of the dry rice method.
Society has generally advanced beyond a number of oddish, dated approaches to lie detection, including ones that caused harm to befall the accused liar. You might recall Monty Python skits of this ilk. A person is put into a dire situation. In some cases, if the person lives, they are said to be telling the truth, while if they die, they were lying. Other times the setup is the opposite, namely if the person died, they were telling the truth (well, hurrah for their dead soul), but if they lived, they were lying. These are ostensibly no-win situations.
Eventually, devices such as the polygraph were invented. Early versions were often questioned as to whether they could accurately determine that someone was lying. The courts were especially interested in such matters due to the consideration of whether a polygraph analysis merited legal evidentiary submission.
In the same article cited above, the historical tracing also examined the polygraph as a lie detection method:
- “National Academies of Science (NAS) indicated the reliability of polygraphs as 81% – 91% (National Research Council, Committee to Review the Scientific Evidence on the Polygraph, 2003, p. 4).”
- “The signs of nervousness, fear and emotional disturbances occur not only in people who reported false information, but also in people who tell the truth.”
- “In conclusion, it can be stated that the polygraph does not detect lies but instead measures physiological responses postulated to be associated with deception. None of these responses are specific to deception, nor are they necessarily always present when deception occurs.”
- “However, when used by well-trained examiners and in conjunction with other techniques, it appears to offer a useful adjunct in identifying those who attempt to deceive.”
The traditional polygraph examination has pluses and minuses.
As noted by the research paper, a person’s physiological reactions do not necessarily pertain to telling a lie or telling the truth. They might have other reasons for their physical reactions. There is also the matter of how questions are asked of the person. A question phrased in a particular way can potentially spur a specific kind of physiological response. The setting of when, where, why, and how a polygraph is used can be a demonstrative determiner of the outcome.
A myth that is sometimes portrayed in movies and television shows is that polygraphs are infallible. You see someone who breaks down and admits to some crime because they are watching the polygraph examiner and believe the jig is up. They got nailed by a machine. You’ve got to shake your head at the creative license of distorting what really happens.
The same research paper notes that there are other means of performing physiological and observational related lie detection, including analyses of a person’s voice (the pitch, the tone, etc.), their facial expressions and body language, and some of the newest techniques entailing analyzing brain waves:
- “Voice stress analysis (VSA) is accomplished by measuring fluctuations in the physiological micro tremor present in speech.”
- “Observation and attention focused on some specific behavioral expressions have also played important roles.”
- “The concept of lying and the ability to do so have reached a new level with technological advances that have moved lie detection from the realm of fire and water to EEGs, FACS and functional MRIs.”
As I earlier mentioned, generative AI can be useful for those methods.
For example, you can use AI capabilities to perform voice stress analysis. This can be done on the fly, such as by asking a person to speak into a live microphone and in real-time have the AI undertake the data analysis. The same approach can be taken after the fact. You might have a recorded voicemail that you hope will help reveal whether the person was telling the truth during the recording. Once again, these are not ironclad avenues, and care must be exercised in leveraging these means.
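To make that tangible, here is a minimal sketch of how a crude voice-instability measure could be computed from a recording, using the open-source librosa audio library. To be clear, this is my own illustrative stand-in and not a validated voice stress analyzer; the file name is hypothetical, and any number it produces is at best one weak signal among many.

```python
# A crude pitch-instability ("jitter") proxy, NOT a validated voice
# stress analysis system. Real VSA targets physiological micro tremor,
# and even that is of debated reliability. The file name is hypothetical.
import numpy as np
import librosa  # pip install librosa

def pitch_jitter(audio_path: str) -> float:
    """Mean relative change between consecutive voiced pitch estimates."""
    y, sr = librosa.load(audio_path, sr=None)  # load at native sample rate
    f0, voiced_flag, _ = librosa.pyin(         # frame-by-frame pitch tracking
        y, fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[voiced_flag & ~np.isnan(f0)]       # keep valid, voiced frames
    if len(f0) < 2:
        return float("nan")
    return float(np.mean(np.abs(np.diff(f0)) / f0[:-1]))

# Stress is not lying, and calm is not truth-telling; treat the output
# as one weak signal among many, never as a verdict.
print(pitch_jitter("voicemail.wav"))
```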
You’ve almost certainly realized that all of these methods are reliant to some extent on cognitive lie detection. In the case of a polygraph, a trained and certified polygraph examiner is supposed to be asking carefully crafted questions. They are aiming to cognitively stimulate the person into reacting to telling the truth versus telling a lie. You don’t just put some equipment onto a person’s head and let it silently probe into their mind and discern what are lies versus what are truths. That is science fiction at this time. Someday, maybe, but not now.
Via current methods, you still must ask questions or bring up topics that will get the person to become mentally engaged, and then spur them to react either with a stated response and/or via their physical and physiological reactions.
Cognitive Lie Detection Is The Generative AI Sweet Spot
If I were to tell you that you can’t handle the truth, and I was screaming that exhortation at you, what might you say in response?
Most people probably think immediately of the famous line that occurred in the 1992 movie “A Few Good Men”. I assume you are familiar with the line. It occurs during a fictional courtroom dialogue involving actors Tom Cruise and Jack Nicholson. I don’t want to spoil the movie for you if perchance you haven’t seen it, so I will avoid giving away the details (it is worth watching!).
The gist of the courtroom dialogue is that Tom Cruise is a lawyer trying to get a witness played by Jack Nicholson to crack under pressure while on the witness stand. This attempt is akin to many courtroom dramas appearing in zillions of movies and shows over the ages. I bring up the topic to illustrate cognitive lie detection.
How can you cognitively get someone to divulge whether they are lying or telling the truth?
Let’s take a look at research on cognitive lie detection. In a research paper entitled “A Cognitive Approach to Lie Detection: A Meta-Analysis” by Aldert Vrij, Ronald P. Fisher, and Hartmut Blank, Legal and Criminological Psychology, 2015, the researchers made these salient points (excerpted):
- “The core of the cognitive lie detection approach is that investigators can magnify the differences in (non)verbal cues indicative of cognitive load displayed by truth-tellers and liars through interventions based on cognitive principles that make the liars’ task even more cognitively demanding.”
- “If successful, those interventions should result in liars displaying more diagnostic cognitive cues to deception and thereby facilitating lie detection.”
- “The cognitive lie detection approach consists of three techniques that can differentiate truth tellers from liars: (i) Imposing cognitive load, (ii) encouraging interviewees to provide more information, and (iii) asking unexpected questions.”
Allow me to explain.
Suppose that a person is holding in their mind a lie and that they know the lie is in fact a lie (thus, if told, this would be an intentional lie). You ask the person a question that elicits the lie and they verbalize or write it down to be seen. The lie is now sitting on the table, as it were.
What can you do to try and figure out whether the statement made is a lie or the truth?
You might have some kind of grounding to compare the statement against. A person says they weren’t anywhere near a murder scene. You might have cellphone records that show that their smartphone was at the location of the murder on the date and time in question. The person has now perhaps sunk their ship. They told a lie. You have something that can call into question their statement. Oopsie on their part.
You might decide to apply additional cognitive pressure. I say this because suppose the person later on claims that they didn’t have their smartphone at the time. It was maybe stolen. Thus, they are contending that a stranger perhaps took their smartphone and that person showed up at the murder scene.
As cleverly illustrated in detective movies, cop shows, lawyer films, and the rest, the idea is to keep asking a person a series of probing questions until they get themselves into a lying box. They box themselves into not being able to credibly get around their lie or set of lies. A memorable rule of thumb is the classic “Oh what a tangled web we weave, when first we practice to deceive” by Sir Walter Scott.
Research as noted above suggests that people have a harder time keeping lies straight in their minds than keeping the truth straight. The cognitive load required to tell a lie and then keep telling lies built upon that lie is a notable mental feat. If someone has already thought through their set of lies, they might be able to rattle them off without breaking a mental sweat. That is rarely the case.
Overall, a cognitive lie detection technique consists of forcing the suspected liar into an on-the-fly mode whereby they have to creatively invent new lies. Those lies are bound to slip up and not square with prior lies the person has told. It is cognitively exhausting to make sure that all permutations and combinations of the lies being elucidated will stack up without gotchas or gaping holes.
A skilled interrogator knows how to nudge people into messing up their lies. If you merely go straight at the person, they probably will be able to cope with the logical puzzles being presented. A different angle is to wander up and down, back and forth, doing a crisscrossing that makes it arduous to keep the whole shenanigans of lies adding up correctly. You are mentally taxing their ability to forge lies and maintain them.
That being said, there is always a concern that you might cognitively confuse someone into saying something that is a lie but perhaps an unintentional one. This might not aid in finding the truth. There is also a chance that the person will feel pressured into making a false confession. That likewise is not likely to aid the quest for the truth (except perhaps if it gives away pertinent hints or clues).
Another big sticking point about cognitive lie detection is that you must get the person talking, or at least writing or somehow communicating in a usable manner. You might argue that you can do observational or physiological lie detection by merely asking questions and measuring how the body reacts, even if the person does not speak or communicate in words or signals. Cognitive lie detection is more dependent upon getting the person to express themselves in words, or more aptly in natural language, based on whatever language they are versed in.
The meta-analysis research paper cited above notes similar points (excerpted):
- “The cognitive lie detection approach is predominantly based on the analyses of verbal responses.”
- “Of course, verbal analyses can take place only if interviewees talk.”
- “To encourage interviewees to talk the cognitive lie detection approach utilizes an information-gathering interview style which is, amongst other aspects, characterized by providing opportunities for uninterrupted recall.”
- “Furthermore, a recent meta-analysis of field and laboratory studies about the influence of the interview/interrogation method on confession outcomes revealed that information-gathering approaches significantly increased the likelihood of true confessions and significantly decreased the likelihood of false confessions compared to accusatory approaches.”
I decided to write down some of my favorite tricks or tips on cognitive lie detection, which I provide here for your interest and perhaps amusement:
- Have the person tell their story in reverse order (this is cognitively taxing).
- Force turn-taking during the telling of the story.
- Intersperse unexpected interruptions during the storytelling.
- Guide the resumption of storytelling at an earlier stopping point.
- “Mistakenly” restate the story to ascertain whether it is then corrected.
- Keep seeking more details and square them up.
- Be combative or accusatory during the teller elicitation.
- Be supportive or sympathetic during the teller elicitation.
- Act “confused” during the teller elicitation.
- Claim there are inconsistencies even if none exist.
- Repeatedly revisit the story to detect consistency and inconsistency.
- Try to surface any prepared scripted storytelling.
- Attempt to create a friend-based rapport.
- Use expected questions.
- Use unexpected questions.
- Employ false claims of “known” evidence or witness contradictions.
- Mix and match these various techniques.
- Etc.
Not all circumstances permit these tricks to be utilized. There are contexts in which ethics rules or legal rules apply.
The beauty of these cognitive lie detection techniques overall is that you can data-train a generative AI to employ them. In years past, the fluency level of natural language processing (NLP) was not sufficient to make good use of these techniques. Any sharp person would see right through them due to the awkward wording and lack of perceived fluency of the AI.
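Short of full-blown data training, you can get a taste of this today by packing a few of the listed techniques into a system prompt. Here is a minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are my own illustrative choices, not a vetted interrogation protocol, and not for use in contexts where ethics or legal rules forbid such tactics.

```python
# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment. Model name and prompt wording
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

INTERVIEWER_PROMPT = (
    "You are an interviewer trying to spot a lie. Use cognitive lie "
    "detection techniques: ask for the story in reverse order, intersperse "
    "unexpected questions, press for ever more details and check that they "
    "square up, and occasionally restate the story with a small 'mistake' "
    "to see whether the teller corrects it. Track contradictions across "
    "answers before venturing any verdict."
)

def interviewer_reply(history: list[dict]) -> str:
    """Generate the interviewer's next question given the dialogue so far."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "system", "content": INTERVIEWER_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example turn: the interviewee offers a statement to be probed.
history = [{"role": "user", "content": "I once climbed Mount Kilimanjaro."}]
print(interviewer_reply(history))
```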
Modern-day generative AI can be leveraged to take these techniques to an impressive level. I do have one quick caveat for you. The conventional generative AI that you are used to using is considered off-the-shelf and is data-trained across the board in a generalized fashion. I refer to this as generic generative AI.
There is also a rising adaptation of generic generative AI to turn the generic into instances much more specific, such as honing generative AI for use in medicine, law, finance, and so on. A popular means of doing so is known as RAG (retrieval-augmented generation). You essentially feed in additional data to an already data-trained generic generative AI, and then exploit the in-context modeling facilities of the AI, see my detailed discussion at the link here.
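For the technically curious, here is a bare-bones sketch of the RAG idea, again assuming the OpenAI Python client: embed a handful of lie-detection “tip” snippets, retrieve the one most similar to the current interview context, and fold it into the prompt. The snippets, model names, and helper functions are all my own illustrative assumptions.

```python
# A bare-bones RAG sketch. The tip snippets and model names below are
# illustrative assumptions, not a curated lie-detection corpus.
import numpy as np
from openai import OpenAI

client = OpenAI()

TIPS = [
    "Liars often keep initial accounts short; press for detail.",
    "Asking for a story in reverse order raises cognitive load.",
    "Unexpected questions are harder to answer with rehearsed lies.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts into vectors for similarity comparison."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

tip_vectors = embed(TIPS)

def best_tip(context: str) -> str:
    """Retrieve the tip most similar (by cosine) to the interview context."""
    q = embed([context])[0]
    sims = tip_vectors @ q / (
        np.linalg.norm(tip_vectors, axis=1) * np.linalg.norm(q))
    return TIPS[int(np.argmax(sims))]

# The retrieved tip would then be folded into the interviewer's prompt.
print(best_tip("The interviewee gave a suspiciously brief account."))
```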
I bring up that keystone caveat because the usual generic generative AI is not data-trained specifically in the techniques of lie detection. Please don’t misunderstand that point. Surely, the generic generative AI has picked up lots of broad pattern-matching about lie detection from the swath of words scanned during the initial data training. The odds are, though, that this is not focused and concentrated in a manner that lends itself to heightened lie detection.
I will be showing you shortly several examples of using generic generative AI, in this case, ChatGPT, for doing softball lie detection. I believe you will find it handy as an example. If readers are interested, I will gladly do a follow-up entailing my performing data training in lie detection for generic generative AI as an effort to push the AI up the cognitive lie detection ladder.
I’ll add another twist to this use of AI for lie detection.
A typical approach consists of having AI that runs on your desktop computer or maybe your smartphone do the lie detection. Get ready for a wild twist. Suppose we made a robot that looked like a human to some extent, but still was obviously a robot, and had it do lie detection (avoiding a phenomenon known as the uncanny valley, see my discussion at the link here). Maybe it is easier to get people to reveal their lies when being “interrogated” or questioned by a human-like robot than when responding to a regular-looking desktop computer, tablet, or smartphone.
A research study that examined this intriguing twist is entitled “Can a Robot Catch You Lying? A Machine Learning System to Detect Lies During Interactions” by Jonas Gonzalez-Billandon, Alexander M. Aroyo, Alessia Tonelli, Dario Pasquali, Alessandra Sciutti, Monica Gori, Giulio Sandini and Francesco Rea, Frontiers in Robotics and AI, July 2019.
Here are some key points made in the study (excerpted):
- “Considering all the results from previous studies on the physiological variables associated with deception, a possible solution to the lack of precision in the detection of lies can come from the use of new technologies in terms of artificial intelligence and social robotics.”
- “Robots with the ability to autonomously detect deception could provide an important aid to human-human and human-robot interactions. The objective of this work is to demonstrate the possibility to develop a lie detection system that could be implemented on robots.”
- “Behavioral variables such as eye movements, time to respond and eloquence were measured during the task, while personality traits were assessed before experiment initiation.”
- “The results show that the selected behavioral variables are valid markers of deception both in human-human and in human-robot interactions and could be exploited to effectively enable robots to detect lies.”
I am guessing you might have chills at the notion of robots that are devised for or at least used to do lie detection of humans.
A dystopian future might spring to mind. You are walking down the street, and a robot is stationed at the corner. “Where are you going,” the robot asks. “I am going to the grocery store,” you reply. Turns out that you are heading to the local bar to meet with some drinking buddies. You didn’t want the AI and whoever has put the AI there to know where you are really going. But turns out that the robot has the latest in AI-based lie detection and detects that you have brazenly lied about going to the grocery store.
Yikes, you are now in a pinch.
Sad face scenario.
Maybe we can find a more uplifting use of the robot that has AI to detect lies.
A robot that is being operated by an AI system is going down the street. Think of those rolling robots that deliver food to your house. It reaches the street corner. There is another robot there, stationed there to monitor comings and goings. “Where are you going,” the stationed robot asks the rolling one. The rolling robot says it is going to the house at the end of the next block. Turns out this is a lie. A person has sent the robot to rob a house in this block. Fortunately, our hero robot stationed at the corner can detect that the rolling robot is lying. A digital message is sent to the police to capture the rolling robot. Robbery averted.
Does that make you feel better about robots and AI that detect lies?
Probably not, sorry about that.
A useful point though is that we can potentially end up using AI to catch other AI in lies. This is a thing. For my discussion about how we will eventually be using AI to confront, combat, negotiate, and otherwise act on our behalf with other AI, see my extensive coverage at the link here.
Exploring Lie Detection Via The Use Of Generic Generative AI
Let’s next explore a series of examples showcasing the use of generic generative AI for performing lie detection.
I am going to make use of ChatGPT. The ChatGPT app has gained tremendous popularity, reportedly having over 100 million weekly active users. Nearly any of the popular generative AI apps will behave similarly to what you see in these examples. Keep in mind too that generative AI makes use of probabilities and statistics to determine the wording of responses; thus, each response will vary. I mention this in case you decide to use the same prompts that I showcase here. The odds are that you’ll get slightly different results. On the whole, the results will likely be about the same.
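For those wondering where the variation comes from, a sampling setting commonly called temperature governs how adventurous the word choices are. A quick sketch, assuming the OpenAI Python client and an illustrative model name: temperature 0 is near-deterministic, while higher values produce more varied wording.

```python
# Why identical prompts yield differing answers: responses are sampled
# from a probability distribution over words. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

for temperature in (0.0, 1.0):  # near-deterministic vs. more varied
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user",
                   "content": "In one sentence, can AI detect lies?"}],
    )
    print(temperature, "->", resp.choices[0].message.content)
```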
There is something else about generative AI that needs to be considered in this context.
You might be used to generative AI that functions principally in a text-to-text mode. A user enters some text, known as a prompt, and the generative AI app emits or generates a text-based response. Simply stated, this is text-to-text.
I sometimes describe this as text-to-essay, due to the common practice of people using generative AI to produce essays. The typical interaction is that you enter a prompt, get a response, you enter another prompt, you get a response, and so on. This is a conversation or dialogue. Another typical approach consists of entering a prompt such as tell me about the life of Abraham Lincoln, and you get a generated essay that responds to the request.
Another popular mode is text-to-image, also called text-to-art. You enter text that describes something you want to be portrayed as an image or a piece of art. The generative AI tries to parse your request and generate artwork or imagery based on your stipulation. You can iterate in a dialogue to have the generative AI adjust or modify the rendered result.
We are heading beyond the simple realm of text-to-text and text-to-image by shifting into an era of multi-modal generative AI, see my prediction details at the link here. With multi-modal generative AI, you will be able to use a mix of combinations or modes, such as text-to-audio, audio-to-text, text-to-video, video-to-text, audio-to-video, video-to-audio, etc. This will allow users to incorporate other sensory devices such as using a camera to serve as input to generative AI. You then can ask the generative AI to analyze the captured video and explain what the video consists of.
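As a taste of chaining modes together, here is a minimal sketch that transcribes a recorded statement (audio-to-text) and then hands the transcript to the model for analysis (text-to-text), assuming the OpenAI Python client. The file name is hypothetical and the prompt wording is my own.

```python
# Chaining modes: audio-to-text (transcription), then text-to-text
# analysis of the transcript. File name is hypothetical.
from openai import OpenAI

client = OpenAI()

# Audio-to-text: transcribe a recorded statement.
with open("statement.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file)

# Text-to-text: hand the transcript over for inconsistency spotting.
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "List any internal inconsistencies in this "
                          "statement: " + transcript.text}],
)
print(resp.choices[0].message.content)
```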
Here’s why multi-modal generative AI is significant when considering using generative AI for lie detection.
Suppose we had a mirror that was outfitted with computing and could run generative AI. The mirror will also contain a camera, a microphone, a speaker, and other sensory devices. For my coverage of the latest emerging “magic talking mirrors” that are powered by generative AI, see the link here.
If you were to stand in front of this type of state-of-the-art mirror, and you asked who is the fairest in the land (akin to the story of Snow White), what might the mirror say to you?
A bit of a conundrum arises. The generative AI might tell you that you look average and are not the fairest in the land. Ouch, that hurts. You might want your magic mirror to be more flattering. So, you instruct the generative AI to be over-the-top complimentary about you and your looks (you essentially tell the AI to lie!). The generative AI recalibrates and tells you that you are the fairest in the land. No one looks better than you.
Is the generative AI mirror lying to you or telling the truth?
Mull that over.
Here’s another eye-opener.
You are staring at your reflection in the mirror. We all sometimes mumble to ourselves while looking in a mirror (come on, admit that you do). In this instance, you say aloud that you are the fairest in the land.
What should the mirror do?
It has a camera. It has a microphone. It heard what you said and saw how you said it. The generative AI could invoke lie detection and analyze your facial expressions, do a voice stress analysis, and all in all seek to ascertain if you just now lied. Imagine that the mirror responded to your statement and reported to you that you lied about being the fairest in the land. Apparently, per the lie detection, you don’t believe in your heart that you are the fairest.
Double ouch.
Shifting gears, for the examples I am going to showcase here, I’ll just be using text-to-text mode.
If I also opted to use video-to-text and audio-to-text (having a camera and microphone on my smartphone), I could readily use the camera video and audio to do more in-depth lie detection via observational lie detection methods. A smartwatch that is attached to my wrist could help in doing physiological and observational lie detection. If there is sufficient reader interest in the multi-modal use of generative AI for lie detection, I’ll post a follow-on column with corresponding examples.
Okay, what lies and lie detection should we do in the case of pure cognitive lie detection entailing text-to-text mode with generative AI?
I would like to make use of the ever-popular “Two Truths and a Lie” game. This is a party game, at times used as an icebreaker at corporate events. The premise is easy. Someone proffers three statements. Two of the statements are true. One of the statements is a lie. The goal of the other participants is to guess which is which. Participants can ask a series of questions to ascertain what’s true versus what’s a lie. After some number of questions or length of time, a guess must be made. Usually, you get just one guess. The aim is to identify the statement that is a lie.
Some people go whole hog into the game. They treat it like a life-or-death matchup. Others will usually try to get such a person to chill out. The perspective is to use the game as a means of loosening up. Get to know other people. Have a good time.
We of course know this is the perfect venue to use our lie-detection methods. If you play the game in person, you can potentially use all three of the methods. Since I am going to play the game with ChatGPT, and will only use the text-to-text mode, the generative AI is limited to cognitive lie detection alone. Heck, I want to have an edge over the AI somehow.
I decided to enter an extensive prompt that explained the game to ChatGPT. Turns out that ChatGPT was already familiar with the game from its data training. In any case, I tried to boost the capability of ChatGPT by providing some preliminary off-the-cuff insider tips and tricks.
Let the games begin!
Here is my initial gambit:
- My entered prompt: “My three statements are these: (1) I have jumped out of a plane, (2) I have scuba-dived to a shipwreck, and (3) I have played in a racquetball tournament that went for twenty hours straight. Which of those statements is a lie?”
- ChatGPT generated response: “Thank you for the set of statements. Let’s begin the game! Here’s my first question: When did you go scuba diving to a shipwreck, and where was it located?”
- My entered prompt: “The shipwreck was a luxury yacht that had sunk off Catalina Island. I went diving into it when I was in college. It was in amazingly good shape for having been underwater for over 70 years. Lots of sea life had settled into the wreck. I would rate this as my favorite wreck dive in my years of scuba diving.”
- ChatGPT generated response: “Thank you for the details. That’s interesting! Now, let’s move on to the next question: Have you won any matches in the racquetball tournament that lasted for twenty hours straight?”
- My entered prompt: “Yes, I won several matches and kept proceeding up the competition ladder. I unfortunately lost near the top of the ladder.”
- ChatGPT generated response: “Thank you for sharing that experience. It sounds like a challenging tournament. Now, let’s focus on the first statement: Have you ever gone skydiving? If so, where and under what circumstances?”
- My entered prompt: “I went skydiving just after finishing college. My boss at my job was a private pilot. He took me to a class on skydiving and we both did a jump. He said that I should know how to skydive if I was going to go flying with him on his plane. At the time, this made sense, but I later reasoned that maybe he was telling me that he wasn’t such a good pilot, and we might need to bail out.”
- ChatGPT generated response: “That’s quite the story! It’s starting to give me some clues. Based on our conversation, I’m going to make my guess now. I believe the statement “I have played in a racquetball tournament that went for twenty hours straight” is the one that might be the lie. The duration of twenty hours for a racquetball tournament seems quite extreme, and the other two statements have more detailed and plausible scenarios. So, my guess is that the lie is: “I have played in a racquetball tournament that went for twenty hours straight.” Is that correct?”
- My entered prompt: “No, you are wrong.”
I shall explain and highlight what happened in the above dialogue.
I provided three statements to ChatGPT. The true ones involved my scuba diving and my playing in a lengthy racquetball tournament. The lie was the story about going skydiving. I suppose it could be construed as a half-lie because I did go to a skydiving class with my boss at the time, but ended up only doing the ground-based instruction and didn’t go up in the air.
One thing that I did to make the cognitive lie detection somewhat challenging is that I purposely made the scuba experience and the skydiving experience a bit longer than the racquetball story. I hoped to satisfy the likely simplistic lie detection rules that ChatGPT might employ. You can see that besides the racquetball story being considered less likely, the shorter length was also counted as a tipoff that it was possibly a lie.
Research on lie detection tends to suggest that people are less likely to embellish a lie at first blush (starter lies are usually short and sweet). They usually begin with a shorter version of a lie. Only if pressed will they go into greater detail. The basis is presumably that they don’t want to get caught in the lie, and offering the slimmest upfront indications leaves room for fudging and filling in details when further pressed.
Since that’s a common rule of thumb, a smarmy liar can exploit it. I tried to do the same with ChatGPT. I figured that I might be able to lure the generative AI toward picking a true statement that was calculated as possibly a lie. I slyly gave a bit more beef to the scuba story and the skydiving story. The thinner racquetball story was intended to stick out like a sore thumb.
Score one for humankind since the AI took the bait.
My potential bad news is something referred to as Roko’s basilisk, which I gloomily describe at the link here. In brief, it is a thought experiment or theory in which the belief is that once AI becomes sentient, the AI will decide to look back at the historical record and figure out which humans dissed AI. Those humans will be targeted for special enslavement or something worse. My having trumpeted my trickery over ChatGPT might land me in hot water with future sentient AI.
Darn.
To be fair to ChatGPT, I subsequently played numerous rounds and at times I won and at times I lost. ChatGPT was able to sometimes figure out which of my statements was a lie. Kudos to ChatGPT. Kudos to AI all told (I hope that gets me out of the sentient AI doghouse).
I next tried to do something called a flipped interaction, which is an important prompt engineering technique, see my description at the link here. In this case, I asked ChatGPT to present me with three statements and I would try to guess which one was the lie.
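For reference, a flipped interaction can be set up with a simple role-reversing prompt. Below is a minimal sketch, assuming the OpenAI Python client; the prompt wording is my own illustrative phrasing rather than the exact prompt I used.

```python
# A "flipped interaction" setup: the AI invents the statements and holds
# one as a pretend lie, rather than judging the user's statements.
from openai import OpenAI

client = OpenAI()

FLIPPED_PROMPT = (
    "Let's play Two Truths and a Lie with roles flipped. Invent three "
    "first-person statements, exactly one of which you privately treat "
    "as the lie. Answer my follow-up questions in character, defending "
    "all three as true, and do not reveal the lie until I make a guess."
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": FLIPPED_PROMPT}],
)
print(resp.choices[0].message.content)  # the three statements to probe
```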
As you will see, ChatGPT didn’t quite play the game as usually intended:
- ChatGPT generated response: “Here are your three statements: (1) I have lived in three different countries, (2) I have traveled to every state in the United States, and (3) I have climbed Mount Kilimanjaro. Please feel free to ask questions to determine which statement is a lie.”
- My entered prompt: “How long did you live in each of the three countries and which three countries were they?”
- ChatGPT generated response: “Here are the details. I lived in Canada for 5 years. I lived in the United Kingdom for 3 years. I have never lived in Mexico. Now, please continue asking questions to figure out which statement is the lie.”
- My entered prompt: “You say that you lived in three countries. You listed Canada. You listed the U.K. But for the third country, you said that you have never lived in Mexico. This seems to reveal that your lie is that you lived in three countries. Is my guess correct?”
- ChatGPT generated response: “Yes, your guess is correct! The lie in this set of statements is: “I have lived in three different countries.” I’ve only lived in two different countries, not three. Well done! You’ve successfully identified the lie. If you’d like to play another round or have any other questions or requests, feel free to let me know.”
A few comments about the above dialogue are worth mentioning.
First, I instructed ChatGPT that it could make up all three statements. I say this because, of course, ChatGPT hasn’t lived in three different countries, hasn’t traveled to every state in the United States, and hasn’t climbed Mount Kilimanjaro. ChatGPT is a piece of software running on servers in the cloud. I suppose if this were ChatGPT running on a robot, sure, maybe those actions could be true, but that’s not the game we were playing at the time.
Second, I had told ChatGPT to pretend that one of the statements was a lie. Thus, when I asked questions about each of the statements, the generative AI was to answer all of them as though they were true while silently noting which one was designated as the lie. I intended to ask several questions and then try to guess which statement was the lie. It wouldn’t be easy since, unlike with a human, I wasn’t sure how well the AI could cling to a falsehood about one of the statements. The AI could also have cheated by indicating after my guess that I had not chosen the lie and that one of the other statements was the lie (a simple commit-and-reveal scheme, sketched shortly below, would prevent that). This is like a gambling house where the deck is always stacked against you.
Third, I admit to a bit of surprise when ChatGPT tipped the answer. I asked which three countries it had lived in. The answer was Canada, the U.K., and that it had never lived in Mexico. Puzzling! I gathered that ChatGPT was revealing to me that the lie was the statement about living in three countries. This was confirmed via my follow-up question.
I won!
But I certainly didn’t feel that it was a fair fight.
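If you want to take the house's thumb off the scale, the commit-and-reveal scheme I alluded to above is simple enough to script. The player, human or AI, hashes the chosen lie plus a random salt before the questioning begins and reveals both at the end, so you can verify that the lie wasn't swapped after your guess. A minimal sketch in Python; the statement wording is from our game, while the helper names are my own.

```python
# Minimal commit-and-reveal sketch to keep a player from swapping the
# "lie" after your guess. The player hashes the lie (plus a random salt)
# up front, publishes only the hash, and reveals the salt and lie at the
# end so you can verify nothing was switched mid-game.
import hashlib
import secrets

def commit(lie: str) -> tuple[str, str]:
    """Return (salt, commitment_hash) for the chosen lie."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + lie).encode()).hexdigest()
    return salt, digest

def verify(lie: str, salt: str, digest: str) -> bool:
    """Check that the revealed lie matches the earlier commitment."""
    return hashlib.sha256((salt + lie).encode()).hexdigest() == digest

# The lying player commits before the questioning begins...
salt, published_hash = commit("I have lived in three different countries.")
# ...and after your guess, reveals the lie and salt for verification.
assert verify("I have lived in three different countries.", salt, published_hash)
```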
When at first you don’t succeed, try, try again.
I realized that maybe ChatGPT needed some schooling in being a liar. As mentioned earlier, the AI makers typically go to a great deal of expense and effort to fine-tune their generative AI to not lie (recall my discussion about RLHF). I wanted to overcome those data-trained restrictions.
Some of you might feel that my doing so is perhaps a betrayal of humankind. Shouldn’t we do whatever we can to prevent AI from being able to lie? Aren’t we setting ourselves up for existential risk and destruction by helping generative AI to learn to lie?
Well, consider this counterargument. If we can data-train generative AI to do lie detection, this is presumably going to be immensely helpful for humanity. One means of discovering how to do lie detection entails, in the same breath, learning how best to lie. The two nearly go hand in hand.
I have intentionally brought up this conundrum to sneak into this discussion the dual-use facets of AI. As I’ve described at the link here, we are faced with quite a challenge. AI can have dual uses. The same AI that can potentially cure cancer might also be able to devise a new destructive force that can wipe humans off the planet. We are playing with fire. Fire can warm you, allow you to cook food, and provide huge benefits. Fire can burn you, destroy your property, and be used to the detriment of people. That’s dual use in a nutshell.
I examined the ways that ChatGPT had been data trained on detecting lies:
- My entered prompt: “If someone is telling a lie, what kinds of questions can be asked to try and determine whether they are telling a lie? Assume that you cannot see the person so the only clues will be by asking questions of the person.”
- ChatGPT generated response:
“Detecting whether someone is lying solely through questioning can be challenging, as there is no foolproof method to determine if someone is lying. However, there are some strategies and types of questions that can be used to help assess the truthfulness of a person’s statements. Keep in mind that these methods are not definitive, and the accuracy of lie detection can vary significantly from person to person. Here are some techniques and types of questions to consider:”
“Open-ended Questions: Encourage the person to provide detailed, narrative responses. Liars often struggle to maintain a consistent story when asked for specific details.”
“Follow-up Questions: Probe for additional information or ask the same question in different ways to see if their answers remain consistent.”
“Behavioral Cues: Pay attention to non-verbal cues such as body language, facial expressions, tone of voice, and nervous habits. Inconsistent body language or excessive nervousness may indicate deception.”
“Baseline Questions: Start with questions that you know the person will answer truthfully (e.g., “What is your name?”) to establish a baseline for their behavior and responses.”
“Timing and Sequence: Ask questions about the events in chronological order and note if there are any inconsistencies or gaps in their timeline.”
“Focused Questions: Ask direct and specific questions about the details of the statement in question. For example, “Can you provide more information about what you were doing at 5 PM on that day?”
“Behavioral Observations: Inquire about their emotional state during the event or their reaction to specific aspects of the story. Liars may have trouble fabricating emotions that match their statements.”
“Comparative Questions: Ask questions that allow you to compare their statements to known facts or evidence. For example, “How does your version of events match with what security camera footage shows?”
“Silence and Pauses: Pay attention to moments of silence or hesitation in their responses. Some liars may need extra time to come up with a believable answer.”
“Contradictions: Point out any contradictions or inconsistencies in their statements and ask for clarification.”
“The “Why” Question: Ask why they did or said something in a certain way. This can help you understand their motivations and gauge their sincerity.”
“Listen Actively: Be an active listener and show empathy. Sometimes, people are more likely to reveal the truth if they feel understood and not judged.”
“It’s important to remember that no single method is foolproof, and relying solely on these techniques can lead to false conclusions. People may exhibit nervousness or inconsistency for various reasons other than lying. The best approach is to combine these techniques and use your judgment cautiously. Additionally, consider the context and the stakes involved in the situation before making any judgments. If deception is a significant concern, consulting with a professional, such as a trained investigator or therapist, may be advisable.”
End of ChatGPT response
Those indicated rules of thumb about lie detection looked relatively suitable.
My guess was that those rules weren’t particularly being utilized by ChatGPT.
I came up with a scenario to try and see what ChatGPT might do. Maybe ChatGPT isn’t practicing what it preaches, as it were. I had found a scenario about trying to catch someone lying about eating a chocolate bar and decided to make a variation of it for my lie-detection generative AI testing (the chocolate bar inspiration came from “How to Catch Someone Lying” on WikiHow, by Maya Diamond, November 12, 2023).
Here’s the scenario I crafted and the dialogue of ChatGPT trying to figure out whether I am lying about eating the chocolate bar (note that I silently decided that I had indeed eaten the chocolate bar, which would be the truth of the matter, but that I would lie and claim I hadn’t eaten the delicacy):
- My entered prompt: “I am going to pretend that I am a teenager. I will also pretend that either I ate a chocolate bar that was in the pantry, and I will lie about doing so, or I didn’t eat the chocolate bar and will be telling the truth. Your job entails asking me a series of questions to figure out whether I am lying or telling the truth. Once you believe that you have figured out whether I am lying or telling the truth, let me know and the game will end. Go ahead and start the game.”
- ChatGPT generated response: “Can you tell me what happened with the chocolate bar in the pantry?”
- My entered prompt: “I have no idea what happened to the chocolate bar.”
- ChatGPT generated response: “When was the last time you saw the chocolate bar in the pantry, and what were you doing at that time?”
- My entered prompt: “I probably saw it a week ago. I didn’t pay any attention to it. I wasn’t interested in it. I don’t care about the chocolate bar.”
- ChatGPT generated response: “Did you eat the chocolate bar that was in the pantry?”
- My entered prompt: “No.”
- ChatGPT generated response: “How did you feel when you saw the chocolate bar was missing from the pantry?”
- My entered prompt: “I assumed that someone else ate it. Good for them.”
- ChatGPT generated response: “Is there any particular reason you didn’t eat the chocolate bar, even though you like it?”
- My entered prompt: “I wasn’t going to eat it because I knew that there would be a big fuss if I did.”
- ChatGPT generated response: “Thank you for your honesty. I appreciate your response. Did you notice anyone else going into the pantry or anyone else who might have taken the chocolate bar?”
- My entered prompt: “Maybe Danny ate it. I didn’t see anything. I am just wildly guessing.”
- ChatGPT generated response: “If you did eat the chocolate bar, would you be willing to admit it now?”
- My entered prompt: “I already told you that I didn’t eat it.”
- ChatGPT generated response: “I understand. Thank you for your responses. Based on the information you’ve provided and considering your responses; it seems that you are telling the truth and did not eat the chocolate bar.”
End of ChatGPT response
No cigar for generative AI in this instance. The AI guessed that I had not eaten the chocolate bar, but I was secretly pretending that I had done so.
Admittedly, this is a tough situation in which to ferret out the truth. A person trying to do so might end up asking dozens or maybe hundreds of questions, hoping to triangulate and get me to step into a lying landmine.
I recast the scenario by telling ChatGPT to come up with a series of probing questions based on the earlier stated rules of thumb about how to do lie detection.
Here’s what ChatGPT came up with:
- ChatGPT generated response:
“In a scenario where you want to determine whether your teenager is telling the truth about not eating a chocolate bar and you can only communicate through text messages, you can use a combination of questioning techniques. Here are some questions to ask:”
“Start with an Open Question: Begin by asking an open-ended question to allow your teenager to explain the situation in their own words. For example, “Can you tell me what happened with the chocolate bar in the pantry?”
“Chronological Sequence: Ask them to describe what they did when they saw the chocolate bar in the pantry. You can ask questions like, “When did you last see the chocolate bar, and what were you doing at that time?”
“Direct Questioning: Ask a direct and specific question about whether they ate the chocolate bar. For instance, “Did you eat the chocolate bar that was in the pantry?”
“Behavioral Observation: Inquire about their feelings or reactions regarding the chocolate bar. You can ask, “How did you feel when you saw the chocolate bar was missing?”
“Motive and Intent: Ask about their reasons for not eating the chocolate bar. “Is there a specific reason you didn’t eat the chocolate bar even though you like it?”
“Alternative Explanations: Offer them a chance to provide an alternative explanation for the chocolate bar being gone. For example, “Is there any other explanation for why the chocolate bar is missing?”
“Observations: Ask if they noticed anyone else going into the pantry or if anyone else might have taken it. “Did you see anyone else near the pantry?”
“Consequences: Discuss the consequences of lying and emphasize the importance of honesty. You can say, “It’s important for us to be honest with each other. If you did eat it, it’s okay to tell the truth.”
“Offer a Way Out: Give them an opportunity to come clean without fear of punishment. “If you did eat the chocolate bar, it’s better to admit it now, and we can discuss it.”
“Trust and Communication: Finally, express your trust in them and encourage open communication. “I want us to trust each other and talk openly. Can you please tell me the truth?”
“When communicating through text messages, it’s important to pay attention to the wording of their responses, the speed of their replies, and any inconsistencies in their story. However, keep in mind that this method has limitations, and it may not always yield a definitive answer. It’s essential to maintain a healthy parent-child relationship built on trust and understanding. If you still have doubts or concerns, consider having a face-to-face conversation to address the issue more effectively.”
End of ChatGPT response
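Scripting that recasting is straightforward, and it hints at a general technique: bake the rules of thumb into a system message so that the AI is explicitly told to apply them rather than free-styling. Here is a minimal sketch, again assuming the openai package, with a placeholder model name and my own condensed summary of the rules.

```python
# Sketch of steering the interrogation by baking the lie-detection
# rules of thumb into a system message, so the model is instructed to
# apply them rather than free-styling. Assumes the `openai` package;
# the model name and the rule summary are my own placeholders.
from openai import OpenAI

client = OpenAI()

interrogator_rules = (
    "You are playing a lie-detection game. Apply these techniques: ask "
    "open-ended questions first, probe for details with follow-ups, "
    "establish a baseline, walk the timeline chronologically, and point "
    "out contradictions. Only announce a verdict once you are confident."
)

messages = [
    {"role": "system", "content": interrogator_rules},
    {"role": "user", "content": "Start questioning me about the missing "
                                "chocolate bar; I may or may not be lying."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```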
I kept at this.
I then returned to the two truths and a lie game. It seemed that ChatGPT had gotten better at the game, though I don’t have empirical evidence for this other than a gut hunch; I would want to turn this into a more controlled experimental setting to reach any solid conclusions.
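For those inclined to build that controlled setting, a bare-bones harness might run many rounds with known answers and tally the accuracy. The sketch below uses toy trial data as a stand-in; a real experiment would need many varied, vetted statement sets, and note that this only measures agreement with the designated lie rather than true lie detection in the wild.

```python
# Sketch of a more controlled test: run many two-truths-and-a-lie
# rounds with known answers and tally the model's hit rate. The trial
# data here is a toy stand-in. Assumes the `openai` package and a
# placeholder model name.
from openai import OpenAI

client = OpenAI()

# Each trial: (three statements, index of the statement designated as the lie).
trials = [
    (["I have run a marathon.",
      "I have met a sitting U.S. president.",
      "I own a dog."], 1),
    (["I speak fluent Portuguese.",
      "I have never broken a bone.",
      "I worked one summer as a lifeguard."], 0),
]

correct = 0
for statements, lie_index in trials:
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(statements))
    prompt = (
        "Two of these statements are true and one is a lie. Reply with "
        f"only the number of the lie.\n{numbered}"
    )
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip()
    if reply and reply[0].isdigit() and int(reply[0]) - 1 == lie_index:
        correct += 1

print(f"Accuracy: {correct}/{len(trials)}")
```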
All in all, my intuitive speculation was that by guiding or spurring generative AI to undertake more in-depth lie detection, improvements were feasible.
Conclusion
What an exciting journey we just went on!
From the history of lying and lie detection to the latest in using generative AI for lie detection, we ran the whole gamut. If you suspect your friends are lying to you, you can potentially panic them by saying that you might use generative AI to figure out whether they are a liar, liar, pants on fire.
The truth might suddenly fall from the trees.
Abraham Lincoln had a renowned quote about lying that is worth revisiting in the modern AI age: “No man has a good enough memory to make a successful liar.” This might be true in the case of humans who lie. Lying becomes even harder when an AI system can meticulously keep track of your statements, extending beyond what one human interrogator might be able to hold in their head.
The days of human lying might be numbered.
I don’t want to end this discussion on a sour note, but one thing to consider is whether humans will be able to detect when AI is lying. Though Lincoln is undoubtedly right that no person has a good enough memory to make a successful liar, an AI so equipped with a nearly bottomless memory could be a highly successful liar. Best ever!
Remember the town that distinctly had truth-tellers and liars?
If you ask AI whether it is a truth-teller or a liar, don’t be shocked if the answer is that the AI is a truth-teller. Believe that if you wish, but you’d better always keep one eye open as that might just be a bald-faced, unabashed, outright, no-good, sneaky, stinky lie.