To Truly Fake Intelligence, Chatbots Need To Be Able To Change Your Mind

To achieve social intelligence, AI needs to be able to persuade, a researcher argues.

Could you ever imagine yourself in a heated argument with a chatbot? Like, really passionate, deeply reasoned position-taking—argument, counterargument, countercounterargument, countercountercounterargument. Could you imagine a chatbot convincing a jury of a defendant's guilt? Or even just trying? And if you could imagine it, what would that mean?

Questions like these are at the core of a paper published recently in AI Matters by Samira Shaikh, a computer science-slash-psychology researcher at the University of North Carolina-Charlotte. In it, she considers the notion of social competence in artificial intelligence agents—that is, the ability of a machine to accomplish goals that are social in nature. A key example of a social goal is the ability to persuade others, to change minds. Thus, a bot capable of persuasion would represent a significant advance in artificial social intelligence.

To that end, Shaikh set about automating "the very process of persuasive communication, by designing a system which can purposefully communicate, without any restrictions on domain or genre or task, and which has the clear intention of persuading the recipients of its messaging." So: a lawyer-bot, basically. Or a politician-bot. (I'm not sure which is more frightening.)

Shaikh's task comes down to two core questions. First, can persuasive strategies be automated in a virtual chat agent? Second, can persuasion by (human) individuals during conversation be detected by a chatbot and counteracted?

"Our goal is to define specific human persuasive strategies that can be programmed into an agent who can then persuade participants to its own view."

"Our goal is not to create an artificial agent capable of passing the Turing test or the Loebner prize," Shaikh writes. "Rather, our goal is to define specific human persuasive strategies that can be programmed into an agent who can then persuade participants to its own view."

To reach this goal, Shaikh's work consisted of three phases. The first was a "belief elicitation study" to identify real-world positions that a bot might argue for or against. The second was marrying these positions and beliefs to a database of natural-language statements—humanspeak, in other words. Finally, the resulting suite of positions and statements was incorporated into programmed behaviors and strategies for persuading others and for counteracting others' attempts at persuasion.

"The behaviors programmed in the agents are triggered, in part, by a variety of linguistic cues emerging from the conversation, such as dialogue acts, topic, polarity and communication acts," Shaikh's paper continues. "The annotated context of conversation is used to inform the agent's models by updating the underlying beliefs of participants in real time. It is necessary for the agent to create and maintain a representation of the mental states of the participants with respect to the topic so as to understand their viewpoints."

Basically, the persuasion-bot maintains a model of its interlocutor's real-world positions and beliefs (seeded by the survey from the experiment's first phase) and selects from a bank of annotated statements according to predetermined strategies meant to shift those beliefs as far toward its own position as possible. The specific strategies were drawn from preexisting theories of influence in social psychology.
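Shaikh's paper describes this loop in prose rather than code, but its shape is easy to sketch. What follows is a minimal, purely illustrative Python version: the names BeliefModel, Statement, and pick_statement are hypothetical, the numbers are invented, and the one-line scoring rule is a crude stand-in for the social-psychology strategies the paper actually draws on.

```python
from dataclasses import dataclass

# A toy sketch (not Shaikh's actual system) of the loop described
# above: model the participant's beliefs, update that model from
# conversational cues, then pick the statement expected to pull
# the participant toward the agent's own position.

@dataclass
class BeliefModel:
    # Modeled stance on the topic, from -1.0 (opposed) to +1.0 (in favor).
    stance: float = 0.0

    def update(self, polarity: float, weight: float = 0.2) -> None:
        # Nudge the modeled stance toward the polarity of the
        # participant's latest utterance (a crude stand-in for the
        # paper's real-time belief updating from linguistic cues).
        self.stance += weight * (polarity - self.stance)

@dataclass
class Statement:
    text: str
    polarity: float  # which side of the topic the statement argues for
    strategy: str    # e.g. "social proof", "authority" (illustrative labels)

def pick_statement(agent_goal: float, listener: BeliefModel,
                   bank: list[Statement]) -> Statement:
    # Prefer statements that argue toward the agent's goal but stay
    # close enough to the listener's current stance to be heard.
    def score(s: Statement) -> float:
        toward_goal = -abs(s.polarity - agent_goal)
        near_listener = -abs(s.polarity - listener.stance)
        return toward_goal + 0.5 * near_listener
    return max(bank, key=score)

# One turn of conversation: the participant voices mild opposition,
# the agent updates its model of them and chooses a reply.
listener = BeliefModel()
listener.update(polarity=-0.4)
bank = [
    Statement("Most people in your situation have come around...", 0.3, "social proof"),
    Statement("Experts who study this have repeatedly found...", 0.8, "authority"),
]
print(pick_statement(agent_goal=1.0, listener=listener, bank=bank).text)
```

Even in this toy version, the selection step cannot run without an estimate of the listener's current stance, which turns out to be the study's most pertinent lesson.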

The results were statistically significant: in the end, the bot was able to change minds. It could make a point.

There's a lesson buried in this about changing minds that's pretty pertinent to the current political climate. For the bot to have any effect, it had to represent the other party's current position accurately; it had to understand where the human arguer was coming from. Otherwise, it was only making noise.