Artificial Intelligence (AI) is becoming the backbone of society and life as we know it. However, a gap remains between the way humans think and the way artificial intelligence proceeds. Most language models learn to take the information given to them and then produce a response based on their stored knowledge. Humans, by contrast, also draw on common sense, mutual beliefs, and tacit knowledge gathered from real-life experiences or incidental activities during a conversation.
These elements shape the interpretation of what we say, but current AI models do not take them into account, so their answers fall short of being precise and human. Talking with virtual assistants can sometimes be frustrating if they don't understand what you intend to ask, or provide a poor or incorrect response to your request. Imagine never having to worry about that communication barrier again. Bridging the gap between how humans communicate and generate responses and how artificial intelligence models do so would be a huge development for the future of technology and the role of AI in society. Siri, Amazon Alexa, your car, and any other machine that converses with you would have a stronger communication connection and could provide more accurate responses.
“Imagine you want to buy flowers for your wife, and how useful it would be if the machine understands that a wife implies a relationship and that roses represent love, so the assistant recommends buying her roses. That’s what we tried to solve here: to get a better answer with human common sense,” explained Jay Pujara, ISI research manager and co-author of this study.
This work, led by Pei Zhou, Ph.D. candidate at ISI, “Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation,” was accepted at ACL 2022 (the 60th Annual Meeting of the Association for Computational Linguistics). He used inner monologue models, rather than traditional end-to-end models that ignore tacit knowledge, to test whether applying tacit knowledge as a factor improves the accuracy of AI-generated responses. The inspiration for this study comes from Pei Zhou’s earlier work on improving human-computer communication. “The important part that Pei chose as one of the main angles of his research was to study the role of common sense in human-machine communication. Current models lack commonsense knowledge; they are not able to make inferences like humans do,” said Xiang Ren, research team leader at ISI, assistant professor, and co-author of this paper.
Ren also said, “We wanted to see if it would benefit models as well as humans if they had the ability to mimic the same thought process as humans.” Turns out… they do.
The study showed that when AI models are given the tools to think like humans, they generate more common sense of their own. Ren explained, “by explicitly telling the model what common knowledge is useful to the ongoing conversation, the model produces more engaging and natural responses.” Some may assume that models already have common sense of their own; however, these results show that equipping models with commonsense knowledge yields more human-like and sensible responses.
Pei Zhou also discussed the results of the study, addressing factors beyond response quality, such as its positive impact on the abilities of inner monologue models. Once the models were trained on knowledge drawn from a commonsense database, they began creating their own thought process: given only implicit information in the input, they were able to generate new commonsense knowledge.
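The two-step idea described above can be illustrated with a toy sketch: first make the tacit commonsense explicit, then condition the reply on it. The lookup table and response templates below are hypothetical stand-ins for illustration only, not the paper's actual models or data.

```python
# Toy sketch of a "think before you speak" pipeline: generate commonsense
# knowledge first, then produce a response conditioned on that knowledge.
# COMMONSENSE is a hypothetical stand-in for a commonsense knowledge base.

COMMONSENSE = {
    "flowers": "roses represent love",
    "wife": "a wife implies a romantic relationship",
}

def generate_knowledge(utterance: str) -> list[str]:
    """Step 1: surface the tacit commonsense facts relevant to the utterance."""
    return [fact for cue, fact in COMMONSENSE.items() if cue in utterance.lower()]

def generate_response(utterance: str) -> str:
    """Step 2: condition the reply on the explicitly generated knowledge."""
    knowledge = generate_knowledge(utterance)
    if knowledge:
        return f"Since {' and '.join(knowledge)}, you might buy her roses."
    return "Could you tell me more?"

print(generate_response("I want to buy flowers for my wife"))
```

A real system would replace both steps with learned models; the point of the sketch is only the ordering, in which knowledge generation happens before, and feeds into, response generation.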
The more accurately AI is trained to reflect real human characteristics and thought processes, the more we can use it as a tool to advance technology and to make interactions with conversational programs feel more like talking with a real person.
Authors: Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tür. This work was undertaken with support from DARPA’s Machine Common Sense program, Amazon, and Google.
Posted on June 7, 2022
Last updated on June 7, 2022