Natural language processing (NLP) is central to communication between humans and machines, and the NLP research field has long sought to produce human-quality language. Identifying informative criteria for measuring the quality of NLP-produced language will support the development of ever-better NLP tools. The authors hypothesize that neural activity in the mentalizing network can be used to distinguish NLP-produced language from human-produced language, even in cases where human judges cannot subjectively distinguish the source. Using the social chatbots Google Meena (in English) and Microsoft XiaoIce (in Chinese) to generate NLP-produced language, behavioral tests reveal that the variance of personality perceived from chatbot chats is larger than that for human chats, suggesting that chatbots' language-usage patterns are not stable. Neuroimaging analyses, using an identity-rating task with functional magnetic resonance imaging, reveal distinct patterns of brain activity in the mentalizing network, including the dorsomedial prefrontal cortex (DMPFC) and right temporoparietal junction (rTPJ), in response to chatbot versus human chats that cannot be distinguished subjectively. This study illustrates a promising empirical basis for measuring the quality of NLP-produced language: adding a judge's implicit perception as an additional criterion.
Original language: English (US)
State: Published - Feb 7 2023