In bot we trust: A new methodology of chatbot performance measures

This article proposes novel methods of tracking human-chatbot interactions and measuring chatbot performance that take into consideration ethical concerns, particularly trust.

The main goal of the article is to track the evolution of chatbots from simple rule-based systems to robust natural language processing built on deep learning, and to show how this change affects businesses and organizations.

Based on the research and major developments in chatbots, new dimensions of trust are suggested:

Transparency – does the chatbot send honest signals in communication, avoid speaking like a “politician”, and acknowledge its status – admitting when it does not know something or is not sure of the exact answer?

Integrity – the topic of trust and chatbots can be supplemented by integrity, a factor associated with credibility that concerns the user’s expectation that the chatbot will act consistently, in line with past experience. If the user perceives the chatbot as predictable, this may foster a feeling of trust in it.

Explainability – describes the degree to which the motivations and intents of the chatbot are in line with those of the consumer; does the chatbot display the quality of being well-meaning and kind, or does it try to control and manipulate the user?
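One way the proposed performance measures could be operationalized is to rate each human-chatbot interaction on the three dimensions above and aggregate over a session. The sketch below is purely illustrative: the `TrustScore` structure, the 0–1 rating scale, the equal weighting, and the averaging are my assumptions, not a method prescribed by the article.

```python
from dataclasses import dataclass


@dataclass
class TrustScore:
    """Hypothetical per-interaction ratings (0.0-1.0) on the suggested trust dimensions."""
    transparency: float    # honest signals; admits uncertainty
    integrity: float       # consistent and predictable, in line with past experience
    explainability: float  # motives aligned with the user; no manipulation

    def overall(self) -> float:
        # Equal weighting is an assumption; the article does not prescribe weights.
        return (self.transparency + self.integrity + self.explainability) / 3


def average_trust(scores: list[TrustScore]) -> float:
    """Aggregate trust across a set of logged human-chatbot interactions."""
    return sum(s.overall() for s in scores) / len(scores)


session = [TrustScore(0.9, 0.8, 0.7), TrustScore(0.6, 0.9, 0.8)]
print(round(average_trust(session), 3))  # prints 0.783
```

In practice the per-dimension ratings might come from user surveys or from automated signals (e.g. how often the chatbot explicitly flags uncertainty), but any such mapping would need to be validated against the ethical concerns the article raises.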