Chatbots can answer user questions automatically, 24 hours a day, 365 days a year, with a consistent level of quality. Companies are deploying them at an accelerating rate because chatbots enable users to resolve problems themselves, reduce the workload on help desks and other operators, and improve operational efficiency. Conventional chatbots, however, come with problems of their own: deploying them is labor-intensive, search hit rates can be low, and maintenance is difficult, so using them effectively has been hard. While conventional chatbots require companies to first register conversational paths called scenarios, Toshiba Digital Solutions’ Commendry eliminates the need for these scenarios. Commendry is a scenario-less AI chatbot service that makes it possible to deploy a chatbot simply by registering a FAQ -- a collection of frequently asked questions and their answers. Users then engage in dialogue with the chatbot to arrive at the answers they need. This holds down the overall cost of operations such as help desk support, improves the level of service provided to customers, and streamlines internal operations. Let’s look at the features of Commendry, which draws on the unique AI technologies Toshiba has been developing for over fifty years, and at the initiatives Toshiba is implementing to leverage generative AI to improve Commendry’s functionality.
The advantages and disadvantages of single-question-single-answer/AI chatbots and scenario-based chatbots
In recent years, a growing number of companies have begun using chatbots to provide better customer service and improve operational efficiency. This trend has further accelerated as a result of the COVID-19 pandemic, but companies still face deployment and operation hurdles when trying to effectively use chatbots.
Chatbots are primarily categorized as one of two types: single-question-single-answer/AI chatbots and scenario-based (rule-based) chatbots. Each type has its own advantages and disadvantages.
Single-question-single-answer/AI chatbots provide answers to questions entered by users based on the contents of a collection of Frequently Asked Questions (FAQ). These systems require a FAQ to be registered in advance, but if a company registers a wide variety of questions and answers, they can field a wide range of questions. However, the terms users enter often differ from those used in the FAQ. When this happens, the users’ questions do not match the questions in the FAQ, and the system fails to produce an answer. To improve the search hit ratio (accuracy), companies must register thesauruses of words that have the same meanings as those used in the FAQ, and preparing these thesauruses is a very labor-intensive process. Systems that use AI have higher hit rates, but training data must often be prepared in advance, which frequently requires specialized knowledge of the tools involved.
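As a rough illustration of this dependence on wording, consider the following sketch of a simple keyword-overlap FAQ lookup. It is not the implementation of any particular product, and the FAQ entries and thesaurus are hypothetical examples; it simply shows why a question only matches when it shares words with a registered entry, and why a hand-maintained thesaurus is needed to catch synonyms.

    # Minimal sketch of a keyword-overlap FAQ lookup (illustrative only).
    # FAQ_ENTRIES and THESAURUS are hypothetical examples.

    FAQ_ENTRIES = [
        {"question": "How do I ground a microwave oven?",
         "answer": "Attach the grounding cable to the grounding terminal ..."},
        {"question": "What is the interest rate for a mortgage?",
         "answer": "See the current rate table ..."},
    ]

    # Without this hand-maintained thesaurus, "home loan" would never match "mortgage".
    THESAURUS = {"home loan": "mortgage", "earth wire": "grounding cable"}

    def normalize(text: str) -> str:
        text = text.lower()
        for variant, canonical in THESAURUS.items():
            text = text.replace(variant, canonical)
        return text

    def lookup(user_question: str):
        words = set(normalize(user_question).split())
        best, best_overlap = None, 0
        for entry in FAQ_ENTRIES:
            overlap = len(words & set(normalize(entry["question"]).split()))
            if overlap > best_overlap:
                best, best_overlap = entry, overlap
        return best  # None means "no hit": the user's wording did not match the FAQ

    print(lookup("What is the interest rate for a home loan?"))

Every new synonym has to be added to the thesaurus by hand, which is exactly the maintenance burden described above.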
With scenario-based chatbots, users are presented with choices and guided to select keywords related to their questions. This process is repeated until the user is directed to the answer to their question, and it requires the company to prepare, in advance, both a FAQ and the scenarios (routes) for reaching each answer. For example, if a user wants to know how to ground their microwave oven, the chatbot could present a series of choices that gradually narrow down the range of possible questions: first asking whether the question concerns a kitchen appliance or another home appliance, then whether it is about a refrigerator or a microwave oven, and finally what the user wants to do with the microwave oven. Through this process, the user eventually arrives at the answer to their question (Fig. 1). To make this possible, the company must envision and design every route in advance, and it must also update those routes whenever the FAQ is updated. Because users narrow down what they are looking for by making choices, scenario-based chatbots make it easy to reach the correct answer. On the other hand, the answers they can provide are limited to those reachable through the routes prepared in advance.
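The scenario-based approach can be pictured as a fixed decision tree that the company authors by hand. The sketch below is a hypothetical example based on the microwave-grounding case, not an actual product scenario; it shows why every route, and every change to the FAQ, has to be designed and maintained in advance.

    # Minimal sketch of a scenario-based (rule-based) chatbot as a fixed decision tree.
    # The tree is a hypothetical example; real scenarios are authored by the company.

    SCENARIO = {
        "prompt": "Is your question about a kitchen appliance or another home appliance?",
        "choices": {
            "kitchen appliance": {
                "prompt": "Is it about a refrigerator or a microwave oven?",
                "choices": {
                    "microwave oven": {
                        "prompt": "What do you want to do with your microwave oven?",
                        "choices": {
                            "ground it": {"answer": "Connect the grounding cable to ..."},
                        },
                    },
                    "refrigerator": {"answer": "Please see the refrigerator manual ..."},
                },
            },
            "other home appliance": {"answer": "Please choose the product category ..."},
        },
    }

    def run(node, picks):
        """Walk the tree with the user's selections until an answer node is reached."""
        for pick in picks:
            if "answer" in node:
                break
            node = node["choices"][pick]
        return node.get("answer", node.get("prompt"))

    # Only routes that were designed in advance can be answered.
    print(run(SCENARIO, ["kitchen appliance", "microwave oven", "ground it"]))

Adding or changing a Q&A pair means revisiting every branch of this tree, which is why maintenance becomes a burden as the FAQ grows.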
Addressing the problems with these chatbots requires improvements to the efficiency of chatbot system deployment and maintenance, as well as to the hit rate -- the frequency with which users arrive at the answers they need.
A scenario-less AI chatbot service that transforms inquiry-handling operations
Since October 2020, Toshiba Digital Solutions has been providing “Commendry,” the industry’s first*1 scenario-less AI chatbot service, in SaaS form. It offers the advantages of both single-question-single-answer/AI chatbots and scenario-based chatbots while addressing their shortcomings. Commendry builds on the natural language processing technologies Toshiba has refined through its long years of research and development, combining them into a unique dialogue technology of its own.
*1: According to a study by Toshiba Digital Solutions conducted in October 2020
With the Commendry AI chatbot, all a company needs to do is register a FAQ. Users can then engage in dialogue with our proprietary AI operator to rapidly find the answers they need. Toshiba’s proprietary AI technologies learn the important keywords in the FAQ, understand user questions even when their phrasing or terminology differs from that used in the FAQ, and carry on dialogues using suggested keywords inferred dynamically from vague questions (keywords automatically proposed by Toshiba’s proprietary AI to narrow down searches). Even when many answers would match a user’s question, narrowing down the candidates with suggested keywords, much as a scenario-based chatbot does with its fixed choices, produces answers with higher hit rates while eliminating the heavy burden of preparing and maintaining scenarios.
Let’s look at the process in a bit more depth, using the earlier example of finding out how to ground a microwave oven. If the user enters the word “grounding,” Commendry could infer and suggest keywords that lead toward the answer by asking questions such as “Would you like to know how to connect a grounding cable?”, “Would you like to know how to ground a device?”, or “Would you like to know the purpose of a grounding cable?” Based on the user’s responses to these suggested keywords, Commendry could then continue the dialogue with new suggested keywords, asking, for example, “Would you like to know about a washing machine?” or “Would you like to know about a microwave oven?” The conversation flows smoothly until the user arrives at their answer (Fig. 2).
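One way to picture this scenario-less narrowing, in highly simplified form, is the sketch below: from whatever Q&A candidates match the user's input, it derives distinguishing keywords on the fly and offers them as the next question instead of following a pre-authored route. The data and the keyword-scoring rule are hypothetical simplifications, not Commendry's actual AI.

    from collections import Counter

    # Hypothetical Q&A set; each entry is tagged with the keywords it mentions.
    FAQ = [
        {"answer": "How to ground a microwave oven ...",  "keywords": {"grounding", "microwave oven"}},
        {"answer": "How to ground a washing machine ...", "keywords": {"grounding", "washing machine"}},
        {"answer": "Purpose of a grounding cable ...",    "keywords": {"grounding", "purpose"}},
    ]

    def suggest_keyword(candidates, already_chosen):
        """Pick a keyword that splits the remaining candidates (simplified rule)."""
        counts = Counter(k for c in candidates for k in c["keywords"] if k not in already_chosen)
        useful = [k for k, n in counts.items() if 0 < n < len(candidates)]
        return max(useful, key=lambda k: counts[k], default=None)

    def narrow(user_terms):
        chosen = set(user_terms)
        candidates = [c for c in FAQ if chosen & c["keywords"]]
        while len(candidates) > 1:
            keyword = suggest_keyword(candidates, chosen)
            if keyword is None:
                break
            print(f"Would you like to know about: {keyword}?")  # suggested keyword
            chosen.add(keyword)                                 # assume the user accepts it
            candidates = [c for c in candidates if keyword in c["keywords"]]
        return candidates

    print(narrow({"grounding"}))

Because the suggestions are derived from the matching Q&A pairs themselves, nothing has to be re-authored when the FAQ changes, in contrast to the scenario tree shown earlier.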
Furthermore, with Commendry there is no need to register a thesaurus, as there is with single-question-single-answer/AI chatbots. Commendry has performed deep learning on a dataset of roughly 650 million sentences (roughly 700,000 words) collected from the internet. This proprietary Toshiba AI model understands vague and paraphrased expressions and uses that knowledge when searching, producing higher hit rates. Companies in industries that use specialized terminology can also prepare their own dictionaries and train the AI on them. For example, if a user asks “What is the interest rate for a home loan?”, even if that expression does not appear in the FAQ, Commendry would understand the intent of the question and confirm it by asking, “Do you want to know the interest rate for a mortgage?” Commendry would then infer suggested keywords for follow-up questions such as “For a fixed-rate mortgage or a variable-rate mortgage?”, efficiently leading the user to the answer they seek.
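The paraphrase handling described above can be approximated with sentence embeddings: questions that use different words but mean the same thing end up close together in vector space, so no thesaurus is needed. The sketch below uses the open-source sentence-transformers library purely as a stand-in and hypothetical FAQ data; Commendry's own model, trained on the corpus described above, is not what is shown here.

    # Sketch of paraphrase-tolerant FAQ search with sentence embeddings.
    # The open-source model below is a stand-in, not Commendry's proprietary model,
    # and the FAQ data is a hypothetical example.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    faq_questions = [
        "What is the interest rate for a mortgage?",
        "How do I ground a microwave oven?",
    ]
    faq_embeddings = model.encode(faq_questions, convert_to_tensor=True)

    def search(user_question: str):
        query = model.encode(user_question, convert_to_tensor=True)
        scores = util.cos_sim(query, faq_embeddings)[0]
        best = int(scores.argmax())
        return faq_questions[best], float(scores[best])

    # "home loan" does not appear in the FAQ, but its vector is close to "mortgage".
    print(search("What is the interest rate for a home loan?"))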
Commendry also uses other techniques to improve search hit rates. Users can select categories in which they wish to perform searches, but sometimes Q&A pairs cannot be found in the category they selected. When that happens, Commendry automatically expands the search range to encompass all categories, so it can find matching Q&A pairs in other categories. This function is helpful when users select the wrong category and when system administrators perform FAQ maintenance. Commendry also has functions for rapidly resolving issues when users cannot find the answers they seek, such as by easily escalating the issue to the system administrator.
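The category fallback can be summarized by the small sketch below (hypothetical code, not Commendry's implementation): search the selected category first, and widen the search to all categories only when nothing is found there.

    # Sketch of category-fallback search: try the selected category first, then
    # automatically widen to all categories if nothing matches.
    # The FAQ entries and the match() helper are hypothetical placeholders.

    def match(entry, user_question):
        return any(word in entry["question"].lower() for word in user_question.lower().split())

    def search(faq, user_question, selected_category):
        hits = [e for e in faq if e["category"] == selected_category and match(e, user_question)]
        if not hits:  # nothing in the chosen category: expand to every category
            hits = [e for e in faq if match(e, user_question)]
        return hits

    faq = [
        {"category": "loans",    "question": "What is the interest rate for a mortgage?"},
        {"category": "accounts", "question": "How do I reset my online banking password?"},
    ]
    # The user picked the wrong category, but the matching Q&A pair is still found.
    print(search(faq, "mortgage interest rate", "accounts"))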
Commendry is already in use by numerous companies, improving customer service, making operations more efficient, cutting overall operation costs, and reducing the burden involved in operation and maintenance. For example, Mitsui Sumitomo Insurance Company, Limited is using Commendry as its inquiry contact point on the website of its pre- and post-compensation service, a new business segment for insurance companies. This new service, which it was able to roll out quickly and at little cost, has been praised as an example of the company’s successful digital transformation (DX) promotion efforts. By reducing the time and effort required for maintenance, Commendry is contributing significantly to the creation of a better service environment by making it possible to flexibly improve FAQs based on the constantly-changing questions it receives from customers. It is expected to improve customer satisfaction by providing rapid, accurate answers to the many complex questions customers have about insurance, without making customers wait. Furthermore, it is also expected to prove effective as a tool for reinforcing and enriching the support offered to customers by insurance agents, who are important partners of Mitsui Sumitomo Insurance.
* Click here for more information about the system being used by Mitsui Sumitomo Insurance Company, Limited.
New services that use generative AI
So far, we have looked at the features and benefits of Commendry, along with a typical example of its use. When deploying chatbots, many companies struggle with the workload involved in creating and updating FAQs. That is why we are developing new services that are focused on FAQs -- services that leverage the power of generative AI.
ChatGPT is a generative AI service developed by OpenAI. It is an extremely useful tool: users enter a question on-screen, and ChatGPT provides a summarized answer written in natural language. However, ChatGPT also raises various issues, such as the accuracy of its answers with respect to new or highly specific information and the handling of confidential information and copyright. To address these issues, we implement thorough security measures when utilizing generative AI. For example, we access OpenAI’s technologies through Microsoft’s Azure OpenAI Service to ensure a high degree of security, and we set up an independent environment for each customer.
* Click here to learn about Toshiba’s initiatives related to large language models (LLMs), the foundations of generative AI.
We are developing two new services. Both of these services leverage existing documents. One generates FAQs from documents and the other searches documents directly to find answers, instead of using a FAQ. Currently, both are in the proof of concept (PoC) stage, and we are aiming to launch the services in the spring of 2024.
The first service, the FAQ generation service, uses generative AI to generate a list of likely questions and their answers from designated documents -- a company’s own proprietary assets -- and outputs a draft FAQ in Excel form. For each asset, the document structure is analyzed, and the Q&A pairs produced by the generative AI are compared and merged to create a more accurate and precise FAQ. The system administrator then reviews the output to finalize a highly accurate FAQ. The completed FAQ can be registered in Commendry to put it smoothly into operation, significantly reducing both the burden of deploying new chatbots and the burden of adding or updating Q&A pairs in systems already in operation (Fig. 3).
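Conceptually, the generation step can be pictured as the sketch below: a document is split into sections, each section is passed to a generative AI with a prompt asking for candidate Q&A pairs, and the results are written out as a draft spreadsheet for the administrator to review. The Azure deployment name, the prompt wording, the input file, and the output columns are all hypothetical; this is an outline of the idea, not the service's actual pipeline.

    # Conceptual sketch of generating a draft FAQ from a document with Azure OpenAI.
    # The endpoint, deployment name, prompt, and file names are hypothetical placeholders.
    import os
    import pandas as pd
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    def generate_qa(section_text: str) -> str:
        response = client.chat.completions.create(
            model="my-gpt-deployment",  # hypothetical Azure deployment name
            messages=[
                {"role": "system",
                 "content": "From the given document section, list likely user questions "
                            "and their answers, one per line, in the form 'Q: <question> | A: <answer>'."},
                {"role": "user", "content": section_text},
            ],
        )
        return response.choices[0].message.content

    rows = []
    for section in open("manual.txt", encoding="utf-8").read().split("\n\n"):
        for line in generate_qa(section).splitlines():
            if " | A: " in line:
                question, answer = line.split(" | A: ", 1)
                rows.append({"Question": question.removeprefix("Q: "), "Answer": answer})

    # A draft FAQ for the administrator to review before registering it in the chatbot.
    pd.DataFrame(rows).to_excel("draft_faq.xlsx", index=False)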
The second service, which searches documents, assists with customers’ internal operations. Existing documents such as regulations and manuals are first registered in Commendry. Users can then ask questions, and a generative AI searches through the documents to create a clear summary of the search results in easy-to-understand language. Excerpts from the documents on which the answer is based, along with links to the documents, are shown on the same screen as the answer to the user’s question. The search keywords are also highlighted in the results. Accessing the links opens the documents to the corresponding pages, allowing users to easily verify the accuracy of the answers and get more detailed information (Fig. 4).
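This second service follows the same general pattern as retrieval-augmented generation (RAG): retrieve the passages most relevant to the question, have the generative AI summarize them into an answer, and return the source excerpts alongside it so they can be verified. The sketch below is a generic RAG outline with hypothetical model names, deployment name, and document store; it is not the actual service.

    # Generic retrieval-augmented generation (RAG) sketch: retrieve relevant passages,
    # summarize them with a generative AI, and return the excerpts and their sources.
    # The embedding model, Azure deployment, and document store are hypothetical.
    import os
    from openai import AzureOpenAI
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    # Each passage keeps a link back to its source document and page.
    passages = [
        {"text": "Expense claims must be filed within 30 days ...", "source": "rules.pdf#page=12"},
        {"text": "Overtime must be approved in advance by ...",     "source": "rules.pdf#page=7"},
    ]
    passage_vectors = embedder.encode([p["text"] for p in passages], convert_to_tensor=True)

    def answer(question: str, top_k: int = 2):
        scores = util.cos_sim(embedder.encode(question, convert_to_tensor=True), passage_vectors)[0]
        best = scores.argsort(descending=True)[:top_k]
        excerpts = [passages[int(i)] for i in best]
        context = "\n".join(e["text"] for e in excerpts)
        reply = client.chat.completions.create(
            model="my-gpt-deployment",  # hypothetical Azure deployment name
            messages=[
                {"role": "system",
                 "content": "Answer the question using only the excerpts provided."},
                {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        )
        # Return the summary together with the excerpts and links so the user can verify it.
        return reply.choices[0].message.content, excerpts

    print(answer("How long do I have to file an expense claim?"))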
We are seeking to fully automate FAQ maintenance operations, which are currently performed partly by hand, in order to further improve the efficiency of inquiry handling, make response quality more uniform, share the knowledge of experts, and promote operational DX. We are developing solutions aimed at providing the chatbot with functions for adding and updating Q&A pairs. We will continue to contribute to our customers’ operational DX by evolving our services and improving the accuracy and functionality of our AI chatbots.
- The corporate names, organization names, job titles and other names and titles appearing in this article are those as of February 2024.
- All other company names or product names mentioned in this article may be trademarks or registered trademarks of their respective companies.
- Commendry is not currently available for purchase outside Japan.
Related articles
Running Feature: The frontlines of generative AI! Learn about the key points of this technology, its business applications, and its future prospects (Article list)