It is said that Scientia potentia est – Knowledge is Power. For large enterprises with a rich history and decades of deep expertise in their industry, knowledge truly is a competitive moat. Left unmanaged, however, this competitive advantage can soon turn into an overwhelming daily struggle for thousands of employees. Global enterprises suffer from what is known as Knowledge Debt – mismanaged, siloed information spread across legacy systems that is hard to discover, access, update, and collaborate on.
There are several aspects of knowledge management that can be optimized, from creating content to monitoring usage and retiring old documents. In this article, we focus on the heart of the problem: finding the right content at the right time, in a jiffy. We will show how IT leaders can leverage not just a single FAQ management solution but an end-to-end Knowledge Management Stack to tackle the problem head-on, and deploy a scalable solution quickly using our Knowledge Management conversational app.
The problem of knowledge debt with traditional chatbots
We have been fortunate to work with multiple enterprises that have been in business for 20+ years and have 10,000+ employees. The larger the enterprise, the larger the knowledge debt. Around two years ago, we solved it the traditional way – by building journeys (or intents) for hundreds of questions, adding utterances (training data) for each, and manually adding answers for every question. However, we soon realised that this would not work. Several challenges persisted.
Time to market – Creating 1000+ intents, adding responses, and training manually was just too slow and not scalable. Maintaining and improving this data was a whole other challenge in itself.
Non-exhaustive – Even if we covered a thousand questions manually (and for every document added or removed, someone would have to add or remove data from the bot), there was always a lingering possibility of missing a query a user might ask. The solution depended entirely on the responses the bot was trained on, and managing the chatbot became chaotic compared to the ROI the solution promised.
Lack of domain knowledge – The key difference between a machine-learning-based and a rule-based technology is that ML can understand relationships between different entities in a domain and use them to provide better recommendations. The traditional method did not give us the capability to model these unique relationships.
Internationalization – All our enterprise customers have a presence across countries, and policies therefore usually vary. One of our oilfield services customers, for example, has a presence in 80+ countries. This meant the bot would have to provide 80 different responses to the same question based on the country code of the employee. A traditional chatbot cannot handle this without a lot of duplication.
Languages – Many documents were in local languages (e.g. the employees in France would refer to the policies in French, not English). Translating these into English and training the bot in 80 languages – both were grueling tasks.
Non-business-friendly – Even assuming we made the solution work using a million if-else conditions, it would still not be viable for a business owner or decision-maker on the client’s side to change or update the knowledge in the bot themselves. This would defeat the entire purpose, as it is the business that usually maintains domain-specific knowledge within the enterprise.
In short, the traditional solution is not designed to scale and address the knowledge management needs of a large enterprise. And necessity is the mother of invention, right? Ladies and gentlemen, we are proud to announce the birth of the KODA stack.
Knowledge debt? No problem. The KODA Stack is here.
The power of a conversational interface lies in its ability to process open-ended input, which poses an equally great challenge. If not fed copious data, bot accuracy and adoption can drop drastically. So it is important to have multiple levels of fallback to ensure that users always find a relevant response to their query.
High accuracy and resolution rate across languages – This is the single most important KPI of KODA. To ensure it, we have configured a three-engine pipeline with a continuous feedback loop connected to a user feedback and analytics engine.
Engine 1: Data Modelling with Knowledge Graph
All domains usually have a common set of intents, entities, and relationships defined between them. Knowledge graphs are a great tool to mimic such relationships. For example, here is a subset of the knowledge graph that we have created for HR –
This is a brilliant way to model policy-related data. The graph can be flattened and represented as a two-dimensional table. Business users can download the pre-built conversational app from Yellow Messenger’s marketplace, and all they need to do is upload the file with the relevant content; the bot starts working automatically. We built this for one of the world’s largest oilfield services companies, with a presence in 100+ countries. See an example below of the KODA knowledge graph in action.
In the above example, you can see that since the bot has the relationships between intents and entities modeled, it handles queries even when they are unclear or ambiguous (e.g. ‘my eligibility’, ‘maternity’). This solves the following problems –
1. Time to market – Business users can just add the responses in the Knowledge Graph, upload the bot and it starts working.
2. Internationalisation – The same schema can be filled for different countries to configure the bot for country-wise policies or responses.
3. Domain knowledge modelling – Knowledge graph as a framework allows us to model key relationships of a domain.
4. Business user friendly – The pre-built app comes with the graph traversal logic configured; all a business user needs to do is update the schema, requiring absolutely no code to be written.
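To make the flattened-graph idea concrete, here is a minimal Python sketch of how such a two-dimensional table could be traversed. The table rows, slot names, and policy text below are illustrative assumptions, not KODA’s actual schema.

```python
# Hypothetical sketch: an HR knowledge graph flattened into rows of
# (intent, entity, country, answer). All values here are illustrative.
POLICY_TABLE = [
    {"intent": "eligibility", "entity": "maternity_leave", "country": "FR",
     "answer": "Employees in France are eligible for 16 weeks of maternity leave."},
    {"intent": "eligibility", "entity": "maternity_leave", "country": "IN",
     "answer": "Employees in India are eligible for 26 weeks of maternity leave."},
    {"intent": "duration", "entity": "paternity_leave", "country": "FR",
     "answer": "Paternity leave in France lasts 25 days."},
]

def answer(intent=None, entity=None, country=None):
    """Match on whichever slots the NLP layer managed to fill.

    Ambiguous queries ('my eligibility', 'maternity') fill only some
    slots; if several rows match, the candidates are returned so the
    bot can ask a clarifying question instead of failing.
    """
    matches = [row for row in POLICY_TABLE
               if (intent is None or row["intent"] == intent)
               and (entity is None or row["entity"] == entity)
               and (country is None or row["country"] == country)]
    if len(matches) == 1:
        return matches[0]["answer"]
    return matches  # multiple candidates: bot disambiguates
```

Because the country is just another column, internationalization falls out of the same lookup, which is why the schema scales to 80+ country variants without duplicating bot logic.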
Engine 2: Q&A Hub + Recommendation Engine
A lot of our customers already have a vast set of questions and answers that they want to add to their knowledge repository. In large enterprises, creating these knowledge bits is often part of support agents’ KPIs. But as we discussed earlier, it is a pain to add these questions and answers manually one by one as intents, and then add and maintain training data for each of them.
We are therefore introducing a Q&A Hub: customers upload a CSV file with a set of questions and answers, hit train, and BOOM! The bot is ready, along with a pre-built recommendation flow. The bot is trained using a sentence encoder and similarity search, and the ‘did you mean’ responses users select are used by the bot for self-learning to improve it further. But that is a deep topic that demands a blog of its own, so we will leave it for another day. For now, sit back and see the FAQ builder in action!
This further helps us with the following –
1. Consolidation of multiple data sources – We can now connect multiple QnA data sources along with the Knowledge Graph to improve our resolution rates.
2. Intelligent recommendations and self-learning – This addresses the problem of infinite input and ensures that the user is always directed to the most relevant answer even if their query is ambiguous. The ‘did you mean’ responses selected by the users are recorded and further used to improve the bot.
3. Faster time to market – With a Q&A hub, you can really go live within minutes, as we showed in the demo. Enough said!
4. Business user-friendly – The entire process, including training and recommendations, is automated. All you need to do is hit ‘Train’ for the bot to work. Absolutely zero code!
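As a rough illustration of how a Q&A Hub can match queries and generate ‘did you mean’ suggestions, here is a self-contained Python sketch. A production system would use a trained sentence encoder; the bag-of-words embedding, the Q&A pairs, and the function names below are simplifying assumptions.

```python
import math
import re
from collections import Counter

# Hypothetical Q&A pairs, as if uploaded via CSV; content is illustrative.
QNA = {
    "How do I apply for maternity leave?": "Submit form ML-1 on the HR portal.",
    "How many casual leaves do I get?": "You get 12 casual leaves per year.",
    "How do I reset my VPN password?": "Use the self-service page on the IT portal.",
}

def embed(text):
    # Toy stand-in for a sentence encoder: a bag-of-words vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def did_you_mean(query, top_k=2):
    """Rank stored questions by similarity to the query. The top match is
    answered directly; the rest become 'did you mean' suggestions."""
    q = embed(query)
    return sorted(QNA, key=lambda s: cosine(q, embed(s)), reverse=True)[:top_k]
```

The suggestion a user ultimately clicks can then be logged as a fresh training example for that question, which is the self-learning loop mentioned above.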
Engine 3: Document Cognition
The above solution works well for small to medium-sized use cases. While it helps automate many support requests, it still does not address the complete knowledge discovery problem of the enterprise, since the chatbot is limited to the questions and answers or knowledge graph data added to the platform.
If we want a truly extensible knowledge discovery conversational app, it is important to integrate with document management systems and search across them in real time. Document cognition does just that: it integrates with multiple knowledge bases such as Google Drive, SharePoint, and ServiceNow, respecting the access levels of the users. It also uses past ratings and the machine learning model’s confidence scores to score and rank articles. This is being used by the world’s second-largest pharma company, whom we are proud to have as a customer.
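One plausible way to combine past ratings with model confidence, as described above, is a simple weighted score applied after access filtering. The weights, field names, and role model in this sketch are illustrative assumptions, not the production formula.

```python
# Illustrative scoring for a document cognition layer. Each candidate
# article carries a retrieval confidence (0-1), an average user rating
# (0-5 stars), and the roles allowed to see it.
def visible_to(user_role, candidates):
    # Access control first: only rank articles the user may see.
    return [a for a in candidates if user_role in a["allowed_roles"]]

def rank_articles(candidates, w_confidence=0.7, w_rating=0.3):
    # Blend model confidence with normalised past ratings.
    def score(a):
        return w_confidence * a["confidence"] + w_rating * a["avg_rating"] / 5.0
    return sorted(candidates, key=score, reverse=True)
```

With a blend like this, a highly rated article can outrank one the model was slightly more confident about, letting user feedback gradually correct the model.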
How KODA makes it all work together seamlessly
With a powerful NLP engine and Document Cognition across resources, the right response is just a few seconds away. Always.
1. The user’s query is processed and converted into machine learning (ML) features.
2. The ML features are passed through sentence encoder models, and intents and entities are detected.
3. Intents and entities are passed through the Q&A Hub and/or the Knowledge Graph (based on the setup).
4. Responses from the two engines are compared by a recommendation engine to provide the most relevant answers.
5. If the query is still not resolved, it is passed to the document cognition engine, which searches across the connected systems and ranks the responses.
6. Feedback and analytics from each of these steps are then used to improve the bot further.
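The fallback chain in the steps above can be sketched as a simple pipeline. The engine names, confidence threshold, stub engines, and logging format here are illustrative assumptions, not KODA’s internals.

```python
# Each engine is a callable taking the query and returning either None
# or an (answer, confidence) pair; engines are tried in priority order.
def koda_pipeline(query, engines, feedback_log, threshold=0.5):
    for name, engine in engines:
        result = engine(query)
        if result and result[1] >= threshold:
            feedback_log.append((query, name, result[1]))  # analytics loop
            return result[0]
    feedback_log.append((query, "unresolved", 0.0))
    return "Sorry, I couldn't find an answer. Routing you to an agent."

# Stub engines standing in for the three real ones.
ENGINES = [
    ("knowledge_graph", lambda q: ("16 weeks.", 0.9) if "maternity" in q else None),
    ("qna_hub", lambda q: ("Use the IT portal.", 0.8) if "vpn" in q else None),
    ("document_cognition", lambda q: ("See the employee handbook.", 0.6)),
]
```

Document cognition sits last because it is the broadest and most expensive search, while the feedback log feeds the continuous-improvement loop described in step 6.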
Advantages Of KODA
Knowledge management is a fundamental process, yet it is still underrated. We have already seen a range of benefits, from quicker responses to quality resolutions and FAQ handling. Here are the enterprise benefits and why you need KODA today.
1. Save tremendous time – One of our deployments for a multinational healthcare company slashed agent query resolution time by about 50%. Furthermore, 15% fewer ITSM tickets were raised when it was used by end users.
2. Higher productivity – When KODA was adopted by an international oilfield services organization that employs about 100,000 people across 120 countries, it directly slashed human resources tickets by a whopping 22%. Their employees have experienced faster resolutions internally allowing them to focus on critical tasks.
3. Scalable – Chatbots that work on questions and answers are limited to the data they are fed. Virtual assistants, as explained in this article, can train themselves and accommodate unique cases; they mature and deliver more intelligent responses over time.
4. Easy deployment and training – Our KODA stack is easy to adopt and absorb in any organization no matter the structure and configuration of knowledge. One can train employees to monitor and work on this without hassle since it requires no coding skills.
5. Instantaneous impact – Experience a change in productivity and savings almost immediately.
Our knowledge discovery solution is enterprise-ready, can be integrated with multiple systems, and can go live in minutes. Several enterprises are already clearing their knowledge debt at an insignificant cost. Is it time for your enterprise to close the debt? Just write to us at firstname.lastname@example.org