Transforming Customer Service with AI and Knowledge Management

Customer service and customer support operations are no strangers to AI technologies. AI has long been embedded in the interactions between companies and their customers. Coupled with knowledge management (KM) techniques for creating, capturing, and curating knowledge, AI has increased customer satisfaction, shortened response times to customer queries, and heightened employee job satisfaction. It’s a win-win when customers come away with positive feelings about their communication with agents, and agents feel joyful about their jobs. The intersection of AI and KM takes customer service and support to a whole new level.

Advances in the use of AI for customer service and support have accelerated significantly in the past two years, mainly due to the introduction of generative AI (GenAI). But AI was well-entrenched even before that. Machine learning enabled a vastly better understanding of relevant terminology, and natural language processing (NLP) encouraged the use of conversational language rather than a more stilted keyword approach to searching for information. AI facilitated understanding that “cross country” might refer to skiing, running, or travel and that it could be shortened to x-country. (AI should also be able to distinguish between the use of “x” in this context as meaning “cross” rather than more risqué meanings of “x.”)

Customers have become accustomed to hearing an obviously nonhuman voice on the other end of a customer support phone conversation, at least at the beginning of the call. They expect a synthetic voice to give them options about why they are calling and to suggest answers to simple queries such as, “What’s my balance?” “When are you open today?” or, “Is my flight delayed?” This automating of routine tasks, so easily done without human intervention but reliant upon internal knowledge, has been an enormous time-saver for customers and employees. It represents a perfect intersection of AI and KM.
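
To make the mechanics concrete, here is a minimal sketch of that kind of routine-query automation, assuming a hypothetical phone-bot front end. The intents, answers, and function names are illustrative, not any particular vendor’s product.

```python
# A minimal sketch, assuming a hypothetical phone-bot front end: routine intents
# are matched against answers drawn from internal knowledge, and anything
# unmatched is handed off to a human agent. All names and answers are illustrative.

ROUTINE_ANSWERS = {
    "balance": "Your current balance is {balance}.",
    "hours": "We are open today from 9 a.m. to 5 p.m.",
    "flight": "Flight {flight} is currently on time.",
}

def answer_routine_query(utterance: str, context: dict) -> str | None:
    """Return a templated answer for a routine query, or None to hand off."""
    text = utterance.lower()
    if "balance" in text:
        return ROUTINE_ANSWERS["balance"].format(**context)
    if "open" in text or "hours" in text:
        return ROUTINE_ANSWERS["hours"]
    if "flight" in text:
        return ROUTINE_ANSWERS["flight"].format(**context)
    return None  # not routine: escalate to a human agent

print(answer_routine_query("What's my balance?", {"balance": "$42.17"}))
```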

As AI technology develops, voice assistants and customer service chatbots will sound less and less synthetic and will approach a natural-sounding human voice. The shift toward computer-generated voices that cannot be distinguished from human ones is already well underway.

INVOKING KNOWLEDGEBASES IN THE AI ECOSPHERE

The introduction of GenAI has strengthened the relationship between AI and KM. The language models used for GenAI depend upon the knowledgebases that already exist within enterprises. These knowledgebases, curated by knowledge managers with input from subject matter experts (SMEs), are integral to successful interactions with customers. Employees already rely upon the information they find in knowledgebases to answer questions, particularly those that are not routine and thus cannot be easily answered by a predefined, AI-generated response.

Here are some examples: “What’s the part number I need to fix this very old machine and what do I do if the part is no longer being made?” “How do I install this program on my computer?” “Can I use points from two different people to book a five-city, three-continent trip on three different airlines?” The first two lend themselves to a GenAI approach and its ability to not just search in a traditional manner using keywords but also to have a conversation with internal documents, such as manuals, licenses, contracts, work orders, and service agreements. When customers ask very complicated questions, such as the travel one, escalation to a human is the best approach. Advice about where to find a very old part is also likely to need a human response, perhaps guided by GenAI.
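
For the first two examples, a retrieval-augmented pattern is the typical mechanism: pull the most relevant passages from internal documents and let the language model answer against them. The sketch below assumes that pattern; the sample documents, the toy keyword retrieval, and the generate_answer() stand-in are illustrative, not a specific product’s API.

```python
# A minimal sketch of the "conversation with internal documents" idea, assuming
# a retrieval-augmented setup: relevant passages from manuals, contracts, and
# service agreements are retrieved and passed, with the customer's question,
# to a language model. generate_answer() is a hypothetical stand-in for
# whatever GenAI service the enterprise actually uses.

DOCUMENTS = {
    "manual-1998": "Part 7743-B fits the Model 300 press. Discontinued in 2004; "
                   "the 7743-C is the approved substitute.",
    "install-guide": "To install the program, run setup.exe and accept the "
                     "license agreement when prompted.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy keyword retrieval over the internal knowledgebase."""
    words = set(question.lower().split())
    scored = sorted(DOCUMENTS.values(),
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def generate_answer(question: str, passages: list[str]) -> str:
    # Placeholder: a real system would call a GenAI model with the passages
    # as grounding context to reduce hallucination.
    return f"Q: {question}\nGrounded on: {passages[0]}"

question = "What part number fixes this very old machine?"
print(generate_answer(question, retrieve(question)))
```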

However, customers are sometimes wary about interacting with a computer instead of an actual person. Recent press coverage of hallucinations produced by GenAI chatbots has caused concern. It’s easy to forget that humans also make mistakes, often by misunderstanding what was said to them.

Here’s a scenario: You arrive at a restaurant with two colleagues for a 1 p.m. luncheon reservation. The greeter tells you that you’re early. You think back to your earlier conversation with the restaurant when you said, “I’m calling about my 1 p.m. reservation for two people. Could you please change that to three?” So, you now have a midafternoon reservation for two people. What was obviously needed here was a clarifying question from the restaurant: “Do you mean three people or 3 p.m.?”

Clarifications can be automated. Take, for example, an insurance company. Someone calling the claims line from an area code indicating they live in a town that just experienced a major event, such as a tornado or a wildfire, is extremely likely to be interested in information about filing a claim for damage to their home. The call can be automatically routed to the department that handles those types of claims. Hopefully, the call will be answered by a gentle, soothing voice saying, “I see you’re calling from an area impacted by [insert disaster name here]. How can we help you?”
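
A routing rule like that can be expressed in a few lines. The sketch below is illustrative only; the area-code table, event names, and greeting text are assumptions rather than any insurer’s actual system.

```python
# A minimal sketch of the automated routing described above, assuming a
# hypothetical mapping from caller area codes to regions currently affected
# by a declared event. The table and greeting text are illustrative.

AFFECTED_AREA_CODES = {"480": "a wildfire", "316": "a tornado"}

def route_claims_call(caller_number: str) -> tuple[str, str]:
    """Return (queue, greeting) for an incoming claims call."""
    area_code = caller_number.removeprefix("+").removeprefix("1")[:3]
    event = AFFECTED_AREA_CODES.get(area_code)
    if event:
        return ("catastrophe-claims",
                f"I see you're calling from an area impacted by {event}. "
                "How can we help you?")
    return ("general-claims",
            "Thank you for calling the claims line. How can we help you?")

print(route_claims_call("+14805551234"))
```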

PROMPTING VERSUS SEARCHING

Iterative prompting of GenAI chatbots is a best practice, as many are learning. Getting a good answer in response to a prompt tends not to be a “one and done” exercise. With AI, search is no longer a matter of choosing the best keywords. Prompting is more nuanced. Clarification is one aspect. Another is evaluating the first answer given and asking for changes—more serious, more light-hearted, more factual, longer, shorter, specific examples, relevant to a specific industry or task, more technical, less technical, without hallucinations, or whatever else comes to mind.
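
The loop itself is simple to express in code. In the sketch below, chat_model() is a hypothetical stand-in for whatever GenAI service is in use; the point is that each follow-up request is appended to the conversation rather than starting the search over.

```python
# A minimal sketch of iterative prompting. chat_model() is a placeholder for a
# real GenAI call; here it just echoes the latest instruction so the sketch runs.

def chat_model(messages: list[dict]) -> str:
    # Placeholder for a real GenAI call.
    return f"[draft reflecting: {messages[-1]['content']}]"

def refine(question: str, follow_ups: list[str]) -> str:
    """Ask once, then request targeted changes instead of re-searching."""
    messages = [{"role": "user", "content": question}]
    answer = chat_model(messages)
    for change in follow_ups:  # e.g., "shorter", "more technical", "add an example"
        messages += [{"role": "assistant", "content": answer},
                     {"role": "user", "content": change}]
        answer = chat_model(messages)
    return answer

print(refine("Explain our refund policy to a customer.",
             ["Make it shorter.", "Add a concrete example."]))
```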

As AI becomes more sophisticated and more integrated into customer support systems, these iterations become baked into the interface. The agent doesn’t have to think up all possible iterations; they are on the screen, ready for the agent to suggest to the customer. When the AI suggestions don’t match what the customer is asking, the next step is to escalate to a human being for a more in-depth understanding of the customer’s needs.
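
The escalation rule can be equally simple. The sketch below assumes hypothetical suggestion objects carrying a relevance score from the AI layer; the threshold is illustrative and would be tuned in practice.

```python
# A minimal sketch of the escalation rule described above. Suggestions are
# assumed to arrive from the AI layer as {"text": ..., "score": ...} objects.

def next_step(suggestions: list[dict], threshold: float = 0.6) -> str:
    """Pick the best on-screen suggestion, or escalate to a human agent."""
    best = max(suggestions, key=lambda s: s["score"], default=None)
    if best is None or best["score"] < threshold:
        return "escalate-to-human"
    return best["text"]

print(next_step([{"text": "Offer the firmware update guide", "score": 0.42}]))
```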

We’re moving from training agents to training the underlying language models in their understanding of the information contained within an enterprise’s knowledgebases. What sparks language model training is learning from customer interactions so that knowledge is updated in close to real time and made available to customer service agents. Knowledge managers can oversee the training process to ensure that information added to knowledgebases supports the objective of being a single source of truth. The success of enterprise AI in the customer service and support area requires advance planning. It’s a bad idea to spring on agents a tool that’s not well-thought-out. AI is supposed to help customers, not hinder them. A tool that compromises the customer experience should never be adopted.
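
One way to picture that oversight is a review gate between what the system learns from customer interactions and what gets published to the knowledgebase. The sketch below is an assumption about structure, not a description of any particular KM product.

```python
# A minimal sketch of the feedback loop described above: resolved customer
# interactions are proposed as knowledgebase updates, but a knowledge manager
# approves them before they become part of the single source of truth.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    articles: dict[str, str] = field(default_factory=dict)
    pending: list[tuple[str, str]] = field(default_factory=list)

    def propose(self, title: str, content: str) -> None:
        """Capture a candidate update learned from a customer interaction."""
        self.pending.append((title, content))

    def review(self, approve: bool) -> None:
        """Knowledge manager decision gate before publishing."""
        title, content = self.pending.pop(0)
        if approve:
            self.articles[title] = content

kb = KnowledgeBase()
kb.propose("Model 300 substitute part", "Use part 7743-C in place of 7743-B.")
kb.review(approve=True)
print(kb.articles)
```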

For example, emails to customers should be reviewed and edited before sending. If AI is used to summarize internal documents, particularly lengthy ones, those summaries should also be reviewed and edited to ensure the AI got it right.
