Published: June 18, 2020 - 10:01

“Hello! We’re glad you’re here...

... How can I help you?”

These words probably seem oddly familiar – after all, they are popping up on websites with increasing frequency. They come from a chatbot, prompting the user to interact. Most of the time though, the brief conversation ends in frustration – the chatbot just doesn’t seem to understand the question. More like a hasslebot than a chatbot.

Text- and voice-based chatbots are already being used in numerous environments, such as social media, customer service, and website search. To work effectively, however, a chatbot needs a good understanding of the user's concern and must respond to it accordingly.

But what prerequisites are necessary to ensure that the interaction doesn’t turn out to be a source of hassle and frustration?

 

Artificial intelligence (AI) and understanding text

The ability to process unstructured information presupposes the ability to understand the content. While this may seem simple on the surface, establishing a meaningful and expedient dialogue is one of the most difficult challenges for AI, largely as a consequence of the unparalleled complexity of human language.

Using rule-based approaches, it’s relatively easy to create simple chatbots that use decision trees or algorithms to respond to questions. In practice, however, they repeatedly come up short. As a result, applications with rigid predefined decision paths are not particularly suitable for complex topics and dialogues. The reason for this is that natural language is unstructured. Dialects, irony, ambiguity – all of these elements are difficult, if not impossible, to understand solely on the basis of rules.
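
To make the limitation concrete, here is a minimal sketch of a rule-based bot (all rules, phrasings, and answers are invented for illustration). It handles the exact wording its rules anticipate, but a simple paraphrase already defeats it:

```python
# Invented keyword rules for a hypothetical shop chatbot.
RULES = {
    "opening hours": "We are open Monday to Friday, 9 a.m. to 5 p.m.",
    "return": "You can return items within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

FALLBACK = "Sorry, I don't understand the question."

def reply(message: str) -> str:
    """Answer by plain keyword matching -- no real language understanding."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

# The wording the rules anticipate works ...
print(reply("What are your opening hours?"))
# ... but a paraphrase a human understands instantly does not.
print(reply("When can I drop by your store?"))  # falls back
```

Every phrasing the rules don't anticipate ends in the fallback answer, which is exactly the "hasslebot" experience described above.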

By contrast, more complex AI-based applications employ a range of different technologies to create human-like conversational interaction, or in other words, a relatively normal dialogue (conversational search). Coupled with machine learning, innovative approaches to speech recognition have proven to be remarkably effective in recent years.

 

Training data – the foundation for NLP and machine learning

For machines to interact autonomously in natural language with a broad vocabulary, they have to learn – just as we humans do.

For this to work, the machine first needs to understand our language – from the grammatical and syntactical basics to semantics and textual correlations. Natural language processing (NLP) provides support in this respect. This technology translates natural language into machine language so that words, sentences, relationships, and facts are understandable. 
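
The very first of these steps can be sketched in a few lines: turning raw text into tokens and counts that a machine can actually compute with. Real NLP pipelines go much further (part-of-speech tagging, parsing, entity recognition); the example sentence here is invented:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

sentence = "The chatbot answers the question about the shipping costs."
tokens = tokenize(sentence)

print(tokens)
# Word frequencies -- a first machine-readable representation of the text.
print(Counter(tokens).most_common(1))  # [('the', 3)]
```

Representations like these word counts are the raw material on which the learning step described next operates.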

The next step is where machine learning comes into play: an algorithm enables the machine to learn by analyzing data and past experience. Based on these experiences – together with statistical probabilities and the given context – the machine derives its own rules for recognizing patterns and correlations.

In this context, the importance of high-quality, pertinent training data – in other words, texts that have been annotated – cannot be emphasized enough. The patterns learned from this data can then be transferred to new, unfamiliar data, determining which information is extracted from a text for correct understanding and which characteristics are relevant at which point. The quality of the predictions, and thus of the dialogue, therefore depends crucially on the quantity and, above all, the quality of the data. Algorithms can only learn to interpret requests correctly if the training data accurately reflects the range of user input and is based on real application experience.
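
As a toy illustration of learning from annotated data (all example texts and intent labels are invented), the "model" below simply counts which words occur with which intent label and then transfers those counts to an utterance it has never seen:

```python
from collections import Counter, defaultdict

# Annotated training data: each text carries a human-assigned intent label.
TRAINING_DATA = [
    ("where is my package", "shipping"),
    ("how long does delivery take", "shipping"),
    ("i want to send the item back", "returns"),
    ("how do i return a product", "returns"),
]

def train(examples):
    """Count per-intent word frequencies from the labeled texts."""
    model = defaultdict(Counter)
    for text, intent in examples:
        model[intent].update(text.split())
    return model

def predict(model, text):
    """Pick the intent whose training vocabulary overlaps most with the input."""
    words = text.split()
    return max(model, key=lambda intent: sum(model[intent][w] for w in words))

model = train(TRAINING_DATA)
print(predict(model, "when does my delivery arrive"))  # shipping
```

Even in this tiny sketch, the point about data quality is visible: the classifier can only generalize to "when does my delivery arrive" because the annotated examples already cover the vocabulary of real shipping questions.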

State-of-the-art insight engines unite these techniques and expand traditional enterprise search with extensive AI features that optimize – and even transform – workflows and business processes while lightening the workload for employees.

 

To find out how this looks under real-life conditions, read our case study Boosting customer service in the B2C sector with Mindbreeze InSpire.
