Podcast | Can your chatbot handle user intent within a conversation? Perhaps it should

Jean
Shin

Director, Strategy & Content | Podcast Host of Mobile Interactions Now

25 min podcast

In this episode, we check in with Dan Leshem, Co-Founder of Plantt.io, a Tel Aviv-based customer experience automation company. We’ll be talking about what Dan learned from developing AI-powered chatbots for interacting with customers and employees. As a hardcore coder and technologist, Dan brings his deep insights and practical ideas to improve our interactions with bots.

Podcast transcript

Jean:

Dan, welcome to the show. I am thrilled to have you on today, because lately I'm getting questions that have really practical consequences: when to deploy what, and for whom? Very specific, practical questions. I've had a lot of conversational automation conversations on the business end and the strategy end of it, and I really wanted to get into some of the technical part as well, to help people get started.
 

Dan:

Thank you for having me, Jean. It's my pleasure. I'm Dan, CEO and co-founder of Plantt. Basically, we help companies build better chatbots and automate their customer communication channels. It's a great pleasure to be here and talk about AI and customer communication automation.


Jean:

Awesome. I really want to start from an overall understanding first, so that we make sure we're on the same page. Given this whole pandemic-induced reset, the digital part of customer experience, the field you work closest with, has even bigger consequences for day-to-day operations.

I want to do a quick recap of what this whole automation topic means to businesses these days, especially when it comes to customer-facing automation. Can you give us a little general understanding of this?


Dan:

Absolutely. I think even before the pandemic, we saw companies struggling to meet demand and keep up with customers' expectations. Nowadays, customers communicate with your business across multiple channels. It's not just phone calls, not just emails or people walking into your stores. We're seeing companies having trouble scaling their support organization fast enough, and having trouble building a strategy for communicating with customers across those different channels.

And this, of course, led to bad customer experiences, meaning longer response times and a drop in customer satisfaction. And this is where we started, I think for the third time, to see chatbots and customer-facing automation; we've seen companies trying to embrace that. And on paper, chatbots make a great promise, right?

From our perspective as consumers, they give us the promise of getting whatever we want, in our own language, whenever we want it. That's great on paper. Except in reality, it doesn't really work. We've seen companies struggle to bring that into production, and this is where the experience starts to break.

In my opinion, when we think about building the right customer-facing automation, it should start by listening to our customers. We need to understand what our customers really want to achieve when they communicate with us. For example, if a customer contacts us to ask about a delivery, about the shipping status, we need to understand in real time who this client is and why they're contacting us right now. That, I think, is the key to answering the question of how to make customer-facing automation succeed.


Jean:

I had a little chuckle, an internal chuckle, when you said you have to start listening to your customers. It's such a no-brainer concept that it applies to anything. Sometimes, you have to listen to your internal people first.


Dan:

Yeah.


Jean:

I remember going into a meeting with different departments. The customer service folks were delving into how the chatbot could alleviate some of the incoming phone call load: can it answer these kinds of questions, can it handle this instead of a call? That's what they wanted to talk about most. And we also had operations and IT people asking whether it could handle a process, more of an onboarding process, which, in the traditional sense, is basically process automation.

It seems the wants are many, even internally, before you even start really understanding the customer part of this. Since you have firsthand experience talking with businesses: what do you tackle first, given the technology and the data available? How do you even think about it internally and say, "Okay, what do we want to accomplish with this particular type of automation or attempt? How do we even think about this to begin with?"


Dan:

Absolutely. You know, when we help companies build their strategy, one thing they can look at is their current data. Your business has been running for a while, so it generates data. You've been communicating with your customers, so you should leverage that data to give a better experience. It's nothing new: when we want to build a new website or a new mobile app for our customers, we look at the data and we talk to our customers.


Jean:

Have you seen examples where you felt the customer's needs and the quality of the final interaction were a really good match? Can you talk about some examples that made you feel, "Ah, we are achieving something here?"


Dan:

We have this example of working with a company that wanted to solve everything with AI, right? They wanted this huge, smartest chatbot that could talk about nearly everything. In reality, their customers really wanted just two things. First, they wanted to ask about the shipment status; the other was customers who had lost access to their account and had to reset their password.

And at the beginning, it didn't work. The AI assistant just didn't deliver on the promise, because they had gone through a very long and tedious planning phase for their chatbot. They were gathering examples and dialogue samples, and iterating through training the NLP system, the language understanding technology. And in reality it didn't work, because their customers just wanted to reset their password. They didn't want to talk about the weather, or to have, I don't know, some concierge sales bot. They didn't want it. They wanted to ask where their delivery was, and to reset their password.


Jean:

As a human being, I can totally relate. When I need some kind of service and I come somewhere for it, if you give that to me right away, and then, by the way, offer some other delightful things I could do with it, then I could be open to it.


Dan:

I think having automated conversations with our users is all about managing their expectations, right? It's like having an upfront contract with our users. When this contract is broken, that's where the customer experience breaks as well. For example, we've seen companies whose chatbots weren't very sophisticated in terms of NLP or NLU technologies, the language understanding; their knowledge base was pretty standard. And those companies tell their users, when they first interact with the chatbot, "Hi, this is our digital assistant. I am a chatbot. I can help you find out what your delivery status is, I can help you order a coffee, or something like that. And that's it. If you want to speak with an agent, by all means tell me, and I will get you an agent as soon as possible. But why don't you give it a try first?"

And we've seen the completion rate, the performance, work way better than at companies that didn't manage expectations with their users. In my opinion, what we've seen work better is being honest with your users: you are transparent, you tell them they're interacting with a chatbot, and you tell them how it can help them, which topics it covers, which issues it can solve. And the last part is that you always back it up with a human. If users know there's a human behind the scenes who can pick up the conversation and help them solve their problem, they'll give it a try, because it's a win-win situation.


Jean:

Wow. You just said at least three big chunks of what makes a bot successful that I really want to unpack a little bit. The first part, the chatbot actually explaining, "Hey, I'm here to do this," and setting expectations upfront, almost rings true in terms of building the personality of the bot as well. That honesty, openness, and straightforwardness reminds me of a previous episode's guest, Marco Spies, who wrote Branded Interactions. The way he approached it is that bots need to have a personality: whether you're there to entertain people or to serve them quickly and efficiently, the bot should embody and resonate with the brand, and move from there. So I totally understand this. In practice, how is it done? Is it more the bot spelling out, "Hi, I'm here to take an order"?


Dan:

First of all, I think it depends on how you measure the level of experience. In my opinion, the way I measure it is whether the user got what they asked for, right? I don't measure the level of experience in terms of fancy language understanding, or how conversational or humanlike the conversation is. I think we need to be very transparent with our users, and very specific. For example, if the bot's goal is to help users get the delivery status, the shipment status, a good example would be telling your users, "I'm going to find out where your delivery is. I just need a few details from you, all right? And then I'm going to transfer this to an agent who can really look up the delivery status for you."

Okay, this is a situation where you don't have an integration with your shipment status; you can't retrieve it through an API, you don't have that data point available in real time. In this case, you tell your users you're going to have a dialogue with them, and then you're going to transfer them to an agent. That's okay. That's managing expectations. They will not be disappointed.

On the other hand, if you tell them, "I'm here to tell you where your delivery is," and then you ask them a bunch of questions, what's your order number, what's your phone number, and then you tell them, "Okay, an agent will join the conversation in a few moments," this is where the expectation breaks.


Jean:

Then let me pick up on that handover moment. From the business's point of view, somebody may be looking at these bot interactions as automation that helps them even when there is no live human assistance available. But this requirement of systematically having human assistance available is a kind of burden that's not completely addressed.


Dan:

Absolutely. In my opinion, you need to identify the data points and integration points in your conversation. Where you're dealing with a use case that requires more context and data, which might not be available to the bot in real time, that's where you have to be prepared with a human who can extract that data and pick up the conversation where the bot left off. You need to map out the different integration points and data points in the conversation, and know where the spots are where you should be prepared with a live agent behind the scenes.
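Dan's idea of mapping out integration points can be sketched in code. This is a minimal illustration, not any real platform's API: the flow steps, the integration names, and the function are all invented for the example. Each step of a conversation flow declares the data source it depends on, and any step whose source isn't integrated is flagged as a planned human-handover point.

```python
# Hypothetical sketch: map each step of a bot's conversation flow to the
# integration it depends on, and plan a live-agent handover wherever that
# data isn't available to the bot in real time. All names are illustrative.

AVAILABLE_INTEGRATIONS = {"order_lookup"}  # e.g. an orders API, but no shipping API

# Each step: (step name, integration it depends on, or None if self-contained)
FLOW = [
    ("greet", None),
    ("collect_order_number", None),
    ("fetch_order", "order_lookup"),
    ("fetch_shipment_status", "shipping_api"),  # not integrated -> needs a human
    ("answer_user", None),
]

def plan_handover_points(flow, available):
    """Return the steps where a live agent must be ready to pick up."""
    return [step for step, dependency in flow
            if dependency is not None and dependency not in available]

print(plan_handover_points(FLOW, AVAILABLE_INTEGRATIONS))
# -> ['fetch_shipment_status']
```

The point of the sketch is that the handover is planned at design time, so, as Dan says, the user can be told upfront that an agent will take over at that step.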


Jean:

What are some performance indicators that give you a real sense of progress, rather than your own self-satisfaction of "Oh my God, it did this for me"? What do you really look out for in terms of measuring and seeing progress?


Dan:

One thing I look at is the failure rate. This is where the bot essentially failed to understand the user, and either the conversation was cut off or it was handed over to an agent at a point where we didn't plan on handing it over. And of course I look at satisfaction rates, because customer satisfaction is extremely important when you want to measure whether your bot is successful or not. Also: what part of the conversation has been automated? To give you an example, we see many chatbots that work fine at understanding the user's initial inquiry, but they don't really help the customer at the end. If the customer has a follow-up question, like "Hey, I read this article, but I'm still having trouble getting things done,"

this is where chatbots start failing. It's really important for us as developers to look at the whole conversation. It's not a single interaction; it's not that you ask me a question, I provide an answer, and that's it. We need to look at the whole context, and we need to understand whether the conversation can keep going for more than two steps.
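The indicators Dan mentions can be computed straightforwardly from conversation logs. This is a minimal sketch; the log field names (`handed_over`, `handover_planned`, `resolved_by_bot`, `csat`) are assumptions for illustration, not any particular platform's schema.

```python
# Hypothetical conversation logs: did the bot resolve it, was there a
# handover, was that handover planned at design time, and the CSAT score.
conversations = [
    {"resolved_by_bot": True,  "handed_over": False, "handover_planned": False, "csat": 5},
    {"resolved_by_bot": False, "handed_over": True,  "handover_planned": True,  "csat": 4},
    {"resolved_by_bot": False, "handed_over": True,  "handover_planned": False, "csat": 2},
    {"resolved_by_bot": True,  "handed_over": False, "handover_planned": False, "csat": 4},
]

def bot_metrics(convs):
    n = len(convs)
    # Failure rate: handovers we did NOT plan for (the bot lost the user).
    failures = sum(1 for c in convs if c["handed_over"] and not c["handover_planned"])
    automated = sum(1 for c in convs if c["resolved_by_bot"])
    return {
        "failure_rate": failures / n,
        "automation_rate": automated / n,
        "avg_csat": sum(c["csat"] for c in convs) / n,
    }

print(bot_metrics(conversations))
# -> {'failure_rate': 0.25, 'automation_rate': 0.5, 'avg_csat': 3.75}
```

Note the distinction the sketch encodes: a planned handover (row two) is not a failure, matching Dan's definition that only unplanned handovers count against the bot.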


Jean:

Does that have anything to do with the design process: the personas, the assumptions we make about how the interactions will go, or about the level of understanding the end user will have? The way I'm seeing it, there was an assumption that the bot understands, "Oh, this is asking for this." How would you solve these kinds of things?


Dan:

If we were having a conversation in German and I wasn't sure what you were saying and asking, I would ask you to clarify, or I would ask a question that helps me understand what it is you're looking to get. When it comes to chatbots, for some reason, that's not the case. If they're not confident enough, they'll go with whatever they're most confident about anyway. We think that the longer the conversation is, the more the customer experience is damaged, and that's really not the case. It's all about managing these expectations and letting your users know that they are understood, that the chatbot understands them. And if it doesn't, that's okay.
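The clarify-instead-of-guess behavior Dan describes is usually implemented as a confidence threshold on the intent classifier. Here is a minimal sketch; the intent names, threshold value, and hard-coded scores are stand-ins for a real NLU model's output, not any specific product's API.

```python
# If the top intent's confidence is below a threshold, ask a disambiguating
# question instead of running with the best guess.
CONFIDENCE_THRESHOLD = 0.7

def respond(intent_scores):
    """intent_scores: dict mapping intent name -> classifier confidence."""
    ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
    top_intent, top_score = ranked[0]
    if top_score >= CONFIDENCE_THRESHOLD:
        return f"handle:{top_intent}"
    # Low confidence: offer the two most likely intents rather than guessing.
    options = " or ".join(name for name, _ in ranked[:2])
    return f"clarify: Did you want to {options}?"

print(respond({"track_delivery": 0.91, "reset_password": 0.05}))
# -> handle:track_delivery
print(respond({"track_delivery": 0.45, "reset_password": 0.40}))
# -> clarify: Did you want to track_delivery or reset_password?
```

The design choice here matches Dan's point: a clarifying turn makes the conversation one step longer, but it signals to the user that they are being understood, rather than silently acting on a weak guess.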


Jean:

Some of the disappointment I've had in bot interactions is when it doesn't get it, and I do my part by asking again, and it doesn't ask back to discover what the problem is. It keeps saying, "I don't understand. I don't understand." And then it ends up a dead end, or a handover. Is this a technical problem, or a mindset problem? Why am I not seeing progress on that end, where the chatbot asks questions of me?


Dan:

I think it's an issue of context and of how we design the automation today. The way it works today, we have a process of intent discovery: we're looking to understand what types of interactions we have with users. You have users who want to order a pizza, and users who want to cancel their order, those kinds of intents. What we need to do is look at the intent within the conversation. We need to anticipate the user's intent in terms of the context.

For example, in the delivery status case, we have a user asking about the delivery status. When we tell them their delivery left the warehouse two days ago, we need to expect, at that point, that the user is going to ask, "Why is it taking so long?" We need to define this follow-up intent and use it to understand our users; we know they're asking why their delivery is being delayed. We need to find those different intents throughout the conversation and improve the conversational experience.
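Dan's notion of "intent within the conversation" can be sketched as a lookup keyed on the bot's last action: after a given reply, the bot only considers the follow-up intents that make sense at that point. The intent names, actions, and keyword matching below are illustrative assumptions; a real system would use an NLU model rather than keyword search.

```python
# Follow-up intents we expect after each bot action (illustrative).
EXPECTED_FOLLOW_UPS = {
    "gave_shipment_status": ["why_delayed", "change_address", "cancel_order"],
    "reset_password_done": ["login_failed", "thanks"],
}

# Toy keyword patterns standing in for a trained intent classifier.
KEYWORDS = {
    "why_delayed": ["why", "so long", "delay"],
    "change_address": ["address"],
    "cancel_order": ["cancel"],
    "login_failed": ["can't log in", "still locked"],
    "thanks": ["thanks", "thank you"],
}

def classify_follow_up(last_bot_action, user_message):
    """Match the user's message only against follow-ups expected in this context."""
    text = user_message.lower()
    for intent in EXPECTED_FOLLOW_UPS.get(last_bot_action, []):
        if any(kw in text for kw in KEYWORDS[intent]):
            return intent
    return "unknown"

print(classify_follow_up("gave_shipment_status", "Why is it taking so long?"))
# -> why_delayed
```

Restricting classification to the expected follow-ups is what lets the bot interpret a vague "Why is it taking so long?" correctly: in the shipment-status context, "it" can only mean the delivery.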


Jean:

It's sort of like the technology is moving toward already knowing: when you contact a company, given the time, the problem you were having, or where you are, the bot already knows why you might be reaching out, and it focuses on saying, "Hi, are you contacting us about this?" Yes, it would be delightful for the bot to anticipate something I hadn't even thought of. That's icing on the cake.


Dan:

As you mentioned, today we're stuck at single interactions. Moving forward, we have tools that can help you design the conversation better and predict, at each step of the way, what the customers really want to do and how to better interact with them, even after the initial response. When you interact as a human, the conversation can evolve in different directions. As developers, and as companies and organizations, we need to design better chatbots that can somewhat imitate that notion.

Most chatbot platforms we've seen out there don't support the notion that the conversation can evolve in different ways. They don't give you the option to keep the context of the conversation and really have a conversation with your users. Although we're moving forward as an industry in that direction, we're still a little bit far from it.

Part 2 of the interview with Dan Leshem will be released in two weeks, following this release of Part 1.