Podcast | What’s the difference between an intelligent assistant and a chatbot?

Jean Shin

Director, Strategy & Content | Podcast Host of Mobile Interactions Now

33 min podcast

In this episode, we check in with Dan Miller, Founder of Opus Research, who knows the ins and outs of how we got to today’s technologies shaping service automation and conversational commerce. We’ll be delving into different sources of creativity behind the latest intelligent assistants – and discuss why it took a while for Dan to embrace the term ‘chatbot’:-)

Podcast Transcript:

Jean:

Dan, welcome to the show. We gave a short intro in the beginning, but I would love you to tell us a little more about your personal journey through the tech space. I find it tremendously interesting. What led you to found Opus Research?

 

Dan:

Wow. Okay. Well, I think one of the things that most impresses people when I talk is that I started Opus in 1986 after working in corporate strategy for Atari, of all places. Since so many developers are gamers and came out of the game business, I was there at the beginning, sort of. Atari's core business was around games, obviously, but as many people know they also had a line of computers, the Atari 800s and that sort of thing. And just as dial-up modems were getting popular, we were investigating what sort of services a company like Atari could offer electronically to home devices. We talked to phone companies, and I got to know a lot about telephony in that process. I got tours of electronic switching system centers and that sort of stuff.

 

And to make a long story a little shorter, Atari suffered a bit in the marketplace. Our group was spun off and then dissolved, and that's when I hung up the shingle to start Opus. Opus's mission was to work with companies that were looking at developing interactive services, essentially, but those were kind of slow to develop. So I did all sorts of research around what we now call conversational commerce; at that time I called it conversational access technologies, but it was essentially voice. Our premise, even back then, was that voice is probably the most powerful user interface: using our own words to accomplish the tasks we want to do through ever more powerful resources, which we now say are in the cloud and back then called hosted services. The names have changed, but the core technologies are the same.

 

But I also want to say I'm essentially an English major and journalist. I like writing about this stuff. I do not code. I learned BASIC in ninth grade in 1960-something on GE timesharing, so I know a little bit about programming, but from then on I've mostly written in English, and I've always been fascinated by how technology can help us out.

 

Jean:

What an honor to have you on the show today. I simply could not have picked a better person to talk about this, and especially to check in at this time. Given the pandemic reality we are living through together these days, it appears that the stakes are getting even higher for many businesses to get digital self-service right. And those solutions are not an entirely new space. I would love it if you could give us a little historical overview of how this has evolved and where it's headed.

 

Dan:

Yeah. So self-service used to be code for just automated handling, replacing people with machines. Even when it was an interactive voice response system, an IVR would answer the phone and give people static choices: press one for sales, two for service, and that sort of thing, just to replace an expensive resource with a less expensive resource, automation. As these systems became more robust and could understand people's intent, self-service got redefined, at least in my mind. And the world's starting to figure this out: it really means service of self. It means that people can use their own words, whether they're typing them or saying them, and accomplish the tasks they want to do. So we're looking at resources that recognize or anticipate an individual's intent based on what they know when that person contacts a brand or a company.

 

And the cues may be what they say, or whether they're pressing one or two or three, but it could also be their history, what apps they've used on their smartphone, what their number is. There are a lot more cues to key off of to recognize intent. And to your point about the pandemic exposing a few things: it exposed a few trends that were already well under way, that the best companies were already putting into practice: the use of speech analytics, of natural language understanding, of machine learning to better understand their customers. Now, I know there's a fine line between that and pure surveillance, hoovering up a lot of information about individuals, but in defining self-service we're really also defining what I would call benign surveillance. So you do have information in a smartphone.

 

Well, there is a debate about where it's going to get stored and that sort of thing. But as developers try to figure out how to build better self-service apps, they know there are tools that will help develop the right prompts, use those prompts to take the correct action, and make the right dips into the right database to better serve people. And that got accelerated as people were forced to stay at home and do more things online or over the phone.

 

Jean:

Given the way you are describing the origin and what it's trying to do, are there any misinterpretations of what it is supposed to do, and what it actually means, that really bug you?

 

Dan:

Well, Opus has long used the term intelligent assistants, as opposed to chatbots. We were trying to capture the idea that we're using technology, or intelligence in the network or on devices, as an assistant to the individual using it. So to cut to your question: the word chatbot came up almost simultaneously with the idea of a virtual agent. If there is a Siri on your phone or Alexa on your smart speaker, that's a virtual assistant. But if you're on a messaging app and something pops up in a bubble and you know it's automated, well, that's a chatbot. The problem I always had with chatbot is that to me, bots were viruses; bots in the '80s or '90s were malicious code that went around and planted malware or stole your information.

 

I have now come to embrace the idea of chatbots because it is the term people use, but it's like holding your nose and saying, "Okay, I'll vote for that." There's a lot of creativity going into creating better chatbot platforms and into bringing big developer communities in to improve what a chatbot is. So it's a chatbot. But to me, if you think of it more as an intelligent assistant or a virtual agent... bot also implies something like robotic process automation, repetitive tasks or go-fasters. And that's benign, I don't mind that, but it doesn't put you on a continuum. If I can do task one really quickly on behalf of a caller, I should recognize that that task may lead to five more tasks. We'll get to this later in our discussion, but as you design a better bot, they become more conversational.

They handle multi-slotted instances, meaning that somebody may engage with a chatbot or an intelligent assistant to buy an insurance policy; I'll jump right to a complex one. There are a bunch of questions and answers that have to take place to qualify that individual, to know what they're really asking for, essentially to fill out a form. That means there are many slots in a conversation. You can build this into "a chatbot," but if you think of it as an assistant helping somebody enroll, helping somebody fill out a form, then you are better at helping that individual accomplish his or her task.

The precursor to chatbots were things called chatterbots. One of the early ones was ELIZA, which was like a psychiatrist sort of app. And I'm misrepresenting this, because my brain is not totally ... But it was essentially designed to engage in a conversation, and that conversation could be sort of aimless. It would basically say, "How are you feeling?" You'd say, "I'm feeling fine." It'd go, "Fine?" It would take your last word and put an upward inflection on it. And those things could go back and forth for days; it could go on forever. Whereas when you're trying to apply an intelligent assistant in self-service, you're trying to accomplish a task.

 

So there's a notion that if I design the perfect chatbot, it would answer a question, ensure it was correct, and then say, "Oh, my job is done." You could have 100% correctness and very short conversations, and nothing gets done. If you design an intelligent assistant, there are more conditions or rules to define. So you say, okay, Dan's calling to buy insurance, I need to know ... and that pulls up a form, or a set of slots, that needs to be populated. I need to know his age, his income, his general health, all that stuff. If you design an assistant to do that, it's all doable with today's tools; it's just a matter of intent. I just urge that people don't call them chatbots.
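The slot-filling pattern Dan describes, asking for whatever the form still needs until the task can complete, can be sketched in a few lines of Python. The slot names and prompts below are hypothetical illustrations, not any vendor's API:

```python
# Minimal sketch of slot filling: the assistant tracks which "slots"
# of a form are still empty and asks targeted questions until the
# task can proceed. Slot names and prompts are made up for this example.

PROMPTS = {
    "age": "How old are you?",
    "income": "What is your annual income?",
    "health": "How would you describe your general health?",
}

def next_prompt(slots):
    """Return the question for the first unfilled slot, or None when done."""
    for name, question in PROMPTS.items():
        if slots.get(name) is None:
            return question
    return None  # every slot filled: the enrollment task can proceed

slots = {"age": None, "income": None, "health": None}
print(next_prompt(slots))   # asks about age first
slots["age"] = 52
print(next_prompt(slots))   # moves on to income
```

The point of the structure is Dan's continuum: each answered question narrows the form, so the conversation always moves toward completing the task rather than chatting aimlessly.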

 

Jean:

That totally jibes with one of the recent experiences I had. I was in a meeting with a bunch of people from a customer experience department, and we were talking about using WhatsApp as an additional channel to do CX, customer experience, and conversations with agents and bots. In that particular meeting, after everything was launched, we had a head of IT sitting there, and he was more interested in, "Can I get this onboarding process done as well? Can I have them actually activate their SIM card?" In this case it was a telco. So the conversation quickly moves. Yes, you can have a conversation and exchange dialogs, and then it quickly moves to, "Okay, what else can you do, step by step? Can we do this?" I think we are beginning to see this more.

 

And one of the interesting things is that these days, because users are already familiar with a lot of different chat apps, whether that's Facebook Messenger or WhatsApp or what have you, they already know this interface of going back and forth. Now that these popularized chat apps are coming together with the automation in the back, what do you see happening there?

 

Dan:

Wow. A couple of things to be cognizant of. One is, Opus separates the core technologies that support conversational commerce from a front end, which is the conversational user interface. There, a lot of attention is paid to training a system to consistently understand the intent of an individual and then match it with the correct resource to respond correctly, or make the correct recommendation, or route it to a person who can do all of that. Those are the three things that happen on the front end. And if you're a developer, there are a lot of platforms that will perform the core functions: discovering what the right answers or recommendations are by ingesting past conversations, building accuracy in recognizing intent, then going the next step, asking which of these answers answered correctly, and building a thing that responds correctly or makes the right recommendations at scale based on past performance.

 

Now, the other part we call conversational intelligence, and that's what your question was about. The easiest thing a chatbot can do is be something like a glorified FAQ, where there's a static set of correct answers, and what you're trying to do is train the thing to recognize intent accurately based on past conversations. But what if, and this is what we're learning from the pandemic, what if it's a shopping app and you need to know what's in stock and how much it costs, and that's ever changing? There's more information you have to get from an individual about how to deliver it, how they want to pay, and that sort of thing, and that all needs to read from or write to back office systems.
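A "glorified FAQ" bot of the kind Dan mentions can be sketched as nothing more than matching the user's wording against a static answer set. The naive word-overlap heuristic below stands in for the trained intent classifier a real system would build from past conversations, and all questions and answers are hypothetical:

```python
# A "glorified FAQ" bot: a static set of correct answers, with the
# user's wording matched to the closest known question by word overlap.
# Production systems would train an intent classifier instead; the
# questions and answers here are made up for illustration.
import string

FAQ = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

FALLBACK = "Sorry, I didn't get that. Let me route you to an agent."

def answer(user_text):
    # Normalize: lowercase and strip punctuation before comparing words.
    cleaned = user_text.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best_reply, best_overlap = None, 0
    for question, reply in FAQ.items():
        overlap = len(words & set(question.split()))
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    return best_reply or FALLBACK  # no overlap at all: hand off to a human

print(answer("How can I reset my password?"))
```

This is exactly the limitation Dan points at: the answer set is static, so the moment the right answer depends on live inventory or pricing, the bot has to read from back office systems instead of a fixed table.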

 

So there's considerable thought going into, and this has to do with what your IT manager was asking, what curation or cleaning up of back office systems is needed in order to offer consistently correct results based on real time information at scale. That's still being ironed out. But we have the benefit of past investments companies have made to transform these processes into services. Many times there are APIs for them: APIs for getting the credit card, writing to the credit card or payment systems, connectors into inventory systems and that sort of thing. But that's on a case by case basis. That's why there will always be a role for developers and people to care for and feed these intelligent assistants or bots over time. They get better with experience, but they do require supervision.

 

Jean:

You mentioned many things still to be ironed out, and I could not have put it better. But despite all that, have you seen anything recently that surprised you, where you thought, "Oh, this thing is really working"? In terms of a user experience, an actually satisfying experience where you get things done. Have you seen anything?

 

Dan:

I knew you were going to ask this. And it troubles me: I haven't been surprised yet. I still want to be. There are things we visualized back when we thought we'd have flying cars before self-driving cars, where technology takes a left turn. I think there are some pioneering things. There are two sources of creativity: up and coming firms that have developed either better user interfaces or better ways to understand, and companies that have gotten better at figuring out the middleware to get the right information. And I'm still waiting for things to come together. I think one of the airlines, I believe KLM, was exhibiting great confidence that you could use WhatsApp or a social network to do more of the things people are accustomed to doing related to flying.

 

And I think it's that kind of thing. It's going to be subtle. One example, and this one's kind of obvious, is people who have Alexa, have an Amazon Prime account, and have made an order might come home and there'll be a green flashing ring on their Echo telling them that something they ordered is on its way. Where the small victories are going to be is when these interactive entities are really ingrained in your life in a way that you use them repeatedly. You just come to accept them and use them often.

 

Jean:

Unpacking that sliver of a moment where you are delighted, all the elements in it, and just the amount of work that goes into creating that at scale, is still mind-boggling. We talked about developers still having a lot of work to do, and all the data and the tools that need to be connected. But we are also starting to see, and I'm not sure if you are seeing this as well, more demand for prebuilt tools coming into play. Is this going to help at all? What do you see?

 

Dan:

Oh, absolutely. And it's not just the prebuilt tools, it's also the low code and no code development environments out there. What we're seeing is growth in what you might call citizen programmers in business organizations. They used to call it shadow IT, but they don't even have to be IT so much; it's business unit employees acting as subject matter experts, bringing what they know about good service into self-service. One of the things we're finding is that you can use analytics to identify the best agents and the best answers. You may then try to get one of the best agents to come over to the self-service side and be the supervisor to the bot, to help the bot learn.

 

And if it's a matter of pointing at a radio button or doing a checkbox, as opposed to doing pull-down menus and connecting diamonds and squares and circles, that's a big plus. So we're seeing advancements in tools that don't require coding. You also brought up the pre-population of some basic skills on a vertical, per industry basis. A bank is going to be different from a telco, which is going to be different from healthcare. There are some horizontal things like enrollment, where you just want to get name, address, telephone number, and method of payment; why wouldn't you make all of that generic? And then there are special cases, or common activities based on industry, where you should build the same sorts of things. Bots have been developed for 15 years now.

 

And I would say there are precursor scripts, developed for speech-enabled IVRs, from which we know what the most frequent topics are, the categories of calls, and now chats, into companies. So there's what can be thought of as a tremendous amount of prior art to shorten the time it takes to get an effective bot up and operating, and that's happening.

 

Jean:

Since it's happening more for certain industry sectors and certain use cases, and probably that's one of the reasons you're citing the airline industry and things like that, is there an intrinsic difference in some of these sectors that helps them get going faster? Is it availability of data, or what have you?

 

Dan:

Well, I mentioned prior art in the form of IVR scripts and that sort of thing. Dating back to the turn of the century, what we call the usual list of suspects for companies ripe for automation have been telcos, financial services, travel. The first work we did was actually government, but there are some reasons why that gets held up. And healthcare will be next as well. The core model for eligibility for automation, initially, was a frequent number of calls for some repeated action, like getting a balance, back in the old days when wireline phones were the major channel, not smartphones and apps. Now we know what the frequently used apps are and what functions of those apps you use daily, and that defines what the most popular vertical industries are going to be.

 

There also has to be a business case behind it, since it is an enterprise and everything's built on ROI. What drove the original automation, which then drove self-service, was just replacing humans with automation, as I said. You could look at the volume of calls, look at the percentage you could "capture," and do the math: "Oh, it's $25 per call versus $5," or something like that. And you can build a business case for automating. Now that things have been automated, we don't have the same numbers for moving from this form of automation to a conversational form of automation.

 

But what we do have is that in some vertical industries you can make a case a little faster. In financial services, especially among high net worth individuals, there are categories of callers and applications where you can build a business case. Airlines are the same sort of thing, because the average airline ticket is hundreds of dollars. The cost of real time support involving individuals can be high, and interactions can be lengthy as issues get more complex. So you can start building cases that say, "Hey, if I can do a chatbot that can do more than just answer a balance inquiry, that can march somebody through the process of booking a flight or making complex choices about the nature of a room in a hotel and get their questions answered, I can build a business case for that." So that's the hidden thing: some industries are more conducive to a good ROI.

 

Telecommunications is another. Especially when cable companies and wireless companies were locking you into a one or two-year deal and that sort of thing, they knew what the incremental revenue would be for a successful interaction. And you could figure that the more you can do online or through a bot, the better.

 

Jean:

So the technology is there, things are happening, and so are the use cases. For the mainstream economy, not necessarily the most high value products, how should enterprises who are still sitting on the fence, thinking, "Okay, digital transformation is one way to sustain my business," get their arms around this? Any words of wisdom is what I'm going for here.

 

Dan:

It's a very timely question, because in our last report I drew that classic bell-shaped adoption curve: early adopters over on one side, the small percentage of companies, usually large and with a seemingly infinite amount of money to invest, moving to a much fatter part of the curve, the early majority. We're moving to that, and that means medium to large companies, not the super large ones. It means there are products and services that will help them get started with a pretty formidable digital self-service product: something cloud-based, tools where you initially just go to a website, give your name, and have access to pull-down menus and things to start building bots.

 

And there's enough prior experience from the learnings of these larger companies that you know what doesn't work, and you have instances where you can point and say, "I want what those people have." So we've reached a point where there are freemium models and consumption models for the tools and resources to build good intelligent assistants, models that start free and then grow with you. There are guidelines, even in the APIs once you enroll, with the do's and don'ts, and there are tools that subject matter experts, as opposed to IT people, can use. So chances are companies are already doing this. Somebody in marketing or in support has said, "We really should have a bot on WhatsApp that can take care of 80% of these stupid calls we have about wanting a refund." It's happening organically.

Part 2 of the interview with Dan Miller will be released in two weeks.