(Note: this entry is from 2016 and addresses supervised ML and generative chatbots' survival in controlled and open interaction environments. I would like to revisit it, especially after seeing any feedback it generates.)
I love Star Wars. My first cinema experience was going with my dad’s friend to see The Empire Strikes Back.
My favorite character as a kid was R2D2. He was like the perfect puppy, man’s best friend, who also had an attitude and spoke in a secret language that only those who were close to him could understand.
When I was first assigned to work on chatbots, I was clouded by dreams of making an AI, just like most people. Visions of science fiction computer companions danced in my head.
To my disappointment, the early scripts, and even many modern ones, are simply that: scripts that have more in common with the search bar on Google than they do with HAL.
As anyone who has worked on a chatbot knows, the moment you put some form of artificial social script like a chatbot in the wild, humanity feels obligated to test it and torture it for you.
It sounds terrible, but why would you expect any other result? Countless social experiments have demonstrated the harsh reality of society for years: leave something artificial unattended and humans will try to break it. I read a story about a traveling robot that was hitchhiking across the country, and it didn't last long before it was destroyed. Probably the most famous example was Microsoft "Tay." If you don't know the story, about a year ago Microsoft set loose an AI with a Twitter account. The results weren't just bad, they were remarkably bad. I was not surprised, because that has been my experience since my first attempt at a chatbot for customers in 1998.
Nearly every chatbot I have seen made public resulted in escalations. Not because the bot was bad, but because people seemed to feel angry about being "forced" to use it and went out of their way to have a bad experience.
There was only one way I felt comfortable doing a bot, and that was introducing it not with a personality but as an interactive "search" field. I put all the same technology into it and had it respond in the same way, minus the effort to make it feel like an artificial or real person. It didn't even get a name; I just called it "Search" and made the interface feel like a search field. It GREATLY reduced the escalations caused by its existence and actually did the job of deflecting.
The thing with putting something in public is that it has to be adopted by the community and made part of that community. Chatbots are no different in my experience. But if you make the community your entire customer base and force the bot into their lives, they will likely reject it.
Maybe that will change in the future as AI gets stronger, but for now I personally avoid making chatbots that are "chatbots"; I make them "natural language search engines" that feel interactive, robust, and nonintrusive.
I did this all the way until I came to work for a network company that had done something remarkable.
The company had a chat system (IRC) that had a simple scripted bot with a name; we’ll call it “Lilith.” It was a script one of their people had used for games or something in the past. Lilith was NOT customer facing. She was in their team chats. She “listened” to their chats and if certain words or phrases were mentioned, she would reply with links to their Knowledge Base.
She was essentially a much simpler script than most companies have launched to serve their customers. But unlike those bots, she was part of her community; they wanted her there. They interacted with her. New hires were quick to adopt her as they panicked in the chat hoping for help from their busy peers, and Lilith would reply with answers fast. They also had her doing fun little tasks like "dice rolls," which they used to have fun or make decisions.
Once I saw this, I realized that bots can work when they are part of their community. We updated her to provide training when requested. You could PM her directly if you wanted to be discreet. You could tell her to "like" a response to help her learn (via the KB's machine learning). Leads could update her with a few commands to add new phrases for her to listen for. Since Lilith was in their chat space all day and they were there too, she acted as their KB's interface, eliminating the need to switch to the KB.
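To make the idea concrete, here is a minimal sketch of how a Lilith-style helper could work: a dictionary of trigger phrases mapped to KB links, a "like" command for feedback, and a lead-only command to teach it new phrases. Everything here, the function name handle_message, the kb_triggers dictionary, the example links, is hypothetical; the real bot was an IRC script wired to a KB, and you would plug a function like this into whatever chat platform your team actually lives in.

```python
from typing import Optional

# Minimal sketch of a Lilith-style helper: trigger phrases mapped to KB links,
# a "like" command for feedback, and a lead-only command to teach new phrases.
# All names and links here are hypothetical; wire handle_message() into the
# chat platform your team actually uses (IRC, Slack, Teams, etc.).

kb_triggers = {
    "password reset": "https://kb.example.com/articles/password-reset",
    "router reboot": "https://kb.example.com/articles/router-reboot",
}

like_counts = {}            # KB link -> number of "likes" (crude feedback signal)
leads = {"alice", "bob"}    # users allowed to teach the bot new trigger phrases


def handle_message(user: str, text: str) -> Optional[str]:
    """Return a reply to a chat message, or None to stay quiet."""
    lowered = text.lower()

    # Lead-only teaching command: !learn <phrase> = <KB link>
    if lowered.startswith("!learn ") and user in leads:
        try:
            phrase, link = text[len("!learn "):].split("=", 1)
        except ValueError:
            return "Usage: !learn <phrase> = <KB link>"
        kb_triggers[phrase.strip().lower()] = link.strip()
        return f"Got it. I'll now answer '{phrase.strip()}'."

    # Feedback command: !like <KB link>
    if lowered.startswith("!like "):
        link = text[len("!like "):].strip()
        like_counts[link] = like_counts.get(link, 0) + 1
        return f"Thanks, noted ({like_counts[link]} likes)."

    # Passive listening: reply with a KB link when a trigger phrase appears
    for phrase, link in kb_triggers.items():
        if phrase in lowered:
            return f"That sounds like this KB article: {link}"

    return None  # no trigger matched, say nothing
```

The private "PM" path is the same function, just called from a direct-message handler instead of the channel handler.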
Just as R2D2 was Luke's astromech but a tin can to everyone else, other people in the company didn't speak Lilith's language. They didn't get it. But there was a bond between Lilith and that team.
When a chatbot faces the agent side, your support and customer service teams rather than the customer, it can augment your users. It makes Knowledge Base usage fun and part of their culture. It's not another website they have to click on; it's another lead in their team chat.
Most developers can make a chatbot, but the win here came from it being connected to a Knowledge Base system and focused on serving a team, not the entire customer base.
To that team, this chatbot was an R2 astromech. It was part of the team, and, in my opinion, it played a major role in that team’s reputation and success.
When the company was bought by a larger corporation, the new leadership came in and was immediately put off by the bot. The integrated support teams from the new company hated Lilith. They mocked it, tried to crash it, and complained about the chat platform it used. Within a few weeks it was decommissioned.
The members of the old team literally held a memorial service at a bar across the street from the office. There were condolences on Facebook from former employees who had heard. No joke, the developer who created it took a day off in mourning. I am not exaggerating; it was crippling to those who had spent years with that bot supporting them.
After that, the company's CSAT scores dropped. It wasn't all down to the bot, but as the primary interface to the KB, it did play a large role. No one from either team wanted to open the KB; some didn't even know it existed.
If you really want your company to dabble in chatbots and AI, I would highly recommend starting by introducing the chatbot internally to the customer service teams. Let them name it, and let it talk to them via their messaging channels. Maybe integrate it into their CRM, or have it listen in on customer service messaging channels and make suggestions to the service agent's side of the conversation. Learn what a bot can do and how to make it work for you before you try introducing it as a deflection tool and risk generating escalations and calls rather than reducing them.
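As a rough illustration of that "listen and suggest" idea, here is a small sketch that scores KB articles against a customer's message by simple keyword overlap and surfaces the best matches to the agent only. The article list, titles, and the function name suggest_articles are made up for the example; a real version would query your KB or CRM and run behind your messaging integration.

```python
# Rough sketch of the "listen and suggest" idea: score KB articles against a
# customer's message by keyword overlap and surface the best matches to the
# agent only. The article data and names are made up; a real version would
# query your KB/CRM instead of this hard-coded list.

KB_ARTICLES = [
    {"title": "Reset a customer password", "keywords": {"password", "reset", "login", "locked"}},
    {"title": "Troubleshoot a slow connection", "keywords": {"slow", "speed", "lag", "connection"}},
    {"title": "Update billing details", "keywords": {"billing", "invoice", "card", "charge"}},
]


def suggest_articles(customer_message: str, top_n: int = 2) -> list:
    """Return the best-matching KB article titles, for the agent's eyes only."""
    words = set(customer_message.lower().split())
    scored = []
    for article in KB_ARTICLES:
        overlap = len(words & article["keywords"])
        if overlap:
            scored.append((overlap, article["title"]))
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]


# The bot would whisper these suggestions to the agent, never to the customer.
print(suggest_articles("my login is locked and i need a password reset"))
# -> ['Reset a customer password']
```

The point of keeping the suggestions on the agent's side is the same lesson as Lilith: the bot serves the team first, and the team decides what reaches the customer.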
Think of the chatbot as an astromech. Make the bot your frontline team’s R2D2, a friend that speaks their language, learns their culture, and is part of their team.
Boop, beep, waaaaooooooo….