Generative AI has threatened, or promised, to disrupt every industry out there, and customer service is no exception. In fact, the tech has already found its way into customer care centers. According to an IBM report last August, customer service has become the number one space where executives are looking to implement generative AI, with 85% of executives believing generative AI will be interacting directly with customers over the next two years.
But AI isn’t new to the field.
“We’ve been using AI for quite a long time in our service solutions,” says Jennifer Quinlan, global managing partner and customer transformation consulting services leader at IBM iX.
Of course, chatbots have been around for decades, and anyone who interacted with a corporate “virtual assistant” before 2023 will have encountered some form of AI output. Never mind the back-end systems where AI is used to improve operational efficiency.
Companies typically deploy chatbots as a first line of customer support because they’re cheaper than having staff field phone calls. If an automated prompt-and-response bot can point a dissatisfied customer toward a self-service resolution (“fill in this form” or “follow these steps”), it saves valuable staff hours. At least on the corporate side.
From a customer’s perspective (specifically, mine), chatbots are often frustrating tools with a toddler’s capacity for communication. And when the reward for successfully prompting one is an instruction to fix the issue yourself, the experience is less than satisfying. Generative AI, at least, might even the scales between customer experience and company expense.
Anyone who’s experimented with the likes of ChatGPT has seen how much more capable the technology is, and that fluency could make chatbots more personable and effective. However, there are still kinks waiting to be ironed out.
“We’re working with our clients on how to build trust and transparency [with generative AI] as well as how to monitor and audit those bot-based responses,” Quinlan says. Generative AI is still prone to “hallucinate” and spit out false answers, so letting a bot run unmonitored in place of a real customer service agent could do lasting damage to a brand’s reputation. Quinlan says companies are still “testing and learning” how to deploy generative AI to the frontlines.
But generative AI has a potentially more powerful role to play on the back end of customer service. The tech can summarize customer issues, keeping different tiers of customer service agents informed and saving customers from having to repeat themselves.
Generative AI can also help agents scan through reams of policy information to find the best solutions for customer complaints. And, in an example Quinlan provides, it can be used to auto-email customers summaries of their interaction with customer service, so that there’s a record of what action is being taken.
“Generative AI is not meant to replace people,” Quinlan says. “It’s meant to provide information that actually can help [customer service agents] meet the needs of that customer faster, better, and smarter.”
Hopefully that day will come soon.
That’s all from me on the Trust Factor. The newsletter will continue on its regular schedule under the stewardship of a new writer, Nick Rockel.
Thanks for reading,
Eamon Barrett
eamon.barrett@fortune.com
IN OTHER NEWS
Stuck in the middle
A survey by Capterra, a productivity software company, tapped into the feelings of middle managers across Australia, Canada, the U.K., and the U.S., and found that 75% of those under 35 feel overwhelmed, stressed, or just plain burnt out. That’s bad news for the employers grinding them down: over 40% of young managers with less than two years of experience plan to quit, the survey shows. Some plan to leave management altogether.
AI isn’t inventive…
On Tuesday, the U.S. Patent and Trademark Office (USPTO) announced guidance on the use of AI in inventions, saying “a natural person” must make “a significant contribution” to any new invention that could be considered for a patent. The guidance creates an interesting gray area for inventors using AI to assist with their projects, since some results will be patentable and some won’t.
…but AI can be haunting
An anti-gun-violence group has used AI to re-create the voices of six kids killed during the 2018 Parkland high school shooting and is using the audio spoof to robocall politicians who oppose tougher gun laws. The families of the victims worked with the campaign to provide audio clips of their children from which the AI could learn.
The U.K. doesn't trust ChatGPT
The U.K. Department for Work and Pensions (DWP), which handles welfare and pension payments for 20 million Brits, recently updated its “Acceptable Use Policy” framework to ban staff from using publicly available AI applications, and the policy name-drops ChatGPT specifically. But while ChatGPT is out, other AI bots are in. Ryan Hogg reports.
TRUST EXERCISE
“Susceptibility to disinformation threatens everyone, from school kids to voters to corporate CEOs. We are past the point of simply applying healthy skepticism; we all need to do more to protect ourselves, our families, and our constituents.”
So concludes Bill Novelli, professor emeritus at the McDonough School of Business at Georgetown University, in an op-ed for Fortune that begins with a look at how America’s high COVID death toll can be laid at the feet of distrust. That distrust is fueled by misinformation amplified by social media, and we still appear to lack the tools to cut through it.