Google today announced that it has signed up Verizon as the newest customer of its Google Cloud Contact Center AI service, which aims to bring natural language recognition to the often inscrutable phone menus that many companies still use today (disclaimer: TechCrunch is part of the Verizon Media Group). For Google, that’s a major win, but it’s also a chance for the Google Cloud team to highlight some of the work it has done in this area. It’s also worth noting that the Contact Center AI product is a good example of Google Cloud’s strategy of packaging up many of its disparate technologies into products that solve specific problems.
“A big part of our approach is that machine learning has enormous power but it’s hard for people,” Google Cloud CEO Thomas Kurian told me in an interview ahead of today’s announcement. “Instead of telling people, ‘well, here’s our natural language processing tools, here is speech recognition, here is text-to-speech and speech-to-text — and why don’t you just write a big neural network of your own to process all that?’ Very few companies can do that well. We thought that we can take the collection of these things and bring that as a solution to people to solve a business problem. And it’s much easier for them when we do that and […] that it’s a big part of our strategy to take our expertise in machine intelligence and artificial intelligence and build domain-specific solutions for a number of customers.”
The company first announced Contact Center AI at its Cloud Next conference two years ago and it became generally available last November. The promise here is that it will allow businesses to build smarter contact center solutions that rely on speech recognition to provide customers with personalized support while it also allows human agents to focus on more complex issues. A lot of this is driven by Google Cloud’s Dialogflow tool for building conversational experiences across multiple channels.