Is your speech analytics solution multilingual?

Alex Martinez

September 21, 2021 · 4 minute read

Exceptional customer experience (CX) continues to be paramount for any business. In fact, a Gartner survey conducted in 2020 found that 91% of organizations said that CX was a primary goal, if not the primary goal, of their digital business transformation efforts. The bottom line: if your business is not creating a personalized, positive experience across all customer touchpoints, it will see declines in sales, customer base, and brand reputation.

Organizations are rapidly embracing speech analytics to help improve customer experience. The benefits are clear: with a speech analytics solution, a business can understand what customers are saying at every touchpoint throughout their buying journey. And it is not only during the acquisition of a new customer that marketing or sales leaders must be in tune with what customers are saying. Even post-acquisition, leaders in customer retention or support must be able to respond to or correct a situation based on the conversations taking place in the contact center or through a chatbot.

But what if you are a global company and you serve customers across all continents? Can you say that you are listening to all your customers if your analytics solution can only provide you with accurate speech analysis in English?

Without a speech analytics solution that is accurately multilingual (and the key word is "accurately," not merely 70% or 80% accuracy), you are capturing and understanding only a fraction of your Voice of the Customer.

The importance of a multilingual speech analytics solution

This is where a multilingual speech analytics solution can help. It gives you a view into what customers think and say about your brand, your products or services, and the overall experience you provide across all regions of the world, not just in English-speaking nations. And when your CEO, CMO, or VP of Sales has access to this vital information, they can take proactive measures to make changes and improvements to the products and services you provide.

Speech analytics has become an essential tool for providing insights and advantages across a company. It is no longer just the contact center that deploys this sophisticated AI-driven technology; it is useful for all departments, especially those engaged in constant conversational customer interactions.

Think of an Accident & Disability (A&D) insurance organization, for example. You would start with the claims adjudication department, where you may want to analyze claim trends from all the unstructured conversational information recorded between the person who submitted the claim and the claims adjudicator who processes it. Next, the conversations of an inside sales representative selling A&D policies can be analyzed for accuracy and selling best practices. Are the sales agents selling the right coverage? Are they compliant? If they aren't meeting sales targets, are they failing to sell on value, or are they too pushy? A great multilingual analytics engine needs to mine not just for sentiment, but also for emotions, intent, and compliance.
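As a rough illustration of what "mining for sentiment and compliance" means, the sketch below scans a call transcript for required compliance phrases and computes a crude keyword-based sentiment score. This is a toy example with invented phrase lists, not OpenText's implementation; production engines use trained multilingual models rather than keyword matching.

```python
# Illustrative sketch only: a naive transcript miner that checks for
# required compliance phrases and tags simple keyword-based sentiment.

REQUIRED_PHRASES = [  # hypothetical compliance checklist
    "this call may be recorded",
    "terms and conditions",
]

NEGATIVE_WORDS = {"frustrated", "cancel", "unacceptable", "complaint"}
POSITIVE_WORDS = {"thanks", "great", "helpful", "resolved"}

def mine_transcript(turns):
    """Return missing compliance phrases and a crude sentiment score."""
    text = " ".join(t.lower() for t in turns)
    missing = [p for p in REQUIRED_PHRASES if p not in text]
    words = set(text.replace(".", " ").replace(",", " ").split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return {"missing_compliance": missing, "sentiment_score": score}

calls = [
    "Hi, this call may be recorded for quality purposes.",
    "I'm frustrated, I want to cancel my policy.",
    "Thanks for explaining, that was helpful.",
]
result = mine_transcript(calls)
```

A real multilingual engine would maintain per-language models for each of these signals, which is why consistent accuracy across languages matters.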

As another example, if your company sells financial services products in North America, there is a strong likelihood that agents will engage with customers predominantly in three languages: English, Spanish, and French. So the speech analytics evaluations must be consistently accurate across all three languages to get a true pulse on what customers and employees are saying.

How speech analytics works

So, how does it work? Speech analytics takes the unstructured data trapped in recorded calls, emails, chat transcripts and other customer interactions. The audio undergoes a speech recognition process that turns the sounds into text. In addition, acoustic signals such as distress in the voice, loudness, rhythm, and silence are extracted. Next, all data, including the transcription, the acoustic properties and any metadata, are normalized into a consistent format across channels.
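The normalization step described above can be sketched in code. The example below is a simplified model with invented field names (not any vendor's actual schema): voice interactions carry both a transcript and extracted acoustic properties, while text channels carry only the message, and both are normalized into one consistent record type.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """Normalized record for one customer interaction, any channel."""
    channel: str                                   # "voice", "chat", "email", ...
    text: str                                      # transcript or message body
    acoustics: dict = field(default_factory=dict)  # empty for text channels
    metadata: dict = field(default_factory=dict)

def normalize_voice(transcript, loudness_db, silence_ratio, meta):
    # Voice: speech recognition yields the transcript; acoustic
    # properties (loudness, silence, etc.) are extracted from the audio.
    return Interaction(
        channel="voice",
        text=transcript,
        acoustics={"loudness_db": loudness_db, "silence_ratio": silence_ratio},
        metadata=meta,
    )

def normalize_chat(message, meta):
    # Chat: text only; there is no acoustic signal to extract.
    return Interaction(channel="chat", text=message, metadata=meta)

call = normalize_voice("I need to file a claim", -12.5, 0.18, {"lang": "es"})
chat = normalize_chat("¿Cómo presento una reclamación?", {"lang": "es"})
```

Once every channel lands in the same shape, downstream analytics (sentiment, intent, compliance) can run uniformly across voice, chat, and email.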

If you would like to learn more about how OpenText™ can enable your organization with speech analytics solutions, please visit our website.


Alex Martinez

Alex is a Senior Product Marketing Manager at OpenText with over 20 years of experience working with customers and partners across multiple verticals, with a strong focus on the Healthcare and Financial Services markets. He is keen on guiding customers through their digital transformation journey, taking a solution-oriented approach to solving their day-to-day problems.
