That client who looks like he’s from Paris, Texas? Try Paris, France – TechCrunch

Our daily lives are more connected to a globalized network than ever before. Products are sourced and shipped from afar; traveling to a place 3,000 miles away can be easier than crossing a big city in traffic; and information spreads to anyone and everyone with one click.

A startup called Sanas has developed AI voice technology that aims to make a critical component of that network run more smoothly: helping people who speak the same language, but with different accents, understand each other better, by filtering out accented sounds and “translating” them into other accents in real time. Today the startup is announcing $32 million in funding on the back of strong momentum for its tools as it comes out of stealth and scales up.

The investment is being led by Insight Partners, with participation from new backers GV (formerly Google Ventures), strategic backer Assurant Ventures and angel investor Gokul Rajaram. Previous backers from its initial round, including Human Capital, General Catalyst, Quiet Capital and DN Capital, are also participating in this Series A. Alongside the investment, Sanas is announcing a strategic partnership with Alorica, one of the world’s largest BPOs, which will roll out the technology to 100,000 employees and 250 corporate clients globally.

The company is not disclosing its valuation, but we understand it is $150 million post-money. This is one of the biggest Series A rounds in voice AI, and from what we understand, it comes after Sanas rejected a takeover offer from a big tech company. (If you can’t buy ’em, invest in ’em!)

As you would expect from the list of investors, Sanas’ technology is already being deployed in call centers. Specifically, it has found a lot of traction with outsourced customer service providers, where agents who speak the same language as the customer, but with a different accent, have become a lightning rod for abuse.

In addition to insurance giant Assurant and BPO leviathan Alorica, other clients include the large collections agency ERC and the travel-industry BPO IGT. In a dispiriting comment on the state of our world, Sanas CEO and co-founder Maxim Serebryakov said the results of using the technology in these settings have been dramatic in terms of reducing the abuse agents face from customers.

Sanas’ plan is to use the funding to continue expanding its business in this sector, but also to begin developing other use cases in the enterprise: for example, as a plug-in for video calls, or for voice-based interactive services to help machines (and ML-based systems) understand a wider range of dialects.

Serebryakov originally co-founded the company with Shawn Zhang and Andrés Pérez Soderi, two fellow students at Stanford’s Artificial Intelligence Laboratory, after a fourth friend of theirs had to drop out of school and return to his home country, Nicaragua, to take a job and help with a family emergency.

The friend got a job at a call center back home serving clients in the United States, and although he was quite fluent—and a student taking a break from Stanford no less—he faced endless abuse over the phone from people who didn’t like his accent.

The other three could relate all too well to being judged and abused for this, being first-generation immigrants themselves (and I will add that I know this well myself, both in my life today and growing up as a first-generation immigrant in the United States). And so they decided to put what they had learned about AI to the test, to see if they could fix it. (Earlier this year, Sanas also brought on a fourth co-founder, now COO Sharath Keshava, who left another company he had co-founded after learning about Sanas and wanting to be involved in building it.)

There are plenty of tools out there today to “auto-tune” a person’s voice and adjust it in real time or after the fact, and they’re as popular as photo filters at this point. But as Serebryakov notes, it is especially difficult to keep a speaker’s natural, actual voice while changing only the way they pronounce what they are saying.

Interestingly, the approach Sanas took to the problem is abstract enough that it generalizes: the company trained its system on thousands of hours of speech in different dialects and taught it to map those sounds to others, patenting the full combination of technology and method in the process. The result is that Sanas’ “translation” engine can be used with absolutely any language, not just English as you might have assumed. (Serebryakov tells me it’s already being used to “soften” dialects across Japan, China and South Korea, for example.)

“Such technology is universally applicable, from one dialect to another,” he said. “It will take time, but our goal is to allow people to communicate in any dialect whatsoever.”

There is something uneasy about the concept of what Sanas is building here. It raises a lot of questions about potential abuse, and beyond that, some might find it distasteful that a technology was developed specifically to hide part of a person’s true identity: shouldn’t the people who judge others by their accents be the ones who learn to be more open and accepting, instead of people forever accommodating those prejudices by hiding whatever marks them as foreign or different?

However, there are counterarguments as well. Sanas is deliberately not building any consumer apps or making its technology accessible to consumers at this time, precisely because of how it could be abused. Even its customers don’t use a cloud-based version of the technology: to keep things extra secure, deployments are on-premises, so customers control their own data that passes through and is generated by Sanas.

As for obscuring true identity, prejudice is clearly a bigger problem that we all need to tackle every day. In the meantime, this gives those on the sharper end of those punches a way to cope better, and in some very practical ways it makes it easier for people (even those with good intentions) to simply understand each other without accents getting in the way.

I received a demo of the service during my interview, in which Sanas called one of its clients’ agents in India, had him speak to me first in his own accent, and then switched on his neutralized Midwestern tone. It was a little uncanny knowing what was going on in the background, but on the surface I was pretty amazed at how normal everything sounded; well, normal enough, at least. His voice was clear, but perhaps a little too clear: almost mechanical and devoid of emotion.

Apparently, this is also somewhat intentional at the moment, and may evolve if that’s what customers and other users want.

“The reason we focus on call centers is because they are low-hanging fruit,” said Serebryakov, noting that building such pioneering technology was challenging enough without also tackling open-ended use cases. “For us, when building this, it was important to go down the path of least resistance. No singing, no laughter, no overly emotional speech. What we are addressing is how these users interact at work.” There’s no crying in baseball, and there are no fun and games in call centers either.

“Insight Partners is delighted to deepen its relationship with Sanas and such cutting-edge technology,” Ganesh Bell, managing director at Insight, said in a statement. “As the company emerges from stealth, I look forward to working with this highly talented and passionate team to build a product that, among many things, will help eliminate the unfortunate biases and discrimination experienced by those who speak English as a second language, which includes many of Sanas’ own employees.”