AudioTelligence, a tech start-up based in Cambridge, UK, has raised $8.5m for 'autofocus for sound', a new technology for voice assistants.
The company will be working with its investors: Cambridge Enterprise, CEDAR Audio, Octopus Ventures and Cambridge Innovation Capital.
AudioTelligence has achieved something remarkable: it has designed a background noise filtering technology that makes conversations with voice assistants more intelligible.
This so-called 'blind audio signal separation' technology separates the individual sound sources in a recording, allowing background noise to be removed.
Both humans and machines will therefore be able to enjoy much clearer and easier-to-interpret conversations.
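AudioTelligence has not published its algorithm, so the sketch below is purely an illustration of the general idea behind blind source separation, not the company's method. All signals, names and parameters here are invented: two independent sources (a sine 'voice' and heavy-tailed 'noise') are mixed into two 'microphone' channels, the mixtures are whitened, and a grid search finds the rotation that maximises non-Gaussianity (kurtosis), a classic ICA-style contrast.

```python
import math
import random

def centre(xs):
    m = sum(xs) / len(xs)
    return [v - m for v in xs]

def cov(a, b):
    return sum(x * y for x, y in zip(a, b)) / len(a)

def corr(a, b):
    a, b = centre(a), centre(b)
    return cov(a, b) / math.sqrt(cov(a, a) * cov(b, b))

def whiten(x1, x2):
    # Analytic 2x2 PCA: rotate onto the covariance eigenvectors,
    # then scale each axis to unit variance.
    c11, c22, c12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    t = 0.5 * math.atan2(2 * c12, c11 - c22)
    ct, st = math.cos(t), math.sin(t)
    l1 = c11 * ct * ct + 2 * c12 * ct * st + c22 * st * st
    l2 = c11 * st * st - 2 * c12 * ct * st + c22 * ct * ct
    z1 = [(ct * a + st * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
    z2 = [(-st * a + ct * b) / math.sqrt(l2) for a, b in zip(x1, x2)]
    return z1, z2

def kurtosis(y):
    # Excess kurtosis; y is assumed zero-mean with unit variance.
    return sum(v ** 4 for v in y) / len(y) - 3.0

def separate(x1, x2, steps=360):
    # After whitening, the sources differ from the data only by a
    # rotation; grid-search the angle maximising non-Gaussianity.
    z1, z2 = whiten(centre(x1), centre(x2))
    best, out = -1.0, None
    for k in range(steps):
        a = math.pi / 2 * k / steps
        ca, sa = math.cos(a), math.sin(a)
        y1 = [ca * u + sa * v for u, v in zip(z1, z2)]
        y2 = [ca * v - sa * u for u, v in zip(z1, z2)]
        score = abs(kurtosis(y1)) + abs(kurtosis(y2))
        if score > best:
            best, out = score, (y1, y2)
    return out

# Two independent sources: a sine 'voice' and heavy-tailed 'noise'.
random.seed(0)
n = 4000
s1 = [math.sin(0.05 * t) for t in range(n)]
s2 = [random.choice([-1, 1]) * random.expovariate(1.0) for _ in range(n)]

# Two 'microphones' record unknown mixtures of the sources.
x1 = [0.6 * a + 0.4 * b for a, b in zip(s1, s2)]
x2 = [0.3 * a - 0.7 * b for a, b in zip(s1, s2)]

y1, y2 = separate(x1, x2)
# Each recovered channel should match one source (up to sign/order).
match1 = max(abs(corr(s1, y1)), abs(corr(s1, y2)))
match2 = max(abs(corr(s2, y1)), abs(corr(s2, y2)))
```

A production system would work on real multi-microphone audio with frequency-domain methods and far more robust statistics; this toy example only shows why two independent sources can be pulled apart from their mixtures without knowing the mixing in advance.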
The new technology has several commercial applications, including smart speakers, smart TVs and voice assistants.
AudioTelligence CEO Ken Roberts claims that it will also provide a solution to what he calls the "cocktail party problem" – the struggle of hearing in noisy crowds.
According to Roberts, the company will be pursuing a licensing strategy as opposed to building hardware of its own.
He claims that in tests on an undisclosed home assistant platform, sentence recognition in noisy environments jumped from 22% to 94%.
In terms of speech recognition, this is a significant leap forward.
Roberts also said that the technology does not rely on matched microphones, which sets it apart from existing solutions.
In comparison, it is simpler to implement, more cost-effective, and does not require any sort of prior algorithm training.
As such, it is capable of recognising new voices as they appear and adjusting its focus as needed.
The company claims that the technology offers high performance and low latency, which allows accurate lip sync, a crucial requirement in hearing-assist applications.
Roberts says that only a software upgrade is needed to implement it in existing devices – no calibration is required.