Getting an AI to understand speech is already a tough nut to crack. A group of Australian researchers wants to take on something much harder: teaching once-deaf babies to talk.
Why so tough?
Think about what happens when you talk to Siri or Cortana or Google on a phone: the speech recognition system has to distinguish your “OK Google” (for example) from background noise; it has to react to “OK Google” rather than “OK something else”; and it has to parse your speech to act on the command. And you already know how to talk.
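Those three stages can be sketched in miniature. This is an illustrative toy only: the function names, thresholds, and the assumption of a pre-transcribed utterance are all invented for the example; real assistants use trained acoustic and language models, not string matching.

```python
# Toy sketch of the three stages described above: noise gating,
# wake-word matching, and command parsing. All names and thresholds
# are illustrative assumptions, not any vendor's actual pipeline.

def is_speech(frame_energy, noise_floor=0.02):
    """Stage 1: separate speech from background noise (naive energy gate)."""
    return frame_energy > noise_floor

def matches_wake_word(transcript, wake_word="ok google"):
    """Stage 2: react to 'OK Google' rather than 'OK something else'."""
    return transcript.strip().lower().startswith(wake_word)

def parse_command(transcript, wake_word="ok google"):
    """Stage 3: strip the wake word and pass the rest on as the command."""
    return transcript.strip().lower().removeprefix(wake_word).strip()

# A (pre-transcribed) utterance flowing through the stages:
utterance = "OK Google set a timer for five minutes"
if is_speech(frame_energy=0.3) and matches_wake_word(utterance):
    command = parse_command(utterance)  # "set a timer for five minutes"
```

The point of the staging is the one the article makes next: every stage assumes the speaker already produces recognisable speech.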
The Swinburne University team working on an app called GetTalking can't make even that single assumption, because they're trying to solve a different problem. When a baby receives a cochlear implant to take over the work of a malfunctioning inner ear, they need to learn something brand new: how to associate the sounds they can now hear with the sounds their own mouth makes.
Getting those kids started in the world of conversation is a matter of “habilitation” – no “rehabilitation” here, because there isn't a capability to recover. GetTalking is the brainchild of Swinburne senior lecturer Belinda Barnet, and the genesis of the idea was her own experience as mother to a child with a cochlear implant.
“With my own daughter – she had an implant at 11 months old – I could afford to take a year off to teach her to talk. This involves lots of repetitive exercises.” That time and attention, she explained, is the big predictor of success.
In the roughly 10 years since it became standard practice to provide implants to babies at or before 12 months of age (fully funded by Australia's national health insurance scheme Medicare since 2011), 80 per cent of recipients achieve speech within the normal range. What defines the 20 per cent that don't get to that point? Inability, either because of family income or distance from the city, to “spend a year sitting on the carpet with flash-cards”.
That puts rural, regional and low-income mothers at a disadvantage, Barnet said. The idea for which Barnet and associate professor Rachael McDonald sought funding looks simple: an app, running on something like an iPad, that gives the baby a bright visual reward for speaking.
However, it does test the boundaries of AI and speech recognition, because of a very difficult starting point: how can an app respond to speech when the baby has never learned to speak?
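One way to see why the starting point differs from ordinary speech recognition: an app rewarding a pre-verbal baby cannot match words at all, and might begin from something as crude as detecting sustained vocalisation. The sketch below is a hypothetical energy-based detector under that assumption; it is not the GetTalking team's actual method, and every name and threshold here is invented for illustration.

```python
# Hypothetical "reward on any sustained sound" detector: trigger the
# visual reward when enough consecutive audio frames exceed an energy
# threshold, with no attempt at word recognition. Thresholds and frame
# sizes are illustrative assumptions only.

def detect_vocalisation(samples, threshold=0.05, min_frames=5, frame_size=160):
    """Return True if min_frames consecutive frames exceed the energy threshold."""
    consecutive = 0
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energy = sum(s * s for s in frame) / frame_size  # mean-square energy
        if energy > threshold:
            consecutive += 1
            if consecutive >= min_frames:
                return True  # sustained sound: show the bright visual reward
        else:
            consecutive = 0
    return False
```

Even this naive version hints at the real difficulty: the app must eventually distinguish the baby's own attempts at speech from crying, babble, and household noise, which is where the hard AI problem begins.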