In schools all over the world, children sit in classrooms and learn a new language, most often a spoken one such as Spanish, German, or French. One language that is routinely left in the dark is sign language.
Approximately 10 per cent of the population is either deaf or experiences hearing loss, and roughly 4,000 people are diagnosed with sudden deafness every year. Sign language is difficult to learn and even harder to teach because it communicates through a combination of mouthing, facial expressions, body posture, and hand gestures.
But all of that may change with the introduction of artificial intelligence (AI) into the mix. Stephanie Stoll, a PhD candidate in computer vision at the University of Surrey, is collaborating with colleagues on software that will allow people to teach themselves sign languages in an “automated and intuitive way.” In its current version, the software can analyze how students sign in Swiss-German Sign Language and then deliver feedback on handshape, location, timing, and motion.
Stoll believes the team is the first to attempt any kind of automated assessment of signing. Beyond that, they want to build a system that can evaluate users’ progress and correct their mistakes. Learning a sign language is especially difficult because it has no widely used written form, so it cannot simply be read or written down and studied from text. To work around this, the team at the University of Surrey developed a computer game that shows a video of a sign and then records the user’s performance of that sign for evaluation.
AI technology is used at every stage of the performance assessment. A convolutional neural network (CNN) first extracts the user’s upper-body pose from the video. That pose is passed to a hand shape analyzer, where a second CNN examines the video and identifies hand shape information at each frame. The resulting sequence of hand shapes is then fed to a hand motion analyzer built on a Hidden Markov Model (HMM). The HMM compares the user’s attempt against a reference model of the sign and produces a score for how well the gestures match.
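The article does not publish the team’s actual models, but the final scoring step it describes can be illustrated in miniature. The sketch below, with entirely made-up toy probabilities, uses the standard forward algorithm for a discrete HMM: the hidden states stand in for phases of a sign, the observations stand in for per-frame hand-shape labels produced by a CNN, and an attempt is scored by its likelihood under the reference model. All names and numbers here are hypothetical illustrations, not the Surrey system.

```python
def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Likelihood of an observation sequence under a discrete HMM,
    computed with the forward algorithm.

    obs      -- list of observation symbols (ints), e.g. quantized
                hand-shape labels output by a classifier, one per frame
    start_p  -- start_p[s]      = P(first state is s)
    trans_p  -- trans_p[p][s]   = P(next state is s | current state p)
    emit_p   -- emit_p[s][o]    = P(observing symbol o | state s)
    """
    n_states = len(start_p)
    # Initialize with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Recurse over the remaining frames.
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans_p[p][s] for p in range(n_states))
            * emit_p[s][o]
            for s in range(n_states)
        ]
    return sum(alpha)

# Toy 2-state reference model of a sign:
# state 0 = "setup" phase, state 1 = "stroke" phase.
start = [0.9, 0.1]
trans = [[0.6, 0.4],   # setup tends to move on to stroke
         [0.1, 0.9]]   # stroke tends to persist
emit = [[0.8, 0.2],    # setup mostly shows hand shape 0
        [0.2, 0.8]]    # stroke mostly shows hand shape 1

good_attempt = [0, 0, 1, 1, 1]  # shapes in the expected order
poor_attempt = [1, 1, 0, 0, 0]  # shapes in the wrong order

good = forward_likelihood(good_attempt, start, trans, emit)
poor = forward_likelihood(poor_attempt, start, trans, emit)
print(good > poor)  # the well-ordered attempt scores higher: True
```

In a real system the reference model would be trained on recordings of fluent signers, and the raw likelihood would typically be normalized (e.g. against the model’s best possible path) before being turned into user-facing feedback.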
All of this AI runs behind the scenes, so the only thing the user has to focus on is learning. For now, the software works only with Swiss-German Sign Language.