Friday, 31 May 2019

Turning on the Captions.


Elsewhere, signers are demanding more 'turning on of BSL' in the media.  The issue with sign is that you DON'T get an option to turn it on or off; if it is there, you are stuck with it.  As opposed to captions, which you can choose whether or not to use, far too many BSL campaigns ignore choice.  TV and film are a visual medium, and nobody has solved (or wants to solve) the conflict of imaging on programs; there doesn't seem to be an accepted format that is the norm either.

It is far from clear which image the deaf signer is actually following: the film/item, or the signed translation?  Most at grassroots level would say the signer is the central person/image they follow.  And why are some dedicated signing programs, with BSL already included, subject to an on-screen interpreter as well?  If a digital option for sign translation choices became available, perhaps including a 'whole screen' option for the signer would be more effective; however, that means missing the action otherwise.  In some areas, e.g. sport, neither captions nor BSL seemed entirely necessary, while in some weather reporting, captions obliterated coverage of certain areas entirely.

Most issues appear to have been overcome via captioning, which tends to become invisible to most deaf viewers and does not interfere with what is being transmitted.  One suggestion would be to split the TV picture into two parts, with a captioning section below the actual program?  Then nothing is on-screen to detract.

Currently, there is no digital modus for an '889' sign option.  Hence BSL was moved to the media graveyard shift, with even the BBC handing the Deaf two dedicated BSL programs (despite others with hearing loss complaining they were frozen out of inclusion).  Why aren't there visual options in media?  Surely the technology already exists?  And if it does, would lip-speakers come into it?


Portuguese sign language app gets Google funds.

Earlier this month, the Google AI Impact Challenge awarded HandTalk a grant worth US$750,000. The app and website plugin offers real-time translation from Portuguese to Brazilian Sign Language for deaf users. HandTalk is one of 20 winners of the Google AI Impact Challenge, all of whom shared US$25 million in grants. Organizers surveyed over 2,600 applicants using AI to address social and environmental challenges. In the end, the corporation invested in 20 international non-profits, social enterprises, and research institutions.

Besides this pool of money, participants received consulting from Google Cloud and personalized coaching from Google-affiliated AI experts. Additionally, the chosen groups were invited to join Google’s six-month Launchpad Accelerator program.  During this time, participants have the option to develop OKRs (Objectives and Key Results) as well as establish timelines for projects. Every partner will also work with a Google expert for regular coaching sessions and mentorship.

With its portion of the money, HandTalk will improve its platform that automatically translates Portuguese into Libras, also known as Brazilian Sign Language. The company strives to break communication barriers by making its product more accessible to members of the deaf community. While improving the quality of translations is paramount, HandTalk is also planning to expand its service to include American Sign Language.  Today, there are two main products under HandTalk's model. The first is the Website Translator, which partners incorporate into their domains through a plugin; Hugo, an animated sign language interpreter, then translates on-page text into Libras.

Not only does this ensure better communication, social responsibility, and innovation, but also compliance with the Brazilian Accessibility Act. Under this ordinance, public and private sector entities must be inclusive to everyone, including people with disabilities. The second product is the app itself, which essentially functions as a pocket translator. It translates Portuguese text or dictation into Libras, and also offers a dictionary for language learners. Since its founding in 2012, HandTalk has seen over two million app downloads.

According to the World Health Organization, 80 per cent of the world's 360 million deaf people can't understand the spoken or written version of their native language. In Brazil alone, 70 per cent of deaf people can't read or write in Portuguese.

Google expects AI to remedy this predicament, hence its support for HandTalk.

Automating Lip-reading.



It's taken them a long time to understand what lip-readers (and sign users) already know: lip-readers use the entire visual image to follow, the same as sign users do.  We don't just look at the face; we try to take in the whole picture.

If you observe a deaf signer, concentration is not on the hands most of the time, as opposed to lip-readers, for whom the face is everything.  The issue with lip-reading is that it is assumed to be hard to follow unless total concentration is on the face, and there are fewer visual cues to add to it.  The issue we have is that people cannot orate properly, the ideal conditions for effectiveness don't exist, and classes don't approach tuition so as to accommodate that.  Fewer than 5% of deaf or hard-of-hearing people attend a lip-reading class.

Among deaf pre-signers, even fewer attend classes.  The signer doesn't use many assistive aids to follow, as the lip-reader tends to do, but often those aids add to lesser understanding rather than improve it, because we don't really know what we can hear, so guesswork gets involved, some of it educated, some totally 'Half past two, how are you..'.  There are also issues with body language as regards different cultures and people, as well as their etiquettes.

The study investigates a model that can use hybrid visual features to optimize lip-reading. Lip-reading, also known as speech-reading, is a technique for understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available.

Experiments over many years have revealed that speech intelligibility increases when visual facial information becomes available. The research was carried out by Fatemeh Vakhshiteh under the supervision of professors Farshad Almasgan and Ahmad Nickabadi. In an interview with ISNA, Vakhshiteh said using a variety of sources for extracting information substantially helps the lip-reading process. According to Vakhshiteh, the model was inspired by the function of the brain, because the human brain also processes several sources of information in the production and reception of speech.

In this model, deep neural networks are used to make lip-reading as well as phone recognition easier, she said. “The neural networks were specially used for situations where audio and visual features must be processed simultaneously.” “This is especially helpful in noisy environments, where the audio data produced by speakers might become less clear or incomprehensible.” “This would also help people with speech difficulty because they can use their visual data to compensate for the interruption in the speech signal they receive,” she added. The research results demonstrated that the proposed method outperforms the conventional Hidden Markov Model (HMM) and competes well with state-of-the-art visual speech recognition work.
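The report does not give implementation details, but the basic idea of feeding audio and visual features into one network can be sketched in a few lines. The snippet below is a minimal illustrative example in PyTorch, not the authors' actual model; all layer sizes, class counts and names are invented placeholders, and a real lip-reading system would work on whole sequences (e.g. with recurrent or temporal layers) rather than single frames.

```python
import torch
import torch.nn as nn

class AudioVisualFusion(nn.Module):
    """Toy audio-visual fusion network: encode audio and lip-region
    features separately, concatenate them, and classify the fused
    vector into phone classes. All dimensions are placeholders."""
    def __init__(self, audio_dim=40, visual_dim=64, hidden=128, n_phones=40):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phones),
        )

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (batch, audio_dim), e.g. spectral features per frame
        # visual_feats: (batch, visual_dim), e.g. a lip-region embedding
        a = self.audio_enc(audio_feats)
        v = self.visual_enc(visual_feats)
        fused = torch.cat([a, v], dim=-1)   # simple feature-level fusion
        return self.classifier(fused)       # per-frame phone logits

if __name__ == "__main__":
    model = AudioVisualFusion()
    audio = torch.randn(8, 40)     # dummy batch of audio features
    visual = torch.randn(8, 64)    # dummy batch of visual features
    logits = model(audio, visual)
    print(logits.shape)            # torch.Size([8, 40])
```

The point of fusing the two streams is exactly what the quotes above describe: when the audio branch is degraded by noise, the visual branch can still carry enough information for the classifier to recover the phones.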