Imagine the following scenario. You are in a foreign country where you don’t know the language. You might be fluent in English, but that doesn’t count for much because most people in the country don’t speak it. You are lost in the underground, trying to find your way back to your hotel. You struggle to find a map with English on it, and struggle to find someone who can understand you well enough to help. For many people this is quite a stressful situation. Personally, I find it a bit amusing; it’s one of those “remember the time when we…” moments that you tell your friends about and laugh your heart out over. However, if you were not on vacation but on a business trip, with a really important meeting to catch, I could see myself getting stressed too.
You finally find your way around. Reflecting on such an occasion, though, you can see some resemblance to how mute people might feel. You are talking to other people, but they don’t understand you. You use whatever communication channels you have apart from your own voice and speech. The people you are talking to, on the other hand, are like deaf people: they can hear you but not understand you, and they try to make sense of your body language and expressions in order to communicate. So, my question is: could language be considered a kind of disability? And if so, could solutions to that problem help people with disabilities too?
Some time ago, I came across an article with a demonstration video from Microsoft showing their new speech recognition system, which, they say, improves accuracy to about 7 out of 8 words, whereas today’s systems average about 3 out of 4. Great news, I thought at first. The demonstration, though, went even further. They also built a system on top of the Bing translation engine that can translate English into almost any language in real time. You can see the live captioning towards the end of the speech being automatically translated into Mandarin. The most fun part, though, was that they also developed a text-to-speech system that can be trained on your voice and then use it to read text aloud. Tying that to the speech recognition and translation engines, they built a system that translates what you say live and speaks it out in your own voice. When this becomes available at scale on mobile phones, it could be like the universal translator you see in Star Trek. At the end of the show, Rick Rashid, the head of Microsoft Research who gave the presentation, said they expect language barriers to be broken within a few years.
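To make that pipeline concrete, here is a minimal sketch in Python of the three stages chained together: speech recognition, translation, and text-to-speech in your own voice. Every function name here is a hypothetical placeholder of my own (Microsoft hasn’t published an API for the demo), with canned return values standing in for the real engines:

```python
# A minimal sketch of the speech-to-speech translation pipeline from the demo:
# speech recognition -> translation -> text-to-speech in the speaker's voice.
# All functions are hypothetical stubs; a real system would plug in actual
# ASR, MT, and TTS engines where the canned values are returned.

def recognize_speech(audio: bytes) -> str:
    """Step 1 (hypothetical ASR): turn spoken audio into English text."""
    return "how do I get back to my hotel?"  # canned result for illustration

def translate_text(text: str, target_lang: str) -> str:
    """Step 2 (hypothetical MT): translate the text, e.g. into Mandarin."""
    return f"[{target_lang}] {text}"  # a real engine would return a translation

def synthesize_speech(text: str, voice_profile: str) -> bytes:
    """Step 3 (hypothetical TTS): speak the text in a pre-trained voice."""
    return f"<audio of '{text}' in {voice_profile}'s voice>".encode()

def live_translate(audio: bytes, target_lang: str, voice_profile: str) -> bytes:
    """Chain the three stages, as in the demo."""
    english_text = recognize_speech(audio)  # this step alone gives live captions
    translated = translate_text(english_text, target_lang)
    return synthesize_speech(translated, voice_profile)

print(live_translate(b"...", target_lang="zh", voice_profile="me"))
```

Note that the intermediate text after step 1 is exactly the live captioning shown in the video, which is what makes the same pipeline interesting for deaf users, as discussed next.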
So, if language barriers are about to break in a few years, we could also break the communication barriers for deaf people, at least in part. I mean, if we can have live captioning software on our smartphones, deaf people could read on the screen what their friends are saying to them.
So, getting back to my initial question: is language a disability? It is certainly a barrier in our communication. Could solutions for breaking language barriers be used in other ways? Definitely. That’s another example of assistive technologies becoming mainstream… (or is it the other way around?). This mainstreaming widens the target group for these technologies, and therefore lowers costs and yields more effective solutions for people with disabilities.