We love hearing what our users are up to on the platform, and every now and then we come across a project that we are particularly impressed by. This project by Dr. Faisal Alsrheed is one of them. Read our interview with him below to learn more about how he built a model to translate sign language into text in emergency situations.
User story: AI for sign language in emergency situations
"The model can take video footage and read the 60 most important signs used in emergencies and translate them into text with a 96% accuracy"
Hi Faisal! We’ve been so impressed with what you’ve been building on the platform. Would you tell us a bit more about what it is you’re trying to achieve?
Absolutely! The short version is that I’ve been building a sign-language translator that can interpret what hearing-impaired people are communicating into text in emergency situations, so that medical staff are able to understand it.
Many people have built apps for communication in the other direction -- translating text into sign language -- so that part of the translation is already done quite well. But translating from sign language to text has proved a lot more difficult, so I wanted to see if I could do something about it.
That’s an interesting use case! We’re curious, what made you decide to do it?
In Saudi Arabia, the literacy rate of people with a hearing impairment is very low, which can lead to particularly difficult situations if they end up needing to call for help in an emergency. With AI we have a great opportunity to change this once and for all, which is why I wanted to work on this. Working with tools like Peltarion makes it easy to experiment with things that are otherwise quite difficult to learn, which means that I can get something up and running in my spare time. I think it’s very important that the app I’m building is completely free for the people who need it.
That sounds like a really important issue to tackle. What have been the results so far?
Right now the model can take video footage, read the 60 most important signs used in emergencies, and translate them into text with 96% accuracy, so I’m quite happy with that!
Great! We’re of course very curious about how you found the platform in all of this. What have you found helpful about the platform?
I’m able to experiment a lot more with the Peltarion platform than I otherwise would have. In fact, I would not even have been able to do this in a Jupyter notebook. I was able to do much more advanced things, like having two inputs and using the connectors to make that happen. I also liked that I was able to run multiple experiments at once. And the deployment, of course. This is something I normally find very tedious, and I couldn’t believe how easy it was in the platform. On the platform, it’s not really about how technically difficult a problem is; it’s just about how you tune the hyperparameters, which I really like.
That’s like music to our ears! And what features would you like to see added?
I’ve been wondering whether you plan to add standard RNN and LSTM models at some point, not because I’ve wanted to use them for this problem myself, but because people sometimes want to see the results of these models first, as a baseline for evaluating the quality of what I’ve built.
Thanks, Faisal! That’s great feedback. We’ll pass that on to the product team. And thank you so much for telling us about your amazing project! We’re here to help if you want any support from us along the way.
If you want to explore Faisal’s project further, you can find a video demo of what Faisal has built here.
And if you’ve been building a project that you think we should feature on our blog, please get in touch with me at firstname.lastname@example.org. We’d love to hear from you!