SignON is a three-year project led by the ADAPT Centre at Dublin City University (DCU), on a mission to bridge the communication gap between users of spoken languages and deaf sign language users. In a recent development, the SignON project, co-coordinated by the Tilburg School of Humanities and Digital Sciences, has received Horizon 2020 funding of €5.6M from the European Union (EU).
Use of the funds
With the funds, Prof. Andy Way, Professor of Computing at Dublin City University, Ireland (coordinator), and Dr Dimitar Shterionov, Assistant Professor in Cognitive Science and Artificial Intelligence at Tilburg University, The Netherlands (scientific lead), will conduct research and develop a mobile solution for automatic translation between sign and oral (written and spoken) languages.
ADAPT is a leading SFI Research Centre for digital media technology that focuses on developing next-generation digital technologies around how people communicate by helping to analyse, personalise and deliver digital data more effectively for businesses and individuals.
ADAPT researchers are based across seven leading Irish higher education institutions: Trinity College Dublin, Dublin City University, University College Dublin, Technological University Dublin, Maynooth University, Cork Institute of Technology, and Athlone Institute of Technology.
About the SignON Project
SignON is a user-centric and community-driven project that aims to facilitate the exchange of information among deaf, hard of hearing, and hearing individuals across Europe. The project focuses primarily on the Irish, British, Dutch, Flemish, and Spanish sign languages, and on the English, Irish, Dutch, and Spanish oral languages.
According to the project, there are around 5,000 deaf Irish Sign Language (ISL) signers in Ireland; in the UK, around 87,000 deaf signers use British Sign Language (BSL); in Flanders, Belgium, some 5,000 deaf people use Flemish Sign Language (VGT); approximately 13,000 signers use Sign Language of the Netherlands (NGT); and an estimated 100,000-plus signers use Spanish Sign Language (LSE).
How will the collaboration with European deaf and hard of hearing communities help?
Through collaboration with these European deaf and hard of hearing communities, researchers will define use-cases, co-design, and co-develop the SignON service and application. The objective of the research project is the fair, unbiased, and inclusive spread of information and digital content in European society.
In addition, SignON will incorporate machine learning capabilities that will allow it to learn new sign and oral languages; adapt to style, domain, and user; and automatically correct errors based on user feedback.
To the user, SignON will deliver signed conversations via a life-like avatar built with the latest graphics technologies.
“To ensure wide uptake, improved sign language detection and synthesis, as well as multilingual speech processing for everyone, the project will deploy the SignON service as a smartphone application running on standard modern devices. While the application is designed as a light-weight interface, the SignON framework will be distributed on the cloud where the computationally intensive tasks will be executed,” says a statement released by the project.
When will it be released?
The SignON project will commence on 1 January 2021. The consortium is currently recruiting a wide range of experts in the fields of Natural Language Processing (NLP), Machine Learning (ML), Deep Learning (DL), Machine Translation (MT), Linguistics, Deaf Studies, education, 3D graphics, and others to join the SignON team.
Speaking about the project, Prof Andy Way says, “When I first worked on sign language MT fifteen years ago, the field was very small. In 2022, we will see a special issue of the Machine Translation journal appearing dedicated to this topic. Now that ISL is a fully-fledged official language in Ireland, it is great to see this work continuing to thrive.”
He further adds, “I am pleased to coordinate the SignON project, which will develop a free, open application and framework for conversion between video (capturing and understanding sign language), audio, and text, and translation between sign and spoken languages.”