AI in the service of assistive technologies: the LIS2Speech project for sign language translation

Artificial Intelligence-enhanced assistive technologies are becoming essential tools for meeting accessibility needs, breaking down communication barriers in key sectors such as public administration, tourism, healthcare and retail.

Among the most promising applications are those that interpret the Italian Sign Language (LIS), a visual-gestural language used by people who are deaf or hard of hearing to communicate. These technologies make it possible to translate LIS into accessible messages in real time, facilitating interaction between hearing and deaf people and bridging the gap that the complexity and lack of widespread use of LIS can create.

LIS2Speech: a project to translate sign language in real time

Orbyta Tech, a company specializing in the development of digital solutions with a focus on inclusion and accessibility, launched the LIS2Speech project, born out of a desire to contribute its expertise in Artificial Intelligence and cross-platform application development to technologies that promote accessibility.

The goal is to develop an open platform that combines neural networks, deep learning and computer vision to build apps able to recognize and translate sign language in real time, and to make the results available to public-private initiatives promoting accessibility for people with hearing impairments.

The development of an application for camera-equipped devices, capable of capturing the movements of a sign language (LIS) speaker and translating them into speech or text in real time, is a promising way to break down communication barriers between deaf and hearing people, positioning itself as an assistive technology that opens up new opportunities for social inclusion.

Recognizing signs with artificial intelligence

The process of recognizing Italian Sign Language (LIS) through Artificial Intelligence unfolds in three stages:

The Sign Language Recognition phase, which is divided into three steps:

    • Creation of a specific dataset of LIS glosses, representing the basic linguistic units.
    • Training an artificial neural network using the dataset to enable understanding of individual glosses.
    • Development of algorithms based on skeletal data and spatial coordinates to accurately recognize signs within continuous movement (see the keypoint-extraction sketch after this list).
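
A minimal sketch of the skeletal-data idea follows. The article does not name the libraries LIS2Speech relies on; MediaPipe Holistic and OpenCV are assumed here purely as a common way to obtain per-frame hand and body landmarks, and every name in the snippet is illustrative.

```python
import cv2
import mediapipe as mp
import numpy as np

def keypoints(results):
    """Flatten pose + hand landmarks into one 225-dim vector per frame."""
    def coords(lms, n):
        if lms is None:                      # joint set not detected this frame
            return np.zeros(n * 3)
        return np.array([[p.x, p.y, p.z] for p in lms.landmark]).ravel()
    return np.concatenate([
        coords(results.pose_landmarks, 33),        # 33 body joints
        coords(results.left_hand_landmarks, 21),   # 21 joints per hand
        coords(results.right_hand_landmarks, 21),
    ])

cap = cv2.VideoCapture(0)
sequence = []
with mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                    min_tracking_confidence=0.5) as holistic:
    for _ in range(90):                      # roughly 3 s of video at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(keypoints(results))
cap.release()
# `sequence` is now a (frames x 225) time series ready for a sign classifier.
```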

The Sign Language Translation phase, in which machine-translation algorithms translate the recognized glosses directly into written Italian, following a sign-to-text approach.
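
This translation step can be pictured as a sequence-to-sequence problem. The sketch below shows a tiny gloss-to-text transformer in PyTorch; the vocabulary sizes, dimensions and random tensors are placeholders for illustration, not details taken from the project.

```python
import torch
import torch.nn as nn

class Gloss2Text(nn.Module):
    """Toy encoder-decoder that maps gloss IDs to Italian word IDs."""
    def __init__(self, gloss_vocab=500, text_vocab=8000, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(gloss_vocab, d_model)
        self.tgt_emb = nn.Embedding(text_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, text_vocab)

    def forward(self, gloss_ids, text_ids):
        # Causal mask so the decoder cannot peek at future target tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(text_ids.size(1))
        h = self.transformer(
            self.src_emb(gloss_ids), self.tgt_emb(text_ids), tgt_mask=mask
        )
        return self.out(h)  # (batch, seq_len, text_vocab) logits

model = Gloss2Text()
glosses = torch.randint(0, 500, (2, 10))   # e.g. [CASA, ANDARE, IO, ...]
text = torch.randint(0, 8000, (2, 12))     # target sentence (illustrative)
logits = model(glosses, text)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), text.reshape(-1))
```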

The text-to-speech phase, in which the written translation is converted into speech by generative AI algorithms, making the message available in spoken form.
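
For this final step, any text-to-speech engine can voice the translated sentence. The snippet below uses the off-the-shelf pyttsx3 library for brevity; the generative TTS stack actually used by LIS2Speech is not described in the article.

```python
import pyttsx3

engine = pyttsx3.init()
# Prefer an Italian voice if one is installed (voice names vary by platform).
for voice in engine.getProperty("voices"):
    if "italian" in voice.name.lower():
        engine.setProperty("voice", voice.id)
        break
engine.setProperty("rate", 160)             # moderate speaking speed
engine.say("Buongiorno, come posso aiutarti?")
engine.runAndWait()                         # blocks until playback finishes
```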

LIS-to-speech for accessibility and inclusion

    • Development of a cross-platform, real-time LIS-to-speech translation app.
    • Translation accuracy achieved through sign-recognition and machine-translation algorithms based on neural networks and deep learning.
    • Improved accessibility and social inclusion.