Signematic

Making content and media more accessible by providing real-time sign language translation.

  • AI / Robotics

About This Project

Signematic aims to provide live sign language transcription for videos and movies using advanced machine-learning algorithms and gesture models. Our solution ensures that the deaf and hard-of-hearing community can enjoy a seamless viewing experience with accurate and real-time sign language interpretation.

How We Built It

Signematic was developed using a combination of cutting-edge technologies:

  • Web Scraping: Utilized Beautiful Soup to extract relevant sign language videos from the web.
  • Three.js: Implemented for creating dynamic and realistic hand skeleton animations.
  • Node.js: Used for backend development and managing server-side operations.
  • Speech Recognition: Integrated to convert spoken words in videos into text (a minimal sketch follows this list).
  • YouTube Search Algorithms: Employed to find and retrieve videos matching the speech-to-text output.
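As a rough illustration of the speech recognition step, here is a minimal sketch using the open-source SpeechRecognition package with Google's free web recognizer; the audio file name and engine choice are placeholders rather than our exact setup.

```python
# Minimal sketch of the speech-to-text step, using the open-source
# SpeechRecognition package (pip install SpeechRecognition).
# The file name and recognizer choice are illustrative placeholders.
import speech_recognition as sr

def transcribe(audio_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)  # read the entire audio file
    # Google's free web recognizer; raises sr.UnknownValueError when
    # speech is unintelligible (e.g. low-volume audio or heavy accents).
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    print(transcribe("clip.wav"))  # placeholder file name
```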

Adobe Express Add-On API and Google Chrome Extension

When a user enables the Signematic Chrome extension, it converts the speech in the video to text using a robust speech recognition engine. This text is then processed by a web scraper that uses ASL grammar rules to search for videos depicting the corresponding signs. These videos are stitched together into a cohesive sign language interpretation overlay, providing a synchronized viewing experience. Additionally, Signematic is available as an Adobe add-on, enabling content creators to auto-generate sign language subtitles for their videos.
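To make that flow concrete, here is a heavily simplified sketch of the pipeline: transcript text is reduced to an ASL-style gloss, each gloss token is matched to a sign clip, and the clips are stitched together (here with moviepy). The stop-word "grammar rule" and the `SIGN_CLIPS` lookup table are hypothetical stand-ins for our actual ASL grammar handling and web scraper.

```python
# Simplified sketch of the Signematic pipeline: transcript text ->
# ASL-style gloss -> one sign clip per gloss -> stitched overlay video.
# SIGN_CLIPS and the stop-word "grammar rule" are toy placeholders for
# our real scraper and ASL grammar rules.
from moviepy.editor import VideoFileClip, concatenate_videoclips

STOP_WORDS = {"a", "an", "the", "is", "are", "to"}  # toy grammar rule
SIGN_CLIPS = {                                      # hypothetical scraped clips
    "movie": "signs/movie.mp4",
    "start": "signs/start.mp4",
}

def text_to_gloss(transcript: str) -> list[str]:
    """Very rough ASL gloss: uppercase content words, drop function words."""
    return [w.upper() for w in transcript.lower().split() if w not in STOP_WORDS]

def build_overlay(transcript: str, out_path: str = "overlay.mp4") -> None:
    """Stitch the matching sign clips into one synchronized overlay video."""
    clips = [VideoFileClip(SIGN_CLIPS[g.lower()])
             for g in text_to_gloss(transcript)
             if g.lower() in SIGN_CLIPS]
    if clips:
        concatenate_videoclips(clips).write_videofile(out_path)

build_overlay("the movie is start")  # -> clips for MOVIE, START
```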

Challenges Faced

  • Cross-Platform Integration: Ensuring seamless communication between Adobe applications and our Python code proved to be a significant challenge.
  • Speech Recognition Accuracy: Dealing with diverse accents and low-volume audio often resulted in missed or incorrect words, impacting the overall transcription quality.

What We Learned

  • Three.js: Gained proficiency in using Three.js to create efficient and accurate hand skeleton animations, which are critical for sign language depiction.
  • Adobe Express: Explored and integrated our solution with Adobe Express. We familiarized ourselves with the user experience of this relatively new software to effectively implement our extension and enhance content creation inclusivity.
  • Google Chrome Extensions: Leveraged our prior experience in developing Chrome extensions to create a sophisticated solution that overlays animations and videos in the user's browser. To further enhance the viewing experience, we also launched the project on a VR headset, making video watching fully immersive.

Next Steps

  • Social Media Integration: Adobe Express is not the only content creation app, so in the future we would like to bring Signematic directly to popular social media platforms like Instagram, so that creators wouldn't have to worry about adding this feature at all.
  • Gesture Animation Enhancement: We would like the gesture data to appear smoother. This can be done by slowing down the training videos and by applying a smoothing effect to the rate at which the points and lines move (see the sketch after this list).
  • Language Expansion: There are many different sign languages. We used only ASL for this project, and we hope to expand to more of them (like BSL) in the future.
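As a sketch of that smoothing idea, an exponential moving average over the tracked hand keypoints damps frame-to-frame jitter before the skeleton is animated. The smoothing factor and the 21-landmark hand shape below are illustrative assumptions, not values from our current pipeline.

```python
# Sketch of keypoint smoothing via an exponential moving average (EMA).
# `alpha` and the 21-landmark hand shape are illustrative assumptions.
import numpy as np

def smooth_keypoints(frames: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """frames: (num_frames, 21, 3) array of hand keypoints per frame.

    Blends each frame toward the previous smoothed frame, so sudden
    jumps in the tracked points are damped into smoother motion.
    """
    smoothed = frames.astype(float).copy()
    for t in range(1, len(frames)):
        smoothed[t] = alpha * frames[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```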