Our Mission: Enable Information Access to Everyone, Irrespective of Their Sensory Abilities

Our mission is to transform education and empower independence through multisensory information access.

In the United States alone, there are ~23.7 million people with vision impairments. Of this population:

- 70% are unemployed or under-employed
- 11% have college degrees, vs. 33% for the general population
- 30% do not travel independently

Meet the team

We are a group of people dedicated to using our talents to help expand information access for all.

Dr. Hari Palani

Founder & CEO

Dr. Nick Giudice

Founder & CRO

Dr. Joyeeta Mitra Mukherjee

CTO

Owen Thompson

AI & Accessibility Engineer

Kesahv Bharadwaj

AI Application Engineer

Tharun Raman

Full-Stack Engineer

Ajay Karthick

AI Research Intern

Vanshika Kumar

AI Research Intern

Our Story

Founders of UNAR Labs, Nick Giudice (wearing a cap and blue checkered shirt) and Hari Palani (wearing a black t-shirt), smiling together in a close-up portrait.
Nick Giudice and Hari Palani, the founders of UNAR Labs.

What started as an academic project focused on graphical elements evolved into a mission of making information accessible to people with visual impairments. It all began in 2010, when Saranya (the wife of UNAR CEO Hari Palani) decided to pursue her Master’s degree at UMaine under Nick Giudice’s mentorship. After corresponding with him for several months, she was taken by surprise when Nick arrived at their first meeting with his guide dog, Uro. She hadn’t realized Nick was visually impaired.

Hari was working as a software engineer at the time and was curious which tools had helped Nick navigate his blindness throughout his career. “Your life activities primarily rely on vision, while mine primarily rely on touch and audio. Beyond that, there is not much difference, as we both are accessing, inferring, and using the information around us to experience the world and do what needs getting done,” Nick explained. This response profoundly impacted Hari: he quit his job in 2011 and joined the world of assistive technologies as a graduate student under Nick at UMaine.

Inspired by Touchscreen Devices

That same year, touchscreen devices were proliferating. One of Nick’s graduate students, Monoj Raja, was studying the idea of using their vibration capabilities to convey indoor maps to blind users. With this initial work as inspiration, Nick and Hari began exploring how the touch and audio capabilities of commercial smartphones and tablets could be leveraged to provide non-visual access to all visual graphical information. We envisioned a graphic screen reader for blind and visually impaired (BVI) users and developed a prototype for usability testing. Enthusiasm from BVI testers indicated our approach was viable and could solve a significant information access problem faced by millions.

The Journey of Midlina

From 2012 to 2018, our team invented and refined ways of making map information accessible to BVI users. We researched and identified foundational parameters for haptic information extraction, multisensory graphic rendering, interface design, technology limitations, and usability guidelines.

Based on these findings, we developed the first version of Midlina, a framework that dynamically converts graphical media on touchscreen devices into accessible multisensory renderings. After developing a beta app to demonstrate Midlina’s usability, we discovered that the core problem facing the BVI demographic extends well beyond graphics.

The Journey of Morf

In 2021, we concluded that an end-to-end accessibility solution was necessary to fill the gap in math and graphics accessibility. Empowered by advances in computer vision and deep learning, we began prototyping CeCe, an end-to-end accessibility solution for making SAT papers accessible. We fully automated the process of identifying, extracting, processing, and converting different content types (e.g., graphics, math, and text) and used Midlina to deliver the SAT paper through a mobile app.

Then came ChatGPT. It disrupted the way we interact with computer systems and showed us the unprecedented power of Large Language Models (LLMs) in understanding context, revolutionizing information extraction from documents. Since 2022, we have experimented with Large Vision-Language Models (LVLMs) and significantly improved our math and graphics conversion engines, paving the way for Morf – our new AI-enabled end-to-end accessibility solution for K-12 curricula.

Morf is the digital bridge that finally closes the information access gap between sighted and BVI students in the classroom. This web-based app takes digital media documents as input, performs visual-to-multisensory conversion, and delivers an accessible multisensory equivalent optimized for instant use on touchscreen devices, tactile graphic embossers, and tactile graphic renderers (e.g., Canute and Graphiti). After years of dedication to our work, we're proud that Morf is delivering on our mission to transform education and empower independence through multisensory information access.