Dr. Hari Palani
Founder & CEO

Our mission is to transform education and empower independence through multisensory information access.
What started as an academic project focused on graphical elements evolved into a mission of making information accessible to people with visual impairments. The story begins in 2010, when Saranya (UNAR CEO Hari Palani’s wife) decided to pursue her Master’s degree at UMaine under Nick Giudice’s mentorship. After communicating with Nick for several months, she was taken by surprise when he arrived at their first in-person meeting with his guide dog, Uro. She hadn’t realized Nick was visually impaired.
Hari was working as a software engineer at the time and was curious which tools had helped Nick overcome his blindness throughout his career. “Your life activities primarily rely on vision, while mine primarily rely on touch and audio. Beyond that, there is not much difference, as we both are accessing, inferring, and using the information around us to experience the world and do what needs getting done,” Nick explained. This response profoundly impacted Hari, so he quit his job in 2011 and joined the world of assistive technologies as a graduate student with Nick at UMaine.
That same year, touchscreen devices were becoming ubiquitous. One of Nick’s graduate students, Monoj Raja, was studying the idea of using device vibration to convey indoor maps to blind users. With this initial work as inspiration, Nick and Hari began exploring how the touch and audio capabilities of commercial smartphones and tablets could be leveraged to provide non-visual access to all visual graphical information. We envisioned a graphic screen reader for blind and visually impaired (BVI) people and developed a prototype for usability testing. Enthusiasm from BVI testers indicated our approach was viable and could solve a significant information access problem faced by millions.
From 2012 to 2018, our team invented and refined ways of making map information accessible to BVI users. We researched and identified foundational parameters for haptic information extraction, multisensory graphic rendering, interface design, technology limitations, and usability guidelines.
Based on these findings, we developed the first version of Midlina, a framework that dynamically converts graphical media on touchscreen devices into accessible multisensory renderings. After developing a beta app to demonstrate Midlina’s usability, we discovered that the core problem facing the BVI community extends well beyond graphics.
As part of the NSF I-Corps program, our team traveled across the country conducting user interviews to better understand the problems and needs of end users. After more than 200 interviews from 2019 to 2020, we had a far clearer picture of the ecosystem, the pros and cons of existing solutions, and the gaps in the field.
Despite technological advances, graphical accessibility problems persisted due to the high cost of existing single-purpose solutions, limited portability, incomplete/incorrect descriptions, and lack of real-time results. Most solutions required human effort for authoring, converting, and/or producing graphics in accessible formats. As a result, BVI students were missing out on key content that their sighted peers were getting in real time.
In 2021, we concluded that an end-to-end solution was necessary to fill the gap in math and graphics accessibility. Empowered by advances in computer vision and deep learning, we began prototyping CeCe, an end-to-end accessibility system for making SAT papers accessible. We fully automated the process of identifying, extracting, processing, and converting different content types (e.g., graphics, math, and text) and used Midlina to deliver the SAT paper through a mobile app.
Then came ChatGPT. It disrupted the way we interact with computer systems and showed us the unprecedented power of Large Language Models (LLMs) in understanding context, revolutionizing information extraction from documents. Since 2022, we have experimented with large vision-language models (LVLMs) and significantly improved our math and graphics conversion engines, paving the way for Morf, our new AI-enabled end-to-end accessibility solution for the K-12 curriculum.
Morf is the digital bridge that finally closes the information access gap between sighted and BVI students in the classroom. This web-based app takes digital media documents as input, performs visual-to-multisensory conversion, and delivers an accessible multisensory equivalent optimized for instant use on touchscreen devices, tactile graphic embossers, and tactile graphic renderers (e.g., Canute and Graphiti). After years of dedicated work, we’re proud that Morf is delivering on our mission to transform education and empower independence through multisensory information access.
We are a group of people dedicated to using our talents to help expand information access for all.
Founder & CEO
Founder & CRO
CTO
AI & Accessibility Engineer
AI Application Engineer
Full-Stack Engineer
AI Research Intern
AI Research Intern
Want to be part of our mission of expanding information access? Join us. See our openings.