A team of professors and students at Quinnipiac University has developed groundbreaking software designed to enhance communication for individuals with mobility impairments. The software, named AccessiMove, allows users to control devices entirely through facial gestures, providing a new level of independence for those who have difficulty using traditional input methods.
The initiative was inspired by an encounter that Chetan Jaiswal, an associate professor of computer science, had with a young man in a wheelchair at an occupational therapy conference in 2022. Witnessing the young man’s struggle to communicate with his parents sparked a determination in Jaiswal to harness technology to better serve individuals with disabilities. “We are computer scientists,” he remarked. “We can do better. Technology should help people who actually need it.”
Collaborating with colleagues Karen Majeski, an associate professor of occupational therapy, and Brian O’Neill, another associate professor of computer science, Jaiswal worked alongside students Michael Ruocco and Jack Duggan to create the university’s first patented hands-free input system. The software utilizes a standard webcam to detect head tilts, winks, and other facial gestures, enabling users to control a computer cursor and perform tasks like opening applications or restarting the system.
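The article does not describe AccessiMove’s internals, but gesture detection of this kind can be sketched with off-the-shelf tools. The snippet below is a minimal, hypothetical wink detector built on Google’s MediaPipe Face Mesh: it compares each eye’s lid gap to its width and flags a wink when one eye closes while the other stays open. The landmark indices and the 0.2 threshold are illustrative assumptions, not the team’s actual code.

```python
# Hypothetical wink detector; not AccessiMove's implementation.
import cv2
import mediapipe as mp

# Lid and corner landmark indices in MediaPipe's 468-point face mesh.
RIGHT_EYE = (159, 145, 33, 133)  # top lid, bottom lid, outer corner, inner corner
LEFT_EYE = (386, 374, 362, 263)

def openness(lm, top, bottom, outer, inner):
    """Lid gap divided by eye width; small values mean a closed eye."""
    return abs(lm[top].y - lm[bottom].y) / abs(lm[outer].x - lm[inner].x)

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        right, left = openness(lm, *RIGHT_EYE), openness(lm, *LEFT_EYE)
        # One eye shut while the other stays open reads as a deliberate wink.
        if right < 0.2 <= left:
            print("right wink detected -> could trigger a click")
        elif left < 0.2 <= right:
            print("left wink detected -> could trigger a secondary action")
cap.release()
```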
Jaiswal emphasized the broader impact of AccessiMove, stating, “This benefits a lot of people, especially those with disabilities and motor impairments. They can actually use their face to interact with a computer and do a number of things that we take for granted.”
Partnerships for Development and Expansion
The team is actively seeking partnerships, collaborators, and investors to expand the software’s application, particularly within the healthcare sector. “We have many healthcare industry partners on the East Coast, especially in Connecticut, whether it is Yale Hospital or Hartford Hospital,” Jaiswal noted. “All of us want what is best for our patients, especially those who can’t help themselves. They need assistance to live a better life.”
Majeski explained the functionality of the software, detailing how an individual’s facial movements can translate into computer commands. O’Neill added that the system focuses on the bridge of the nose, with movements in that area guiding the cursor in various directions. A simple tilt to the left or right can trigger specific actions like opening a web browser.
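A minimal sketch of that nose-bridge idea is below, again assuming MediaPipe’s face mesh (landmark 6 sits roughly on the bridge of the nose) and the pyautogui library for moving the system cursor. The one-shot calibration, dead zone, and gain are illustrative choices, not AccessiMove’s published parameters.

```python
# Hypothetical nose-bridge cursor control; not AccessiMove's implementation.
import cv2
import mediapipe as mp
import pyautogui

NOSE_BRIDGE = 6    # landmark roughly on the bridge of the nose
DEAD_ZONE = 0.02   # ignore jitter this close to the resting pose
GAIN = 1500        # cursor pixels per unit of normalized head displacement

face_mesh = mp.solutions.face_mesh.FaceMesh()
cap = cv2.VideoCapture(0)
center = None      # resting head position, captured from the first frame
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    nose = results.multi_face_landmarks[0].landmark[NOSE_BRIDGE]
    if center is None:
        center = (nose.x, nose.y)  # one-shot calibration to the user's pose
        continue
    dx, dy = nose.x - center[0], nose.y - center[1]
    # Move the cursor only once the head leaves the dead zone.
    if abs(dx) > DEAD_ZONE or abs(dy) > DEAD_ZONE:
        pyautogui.moveRel(dx * GAIN, dy * GAIN)
cap.release()
```

Discrete gestures such as the left or right tilt Majeski and O’Neill describe could then be bound to actions, for example launching a web browser once a tilt is held past a threshold.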
The technology’s versatility extends beyond personal computers. Jaiswal highlighted its potential for use in wheelchairs, allowing users to navigate their environment through facial gestures: looking up could move the wheelchair forward, while looking down could reverse it. This innovation could significantly enhance mobility for seniors in assisted living facilities, enabling them to move independently.
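That wheelchair mapping can be expressed as a small, safety-conscious state machine. The sketch below converts head-pitch estimates from any pose tracker into drive commands, with a neutral band so that ordinary head movement keeps the chair stopped; the thresholds and the send_command() hook are hypothetical, since the article does not describe a specific chair interface.

```python
# Hypothetical pitch-to-drive mapping; not a real wheelchair controller.
from enum import Enum

class Drive(Enum):
    STOP = "stop"
    FORWARD = "forward"
    REVERSE = "reverse"

PITCH_UP = 15.0     # degrees; looking up past this drives forward
PITCH_DOWN = -15.0  # degrees; looking down past this reverses

def drive_from_pitch(pitch_degrees: float) -> Drive:
    """Map an estimated head pitch to a drive command with a neutral band."""
    if pitch_degrees > PITCH_UP:
        return Drive.FORWARD
    if pitch_degrees < PITCH_DOWN:
        return Drive.REVERSE
    return Drive.STOP  # anywhere in between, stay still for safety

def send_command(command: Drive) -> None:
    # Placeholder: a real chair would take this over its own control bus.
    print(f"drive -> {command.value}")

# Example: a stream of pitch estimates from a head-pose tracker.
for pitch in (2.0, 18.5, 3.0, -20.0):
    send_command(drive_from_pitch(pitch))
```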
Real-Time AI Enhancements and Future Aspirations
AccessiMove relies heavily on artificial intelligence to track facial gestures in real time. The software also has applications in the gaming industry, where it could enable accessible controls and new modes of interaction for various game types.
Majeski pointed out the educational benefits for children with mobility issues, suggesting that the software could facilitate interaction with toys and learning tools. “If it’s a toy and they can’t turn the button on, could you help us hack that toy so they could turn that button with a whole hand grasp?” she asked, illustrating the software’s potential for learning and play.
The team conducted trials with a range of users to ensure the software works in different scenarios, including for people who wear glasses or have limited neck movement. The system performed well across these conditions, underscoring its adaptability.
Although funding is necessary to bring AccessiMove to market, Majeski is optimistic about its future. “There is money behind bringing it to market, so even if there is an open-source option for people with disabilities, we still need funding to bring it to a place where individuals can use it on their own,” she stated.
O’Neill emphasized the accessibility of the software, noting that it does not require specialized hardware. “It is using the webcam built into any tablet or any phone,” he explained. That reliance on built-in cameras puts the technology within reach of a much wider audience.
Jaiswal envisions a future where AccessiMove becomes a common tool for those in need, as well as for individuals seeking convenience in their daily lives. “The technology is useful in hospital settings,” he said. “Patients can use facial gestures to communicate, especially those who can’t speak.”
With continued development and the right partnerships, AccessiMove stands to revolutionize communication for many individuals, fostering a more inclusive and connected world.
