Development of Gesture-Controlled Robotic Assistant for the Disabled Using Deep Learning
Keywords:
Assistive technology, CNN, Hand gesture recognition, Real-time detection, SSD, Stroke patient, YOLO

Abstract
The rising rate of motor disabilities, particularly among stroke patients, has underscored the need for innovative assistive technologies to enhance quality of life. This paper presents the development of a gesture-controlled robotic assistant using deep learning techniques, aimed at individuals with partial hand mobility. The primary objective was to design a system that recognizes predefined hand gestures in real time to control household appliances and notify caregivers of patient needs. The research leverages Convolutional Neural Networks (CNNs), You Only Look Once (YOLO), and the Single Shot MultiBox Detector (SSD) for gesture recognition. The system integrates Internet of Things (IoT) devices, enabling real-time feedback and automation through platforms such as Telegram. A dataset consisting of five specific hand gestures was used to train the models, with augmented data to improve robustness across varying environmental conditions. The system achieved 97% real-time gesture recognition accuracy in controlled settings, demonstrating its reliability in improving stroke patients' interaction with their environment. This research highlights the potential of combining deep learning with IoT for the development of accessible, cost-effective assistive technologies.
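The pipeline summarized above (recognize a gesture, then either command an appliance or alert a caregiver) can be sketched as a simple dispatch step. This is a minimal illustration only: the paper does not enumerate the five gestures or their assigned actions, so the gesture names, targets, and the confidence threshold below are all assumptions, and the Telegram call is replaced by a placeholder string.

```python
# Illustrative sketch of gesture-to-action dispatch for the described system.
# Gesture names, targets, and the 0.90 threshold are assumptions; the paper
# does not specify them. The Telegram notification is stubbed as a string.

GESTURE_ACTIONS = {
    "open_palm": ("light", "toggle"),
    "fist": ("fan", "toggle"),
    "thumbs_up": ("tv", "toggle"),
    "point": ("door", "toggle"),
    "peace": ("caregiver", "notify"),
}

CONFIDENCE_THRESHOLD = 0.90  # only act on high-confidence detections


def dispatch(gesture: str, confidence: float):
    """Map a recognized gesture to an appliance command or caregiver alert.

    Returns a command string, or None when the detection is below the
    confidence threshold or the gesture is not one of the five trained ones.
    """
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return None
    target, verb = action
    if verb == "notify":
        # In the described system this step would send a Telegram message.
        return "ALERT caregiver via Telegram: patient needs attention"
    return f"{verb.upper()} {target}"
```

In a deployed version, the `gesture` and `confidence` inputs would come from the YOLO/SSD detector's per-frame output, and thresholding low-confidence detections helps avoid spurious appliance toggles.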
Copyright (c) 2025 International Journal of Autonomous Robotics and Intelligent Systems (IJARIS)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

