Recognition of Radar-Based Deaf Sign Language Using a Convolutional Neural Network
Keywords: Radar, deep learning, Short-Time Fourier Transform (STFT), gestures, classification

Abstract
The difficulties that deaf people face when communicating with hearing people through sign language can be reduced by applying deep learning to gesture-signal recognition. The use of a Convolutional Neural Network (CNN) to distinguish radar-based gesture signals of deaf sign language has not previously been investigated. This paper describes the recognition of deaf sign language gestures using radar and a CNN. Six deaf sign language gestures were acquired from hearing subjects using a radar system and processed. The Short-Time Fourier Transform (STFT) was applied to extract the gesture features, and classification was performed with a CNN. The performance of the CNN was examined using two types of input: segmented and non-segmented spectrograms. The gesture recognition accuracy was higher with the non-segmented spectrograms (92.31%) than with the segmented spectrograms. Radar-based deaf sign language can therefore be recognised accurately using a CNN without segmentation.
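
As a rough illustration of the pipeline the abstract describes, the sketch below converts a radar return into a log-magnitude STFT spectrogram and feeds it to a small CNN classifier. The abstract does not specify the radar system, the sampling rate, the STFT window, or the network architecture, so the sampling rate FS, the window parameters, and the layer layout below are assumed placeholders, not the authors' implementation.

    # Minimal sketch of the STFT + CNN pipeline, assuming a single-channel
    # radar return; all parameter values are hypothetical placeholders.
    import numpy as np
    from scipy.signal import stft
    from tensorflow import keras

    FS = 2000          # assumed radar sampling rate (Hz); not given in the abstract
    N_CLASSES = 6      # six deaf sign language gestures (from the abstract)

    def gesture_spectrogram(signal: np.ndarray) -> np.ndarray:
        """Short-Time Fourier Transform -> log-magnitude spectrogram."""
        _, _, Zxx = stft(signal, fs=FS, nperseg=128, noverlap=96)
        spec = 20 * np.log10(np.abs(Zxx) + 1e-10)   # dB scale
        return spec[..., np.newaxis]                # add channel axis for the CNN

    def build_cnn(input_shape) -> keras.Model:
        """A generic small CNN classifier; the paper's layer layout is unknown."""
        return keras.Sequential([
            keras.layers.Input(shape=input_shape),
            keras.layers.Conv2D(16, 3, activation="relu"),
            keras.layers.MaxPooling2D(),
            keras.layers.Conv2D(32, 3, activation="relu"),
            keras.layers.MaxPooling2D(),
            keras.layers.Flatten(),
            keras.layers.Dense(64, activation="relu"),
            keras.layers.Dense(N_CLASSES, activation="softmax"),
        ])

    # Usage example on a stand-in 2-second recording (untrained weights, so the
    # output probabilities are illustrative only).
    raw = np.random.randn(2 * FS)
    spec = gesture_spectrogram(raw)
    model = build_cnn(spec.shape)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    probs = model.predict(spec[np.newaxis])         # shape: (1, N_CLASSES)

The non-segmented input in this sketch corresponds to passing the whole spectrogram of a recording to the CNN, which the abstract reports as the more accurate option; the segmented variant would instead split the spectrogram into windows and classify each piece.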
License
Copyright (c) 2023 International Journal of Integrated Engineering

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.