Hand gesture system enhances human-computer interaction

Human-computer interaction is familiar from devices such as touch screens, computer mice, keyboards, and remote controls. Researchers have long worked to make this interaction feel closer to interpersonal communication. Voice-assisted technology is one approach, and now a team in China is exploring physical movement as another.

Researchers from Sun Yat-sen University (Guangzhou, China) have developed a hand gesture recognition algorithm that could be integrated into consumer devices to provide a “more natural and intuitive” way of interacting with computers. The team notes that this work addresses issues of complexity, accuracy, and applicability, all of which have been stubborn limitations of existing interaction methods.

“Traditional simple algorithms tend to suffer from low recognition rates because they cannot cope with different types of hands,” says lead researcher Zhiyi Yu, associate professor at Sun Yat-sen. “By first classifying the input gesture by hand type and then using sample libraries corresponding to that type, we can improve the overall recognition rate with almost negligible resource consumption.”

According to the study, published in the Journal of Electronic Imaging, “hand gestures are an important part of human language, and therefore, the development of hand gesture recognition affects the nature and flexibility of human-computer interaction.”

The new hand-adaptive algorithm, which is “trained” using self-collected data, can adapt to different types of hands, unlike existing attempts that can only identify a small number of recognizable gestures. The algorithm works by classifying the user’s hand type (“normal,” “thin,” or “wide”) based on the length and width of the palm and fingers. It does not recognize input gesture images directly, the study notes, but first categorizes them by hand type “and then uses different sample libraries for recognition based on different hand types.” This improves the overall recognition rate with almost negligible resource consumption.
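The dispatch described above, classifying the hand first and then matching against a per-type sample library, can be sketched as follows. The thresholds, measurements, and library names here are hypothetical illustrations; the article does not give the paper's actual values.

```python
# Sketch of hand-type dispatch: classify the hand, then pick the
# matching sample library. Thresholds and names are illustrative only.

def classify_hand_type(palm_width: float, palm_length: float) -> str:
    """Label a hand as 'thin', 'normal', or 'wide' from a simple
    width-to-length ratio (hypothetical cutoffs)."""
    aspect = palm_width / palm_length
    if aspect < 0.85:
        return "thin"
    if aspect > 1.05:
        return "wide"
    return "normal"

# One sample library (template set) per hand type, as the study describes.
SAMPLE_LIBRARIES = {
    "thin": "templates_thin",
    "normal": "templates_normal",
    "wide": "templates_wide",
}

def select_library(palm_width: float, palm_length: float) -> str:
    """Route an input hand to the sample library for its type."""
    return SAMPLE_LIBRARIES[classify_hand_type(palm_width, palm_length)]
```

The point of the routing step is that each library only has to cover one hand shape, so matching within it stays cheap and accurate.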

According to the researchers, this method also allows for a pre-recognition (shortcut) step that calculates a ratio of the hand’s surface area to select the three most probable gestures out of the nine possible ones (see figure).
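A minimal sketch of that shortcut: rank the nine candidate gestures by how close their expected area ratio is to the measured one, and keep the three closest. The gesture names and ratio values below are invented for illustration; only the "pick 3 of 9 by an area ratio" structure comes from the article.

```python
# Pre-recognition shortcut: narrow nine candidate gestures down to the
# three whose expected hand-area ratio best matches the measurement.
# All ratio values here are made up for illustration.

EXPECTED_RATIOS = {  # gesture -> typical hand area / bounding-box area
    "fist": 0.85, "palm": 0.55, "one": 0.40, "two": 0.45, "three": 0.50,
    "four": 0.52, "five": 0.58, "ok": 0.65, "thumb": 0.70,
}

def prerecognize(measured_ratio: float, k: int = 3) -> list:
    """Return the k gestures whose expected ratio is closest to the
    measured ratio; only these proceed to full feature extraction."""
    ranked = sorted(EXPECTED_RATIOS,
                    key=lambda g: abs(EXPECTED_RATIOS[g] - measured_ratio))
    return ranked[:k]
```

Because the full feature-extraction step then runs on only three candidates instead of nine, the overall computation drops without the cheap ratio test having to be very precise.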

“The gesture pre-recognition step not only reduces the number of computations and hardware resources required, but also improves recognition speed without compromising accuracy,” Yu says. After this shortcut step, the algorithm makes its final determination of the gesture “using a much more complex and high-precision feature extraction based on Hu invariant moments.” Hu moments are a set of seven numbers, computed from a shape’s central moments, that remain invariant under image transformations such as translation, scaling, and rotation.
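To make the invariance concrete, here is a small sketch, using NumPy only, that computes the first two of Hu's seven invariants from normalized central moments of a binary gesture mask. This is a textbook construction, not the paper's implementation; the function names are my own.

```python
import numpy as np

def central_moment(mask, p, q):
    """Central moment mu_pq of a binary mask, plus the pixel count m00."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size
    xbar, ybar = xs.mean(), ys.mean()
    return ((xs - xbar) ** p * (ys - ybar) ** q).sum(), m00

def eta(mask, p, q):
    """Normalized central moment: dividing by m00^(1+(p+q)/2)
    makes the value insensitive to uniform scaling."""
    mu, m00 = central_moment(mask, p, q)
    return mu / m00 ** (1 + (p + q) / 2)

def hu_first_two(mask):
    """First two Hu invariants, built from normalized central moments."""
    phi1 = eta(mask, 2, 0) + eta(mask, 0, 2)
    phi2 = (eta(mask, 2, 0) - eta(mask, 0, 2)) ** 2 + 4 * eta(mask, 1, 1) ** 2
    return phi1, phi2
```

Because the moments are taken about the shape's centroid, translating the mask leaves the values exactly unchanged, which is why a Hu-moment feature vector can match a gesture regardless of where the hand sits in the frame.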

The researchers note in their study that “the results of [the] experiments demonstrate that the proposed algorithm could accurately recognize real-time gestures and has good adaptability to different types of hands.” The algorithm boasts a recognition rate of over 94%; this rate remains above 93% “when images of hand gestures are rotated, translated, or scaled.”

The researchers’ next steps will be to “improve the performance of the algorithm in poor lighting conditions and increase the number of possible gestures”.