JPM, Vol. 13, Pages 874: Using EfficientNet-B7 (CNN), Variational Auto Encoder (VAE) and Siamese Twins’ Networks to Evaluate Human Exercises as Super Objects in A TSSCI Images

Journal of Personalized Medicine doi: 10.3390/jpm13050874

Authors:
Yoram Segal
Ofer Hadar
Lenka Lhotska

In this article, we introduce a new approach to representing human movement by defining a movement as a static super object captured in a single two-dimensional image. The described method is applicable to remote healthcare applications, such as physiotherapeutic exercises. It allows researchers to label and describe an entire exercise as a standalone object, isolated from the reference video. This makes it possible to perform a variety of tasks, including detecting similar movements in a video, measuring and comparing movements, generating new similar movements, and defining choreography by controlling specific parameters of the human body skeleton. As a result of the presented approach, we can eliminate the need to label images manually, sidestep the problem of finding the start and end of an exercise, overcome synchronization issues between movements, and apply any deep learning operation that processes super objects in images in general. We demonstrate two application use cases: the first verifies and scores a fitness exercise, while the second generates similar movements in the human skeleton space, addressing the challenge of supplying sufficient training data for deep learning (DL) applications. To demonstrate the two use cases, we present a variational autoencoder (VAE) simulator and an EfficientNet-B7 classifier embedded within a Siamese twin neural network. These use cases demonstrate the versatility of our concept for measuring, categorizing, and inferring human behavior, and for generating gestures for other researchers.
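The abstract does not give implementation details, but the core idea of treating an exercise as a "super object" can be illustrated with a minimal sketch: map each joint's (x, y, z) coordinates over time to one RGB pixel, so the whole movement becomes a single joints-by-frames image, then compare two such images with a shared (Siamese-style) embedding. Everything below — function names, array shapes, and the toy random-projection stand-in for the EfficientNet-B7 backbone — is an assumption for illustration, not the authors' code.

```python
import numpy as np

def skeleton_to_tssci(frames):
    """Encode a skeleton time series as one RGB image ("super object").

    frames: array of shape (T, J, 3) -- T time steps, J joints,
    (x, y, z) per joint. Each coordinate channel is min-max scaled
    to [0, 255], so each joint/time cell becomes one RGB pixel.
    (Illustrative sketch only, not the authors' implementation.)
    """
    frames = np.asarray(frames, dtype=np.float64)
    lo = frames.min(axis=(0, 1), keepdims=True)
    hi = frames.max(axis=(0, 1), keepdims=True)
    scaled = (frames - lo) / np.where(hi - lo == 0, 1, hi - lo)
    img = (scaled * 255).astype(np.uint8)   # (T, J, 3)
    return img.transpose(1, 0, 2)           # (J, T, 3): joints x time x RGB

def siamese_score(img_a, img_b, embed):
    """Siamese-style comparison: one shared embedding applied to both
    images, scored with cosine similarity in [-1, 1]."""
    ea, eb = embed(img_a), embed(img_b)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# Toy stand-in for the shared EfficientNet-B7 backbone:
# flatten the image and apply a fixed random projection.
rng = np.random.default_rng(0)
J, T = 25, 40                               # assumed joint/frame counts
W = rng.standard_normal((J * T * 3, 64))
embed = lambda img: img.reshape(-1).astype(np.float64) @ W

ref = skeleton_to_tssci(rng.random((T, J, 3)))    # reference exercise
same = siamese_score(ref, ref, embed)             # identical movement -> 1.0
other = skeleton_to_tssci(rng.random((T, J, 3)))  # a different movement
diff = siamese_score(ref, other, embed)           # some value in [-1, 1]
```

Because the whole exercise is one image, the start/end detection and frame-synchronization problems mentioned above disappear: two exercises are compared as whole objects, not frame by frame.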

MDPI Publishing.