Moving towards intelligent telemedicine: Computer vision measurement of human movement
Background: Telemedicine video consultations are rapidly increasing globally, accelerated by the COVID-19 pandemic. This presents opportunities to use computer vision technologies to augment clinician visual judgement, because video cameras are ubiquitous in personal devices and new techniques, such as DeepLabCut (DLC), can precisely measure human movement from smartphone videos. However, the accuracy of DLC in tracking human movement in videos recorded by laptop cameras, which have a much lower frame rate, has never been investigated; this is a critical gap because patients use laptops for most telemedicine consultations.
Objectives: To determine the validity and reliability of DLC applied to laptop videos to measure finger tapping, a validated test of human movement.
Method: Sixteen adults completed finger-tapping tests at 0.5 Hz, 1 Hz, 2 Hz, 3 Hz and at maximal speed. Hand movements were recorded simultaneously by a laptop camera at 30 frames per second (FPS) and by Optotrak, a 3D motion analysis system, at 250 FPS. Eight DLC neural network architectures (ResNet50, ResNet101, ResNet152, MobileNetV1, MobileNetV2, EfficientNetB0, EfficientNetB3, EfficientNetB6) were applied to the laptop videos, and the extracted movement features were compared with the ground-truth Optotrak motion tracking.
Results: Over 96% (529/552) of DLC measures were within 0.5 Hz of the Optotrak measures. At tapping frequencies ≥4 Hz, there was a progressive decline in accuracy, attributed to motion blur associated with the laptop camera's low frame rate.
Conclusions: Computer vision methods hold potential for moving us towards intelligent telemedicine by providing human movement analysis during consultations. However, further developments are required to accurately measure the fastest movements.
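To illustrate the kind of comparison described above, the sketch below estimates the dominant tapping frequency from a keypoint trace, as might be extracted by DLC from a 30 FPS laptop video, using a simple FFT peak. This is a hypothetical minimal example, not the authors' analysis pipeline; the synthetic 2 Hz trace, noise level, and 0.5 Hz tolerance (mirroring the agreement criterion reported above) are assumptions for illustration.

```python
import numpy as np

def tapping_frequency(y, fps):
    """Estimate the dominant tapping frequency (Hz) from a 1-D keypoint trace.

    y: fingertip vertical positions over time (e.g. one DLC keypoint column).
    fps: camera frame rate in frames per second.
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                            # remove DC offset so it does not dominate the spectrum
    spectrum = np.abs(np.fft.rfft(y))           # magnitude spectrum of the trace
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the 0 Hz bin, return the peak frequency

# Hypothetical example: a noisy 2 Hz tapping motion recorded at 30 FPS for 10 s
fps, duration, true_hz = 30, 10.0, 2.0
t = np.arange(0, duration, 1.0 / fps)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * true_hz * t) + 0.05 * rng.standard_normal(t.size)
est = tapping_frequency(trace, fps)
print(round(est, 2))  # estimated tapping frequency in Hz
```

With 300 samples at 30 FPS the frequency resolution is 0.1 Hz, comfortably finer than the 0.5 Hz agreement threshold used in the results; at near-maximal tapping speeds, motion blur would corrupt the keypoint trace itself, which this spectral step cannot repair.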
Publication title: Computers in Biology and Medicine
Department/School: School of Information and Communication Technology
Place of publication: United Kingdom
Rights statement: Copyright 2022 Elsevier Ltd