(C) PLOS One
This story was originally published by PLOS One and is unaltered.



Clinical gait analysis using video-based pose estimation: Multiple perspectives, clinical populations, and measuring change [1]

Authors: Jan Stenum (Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States of America; Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine), Melody M. Hsu

Date: 2024-04

Gait dysfunction is common in many clinical populations and often has a profound and deleterious impact on independence and quality of life. Gait analysis is a foundational component of rehabilitation because it is critical to identify and understand the specific deficits that should be targeted prior to the initiation of treatment. Unfortunately, current state-of-the-art approaches to gait analysis (e.g., marker-based motion capture systems, instrumented gait mats) are largely inaccessible due to prohibitive costs of time, money, and effort required to perform the assessments. Here, we demonstrate the ability to perform quantitative gait analyses in multiple clinical populations using only simple videos recorded using low-cost devices (tablets). We report four primary advances: 1) a novel, versatile workflow that leverages an open-source human pose estimation algorithm (OpenPose) to perform gait analyses using videos recorded from multiple different perspectives (e.g., frontal, sagittal), 2) validation of this workflow in three different populations of participants (adults without gait impairment, persons post-stroke, and persons with Parkinson’s disease) via comparison to ground-truth three-dimensional motion capture, 3) demonstration of the ability to capture clinically relevant, condition-specific gait parameters, and 4) tracking of within-participant changes in gait, as is required to measure progress in rehabilitation and recovery. Importantly, our workflow has been made freely available and does not require prior gait analysis expertise. The ability to perform quantitative gait analyses in nearly any setting using only low-cost devices and computer vision offers significant potential for dramatic improvement in the accessibility of clinical gait analysis across different patient populations.

People who experience a stroke or are diagnosed with Parkinson’s disease often have mobility impairments such as slow walking speeds, shortened steps, and abnormal movement of the legs during walking. It is a challenge for clinicians to measure and track, in an objective and quantitative manner, the multitude of walking parameters that can indicate recovery or progression of disease. We present a new workflow that allows a user to analyze the gait pattern of a walking person from only a single video recorded with a smartphone or other digital recording device. We tested our workflow in three groups of participants: persons with no gait impairment, persons post-stroke, and persons with Parkinson’s disease. We show that a user can perform these video-based gait analyses by recording videos with views from either the side or the front, which is important given the space restrictions in most clinical areas. Our workflow produces accurate results as compared with a gold-standard three-dimensional motion capture system. Furthermore, the workflow can track changes in gait, which is needed to measure changes in mobility over time that may occur because of recovery or progression of disease. This work offers the potential for dramatic improvement in the accessibility of clinical gait analysis across different patient populations.

Funding: We acknowledge funding from the NIH (grant R21 HD110686 to RTR), RESTORE Center Pilot Project Award (to RTR via NIH grant P2CHD101913), the American Parkinson Disease Association (grant 964604 to RTR), and the Sheikh Khalifa Stroke Institute at Johns Hopkins Medicine to RTR. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability: The dataset of unimpaired gait is available from http://bytom.pja.edu.pl/projekty/hm-gpjatk . The stroke and PD datasets contain videos with identifiable information and are therefore not available. Code for our workflow is available at https://github.com/janstenum/GaitAnalysis-PoseEstimation/tree/Multiple-Perspectives .

Copyright: © 2024 Stenum et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

A person of size (height) s stands at two distances from a frontal plane camera (C_Front; panel A): an initial reference depth (d_Ref) and a second depth offset by a depth-change (Δd_i). The sizes in pixels of the person at each depth are denoted by s_Ref and s_i. From trigonometric relationships we derive a relationship between pixel size and depth-change (B; see Methods for a detailed explanation; f, focal length of the camera; x_IP, position of the image plane of the camera; x_Cam, position of the camera lens; x_Ref, initial position of the person; x_i, position of the person following the depth-change). The predicted pixel sizes of a person standing at increasing depths closely track manually annotated pixel sizes, which shows that we can use pixel size to estimate depth-changes (C). Summary of our frontal plane workflow (D): OpenPose tracks anatomical keypoints, we find gait cycle events, calculate a time-series of pixel size, and calculate depth-change, at which point step lengths and step times can be derived.
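To make the relationship in panel B concrete, here is a minimal Python sketch of the pinhole-camera geometry described above. The function and variable names are ours for illustration; the paper's released code (linked under Data Availability) may implement this differently.

```python
import numpy as np

def depth_change(pixel_sizes, ref_depth_m):
    """Estimate per-frame depth-change from a time-series of pixel sizes.

    Pinhole-camera model: the image of an object of fixed physical size
    scales inversely with its depth, s_pix = f * s / d. Taking the first
    frame as the reference (at known depth d_Ref), the depth at frame i is
    d_i = d_Ref * s_Ref / s_i and the depth-change is d_i - d_Ref.
    """
    s = np.asarray(pixel_sizes, dtype=float)
    depth = ref_depth_m * s[0] / s        # depth at every frame (m)
    return depth - ref_depth_m            # depth-change relative to start

# Example: a walker starts 2 m from the camera; their pixel size shrinks
# from 400 px as they walk away, so the estimated depth-change grows.
print(depth_change([400.0, 380.0, 360.0, 340.0], ref_depth_m=2.0))
```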

We recorded three-dimensional (3D) motion capture and digital videos of gait trials performed by persons post-stroke and persons with Parkinson’s disease (A). We analyzed digital videos of the frontal (C_Front) and sagittal plane (C_Sag) with OpenPose to track anatomical keypoints (B). We developed workflows to perform gait analyses independently for videos of the frontal and sagittal plane (C). See the Methods section for detailed information about the frontal and sagittal plane post-processing workflows. Note that the ‘Calculate depth-change time-series’ step in the frontal workflow contains multiple sub-steps, including tracking the pixel size of the torso and low-pass filtering (see S4 Fig for justification of the tracking method and smoothing). We compared spatiotemporal gait parameters and joint kinematics from our workflows to parameters obtained with 3D motion capture (D).
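As a small illustration of the low-pass filtering sub-step mentioned above, the sketch below applies a zero-phase Butterworth filter to a pixel-size trace. The filter order and cutoff are illustrative guesses, not the settings used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_pixel_size(pixel_size, fs_hz, cutoff_hz=1.0, order=4):
    """Zero-phase low-pass filter for a per-frame pixel-size trace.

    Filtering removes frame-to-frame jitter in the OpenPose keypoints
    before the trace is converted to a depth-change time-series.
    Cutoff and order here are illustrative, not the paper's values.
    """
    b, a = butter(order, cutoff_hz, btype="low", fs=fs_hz)
    return filtfilt(b, a, pixel_size)

# Example: a noisy torso pixel-size trace sampled at 30 frames per second
t = np.arange(0, 5, 1 / 30)
noisy = 400 - 20 * t + np.random.normal(0, 3, t.size)
smoothed = smooth_pixel_size(noisy, fs_hz=30)
```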

Here, we present a novel, versatile approach for performing clinical gait analysis using only simple digital videos. First, we developed and tested a novel workflow that performs a gait analysis using frontal plane recordings of a person walking either away from or toward the camera (Fig 1). Our approach is based on tracking the size of the person as they appear in the video image (measured with keypoints from OpenPose) and using trigonometric relationships to estimate depth and, ultimately, spatial parameters such as step length and gait speed (Fig 2; see expanded description in Methods). Second, we tested both our frontal and sagittal workflows directly in two clinical populations with gait impairments resulting from neurologic damage or disease (persons post-stroke or with Parkinson’s disease).
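As a sketch of how depth estimates become spatial parameters (our illustration under simplifying assumptions, not the exact published workflow): given heel-strike frames and the walker's depth at each frame, step length can be taken as the depth travelled between successive heel strikes, and gait speed follows from total distance over total time.

```python
import numpy as np

def spatiotemporal_params(depth_m, heel_strike_frames, fps):
    """Step lengths, step times, and gait speed for a walker moving along
    the camera axis, given per-frame depth and detected heel strikes.

    Simplifying assumption: all forward progression happens along the
    camera axis, so successive depth differences approximate step lengths.
    """
    frames = np.asarray(heel_strike_frames)
    d = np.asarray(depth_m, dtype=float)
    step_lengths = np.abs(np.diff(d[frames]))   # metres per step
    step_times = np.diff(frames) / fps          # seconds per step
    gait_speed = step_lengths.sum() / step_times.sum()
    return step_lengths, step_times, gait_speed

# Example: 3 s of walking away from the camera, filmed at 30 fps
depth = np.linspace(2.0, 5.0, 91)
lengths, times, speed = spatiotemporal_params(depth, [10, 25, 40, 55], fps=30)
```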

This foundational work in using pose estimation for video-based gait analysis has demonstrated the significant potential of this emerging technology. There are now prime opportunities to build upon what has already been developed and progress toward direct clinical applications. In moving toward that goal, we considered the need for: 1) flexible approaches that can accommodate different perspectives based on the space constraints of the end user (e.g., a clinician may only have access to a long, narrow hallway or hospital corridor where a sagittal recording of the patient is not possible), 2) testing and validation directly in clinical populations with gait dysfunction, 3) measurement of gait parameters that are of particular clinical relevance to specific populations, and 4) the ability to measure changes in gait that occur in response to a change in speed.

Recent developments in computer vision have enabled the exciting prospect of quantitative movement analysis using only digital videos recorded with low-cost devices such as smartphones or tablets [5–7]. These pose estimation technologies leverage computer vision to identify specific “keypoints” on the human body (e.g., knees, ankles) automatically from simple digital videos [8,9]. The number of applications of pose estimation for human health and performance has increased exponentially in recent years due to the potential for dramatic improvement in the accessibility of quantitative movement assessment [6,7,10]. We have previously used OpenPose [8], a freely available pose estimation algorithm, to develop and test a comprehensive video-based gait analysis workflow, demonstrating the ability to measure a variety of spatiotemporal gait parameters and lower-limb joint kinematics from only short (<10 seconds) sagittal (side view) videos of individuals without gait impairment [11]. Others have also used a variety of approaches to combine pose estimation outputs and neural networks to estimate different aspects of mobility [5,12–16].
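For orientation, OpenPose (run with its --write_json option) emits one JSON file per video frame containing the detected keypoints. A minimal reader might look like the sketch below; it assumes the default BODY_25 model and a single tracked person per frame, which will not hold for every recording.

```python
import json
from pathlib import Path
import numpy as np

def load_openpose_keypoints(json_dir):
    """Stack per-frame OpenPose JSON output into an array of shape
    (n_frames, 25, 3): pixel x, pixel y, and a detection confidence.

    Assumes the default BODY_25 model and one person per frame; frames
    with no detection are filled with NaN so gaps stay visible downstream.
    """
    frames = []
    for path in sorted(Path(json_dir).glob("*.json")):
        people = json.loads(path.read_text())["people"]
        if people:
            kp = np.asarray(people[0]["pose_keypoints_2d"]).reshape(25, 3)
        else:
            kp = np.full((25, 3), np.nan)
        frames.append(kp)
    return np.stack(frames)
```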

Walking is the primary means of human locomotion. Many clinical conditions, including neurologic damage or disease (e.g., stroke, Parkinson’s disease (PD), cerebral palsy), orthopedic injury, and lower extremity amputation, have a debilitating effect on the ability to walk [1–3]. Quantitative gait analysis is the foundation for effective gait rehabilitation [4]: it is critical that we objectively measure and identify specific deficits in a patient’s gait and track changes. Unfortunately, there are significant limitations with the current state of the art. Marker-based motion capture laboratories are considered the gold-standard measurement technique, but they are prohibitively costly and largely available only to select hospitals and research institutions. Other commercially available technologies (e.g., gait mats, wearable systems) only provide predefined parameters (e.g., spatiotemporal data or step counts), are relatively costly, and require specific hardware. There is a clear need for new technologies that can lessen these barriers and provide accessible and clinically useful gait analysis with minimal costs of time, money, and effort.

Results

Development and testing of a novel approach for gait videos recorded in the frontal plane

We first validated our frontal plane approach during overground walking in a group of young participants without gait impairment (we have previously demonstrated the accuracy of obtaining gait parameters using sagittal plane videos in the same dataset of unimpaired participants [11]). We then compared spatiotemporal gait parameters (step time, step length and gait speed; values averaged for a single walking bout) simultaneously obtained with 3D motion capture and with frontal plane videos from cameras positioned to capture the person walking away from one camera and toward the other (data collection setup shown in Fig 3A).


Fig 3. Testing of a novel approach for spatiotemporal gait analysis from videos of unimpaired adults recorded in the frontal plane. We recorded digital videos of the frontal plane where the person walked toward one camera and away from the other camera (A). We compared spatiotemporal gait parameters (B, step time; C, step length; D, gait speed) between the two digital videos and 3D motion capture (see S1 Table). https://doi.org/10.1371/journal.pdig.0000467.g003

Step time showed average differences (negative values denote greater values in video data; positive values denote greater values in motion capture data) and errors (absolute differences) of up to one and two motion capture frames (motion capture recorded at 100 Hz; 0.01 and 0.02 s), respectively, between motion capture and frontal plane video (Fig 3B and S1 Table). The 95% limits of agreement between motion capture and frontal plane videos ranged from −0.03 to 0.05 s, indicating that 95% of differences with motion capture fell within this interval. Step length showed average differences and errors of up to about 0.02 and 0.03 m, respectively, between motion capture and frontal plane videos (Fig 3C). The 95% limits of agreement between motion capture and frontal plane videos ranged from −0.052 to 0.094 m. Gait speed showed average differences and errors of up to 0.04 and 0.06 m s⁻¹, respectively, with 95% limits of agreement ranging between −0.11 and 0.17 m s⁻¹ (Fig 3D). Correlations for all spatiotemporal gait parameters between motion capture and frontal plane videos were strong (all r values between 0.872 and 0.981, all P < 0.001; Fig 3B–3D).
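The bias, error, and limits-of-agreement values above follow standard Bland-Altman-style comparisons of paired measurements. As a sketch of how such statistics are computed (our illustration, not the authors' analysis code):

```python
import numpy as np

def agreement_stats(mocap, video):
    """Bland-Altman-style agreement between paired gait measurements.

    Differences are taken as motion capture minus video, so positive
    values denote greater values in motion capture data, matching the
    sign convention used in the text.
    """
    mocap, video = np.asarray(mocap, float), np.asarray(video, float)
    diff = mocap - video
    mean_diff = diff.mean()                        # average difference (bias)
    mae = np.abs(diff).mean()                      # average absolute error
    half_width = 1.96 * diff.std(ddof=1)
    loa = (mean_diff - half_width, mean_diff + half_width)  # 95% limits
    r = np.corrcoef(mocap, video)[0, 1]            # Pearson correlation
    return mean_diff, mae, loa, r

# Example with hypothetical paired step times (seconds)
print(agreement_stats([0.52, 0.55, 0.60, 0.49], [0.53, 0.54, 0.61, 0.50]))
```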

Testing of video-based gait analysis in persons with neurologic damage or disease

Next, we evaluated both our sagittal and frontal plane workflows in two patient populations with neurologic damage or disease (persons post-stroke and persons with PD). We compared spatiotemporal gait parameters (step time, step length, and gait speed), lower-limb sagittal plane joint kinematics, and condition-specific, clinically relevant parameters (stroke: step time asymmetry and step length asymmetry; PD: trunk inclination) simultaneously obtained with 3D motion capture and with sagittal and frontal plane videos (data collection setup shown in Fig 4A). Note that frontal videos are limited to spatiotemporal gait parameters; joint kinematics and trunk inclination can only be obtained from sagittal videos within our current workflows.


Fig 4. Video-based gait analysis from frontal and sagittal views in persons post-stroke. We recorded digital videos of the frontal and sagittal plane during gait trials (A). We compared spatiotemporal gait parameters (B, step time; C, step length; D, gait speed) and gait asymmetry (E, step time asymmetry; F, step length asymmetry) between the two digital videos and 3D motion capture. We also compared lower-limb joint kinematics at the hip, knee and ankle obtained with sagittal videos and motion capture for the paretic (G) and non-paretic (H) limbs (MAE, mean absolute error). Gait parameters are calculated as session-level averages of four gait trials at either preferred or fast speeds (see Table 1). https://doi.org/10.1371/journal.pdig.0000467.g004

We present gait parameters as values averaged across four overground walking bouts each at 1) preferred and 2) fast speeds (see S2 Table for values of gait parameters). For preferred-speed trials we instructed participants to walk at their preferred speed; for fast-speed trials we instructed participants to walk at the fastest speed at which they felt comfortable. Of the four trials at each speed, two had the participant walking away from the frontal camera (with the left side toward the sagittal camera) and two walking toward the frontal camera (with the right side toward the sagittal camera). We intend our workflows to have clinical applications and therefore present values at the session level (i.e., the results that would be obtained if the four walking trials were treated as a single clinical gait analysis); we report more detailed comparisons at the level of single-trial averages and step-by-step comparisons in the supplement (S3 and S4 Tables).
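For readers unfamiliar with the stroke-specific parameters, step time and step length asymmetry compare paretic and non-paretic steps. One common symmetry index is sketched below; the exact formula used in the paper may differ.

```python
import numpy as np

def asymmetry_index(paretic, nonparetic):
    """A common gait asymmetry index: the normalized difference between
    mean paretic and non-paretic step values (0 indicates symmetry).

    Illustrative choice only; the paper's definition of step time and
    step length asymmetry may differ.
    """
    p, n = np.mean(paretic), np.mean(nonparetic)
    return (p - n) / (p + n)

# Example: hypothetical paretic vs non-paretic step lengths (metres)
print(asymmetry_index([0.48, 0.50, 0.47], [0.55, 0.57, 0.56]))
```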

---
[1] Url: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000467
