A Musculoskeletal Model Driven by Microsoft
Kinect Sensor v2 Data
Department of Health, Science and Technology, Aalborg University, Denmark
Project period: 02.09.14 – 18.12.14
Project group: 748 – 1st semester, Sports
Technology.
Authors:
Lasse Schov Jacobsen
Adam Frank
Magnus Hansen
Morten Kudsk
Mikkel Svindt Gammelgaard
Supervisor: Michael Skipper Andersen
Number of Pages: 14
Supplementary material: 2
Worksheets: 9
Completed: 18.12.14
ABSTRACT
Objective. To develop a musculoskeletal model driven by data retrieved from the Microsoft Kinect Sensor v2 and compare the output to a musculoskeletal model driven by data from a marker-based motion capture system for three different movements. Furthermore, to determine the optimal position of the Microsoft Kinect Sensor v2 for each movement.
Method. In the positioning test, combinations of seven angles, three heights and three distances were tested to find the optimal position for obtaining data for a musculoskeletal model during a gait, squat and shoulder abduction cycle. When the optimal positions for the three different movements were determined, data for the comparison test were collected for five healthy male subjects. Eight Oqus 1 infrared high-speed cameras and two force platforms were used to collect the marker-based motion capture data. One Microsoft Kinect Sensor v2 was used to collect the marker-less motion capture data. The AnyBody Modeling System was used to analyze different variables for the two systems.
Results. Multiple positions were found to be optimal for the Microsoft Kinect Sensor v2 in the squat and the shoulder abduction movement. The positions for these two movements were chosen to be the same (0°, 0.75 m/2.6 m). The optimal position for the gait movement (0°, 0.75 m/3.4 m) was determined based on the highest percentage of tracked Kinect-joints. Strong correlations were found in the comparison test for the knee flexion angle and the hip flexion angle for both the gait and the squat movement. During the shoulder abduction movement, strong correlations were found for the shoulder abduction angle (0.99) and moment (0.88). Even though a strong correlation was found for the ankle flexion angle (0.71) in the squat movement, other results indicate that the Microsoft Kinect Sensor v2 has limitations in tracking the ankle sufficiently. A strong correlation in the ground reaction force (0.81) was observed for the gait movement, whereas the correlations for the ground reaction forces in the squat movement were: left (0.49) and right (0.50).
Conclusion. The results of this study show that data obtained by the Microsoft Kinect Sensor v2 can be used as input to a musculoskeletal model. Though the Microsoft Kinect Sensor v2 shows encouraging results for some variables, it still proves insufficient as an alternative to marker-based systems.
1. Introduction
When analyzing human kinematics and kinetics, motion capture has often been used to detect skeletal movement in space, together with force platforms to measure external forces. Data from motion capture systems and external loads can serve as input to a musculoskeletal model. Using inverse dynamics, the musculoskeletal model uses kinematics and external loads to compute forces in muscles and joints. A popular motion capture method is based upon infrared cameras, retro-reflective skin markers and force plates. This approach has been used in research areas like rehabilitation (1, 2) and sports performance optimization (3-5), where accurate movement data are necessary to estimate forces in muscles and joints.
Despite its popularity, the marker-based motion capture system (MBS) has several limitations. First, the MBS is expensive; the cost of the cameras alone starts at approximately 24,000 U.S. dollars (6), and the MBS additionally requires skin markers, force platforms and specialized software. Secondly, markers are placed on the skin and can therefore move relative to the underlying bones, obscuring the estimation of bone positions (7, 8). Furthermore, the MBS is in some cases not able to detect one or more markers, because the markers are not visible to the required minimum of three cameras, and the markers thereby become occluded (7).
Studies have investigated the Microsoft Kinect Sensor v1 as an alternative to MBSs (9-13). The Microsoft Kinect Sensor v1 was introduced in the game industry as a device that allowed interaction with video game consoles through gesture recognition and voice commands. The depth camera in this device consists of an infrared laser projector as well as an infrared video camera. To recognize gestures within the sensor's field of view, it uses the laser projector to project a speckle pattern. The infrared camera uses deformations in the speckle pattern to create a 3-D depth map (9, 11, 12). Based upon a randomized decision forests algorithm, the Microsoft Kinect Sensor v1 is able to automatically determine anatomical landmarks on a subject within the 3-D map. The Microsoft Kinect Sensor v1 is capable of tracking 20 Kinect-joints and creating a 19-segment stick figure (12, 14). Dutta (9) investigated the depth mapping capabilities of the Microsoft Kinect Sensor v1 and found promising results when compared to an MBS. Additionally, R.A. Clark et al. investigated the Microsoft Kinect Sensor v1's ability to assess anatomical landmark positions and angular displacement, and concluded that the Microsoft Kinect Sensor v1 could provide data comparable to an MBS (12). These results suggest that the Microsoft Kinect Sensor v1 has the potential to be a motion capture method that is cheaper than an MBS and does not rely on markers. However, research has shown numerous limitations of the Microsoft Kinect Sensor v1, one of them being its inability to assess internal/external joint rotation in the peripheral limbs (12). This is because the Microsoft Kinect Sensor v1 cannot determine an axis orthogonal to the longitudinal axis of the arm, due to an inadequate number of points in the stick figure. Another limitation is the resolution of the depth sensor in the Microsoft Kinect Sensor v1, which could contribute to a lack of smoothness and accuracy in the measurements (11). Furthermore, Pfister et al., who investigated the percentage of tracked Kinect-joints, have pointed out issues regarding the position of the Microsoft Kinect Sensor v1. The issue arises from the fact that the Microsoft Kinect Sensor v1 is only able to track segments directly in its field of view (11).
In 2014 the Microsoft Kinect Sensor v2 was introduced as an upgraded version of the Microsoft Kinect Sensor v1. The Microsoft Kinect Sensor v2 was developed with a different depth sensor technology than its predecessor. The Microsoft Kinect Sensor v1 was based on a structured light technique, whereas the Microsoft Kinect Sensor v2 uses time of flight as its core mechanism of depth retrieval (15). Time-of-flight technology measures the distance from the sensor to the captured object based on the speed of light: the travel time of a light pulse from the sensor to the captured object and back is used to estimate the distance to the object (16). The new depth sensor has a resolution of 512 x 424 compared to the Microsoft Kinect Sensor v1's resolution of 320 x 240, which Microsoft claims improves the smoothness and accuracy of the measurements. Furthermore, the Microsoft Kinect Sensor v2 is able to track 25 Kinect-joints compared to the 20 Kinect-joints tracked by the Microsoft Kinect Sensor v1. The additional Kinect-joints have been added to the hand and neck segments, which could enable tracking of internal/external rotation of the arm (17). The developments incorporated in the Microsoft Kinect Sensor v2 have the potential of improving the detectability of skeletal movements compared to the Microsoft Kinect Sensor v1 (17).
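As an illustration of the time-of-flight principle (not taken from the cited sources), the distance d to the object can be estimated from the measured round-trip time Δt of the light pulse as d = (c · Δt) / 2, where c is the speed of light; a round trip of 20 ns therefore corresponds to roughly 3 m.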
Therefore, the purpose of this study was to develop a
musculoskeletal model driven by data retrieved from
Microsoft Kinect Sensor v2 and compare the output to
a musculoskeletal model driven by data from an MBS. The comparison was based upon three movements: gait, squat and shoulder abduction. The squat and shoulder abduction were chosen to represent movements in the lower and upper body, respectively. Gait was selected due to the implementation of this movement in rehabilitation (18). To address the previously mentioned inability to track joints that are not directly in the sensor's field of view, a test to systematically assess different positions of the Microsoft Kinect Sensor v2 was implemented into this study to determine the optimal position for each of the three movements.
2. Method
2.1. Optimal positioning of Microsoft Kinect
Sensor v2
2.1.1. Subject information
One male subject (age 22 years, height 1.86 m, weight
82.41 kg) participated in this test. The subject was not
affected by any musculoskeletal illnesses or chronic
pain conditions. Prior to the test, the subject signed
an informed consent statement.
2.1.2. Experimental method
To systematically test the positions of the Microsoft Kinect Sensor v2, different heights, distances and angles were determined. The distances and angles were measured from the center between the two force platforms, see figures 1-2. The force platforms are described further in section 2.2.2. The heights were based upon the maximum and minimum recommendations from Microsoft. The distances were based upon a pretest, which determined the minimum and maximum distances at which the Microsoft Kinect Sensor v2 was able to track all Kinect-joints. The median of the minimum and maximum height and distance was selected, and subsequently ± 0.4 m was selected as the upper and lower height and distance. However, the tripod to which the Microsoft Kinect Sensor v2 was attached had a limitation regarding the maximal height, and 0.05 m was therefore subtracted from the upper height. Hence, the following heights: 0.75 m, 1.15 m, 1.55 m and distances: 2.6 m, 3 m, 3.4 m were chosen. Seven different angles were chosen in steps of 15° from the frontal to the sagittal plane of a subject standing in the center between the two force platforms, see figures 1-2. For each of the seven angles, three trials were completed for every combination of distance and height. Using the Microsoft Kinect Sensor v2, data were obtained with a sampling frequency of 30 Hz. A customized program developed using the Microsoft Software Development Kit (SDK) was used to obtain the data directly. The program processed the recording of the Microsoft Kinect Sensor v2 and stored the data in .csv format. The format contained 25 joint coordinates, which made up a stick figure of 24 segments.
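The exact column layout written by the customized SDK program is not stated in the paper. Assuming one row per frame with x, y and z columns for each of the 25 Kinect-joints (a hypothetical layout), a minimal Python sketch for loading one recorded trial could look as follows:

```python
import pandas as pd

def load_kinect_trial(csv_path):
    """Load one recorded trial as an (n_frames, 25, 3) array of joint positions.

    Assumes the .csv has one row per frame and, for every Kinect-joint,
    three columns named <Joint>_x, <Joint>_y, <Joint>_z (assumed layout)."""
    df = pd.read_csv(csv_path)
    # Keep only the coordinate columns, which appear joint by joint.
    coord_cols = [c for c in df.columns if c.endswith(("_x", "_y", "_z"))]
    data = df[coord_cols].to_numpy(dtype=float)
    return data.reshape(len(df), 25, 3)
```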
Figure 1 illustrates the positioning of the Microsoft Kinect Sensor v2 at different angles and distances for the gait movement.
Figure 2 illustrates the positioning of the Microsoft Kinect Sensor v2 at different angles and distances for the squat and shoulder abduction movements.
2.1.3. Movement protocol
The subject wore tight-fitting shorts to ensure that the Microsoft Kinect Sensor v2 only detected the body segments. The Microsoft Kinect Sensor v2 automatically tracked the Kinect-joints in its field of view. To control the tracking of the Kinect-joints, trials were only initiated when all Kinect-joints were tracked. Therefore, when performing the gait movement, the subject was instructed to initiate the walk from a specific point, defined as one step away from the first force platform. Due to the limitations of the Microsoft Kinect Sensor v2's field of view, the fourth step was shortened and the walk was terminated afterwards, as seen in figure 1. The subject was instructed to initiate gait using the non-dominant leg. This was done to ensure that all subjects would have the same conditions. When performing the squat movement, the subject was instructed to place their feet shoulder-width apart and squat until they reached approximately a 90° angle in the knee joint, before returning to the initial position. When performing the shoulder abduction movement, the subject was instructed to raise the non-dominant arm laterally away from the midline of the body to approximately 90°, before returning it to the initial position. To make the movement velocity of the trials similar, a time window was implemented for each movement. This consisted of a start and an end audio cue, which the subject was instructed to follow. The time windows for the gait and squat movements lasted 5 seconds, and for the shoulder abduction movement 7 seconds. The subject was instructed to keep the head in a neutral position when performing the different movements. This was done to ensure similarity in the motion of each trial. One experimenter observed whether the subject performed the movements correctly and consistently. If an error in the performance of a trial was observed, the trial was discarded and repeated. Prior to the data collection for each movement, the subject was given 5 minutes to practice the instructed movement.
2.1.4. Computational method
In addition to the 3-D positions of the 25 Kinect-joints, the output of the Microsoft Kinect Sensor v2 also reports the state in which each Kinect-joint was tracked. A Kinect-joint could be specified as not tracked, inferred or tracked. This state information indicated the confidence of the 3-D position data (19). By systematically studying the different positions of the Microsoft Kinect Sensor v2, the state information was used to compute the mean percentage and standard deviation (SD) of tracked Kinect-joints for each position. A customized MATLAB (R2014b, The MathWorks, Inc., Natick, MA, United States) script was used to analyze each position for all of the movements. For each of the movements, the optimal position was determined, defined as the position with the highest percentage of tracked Kinect-joints. The optimal positions were used as the Microsoft Kinect Sensor v2 placements during the comparison test. The optimal positions for the gait, squat and shoulder abduction movements can be found in table 1.
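The paper performed this step with a MATLAB script; an equivalent minimal Python sketch is shown below, assuming the tracking states were exported per frame and per joint using the numeric values of the Kinect SDK TrackingState enumeration (0 = not tracked, 1 = inferred, 2 = tracked):

```python
import numpy as np

TRACKED = 2  # Kinect SDK TrackingState value for a fully tracked joint

def percent_tracked(states):
    """states: (n_frames, 25) integer array of per-joint tracking states for one trial."""
    return 100.0 * np.mean(states == TRACKED)

def summarize_position(trials):
    """trials: list of state arrays, one per trial recorded at a given sensor
    position (angle/distance/height). Returns the mean and SD of the tracked
    percentage across trials, as reported in table 1."""
    per_trial = np.array([percent_tracked(s) for s in trials])
    return per_trial.mean(), per_trial.std(ddof=1)
```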
2.2. Comparison between a marker-based
system and a marker-less system
2.2.1. Subject information
Five male subjects (age 26 ± 3.7 years, height 182.4 ± 5.5
cm, weight 81.8 ± 7.1 kg) participated in this study.
The subjects were not affected by any
musculoskeletal illnesses or chronic pain conditions.
Prior to the test, each subject signed an informed
consent statement.
2.2.2. Experimental method
The Microsoft Kinect Sensor v2 was compared to an MBS consisting of eight Oqus 1 infrared high-speed cameras, sampling at 100 Hz. The system was combined with Qualisys Track Manager v. 2.9 (Qualisys, Sweden). Ground reaction forces were collected using two AMTI (MA, USA) force plates, models OR6-7-1000 and OR6-5-2000, with a sampling rate of 2000 Hz and a filter at 1050 Hz. Thirty-three retro-reflective markers were placed on each subject in accordance with a full-body protocol (see supplementary material). The retro-reflective markers were placed to track the motion of each segment, see figure 4. Additionally, the markers were placed on bony landmarks to ensure the least amount of movement relative to the underlying bone. The subjects wore tight-fitting shorts, which enabled the markers to be placed directly on the skin. The protocol for the gait, squat and shoulder abduction movements was similar to the protocol used for the optimal positioning test.
2.2.3. Computational method
The musculoskeletal models used in this study were developed using the AnyBody Modeling System (AMS) v. 6.0.3 (AnyBody Technology A/S, Aalborg, Denmark). The models were based on the GaitFullBody template (20) from the AnyBody Managed Model Repository v. 1.5. Two different musculoskeletal models were used for each trial: the data obtained by the MBS drove one musculoskeletal model and the data obtained by the Microsoft Kinect Sensor v2 drove the other. Henceforth, the system consisting of the Microsoft Kinect Sensor v2 together with the associated musculoskeletal model is referred to as the MLS. To scale the GaitFullBody model to the sizes of the different subjects, the method for scaling musculoskeletal models proposed by Rasmussen et al. (21) was used. This method establishes coherence between the geometry and mass of the segments and enables subject-specific scaling of the model without compromising its kinematic function. For the gait and squat movements, constant-strength muscles were added to the lower extremities. For the shoulder abduction movement, constant-strength muscles were added to both arms and the neck. These muscles were added to enable estimation of internal joint forces using inverse dynamics. For both models a 2nd order zero-phase-lag low-pass Butterworth filter with a cut-off frequency of 6 Hz was applied to minimize fluctuations in the data trajectories. Spyder v. 2.3.1 (The Scientific Python Development Environment) was used to drive several models simultaneously. Furthermore, the neck was assumed not to influence the movements, because the head was held in a neutral position during all trials, and the neck was therefore fixed in a neutral position in both models.
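The paper does not state which tool applied the filter; a minimal Python/SciPy sketch of an equivalent 2nd order zero-phase-lag low-pass Butterworth filter with a 6 Hz cut-off is shown here for illustration:

```python
from scipy.signal import butter, filtfilt

def lowpass_zero_lag(trajectory, fs, cutoff_hz=6.0, order=2):
    """Apply a 2nd order low-pass Butterworth filter with zero phase lag.

    trajectory: samples with time along axis 0; fs: sampling frequency in Hz
    (30 Hz for the Kinect data, 100 Hz for the marker data)."""
    b, a = butter(order, cutoff_hz / (0.5 * fs), btype="low")
    # Forward-backward filtering removes the phase lag introduced by the filter.
    return filtfilt(b, a, trajectory, axis=0)
```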
2.2.3.1. Musculoskeletal model driven by the
marker-less data
MLS data were collected using the customized program described in section 2.1.2. The segment lengths of the stick figure were computed from one data point in the movement's initial position. The length of a segment was computed by subtracting the 3-D positions of the two attached Kinect-joints defining the segment. For every paired segment, the mean of the two segment lengths was used. Based upon these segment lengths the GaitFullBody model was scaled. Virtual markers were defined on the GaitFullBody model and on the Kinect-based stick figure, see figure 3. The virtual markers were defined at all the Kinect-joint locations illustrated in figure 3, and corresponding markers were defined on the GaitFullBody model. The least-squares difference between the virtual markers and the corresponding Kinect-joints obtained with the MLS was minimized using the local optimization-based method proposed by Andersen et al. (8). This was done because the number of equations in the model exceeded the total number of degrees of freedom, and the model was therefore over-determined. For all pairs of markers, weights were introduced such that the effect of segment length discrepancies between the Kinect-based stick figure and the GaitFullBody model could be reduced. This was accomplished by assigning a high weight to the hip virtual markers in all three directions and reducing the weight of the knee, ankle, shoulder, elbow and wrist markers in the longitudinal direction. Due to inadequate information for proper estimation of specific movements, drivers were added to fix the lateral bending, extension and rotation of the thorax.
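A minimal sketch of the segment-length computation described above; the joint indices follow the Kinect v2 JointType enumeration (HipLeft = 12, KneeLeft = 13, AnkleLeft = 14, HipRight = 16, KneeRight = 17, AnkleRight = 18), and only a few of the 24 segments are listed for illustration, since the paper does not list the exact joint-to-index mapping of the custom program:

```python
import numpy as np

# (proximal, distal) Kinect-joint indices for a few example segments.
SEGMENTS = {
    "thigh_left":  (12, 13),
    "thigh_right": (16, 17),
    "shank_left":  (13, 14),
    "shank_right": (17, 18),
}

def paired_segment_lengths(frame):
    """frame: (25, 3) joint positions from the initial posture of a trial.

    A segment length is the distance between its two attached Kinect-joints;
    left/right pairs are averaged before scaling the GaitFullBody model."""
    raw = {name: float(np.linalg.norm(frame[a] - frame[b]))
           for name, (a, b) in SEGMENTS.items()}
    return {
        "thigh": 0.5 * (raw["thigh_left"] + raw["thigh_right"]),
        "shank": 0.5 * (raw["shank_left"] + raw["shank_right"]),
    }
```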
The MLS did not provide any output regarding ground reaction forces (GRFs). The GRFs were therefore predicted using the method suggested by Fluit et al. (22). The method solves the under-determinacy problem during the double contact phase by computing the GRFs with the muscle recruitment algorithm (18). This is done by introducing artificial muscle-like actuators at 12 contact points under each foot. For each contact point, five muscle-like actuators were used: one generating a normal force in the vertical direction and the others generating static friction forces in the medio-lateral and anterior-posterior directions. Each actuator only generated a reaction force if the contact point was within 0.05 m of the virtual floor and had a velocity below 1.2 m/s.
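A minimal sketch of the contact condition described above, following Fluit et al. (22); the variable names and the choice of the vertical axis are assumptions made here for illustration:

```python
import numpy as np

def contact_point_active(position, velocity, floor_height=0.0,
                         height_tol=0.05, max_speed=1.2):
    """Decide whether a foot contact point may generate reaction force.

    position, velocity: 3-D vectors of the contact point; the vertical axis is
    assumed to be the last component. Force is only allowed when the point is
    within 0.05 m of the virtual floor and moves slower than 1.2 m/s."""
    near_floor = abs(position[2] - floor_height) <= height_tol
    slow = np.linalg.norm(velocity) <= max_speed
    return near_floor and slow
```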
Figure 3 illustrates the Kinect-joints of the stick figure obtained from the Microsoft Kinect Sensor v2 (blue) and the associated virtual markers on the model (red). Kinect-joint names without an underscore indicate a paired Kinect-joint; names with an underscore indicate an unpaired Kinect-joint.
2.2.3.2. Musculoskeletal model driven by marker-based data
Retro-reflective markers attached to the skin drove the model for the MBS, see figure 4. The first gait trial for each subject was used to scale the GaitFullBody model for all movements. The segment lengths and marker positions were estimated by minimizing the least-squares difference between the GaitFullBody model markers and the corresponding measured marker positions, using the previously mentioned optimization method proposed by Andersen et al. (8). The measured GRFs were applied under the feet. Furthermore, the joint reaction forces and the muscle forces were computed using the AMS muscle recruitment solver (23).
2.2.4. Data analysis
A complete gait cycle from heel strike to heel strike for the non-dominant leg was analyzed. This was done since the stride length of the dominant leg was shortened. The shoulder abduction movement was analyzed for the non-dominant arm from the point where the arm began to move away from the body until the point where the arm had returned to its initial position along the torso. The squat was analyzed from the beginning of the knee flexion until the subjects returned to the initial position. MATLAB was used to compute specific variables from the three movements.

The computed data from the squat and gait movements consisted of the ankle flexion angle (AFA), knee flexion angle (KFA), hip flexion angle (HFA), ankle flexion moment (AFM), knee flexion moment (KFM), hip flexion moment (HFM), ankle joint reaction force (AJRF), knee joint reaction force (KJRF), hip joint reaction force (HJRF), maximal ankle joint reaction force (MAJRF), maximal knee joint reaction force (MKJRF), maximal hip joint reaction force (MHJRF), ankle joint range-of-motion (AROM), knee joint range-of-motion (KROM) and hip joint range-of-motion (HROM). The vertical ground reaction force (GRF) and maximal vertical ground reaction force (MGRF) were also analyzed; GRF and MGRF were computed for both the left and right foot in the squat movement. The data analyzed from the shoulder abduction movement consisted of the shoulder abduction angle (SAA), shoulder abduction moment (SAM), shoulder joint reaction force (SJRF), shoulder abduction range-of-motion (SROM) and maximal shoulder joint reaction force (MSJRF). Pearson's correlation coefficient (r) and the root-mean-square deviation (RMSD) were used to compare the data from the comparison test, except for the maximum and range-of-motion values, for which only the mean difference and standard deviation (SD) were computed. Furthermore, the position of the hand (PH), which shows the estimated movement in 3-D, and the acceleration of the hand (AH) in the shoulder abduction movement were illustrated to show fluctuations for both the MLS and the MBS.
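The paper computed these metrics in MATLAB; an equivalent minimal Python sketch, assuming the MLS and MBS curves have been resampled to the same number of points over a movement cycle, is:

```python
import numpy as np

def compare_curves(mls, mbs):
    """mls, mbs: 1-D arrays of the same variable (e.g. knee flexion angle)
    from the marker-less and marker-based systems, sampled over one cycle.

    Returns Pearson's correlation coefficient r and the root-mean-square
    deviation (RMSD) between the two curves, as reported in table 2."""
    mls, mbs = np.asarray(mls, dtype=float), np.asarray(mbs, dtype=float)
    r = np.corrcoef(mls, mbs)[0, 1]
    rmsd = float(np.sqrt(np.mean((mls - mbs) ** 2)))
    return r, rmsd
```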
3. Results
3.1. Optimal Kinect Positioning test
Table 1 shows the percentage of tracked Kinect-joints in the
positioning test for different heights and distances for gait, squat
and shoulder abduction at 0°. *indicates that this specific
height/distance was chosen for the comparison test. - indicates
that the tracking of the Kinect-joints failed.
Distance/height | Gait 0° | Squat 0° | Shoulder 0°
2.6 m/0.75 m | - | 100* | 100*
2.6 m/1.15 m | 95.91 ± 0.72 | 99.93 ± 0.07 | 100
2.6 m/1.55 m | - | 96.47 ± 1.21 | 99.99 ± 0.02
3 m/0.75 m | 99.85 ± 0.05 | 99.90 ± 0.01 | 99.98 ± 0.02
3 m/1.15 m | 99.79 ± 0.30 | 99.84 ± 0.15 | 99.96 ± 0.03
3 m/1.55 m | - | 99.97 ± 0.02 | 99.99 ± 0.02
3.4 m/0.75 m | 99.96 ± 0.07* | 99.42 ± 0.47 | 99.90 ± 0.08
3.4 m/1.15 m | 99.79 ± 0.22 | 99.52 ± 0.36 | 99.96 ± 0.04
3.4 m/1.55 m | 98.16 ± 0.22 | 99.63 ± 0.02 | 100
Figure 4 illustrates the reflective markers obtained from the MBS (blue) and the associated virtual markers on the model (red). Marker names without an underscore indicate a paired marker; names with an underscore indicate an unpaired marker.
Table 2 shows the mean correlation coefficient (r) ± SD, RMSD ± SD and mean difference ± SD between the MLS and MBS for gait, squat and shoulder abduction.
Selected results from the optimal positioning test are shown in table 1. The results showed that the Microsoft Kinect Sensor v2 could track all Kinect-joints 100% at several positions for the shoulder abduction movement. At 0°, all Kinect-joints were tracked 100% at the positions 2.6 m/0.75 m, 2.6 m/1.15 m and 3.4 m/1.55 m. Although the results are not shown, the Microsoft Kinect Sensor v2 was also capable of tracking all Kinect-joints 100% from 0° to 45° at different positions. In the squat movement, the Microsoft Kinect Sensor v2 was able to track all Kinect-joints 100% at 0° for one position: 2.6 m/0.75 m. Although the results are not shown, the Microsoft Kinect Sensor v2 was also capable of tracking all Kinect-joints 100% at 15° for one position: 2.6 m/0.75 m. In the gait movement, the Microsoft Kinect Sensor v2 was incapable of tracking all Kinect-joints 100% at any position. The position with the highest percentage of tracked Kinect-joints (99.96%) was 3.4 m/0.75 m at 0°. The Microsoft Kinect Sensor v2 failed to track the Kinect-joints at 0° at three positions: 2.6 m/0.75 m, 2.6 m/1.55 m and 3 m/1.55 m.
3.2. Comparison test
The results from the comparison test are illustrated in figures 5-7 and table 2. Analysis of the MLS in the shoulder abduction movement showed some inconsistencies in the rotation of the arm. For the AMS to be capable of performing the kinematic analysis, drivers were added to fix the glenohumeral rotation and the elbow pronation/supination. Furthermore, the complex construction of the shoulder muscles within the AMS resulted in errors for multiple trials when performing inverse dynamics. Therefore, all muscles were removed from the models when analyzing the shoulder abduction movement. A few other errors occurred when performing the kinematic analysis in the AMS for the gait and shoulder abduction movements for both systems. For each subject, a trial was discarded if an error occurred. If no error occurred for a subject, a random trial was discarded, in order to have an equal number of trials for each subject for each movement. This meant that four trials per subject were included in the analysis of the gait and shoulder abduction movements.
Variable | Mean r ± SD | RMSD ± SD | Variable | Mean diff. ± SD

Gait
GRF left (% BW) | 0.81 ± 0.22 | 32.89 ± 7.04 | MGRF left (% BW) | 29.41 ± 13.59
AFA (deg) | 0.6 ± 0.45 | 17.12 ± 1.63 | AROM (deg) | 26.79 ± 9.38
KFA (deg) | 0.93 ± 0.1 | 16.33 ± 1.17 | KROM (deg) | 4.29 ± 5.65
HFA (deg) | 0.96 ± 0.07 | 8.99 ± 3.11 | HROM (deg) | 4.39 ± 6.42
AJRF (% BW) | 0.68 ± 0.36 | 158.57 ± 32.42 | MAJRF (% BW) | 50.78 ± 13.51
KJRF (% BW) | 0.67 ± 0.37 | 125.16 ± 83.25 | MKJRF (% BW) | 16.59 ± 14.73
HJRF (% BW) | 0.66 ± 0.38 | 163.11 ± 177.81 | MHJRF (% BW) | 271.49 ± 335.59
AFM (% BW*BH) | 0.57 ± 0.46 | 2.49 ± 0.41 | |
KFM (% BW*BH) | 0.47 ± 0.56 | 1.78 ± 0.92 | |
HFM (% BW*BH) | 0.56 ± 0.46 | 2.19 ± 0.99 | |

Squat
GRF left (% BW) | 0.49 ± 0.54 | 5.19 ± 1.04 | MGRF left (% BW) | 7.77 ± 4.44
GRF right (% BW) | 0.5 ± 0.53 | 5.65 ± 1.57 | MGRF right (% BW) | 5.18 ± 5.53
AFA (deg) | 0.71 ± 0.44 | 16.26 ± 6.87 | AROM (deg) | 25.51 ± 18.77
KFA (deg) | 0.95 ± 0.08 | 11.7 ± 4.33 | KROM (deg) | 4.66 ± 10.16
HFA (deg) | 0.93 ± 0.1 | 16.09 ± 6.13 | HROM (deg) | 10.53 ± 10
AJRF (% BW) | 0.6 ± 0.43 | 42.56 ± 4.77 | MAJRF (% BW) | 3.01 ± 6.55
KJRF (% BW) | 0.63 ± 0.4 | 30.84 ± 10.95 | MKJRF (% BW) | 4.41 ± 11.41
HJRF (% BW) | 0.83 ± 0.23 | 33.13 ± 17.65 | MHJRF (% BW) | 27.18 ± 20.32
AFM (% BW*BH) | 0.63 ± 0.4 | 0.72 ± 0.13 | |
KFM (% BW*BH) | 0.89 ± 0.15 | 1.41 ± 0.42 | |
HFM (% BW*BH) | 0.85 ± 0.18 | 0.88 ± 0.18 | |

Shoulder abduction
SAA (deg) | 0.99 ± 0.01 | 8.12 ± 2.32 | SROM (deg) | 0.21 ± 3.39
SAM (% BW*BH) | 0.88 ± 0.18 | 0.10 ± 0.05 | |
Overall, the results showed that the MLS, to some extent, was capable of detecting joint angles with an accuracy comparable to the MBS. However, the MLS tracking showed fluctuations compared to the MBS.
3.3. Gait
Strong correlations were found between the MLS and the MBS for GRF (0.81), KFA (0.93) and HFA (0.96). All other variables showed moderate correlations ranging from 0.47 to 0.68. However, the RMSD showed a substantial difference for KFA (16.33). The mean differences between the MLS and the MBS for KROM (4.29) and HROM (4.39) were, to some extent, similar. The graph for GRF illustrates inconsistencies at the end of the gait cycle, see figure 6: the MLS predicts a GRF whereas the MBS does not measure any GRF. Furthermore, a substantially larger SD is seen at the first peak and at the end of the gait cycle for the GRF. The same increases in SD are seen in the graphs illustrating AFM, KFM, HFM, AJRF, KJRF and HJRF.
3.4. Squat
Strong correlations were found between the two systems for AFA (0.71), KFA (0.95), HFA (0.93), HJRF (0.83), KFM (0.89) and HFM (0.85). All other variables showed moderate correlations ranging from 0.49 to 0.63. However, the mean difference for AROM (25.51) was substantially larger than for KROM (4.66) and HROM (10.53). This can also be seen in the associated graphs in figure 7. Furthermore, the graphs illustrating GRF left and GRF right show a substantial SD throughout the squat cycle.
3.5. Shoulder abduction
Strong correlations were found between the two systems for SAA (0.99) and SAM (0.88). Additionally, the mean difference in SROM (0.21) showed the two systems to be, to some extent, similar. In figure 5, the illustration of AH for the MLS is restricted to accelerations between -50 m/s² and 50 m/s², even though outliers up to 500 m/s² were observed.
Figure 5 The curves represent the mean values of the subjects in the MBS (red) and the MLS (blue) trials for SAA and SAM. The shaded areas surrounding the curves represent the SD for the MBS (shaded red) and the MLS (shaded blue). In the graphs representing the PH and AH for the MBS and the MLS, the anterior-posterior direction (green), medial-lateral direction (red) and vertical direction (blue) are illustrated. The x-axis represents the shoulder abduction cycle completion in percent, where 0 is the initiation of the shoulder abduction cycle and 100 is when the arm has returned to its initial position.
Figure 6 The curves represent the mean values of the subjects in the MBS (red) and the MLS (blue) trials. The shaded areas surrounding the
curves represent the SD for MBS (shaded red) and MLS (shaded blue). The x-axis represents the gait cycle completion in percent, where 0 is
the initiation of the gait cycle, and 100 is the termination of the gait cycle.
Figure 7 The curves represent the mean values of the subjects in the MBS (red) and the MLS (blue) trials. The shaded areas surrounding the
curves represent the SD for the MBS (shaded red) and the MLS (shaded blue). The AFA graph in the bottom right corner represents the data
collected for each trial for every subject. The x-axis represents the squat cycle completion in percent, where 0 is the initiation of the squat
cycle, and 100 is the termination of the squat cycle.
4. Discussion
4.1. Optimal position test
The optimal position test was conducted to systematically assess different positions of the Microsoft Kinect Sensor v2, in order to determine the optimal position for each of the three movements. For the squat and shoulder abduction movements, the Microsoft Kinect Sensor v2 was able to track all Kinect-joints 100% at multiple positions; therefore, multiple optimal positions were determined for both movements. The positions for the two movements were chosen to be identical to minimize potential errors when adjusting the Microsoft Kinect Sensor v2 in the comparison test. Positions at 0° were chosen in accordance with the recommendations of previous studies (11, 24). As seen in table 1, the Microsoft Kinect Sensor v2 was not capable of tracking all Kinect-joints 100% at any position for the gait movement. Therefore, the position with the highest percentage of tracked Kinect-joints was chosen as the optimal position. The chosen positions for the three movements are shown in table 1.

A limitation of the positioning test was that the state information regarding the tracking of Kinect-joints did not take the precision of the data into account. This means that when fluctuations occurred in the data, the state information did not change. Additionally, a fluctuation threshold could be applied to also assess the fluctuations in the tracking of the Kinect-joints.
4.2. Comparison test
As previously mentioned, a musculoskeletal model driven by data obtained from the MLS was developed to compare selected outputs with the outputs from an existing musculoskeletal model driven by data obtained from the MBS. Overall, the results showed inconsistencies between the MLS and the MBS for the investigated movements. This is particularly seen in the results regarding the ankle. Here, the AFA in the squat movement showed a strong correlation, but with a substantial SD. Furthermore, the differences in AROM are considerably larger than those in KROM and HROM, in both the squat and the gait movement. This suggests that the MLS has difficulties tracking the ankles. However, the results of this study show tendencies towards an improvement of the Microsoft Kinect Sensor v2's capability of tracking KFA and HFA compared to the Microsoft Kinect Sensor v1: the correlations between the MLS and the MBS are strong for the KFA and the HFA in both the squat and the gait movement. This is in disagreement with the findings of previous studies (11, 13), which concluded that the Microsoft Kinect Sensor v1 is incapable of tracking the lower extremities with sufficient accuracy.
To describe the MLS's capability of predicting forces, a comparison between the measured GRF and the predicted GRF was made. Despite the strong correlation in GRF for the gait movement, it showed a substantial SD, which can be seen in the associated graph in figure 6. A substantial peak can be seen at the start of the midstance phase of the gait cycle, and, as the graph illustrates, the SD increases at this point. This leads to a lower correlation between the MLS and the MBS, as well as inconsistencies in the JRFs and moments between the MLS and the MBS. This is also illustrated in the associated graphs, where the SD of the predicted JRFs and moments increases at the same point. This issue could be related to the MLS's inability to track the ankles sufficiently. Another inconsistency in the prediction of the GRF for a gait cycle occurs during the swing phase of the foot. In the graph, the swing phase occurs from the point where the GRF is 0 until the end of the gait cycle. The graph shows that the MLS predicts some GRF with a substantial SD before the next heel strike occurs. This is partially due to an irregular dorsal flexion of the ankle, which can be seen in figure 6, and a premature interaction with the virtual floor. This causes the inconsistencies in JRFs and moments at the end of the gait cycle, which lowers the correlation between the MLS and the MBS. During the squat, a similar problem occurred in the comparison of the GRFs between the MLS and the MBS. The associated graphs in figure 7 illustrate how the predicted GRFs fluctuated around the measured GRFs, which resulted in the moderate correlations. This is again due to the MLS's inability to track the ankle sufficiently. This inability is shown in the individual AFA in figure 7, which illustrates the fluctuation and range of the detected ankle angles for all trials performed by all subjects. This fluctuation leads to a lower correlation between the two systems for the GRFs, moments, JRFs and the AFA.
Previous studies (11, 13) have concluded that the Microsoft Kinect Sensor v1 was capable of tracking the upper extremities with sufficient accuracy. This is in agreement with the results shown in figure 5 and table 2, which show a strong correlation in SAA and SAM and a mean difference in SROM of 0.21°. Despite the strong correlation, fluctuations in SAM can be seen in figure 5. The origin of these fluctuations is illustrated in AH for the MLS, which shows large changes in the acceleration of the hand compared to AH for the MBS. These changes can be caused by the lack of smoothness and accuracy in the resolution of the depth sensor (11).
An issue regarding the methods of this comparative study is the limitations of the MBS. Mainly, the accuracy of the MBS suffers from soft tissue artifacts, where the markers move relative to the underlying bone. This means that the MBS cannot be interpreted as a gold standard for obtaining kinematic input data (25). Hence, the assessment of the MLS's abilities based upon a comparison with the MBS could be erroneous due to the precision of the MBS. Regardless of its limitations, the MBS is one of the most common methods for obtaining kinematic input (25). For that reason it serves as a valuable reference system for the evaluation of the MLS as an alternative.

The segment lengths estimated by the MLS were the same for paired segments such as the legs. This could pose a limitation because of possible bilateral asymmetry of the subjects (26). Furthermore, the lengths were estimated at a specific time point in the trial. This could also pose a limitation within the MLS, due to the large fluctuations observed in the data. If the segment lengths were estimated at a time point where the data suffered from large fluctuations, it could result in errors in the kinematic estimation of the movements. These limitations could explain some of the irregular results previously mentioned.
For the purpose of using the MLS in rehabilitation and sports performance optimization, the accuracy of the MLS seems to be insufficient. The tracking of the Kinect-joints needs to be more accurate to correctly estimate forces in muscles and joints. As assessed in the optimal position test, the position in which the MLS is most capable of tracking Kinect-joints is directly in front of the subject. As discussed, this poses a limitation in tracking the ankle flexion and thereby dislocates the foot segment, which additionally affects the prediction of the GRFs. To estimate the ankle flexion more accurately with the MLS, multiple Microsoft Kinect Sensor v2s could be added to the system. In the study by Andersen et al. (27), dual Microsoft Kinect Sensor v1s were combined with the AMS using iPi Motion Capture v. 2.0. Using iPi or similar software that enables multiple Microsoft Kinect Sensor v2s to be used by the MLS could increase the accuracy of the 3-D position data obtained with the MLS.
5. Conclusion
The results of this study show that data obtained by the Microsoft Kinect Sensor v2 can, to some extent, be used as input to a musculoskeletal model. However, in comparison to the MBS, the results suggest that further development of the tracking accuracy is required. Especially the estimation of the ankle Kinect-joint needs to be improved. Despite the limitations of the Microsoft Kinect Sensor v2, the tracking of hip and knee flexion angles in gait and squat showed encouraging results compared to the MBS. Regardless, a single Microsoft Kinect Sensor v2 is an inadequate alternative for obtaining data for analysis of human kinematics. Therefore, future studies should implement an additional Microsoft Kinect Sensor v2 into the protocol for testing of optimal positioning and movement analysis.
6. Acknowledgement
We express our thanks to the participants for their time and effort. We would also like to thank Michael Skipper Andersen for his guidance throughout the process.
7. Literature
(1) Tranberg R, Zugner R, Karrholm J. Improvements in
hip- and pelvic motion for patients with
osseointegrated trans-femoral prostheses. Gait
Posture 2011 Feb;33(2):165 - 168.
(2) Eek MN, Tranberg R, Beckung E. Muscle strength
and kinetic gait pattern in children with bilateral
spastic CP. Gait Posture 2011 Mar;33(3):333 - 337.
(3) Michaud-Paquette Y, Magee P, Pearsall D, Turcotte
R. Whole-body predictors of wrist shot accuracy in ice
hockey: a kinematic analysis. Sports Biomechanics
2011;10(1):12 - 21.
(4) Sommer M, Häger C, Rönnqvist L. Synchronized
metronome training induces changes in the kinematic
properties of the golf swing. Sports Biomechanics
2014;13(1):1 - 16.
(5) Rasmussen J, Holmberg LJ, Sørensen K, Kwan M,
Andersen MS, de Zee M. Performance optimization by
musculoskeletal simulation. Movement & Sport
Sciences - Science & Motricité 2012(75):73 - 83.
(6) Vicon | Systems. Available at:
http://www.vicon.com/system/bonita. Accessed
12/15/2014, 2014.
(7) Herda L, Fua P, Plänkers R, Boulic R, Thalmann D.
Using skeleton-based tracking to increase the
reliability of optical motion capture. Human
Movement Science 2001;20(3):313 - 341.
(8) Andersen MS, Damsgaard M, MacWilliams B,
Rasmussen J. A computationally efficient
optimisation-based method for parameter
identification of kinematically determinate and over-
determinate biomechanical systems. Comput
Methods Biomech Biomed Engin 2010;13(2):171 -
183.
(9) Dutta T. Evaluation of the Kinect sensor for 3-D
kinematic measurement in the workplace. Appl Ergon
2012 Jul;43(4):645- 649.
(10) Fernández-Baena A, Susin A, Lligadas X.
Biomechanical Validation of Upper-Body and Lower-
Body Joint Movements of Kinect Motion Capture Data
for Rehabilitation Treatments. 2012 Fourth
International Conference on Intelligent Networking
and Collaborative Systems 2012;1(12):656 - 661.
(11) Pfister A, West AM, Bronner S, Noah JA.
Comparative abilities of Microsoft Kinect and Vicon 3D
motion capture for gait analysis. J Med Eng Technol
2014;38(5):274 - 280.
(12) Clark RA, Pua YH, Fortin K, Ritchie C, Webster KE,
Denehy L, et al. Validity of the Microsoft Kinect for
assessment of postural control. Gait Posture 2012
Jul;36(3):372- 377.
(13) Bonnechere B, Jansen B, Salvia P, Bouzahouene
H, Omelina L, Moiseev F, et al. Validity and reliability
of the Kinect within functional assessment activities:
comparison with standard stereophotogrammetry.
Gait Posture 2014;39(1):593 - 598.
(14) Shotton J, Sharp T, Kipman A, Fitzgibbon A,
Finocchio M, Blake A, et al. Real-time human pose
recognition in parts from single depth images.
Commun ACM 2013;56(1):116 - 124.
(15) Vineetha GR, Sreeji C, Lentin J. Face Expression Detection Using Microsoft Kinect with the Help of Artificial Neural Network. Trends in Innovative Computing 2012 - Intelligent Systems Design 2012;2014(12/15/2014):176 - 180.
(16) Ganapathi V, Plagemann C, Koller D, Thrun S. Real
Time Motion Capture Using a Single Time-Of-Flight
Camera. Stanford University, Computer Science
Department, Stanford, CA, USA 2010 Computer Vision
and Pattern Recognition (CVPR), 2010 IEEE
Conference 07/13-18 2010:755 - 762.
(17) Microsoft. Kinect for Windows features. 2014;
Available at: http://www.microsoft.com/en-
us/kinectforwindows/meetkinect/features.aspx.
Accessed 12/04, 2014.
(18) Damsgaard M, Rasmussen J, Christensen ST,
Surma E, de Zee M. Analysis of musculoskeletal
systems in the AnyBody Modeling System. Simulation
Modelling Practice and Theory 2006;14(8):1100 -
1111.
(19) Microsoft Developer Network. Microsoft Kinect
TrackingState Enumeration. 2014; Available at:
http://msdn.microsoft.com/en-
us/library/microsoft.kinect.kinect.trackingstate.aspx.
Accessed 12/08, 2014.
(20) Klein Horsman MD, Koopman HFJM, van der
Helm FCT, Prosé LP, Veeger HEJ. Morphological
muscle and joint parameters for musculoskeletal
modelling of the lower extremity. Clin Biomech
2007;22(2):239 - 247.
(21) Rasmussen J, de Zee M, Damsgaard M,
Christensen ST, Marek C, Siebertz K. A General
Method for Scaling Musculo-Skeletal Models.
International Symposium on Computer Simulation in
Biomechanics 2005.
(22) Fluit R, Andersen MS, Kolk S, Verdonschot N,
Koopman HF. Prediction of ground reaction forces and
moments during various activities of daily living. J
Biomech 2014 Jul 18;47(10):2321 - 2329.
(23) Rasmussen J, Damsgaard M, Voigt M. Muscle
recruitment by the min/max criterion — a
comparative numerical study. J Biomech
2001;34(3):409 - 415.
(24) Nambiar AM, Correia P, Soares LD. Frontal gait
recognition combining 2D and 3D data. MM&Sec '12
Proceedings of the on Multimedia and security 2012
09/ 06-07:145 - 150.
(25) Benoit DL, Ramsey DK, Lamontagne M, Xu L,
Wretenberg P, Renström P. Effect of skin movement
artifact on knee kinematics during gait and cutting
motions measured in vivo. Gait Posture
2006;24(2):152 - 164.
(26) Auerbach BM, Ruff CB. Limb bone bilateral
asymmetry: variability and commonality among
modern humans. J Hum Evol 2006;50(2):203 - 218.
(27) Andersen MS, Yang J, de Zee M, Zhou L, Bai S,
Rasmussen J. Full-body musculoskeletal modeling
using dual microsoft kinect sensors and the anybody
modeling system. Proceedings of the 14th
International Symposium on Computer Simulation in
Biomechanics 2013:23 - 24.
8. Supplementary material
8.1. Marker placement
The placement of the markers was conducted according to a previously performed study by Andersen et al. (27). In total, 33 passive reflective markers were placed on the subjects during the gait, squat and shoulder abduction movements. Figures 8-9 illustrate the placement of the markers on the upper and lower extremities of the subjects. Table 3 further describes the names of the markers and their positions.
Figure 8 shows the placement of markers in an anterior (Left) and posterior (Right) point of view
Figure 9 shows the placement of markers in an anterior (Left) and posterior (Right) point of view of the lower extremity
Table 3 shows the marker names and their respective positions

Marker Name | Position of Marker | Anatomical Position of Marker
LSHO | Left Shoulder | Left Acromion
LUPA | Left Upper Arm | Left Triceps Brachii
LELB | Left Elbow | Left Olecranon
LWR | Left Wrist | Left Triquetrum
LFINL | Left Finger Lateral | Left Metacarpal 5
LFINM | Left Finger Medial | Left Metacarpal 2
LASI | Left Anterior Pelvic Bone | Left Anterior Superior Iliac Spine
LPSI | Left Posterior Pelvic Bone | Left Posterior Superior Iliac Spine
LTHI | Left Thigh | Left Quadriceps
LKNE | Left Knee | Left Medial Epicondyle
LTIB | Left Tibia | Left Tibia
LANK | Left Ankle | Left Lateral Malleolus (Fibula)
LHEE | Left Heel | Left Calcaneus
LMT1 | Left Big Toe | Left Metatarsal 1
LMT5 | Left Little Toe | Left Metatarsal 5
RSHO | Right Shoulder | Right Acromion
RUPA | Right Upper Arm | Right Triceps Brachii
RELB | Right Elbow | Right Olecranon
RWR | Right Wrist | Right Triquetrum
RFINL | Right Finger Lateral | Right Metacarpal 5
RFINM | Right Finger Medial | Right Metacarpal 2
RASI | Right Anterior Pelvic Bone | Right Anterior Superior Iliac Spine
RPSI | Right Posterior Pelvic Bone | Right Posterior Superior Iliac Spine
RTHI | Right Thigh | Right Quadriceps
RKNE | Right Knee | Right Medial Epicondyle
RTIB | Right Tibia | Right Tibia
RANK | Right Ankle | Right Lateral Malleolus (Fibula)
RHEE | Right Heel | Right Calcaneus
RMT1 | Right Big Toe | Right Metatarsal 1
RMT5 | Right Little Toe | Right Metatarsal 5
C7 | Cervical Vertebrae, C7 | Cervical Vertebrae, C7
CLAV | Clavicle | Clavicular Articulation
STRN | Sternum | Sternum Xiphoid Process
More Related Content

DOCX
researchPaper
PDF
micwic2013_poster
PDF
Interactive Full-Body Motion Capture Using Infrared Sensor Network
PDF
Interactive full body motion capture using infrared sensor network
PDF
Review of Pose Recognition Systems
PDF
Final_draft_Practice_School_II_report
PDF
Paulin hansen.2011.gaze interaction from bed
researchPaper
micwic2013_poster
Interactive Full-Body Motion Capture Using Infrared Sensor Network
Interactive full body motion capture using infrared sensor network
Review of Pose Recognition Systems
Final_draft_Practice_School_II_report
Paulin hansen.2011.gaze interaction from bed

Viewers also liked (11)

PPTX
Applicazioni Windows Store con Kinect 2
DOC
IMRAN 0
PPTX
Topp tips for sikker kjøring
DOCX
NATHANIEL CV
PDF
Face-to-Face
PPTX
Crea ted and presented by mirna mandela
PDF
20151106091843405
PPTX
Ppt. ms iin
PDF
REC01.PDF
RTF
Resume Josephine
Applicazioni Windows Store con Kinect 2
IMRAN 0
Topp tips for sikker kjøring
NATHANIEL CV
Face-to-Face
Crea ted and presented by mirna mandela
20151106091843405
Ppt. ms iin
REC01.PDF
Resume Josephine
Ad

Similar to A musculoskeletal model driven by microsoft kinect sensor v2 data (20)

PDF
Poster Competition - Hwan Lee
PDF
(Fall 2012) Microsoft Kinect - Skeletal Repeatability
PDF
Virtual Yoga System Using Kinect Sensor
PDF
To Design and Develop Intelligent Exercise System
PDF
Enhancing the measurement of clinical outcomes using Microsoft Kinect choices...
PPTX
Measuring the Drop Vertical Jump using the Microsoft Kinect
PPTX
Kinect2 hands on
PPTX
Nui e biometrics in windows 10
PDF
Depth sensor independent body part localization in depth images using a multi...
PPTX
Kinect sensor
PDF
Programming with kinect v2
PPTX
GWAB - Kinecting the Cloud.
PDF
Develop store apps with kinect for windows v2
PPTX
Develop Store Apps with Kinect for Windows v2
PDF
IRJET- Virtual Fitness Trainer with Spontaneous Feedback using a Line of Moti...
PPTX
MSDN LATAM Kinect SDK V2
PPTX
Microsoft Kinect for Windows SDK V2 Developer Preview
PPTX
Writing applications using the Microsoft Kinect Sensor
PDF
Communityday2013
PPTX
Exergaming - Technology and beyond
Poster Competition - Hwan Lee
(Fall 2012) Microsoft Kinect - Skeletal Repeatability
Virtual Yoga System Using Kinect Sensor
To Design and Develop Intelligent Exercise System
Enhancing the measurement of clinical outcomes using Microsoft Kinect choices...
Measuring the Drop Vertical Jump using the Microsoft Kinect
Kinect2 hands on
Nui e biometrics in windows 10
Depth sensor independent body part localization in depth images using a multi...
Kinect sensor
Programming with kinect v2
GWAB - Kinecting the Cloud.
Develop store apps with kinect for windows v2
Develop Store Apps with Kinect for Windows v2
IRJET- Virtual Fitness Trainer with Spontaneous Feedback using a Line of Moti...
MSDN LATAM Kinect SDK V2
Microsoft Kinect for Windows SDK V2 Developer Preview
Writing applications using the Microsoft Kinect Sensor
Communityday2013
Exergaming - Technology and beyond
Ad

Recently uploaded (20)

PDF
Encapsulation theory and applications.pdf
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
PPTX
Big Data Technologies - Introduction.pptx
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PPTX
MYSQL Presentation for SQL database connectivity
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
gpt5_lecture_notes_comprehensive_20250812015547.pdf
PDF
Empathic Computing: Creating Shared Understanding
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
Unlocking AI with Model Context Protocol (MCP)
DOCX
The AUB Centre for AI in Media Proposal.docx
PPTX
A Presentation on Artificial Intelligence
PDF
Machine learning based COVID-19 study performance prediction
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Encapsulation theory and applications.pdf
“AI and Expert System Decision Support & Business Intelligence Systems”
Building Integrated photovoltaic BIPV_UPV.pdf
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
Big Data Technologies - Introduction.pptx
Diabetes mellitus diagnosis method based random forest with bat algorithm
Advanced methodologies resolving dimensionality complications for autism neur...
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Mobile App Security Testing_ A Comprehensive Guide.pdf
MYSQL Presentation for SQL database connectivity
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
gpt5_lecture_notes_comprehensive_20250812015547.pdf
Empathic Computing: Creating Shared Understanding
Review of recent advances in non-invasive hemoglobin estimation
Unlocking AI with Model Context Protocol (MCP)
The AUB Centre for AI in Media Proposal.docx
A Presentation on Artificial Intelligence
Machine learning based COVID-19 study performance prediction
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf

A musculoskeletal model driven by microsoft kinect sensor v2 data

  • 1. 1 A Musculoskeletal Model Driven by Microsoft Kinect Sensor v2 Data Department of Health, Science and Technology, Aalborg University, Denmark Project period: 02.09.14 – 18.12.14 Project group: 748 – 1th semester, Sports Technology. Authors: Lasse Schov Jacobsen Adam Frank Magnus Hansen Morten Kudsk Mikkel Svindt Gammelgaard Supervisor: Michael Skipper Andersen Number of Pages: 14 Supplementary material: 2 Worksheets: 9 Completed: 18.12.14 ABSTRACT Objective. To develop a musculoskeletal model driven by data retrieved from Microsoft Kinect Sensor v2 and compare the output to a musculoskeletal model driven by data from maker- based motion capture system for three different movements. Furthermore, determine the optimal position for the Microsoft Kinect Sensor v2 for each movement. Method. In the positioning test, a combination of seven angles, three heights and three distances was conducted to find the optimal position for obtaining data for a musculoskeletal model, doing a gait, squat and shoulder abduction cycle. When the optimal positions for the three different movements were determined, data for the comparison test were collected for five healthy male subjects. Eight Oqus 1 infrared high-speed cameras and two force platforms were used to collect the maker-less based motion capture data. One Microsoft Kinect Sensor v2 was used to collect the marker-less based motion capture data. AnyBody Modeling System was used to analyze different variables for the two systems. Results. Multiple positions were fund to be optimal for the position of the Microsoft Kinect Sensor v2 at the squat and the shoulder abduction movement. The same positions for these movements were chosen to be the same (0°, 0.75/2.6). The optimal position for the gait movement (0°, 0.75m/3.4m) was determined, based on the highest percentage of tracked Kinect- joints. Strong correlations were found in the comparison test for knee flexion angle and hip flexion angle for both the gait and the squat movement. Doing the shoulder abduction movement, strong correlations were found for shoulder abduction angle (0.99) and moment (0.88). Even though strong correlation were found in the ankle flexion angle (0.71) in the squat movement, other results indicates that the Microsoft Kinect Sensor v2 has limitations tracking the ankle sufficiently. A strong correlation in the ground reaction force (0.81) was observed for the gait movement, where as the ground reaction forces in the squat movement were: left (0.49) and right (0.50). Conclusion. The results of this study show that data obtained by the Microsoft Kinect Sensor v2 can be used as input in a musculoskeletal model. Though the Microsoft Kinect Sensor v2 show some encouraging results for some variables, it still proves insufficient as a alternative to marker-based systems.
  • 2. 2 1. Introduction When analyzing human kinematics and kinetics, motion capture have often been used to detect skeletal movement in space and external forces with force platforms. Data from motion capture systems and external loads can be an input to a musculoskeletal model. Using inverse dynamics, the musculoskeletal model uses kinematics and external loads to compute forces in muscles and joints. A popular motion capture method is based upon infrared cameras, retro reflective skin markers and force plates. This approach has been used in research areas like rehabilitation (1, 2) and sports performance optimization (3-5), where accuracy of movements is necessary to estimate forces in muscles and joints. Despite its popularity, the marker-based motion capture system (MBS) has several limitations. The MBS is expensive due to the cost of the cameras alone, starts from approximately 24.000 U.S. dollars (6). Subsequently the MBS requires skin markers, force platforms and specialized software. Secondly, markers are placed on the skin and can therefore move relative to the underlying bones, giving obscured estimations of the bones (7, 8). Furthermore, the MBS is in some cases not able to detect one or more markers, because the markers are not visible to at least the required minimum of 3 cameras, and thereby the markers become occluded (7). Studies have researched Microsoft Kinect Sensor v1 as an alternative to MBS’s (9-13). The Microsoft Kinect Sensor v1 was introduced in the game industry as a device, which allowed interaction with video game consoles through gesture recognition and voice commands. The depth camera in this device consists of an infrared laser projector as well as an infrared video camera. To recognize gestures within the sensors field of view, it uses the laser projector to project a speckle pattern. The infrared camera uses deformations in the speckle pattern to create a 3-D depth map (9, 11, 12). Based upon a randomized decision forests algorithm, the Microsoft Kinect Sensor v1 is able to automatically determine anatomical landmarks on a subject within the 3-D map. The Microsoft Kinect Sensor v1 is capable of tracking 20 Kinect-joints and creating a 19-segment stick figure (12, 14). Dutta (9) researched the depth mapping capabilities of the Microsoft Kinect Sensor v1 and found promising results when compared to a MBS. Additionally, R.A. Clark et al. researched the Microsoft Kinect Sensor v1’s abilities to assess anatomical landmarks position and angular displacement. R.A. Clark et al. concluded that the Microsoft Kinect Sensor v1 could provide comparable data to a MBS (12). These results suggest that the Microsoft Kinect Sensor v1 has the potential to be a motion capture method, which is cheaper compared to MBS and do not rely on markers. However, researches have shown numerous limitations of the Microsoft Kinect Sensor v1, one of them being its inability to assess internal/external joint rotation in the peripheral limbs (12). This is because the Microsoft Kinect Sensor v1 does not possess the ability to determine an orthogonal axis in relation to the longitudinal axis of the arm, due to inadequate number of points in the stick figure. Another limitation is the resolution of the depth sensor in Microsoft Kinect Sensor v1, which could contribute to lack of smoothness and accuracy of the measurements (11). Furthermore, Pfister et al., who investigated the percentage of tracked Kinect-joints, have proposed issues regarding the position of the Microsoft Kinect Sensor v1. 
The issue arises from the fact that the Microsoft Kinect Sensor v1 is only able to track segments directly in its field of view (11).

In 2014, the Microsoft Kinect Sensor v2 was introduced as an upgraded version of the Microsoft Kinect Sensor v1, developed with a different depth sensor technology than its predecessor. The Microsoft Kinect Sensor v1 uses a structured light technique, whereas the Microsoft Kinect Sensor v2 uses time of flight as its core mechanism of depth retrieval (15). Time-of-flight technology measures the distance from the sensor to the captured object based on the speed of light: the time of flight of the light pulse from the sensor to the captured object is used to estimate the distance to the object (16), as summarized by the relation shown below. The new depth sensor has a resolution of 512 x 424 compared to the Microsoft Kinect Sensor v1's resolution of 320 x 240, which Microsoft claims improves the smoothness and accuracy of the measurements. Furthermore, the Microsoft Kinect Sensor v2 is able to track 25 Kinect-joints compared to the 20 Kinect-joints tracked by the Microsoft Kinect Sensor v1; the additional Kinect-joints have been added to the hand and neck segments, which could enable tracking of internal/external rotation of the arm (17). The developments incorporated in the Microsoft Kinect Sensor v2 therefore have the potential to improve the detection of skeletal movements compared to the Microsoft Kinect Sensor v1 (17).
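As a general illustration of the time-of-flight principle (a standard relation, not quoted from references (15, 16)), the distance d to the object follows from the speed of light c and the measured round-trip time Δt of the light pulse:

```latex
d = \frac{c \, \Delta t}{2}
```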
Therefore, the purpose of this study was to develop a musculoskeletal model driven by data retrieved from the Microsoft Kinect Sensor v2 and compare the output to a musculoskeletal model driven by data from an MBS. The comparison was based upon three movements: gait, squat and shoulder abduction. The squat and shoulder abduction were chosen to represent movements of the lower and upper body, respectively. Gait was selected due to the use of this movement in rehabilitation (18). To address the previously mentioned issue regarding the inability to track joints that are not directly in the sensor's field of view, a test that systematically assessed different positions of the Microsoft Kinect Sensor v2 was included in this study to determine the optimal position for each of the three movements.

2. Method

2.1. Optimal positioning of Microsoft Kinect Sensor v2

2.1.1. Subject information
One male subject (age 22 years, height 1.86 m, weight 82.41 kg) participated in this test. The subject was not affected by any musculoskeletal illnesses or chronic pain conditions. Prior to the test, the subject signed an informed consent statement.

2.1.2. Experimental method
To systematically test the positions of the Microsoft Kinect Sensor v2, different heights, distances and angles were defined. The distances and angles were measured from the center between the two force platforms, see figures 1-2. The force platforms are described further in section 2.2.2. The heights were based upon the maximum and minimum recommendations from Microsoft. The distances were based upon a pretest, which determined the minimum and maximum distances at which the Microsoft Kinect Sensor v2 was able to track all Kinect-joints. The median of the minimum and maximum height and distance was selected, and ±0.4 m was then used to define the upper and lower height and distance. However, the tripod on which the Microsoft Kinect Sensor v2 was mounted limited the maximal height, and 0.05 m was therefore subtracted from it. Hence, the heights 0.75 m, 1.15 m and 1.55 m and the distances 2.6 m, 3 m and 3.4 m were chosen. Seven different angles were chosen in steps of 15° from the frontal to the sagittal plane of a subject standing in the center between the two force platforms, see figures 1-2. For each of the seven angles, three trials were completed for every combination of distance and height.

Data were obtained with the Microsoft Kinect Sensor v2 at a sampling frequency of 30 Hz. A customized program developed using the Microsoft Software Development Kit (SDK) was used to obtain the data directly. The program processed the recording of the Microsoft Kinect Sensor v2 and stored the data in .csv format. The format contained 25 coordinates, which made up a stick figure of 24 segments.

Figure 1 illustrates the positioning of the Microsoft Kinect Sensor v2 at different angles and distances in the gait movement. Figure 2 illustrates the positioning of the Microsoft Kinect Sensor v2 at different angles and distances in the squat and shoulder abduction movements.

2.1.3. Movement protocol
The subject wore tight-fitting shorts to ensure that the Microsoft Kinect Sensor v2 only detected the body segments. The Microsoft Kinect Sensor v2 automatically tracked the Kinect-joints in its field of view. To control the tracking of the Kinect-joints, trials were only initiated when all Kinect-joints were tracked. Therefore, when performing the gait movement, the subject was instructed to initiate the walk from a specific point, defined as one step away from the first force platform. Due to limitations of the Microsoft Kinect Sensor v2's field of
view, the fourth step was shortened and the walk was terminated afterwards, as seen in figure 1. The subject was instructed to initiate gait with the non-dominant leg. This was done to ensure that all subjects would have the same conditions. When performing the squat movement, the subject was instructed to place the feet shoulder-width apart and squat until reaching approximately a 90° angle in the knee joint, before returning to the initial position. When performing the shoulder abduction movement, the subject was instructed to raise the non-dominant arm laterally away from the midline of the body to approximately 90°, before returning it to the initial position. To make the movement velocity similar across trials, a time window was implemented for each movement. This consisted of a start and an end audio cue, which the subject was instructed to follow. The time windows for the gait and squat movements lasted 5 seconds, and the time window for the shoulder abduction movement lasted 7 seconds. The subject was instructed to keep the head in a neutral position when performing the different movements, to ensure similarity in the motion of each trial. One experimenter verified that the subject performed the movements correctly and consistently; if an error in the performance of a trial was observed, the trial was discarded and repeated. Prior to the data collection for each movement, the subject was given 5 minutes to practice the instructed movement.

2.1.4. Computational method
In addition to the 3-D positions of the 25 Kinect-joints, the output of the Microsoft Kinect Sensor v2 also reports the state in which each Kinect-joint was tracked. A Kinect-joint could be classified as not tracked, inferred or tracked, and this state information indicated the confidence of the 3-D position data (19). By systematically studying different positions of the Microsoft Kinect Sensor v2, the state information was used to compute the mean percentage and standard deviation (SD) of tracked Kinect-joints for each position. A customized MATLAB (R2014b, The MathWorks, Inc., Natick, MA, United States) script was used to analyze each position for all of the movements; a minimal sketch of this computation is given below. For each of the movements, the optimal position was determined, defined as the position with the highest percentage of tracked Kinect-joints. The optimal positions were used as the Microsoft Kinect Sensor v2 placements during the comparison test. The specific optimal positions for the gait, squat and shoulder abduction movements can be found in table 1.
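The study used a customized MATLAB script for this step. As a hedged illustration only, the following Python sketch shows how the mean percentage and SD of tracked Kinect-joints could be computed per sensor position, assuming a hypothetical CSV layout with one row per frame and one tracking-state value per Kinect-joint (2 = tracked, 1 = inferred, 0 = not tracked); the file names are made up for the example.

```python
# Illustrative sketch (not the study's MATLAB script): per-position tracking statistics.
import csv
import statistics

def percent_tracked(path):
    """Percentage of joint observations flagged as 'tracked' (state 2) in one trial file."""
    tracked = total = 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            states = [int(value) for value in row]
            tracked += sum(1 for state in states if state == 2)
            total += len(states)
    return 100.0 * tracked / total

def position_statistics(trial_paths):
    """Mean and SD of the tracked-joint percentage over the trials of one sensor position."""
    percentages = [percent_tracked(path) for path in trial_paths]
    return statistics.mean(percentages), statistics.stdev(percentages)

# Example: three gait trials recorded at 0 degrees, 3.4 m distance and 0.75 m height.
mean_pct, sd_pct = position_statistics(
    [f"gait_0deg_3.4m_0.75m_trial{k}_states.csv" for k in (1, 2, 3)]
)
print(f"{mean_pct:.2f} % +/- {sd_pct:.2f} %")
```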
2.2. Comparison between a marker-based system and a marker-less system

2.2.1. Subject information
Five male subjects (age 26 ± 3.7 years, height 182.4 ± 5.5 cm, weight 81.8 ± 7.1 kg) participated in this study. The subjects were not affected by any musculoskeletal illnesses or chronic pain conditions. Prior to the test, each subject signed an informed consent statement.

2.2.2. Experimental method
The Microsoft Kinect Sensor v2 was compared to an MBS consisting of eight Oqus 1 infrared high-speed cameras sampling at 100 Hz, combined with Qualisys Track Manager v. 2.9 (Qualisys, Sweden). Ground reaction forces were collected using two AMTI (MA, USA) force plates, models OR6-7-1000 and OR6-5-2000, with a sampling rate of 2000 Hz and filtering at 1050 Hz. 33 retro-reflective markers were placed on each subject in accordance with a full-body protocol (see supplementary material). The retro-reflective markers were placed to track the motion of each segment, see figure 4. Additionally, the markers were placed on bony landmarks to ensure the least amount of movement relative to the underlying bone. The subjects wore tight-fitting shorts, which enabled the markers to be placed directly on the skin. The protocol for the gait, squat and shoulder abduction movements was similar to the protocol used for the optimal positioning test.

2.2.3. Computational method
The musculoskeletal models used in this study were developed using the AnyBody Modeling System (AMS) v. 6.0.3 (AnyBody Technology A/S, Aalborg, Denmark). The models were based on the GaitFullBody template (20) from the AnyBody Managed Model Repository v. 1.5. Two different musculoskeletal models were used for each trial: the data obtained by the MBS drove one musculoskeletal model, and the data obtained by the Microsoft Kinect Sensor v2 drove the other. Henceforth, the system consisting of the Microsoft Kinect Sensor v2 and the associated musculoskeletal model is referred to as the MLS. For scaling the GaitFullBody model to the sizes of the different subjects, the scaling method proposed by Rasmussen et al. (21) was used. This method establishes coherence between the geometry and mass of segments and enables subject-specific scaling
of the model, without compromising the kinematic function of the model. For the gait and squat movements, constant-strength muscles were added to the lower extremities; for the shoulder abduction movement, constant-strength muscles were added to both arms and the neck. These muscles were added to enable estimation of internal joint forces using inverse dynamics. For both models, a second-order zero-phase-lag low-pass Butterworth filter with a cut-off frequency of 6 Hz was applied to minimize fluctuations in the data trajectories. Spyder v. 2.3.1 (the Scientific Python Development Environment) was used to drive several models simultaneously. Furthermore, the neck point was assumed not to influence the movements, because the head was held in a neutral position during all movements; the neck was therefore fixed in a neutral position in both models.

2.2.3.1. Musculoskeletal model driven by the marker-less data
MLS data were collected using the customized program previously described in section 2.1.4. The segment lengths of the stick figure were computed for one data point in the movement's initial position. The length of a segment was computed by subtracting the 3-D positions of the two Kinect-joints that defined the segment, and for every paired segment the mean of the two segment lengths was used. Based upon these segment lengths, the GaitFullBody model was scaled (a minimal sketch of the filtering and segment-length steps is given below, after the Figure 3 caption). Virtual markers were defined on the GaitFullBody model and on the Kinect-based stick figure, see figure 3. The virtual markers were defined at all the Kinect-joint locations illustrated in figure 3, and corresponding markers were defined on the GaitFullBody model. The least-square difference between the virtual markers and the corresponding Kinect-joints obtained with the MLS was minimized using the local optimization-based method proposed by Andersen et al. (8). This was done because the number of equations within the model exceeded the total number of degrees of freedom, making the model over-determined. For all pairs of markers, weights were introduced so that the effect of segment-length discrepancies between the Kinect-based stick figure and the GaitFullBody model could be reduced. This was accomplished by assigning a high weight to the hip virtual markers in all three directions and reducing the weights for the knee, ankle, shoulder, elbow and wrist markers in the longitudinal direction. Due to inadequate information for proper estimation of specific movements, drivers were added to fix the lateral bending, extension and rotation of the thorax.

The MLS did not provide any information regarding ground reaction forces (GRFs). The GRFs were therefore predicted using the method suggested by Fluit et al. (22). This method solves the indeterminacy problem during the double-contact phase by computing the GRFs with the muscle recruitment algorithm (18). This is done by introducing artificial muscle-like actuators at 12 contact points under each foot. For each contact point, five muscle-like actuators were introduced, generating a normal force in the vertical direction and static friction forces in the medio-lateral and anterior-posterior directions. An actuator only generated a reaction force if its contact point was within 0.05 m of the virtual floor and had a velocity below 1.2 m/s.

Figure 3 illustrates the Kinect-joints of the stick figure obtained from the Microsoft Kinect Sensor v2 (blue) and the associated virtual markers on the model (red). Kinect-joint names without an underscore indicate a paired Kinect-joint; Kinect-joint names with an underscore indicate an unpaired Kinect-joint.
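The filtering and segment-length steps described above can be illustrated with a short Python sketch. The study itself used MATLAB and the AnyBody pipeline, so the array layout, the 30 Hz Kinect sampling rate and the joint indices below are assumptions made only for this illustration.

```python
# Illustrative sketch of the preprocessing described above (not the study's actual code).
# 'joints' is assumed to be a NumPy array of shape (n_frames, 25, 3): Kinect-joint positions in metres.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_zero_phase(joints, fs=30.0, cutoff=6.0, order=2):
    """Second-order zero-phase-lag low-pass Butterworth filter with a 6 Hz cut-off."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    # filtfilt runs the filter forwards and backwards, which removes the phase lag.
    return filtfilt(b, a, joints, axis=0)

def segment_length(joints, joint_a, joint_b, frame=0):
    """Length of the segment spanned by two Kinect-joints at one frame (initial position)."""
    return float(np.linalg.norm(joints[frame, joint_a] - joints[frame, joint_b]))

def paired_segment_length(joints, left_pair, right_pair, frame=0):
    """Mean of the left and right segment lengths, as used for paired segments."""
    left = segment_length(joints, *left_pair, frame=frame)
    right = segment_length(joints, *right_pair, frame=frame)
    return 0.5 * (left + right)

# Hypothetical joint indices, for illustration only (the SDK defines the actual numbering).
HIP_LEFT, KNEE_LEFT, HIP_RIGHT, KNEE_RIGHT = 12, 13, 16, 17
joints = lowpass_zero_phase(np.random.rand(150, 25, 3))  # placeholder data: 5 s at 30 Hz
thigh_length = paired_segment_length(joints, (HIP_LEFT, KNEE_LEFT), (HIP_RIGHT, KNEE_RIGHT))
print(f"Estimated (paired) thigh segment length: {thigh_length:.3f} m")
```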
2.2.3.2. Musculoskeletal model driven by marker-based data
Retro-reflective markers attached to the skin drove the model for the MBS, see figure 4. The first gait trial for each subject was used for scaling the GaitFullBody model for all movements. The segment lengths and
marker positions were estimated by minimizing the least-square difference between the GaitFullBody model markers and the corresponding measured marker positions, using the previously mentioned optimization method proposed by Andersen et al. (8). The measured GRFs were applied under the feet. Furthermore, the joint reaction forces and the muscle forces were computed using the AMS muscle recruitment solver (23).

2.2.4. Data analysis
A complete gait cycle, from heel strike to heel strike of the non-dominant leg, was analyzed. This was done because the stride length of the dominant leg was shortened. The shoulder abduction movement was analyzed for the non-dominant arm, from the point where the arm began to move away from the body until the point where the arm returned to its initial position along the torso. The squat was analyzed from the beginning of the knee flexion until the subject returned to the initial position. MATLAB was used to compute specific variables for the three movements. The variables computed for the squat and gait movements were the ankle flexion angle (AFA), knee flexion angle (KFA), hip flexion angle (HFA), ankle flexion moment (AFM), knee flexion moment (KFM), hip flexion moment (HFM), ankle joint reaction force (AJRF), knee joint reaction force (KJRF), hip joint reaction force (HJRF), maximal ankle joint reaction force (MAJRF), maximal knee joint reaction force (MKJRF), maximal hip joint reaction force (MHJRF), ankle joint range-of-motion (AROM), knee joint range-of-motion (KROM) and hip joint range-of-motion (HROM). The vertical ground reaction force (GRF) and maximal vertical ground reaction force (MGRF) were also analyzed; for the squat movement, GRF and MGRF were computed for both the left and the right foot. The variables analyzed for the shoulder abduction movement were the shoulder abduction angle (SAA), shoulder abduction moment (SAM), shoulder abduction joint reaction force (SJRF), shoulder abduction range-of-motion (SROM) and shoulder abduction maximal joint reaction force (MSJRF). Pearson's correlation coefficient (r) and the root-mean-square deviation (RMSD) were used to compare the data from the comparison test, except for the maximum and range-of-motion values, for which only the mean and standard deviation (SD) were computed (a minimal sketch of these comparison metrics is given below). Furthermore, the position of the hand (PH), which shows the estimated movement in 3-D, and the acceleration of the hand (AH) in the shoulder abduction movement were plotted to visualize fluctuations for both the MLS and the MBS.
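As a hedged illustration of the comparison metrics named above (not the study's MATLAB code), Pearson's r and the RMSD between two curves could be computed as follows; it is assumed that both systems' outputs have already been resampled to a common movement-cycle axis, and the example data are fabricated placeholders.

```python
# Illustrative sketch: comparison metrics between an MLS curve and an MBS curve.
# Both inputs are 1-D arrays sampled at the same points of the movement cycle (assumption).
import numpy as np

def pearson_r(mls_curve, mbs_curve):
    """Pearson's correlation coefficient between the two curves."""
    return float(np.corrcoef(mls_curve, mbs_curve)[0, 1])

def rmsd(mls_curve, mbs_curve):
    """Root-mean-square deviation between the two curves."""
    diff = np.asarray(mls_curve) - np.asarray(mbs_curve)
    return float(np.sqrt(np.mean(diff ** 2)))

# Placeholder example: 101 samples over 0-100 % of a gait cycle.
cycle = np.linspace(0.0, 100.0, 101)
mbs_kfa = 30.0 + 25.0 * np.sin(2.0 * np.pi * cycle / 100.0)  # fabricated MBS-like curve
mls_kfa = mbs_kfa + np.random.normal(0.0, 2.0, cycle.size)   # noisy MLS-like curve
print(pearson_r(mls_kfa, mbs_kfa), rmsd(mls_kfa, mbs_kfa))
```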
3. Results

3.1. Optimal Kinect positioning test
Selected results from the optimal positioning test are shown in table 1. The results showed that the Microsoft Kinect Sensor v2 could track all Kinect-joints 100% of the time at several positions for the shoulder abduction movement. At 0°, all Kinect-joints were tracked 100% in the positions 2.6 m/0.75 m, 2.6 m/1.15 m and 3.4 m/1.55 m. Although the results are not shown, the Microsoft Kinect Sensor v2 was also capable of tracking all Kinect-joints 100% from 0° to 45° at different positions. In the squat movement, the Microsoft Kinect Sensor v2 was able to track all Kinect-joints 100% at 0° for one position, 2.6 m/0.75 m, and (although the results are not shown) at 15° for the same position. In the gait movement, the Microsoft Kinect Sensor v2 was incapable of tracking all Kinect-joints 100% at any position; the position with the highest percentage of tracked Kinect-joints (99.96%) was 3.4 m/0.75 m at 0°. The Microsoft Kinect Sensor v2 failed to track the Kinect-joints at 0° in three positions: 2.6 m/0.75 m, 2.6 m/1.55 m and 3 m/1.55 m.

Table 1 shows the percentage of tracked Kinect-joints in the positioning test for different heights and distances for gait, squat and shoulder abduction at 0°. * indicates that this height/distance was chosen for the comparison test. - indicates that the tracking of the Kinect-joints failed.

Distance/height   Gait 0°          Squat 0°         Shoulder 0°
2.6 m/0.75 m      -                100*             100*
2.6 m/1.15 m      95.91 ± 0.72     99.93 ± 0.07     100
2.6 m/1.55 m      -                96.47 ± 1.21     99.99 ± 0.02
3 m/0.75 m        99.85 ± 0.05     99.90 ± 0.01     99.98 ± 0.02
3 m/1.15 m        99.79 ± 0.30     99.84 ± 0.15     99.96 ± 0.03
3 m/1.55 m        -                99.97 ± 0.02     99.99 ± 0.02
3.4 m/0.75 m      99.96 ± 0.07*    99.42 ± 0.47     99.90 ± 0.08
3.4 m/1.15 m      99.79 ± 0.22     99.52 ± 0.36     99.96 ± 0.04
3.4 m/1.55 m      98.16 ± 0.22     99.63 ± 0.02     100

Figure 4 illustrates the reflective markers obtained from the MBS (blue) and the associated virtual markers on the model (red). Marker names without an underscore indicate a paired marker; marker names with an underscore indicate an unpaired marker.

3.2. Comparison test
The results from the comparison test are illustrated in figures 5-7 and table 2. Analysis of the MLS in the shoulder abduction movement showed some inconsistencies in the rotation of the arm. For the AMS to be able to perform the kinematic analysis, drivers were added to fix the glenohumeral rotation and the elbow pronation/supination. Furthermore, the complex construction of the shoulder muscles within the AMS resulted in errors for multiple trials when performing inverse dynamics; therefore, all muscles were removed from the models when analyzing the shoulder abduction movement. A few other errors occurred when performing the kinematic analysis in the AMS for the gait and shoulder abduction movements for both systems. For each subject, a trial was discarded if an error occurred. If no error occurred for a subject, a random trial was discarded in order to have an equal number of trials for each subject for each movement. This meant that four trials for each subject were included in the analysis of the gait and shoulder abduction movements.

Table 2 shows the mean correlation coefficient (r) ± SD, RMSD ± SD and mean difference ± SD between the MLS and the MBS for gait, squat and shoulder abduction.

Variable            Mean r ± SD    RMSD ± SD          Variable              Mean diff. ± SD

Gait
GRF left (% BW)     0.81 ± 0.22    32.89 ± 7.04       MGRF left (% BW)      29.41 ± 13.59
AFA (deg)           0.6 ± 0.45     17.12 ± 1.63       AROM (deg)            26.79 ± 9.38
KFA (deg)           0.93 ± 0.1     16.33 ± 1.17       KROM (deg)            4.29 ± 5.65
HFA (deg)           0.96 ± 0.07    8.99 ± 3.11        HROM (deg)            4.39 ± 6.42
AJRF (% BW)         0.68 ± 0.36    158.57 ± 32.42     MAJRF (% BW)          50.78 ± 13.51
KJRF (% BW)         0.67 ± 0.37    125.16 ± 83.25     MKJRF (% BW)          16.59 ± 14.73
HJRF (% BW)         0.66 ± 0.38    163.11 ± 177.81    MHJRF (% BW)          271.49 ± 335.59
AFM (% BW*BH)       0.57 ± 0.46    2.49 ± 0.41
KFM (% BW*BH)       0.47 ± 0.56    1.78 ± 0.92
HFM (% BW*BH)       0.56 ± 0.46    2.19 ± 0.99

Squat
GRF left (% BW)     0.49 ± 0.54    5.19 ± 1.04        MGRF left (% BW)      7.77 ± 4.44
GRF right (% BW)    0.5 ± 0.53     5.65 ± 1.57        MGRF right (% BW)     5.18 ± 5.53
AFA (deg)           0.71 ± 0.44    16.26 ± 6.87       AROM (deg)            25.51 ± 18.77
KFA (deg)           0.95 ± 0.08    11.7 ± 4.33        KROM (deg)            4.66 ± 10.16
HFA (deg)           0.93 ± 0.1     16.09 ± 6.13       HROM (deg)            10.53 ± 10
AJRF (% BW)         0.6 ± 0.43     42.56 ± 4.77       MAJRF (% BW)          3.01 ± 6.55
KJRF (% BW)         0.63 ± 0.4     30.84 ± 10.95      MKJRF (% BW)          4.41 ± 11.41
HJRF (% BW)         0.83 ± 0.23    33.13 ± 17.65      MHJRF (% BW)          27.18 ± 20.32
AFM (% BW*BH)       0.63 ± 0.4     0.72 ± 0.13
KFM (% BW*BH)       0.89 ± 0.15    1.41 ± 0.42
HFM (% BW*BH)       0.85 ± 0.18    0.88 ± 0.18

Shoulder abduction
SAA (deg)           0.99 ± 0.01    8.12 ± 2.32        SROM (deg)            0.21 ± 3.39
SAM (% BW*BH)       0.88 ± 0.18    0.10 ± 0.05
Overall, the results showed that the MLS was, to some extent, capable of detecting joint angles with an accuracy comparable to the MBS. However, the MLS tracking showed fluctuations compared to the MBS.

3.3. Gait
Strong correlations were found between the MLS and the MBS for GRF (0.81), KFA (0.93) and HFA (0.96). All other variables showed moderate correlations, ranging from 0.47 to 0.68. However, the RMSD showed a substantial difference for KFA (16.33). The mean differences between the MLS and the MBS for KROM (4.29) and HROM (4.39) were, to some extent, similar. The graph for GRF illustrates inconsistencies at the end of the gait cycle, see figure 6: the MLS predicts a GRF where the MBS does not measure any GRF. Furthermore, a substantially larger SD is seen at the first peak and at the end of the gait cycle for GRF. The same increases in SD are seen in the graphs illustrating AFM, KFM, HFM, AJRF, KJRF and HJRF.

3.4. Squat
Strong correlations were found between the two systems for AFA (0.71), KFA (0.95), HFA (0.93), HJRF (0.83), KFM (0.89) and HFM (0.85). All other variables showed moderate correlations, ranging from 0.49 to 0.63. However, the mean difference for AROM (25.51) was substantially larger than those for KROM (4.66) and HROM (10.53). This can also be seen in the associated graphs in figure 7. Furthermore, the graphs illustrating GRF left and GRF right show a substantial SD throughout the squat cycle.

3.5. Shoulder abduction
Strong correlations were found between the two systems for SAA (0.99) and SAM (0.88). Additionally, the mean difference in SROM (0.21) showed that the two systems were, to some extent, similar. In figure 5, the illustration of the AH for the MLS is restricted to accelerations between -50 m/s² and 50 m/s², even though outliers up to 500 m/s² were observed.

Figure 5 The curves represent the mean values of the subjects in the MBS (red) and MLS (blue) trials for SAA and SAM. The shaded areas surrounding the curves represent the SD for the MBS (shaded red) and the MLS (shaded blue). In the graphs representing the PH and AH for the MBS and the MLS, the anterior-posterior direction (green), medial-lateral direction (red) and vertical direction (blue) are illustrated. The x-axis represents the shoulder abduction cycle completion in percent, where 0 is the initiation of the shoulder abduction cycle and 100 is when the arm has returned to its initial position.
Figure 6 The curves represent the mean values of the subjects in the MBS (red) and the MLS (blue) trials. The shaded areas surrounding the curves represent the SD for the MBS (shaded red) and the MLS (shaded blue). The x-axis represents the gait cycle completion in percent, where 0 is the initiation of the gait cycle and 100 is the termination of the gait cycle.
Figure 7 The curves represent the mean values of the subjects in the MBS (red) and the MLS (blue) trials. The shaded areas surrounding the curves represent the SD for the MBS (shaded red) and the MLS (shaded blue). The AFA graph in the bottom right corner represents the data collected for each trial for every subject. The x-axis represents the squat cycle completion in percent, where 0 is the initiation of the squat cycle and 100 is the termination of the squat cycle.
4. Discussion

4.1. Optimal position test
The optimal position test was conducted to systematically assess different positions of the Microsoft Kinect Sensor v2 in order to determine the optimal position for each of the three movements. For the squat and shoulder abduction movements, the Microsoft Kinect Sensor v2 was able to track all Kinect-joints 100% of the time at multiple positions; therefore, multiple optimal positions were identified for both movements. The positions for these two movements were chosen to be the same to minimize potential errors when adjusting the Microsoft Kinect Sensor v2 in the comparison test. Positions at 0° were chosen in accordance with the recommendations of previous studies (11, 24). As seen in table 1, the Microsoft Kinect Sensor v2 was not capable of tracking all Kinect-joints 100% of the time at any position for the gait movement; therefore, the position with the highest percentage of tracked Kinect-joints was chosen as the optimal position. The chosen positions for the three movements are shown in table 1. A limitation of the positioning test was that the state information regarding the tracking of Kinect-joints did not take the precision of the data into account. This means that when fluctuations occurred in the data, the state information did not change. As a supplement, a fluctuation threshold could be applied to assess the fluctuations in the tracking of the Kinect-joints as well.

4.2. Comparison test
As previously mentioned, a musculoskeletal model driven by data obtained from the MLS was developed to compare selected outputs with those of an existing musculoskeletal model driven by data obtained from the MBS. Overall, the results showed inconsistencies between the MLS and the MBS for the investigated movements. This is particularly seen in the results regarding the ankle data. The AFA in the squat movement showed a strong correlation, but with a substantial SD. Furthermore, the differences in AROM are considerably larger than those for KROM and HROM in both the squat and gait movements. This suggests that the MLS has difficulties tracking the ankles. However, the results of this study show tendencies towards an improvement in the Microsoft Kinect Sensor v2's capability of tracking KFA and HFA compared to the Microsoft Kinect Sensor v1. The correlations between the MLS and the MBS are strong for KFA and HFA in both the squat and gait movements. Moreover, this disagrees with the findings of previous studies (11, 13), which concluded that the Microsoft Kinect Sensor v1 was incapable of tracking the lower extremities with sufficient accuracy.

To describe the MLS's capability of predicting forces, a comparison between the measured GRF and the predicted GRF was made. Despite the strong correlation in GRF for the gait movement, a substantial SD was observed, as can be seen in the associated graph in figure 6. A substantial peak can be seen at the start of the midstance phase of the gait cycle, where the graph shows an increase in SD. This leads to a lower correlation between the MLS and the MBS, as well as inconsistencies in the JRFs and moments between the MLS and the MBS. This is also illustrated in the associated graphs, where the SD of the predicted JRFs and moments increases at the same point. This issue could be related to the MLS's inability to track the ankles sufficiently. Another inconsistency in the prediction of GRF during the gait cycle occurs during the swing phase of the foot.
In the graph, the swing phase occurs from the point where the GRF is zero until the end of the gait cycle. The graph shows that the MLS predicts some GRF, with a substantial SD, before the next heel strike occurs. This is partly due to an irregular dorsal flexion of the ankle, which can be seen in figure 6, and premature interaction with the virtual floor. This causes the inconsistencies in the JRFs and moments at the end of the gait cycle, which lower the correlation between the MLS and the MBS. During the squat, a similar problem occurred in the comparison of the GRFs between the MLS and the MBS. The associated graphs in figure 7 illustrate how the predicted GRFs fluctuated around the measured GRFs, which resulted in the moderate correlations. This is again due to the MLS's inability to track the ankle sufficiently. This inability is shown in the individual AFA plot in figure 7, which illustrates the fluctuation and range of the detected ankle angles for all trials performed by all subjects. This fluctuation leads to a lower correlation between the two systems for the GRFs, moments, JRFs and the AFA. Previous studies (11, 13) have concluded that the Microsoft Kinect Sensor v1 was capable of tracking the upper extremities with sufficient accuracy. This is in agreement with the results shown in figure 5 and table 2, which show strong correlations in SAA and SAM,
and a mean difference in SROM of 0.21°. Despite the strong correlations, fluctuations in SAM can be seen in figure 5. The origin of these fluctuations is illustrated in the AH plot for the MLS, which shows large changes in the acceleration of the hand compared to the AH plot for the MBS. These changes may be caused by the lack of smoothness and accuracy associated with the depth sensor's resolution (11).

An issue regarding the methods of this comparative study is the limitations of the MBS. Mainly, the accuracy of the MBS suffers from soft-tissue artifacts, with markers moving relative to the underlying bone. This means that the MBS cannot be interpreted as a gold standard for obtaining kinematic input data (25). Hence, the assessment of the MLS's abilities based upon a comparison with the MBS could be affected by the precision of the MBS. Regardless of its limitations, the MBS is one of the most common methods for obtaining kinematic input (25). For that reason it serves as a valuable reference system for the evaluation of the MLS as an alternative.

The segment lengths estimated by the MLS were the same for paired segments such as the legs. This could pose a limitation because of possible bilateral asymmetry of the subjects (26). Furthermore, the lengths were measured at a single time point in the trial. This could also pose a limitation within the MLS, due to the large fluctuations observed in the data. If the segment lengths were estimated at a time point where the data suffered from large fluctuations, it could result in errors in the kinematic estimation of the movements. These limitations could explain some of the irregular results previously mentioned.

For the purpose of using the MLS in rehabilitation and sports performance optimization, the accuracy of the MLS seems to be insufficient. The tracking of the Kinect-joints needs to be more accurate to correctly estimate forces in muscles and joints. As assessed in the optimal position test, the position in which the MLS is most capable of tracking the Kinect-joints is directly in front of the subject. As discussed, this poses a limitation in tracking the ankle flexion, thereby dislocating the foot segment, which additionally affects the prediction of GRFs. To estimate the ankle flexion more accurately with the MLS, multiple Microsoft Kinect Sensor v2s could be added to the system. In the study by Andersen et al. (27), dual Microsoft Kinect Sensor v1s were combined with the AMS using iPi Motion Capture v. 2.0. Using iPi or similar software that enables multiple Microsoft Kinect Sensor v2s to be used by the MLS could increase the accuracy of the 3-D position data obtained with the MLS.

5. Conclusion
The results of this study show that data obtained by the Microsoft Kinect Sensor v2 can, to some extent, be used as input to a musculoskeletal model. However, in comparison with the MBS, the results suggest that further development of the tracking accuracy is required. In particular, the estimation of the ankle Kinect-joint needs to be improved. Despite the limitations of the Microsoft Kinect Sensor v2, the results concerning the tracking of hip and knee flexion angles in gait and squat were encouraging compared to the abilities of the MBS. Regardless, the use of a single Microsoft Kinect Sensor v2 is an inadequate alternative for obtaining data for analysis of human kinematics. Therefore, future studies should implement an additional Microsoft Kinect Sensor v2 in the protocol for testing of optimal positioning and movement analysis.
6. Acknowledgement
We express our thanks to the participants for their time and effort. We would also like to thank Michael Skipper Andersen for his guidance throughout the process.
7. Literature
(1) Tranberg R, Zugner R, Karrholm J. Improvements in hip- and pelvic motion for patients with osseointegrated trans-femoral prostheses. Gait Posture 2011 Feb;33(2):165 - 168.
(2) Eek MN, Tranberg R, Beckung E. Muscle strength and kinetic gait pattern in children with bilateral spastic CP. Gait Posture 2011 Mar;33(3):333 - 337.
(3) Michaud-Paquette Y, Magee P, Pearsall D, Turcotte R. Whole-body predictors of wrist shot accuracy in ice hockey: a kinematic analysis. Sports Biomechanics 2011;10(1):12 - 21.
(4) Sommer M, Häger C, Rönnqvist L. Synchronized metronome training induces changes in the kinematic properties of the golf swing. Sports Biomechanics 2014;13(1):1 - 16.
(5) Rasmussen J, Holmberg LJ, Sørensen K, Kwan M, Andersen MS, de Zee M. Performance optimization by musculoskeletal simulation. Movement & Sport Sciences - Science & Motricité 2012(75):73 - 83.
(6) Vicon | Systems. Available at: http://www.vicon.com/system/bonita. Accessed 12/15/2014.
(7) Herda L, Fua P, Plänkers R, Boulic R, Thalmann D. Using skeleton-based tracking to increase the reliability of optical motion capture. Human Movement Science 2001;20(3):313 - 341.
(8) Andersen MS, Damsgaard M, MacWilliams B, Rasmussen J. A computationally efficient optimisation-based method for parameter identification of kinematically determinate and over-determinate biomechanical systems. Comput Methods Biomech Biomed Engin 2010;13(2):171 - 183.
(9) Dutta T. Evaluation of the Kinect sensor for 3-D kinematic measurement in the workplace. Appl Ergon 2012 Jul;43(4):645 - 649.
(10) Fernández-Baena A, Susin A, Lligadas X. Biomechanical Validation of Upper-Body and Lower-Body Joint Movements of Kinect Motion Capture Data for Rehabilitation Treatments. 2012 Fourth International Conference on Intelligent Networking and Collaborative Systems 2012;1(12):656 - 661.
(11) Pfister A, West AM, Bronner S, Noah JA. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis. J Med Eng Technol 2014;38(5):274 - 280.
(12) Clark RA, Pua YH, Fortin K, Ritchie C, Webster KE, Denehy L, et al. Validity of the Microsoft Kinect for assessment of postural control. Gait Posture 2012 Jul;36(3):372 - 377.
(13) Bonnechere B, Jansen B, Salvia P, Bouzahouene H, Omelina L, Moiseev F, et al. Validity and reliability of the Kinect within functional assessment activities: comparison with standard stereophotogrammetry. Gait Posture 2014;39(1):593 - 598.
(14) Shotton J, Sharp T, Kipman A, Fitzgibbon A, Finocchio M, Blake A, et al. Real-time human pose recognition in parts from single depth images. Commun ACM 2013;56(1):116 - 124.
(15) Vineetha GR, Sreeji C, Lentin J. Face Expression Detection Using Microsoft Kinect with the Help of Artificial Neural Network. Trends in Innovative Computing 2012 - Intelligent Systems Design 2012:176 - 180.
(16) Ganapathi V, Plagemann C, Koller D, Thrun S. Real Time Motion Capture Using a Single Time-Of-Flight Camera. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2010:755 - 762.
(17) Microsoft. Kinect for Windows features. 2014; Available at: http://www.microsoft.com/en-us/kinectforwindows/meetkinect/features.aspx. Accessed 12/04, 2014.
(18) Damsgaard M, Rasmussen J, Christensen ST, Surma E, de Zee M. Analysis of musculoskeletal systems in the AnyBody Modeling System. Simulation Modelling Practice and Theory 2006;14(8):1100 - 1111.
(19) Microsoft Developer Network. Microsoft Kinect TrackingState Enumeration. 2014; Available at: http://msdn.microsoft.com/en-us/library/microsoft.kinect.kinect.trackingstate.aspx. Accessed 12/08, 2014.
(20) Klein Horsman MD, Koopman HFJM, van der Helm FCT, Prosé LP, Veeger HEJ. Morphological muscle and joint parameters for musculoskeletal modelling of the lower extremity. Clin Biomech 2007;22(2):239 - 247.
(21) Rasmussen J, de Zee M, Damsgaard M, Christensen ST, Marek C, Siebertz K. A General Method for Scaling Musculo-Skeletal Models. International Symposium on Computer Simulation in Biomechanics 2005.
(22) Fluit R, Andersen MS, Kolk S, Verdonschot N, Koopman HF. Prediction of ground reaction forces and moments during various activities of daily living. J Biomech 2014 Jul 18;47(10):2321 - 2329.
(23) Rasmussen J, Damsgaard M, Voigt M. Muscle recruitment by the min/max criterion — a comparative numerical study. J Biomech 2001;34(3):409 - 415.
(24) Nambiar AM, Correia P, Soares LD. Frontal gait recognition combining 2D and 3D data. MM&Sec '12 Proceedings on Multimedia and Security 2012:145 - 150.
(25) Benoit DL, Ramsey DK, Lamontagne M, Xu L, Wretenberg P, Renström P. Effect of skin movement artifact on knee kinematics during gait and cutting motions measured in vivo. Gait Posture 2006;24(2):152 - 164.
(26) Auerbach BM, Ruff CB. Limb bone bilateral asymmetry: variability and commonality among modern humans. J Hum Evol 2006;50(2):203 - 218.
(27) Andersen MS, Yang J, de Zee M, Zhou L, Bai S, Rasmussen J. Full-body musculoskeletal modeling using dual Microsoft Kinect sensors and the AnyBody Modeling System. Proceedings of the 14th International Symposium on Computer Simulation in Biomechanics 2013:23 - 24.
8. Supplementary material

8.1. Marker placement
The placement of the markers was conducted according to a previously performed study by Andersen et al. (27). In total, 33 passive reflective markers were placed on the subjects during the gait, squat and shoulder abduction movements. Figures 8-9 illustrate the placement of the markers on the upper and lower extremities of the subjects, and table 3 further describes the marker names and their positions.

Figure 8 shows the placement of markers in an anterior (left) and posterior (right) point of view.

Figure 9 shows the placement of markers in an anterior (left) and posterior (right) point of view of the lower extremity.
Table 3 shows the marker names and their respective positions.

Marker Name   Position of Marker            Anatomical Position of Marker
LSHO          Left Shoulder                 Left Acromion
LUPA          Left Upper Arm                Left Triceps Brachii
LELB          Left Elbow                    Left Olecranon
LWR           Left Wrist                    Left Triquetrum
LFINL         Left Finger Lateral           Left Metacarpal 5
LFINM         Left Finger Medial            Left Metacarpal 2
LASI          Left Anterior Pelvic Bone     Left Anterior Superior Iliac Spine
LPSI          Left Posterior Pelvic Bone    Left Posterior Superior Iliac Spine
LTHI          Left Thigh                    Left Quadriceps
LKNE          Left Knee                     Left Medial Epicondyle
LTIB          Left Tibia                    Left Tibia
LANK          Left Ankle                    Left Lateral Malleolus (fibula)
LHEE          Left Heel                     Left Calcaneus
LMT1          Left Little Toe               Left Metatarsal 1
LMT5          Left Big Toe                  Left Metatarsal 5
RSHO          Right Shoulder                Right Acromion
RUPA          Right Upper Arm               Right Triceps Brachii
RELB          Right Elbow                   Right Olecranon
RWR           Right Wrist                   Right Triquetrum
RFINL         Right Finger Lateral          Right Metacarpal 5
RFINM         Right Finger Medial           Right Metacarpal 2
RASI          Right Anterior Pelvic Bone    Right Anterior Superior Iliac Spine
RPSI          Right Posterior Pelvic Bone   Right Posterior Superior Iliac Spine
RTHI          Right Thigh                   Right Quadriceps
RKNE          Right Knee                    Right Medial Epicondyle
RTIB          Right Tibia                   Right Tibia
RANK          Right Ankle                   Right Lateral Malleolus (fibula)
RHEE          Right Heel                    Right Calcaneus
RMT1          Right Little Toe              Right Metatarsal 1
RMT5          Right Big Toe                 Right Metatarsal 5
C7            Cervical Vertebrae, C7        Cervical Vertebrae, C7
CLAV          Clavicle                      Clavicular Articulation
STRN          Sternum                       Sternum Xiphoid Process