dpb <none@non.net> wrote in message <kljore$g4v$1@speranza.aioe.org>...
> On 4/28/2013 11:41 AM, Mary wrote:
> ...
>
> >> W/O far more actual knowledge about the experiment and the data I have
> >> no idea about whether it's "good" or not, but what about just
> >> standardizing all of the time series--i.e., subtract the mean, divide
> >> by the estimated std dev.
> >>
> >> And, I'd ask why you can't do something similar over the duration--or
> >> use a normalized cumulative time over the duration as a slightly
> >> different presentation of the same idea...
> >>
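To make that suggestion concrete, here's a minimal Python/NumPy sketch of
it; the bell-shaped signal is invented and just stands in for one trial's
real angle data:

import numpy as np

# Invented stand-in for one trial's angle-vs-time series.
t = np.linspace(0.0, 10.0, 500)                # time, seconds
x = np.exp(-0.5 * ((t - 5.0) / 1.5) ** 2)      # bell-shaped signal

# Standardize: subtract the mean, divide by the estimated std dev.
x_std = (x - x.mean()) / x.std(ddof=1)

# Normalized cumulative time: maps the trial's duration onto [0, 1].
t_norm = (t - t[0]) / (t[-1] - t[0])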
> ...
>
> > The data is good; it's just difficult to line up over the time
> > duration. The data is the change in angle from the elbow to the
> > shoulder, originating from motion capture data of a person raising and
> > lowering their arm. Each set ends up looking similar to a bell curve
> > (the angle increases then decreases), but because each person is
> > different, and because there were no triggers added to the motion
> > capture software, each data set has a bell curve starting and ending at
> > different times.
> >
> > I would like to compare these angles across sets of data, but I'm
> > having trouble figuring out the best way to scale them so that I can
> > take an average of all of them to draw conclusions from.
> >
> > Example: Trial A's set has the person moving starting at 3.76 seconds
> > and ending at 9.87 seconds; Trial B's set has the person starting at
> > 2.39 seconds and ending at 8.342 seconds.
> > I would like a way to normalize sets A and B so that I can average
> > them, then use the standard deviations between the sets to create an
> > error bar.
>
> I wasn't talking about the data; I was talking about whether the
> suggested normalization method was meaningful or not...
>
> But again, it seems essentially like throwing darts at a wall
> blindfolded to know so little about a research project and expect to
> make meaningful assessments of how the data should be analyzed.
>
> These are the questions that should have been asked/addressed _before_
> the data were collected: how they were to be processed in order to draw
> whatever conclusions were intended from the experiments.
>
> At this point, having apparently collected a brown paper bag of
> happenstance data, you appear to be trying to make something of the hash.
>
> I'd guess one should be able to fit some sort of general model to each
> channel of measured positions vs time. Then generalize that model to
> include a time-shift term and estimate it as just another parameter.
>
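As a rough illustration of that approach, here's a Python/SciPy sketch
that fits an invented bell-shaped model with the time shift as one of the
estimated parameters; the model form and all the numbers are made up for
the example:

import numpy as np
from scipy.optimize import curve_fit

# Bell-shaped trial model: amplitude, time shift (center), and width are
# all free parameters, so the shift is estimated along with the shape.
def bell(t, amp, shift, width):
    return amp * np.exp(-0.5 * ((t - shift) / width) ** 2)

# Stand-in data for one trial; replace with a real angle-vs-time channel.
t = np.linspace(0.0, 12.0, 600)
y = bell(t, 90.0, 6.8, 1.2) + np.random.normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(bell, t, y, p0=[80.0, 6.0, 1.0])
amp_hat, shift_hat, width_hat = popt  # shift_hat aligns trials in time

Comparing the fitted shift across trials would then give the relative
timing of each movement without any triggers.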
> It's not clear at all what is important with respect to the time
> variable here--the comparison of the position at a given time since the
> beginning of the movement, or what? Or is time important at all, and
> what angle is it that is being measured? There's just too little
> background for somebody here to be able to do anything of meaning,
> methinks. Talk to your thesis advisor and see if you can get some help
> out of the morass you've created...
>
> --
Hi, thank you for the help and commentary; it's much appreciated.
As for the data collected: this collection type is extremely normal, and the process was very well thought out. Finding an average change in angle from an individual over multiple trials is extremely useful in biomechanics research, and not something I am attempting to cobble together as evidence for a made-up thesis.
Although I am still struggling with my normalization, my current code processes motion capture data (x, y, z coordinates per marker) and converts it into a vector of angles between two points over time (a single vector derived from the mechanics). It then marks the start and stop times of the movement in each vector with triggers. The angle step looks roughly like the sketch below.
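Here's a Python/NumPy sketch of that step; the marker names and the random
coordinates are placeholders for the real capture export, not my actual
code:

import numpy as np

# Invented example markers: Nx3 arrays of (x, y, z) per frame. In the
# real data these come from the motion capture export.
rng = np.random.default_rng(0)
shoulder = rng.normal(size=(100, 3))
elbow = rng.normal(size=(100, 3))
wrist = rng.normal(size=(100, 3))

# Elbow-to-shoulder and elbow-to-wrist vectors at each frame.
u = shoulder - elbow
v = wrist - elbow

# Angle between the two vectors over time, via the dot product.
cos_theta = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) *
                                     np.linalg.norm(v, axis=1))
angle_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))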
Whether in biomechanics, electronics, or mechanical engineering, real-time data routinely needs to be normalized across trials to form a true average, so I don't believe I'm asking anything out of the ordinary by wanting to equalize vector lengths. A sketch of what I mean follows.
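This is the sort of length equalization I have in mind, as a minimal
Python/NumPy sketch (the function and trial layout are my own invention):
crop each trial to its triggers, resample onto a common 0-100% movement
grid, then average point by point.

import numpy as np

def time_normalize(angle, t, t_start, t_stop, n_points=101):
    """Resample the movement between its triggers onto a 0-100% grid."""
    mask = (t >= t_start) & (t <= t_stop)
    pct = np.linspace(0.0, 100.0, n_points)
    src = 100.0 * (t[mask] - t_start) / (t_stop - t_start)
    return np.interp(pct, src, angle[mask])

# Hypothetical use with trials that start/stop at different times (e.g.,
# trial A at 3.76-9.87 s, trial B at 2.39-8.342 s):
# trials = [time_normalize(a, t, t0, t1) for (a, t, t0, t1) in data]
# stacked = np.vstack(trials)                # n_trials x 101
# mean_curve = stacked.mean(axis=0)          # average across trials
# err_bar = stacked.std(axis=0, ddof=1)      # std dev for error bars

After this, every trial is 101 points long regardless of its original
duration, so the average and standard deviation are taken at matching
percentages of the movement rather than at matching clock times.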
Thanks,
ML