Description of the Action Capture Project.

Developed by the Virtual Reality and Multimedia Research Group at the Technical University of Freiberg


A VR user interacts with a virtual prototype of a car. Using action capture, these interactions are recorded at a high level of abstraction and then reproduced by a virtual human.
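
To make "high level of abstraction" concrete: rather than raw hand trajectories, each interaction could be logged as a discrete action on a named scene object. The following Python sketch uses hypothetical names and fields, not the project's actual data model.

from dataclasses import dataclass, field

# Hypothetical high-level action record: an interaction is stored as a
# verb applied to a named scene object, not as raw motion data.
@dataclass
class Action:
    verb: str            # e.g. "grasp", "move", "release"
    target: str          # name of the scene object acted upon
    params: dict = field(default_factory=dict)

# A recorded session then reduces to a short, replayable action sequence.
session = [
    Action("grasp", "gear_shift", {"grasp_type": "cylindrical"}),
    Action("move", "gear_shift", {"to_gear": 2}),
    Action("release", "gear_shift"),
]

for a in session:
    print(f"{a.verb:8s} {a.target} {a.params}")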


To demonstrate the generalization capability of Action Capture, the same sequence of actions as recorded above was replayed in two slightly different scenarios: one identical to the scenario above and one with a mirrored gear shift. The virtual human grasps both gear shifts without problems.
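
A plausible reading of this generalization (a sketch under assumptions, not the project's implementation): because actions reference objects by name rather than by world coordinates, replay can re-resolve each target in whichever scene is currently loaded.

# Hypothetical replay step: the recorded action names its target object,
# so the grasp position is looked up in the currently loaded scene rather
# than replayed as fixed world coordinates.
original_scene = {"gear_shift": (0.30, 0.10, 0.75)}
mirrored_scene = {"gear_shift": (-0.30, 0.10, 0.75)}  # mirrored layout

def resolve_grasp_target(target_name: str, scene: dict) -> tuple:
    # Re-resolve at replay time; reaching and grasping are then planned
    # toward this position instead of the originally recorded one.
    return scene[target_name]

for label, scene in [("original", original_scene), ("mirrored", mirrored_scene)]:
    print(label, "-> grasp at", resolve_grasp_target("gear_shift", scene))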


Natural motion style and timing can be added to the animation by adapting pre-recorded motion capture data. Furthermore, the captured action sequence can be replayed by many virtual humans of different body types and sizes. This provides valuable insight into the prototype, for instance whether its controls remain reachable for differently sized users.
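
As one illustration of how timing from pre-recorded data could be transferred (a generic uniform time-warp; the project's actual adaptation method is not detailed here):

# Generic uniform time-warp: stretch a synthesized joint trajectory so
# its duration matches that of a pre-recorded motion-capture clip.
# This is a simplification; real style adaptation is more involved.
def resample(trajectory: list, target_len: int) -> list:
    n = len(trajectory)
    out = []
    for i in range(target_len):
        t = i * (n - 1) / (target_len - 1)   # fractional source index
        lo = int(t)
        hi = min(lo + 1, n - 1)
        frac = t - lo
        out.append((1 - frac) * trajectory[lo] + frac * trajectory[hi])
    return out

synthetic = [0.0, 0.5, 1.0]          # coarse, replay-generated keyframes
mocap_length = 7                     # frame count of the recorded clip
print(resample(synthetic, mocap_length))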


A VR user demonstrates different object manipulations, which are then imitated by the virtual human (the artificial workbench scenario contains various objects covering all grasp types of the Schlesinger taxonomy).
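
For reference, the Schlesinger taxonomy distinguishes six grasp types, which can be enumerated as follows (the constant names and example objects are ours):

from enum import Enum

# The six grasp types of the Schlesinger taxonomy; each workbench object
# is chosen so that demonstrating it exercises one of these grasps.
class SchlesingerGrasp(Enum):
    CYLINDRICAL = "cylindrical"   # e.g. wrapping the hand around a bottle
    TIP = "tip"                   # e.g. picking up a pin with the fingertips
    HOOK = "hook"                 # e.g. carrying a bucket by its handle
    PALMAR = "palmar"             # e.g. gripping a flat object against the palm
    SPHERICAL = "spherical"       # e.g. holding a ball
    LATERAL = "lateral"           # e.g. holding a key between thumb and finger

for grasp in SchlesingerGrasp:
    print(grasp.name.lower())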


Actions are represented in the newly developed, high-level description language XSAMPL3D (XML Synchronized Action MarkuP Language for 3D). XSAMPL3D is fairly human-readable and facilitates quick authoring of animations, e.g. via an XML editor. The video shows some animations generated from authored (i.e. not captured) action sequences. The same XSAMPL3D action description is used to animate virtual humans of different sizes and in settings with repositioned interaction objects.
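
The actual XSAMPL3D element names are not reproduced here; the following Python sketch merely illustrates the flavor of such an XML action description using hypothetical tags.

import xml.etree.ElementTree as ET

# Build a small action sequence in the spirit of XSAMPL3D.
# All element and attribute names below are hypothetical stand-ins,
# not the actual XSAMPL3D schema.
seq = ET.Element("actionSequence")
ET.SubElement(seq, "grasp", object="gear_shift", graspType="cylindrical")
ET.SubElement(seq, "move", object="gear_shift", toGear="2")
ET.SubElement(seq, "release", object="gear_shift")

ET.indent(seq)                      # pretty-print (Python 3.9+)
print(ET.tostring(seq, encoding="unicode"))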


Robust, real-time inverse kinematics using our Ikconac (Inverse kinematics by construction of articulated chains) algorithm: singularities and numerical instabilities do not cause jitter in the position and orientation of the end-effector, as can happen with some other inverse kinematics algorithms.
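
Details of Ikconac are not given here. For contrast, the standard damped least-squares method (a different, textbook technique, shown only for illustration) addresses the same problem by damping each step so it stays bounded near singular poses:

import numpy as np

# Generic damped least-squares IK step for a 2-link planar arm.
# This is NOT Ikconac; it is a textbook method shown only to illustrate
# how damping keeps the step finite near singular configurations.
L1, L2 = 1.0, 1.0          # link lengths
damping = 0.1              # suppresses huge steps when J is near-singular

def fk(q):
    # Forward kinematics: end-effector position of the 2-link arm.
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def dls_step(q, target):
    J = jacobian(q)
    e = target - fk(q)
    # Solve (J^T J + lambda^2 I) dq = J^T e: bounded even at singularities.
    dq = np.linalg.solve(J.T @ J + damping**2 * np.eye(2), J.T @ e)
    return q + dq

q = np.array([0.0, 0.1])              # near-stretched, near-singular pose
for _ in range(50):
    q = dls_step(q, np.array([1.0, 1.0]))
print("reached:", fk(q))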