Dance + Code
These 3D models depict the movements of three dancers across space and time. As part of a collaboration with Future Arts on the augmented reality mural project “Cut, Crush, Decay”, I explored ways of creating motion trails as an abstraction of the original performances, using a custom mesh extrusion implemented entirely in C#. To convey a sense of stillness versus motion, I adjusted the thickness of the extrusions based on the velocity of the joints. These meshes were later used to prepare a painterly 3D animation in AR.
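A simplified sketch of that velocity-to-thickness mapping is shown below. This is not the project's actual code: the thickness range, speed cap, and the direction of the mapping (thicker when still, thinner when fast) are illustrative choices.

```csharp
using UnityEngine;

// Illustrative sketch: map per-joint speed to extrusion thickness.
// The thresholds and the direction of the mapping are assumptions.
public static class TrailThickness
{
    // Slow joints produce wide, "still" ribbons; fast joints thin out.
    // (Inverting the Lerp would give the opposite effect.)
    public static float ThicknessForJoint(Vector3 previousPos, Vector3 currentPos,
                                          float deltaTime,
                                          float minThickness = 0.01f,
                                          float maxThickness = 0.08f,
                                          float maxSpeed = 3f)
    {
        float speed = (currentPos - previousPos).magnitude / Mathf.Max(deltaTime, 1e-5f);
        float t = Mathf.Clamp01(speed / maxSpeed);
        return Mathf.Lerp(maxThickness, minThickness, t);
    }
}
```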
Here is a visualization of the generation process. Motion capture data is noisy, so I apply some smoothing to the generated curves and build the meshes using rotation-minimizing frames to avoid twisting in the extrusions.
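The sketch below illustrates those two steps under simple assumptions: a moving-average filter for the smoothing, and the standard double-reflection method for propagating a rotation-minimizing frame along the curve. The actual project code may differ in both the filter and the frame construction.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch (not the production code): smooth a noisy joint path,
// then transport a frame along it with minimal twist so the extruded
// cross-sections stay stable.
public static class TrailFrames
{
    // Simple moving-average smoothing of the captured joint positions.
    public static List<Vector3> Smooth(IList<Vector3> points, int radius = 2)
    {
        var result = new List<Vector3>(points.Count);
        for (int i = 0; i < points.Count; i++)
        {
            Vector3 sum = Vector3.zero;
            int count = 0;
            for (int j = Mathf.Max(0, i - radius); j <= Mathf.Min(points.Count - 1, i + radius); j++)
            {
                sum += points[j];
                count++;
            }
            result.Add(sum / count);
        }
        return result;
    }

    // Rotation-minimizing frames via the double-reflection method:
    // given curve points and tangents, propagate an initial normal with minimal twist.
    public static List<Vector3> RotationMinimizingNormals(IList<Vector3> points,
                                                          IList<Vector3> tangents,
                                                          Vector3 initialNormal)
    {
        var normals = new List<Vector3> { initialNormal.normalized };
        for (int i = 0; i < points.Count - 1; i++)
        {
            Vector3 v1 = points[i + 1] - points[i];
            float c1 = Vector3.Dot(v1, v1);
            if (c1 < 1e-8f) { normals.Add(normals[i]); continue; } // duplicate point: carry frame over

            Vector3 rL = normals[i] - (2f / c1) * Vector3.Dot(v1, normals[i]) * v1;
            Vector3 tL = tangents[i] - (2f / c1) * Vector3.Dot(v1, tangents[i]) * v1;
            Vector3 v2 = tangents[i + 1] - tL;
            float c2 = Vector3.Dot(v2, v2);
            if (c2 < 1e-8f) { normals.Add(rL.normalized); continue; }

            normals.Add((rL - (2f / c2) * Vector3.Dot(v2, rL) * v2).normalized);
        }
        return normals;
    }
}
```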
While I did the generation work in C#/Unity, I created a sandbox using A-Frame/ThreeJS to prototype the AR animations locally. Here is what the geometry looks like when animated with a single vertex/fragment shader pair. The geometry contains extra attributes, baked during generation, that preserve the notion of time as well as the indices of the individual skeletal joints used to generate the trails. These are used in the shaders to constrain the visible range of the trails and to color individual strands differently. The animation is intended to create a brush-stroke-like effect as a “time window” moves through the trails.
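One way to bake such attributes on the Unity side is sketched below. The choice of a spare UV channel and the normalization are assumptions for illustration, not the project's actual vertex layout.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: bake per-vertex time and joint index into a spare UV
// channel so the shaders can window the trails over time and color strands
// per joint. The channel (UV1 / TEXCOORD1) and normalization are assumptions.
public static class TrailAttributeBaker
{
    public static void BakeTimeAndJoint(Mesh mesh, float[] vertexTimes,
                                        int[] vertexJointIndices, float clipDuration)
    {
        var data = new List<Vector2>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
        {
            float normalizedTime = vertexTimes[i] / clipDuration; // 0..1 along the performance
            data.Add(new Vector2(normalizedTime, vertexJointIndices[i]));
        }
        mesh.SetUVs(1, data); // readable in a shader as TEXCOORD1
    }
}
```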
The AR experience also contained three tracks of 3D positional audio, created by Joshua Borsman, corresponding to the animated motion trails. To make sure the experience would align precisely with the mural, and to get a fairly accurate preview of the audio effects, I set up a sandbox in A-Frame using a photogrammetry scan of the location, taken before the physical mural was installed. The video below shows the sandbox.
I also added day-mode and night-mode shaders to the AR experience, to account for changing lighting conditions and keep the trails clearly visible. The final experience was built using 8th Wall/A-Frame/ThreeJS and relied on location-based and 3D-marker-based tracking, since the mural was 40 feet long and image-based markers would have been very difficult to rely on. The video below shows the final experience in night mode. User interactions slow down time and reveal the dancers, providing context for the abstract animations and letting viewers sample a small portion of the original animation.
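The shipped interaction logic lives in the A-Frame/ThreeJS build; the C# sketch below only mirrors the idea for consistency with the other snippets here. The easing values and the shader property name are hypothetical.

```csharp
using UnityEngine;

// Illustrative sketch of the slow-time interaction (not the shipped code):
// while the user holds a touch, playback speed eases toward a slow-motion
// value, which in turn drives the "time window" used by the trail shaders.
public class TimeSlowdown : MonoBehaviour
{
    public float normalSpeed = 1f;
    public float slowSpeed = 0.1f;   // hypothetical slow-motion factor
    public float easing = 3f;

    float playbackSpeed = 1f;
    float trailTime;                 // value fed to the trail shaders

    void Update()
    {
        bool holding = Input.GetMouseButton(0) || Input.touchCount > 0;
        float target = holding ? slowSpeed : normalSpeed;
        playbackSpeed = Mathf.Lerp(playbackSpeed, target, easing * Time.deltaTime);
        trailTime += playbackSpeed * Time.deltaTime;
        Shader.SetGlobalFloat("_TrailTime", trailTime); // hypothetical shader property
    }
}
```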
During the project, we explored multiple motion capture methods: MoveAI (used in the process above), skeletal animation capture with the Azure Kinect, and point-cloud capture with the Azure Kinect. The skeletal animations captured with the Kinect were very low quality and unusable. The point clouds were nice at close distances, but we ended up not using them in the final capture. Still, I developed an application in Python (using the Open3D library) to process the raw Kinect capture files, extracting undistorted depth frames and converting them to a regular MP4 by spreading each 16-bit depth value across the R, G, and B channels in an encoder-friendly manner. I used that data to create the following holographic renders in Unity, for playback on a Looking Glass Portrait display (shown in the video below).
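The packing itself is easy to illustrate. The sketch below shows a plain high/low byte split and its inverse on the Unity playback side; it is written in C# for consistency with the other snippets, whereas the actual tool was Python/Open3D, and its encoder-friendly mapping likely differs from this naive scheme.

```csharp
using UnityEngine;

// Illustrative sketch only: one way to spread a 16-bit depth sample across
// R, G, B and recover it after video decoding. Not the project's actual mapping.
public static class DepthRgbCodec
{
    // Pack: high byte in R, low byte in G, high byte duplicated in B.
    public static Color32 Pack(ushort depthMillimeters)
    {
        byte high = (byte)(depthMillimeters >> 8);
        byte low = (byte)(depthMillimeters & 0xFF);
        return new Color32(high, low, high, 255);
    }

    // Unpack: reconstruct the 16-bit depth from a decoded frame's pixel.
    public static ushort Unpack(Color32 pixel)
    {
        return (ushort)((pixel.r << 8) | pixel.g);
    }
}
```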