#1266: Converting Dance into Multi-Channel Generative AI Performance at 30FPS with “Kinetic Diffusion”
From Voices of VR
Length:
43 minutes
Released:
Aug 27, 2023
Format:
Podcast episode
Description
Brandon Powers is a creative director and choreographer creating experiences across physical and virtual space at the intersection of performance and technology. He was showing a dance performance titled Kinetic Diffusion at ONX Studios during Tribeca Immersive. It was created in collaboration with Aaron Santiago and featured three screens filled with delayed generative AI footage rendered in near real-time at 30 frames per second, which required eleven 4090 GPUs in the cloud to achieve.
Powers recorded his dance with a mirrorless camera and applied a depth map AI model to extrapolate his embodied movements so they could be fed as a real-time input into Stable Diffusion with a set of precisely timed prompts. The AI-generated images ended up with a 2-8 second delay, which gave the effect of Powers dancing in a duet with himself, modulated through a series of style-transfer prompts. Overall, it was a hypnotically impressive display of generative AI at the intersection of XR and dance. I had a chance to catch up with Powers after his performance to get more context for how it came about, and the long evolution from his previous explorations at the intersection of AI and dance with Frankenstein AI, which premiered at Sundance 2019 (see our previous conversation about it in episode #728). You can see a brief explainer video of Kinetic Diffusion on Powers' TikTok channel.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality