Generative Choreography

"Generative Choreography using Deep Learning" has been accepted as a full paper for ICCC16. You can read the paper here: (pdf) 

Recent advances in deep learning have enabled the extraction of high-level features from raw sensor data, which has opened up new possibilities in many different fields, including computer-generated choreography. In collaboration with the Lulu Art group, we have developed a system, chor-rnn, for generating novel choreographic material in the nuanced choreographic language and style of an individual choreographer. It also shows promising results in producing higher-level compositional cohesion, rather than just generating sequences of movement. At the core of chor-rnn is a deep recurrent neural network trained on raw motion capture data that can generate new dance sequences for a solo dancer. Chor-rnn can be used for collaborative human-machine choreography or as a creative catalyst, serving as inspiration for a choreographer.
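To make the core idea concrete, here is a minimal sketch of an LSTM trained to predict the next motion capture frame from the previous ones. This is not the paper's exact implementation: the frame layout (25 Kinect joints × 3 coordinates), the layer sizes, and the plain regression head are all illustrative assumptions, and the real system's output parameterization may well differ.

```python
import torch
import torch.nn as nn

# Assumption: each motion-capture frame is a flat vector of
# 25 Kinect joints x 3 coordinates = 75 values.
FRAME_DIM = 25 * 3

class ChorRNN(nn.Module):
    """Sketch of a chor-rnn-style model: a deep LSTM that reads a
    sequence of frames and predicts the next frame at every step."""
    def __init__(self, hidden_size=512, num_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(FRAME_DIM, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, FRAME_DIM)  # next-frame regression

    def forward(self, frames, state=None):
        # frames: (batch, time, FRAME_DIM)
        out, state = self.lstm(frames, state)
        return self.head(out), state

model = ChorRNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step: predict frame t+1 from frames 0..t.
batch = torch.randn(8, 100, FRAME_DIM)  # stand-in for real mocap sequences
optimizer.zero_grad()
pred, _ = model(batch[:, :-1])
loss = loss_fn(pred, batch[:, 1:])
loss.backward()
optimizer.step()
```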

Process

Results

After 10 minutes of training, movements are more or less random. After 6 hours of training, the RNN has learned how the joints are related and makes its first careful, somewhat wobbly attempts at dancing. After 48 hours it has become an accomplished dancer, making up the choreography as it goes.
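"Making it up as it goes" amounts to an autoregressive loop: seed the network with a short recorded phrase, then feed each predicted frame back in as the next input. A hypothetical sketch, reusing the ChorRNN model above (the seed length is arbitrary, and a real system might sample from an output distribution rather than take the deterministic prediction):

```python
def generate(model, seed_frames, n_frames=300):
    """Warm up on a seed phrase, then feed each predicted
    frame back in as the next input."""
    model.eval()
    with torch.no_grad():
        out, state = model(seed_frames.unsqueeze(0))  # (1, T, FRAME_DIM)
        frame = out[:, -1:, :]                        # last predicted frame
        generated = [frame]
        for _ in range(n_frames - 1):
            frame, state = model(frame, state)
            generated.append(frame)
    return torch.cat(generated, dim=1).squeeze(0)     # (n_frames, FRAME_DIM)

seed = torch.randn(30, FRAME_DIM)  # stand-in for a real recorded phrase
dance = generate(model, seed)
```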


Use as an artist’s tool


While there are interesting philosophical questions regarding machine creativity, especially in the longer term, it is also worth considering how current results can be used as a practical tool for a working choreographer. When a choreographer works with a dancer to develop a piece, the dancer will inevitably influence the end result. While this may be desirable, it also dilutes the distinctive style (and possibly syntax) that is unique to the choreographer.

With chor-rnn, the choreographer works with a virtual companion that uses the same choreographic language. It is capable of producing novel work on its own, and it can also provide creative inspiration. Since the level of machine involvement is variable and can be chosen by the choreographer, the results can be an interesting starting point for philosophical discussions on authenticity and computer-generated art.


Future work


  • Collect a larger corpus of data. The five hours of motion capture data was enough to build a proof-of-concept system, but ideally the corpus should be larger, especially if multiple choreographers are involved. For comparison, state-of-the-art speech recognition models use 100+ hours of data (and data collection is considered a major bottleneck in that field of research) (Graves and Jaitly 2014).
  • Derive a choreographic symbolic language. One of the most intriguing features of deep neural networks is that they internally build up multiple levels of abstraction (Hinton 2014). Using a recurrent variational autoencoder would allow us to compress meaningful higher-order information into a fixed-size tensor (encoding) (Sutskever, Vinyals et al. 2014); see the sketch after this list. This in turn would allow the derivation of a symbolic language by mapping it to feature detectors that operate on that encoding. A general symbolic encoding could provide an alternative to existing notation systems and simplify the creation of computer-generated choreography. It could also provide a convenient method of recording a choreographic work in a compact, human-readable format. As multiple mobile phone makers are now integrating 3D cameras (comparable to the Kinect) into their devices, it may be of significant practical use for documentation/archiving purposes (Kadambi, Bhandari et al. 2014).
  • Multiple bodies. The Kinect sensor can't directly handle occluded body parts. This is problematic even with one dancer, and it makes it nearly impossible to capture interactions between multiple dancers. The solution is to use multiple Kinect sensors and combine their data (Kwon, Kim et al. 2015). This would allow us to record choreographies with up to 6 dancers and let the system learn about interactions between dancers.
  • Multi-modal input. The input data could be extended beyond motion capture to include sound (and even images and video). One could, for instance, build a system whose generated choreography relates to a musical composition.
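As a rough illustration of the fixed-size encoding idea mentioned above, the sketch below compresses a movement phrase into a single vector with a sequence autoencoder, reusing FRAME_DIM from the first sketch. This is a deterministic simplification: a full variational version would predict a mean and variance for the latent and add a KL term to the loss. All names and sizes here are hypothetical.

```python
class SeqAutoencoder(nn.Module):
    """Sketch: compress a movement phrase into one fixed-size vector z,
    then reconstruct the phrase from it (deterministic stand-in for a
    recurrent variational autoencoder)."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(FRAME_DIM, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden)
        self.decoder = nn.LSTM(FRAME_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, FRAME_DIM)

    def encode(self, frames):
        _, (h, _) = self.encoder(frames)
        return self.to_latent(h[-1])           # (batch, latent_dim)

    def forward(self, frames):
        z = self.encode(frames)
        h0 = self.from_latent(z).unsqueeze(0)  # latent seeds the decoder state
        c0 = torch.zeros_like(h0)
        # teacher forcing: decoder sees the phrase shifted by one frame
        out, _ = self.decoder(frames[:, :-1], (h0, c0))
        return self.head(out), z               # reconstruct against frames[:, 1:]
```

Feature detectors could then be trained directly on z, which is the step toward a symbolic language that the list above alludes to.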


Deep Dreaming

An artistic-scientific experiment combining choreography and artificial intelligence. A deep neural network interprets the images and movements and shows what it thinks it sees. The choreographer in turn interprets the result and develops the piece, and the process is repeated.
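For readers curious about the mechanism: in DeepDream-style setups, "showing what it thinks it sees" is gradient ascent on the input pixels, amplifying whatever a chosen layer of a pretrained network already responds to. A hedged sketch follows; the network, layer, and step size used in the actual piece are not documented here, so these choices are illustrative.

```python
import torch
import torchvision.models as models

# DeepDream-style step: nudge the pixels so the chosen layer's activations
# grow, amplifying the patterns the network "thinks it sees".
cnn = models.vgg16(pretrained=True).features[:20].eval()  # arbitrary layer choice

def dream_step(img, lr=0.02):
    img = img.clone().requires_grad_(True)
    cnn(img).norm().backward()  # scalar objective: activation magnitude
    with torch.no_grad():
        # normalized gradient ascent on the input image
        img = img + lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

frame = torch.rand(1, 3, 224, 224)  # stand-in for a frame of the dance video
for _ in range(10):
    frame = dream_step(frame)
```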

This is the first in a series of experiments exploring artistic collaboration between a human and a computer. Deep neural networks are an artificial intelligence technology inspired by the workings of the human brain but implemented in software.

Choreography & performance: Louise Crnkovic-Friis
Music: Étude in E minor, Francisco Tárrega