“Black Latents” is a RAVE model I submitted to the RAVE model challenge 2025 hosted by IRCAM, where it was voted into first place.

A heartfelt Thank You to the Forum IRCAM team and everybody involved in making the RAVE model challenge possible. The award ceremony took place during this year’s Forum IRCAM workshops in Paris.
RAVE is a variational autoencoder developed by the ACIDS team at IRCAM that can be trained on audio data. Models created with RAVE can perform neural audio synthesis in real time via the nn~ object/ecosystem in Max or Pure Data.
The idea behind my “Black Latents” model was to extract dominant characteristics from a defined body of musical work – in this case my Black Plastics series, a compilation of 7 EPs with a total of 28 audio tracks spanning Experimental Techno, Breakbeats, and Drum & Bass, released between 2012 and 2020.
With the model, you can generate new material in the style of what it learned during training, driven by all kinds of audio input.
Below are some sound examples that have been generated with “Black Latents”.
In the following video, I’m using a model variant of “Black Latents” in a Latent Jamming setup in Pure Data. Latent Jamming is an improvisation technique where I operate from within the latent space of the model, feeding simple signal value generators directly into the decoder/generator unit of the RAVE model. All components used in the video can be found on GitHub.
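As a rough illustration of the idea (not the actual Pure Data patch), the approach can be sketched in Python: slow signal generators – here one sine LFO per latent dimension – trace a path through latent space, and each latent frame is passed straight to a decoder. The decoder below is a stand-in random projection; the latent dimension, frame rate, and file name are assumptions, and a real RAVE export would instead be loaded with `torch.jit.load` and driven via its `decode` method.

```python
import numpy as np

LATENT_DIM = 8    # hypothetical; RAVE models typically expose a small latent space
FRAMES = 100      # number of latent frames to generate

def lfo_latents(n_frames, n_dims, frame_rate=43.0):
    """Simple signal generators: one sine LFO per latent dimension,
    each with its own frequency and depth, tracing a slow path
    through latent space. Returns an array of shape (dims, frames)."""
    t = np.arange(n_frames) / frame_rate
    freqs = 0.1 * (1 + np.arange(n_dims))    # 0.1 Hz, 0.2 Hz, ...
    depths = 1.0 / (1 + np.arange(n_dims))   # lower dimensions move more
    return depths[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)

def decode_stub(z, block_size=2048):
    """Stand-in for the decoder/generator: maps each latent frame to one
    block of audio via a fixed random projection, then concatenates the
    blocks. With a real model this would be something like
    torch.jit.load("black_latents.ts").decode(z) instead."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((block_size, z.shape[0])) / np.sqrt(z.shape[0])
    return (w @ z).T.reshape(-1)   # (frames, block_size) flattened to 1-D audio

z = lfo_latents(FRAMES, LATENT_DIM)
audio = decode_stub(z)
print(z.shape, audio.shape)   # (8, 100) (204800,)
```

The point of the sketch is only the control flow: no audio goes into the encoder at all; the latent trajectory itself is the instrument being played.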
Congratulations to the other winners Dylan Burchett, Christopher Trapani, as well as Tristan Zand and Julien Bloit of the BeatSurfing team. I highly recommend checking out their models as well as all the other submissions that have been published on the Forum IRCAM website.