Recent advances in artificial intelligence have led to the development of powerful generative models able to produce musical content without human intervention. I believe this practice cannot thrive in the future, since human experience and human appreciation are at the crux of artistic production.
However, the need for flexible and expressive tools that enhance content creators' creativity is patent, and such novel A.I.-augmented computer music tools hold clear promise.
I propose novel architectures that put artists back in the loop. The proposed models share a common characteristic: they are devised so that a user can creatively control the generated musical content. To make interaction with these deep generative models user-friendly, dedicated user interfaces were developed. I believe that new compositional paradigms will emerge from the possibilities offered by these enhanced controls. The thesis ends with the presentation of genuine musical projects, such as concerts featuring these new creative tools.
Defence: 06/07/2018 - 16h - Site Jussieu 24-25/405
Jury members:
Michèle Sebag, Laboratoire de Recherche en Informatique [Rapporteur]
Marc Tommasi, INRIA / Université de Lille 3 [Rapporteur]
Sylvain Marchand, Université de La Rochelle [Rapporteur]
François Pachet, Spotify
Frank Nielsen, École Polytechnique / Sony CSL Tokyo
Philippe Esling, IRCAM / Sorbonne Universités
Douglas Eck, Google
- J.-P. Briot, G. Hadjeres, F.-D. Pachet: "Deep Learning Techniques for Music Generation", Computational Synthesis and Creative Systems Series, Springer, ISBN 978-3-319-70162-2 (2019)
- J.-P. Briot, G. Hadjeres, F.-D. Pachet: "Deep Learning Techniques for Music Generation -- A Survey" (2019)
- G. Hadjeres: "Interactive deep generative models for symbolic music", PhD thesis, defended 06/07/2018, supervised by François Pachet, rapporteur Frank Nielsen (2018)