To apply such networks to sound and music, several problems arise. We chose to study three of them:

- What is the impact of the input representation? Which kind of transform can we use to inform the resolution of MIR problems? To answer these questions, we will present work done on structure estimation and singing voice separation.
- How can we gather large amounts of labeled data? We will present two different strategies: one for singing voice detection using a teacher-student paradigm, and one for singing voice separation using signal processing tools for data augmentation.
- How can we use ConvNets not only to solve problems, but also to validate solutions? To study this question, we will present how we designed and evaluated a voice anonymization method for urban sound recordings.
**Alice Cohen's bio**
Alice Cohen was a PhD student at the Institut de Recherche et de Coordination Acoustique Musique (IRCAM). She defended her thesis, entitled "Estimating music and sound descriptions using deep learning", last October. Her research interests focus mostly on voice extraction and detection, using signal processing and machine learning tools.
_More information about Alice Cohen:_