Music source separation is the task of isolating the individual instruments that are mixed in a musical piece. The task is particularly challenging, and even state-of-the-art models struggle to generalize to unseen test data. Nevertheless, prior knowledge about the individual sources can be used to better adapt a generic source separation model to the observed signal. In this work, we propose to exploit a user-provided temporal segmentation, indicating when each instrument is active, in order to fine-tune a pre-trained deep source separation model and adapt it to one specific mixture. This paradigm can be referred to as user-guided one-shot deep model adaptation for music source separation, since the adaptation acts on the target song instance only. Our results are promising and show that state-of-the-art source separation models have a large margin of improvement, especially for instruments that are underrepresented in the training data.
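To make the adaptation idea concrete, one natural way to use the user's temporal segmentation during fine-tuning is as an auxiliary "silence" penalty: the model's estimate for an instrument should carry no energy in the frames the user marked as inactive. The sketch below, in plain NumPy, shows only such a loss term; the function name, array shapes, and frame granularity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def silence_loss(estimates, active, frame_len):
    """Penalize estimated-source energy in user-marked inactive frames.

    estimates: (n_sources, n_samples) separated waveforms from the model
    active:    (n_sources, n_frames) binary user segmentation (1 = active)
    frame_len: number of samples covered by one segmentation frame
    """
    n_sources, _ = estimates.shape
    n_frames = active.shape[1]
    # Cut the waveforms into the same frames as the segmentation grid.
    frames = estimates[:, :n_frames * frame_len].reshape(n_sources, n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=2)          # mean energy per frame
    inactive = 1.0 - active                        # 1 where the user marked silence
    # Average energy over inactive frames only (0 if every frame is active).
    return float(np.sum(inactive * energy) / max(inactive.sum(), 1.0))
```

During one-shot adaptation, such a term would be added to the fine-tuning objective for the target mixture, so that gradient updates push the pre-trained model toward estimates consistent with the user's annotation.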
User-guided one-shot deep model adaptation for music source separation
Research Paper / Oct 2021 / Audio Processing, Neural Network, Machine Learning / Deep Learning / Artificial Intelligence