Title: Musical Source Separation
Type: student work :: master thesis
Contributors: Cernak, Milos; Ricaud, Benjamin; Vandergheynst, Pierre; Mocanu, Alexandru
Date: 2020-08-14
Handle: https://infoscience.epfl.ch/handle/20.500.14299/170856
Keywords: music; neural networks; filterbanks; signal processing

Abstract: Musical source separation is a complex topic that has been extensively explored in the signal processing community and has benefited greatly from recent machine learning research. Many deep learning models with impressive source separation quality have been released in the last few years, all of them separating studio-recorded music into four instrument categories: vocals, drums, bass, and other. We study how the number of instrument categories can be extended and conclude that the electric guitar is also feasible to separate. We then turn our attention to learning relevant signal encodings with parameterized filterbanks and observe that filterbanks alone cannot improve over simple convolutions, but that they do help when the encoder combines both convolutions and filterbanks. Finally, we adapt models trained on studio music to live music separation and conclude that models trained on clean data also give the best performance on live music.
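The abstract contrasts free convolutional encoders with parameterized filterbanks and reports that combining the two helps. As a hedged illustration only (this is not the thesis's actual architecture; the function names, the sinc parameterization, and all frequency values below are hypothetical), the sketch shows one common way to parameterize a filterbank — band-pass filters derived from center-frequency and bandwidth parameters — and an encoder that concatenates its outputs with those of unconstrained convolution kernels:

```python
import numpy as np

def sinc_filterbank(center_freqs, bandwidths, kernel_size, sample_rate):
    """Build band-pass FIR filters from frequency parameters.

    Each filter is the difference of two windowed sinc low-pass filters
    (high cutoff minus low cutoff), so the whole bank is controlled by
    just two numbers per band instead of kernel_size free weights.
    """
    t = (np.arange(kernel_size) - kernel_size // 2) / sample_rate
    filters = []
    for f_c, bw in zip(center_freqs, bandwidths):
        low, high = f_c - bw / 2.0, f_c + bw / 2.0
        # Ideal band-pass impulse response: LP(high) - LP(low).
        h = 2 * high * np.sinc(2 * high * t) - 2 * low * np.sinc(2 * low * t)
        h *= np.hamming(kernel_size)           # window to reduce ripple
        filters.append(h / (np.abs(h).sum() + 1e-8))  # crude normalization
    return np.stack(filters)

def hybrid_encode(signal, conv_kernels, fb_kernels):
    """Encode a 1-D signal with both free conv kernels and filterbank
    kernels, then concatenate the feature channels."""
    conv_out = np.stack([np.convolve(signal, k, mode="same") for k in conv_kernels])
    fb_out = np.stack([np.convolve(signal, k, mode="same") for k in fb_kernels])
    return np.concatenate([conv_out, fb_out], axis=0)

# Hypothetical usage: 3 free kernels plus a 2-band filterbank at 16 kHz.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
fb = sinc_filterbank([500.0, 2000.0], [200.0, 800.0], kernel_size=101, sample_rate=16000)
conv_kernels = rng.standard_normal((3, 101))
features = hybrid_encode(signal, conv_kernels, fb)   # shape (5, 1000)
```

In a trainable version, the center frequencies, bandwidths, and free kernels would all be optimized by gradient descent; here they are fixed only to keep the sketch self-contained.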