Music recommendation systems can be built with off-the-shelf collaborative filtering, but that approach is sub-optimal: it ignores sources of information specific to the music domain. In this talk we show how deep learning is used at Spotify to extract meaningful information from audio content in order to provide better recommendations. Use cases include learning a measure of music similarity based purely on acoustic properties, classifying songs into genres, and correcting erroneous metadata through audio-based artist disambiguation.
Nicola Montecchio is a Music Information Retrieval Scientist at Spotify. He received his Ph.D. in Computer Science from the University of Padova, Italy, working on real-time alignment of music and gesture streams for interactive applications. After spending a year as an invited researcher at the IRCAM institute in Paris, focusing on the interaction between musicians and computers, he joined The Echo Nest / Spotify in 2012 to work on large-scale content-based classification, ranking, and similarity algorithms applied to acoustic aspects of music.