Over the last two decades, computers have fundamentally changed the ways that people discover and interact with music. While music is a rich, complex, and highly organized domain, it also presents a host of computational challenges not found in other acoustic domains, such as speech or environmental sound. In this talk, I’ll describe machine learning methods to expose and exploit different forms of structure in musical data. The motivating applications of these methods span multiple scales, including catalog-level similarity models for content-based recommendation, session-level models for streaming radio applications, and track-level models for visualizing and analyzing individual recordings. While each of these applications relies on an underlying notion of similarity, the interaction models and the availability of data differ substantially, leading to different algorithmic solutions.
Dr. Brian McFee is a Moore-Sloan Fellow at New York University’s Center for Data Science and the Music and Audio Research Laboratory. His work touches on various topics at the intersection of machine learning, information retrieval, and audio analysis.