tldr

My primary research develops human-centered deep learning tools that support creative audio workflows—enabling practitioners to transform, shape, and generate sound through interfaces where the relationship between user input and sonic output is intuitive, expressive, and controllable.

Outside of this, I also study how algorithmic systems reshape musical experience, bridging methods from HCI and computational musicology to examine influence attribution, algorithmic mediation, and the broader social impact of how we collectively listen to and engage with sound.

A few select projects

For a full list of my publications, please check out Google Scholar or my CV.

Towards Expressive, Controllable Deep Learning Tools for Creative Audio Production

  • TBA (under review) — While interning at Adobe, I explored ways to blend sound concepts.
  • Text2FX (ICASSP 2025) — How can we control audio FX like EQ and reverb using natural language descriptions instead of technical parameters? By leveraging the CLAP embedding space and differentiable signal processing, the system maps high-level semantic intent (“make this guitar warm and dreamy”) to interpretable, refinable DSP parameters (a toy sketch of the core idea follows this list).
  • Text2EQ (ISMIR LBD 2024) — A human-in-the-loop interface for interactively refining text-guided audio processing.
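
To give a flavor of the general approach, here is a minimal toy sketch written for this page (not code from Text2FX): gradient descent on the gains of a crude differentiable EQ so that the processed audio’s embedding moves toward a target embedding. In the real system the target comes from a CLAP text encoder and the effects are proper differentiable EQ and reverb; here the toy_embed_audio encoder and the target are stand-ins invented purely for illustration.

```python
# Toy sketch: optimize interpretable, per-band EQ gains so the processed
# audio's embedding moves toward a target embedding (a stand-in for a
# CLAP text embedding of a prompt like "warm and dreamy").
import torch

SR = 48_000
N_BANDS = 8

def toy_embed_audio(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for a real CLAP audio encoder: pooled log-magnitude spectrum."""
    spec = torch.fft.rfft(x)
    feats = torch.log1p(spec.abs())
    return torch.stack([c.mean() for c in torch.chunk(feats, N_BANDS)])

def apply_eq(x: torch.Tensor, band_gains_db: torch.Tensor) -> torch.Tensor:
    """Crude differentiable graphic-EQ: scale N_BANDS frequency bands of the signal."""
    spec = torch.fft.rfft(x)
    gains = 10.0 ** (band_gains_db / 20.0)
    mask = torch.repeat_interleave(gains, spec.numel() // N_BANDS + 1)[: spec.numel()]
    return torch.fft.irfft(spec * mask, n=x.numel())

# One second of noise stands in for the input recording; the target embedding
# is fabricated here, but would come from the text prompt in the real system.
audio = torch.randn(SR)
target_embedding = toy_embed_audio(
    apply_eq(audio, torch.tensor([6.0, 4.0, 2.0, 0.0, -2.0, -4.0, -6.0, -8.0]))
)

# Learnable, interpretable DSP parameters: per-band gains in dB.
band_gains_db = torch.zeros(N_BANDS, requires_grad=True)
opt = torch.optim.Adam([band_gains_db], lr=0.1)

for step in range(200):
    processed = apply_eq(audio, band_gains_db)
    # Pull the processed audio's embedding toward the target via cosine distance.
    loss = 1.0 - torch.cosine_similarity(toy_embed_audio(processed), target_embedding, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned per-band gains (dB):", band_gains_db.detach())
```

Because the learned parameters are ordinary EQ gains rather than latent codes, the result stays inspectable and hand-refinable, which is the point of mapping text to DSP parameters rather than directly to audio.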

Understanding Music in the Age of Algorithms

  • Listening in the Age of the Algorithm (ongoing) — A collaboration with NOISE lab examining how recommendation systems on streaming platforms shape musical discovery and listening behavior, with the aim of understanding algorithmic mediation’s impact on musical culture and listening practices (to be presented at Clouds, Streams, and Ground Truths 2026).