Welcome to AIM.

AIM (Artificial Intelligence for Musicians) is a music technology research group at Purdue whose aim is to create reactive, human-like systems that support musicians during practice sessions and performances.

Some of AIM's projects are supported by a National Science Foundation grant.

We are looking for motivated graduate or undergraduate students to join our team. Click here for info.

Follow on GitHub

News & Upcoming Events

2025 Transcription Competition

January 01 - April 30, 2025

Challenge yourself to create the best transcription model for classical music.

Learn more

Workshop: AI for Music at AAAI 2025

March 3, 2025

Discover the latest advancements in AI for music at AAAI 2025.

Learn more

Workshop: AI for Music at ICME 2025

June 30, 2025

Join us for an exciting workshop exploring the intersection of AI and music.

Learn more

Our Projects

Automatic Music Transcription

Fall 2024 - present

Automatic Music Transcription is a project focused on streamlining audio-to-MIDI transcription for musicians and educators, with applications in isolating sounds in noisy environments. We are conducting a systematic review of AMT models, examining their strengths and limitations with complex, multi-instrument music.

We hosted a competition in April 2025, challenging participants to create accurate transcription models for classical music. Currently, we are building our own interpretable AMT model, and we are also exploring related niches such as using computer vision to generate guitar tablature and quantifying a piece's "complexity" as an input feature for future AMT models.
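At its core, audio-to-MIDI transcription maps detected fundamental frequencies to discrete MIDI note numbers. A minimal sketch of that arithmetic (illustrative only, not code from our AMT model; function names are made up):

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Convert a fundamental frequency in Hz to the nearest MIDI note number.
    MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(note: int) -> str:
    """Human-readable pitch name; MIDI note 60 is C4 by convention."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[note % 12]}{note // 12 - 1}"

# Middle C is ~261.63 Hz, which rounds to MIDI note 60
print(midi_to_name(freq_to_midi(261.63)))  # -> C4
```

A real transcription model must additionally handle polyphony, onsets/offsets, and timing, which is where the hard research questions live.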

Learn more
Evaluator

Fall 2023 - present

Evaluator is an app that aims to help musicians practice more effectively. It utilizes computer vision and YOLO localization techniques to help musicians track, analyze, and improve their posture.

It also uses spectrogram analysis and multi-modal transformers to help musicians identify and correct mistakes in their playing.
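For readers curious what spectrogram analysis involves, here is a minimal NumPy sketch of a short-time Fourier transform (an illustration of the general technique, not Evaluator's implementation):

```python
import numpy as np

def spectrogram(signal: np.ndarray, frame_size: int = 1024, hop: int = 256) -> np.ndarray:
    """Magnitude spectrogram: slide a Hann-windowed frame over the signal
    and take the FFT of each frame."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame_size//2 + 1)

# One second of a 440 Hz sine at 16 kHz: energy concentrates near 440 Hz
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin * sr / 1024)  # centre frequency of the peak bin, near 440 Hz
```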

Drawing by Cecilia Ines Sanchez.

View on GitHub
Companion

Fall 2023 - present

Companion is an app that not only plays along with a human player during a chamber music piece, but actively responds to their playing habits and voice commands like a real human would.

The project involves machine learning and filtering/DSP algorithms to analyze and edit sound quickly and accurately, and it uses small NLP language models to implement voice commands.
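As a taste of the filtering/DSP side, a moving-average low-pass filter is about the simplest example of the kind of smoothing such a pipeline might apply (a generic sketch, not Companion's code):

```python
import numpy as np

def moving_average(signal: np.ndarray, width: int = 5) -> np.ndarray:
    """A basic FIR low-pass: each output sample averages `width` neighboring
    input samples, attenuating high-frequency content such as noise."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Noisy constant signal: filtering pulls samples back toward 1.0
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal(1000)
smooth = moving_average(noisy)
print(smooth.std() < noisy.std())  # True: the filter reduces variance
```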

Drawing by Cecilia Ines Sanchez.

View on GitHub
Mus2Vid

Spring 2022 - present

Mus2Vid is a real-time art project that uses diffusion models to generate video depictions in response to classical music. It uses recurrent and transformer networks to analyze input audio and estimate its emotion and genre qualities, which are converted into text and fed to a text-to-image diffusion model to generate images.
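The hand-off between the analysis and generation stages can be pictured as turning estimated labels into a text prompt for the diffusion model. The sketch below is purely illustrative; the label vocabulary and mapping are made up, not Mus2Vid's actual prompt scheme:

```python
def build_prompt(emotion: str, genre: str) -> str:
    """Turn estimated emotion/genre labels into a text-to-image prompt.
    The mood vocabulary here is hypothetical, for illustration only."""
    moods = {
        "joy": "bright, warm colors, sunlit scene",
        "sorrow": "muted blues, rain, soft shadows",
        "tension": "high contrast, sharp angles, storm clouds",
    }
    return f"{moods.get(emotion, 'abstract shapes')}, in the spirit of {genre} music"

print(build_prompt("joy", "Romantic-era"))
# -> bright, warm colors, sunlit scene, in the spirit of Romantic-era music
```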

Learn more

Robot Cello

Spring 2024 - present

As the name suggests, Robot Cello is a project about using reinforcement learning to teach a robot arm to play cello. The project is in its survey phase and is investigating the use of motion-capture technology to gather training data for an RL model.

We partner with the Purdue Envision Center to collect motion data for our robot arm to train on. On the left is a video of Prof. Yun playing cello while wearing a motion-capture rig.

Research Areas + Questions

Our projects often span most or all of these areas, as they are all important to making effective, human-like music technology.

Generative Audio and DSP

How can Companion utilize machine learning and filtering to resynthesize string instrument articulations on-the-fly?

Beat Detection and Tempo Tracking

What are the most effective methods for Companion, Evaluator, and Mus2Vid to follow a musician's playing in reference to a score, and play along to match?

Emotion and Perception

How can Mus2Vid analyze the emotion of classical music in real time and use it to generate real-time video accompaniments?

Music Classification / Information Retrieval

Which musical features extracted from various media (such as tempo, key, genre, and notes) are useful for music performance technology, and how can we extract them?

Human-Computer Interaction

How can our apps be designed in ways that are human-like and natural for humans to interact with?

User Studies + Deployments

How can we ensure that our users actually utilize and enjoy the apps we develop?

Research

Publications

Peer-reviewed work at the intersection of artificial intelligence and music technology.

AAAI 2025

Detecting Music Performance Errors with Transformers

Benjamin Shiue-Hal Chou, Purvish Jajal, Nicholas John Eliopoulos, Tim Nadolsky, Cheng-Yun Yang, Nikita Ravi, James C. Davis, Kristen Yeon-Ji Yun, Yung-Hsiang Lu

Vertically Integrated Projects Team

AIM stands out among other music technology research groups because of its pedagogy. While other music technology groups may cater primarily to graduate students and professionals, our group is open to all Purdue students of any major and experience level.

We hope that by helping any student interested in music technology/machine learning learn to work with these technologies, we can make a difference in these students' lives while simultaneously encouraging the adoption of music technology.

Learn More
AIM Team Photo

Our Talented Team

Students from diverse backgrounds working together on cutting-edge music AI
