An acoustic-based keystroke recognition program
Motivations
This project concluded my MAIS 202 program in Winter 2025. I implemented a deep-learning CNN model that takes a set of time-frequency bins as input and returns a predicted key at inference time. Framing the task as a multi-class classification problem taught me a great deal about acoustics and about processing audio files in Python.
I also learned the theory of Fourier transforms along the way, which was really interesting!
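To make those time-frequency bins concrete, here is a minimal sketch (not the project's actual preprocessing) of how a keystroke recording could be turned into such bins with librosa's short-time Fourier transform. The file path, FFT size, and hop length are illustrative assumptions.

```python
import librosa
import numpy as np

def audio_to_bins(path: str, n_fft: int = 1024, hop_length: int = 256) -> np.ndarray:
    """Turn a keystroke recording into a grid of time-frequency bins.

    The short-time Fourier transform slices the signal into overlapping windows
    and applies a Fourier transform to each one, giving a (frequency x time) matrix.
    """
    y, sr = librosa.load(path, sr=None)                    # keep the native sample rate
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    magnitude = np.abs(stft)                               # drop the phase, keep bin energies
    return librosa.amplitude_to_db(magnitude, ref=np.max)  # log scale for the CNN input

# Hypothetical usage: one clip per keystroke, e.g. a recorded press of the "a" key.
# bins = audio_to_bins("data/keystrokes/a_001.wav")
# print(bins.shape)  # (n_fft // 2 + 1, number_of_frames)
```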
In the project itself, we use librosa to process the audio and PyTorch's torch.nn module to write the model.
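For the model side, here is a minimal sketch of a multi-class CNN classifier written with torch.nn. The layer sizes, the 26-key alphabet, and the input shape are assumptions for illustration, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class KeystrokeCNN(nn.Module):
    """Small CNN mapping a (1, freq, time) spectrogram to one logit per key."""

    def __init__(self, num_keys: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),            # fixed-size features regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_keys)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# Hypothetical usage with a batch of spectrograms shaped (batch, 1, freq_bins, frames):
model = KeystrokeCNN(num_keys=26)
dummy = torch.randn(8, 1, 513, 64)
logits = model(dummy)                                # (8, 26); argmax over dim 1 is the predicted key
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 26, (8,)))
```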
The final project's GitHub repo