I’ve just uploaded to GitHub an attempt at extracting pitch from the human voice. It’s a Processing sketch built from the following classes:
PitchProject.pde – main sketch file.
AudioSource.pde – gets audio from WAV files or the microphone.
ToneGenerator.pde – creates an output tone with a triangle wave.
PitchDetectorAutocorrelation.pde – an audio listener that uses autocorrelation to extract pitch from captured sound.
PitchDetectorHPS.pde – an audio listener that uses the Harmonic Product Spectrum to extract pitch from captured sound (not working yet).
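For anyone curious about the autocorrelation approach before digging into the source, here is a minimal sketch of the core idea in plain Java (class and method names are illustrative, not the ones in the repo): find the lag at which the signal best correlates with a shifted copy of itself, then convert that lag to a frequency.

```java
// Minimal time-domain autocorrelation pitch estimator.
// Illustrative only: names and the 80–1000 Hz search range are my assumptions.
public class AutocorrelationPitch {

    // Returns the estimated fundamental frequency in Hz, or -1 if none found.
    public static float estimatePitch(float[] buffer, float sampleRate) {
        int n = buffer.length;
        // Search only lags corresponding to a typical voice range (80–1000 Hz).
        int minLag = (int) (sampleRate / 1000f);
        int maxLag = (int) (sampleRate / 80f);
        int bestLag = -1;
        float bestCorr = 0f;
        for (int lag = minLag; lag <= maxLag && lag < n; lag++) {
            // Correlation of the buffer with itself shifted by 'lag' samples.
            float corr = 0f;
            for (int i = 0; i < n - lag; i++) {
                corr += buffer[i] * buffer[i + lag];
            }
            if (corr > bestCorr) {
                bestCorr = corr;
                bestLag = lag;
            }
        }
        // The strongest lag is one period of the fundamental.
        return bestLag > 0 ? sampleRate / bestLag : -1f;
    }

    public static void main(String[] args) {
        // Quick check: a 220 Hz sine at 44100 Hz should come out close to 220.
        float sr = 44100f;
        float[] buf = new float[2048];
        for (int i = 0; i < buf.length; i++) {
            buf[i] = (float) Math.sin(2 * Math.PI * 220 * i / sr);
        }
        System.out.println(AutocorrelationPitch.estimatePitch(buf, sr));
    }
}
```

In the actual sketch this computation would run inside the Minim audio listener's `samples()` callback on each incoming buffer, rather than on a synthesized array.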
Right now it’s set up to open WAV files with a file chooser and apply autocorrelation to extract the pitch. When you run the sketch, you should see bars on screen representing the detected pitch, and hear an output tone matching the one found in the audio. Try it!
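For comparison, the Harmonic Product Spectrum idea that PitchDetectorHPS aims for can be sketched as: downsample the magnitude spectrum by integer factors, multiply the copies together, and the fundamental's bin stands out because its harmonics all line up on it. The naive DFT below keeps the demo self-contained (the real sketch would presumably use Minim's FFT); all names here are illustrative.

```java
// Sketch of Harmonic Product Spectrum (HPS) pitch estimation.
// Illustrative only: a real implementation would use an FFT, not this O(n^2) DFT.
public class HpsPitch {

    // Magnitude spectrum of the first n/2 bins via a naive DFT (demo only).
    static float[] magnitudeSpectrum(float[] x) {
        int n = x.length;
        float[] mag = new float[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int i = 0; i < n; i++) {
                double ang = 2 * Math.PI * k * i / n;
                re += x[i] * Math.cos(ang);
                im -= x[i] * Math.sin(ang);
            }
            mag[k] = (float) Math.hypot(re, im);
        }
        return mag;
    }

    // Multiply the spectrum with downsampled copies of itself, so the
    // fundamental's bin accumulates energy from its harmonics.
    public static float estimatePitch(float[] buffer, float sampleRate, int harmonics) {
        float[] mag = magnitudeSpectrum(buffer);
        int limit = mag.length / harmonics;
        double best = 0;
        int bestBin = -1;
        for (int k = 1; k < limit; k++) {
            double p = 1;
            for (int h = 1; h <= harmonics; h++) {
                p *= mag[k * h]; // bin k*h of copy h == bin k after downsampling
            }
            if (p > best) {
                best = p;
                bestBin = k;
            }
        }
        // Convert the winning bin index back to Hz.
        return bestBin > 0 ? bestBin * sampleRate / buffer.length : -1f;
    }
}
```

One reason HPS can misbehave on voice is octave errors: if the second harmonic dominates, the product can peak an octave too high, which may be worth checking while debugging PitchDetectorHPS.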
However, while it works quite nicely, I’m not sure I’m doing things right (this is my first Processing project!). Are the Minim buffers being used properly? Is the AudioListener implemented correctly? Shouldn’t all this be a little smoother? (The audio experiences small hiccups from time to time…) Opinions, advice, and suggestions are welcome!