Automatic Song-Type Classification and Speaker Identification of Norwegian Ortolan Bunting Emberiza Hortulana Vocalizations
Originally published by the Institute of Electrical and Electronics Engineers (IEEE) in the 2005 IEEE Workshop on Machine Learning for Signal Processing.
This paper presents an approach to song-type classification and speaker identification of Norwegian Ortolan Bunting (Emberiza hortulana) vocalizations using traditional human speech processing methods. Hidden Markov models (HMMs) are used for both tasks, with features including mel-frequency cepstral coefficients (MFCCs), log energy, and delta (velocity) and delta-delta (acceleration) coefficients. Vocalizations were evaluated using leave-one-out cross-validation. Classification accuracy for 5 song types is 92.4%, dropping to 63.6% as the number and similarity of song types increases. Song-type-dependent speaker identification rates peak at 98.7%, with typical accuracies of 80-95% and a low of 76.2% as the number of speakers increases. These experiments fit into a larger research framework working towards acoustic censusing of endangered species populations and more automated bioacoustic analysis methods.
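The evaluation pipeline described above (per-class acoustic models scored against a held-out vocalization, repeated leave-one-out) can be sketched in miniature. The snippet below is a hypothetical simplification, not the paper's implementation: it uses synthetic 13-dimensional "MFCC-like" frame sequences and a single-state diagonal-Gaussian model per song type in place of a full HMM front end and trainer (in practice one would extract real MFCC/delta features with a tool such as HTK and fit HMMs), but the leave-one-out classification loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 hypothetical song types, 8 vocalizations each;
# every vocalization is a variable-length sequence of 13-dim "MFCC-like" frames.
n_types, n_per_type, dim = 3, 8, 13
class_means = rng.normal(0.0, 3.0, size=(n_types, dim))
data = [(t, class_means[t] + rng.normal(0.0, 1.0, size=(rng.integers(40, 80), dim)))
        for t in range(n_types) for _ in range(n_per_type)]

def fit_diag_gaussian(frames):
    """Single-state diagonal-Gaussian model: a drastic simplification of an HMM."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # floor the variance for numerical safety
    return mu, var

def log_likelihood(seq, model):
    """Sum of per-frame diagonal-Gaussian log densities over the sequence."""
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (seq - mu) ** 2 / var)

# Leave-one-out cross-validation: hold out one vocalization, train per-class
# models on all the rest, classify the held-out sequence by best likelihood.
correct = 0
for i, (true_type, seq) in enumerate(data):
    train = [(t, s) for j, (t, s) in enumerate(data) if j != i]
    models = {t: fit_diag_gaussian(np.vstack([s for tt, s in train if tt == t]))
              for t in range(n_types)}
    pred = max(models, key=lambda t: log_likelihood(seq, models[t]))
    correct += (pred == true_type)

accuracy = correct / len(data)
print(f"leave-one-out accuracy: {accuracy:.1%}")
```

Because each vocalization contributes many frames, the summed log-likelihood discriminates strongly even with a crude per-class model; the paper's HMMs additionally model the temporal structure of each song type.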