
Commit f18843b: "Add context for need"
1 parent 65bcc84

2 files changed, +16 -3 lines


paper/paper.bib

Lines changed: 11 additions & 0 deletions
@@ -89,3 +89,14 @@ @misc{tzanetakis_essl_cook_2001
   publisher = "The International Society for Music Information Retrieval",
   year = "2001"
 }
+
+@misc{nlp,
+  doi = {10.5281/ZENODO.4915746},
+  url = {https://zenodo.org/record/4915746},
+  author = {Singh, Jyotika},
+  keywords = {YouTube, NER, NLP},
+  title = {jsingh811/pyYouTubeAnalysis: pyYouTubeAnalysis: YouTube data requests and NER on text},
+  publisher = {Zenodo},
+  year = {2021},
+  copyright = {Open Access}
+}

paper/paper.md

Lines changed: 5 additions & 3 deletions
@@ -51,6 +51,8 @@ This software aims to provide machine learning engineers, data scientists, resea
 The motivation behind this software is understanding the popularity of Python for Machine Learning and presenting solutions for computing complex audio features using Python. This not only implies a need for resources that guide solutions for audio processing, but also signifies the need for Python guides and implementations that solve audio and speech classification tasks. The classifier implementation examples that are part of this software and the README aim to give users a sample solution to audio classification problems and help build the foundation to tackle new and unseen problems.
 
+Different data processing techniques work well for different types of data. For example, word vector formations work great for text data [@nlp]. However, passing numeric data, an audio signal or an image through word vector formation is unlikely to produce a meaningful numerical representation that can be used to train machine learning models. Each data type calls for feature formation techniques specific to its domain rather than a "one size fits all" approach.
+
 PyAudioProcessing is a Python-based library for processing audio data into features and building machine learning models. Audio processing and feature extraction research is popular in MATLAB, and there are comparatively fewer resources for audio processing and classification in Python. This tool contains implementations of popular audio feature extraction techniques that can be used in combination with most scikit-learn classifiers. Classifiers can be trained, tested and used to label audio with pyAudioProcessing. The output consists of cross-validation scores and results of testing on custom audio files.
 
 The library lets the user extract aggregated data features calculated per audio file. Feature extractions such as Mel Frequency Cepstral Coefficients (MFCC) [@6921394], Gammatone Frequency Cepstral Coefficients (GFCC) [@inbook], spectral coefficients, chroma features and others are available to extract and use in combination with different backend classifiers. While MFCC features find use in the most commonly encountered audio processing tasks, such as audio type classification and speech classification, GFCC features have found application in speaker identification and speaker diarization. Many such applications, comparisons and uses can be found in this IEEE paper [@6639061]. All these features are also helpful for a variety of other audio classification tasks.
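The MFCC pipeline mentioned in the paragraph above can be sketched end to end in plain NumPy. This is a simplified, single-frame illustration (the function names, frame size and filter counts are illustrative assumptions, not pyAudioProcessing's actual API): windowed power spectrum, triangular mel filterbank, log compression, then a type-II DCT.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_like(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Toy MFCC pipeline: power spectrum -> mel filterbank -> log -> DCT."""
    # Power spectrum of a single Hann-windowed frame.
    frame = signal[:n_fft] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frame)) ** 2
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, center, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:center] = (np.arange(lo, center) - lo) / max(center - lo, 1)
        fbank[i, center:hi] = (hi - np.arange(center, hi)) / max(hi - center, 1)
    log_mel = np.log(fbank @ power + 1e-10)
    # Type-II DCT decorrelates the log-mel energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_mel

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)  # one second of a 440 Hz test tone
coeffs = mfcc_like(tone, sr=sr)
print(coeffs.shape)  # one 13-dimensional coefficient vector per frame
```

In practice a real extractor applies this per overlapping frame and then aggregates (e.g. mean and standard deviation) per file, which is the per-file feature vector the library's classifiers consume.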
@@ -72,22 +74,22 @@ Given the use of this software in the community today inspires the need and grow
 
 This software offers pre-trained models. This is an evolving feature as new datasets and classification problems gain prominence in research. Some of the pre-trained models include the following.
 
-1. Audio type classifier to determine speech versus music: Trained SVM classifier for classifying audio into two possible classes - music, speech. This classifier was trained using MFCC, spectral and chroma features. Cross-validation confusion matrix has scores such as follows.
+1. Audio type classifier to determine speech versus music: a trained SVM classifier for classifying audio into two possible classes, music and speech. This classifier was trained using MFCC, spectral and chroma features. The confusion matrix scores are as follows.
 
 | | music | speech |
 | --- | --- | --- |
 | music | 48.80 | 1.20 |
 | speech | 0.60 | 49.40 |
 
-2. Audio type classifier to determine speech versus music versus bird sounds: Trained SVM classifier that classifying audio into three possible classes - music, speech and birds. This classifier was trained using MFCC, spectral and chroma features.
+2. Audio type classifier to determine speech versus music versus bird sounds: a trained SVM classifier that classifies audio into three possible classes, music, speech and birds. This classifier was trained using MFCC, spectral and chroma features. The confusion matrix scores are as follows.
 
 | | music | speech | birds |
 | --- | --- | --- | --- |
 | music | 31.53 | 0.73 | 1.07 |
 | speech | 1.00 | 32.33 | 0.00 |
 | birds | 0.00 | 0.00 | 33.33 |
 
-3. Music genre classifier using the GTZAN [@tzanetakis_essl_cook_2001] dataset: Trained on SVM classifier using GFCC, MFCC, spectral and chroma features to classify music into 10 genre classes - blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock.
+3. Music genre classifier using the GTZAN [@tzanetakis_essl_cook_2001] dataset: a trained SVM classifier using GFCC, MFCC, spectral and chroma features to classify music into 10 genre classes: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock. The confusion matrix scores are as follows.
 
 | | pop | met | dis | blu | reg | cla | rock | hip | cou | jazz |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
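The SVM-with-cross-validation workflow behind the confusion matrices above can be sketched directly with scikit-learn. The feature matrix below is synthetic, standing in for per-file aggregated audio features (e.g. mean MFCCs); the shapes, cluster centers and labels are illustrative assumptions, not real data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for per-file feature vectors: two well-separated
# clusters playing the roles of "music" (0) and "speech" (1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 13)),   # class 0 cluster
               rng.normal(3.0, 1.0, (50, 13))])  # class 1 cluster
y = np.array([0] * 50 + [1] * 50)

# Train and score an SVM with 5-fold cross-validation, mirroring the
# cross-validation scores the library reports for its backend classifiers.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With real audio features the per-fold predictions would then be tallied into a confusion matrix like the ones shown above.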
