acoss: Audio Cover Song Suite is a feature extraction and benchmarking framework for cover song identification (CSI) tasks. This tool has been developed along with the new DA-TACOS dataset.
`acoss` includes a standard feature extraction framework with state-of-the-art audio features for the CSI task, as well as open-source implementations of seven state-of-the-art CSI algorithms, to facilitate future work in this line of research. Using this framework, researchers can easily compare existing algorithms on different datasets, and we encourage all CSI researchers to incorporate their algorithms into this framework, which can easily be done by following the usage examples.
Please cite our paper if you use this tool in your research. Benchmarking results on the DA-TACOS dataset can be found in the paper.
We recommend installing the package inside a Python virtualenv.
```
pip install acoss
```
OR
- Clone or download the repo.
- Install the `acoss` package by running the following command inside the directory:

```
python3 setup.py install
```
`acoss` requires a local installation of the madmom library for computing some audio features, and of the essentia library for the similarity algorithms.
For Linux-based distro users:

```
pip install "acoss[extra-deps]"
```
Or, if you are a Mac OSX user, you can install madmom from pip:

```
pip install madmom
```

and essentia from Homebrew:

```
brew tap MTG/essentia
brew install essentia --HEAD
```
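After installation, a quick way to verify that the required libraries are in place is simply to import them; a minimal sanity check, nothing more:

```python
# Quick sanity check: these imports should succeed after installation
import acoss
import madmom    # used for some audio features
import essentia  # used by the similarity algorithms

print("acoss and its dependencies are importable")
```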
`acoss` mainly provides the following Python sub-modules:
- `acoss.algorithms`: sub-module with various cover identification algorithms, utilities for similarity comparison, and a template for adding new algorithms.
- `acoss.coverid`: interface to benchmark a specific cover identification algorithm.
- `acoss.features`: sub-module with implementations of various audio features.
- `acoss.extractors`: interface for efficient batch audio feature extraction on an audio dataset.
- `acoss.utils`: utility functions used in the acoss package.
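The examples in the rest of this README exercise these sub-modules one by one; for orientation, here is a minimal sketch of the entry points they use:

```python
# Entry points used in the examples below
from acoss.extractors import batch_feature_extractor, PROFILE
from acoss.coverid import benchmark, algorithm_names
from acoss.utils import COVERS_80_CSV
```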
Cover Song Identification algorithms in acoss:

| Algorithm | Reference |
|---|---|
| Serra09 | Paper |
| LateFusionChen | Paper |
| EarlyFusionTraile | Paper |
| SiMPle | Paper |
| FTM2D | Paper |
| MOVE | adding soon ... |
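The names in the Algorithm column are the identifiers accepted by the `algorithm` argument of `acoss.coverid.benchmark` and listed in `acoss.coverid.algorithm_names`, as shown in the benchmarking example below.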
An extracted feature file stores a dictionary with the following structure:

```python
{
    "chroma_cens": numpy.ndarray,
    "crema": numpy.ndarray,
    "hpcp": numpy.ndarray,
    "key_extractor": {
        "key": numpy.str_,
        "scale": numpy.str_,
        "strength": numpy.float64
    },
    "madmom_features": {
        "novfn": numpy.ndarray,
        "onsets": numpy.ndarray,
        "snovfn": numpy.ndarray,
        "tempos": numpy.ndarray
    },
    "mfcc_htk": numpy.ndarray,
    "label": numpy.str_,
    "track_id": numpy.str_
}
```
Audio files and the corresponding extracted feature files are organized in the following directory structure:

```
audio_dir/
    work_id/
        track_id.mp3

feature_dir/
    work_id/
        track_id.h5
```
A feature file can be loaded using deepdish:

```python
import deepdish as dd

feature = dd.load("feature_file.h5")
```

An example feature file has the following structure:
```python
{
    'feature_1': [],
    'feature_2': [],
    'feature_3': {'type_1': [], 'type_2': [], ...},
    ......
}
```
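Once loaded, individual features are plain dictionary entries; a minimal sketch of field access, assuming the file was extracted with the default profile shown below:

```python
# Hedged sketch: reading fields from a loaded acoss feature file,
# following the dictionary layout documented above
hpcp = feature["hpcp"]                 # numpy array of HPCP frames
key = feature["key_extractor"]["key"]  # estimated key as a string
print(feature["track_id"], hpcp.shape, key)
```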
| work_id | track_id |
|---|---|
| W_163930 | P_546633 |
| ... | ... |
`acoss` benchmark methods expect the dataset annotation CSV file in the format given above. There are also utility functions in `acoss` that generate the CSV annotation file automatically: for Da-TACOS from its subset metadata file, and for the covers80 dataset from its audio data directory.
```python
from acoss.utils import da_tacos_metadata_to_acoss_csv
da_tacos_metadata_to_acoss_csv("da-tacos_benchmark_subset_metadata.json",
                               "da-tacos_benchmark_subset.csv")

from acoss.utils import generate_covers80_acoss_csv
generate_covers80_acoss_csv("coversongs/covers32k/",
                            "covers80_annotations.csv")
```
For quick prototyping, let's use the tiny covers80 dataset.
```python
from acoss.utils import COVERS_80_CSV
from acoss.extractors import batch_feature_extractor
from acoss.extractors import PROFILE

print(PROFILE)
```

```
{
    'sample_rate': 44100,
    'input_audio_format': '.mp3',
    'downsample_audio': False,
    'downsample_factor': 2,
    'endtime': None,
    'features': ['hpcp', 'key_extractor', 'madmom_features', 'mfcc_htk']
}
```
Now, let's compute the features for the dataset using a custom extractor profile:
```python
# Let's define a custom acoss extractor profile
extractor_profile = {
    'sample_rate': 32000,
    'input_audio_format': '.mp3',
    'downsample_audio': True,
    'downsample_factor': 2,
    'endtime': None,
    'features': ['hpcp']
}

# path to audio data
audio_dir = "../coversongs/covers32k/"
# path where you want to store the extracted features
feature_dir = "features/"

# run the batch feature extractor with 4 parallel workers
batch_feature_extractor(dataset_csv=COVERS_80_CSV,
                        audio_dir=audio_dir,
                        feature_dir=feature_dir,
                        n_workers=4,
                        mode="parallel",
                        params=extractor_profile)
```
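A quick way to confirm that extraction succeeded is to count the feature files that were written; a small sketch, assuming the `feature_dir/work_id/track_id.h5` layout described earlier:

```python
# Hedged sketch: count the .h5 feature files written by the extractor,
# assuming the feature_dir/work_id/track_id.h5 layout described above
import glob

feature_files = glob.glob("features/*/*.h5")
print(f"extracted {len(feature_files)} feature files")
```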
```python
from acoss.coverid import benchmark, algorithm_names
from acoss.utils import COVERS_80_CSV

# path to where your audio feature h5 files are located
feature_dir = "features/"

# list all the available algorithms in acoss
print(algorithm_names)

# we can easily benchmark any of the available cover id algorithms
# on the given dataset using the following function
benchmark(dataset_csv=COVERS_80_CSV,
          feature_dir=feature_dir,
          algorithm="Serra09",
          parallel=False)

# the result of the evaluation will be stored in a csv file
# in the current working directory
```
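The results CSV can then be inspected with pandas; a hedged sketch, where the file name is a placeholder (use whatever file `benchmark` writes to your working directory):

```python
# Hedged sketch: load the evaluation results written by benchmark().
# The file name below is hypothetical; substitute the CSV that appears
# in your current working directory.
import pandas as pd

results = pd.read_csv("Serra09_covers80.csv")  # hypothetical file name
print(results.head())
```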
You can add your own cover identification algorithm to acoss by extending the algorithm template class:

```python
from acoss.algorithms.algorithm_template import CoverSimilarity

class MyCoverIDAlgorithm(CoverSimilarity):
    def __init__(self,
                 dataset_csv,
                 datapath,
                 name="MyAlgorithm",
                 shortname="mca"):
        CoverSimilarity.__init__(self,
                                 dataset_csv=dataset_csv,
                                 name=name,
                                 datapath=datapath,
                                 shortname=shortname)

    def load_features(self, i):
        """Define how you want to load the features"""
        feats = CoverSimilarity.load_features(self, i)
        # add your modifications to the feature arrays here
        return feats

    def similarity(self, idxs):
        """Define how you want to compute the cover song similarity"""
        return

# create an instance of your algorithm
my_awesome_algorithm = MyCoverIDAlgorithm(dataset_csv, datapath)

# run pairwise comparison
my_awesome_algorithm.all_pairwise()

# compute standard evaluation metrics
for similarity_type in my_awesome_algorithm.Ds.keys():
    print(similarity_type)
    my_awesome_algorithm.getEvalStatistics(similarity_type)

my_awesome_algorithm.cleanup_memmap()
```
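Judging from the template's API, `all_pairwise()` stores one pairwise similarity matrix per similarity type in the `Ds` dictionary, apparently backed by on-disk memmap arrays, which is why `cleanup_memmap()` should be called once the evaluation is done.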
- Fork the repo!
- Create your feature branch: `git checkout -b my-new-feature`
- Add your new audio feature or cover identification algorithm to acoss.
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068 (MIP-Frontiers).
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 770376 (Trompa).