Label aggregation for crowdsourced classification datasets consists in presenting a set of tasks to several workers and combining their possibly noisy, conflicting answers into a single estimated label per task. The quality of the aggregation is typically measured by the accuracy of the estimated labels against the ground truth; other objectives, such as the F1 score, can also be considered.
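As an illustration only (not the benchmark's own implementation), a minimal Python sketch of the task is given below: aggregate worker votes by majority voting and evaluate accuracy and F1 against the ground truth. The toy data and variable names are hypothetical.

# Minimal sketch (illustrative, hypothetical data): majority-vote aggregation
# of crowdsourced labels, evaluated with accuracy and F1 score.
from collections import Counter

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Each task receives (possibly conflicting) answers from several workers.
votes = {0: [1, 1, 0], 1: [0, 0, 0], 2: [1, 0, 1]}
ground_truth = np.array([1, 0, 1])

# Majority vote: keep the most frequent answer for each task.
aggregated = np.array(
    [Counter(votes[task]).most_common(1)[0][0] for task in sorted(votes)]
)

print("Accuracy:", accuracy_score(ground_truth, aggregated))
print("F1 score:", f1_score(ground_truth, aggregated))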
This benchmark can be run using the following commands:
$ pip install -U benchopt
$ git clone https://github.com/benchopt/benchmark_crowdsourcing
$ benchopt run benchmark_crowdsourcing
Apart from the problem, options can be passed to benchopt run to restrict the benchmark to some solvers or datasets, e.g.:
$ benchopt run benchmark_crowdsourcing -s solver1 -d dataset2 --max-runs 10 --n-repetitions 10
Use benchopt run -h for more details about these options, or visit https://benchopt.github.io/api.html.