
# Hyperparameter study


In order to improve the results and check the influence of the hyperparameters of each tool, I ran the tools again with new configurations. Because repeated 10-fold cross-validation took too long to run, I decided to perform only repeated holdout validation, splitting the HAREM golden collection into 70% training and 30% testing.
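
The sketch below illustrates how such a repeated 70/30 holdout split can be set up. It is only an illustration of the evaluation scheme, not the actual scripts used here: the function name, the number of repetitions and the `harem_docs`/`evaluate` placeholders are assumptions.

```python
import random

def repeated_holdout(documents, runs=5, train_frac=0.7, seed=42):
    """Yield (train, test) document lists for repeated 70/30 holdout validation."""
    rng = random.Random(seed)
    for _ in range(runs):
        docs = list(documents)
        rng.shuffle(docs)                    # reshuffle the corpus on every run
        cut = int(len(docs) * train_frac)    # 70% of the documents for training
        yield docs[:cut], docs[cut:]

# Hypothetical usage: harem_docs and evaluate() stand in for the real
# corpus loader and scoring code used in these experiments.
# scores = [evaluate(train, test) for train, test in repeated_holdout(harem_docs)]
# print(sum(scores) / len(scores))
```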

## Hyperparameters for each tool

| Tool | Parameter | Default | Values tested |
| --- | --- | --- | --- |
| Stanford CoreNLP | max iterations | infinite | - |
| OpenNLP | iterations | 100 | 70 - 130, 125, 135, 150 - 180, 200 |
| OpenNLP | cutoff | 5 | 0, 3 - 7, 10 |
| spaCy | iterations | 10 | 10 - 110 |
| NLTK | [ME] max iterations | 10 | 10 - 120 |
| NLTK | [ME] min_ll | 0 | - |
| NLTK | [ME] min_lldelta | 0.1 | 0 - 0.2 (step 0.05) |
| NLTK | [DT] entropy cutoff | 0.05 | 0.03 - 0.13 (step 0.01) |
| NLTK | [DT] depth cutoff | 100 | 2, 5, 10 - 120 |
| NLTK | [DT] support cutoff | 10 | 3, 7 - 20 |
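
For the NLTK models, the [ME] and [DT] parameters above correspond directly to keyword arguments of the NLTK training calls (`max_iter`, `min_ll` and `min_lldelta` for `MaxentClassifier`, and `entropy_cutoff`, `depth_cutoff` and `support_cutoff` for `DecisionTreeClassifier`). The snippet below is only a minimal sketch of where they plug in; the toy `train_set` and the chosen values are illustrative, not the actual experiment code.

```python
from nltk.classify import MaxentClassifier, DecisionTreeClassifier

# Toy feature sets standing in for the real features extracted from HAREM.
train_set = [
    ({"word": "Lisboa", "capitalised": True}, "LOCAL"),
    ({"word": "ontem", "capitalised": False}, "O"),
    ({"word": "Pedro", "capitalised": True}, "PESSOA"),
    ({"word": "comprou", "capitalised": False}, "O"),
]

# Maximum Entropy: the iteration and log-likelihood delta cutoffs from the table.
me_classifier = MaxentClassifier.train(train_set, max_iter=60, min_lldelta=0.1)

# Decision Tree: the three cutoffs are plain keyword arguments.
dt_classifier = DecisionTreeClassifier.train(
    train_set,
    entropy_cutoff=0.05,
    depth_cutoff=100,
    support_cutoff=10,
)
```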