
Will these data be available after the challenge? #12

Open
puolival opened this issue May 7, 2018 · 4 comments
Labels
question Further information is requested

Comments


puolival commented May 7, 2018

Hi,

I just wanted to ask: what will happen to these data after the challenge? Also, is it possible to use them for other purposes? For example, suppose one comes up with a novel idea during the challenge and would like to publish it (in collaboration with the owners of the data). Would that be possible?

Br,
Tuomas

@glemaitre glemaitre added the question Further information is requested label May 8, 2018
@glemaitre
Contributor

I would like to have @GaelVaroquaux, @kegl and @agramfort opinions.

I think it could be great to benchmark new submissions post-challenge. However, it would require computation power on our side, and we need to think about the conditions to do that.


ghost commented May 18, 2018

I want to know whether I could use these preprocessed data (*.csv files) for my own research after the challenge, and perhaps publish some papers with them. Also, would it be possible to get the other part of the preprocessed data (not the raw data) in the private set after the challenge? Hope to hear back from you, thanks. @glemaitre

Contributor

kegl commented May 18, 2018

I'm not the main person to decide, but my suggestion would be to keep the test data private. We could have a service where researchers could officially register their experiments, submit the code, and receive one private test score before publication, which we would also make public, independently of whether the researcher decides to include it in the paper.

The plan for this challenge is similar: every participant can select one submission (the default is the one with the highest public score), for which we will publish the test score. This is why we have now this message at the top of each submission:

"By default, all your submissions enter the competition, and the one with the highest public leaderboard score will become your official entry. If you don't want this submission to become a candidate, pull it out by pressing the button."

@GaelVaroquaux
Collaborator

GaelVaroquaux commented May 18, 2018 via email

