CodRep is a machine learning competition on source code data. It provides the community with a curated dataset and a well-defined loss function. If you use this data, please acknowledge it by citing the following technical report: The CodRep Machine Learning on Source Code Competition (Zimin Chen, Martin Monperrus), arXiv 1807.03200, 2018.
@techreport{arXiv-1807.03200,
author = {Zimin Chen and Martin Monperrus},
title = {The CodRep Machine Learning on Source Code Competition},
year = {2018},
number = {1807.03200},
institution = {arXiv},
url = {http://arxiv.org/pdf/1807.03200},
}
The goal of the competition is to provide different communities (machine learning, software engineering, programming languages) with a common playground to test and compare ideas. The competition is designed with the following principles:
- No specific background or skills in program analysis are required to understand the data.
- The systems that use the competition data can be used beyond the competition itself. In particular, there are potential usages in the field of automated program repair.
To take part in the competition, you have to write a program which predicts where to insert a specific line into a source code file. In particular, we consider replacement insertions, where the new line replaces an old line, such as:
public class test{
int a = 1;
- int b = 0.1;
+ double b = 0.1;
}
More specifically, the program takes as input a set of pairs (source code line, source code file) and outputs, for each pair, the predicted line number of the line to be replaced in the initial source code file.
The competition is organized by KTH Royal Institute of Technology, Stockholm, Sweden. The organization team is Zimin Chen and Martin Monperrus.
To get news about CodRep and be informed about the next edition, register to the CodRep mailing list: codrep+subscribe@googlegroups.com
Here is the current CodRep ranking based on Dataset5 (lower score is better). The official track refers to results obtained by the Oct 14th 2018 deadline; the open track covers results from Oct 14th 2018 to the present. For the open track, since Dataset5 is now public, the tool/model may overfit the data.
# | Team (Institution/Company) | Score (lower is better) | Tool/Source |
---|---|---|---|
1 | University of Wisconsin--Madison & Microsoft Research (open track) | 0.07180536507565463 | source |
2 | Inria (open track) | 0.0722766571799 | tool |
3 | University of Wisconsin--Madison & Microsoft Research (official track) | 0.07747553105298915 | tool |
4 | KAIST, South Korea (official track) | 0.079663531979 | tool |
5 | Universidad Central "Marta Abreu" de Las Villas (official track) | 0.08577749683758787 | tool |
6 | JetBrains Research, HSE (official track) | 0.10971915848 | tool |
7 | Ericsson & Rise (official track) | 0.114314056641 | |
8 | source{d} (official track) | 0.14120935716477273 | |
The code prediction task was proposed in this paper; it is not the initial competition task. Using Dataset5 as the testing dataset, the results are:
The official ranking was computed based on a hidden dataset, which was neither public nor part of already published datasets. In order to maintain integrity, the hash or the encrypted version of the hidden dataset was uploaded beforehand (commit b8801401).
The provided data are in `Datasets/.../Tasks/*.txt`. The txt files are meant to be parsed by competing programs. Their format is as follows; each file contains:
{Code line to insert}
\newline
{The full program file}
For instance, let's consider this example input file, called `foo.txt`:
double b = 0.1;
public class test{
int a = 1;
int b = 0.1;
}
In this example, `double b = 0.1;` is the code line to be added somewhere in the file in place of another line. For such an input, the competing programs output for instance `foo.txt 3`, meaning that line 3 (`int b = 0.1;`) is replaced with the new code line `double b = 0.1;`.
To train the system, the correct answer for all input files is given in the folder `Datasets/.../Solutions/*.txt`, e.g. the correct answer to `Datasets/Dataset1/Tasks/1.txt` is in `Datasets/Dataset1/Solutions/1.txt`.
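As an illustration, here is a minimal Python sketch for reading one task file together with its solution. The paths are just examples, and the sketch assumes, per the format above, that the first line of a task file is the code line to insert and the rest is the program file:

```python
from pathlib import Path

def read_task(task_path):
    # The first line is the code line to insert; the remainder is the
    # full program file (see the format description above).
    text = Path(task_path).read_text(encoding="utf-8")
    code_line, _, program = text.partition("\n")
    return code_line, program

def read_solution(solution_path):
    # A solution file contains the correct line number (1-indexed).
    return int(Path(solution_path).read_text(encoding="utf-8").strip())

# Example pairing of a task with its solution (paths are illustrative).
code_line, program = read_task("Datasets/Dataset1/Tasks/1.txt")
correct_line = read_solution("Datasets/Dataset1/Solutions/1.txt")
```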
The data used in the competition is taken from real commits in open-source projects. For a number of different projects, we have analyzed all commits and extracted all one-line replacement changes. We have further filtered the data based on the following criteria (best effort; a rough illustration follows the list):
- Only source code files are kept (Java files in dataset00)
- Comment-only changes are discarded (e.g. replacing `// TODO` with `// Fixed`)
- Inserted or removed lines are not empty lines, and are not space-only changes
- Only one replaced code line in the whole file
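For illustration only, the criteria above roughly correspond to a predicate such as the following sketch; this is not the organizers' actual extraction tooling, and the function and parameter names are made up for this example:

```python
def keep_change(old_line, new_line, file_name, replaced_lines_in_file):
    # Rough, best-effort approximation of the filtering criteria above.
    if not file_name.endswith(".java"):                # only source code files
        return False
    if replaced_lines_in_file != 1:                    # only one replaced line per file
        return False
    if not old_line.strip() or not new_line.strip():   # no empty lines
        return False
    if old_line.split() == new_line.split():           # no space-only changes
        return False
    # Comment-only changes (e.g. "// TODO" -> "// Fixed") are discarded;
    # a crude check: both sides are pure line comments.
    if old_line.strip().startswith("//") and new_line.strip().startswith("//"):
        return False
    return True
```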
The datasets used in this competition are from:
Main Statistics about the data:
Directory | Total #diffs | Lines of code (LOC) |
---|---|---|
Dataset1/ | 3858 | 2056900 |
Dataset2/ | 10088 | 5388282 |
Dataset3/ | 15326 | 627593 |
Dataset4/ | 10431 | 2308279 |
Dataset5/ | 18366 | 2785599 |
Total | 58069 | - |
To play in the competition, your program takes as input a folder name; that folder contains the input data files (in the format explained above).
$ your-predictor Files
Your program outputs on the console, for each task, the predicted line number. Warning: by convention, line numbers start from 1 (and not 0). If no prediction is made for a certain task (i.e. no `<path> <line number>` line is output), you will receive the maximum loss (which is 1) for that task; more information about this in the Loss function section below.
<Path1> <line number>
<Path2> <line number>
<Path3> <line number>
...
For example:
/Users/foo/bar/CodRep-competition/Datasets/Dataset1/Tasks/1.txt 42
/Users/foo/bar/CodRep-competition/Datasets/Dataset1/Tasks/2.txt 78
/Users/foo/bar/CodRep-competition/Datasets/Dataset1/Tasks/3.txt 30
...
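As a sketch, a predictor following this protocol could look like the following; it naively predicts line 1 for every task, in the spirit of the `guessFirst.py` baseline described below, and the file name `my_predictor.py` is just an example:

```python
import sys
from pathlib import Path

def predict(code_line, program):
    # Placeholder prediction: always line 1 (line numbers are 1-indexed).
    # Replace this with a real model.
    return 1

def main(task_folder):
    # Print one "<path> <line number>" prediction per task file.
    for task_path in sorted(Path(task_folder).glob("*.txt")):
        text = task_path.read_text(encoding="utf-8")
        code_line, _, program = text.partition("\n")
        print(f"{task_path.resolve()} {predict(code_line, program)}")

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python my_predictor.py Datasets/Dataset1/Tasks
```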
You can evaluate the performance of your program by piping the output to `Baseline/evaluate.py`, for example:
your-program Files | python evaluate.py
The output of `evaluate.py` will be:
Total files: 15463
Average line error: 0.988357635773 (the lower, the better)
Recall@1: 0.00750177843885 (the higher, the better)
For evaluating specific datasets, use the [-d] or [-datasets=] option and specify the paths to the datasets. The default behaviour is to evaluate on all datasets. The paths must be absolute, and multiple paths should be separated by `:`, for example:
your-program Files | python evaluate.py -d /Users/foo/bar/CodRep-competition/Datasets/Dataset1:/Users/foo/bar/CodRep-competition/Datasets/Dataset2
Explanation of the output of `evaluate.py`:
- Total files: the number of prediction tasks in the datasets
- Average line error: a measurement of the errors of your predictions, as defined in the Loss function section below. This is the only measure used to win the competition
- Recall@1: the percentage of predictions where the correct answer is in your top 1 predictions. As such, Recall@1 is the percentage of perfect predictions. We give the recall because it is easily understandable; however, it is not suitable for the competition itself, because it does not have the right properties (explained in the Loss function section below)
The average line error is a loss function output by `evaluate.py`; it measures how well your program performs at predicting the lines to be replaced. The lower the average line error, the better your predictions are.
The loss function for one prediction task is `tanh(abs({correct line} - {predicted line}))`. The average line error is the average of this loss over all tasks.
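For concreteness, the scoring logic can be reproduced with a few lines of Python; this is a sketch of the formula above, not the `evaluate.py` source itself:

```python
import math

def line_loss(correct_line, predicted_line):
    # Loss for one task: 0 for a perfect prediction, approaching 1
    # as the predicted line gets far from the correct one.
    return math.tanh(abs(correct_line - predicted_line))

def average_line_error(pairs):
    # Average of the per-task losses over all (correct, predicted) pairs.
    return sum(line_loss(c, p) for c, p in pairs) / len(pairs)

# Example: a perfect, an almost-perfect, and a far-off prediction.
print(average_line_error([(42, 42), (42, 43), (42, 400)]))
```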
This loss function is designed with the following properties in mind:
- There is 0 loss when the prediction is perfect
- There is a bounded and constant loss even when the prediction is far away
- Before the bound, the loss is logarithmic
- A perfect prediction is better, but only a small penalty is given to almost-perfect ones (in our context, some code line replacements are indeed insensitive to the exact insertion location)
- The loss is symmetric, continuous and differentiable (except at 0)
- Easy to understand and to compute
We note that Recall@1 does not comply with all those properties.
We provide 5 dumb systems to illustrate how to parse the data and to provide a baseline performance. These are:
- `guessFirst.py`: always predicts the first line of the file
- `guessMiddle.py`: always predicts the line in the middle of the file
- `guessLast.py`: always predicts the last line of the file
- `randomGuess.py`: predicts a random line in the file
- `maximumError.py`: predicts the worst case, i.e. the farthest line from the correct solution
Thanks to the design of the loss function, `guessFirst.py`, `guessMiddle.py`, `guessLast.py` and `randomGuess.py` have errors of the same order of magnitude, therefore their Average line error values are comparable.
Registered participants:
- JetBrains Research, HSE
- Microsoft Research
- The University of Edinburgh
- Inria
- Siemens Technology and Services Private Limited
- source{d}
- Universidad Central "Marta Abreu" de Las Villas
- IPT Sao Paulo
- Singapore Management University
- Ericsson & Rise
- Otto-von-Guericke University Magdeburg
- KAIST, South Korea
- University of Wisconsin--Madison & Microsoft Research
Dates:
- Official competition start: April 14th 2018.
- Submission deadline for intermediate ranking: July 4th 2018.
- Announcement of the intermediate ranking: July 14th 2018.
- Final submission deadline: Oct. 4th 2018.
- Announcement of the final ranking & end of the competition: Oct 14th 2018.
The official final ranking based on Dataset5 (Oct 14th 2018)
# | Team (Institution/Company) | Score | Tool |
---|---|---|---|
(1)* | Inria | 0.0722766571799 | tool |
1 | University of Wisconsin--Madison & Microsoft Research | 0.07747553105298915 | tool |
2 | KAIST, South Korea | 0.079663531979 | tool |
3 | Universidad Central "Marta Abreu" de Las Villas | 0.08577749683758787 | tool |
* Conflict of interest
Intermediate ranking based on Dataset4 (July 4th 2018)
Position | Team name | Score on Dataset4 |
---|---|---|
#1 | Thomas Durieux (INRIA) | 0.0834200326357 |
#2 | Gabin An & Shin Yoo (KAIST) | 0.0884776175201 |
#3 | Jesper Derehag & Olof Mogren (Ericsson & RISE) | 0.09253418191163333 |
#4 | Sebastian Nielebock, Robert Heumüller, Kevin Michael Schott, Frank Ortmeier (Otto-von-Guericke University Magdeburg, Germany) | 0.11869677510133332 |