CI: move to GitHub Actions
tomMoral committed Dec 2, 2024
1 parent c343ef6 commit 1be766a
Showing 3 changed files with 106 additions and 54 deletions.
75 changes: 75 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,75 @@
name: Test
on:
push:
branches:
- main
pull_request:
branches:
- main

# Cancel in-progress workflows when pushing
# a new commit on the same branch
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
test_kit:
name: Test
runs-on: ubuntu-latest
env:
CONDA_ENV: 'testcondaenv'

defaults:
run:
# Need to use this shell to get conda working properly.
# See https://github.com/marketplace/actions/setup-miniconda#important
shell: 'bash -l {0}'

steps:
- uses: actions/checkout@v3
- name: Setup Conda
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV }}
python-version: "3.10"
# Use miniforge so that conda-forge is the only default channel.
miniforge-version: latest

- name: Install the dependencies
run: |
pip install -r requirements.txt
- name: 'Run the tests'
run: ramp-test

flake8:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: "3.10"

- name: Install dependencies
run: pip install flake8
- name: Flake8 linter
run: flake8 .

nbconvert:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: "3.10"

- name: Install dependencies
run: pip install seaborn nbconvert[test]

- name: Check the starting-kit notebook
run: jupyter nbconvert --execute variable_stars_starting_kit.ipynb --to html --ExecutePreprocessor.kernel_name=$IPYTHON_KERNEL
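The `flake8` job above can be reproduced locally before pushing. A minimal sketch, run here on a throwaway file (the `/tmp/clean_example.py` path is hypothetical; in the repository you would simply run `flake8 .`, exactly as the workflow does):

```shell
# Install the same linter the workflow installs.
python -m pip install --quiet flake8

# Create a trivially clean file to lint (stand-in for the repo's sources).
printf 'x = 1\n' > /tmp/clean_example.py

# Same check as the "Flake8 linter" step; exits non-zero on style errors.
flake8 /tmp/clean_example.py && echo "lint passed"
```

Running the `test_kit` and `nbconvert` jobs locally works the same way: install the listed dependencies, then invoke `ramp-test` or `jupyter nbconvert --execute` from the repository root.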
20 changes: 0 additions & 20 deletions .travis.yml

This file was deleted.

65 changes: 31 additions & 34 deletions template_starting_kit.ipynb
@@ -41,7 +41,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -70,7 +70,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Submission\n",
"# Submission format\n",
"\n",
"Here, you should describe the submission format. This is the format the participants should follow to submit their predictions on the RAMP platform.\n",
"\n",
@@ -83,72 +83,69 @@
"source": [
"## The pipeline workflow\n",
"\n",
"The input data are stored in a dataframe. To go from a dataframe to a numpy array we will use a scikit-learn column transformer. The first example we will write will just consist in\n",
"selecting a subset of columns we want to work with."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Testing using a scikit-learn pipeline"
"The input data are stored in a dataframe. To go from a dataframe to a numpy array we will use a scikit-learn column transformer. The first example we will write will just consist in selecting a subset of columns we want to work with."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import problem\n",
"from sklearn.model_selection import cross_val_score\n",
"# %load submissions/starting_kit/estimator.py\n",
"\n",
"X_df, y = problem.get_train_data()\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"scores = cross_val_score(get_estimator(), X_df, y, cv=2, scoring='accuracy')\n",
"print(scores)"
"\n",
"def get_estimator():\n",
" pipe = make_pipeline(\n",
" StandardScaler(),\n",
" LogisticRegression()\n",
" )\n",
"\n",
" return pipe\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submission\n",
"\n",
"To submit your code, you can refer to the [online documentation](https://paris-saclay-cds.github.io/ramp-docs/ramp-workflow/stable/using_kits.html)."
"## Testing using a scikit-learn pipeline"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[0;31mInit signature:\u001b[0m\n",
"\u001b[0mrw\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mscore_types\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mBalancedAccuracy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'balanced_accuracy'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mprecision\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0madjusted\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mDocstring:\u001b[0m <no docstring>\n",
"\u001b[0;31mFile:\u001b[0m ~/Work/ramp/ramp-workflow/rampwf/score_types/balanced_accuracy.py\n",
"\u001b[0;31mType:\u001b[0m type\n",
"\u001b[0;31mSubclasses:\u001b[0m "
"[0.97222222 0.96527778 0.97212544 0.95121951 0.96167247]\n"
]
}
],
"source": [
"import rampwf as rw\n",
"rw.score_types.BalancedAccuracy?"
"import problem\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"X_df, y = problem.get_train_data()\n",
"\n",
"scores = cross_val_score(get_estimator(), X_df, y, cv=5, scoring='accuracy')\n",
"print(scores)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
"source": [
"## Submission\n",
"\n",
"To submit your code, you can refer to the [online documentation](https://paris-saclay-cds.github.io/ramp-docs/ramp-workflow/stable/using_kits.html)."
]
}
],
"metadata": {

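The notebook's markdown describes building the estimator from a scikit-learn column transformer that selects a subset of columns. A minimal self-contained sketch of that idea, using synthetic data and hypothetical column names ("a", "b", "noise") in place of the kit's `problem.get_train_data()`:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def get_estimator():
    # Keep only a subset of the dataframe's columns, as the kit suggests.
    # ("a" and "b" are hypothetical column names for this sketch.)
    select = ColumnTransformer(
        [("kept", "passthrough", ["a", "b"])], remainder="drop"
    )
    return make_pipeline(select, StandardScaler(), LogisticRegression())


# Synthetic stand-in for problem.get_train_data().
rng = np.random.default_rng(0)
X_df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "noise"])
y = (X_df["a"] + X_df["b"] > 0).astype(int)

# Same evaluation pattern as the notebook's testing cell.
scores = cross_val_score(get_estimator(), X_df, y, cv=5, scoring="accuracy")
print(scores)
```

The `ColumnTransformer` step makes the pipeline accept the raw dataframe directly, so participants only swap out the columns and the downstream estimator.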