
Remove TODOs from the code base, use issue tracker instead #679

Merged 1 commit from remove-todos into master on Dec 14, 2024

Conversation

PGijsbers (Collaborator)

No description provided.

codecov-commenter commented Dec 14, 2024

Codecov Report

Attention: Patch coverage is 50.00000% with 2 lines in your changes missing coverage. Please review.

Please upload report for BASE (master@98bf554). Learn more about missing BASE report.

Files with missing lines   Patch %   Lines
amlb/datautils.py            0.00%   1 Missing ⚠️
amlb/resources.py           50.00%   1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##             master     #679   +/-   ##
=========================================
  Coverage          ?   71.00%           
=========================================
  Files             ?       55           
  Lines             ?     6832           
  Branches          ?        0           
=========================================
  Hits              ?     4851           
  Misses            ?     1981           
  Partials          ?        0           


@@ -1621,52 +1618,3 @@ def _ec2_startup_script(self, instance_key, script_params="", timeout_secs=-1):
if timeout_secs > 0
else rconfig().aws.max_timeout_seconds,
)


class AWSRemoteBenchmark(Benchmark):
PGijsbers (Collaborator, Author)

I think the idea was to create an AWS benchmark that saves results during execution, to avoid losing data when one task errors while running multiple tasks? I'm not entirely sure what Seb meant. Running multiple tasks is assumed to be sequential (in parallel we can't guarantee fair resource usage), which also means extra cleanup between runs and so on. There hasn't been a request for this in the six years since it was written, and since I'm not 100% sure what was meant, this is not converted to an issue.
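
(For illustration only: a minimal sketch of what "saving results during execution" could mean, i.e. appending each task's scores to disk as soon as the task finishes so a later failure does not lose earlier results. The `TaskResult` fields, the `run_task` callable, and the results file layout below are assumptions made for the sketch, not the amlb API.)

```python
import csv
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class TaskResult:
    # Illustrative fields only; not the actual amlb result schema.
    task: str
    fold: int
    metric: str
    score: float

def run_tasks_incrementally(tasks, run_task, results_file: Path) -> None:
    """Run tasks sequentially, appending each result to disk immediately.

    If a later task raises, everything completed so far is already persisted,
    so a single failure does not lose the earlier scores.
    """
    fieldnames = ["task", "fold", "metric", "score"]
    write_header = not results_file.exists()
    with results_file.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        for task in tasks:
            try:
                result: TaskResult = run_task(task)
            except Exception as exc:
                # Record the failure but keep the already-saved results intact.
                print(f"Task {task} failed: {exc}")
                continue
            writer.writerow(asdict(result))
            fh.flush()  # persist right away, not only when the file is closed
```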

@@ -136,7 +135,6 @@ def _run():
else rconfig().seed,
)
)
# TODO: would be nice to reload generated scores and return them
PGijsbers (Collaborator, Author)

While that would be consistent with the other job results, results currently get printed and saved to the logs just fine, so I am not 100% sure what the added value would be. I might be missing something; I didn't take a close look.
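
(Again for illustration: a sketch of what the removed TODO might have intended, namely reloading the scores already written to the output directory and returning them to the caller. The `results.csv` file name is an assumption for the sketch, not a description of amlb's actual output layout.)

```python
from pathlib import Path

import pandas as pd

def reload_scores(output_dir: str) -> pd.DataFrame:
    """Reload scores that were written to disk during the run.

    Assumes a 'results.csv' in the output directory; the name and columns
    would need to match whatever the benchmark actually writes.
    """
    scores_file = Path(output_dir) / "results.csv"
    if not scores_file.exists():
        raise FileNotFoundError(f"No scores found at {scores_file}")
    return pd.read_csv(scores_file)
```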

PGijsbers merged commit 75685ee into master on Dec 14, 2024 (9 of 38 checks passed).
PGijsbers deleted the remove-todos branch on December 14, 2024, 16:53.