Migration to depthcharge v0.4.8 #350
base: dev
Conversation
Thank you for this excellent PR 🎉
As a note for others, this PR also adds a few features and changes some functionality. Some that I noticed:
- Checkpoints now have useful file names.
- Early stopping can be enabled.
- The learning rate is logged.
- Model precision can now be changed.
- Gradient clipping and related gradient options can be configured. These are very useful for training stability (a rough configuration sketch follows this comment).
I've requested a few, mostly small changes. The biggest thing we need to address now is updates to the unit tests, so that all of our CI checks pass.
@wsnoble, @bittremieux, @melihyilmaz - with a change as big as this one, you should all take it for a spin and make sure we didn't miss anything!
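For context on the new training options, here is a minimal sketch of how precision and gradient clipping are typically wired into a PyTorch Lightning Trainer. The config attribute names below (precision, gradient_clip_val, gradient_clip_algorithm) are illustrative assumptions, not necessarily the keys used in Casanovo's actual configuration.

# Sketch only: the config attribute names are hypothetical, not
# necessarily Casanovo's actual configuration keys.
import pytorch_lightning as pl

trainer = pl.Trainer(
    precision=config.precision,                               # e.g. "32-true" or "16-mixed"
    gradient_clip_val=config.gradient_clip_val,               # e.g. 1.0, helps training stability
    gradient_clip_algorithm=config.gradient_clip_algorithm,   # "norm" or "value"
    callbacks=callbacks,                                       # checkpointing, LR monitor, etc.
)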
Hi @wfondrie, thanks for the comments :) Are you taking care of the updates to the unit tests so that the CI checks pass, or is it better if I have a look? I think the documentation probably needs to be updated a bit, as well as the download of the latest weights for prediction, as the old ones are not compatible anymore. Best,
Yes, we'll cut a new release v5.x.x for this implementation, as these are breaking changes. We'll have to train a new model, but with the new major version the downloading code won't get confused.
Some of the first fixes might be relatively straightforward, with some renamed modules that have to be updated in the unit tests. If you have some bandwidth to look at it, feel free to do so.
casanovo/denovo/model_runner.py (Outdated)
# Configure early stopping
if config.early_stopping_patience is not None:
    self.callbacks.append(
        EarlyStopping(
Removing for now. This will be reintroduced into casanovo/dev by a future PR if it is decided to add early stopping functionality to the mainline Casanovo release.
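For reference, a filled-in version of the callback being removed might look roughly like the sketch below; the monitored metric name ("valid_loss") is a placeholder assumption, since it depends on what the model actually logs during validation.

# Rough sketch: "valid_loss" is a placeholder metric name, not necessarily
# what the Casanovo model logs during validation.
from pytorch_lightning.callbacks import EarlyStopping

self.callbacks.append(
    EarlyStopping(
        monitor="valid_loss",                      # validation metric to watch
        patience=config.early_stopping_patience,   # epochs without improvement before stopping
        mode="min",                                # stop when the loss stops decreasing
    )
)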
casanovo/denovo/model_runner.py (Outdated)
# Configure learning rate monitor
if config.tb_summarywriter is not None:
    self.callbacks.append(
        LearningRateMonitor(logging_interval="step", log_momentum=True)
Same thing here; removing for now, as this will be reintroduced in an open PR.
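As a side note, these callbacks only take effect once the collected list is handed to the Lightning Trainer; here is a minimal sketch, where mapping tb_summarywriter to the logger's save_dir is an assumption.

# Sketch: LearningRateMonitor writes the learning rate to whichever logger
# the Trainer is configured with, e.g. a TensorBoard logger.
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir=config.tb_summarywriter)  # assumed mapping
trainer = pl.Trainer(callbacks=self.callbacks, logger=logger)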
assert os.path.basename(mgf_small.name) not in out_writer._run_map
assert os.path.abspath(mgf_small.name) in out_writer._run_map
assert mgf_small.name in out_writer._run_map
assert os.path.abspath(mgf_small.name) not in out_writer._run_map
I updated this test to reflect the current behavior of MztabWriter, but it might be worth looking into whether we want to change the behavior of MztabWriter, especially if depthcharge is updated to include the full path in the spectrum dataloaders.
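To make the behavior the updated assertions check a bit more concrete, here is a small standalone illustration of the keying difference; the writer itself is not involved, and the path is a hypothetical example.

# Illustration only: the run map is keyed by the spectrum path exactly as
# it was passed in, not by its absolute form.
import os

run_map = {}
spectrum_path = "tests/data/small.mgf"   # hypothetical path, as given to the writer
run_map[spectrum_path] = 1

assert spectrum_path in run_map
assert os.path.abspath(spectrum_path) not in run_map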
It looks like there might be a bug here. In particular, this loop, for aa in ([None] if finished_beams[i] else aa_neg_mass_idx):, which does the early termination check, is never entered if the beam isn't finished and there is nothing in aa_neg_mass_idx.
I've initialized the model's tokenizer with the residues from the tiny config as a workaround to get the …
Yes, from a quick check I think you're right. In the current version, there's always at least …
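A minimal standalone illustration of the suspected issue: when the beam is not finished and the candidate index list is empty, the loop body, and therefore the early-termination check inside it, never runs. The values below are placeholders.

# Toy example of the loop pattern quoted above, with placeholder values.
finished = False
aa_neg_mass_idx = []           # e.g. no residues with a negative mass offset

ran_check = False
for aa in ([None] if finished else aa_neg_mass_idx):
    ran_check = True           # stands in for the early-termination check

assert ran_check is False      # the check is silently skipped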
beam = model.n_beams  # S
model.decoder.reverse = True
It doesn't look like the current PeptideDecoder supports the reverse option.
The tests still seem to fail on GitHub, in contrast to the latest commit message. @Lilferrit, is this expected behavior?
Best I can tell from the GitHub Actions logs, the tests fail on GitHub due to the Pylance …
Yes. Please also open an issue and link it to the one in DepthCharge to track this.
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files

@@            Coverage Diff            @@
##              dev     #350      +/-   ##
==========================================
- Coverage   94.37%   88.85%     -5.53%
==========================================
  Files          13       14         +1
  Lines        1102     1202       +100
==========================================
+ Hits         1040     1068        +28
- Misses         62      134        +72

☔ View full report in Codecov by Sentry.
Done, all of the tests on GitHub pass now, and they seem to run faster than they have historically as well.
Force-pushed from 943dda4 to 6ab3397.
I overwrote the previous merge with dev with the latest rebase to make the diff more representative of any new functionality.
This version of Casanovo is now based on depthcharge v0.4.8 instead of v0.2.3.