We've seen that reducing the sample rate from 2048 Hz to 512 Hz improves our sensitivity at lower masses. This opens up a few questions/experiments to run:
- What happens if we keep 2048 Hz sampling, but quadruple the length of the convolutions?
- Follow-up ideas from Phil: shrink the number of layers in our network, downsample after the first set of convolutions, or run parallel sets of convolutions with different kernel lengths that get combined at the end (see the sketch after this list).
- Can we lower the sample rate *and* extend the kernel length?
- With a longer kernel length, we likely need a longer integration length to properly recover the event time, which incurs latency. How well do we do if we don't integrate our output? (see the second sketch below)
- What does our sensitivity look like at mass bins outside the four that we usually look at?
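To make the parallel-convolution idea concrete, here's a minimal PyTorch sketch (assuming the repo's models are PyTorch; the module name, channel counts, and kernel lengths below are all illustrative, not values from this repo). Each branch applies a different kernel length to the same 2048 Hz input, so longer kernels can pick up more of the low-frequency, low-mass signal, and the concatenated output is downsampled right after the first set of convolutions:

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Hypothetical first block: parallel convolutions with different
    kernel lengths, combined by concatenation, then downsampled."""

    def __init__(self, in_channels: int = 2, channels: int = 8):
        super().__init__()
        # One branch per kernel length; odd kernels with k // 2 padding
        # keep all branch outputs the same length so they can be stacked.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_channels, channels, kernel_size=k, padding=k // 2)
            for k in (7, 15, 31, 63)
        )
        # Downsample after the first set of convolutions, per the
        # follow-up idea, so deeper layers see an effective 512 Hz.
        self.pool = nn.MaxPool1d(kernel_size=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the parallel branches along the channel dimension.
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.pool(out)

x = torch.randn(1, 2, 2048)  # one second of 2-detector strain at 2048 Hz
print(MultiScaleConv()(x).shape)  # torch.Size([1, 32, 512])
```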
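And a minimal sketch of the integration trade-off (function name and window length are hypothetical, not the repo's actual postprocessing): boxcar-integrating the network's output time series before picking the event time costs one window's worth of latency, while the no-integration alternative just takes the peak of the raw output.

```python
import torch
import torch.nn.functional as F

def integrate(output: torch.Tensor, window: int) -> torch.Tensor:
    """Boxcar-integrate a 1D detection time series over `window` samples."""
    kernel = torch.ones(1, 1, window) / window
    return F.conv1d(output.view(1, 1, -1), kernel).view(-1)

output = torch.randn(512)          # one second of network output at 512 Hz
smoothed = integrate(output, 64)   # 64 samples = 125 ms of added latency
t_integrated = smoothed.argmax()   # event time from the integrated output
t_raw = output.argmax()            # event time with no integration at all
```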
Thanks for opening this up; good to keep track of this.
I was unable to reproduce this result using the new repository (i.e. this one) and the new rejection sampling validation scheme.
The results are on wandb here. You can see that the 2048 Hz run is handily outperforming the 512 Hz run in terms of validation score. I'd be curious for you to try this yourself and see if I went wrong anywhere.