Slow simulations when apertures enabled #151
Comments
Hello, I think @sergey-tomin is currently on holiday, and he wrote that code I believe, but all I can suggest (at least until he is back) is that you profile the code and see where the slowdown could be. If there is a problem with the implementation then I can try to help you optimise it. I don't think it should be 100x slower, no.
Did you profile the code?
Hi @st-walker, No, I have had to prioritize completely different things lately (and will have to for the next little while), and I also don't know how to profile the code since I'm terrible at programming.
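For reference, a minimal sketch of how a tracking run can be profiled with Python's built-in cProfile; here lat, p_array and navi are assumed to be the MagneticLattice, ParticleArray and Navigator from an existing Ocelot script:

    import cProfile
    import pstats

    from ocelot import *  # provides track(), MagneticLattice, Navigator, ...

    # lat, p_array and navi are assumed to be set up elsewhere in the script.
    profiler = cProfile.Profile()
    profiler.enable()
    tws_track, p_array = track(lat, p_array, navi)
    profiler.disable()

    # Print the 20 functions with the largest cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)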
Hi, the use of a physics process is also correct.
Hi, I didn't forget to activate them. I even tried putting the activation call in several different locations, even at multiple locations at the same time, but with no effect. It's unclear what's going wrong in that case. I'd prefer to have the apertures as part of the lattice and not as a physics process. The beamline section is about 8 m long, and I was originally running with a 0.01 m step since the optics can vary quite a bit over short distances in that section depending on settings. I increased that to 0.1 m to reduce the time. Once I have time to look at this again, I'm going to try 0.5 m and see if I see a difference.
How do you define the effect? Maybe you do not have losses? Try using very small apertures.
Hi, Yes, I tried reducing the apertures to something like 50-100 um, but still saw no effect. When I used the same aperture size but enabled apertures as a physics process, I lost ~70% of the particles.
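For context, a minimal sketch of the lattice-element approach under discussion, assuming Aperture elements are already placed in the cell and the usual Ocelot names (lat, navi, p_array) are used:

    from ocelot import *

    # `cell` is assumed to already contain Aperture(...) elements at the
    # positions where losses should be checked; `p_array` is assumed to be
    # a ParticleArray prepared beforehand.
    lat = MagneticLattice(cell)

    navi = Navigator(lat)
    navi.unit_step = 0.1  # tracking step in [m]

    # Turn the Aperture elements of the lattice into aperture physics
    # processes; without this call the Aperture elements have no effect.
    navi.activate_apertures()

    tws_track, p_array = track(lat, p_array, navi)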
I was having the exact same problem and found the bug in the source code. If you explicitly specify the elliptic type of your aperture as type='ellipt', it works:

    ap = Aperture(xmax=r, ymax=r, type='ellipt')

@sergey-tomin the problem is in the activate_apertures() function in ocelot/cpbd/navi.py:

    if elem.type == "rect":
        ap = RectAperture(xmin=-elem.xmax + elem.dx, xmax=elem.xmax + elem.dx,
                          ymin=-elem.ymax + elem.dy, ymax=elem.ymax + elem.dy)
        self.add_physics_proc(ap, elem, elem)
    elif elem.type == "ellipt":
        ap = EllipticalAperture(xmax=elem.xmax, ymax=elem.ymax,
                                dx=elem.dx, dy=elem.dy)
        self.add_physics_proc(ap, elem, elem)

elif elem.type == "ellipt" should be elif elem.type == "elliptical", or several options should be allowed, to stay consistent with the definition in the Aperture element class in ocelot/cpbd/elements/aperture.py:

    class Aperture(OpticElement):
        """
        Aperture
        xmax - half size in horizontal plane in [m],
        ymax - half size in vertical plane in [m],
        type - "rect" or "elliptical".
        """

When defining the aperture as ap = Aperture(xmax=r, ymax=r, type='elliptical'), calling navi.activate_apertures() does exactly nothing, as Jonas and I experienced. If you let me know how the workflow for bug fixes works here, I will be happy to do the PRs myself in future.
Note: my comment only relates to the Aperture element not working, not to the simulation speed. @jonasbjorklundsvensson, if you have a working setup now and are still concerned about the simulation speed, you probably don't need to apply the aperture throughout the whole beamline. You can add apertures only at the specific positions of interest, e.g. navi.add_physics_proc(ap, sequence[2], sequence[2])
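As a hedged sketch of that tip (the element index, the 1 mm half-apertures and the import path are illustrative assumptions):

    from ocelot.cpbd.physics_proc import EllipticalAperture

    # lat and navi are assumed to be an existing MagneticLattice and Navigator.
    # Attach an elliptical aperture only at one element of interest instead of
    # activating apertures along the whole beamline.
    elem = lat.sequence[2]
    ap = EllipticalAperture(xmax=1e-3, ymax=1e-3, dx=0.0, dy=0.0)
    navi.add_physics_proc(ap, elem, elem)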
Hi @SchroederSa, Thank you for the comment! You're right about the issue with the docstring. When I was programming this, I intended to shorten the type names to "rect" and "ellipt" and have examples that use these forms. However, I mistakenly wrote "elliptical" in the docstring, which is incorrect. I'll correct this issue. Sergey.
@SchroederSa thanks for the tip! I eventually got it working (I forget exactly how since it was some time ago), but great that you found a bug and that it's getting fixed :) -Jonas
Hi,
I'm trying to run simulations with apertures enabled to simulate some expected particle loss in the beam pipe, but I'm encountering some unexpected behavior. I'm running the latest version of Ocelot from the dev branch.
Firstly, I tried following the tutorial in https://nbviewer.org/github/ocelot-collab/ocelot/blob/dev/demos/ipython_tutorials/small_useful_features.ipynb for the implementation of the apertures. This approach seems to be deprecated, as defining the apertures in this way does absolutely nothing in the simulations.
When I instead define the apertures as a physics process (roughly as in the sketch below), the simulation time is 86 times (!!!) longer, going from about 0.12 s to 10.4 s with a navigator unit step of 1 cm, and 11 times longer with 10 cm steps. The optics functions in this section vary relatively quickly since the quads are quite strong and closely spaced, so I'd rather not use step sizes larger than 10 cm.
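Roughly, the setup looks like the sketch below; this is illustrative rather than my exact script, and the aperture half-sizes are placeholders:

    from ocelot import *
    from ocelot.cpbd.physics_proc import RectAperture

    # `cell` and `p_array` are assumed to be the lattice cell and ParticleArray
    # prepared elsewhere in the script.
    lat = MagneticLattice(cell)

    navi = Navigator(lat)
    navi.unit_step = 0.01  # 1 cm navigator step

    # One rectangular aperture physics process covering the whole section.
    ap = RectAperture(xmin=-5e-3, xmax=5e-3, ymin=-5e-3, ymax=5e-3)
    navi.add_physics_proc(ap, lat.sequence[0], lat.sequence[-1])

    tws_track, p_array = track(lat, p_array, navi)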
This is with a test beam of 1e4 particles. I eventually want to run with many times more particles (up to about 1e6 or so), but am getting a little bit worried about the length of these simulations as I will be doing parameter scans at many different working points.
Is this large leap in simulation time expected? Am I still implementing the apertures the wrong way, or does the cost perhaps scale well with particle number (so that adding more particles adds little extra time)?
Best regards,
Jonas