Add do fgs_plan #140
Conversation
I think we will probably want another level of categorisation than this; maybe put do_fgs into a gridscan module?
Thanks, I think we can maybe keep them even more similar
src/mx_bluesky/plan_stubs/do_fgs.py (Outdated)

    if pre_plans:
        yield from pre_plans()
Should: So in the Hyperion case, all we're doing here is reading the file name from the Eiger. There's no reason we can't do the same in VMXm, is there?
Agreed, but I also think pre_plans gives us a bit more flexibility in case we want to also read other hardware at this point.
I would suggest we don't build in complexity we don't yet need
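The pre_plans hook under discussion can be sketched as below. This is a minimal, hypothetical illustration of the pattern only: the real do_fgs in src/mx_bluesky/plan_stubs/do_fgs.py yields bluesky Msg objects, while here plain strings stand in for messages so the control flow is easy to see. The names hyperion_pre_plans and the message strings are invented for illustration.

```python
from typing import Callable, Generator, Optional

# Stand-in for a bluesky Msg generator; real plans yield Msg objects.
Plan = Generator[str, None, None]


def do_fgs(pre_plans: Optional[Callable[[], Plan]] = None) -> Plan:
    """Run the fast grid scan, optionally running a setup plan first."""
    if pre_plans:
        # Beamline-specific setup, e.g. reading the file name from the Eiger
        yield from pre_plans()
    yield "kickoff_fgs"
    yield "complete_fgs"


def hyperion_pre_plans() -> Plan:
    # Hypothetical Hyperion-specific step referenced in the review above
    yield "read_eiger_filename"
```

Passing no hook (the simpler option the reviewer prefers) just skips straight to the kickoff messages.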
src/mx_bluesky/plan_stubs/do_fgs.py (Outdated)

    if post_plans:
        yield from post_plans()
Should: Similar to above, I think we might be able to:
- Wait for the zocalo stage in Hyperion before calling do_fgs
- Just always read the devices we have in both plans
So the wait for the zocalo stage has the potential to take 2 seconds. I think maybe for the moment we keep the post_plans bit and call it with a wait for stage on Hyperion. I think it should be renamed to during_collection_plans though, as that's what it is.
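The during_collection_plans rename suggested here can be sketched in the same hedged style: the hook runs after kickoff, so a slow step (like the ~2 s zocalo stage wait) overlaps with the scan rather than delaying it. All names and message strings are illustrative, not the real implementation.

```python
from typing import Callable, Generator, Optional

# Stand-in for a bluesky Msg generator; real plans yield Msg objects.
Plan = Generator[str, None, None]


def do_fgs(during_collection_plans: Optional[Callable[[], Plan]] = None) -> Plan:
    """Kick off the grid scan, run optional plans while it is in flight,
    then wait for completion."""
    yield "kickoff_fgs"
    if during_collection_plans:
        # Runs while the scan is collecting, e.g. waiting on the zocalo
        # staging group on Hyperion (hypothetical message name)
        yield from during_collection_plans()
    yield "complete_fgs"


def hyperion_during_collection() -> Plan:
    yield "wait_zocalo_stage_group"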
    from mx_bluesky.i03.parameters.constants import CONST

    def read_hardware_for_zocalo(detector: EigerDetector):
Arguably I could just move all these plans in the same PR. I thought I'd try and separate it out a bit though, otherwise it will quickly spiral.
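For context, a read_hardware_for_zocalo-style stub follows bluesky's create/read/save event-document pattern. The sketch below mimics that shape with strings instead of real Msg objects; the detector argument and message names are assumptions, not the actual code being moved in this PR.

```python
from typing import Generator

# Stand-in for a bluesky Msg generator; real plans yield Msg objects.
Plan = Generator[str, None, None]


def read_hardware_for_zocalo(detector_name: str) -> Plan:
    # Open an event document, read the detector (e.g. the ODIN file name
    # off the Eiger), then save the document -- mirroring bluesky's
    # create/read/save plan-stub pattern.
    yield "create:zocalo_reading"
    yield f"read:{detector_name}"
    yield "save"
```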
Fixes #80 as part of https://github.com/orgs/DiamondLightSource/projects/6/views/1?pane=issue&itemId=73047222. This adds a slightly modified version of the do_fgs plan in Hyperion. Here, you can optionally decide whether or not we need to do extra things, e.g. zocalo. This plan should be usable for VMXm, i03, i04, and hopefully other MX beamlines. The idea of the level 1 folder is for it to be filled with plan stubs, i.e. plans that cannot be run on their own. I guess it could just be renamed to plan_stubs... I renamed it plan_stubs.
To test: