Support ROS Iron #8
Conversation
Minor change requested.
I've switched the kinematics plugin to the regular KDL plugin in the moveit_config package for the irb1300, so hopefully CI passes now. The cached version can be configured on a per-deployment basis while ensuring MoveIt is built from source as you have documented.
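As a rough illustration only (not the actual change), a sanity check that the irb1300 `kinematics.yaml` now points every group at the stock KDL solver could look like the following; the file path and the flat group-to-parameters layout are assumptions:

```python
import yaml

# Hypothetical path to the irb1300 moveit_config kinematics file.
KINEMATICS_YAML = "config/kinematics.yaml"

with open(KINEMATICS_YAML) as f:
    kinematics = yaml.safe_load(f)

# Every planning group that defines a solver should use the stock KDL plugin.
for group, params in kinematics.items():
    if isinstance(params, dict) and "kinematics_solver" in params:
        assert params["kinematics_solver"] == "kdl_kinematics_plugin/KDLKinematicsPlugin", group
```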
* Use regular abb control launch
* Interpolate instead of plan
* Increase timeout
I think we are good to merge! Thanks for identifying all these changes.
Turns out CI kept getting killed midway due to the 30-minute timeout configured in the workflow yaml, which would terminate the job during the build or partway through testing. After increasing the timeout, I observed that CI still occasionally fails because one of the workcells fails to register itself with the system orchestrator. I could reproduce this locally and saw that zenoh processes from previous tests were still lingering. I added explicit commands to terminate these processes in the test files, and CI now passes more deterministically.
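For reference, a minimal sketch of that kind of cleanup, assuming the leftover processes can be matched by the name `zenoh_bridge_dds` (the exact process names and where the hook lives in the test files are assumptions here, not the actual change):

```python
import subprocess


def kill_lingering_zenoh_bridges() -> None:
    """Best-effort cleanup of zenoh bridge processes left over from earlier tests."""
    # pkill exits non-zero when nothing matches, so don't treat that as a failure.
    subprocess.run(["pkill", "-f", "zenoh_bridge_dds"], check=False)
```

Calling something like this at the start of each test's `asyncSetUp()` keeps a stale bridge from a previously killed run from interfering with workcell registration.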
class ParallelWoTest(NexusTestCase):
    @RosTestCase.timeout(60)
    async def asyncSetUp(self):
        # todo(YV): Find a better fix to the problem below.
        # zenoh-bridge was bumped to 0.7.2 as part of the upgrade to
I saw the same behavior with the previous zenoh version too. That's why I made the zenoh launch file launch the bridge directly instead of via `ros2 run`: I thought the leftover processes might have been from `ros2 run` leaving them open.
Ah I see. Sounds like the bridge may not be handling termination signals correctly?
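One quick way to probe that hypothesis, purely as a sketch (it assumes the bridge binary is on the `PATH` as `zenoh_bridge_dds`), is to spawn the bridge, send it SIGTERM, and see whether it exits on its own:

```python
import signal
import subprocess
import time

proc = subprocess.Popen(["zenoh_bridge_dds"])
time.sleep(2.0)  # give the bridge a moment to start up
proc.send_signal(signal.SIGTERM)
try:
    # A well-behaved process should exit promptly after SIGTERM.
    proc.wait(timeout=5.0)
    print(f"bridge exited with code {proc.returncode}")
except subprocess.TimeoutExpired:
    # SIGTERM was ignored; fall back to SIGKILL.
    proc.kill()
    print("bridge ignored SIGTERM and had to be killed")
```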
This migrates the nexus packages to ROS Iron.
It requires Yadunund/abb_ros2#2
There is an issue in upstream moveit2 that still requires building it from source via `nexus/thirdparty.repos` to get the `nexus_integration_tests` tests to pass. The fix has already been backported (moveit/moveit2#2300), so it will be resolved when MoveIt makes a new release into Iron (moveit/moveit2#2327).

I bumped the version of `zenoh_bridge_dds` to `0.7.2-rc` and used the new `ament_vendor()` macro to vendor it. Previously nexus was using pre-built binaries from a fork; this PR now builds it from source.

I removed the call to `ros2 run` in `zenoh_bridge.launch.py`. I was seeing a lot of `ros2 run` and `zenoh_bridge_dds` processes left over from failed test runs, and I thought getting rid of the intermediate process would fix it. I'm not sure if it did, but it's at least one fewer process running. I'm not sure this change is strictly necessary for Iron, so I'd be happy to move it to another PR.
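For illustration, a minimal sketch of launching the bridge directly from a ROS 2 Python launch file rather than going through `ros2 run`; the executable name and arguments are assumptions, not the actual contents of `zenoh_bridge.launch.py`:

```python
from launch import LaunchDescription
from launch.actions import ExecuteProcess


def generate_launch_description():
    return LaunchDescription([
        # Spawn the vendored bridge binary directly, so there is no intermediate
        # `ros2 run` process sitting between the launch system and the bridge.
        ExecuteProcess(
            cmd=["zenoh_bridge_dds"],
            output="screen",
        ),
    ])
```

With `ExecuteProcess` the launch system owns the bridge's PID directly, so shutdown signals reach the bridge itself rather than a wrapper process.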