This repository contains a ROS node for object detection and grasp planning. After objects have been taught by name, the program recognizes them and calculates a grasping point for the two-finger gripper of the KUKA youBot.
See the following video for visualization:
For a full environment setup please refer to this document.
The node expects depth image messages under the topic `/royale_camera_driver/depth_image` and PointCloud2 messages under the topic `/royale_camera_driver/point_cloud`. Both are provided by the official pmd Royale ROS wrapper.
The node publishes object messages under the topic `/object_recognition/recognized_object`. The message type is defined in `msg/RecognizedObject.msg`. An example message is shown below:
```
header:
  seq: 1939
  stamp:
    secs: 1525095329
    nsecs: 744063000
  frame_id: royale_camera_link
name: Duplo
midpoint:
  x: -0.0169242266566
  y: 0.00806986819953
  z: 0.188075810671
width: 0.033241365099
rotation: -51
```
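A subscriber to this topic can turn the message fields into simple grasp parameters. The sketch below is plain Python with no ROS dependency, treating the message fields as plain values; the helper name `plan_grasp` and the `GRIPPER_MAX_OPENING` limit are illustrative assumptions, not part of this package:

```python
import math

# Hypothetical maximum gripper opening in meters -- check the youBot
# gripper specification before relying on this value.
GRIPPER_MAX_OPENING = 0.06

def plan_grasp(midpoint, width, rotation_deg):
    """Turn RecognizedObject fields into naive grasp parameters.

    midpoint     -- (x, y, z) in meters, in the royale_camera_link frame
    width        -- object width in meters at the grasping point
    rotation_deg -- in-plane rotation of the object in degrees
    """
    if width > GRIPPER_MAX_OPENING:
        return None  # object too wide for the two-finger gripper
    return {
        "target": midpoint,                     # where to move the gripper
        "yaw_rad": math.radians(rotation_deg),  # gripper rotation in radians
        "opening": width,                       # how far to open the fingers
    }

# Values taken from the example message above
grasp = plan_grasp((-0.0169242266566, 0.00806986819953, 0.188075810671),
                   0.033241365099, -51)
```

In a real node this logic would run inside the callback of a `rospy.Subscriber` on `/object_recognition/recognized_object`.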
Configuration is done in `parameters/settings.json`. The default values in this file correspond to the youBot object recognition pose described in `parameters/ObjectRecognitionPose.md`.
- `"debugging"`: Whether to display debugging images.
- `"objects"`: Path to the objects .json file.
- `"camera_thresh"`: Camera distance to the ground. (Float, in meters)
- `"camera_max"`: Camera distance to the highest object. (Float, in meters)
- `"maximal_contour_difference"`: Maximum difference between contours (compared via Hu moments) for an object to be recognized.
- `"minimal_contour_length"`: Minimum number of points a contour must have to be considered a potential object.
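When editing `parameters/settings.json` by hand it is easy to drop a key. The sketch below is a small, hedged validation helper (not part of this package); the key list is taken from the parameter descriptions above:

```python
import json

# Keys the node reads from parameters/settings.json (per the list above)
REQUIRED_KEYS = {
    "debugging",
    "objects",
    "camera_thresh",
    "camera_max",
    "maximal_contour_difference",
    "minimal_contour_length",
}

def validate_settings(settings):
    """Raise KeyError if any expected settings key is missing."""
    missing = REQUIRED_KEYS - settings.keys()
    if missing:
        raise KeyError("settings.json is missing keys: %s" % sorted(missing))
    return settings

def load_settings(path):
    """Load and validate a settings.json file."""
    with open(path) as f:
        return validate_settings(json.load(f))
```

Failing fast at startup with a clear message is friendlier than a `KeyError` deep inside the recognition loop.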
- Executing the Learner:

  ```
  roslaunch object_recognition_pico_flexx Learner.launch
  ```

- Executing the Recognizer:

  ```
  roslaunch object_recognition_pico_flexx Recognizer.launch
  ```