The home robot project. The goal is to integrate multiple components into one project. The key components are:

- Manipulation: to pick and place objects
- Navigation: to move inside a "home" environment
- Perception: for object detection and related modules
- (Optional) CoppeliaSim: for simulation

The goal is to run the different modules in their own docker containers. As of now, you need to build all 3 modules, but perhaps these dependencies can be managed better later.
All docker images are in the `src/docker/` directory. The different docker images are:

- `base_image`: contains a base installation of ros-melodic, which all other containers are built from.
- `navigation`: contains installations required for navigation: primarily mapping (with slam_karto), localization (with amcl), and navigation (with move_base).
- `perception_noetic`: contains a Detic detector image.
- `perception`: contains the object and receptacle detector image. A simple ros-melodic image with basic opencv packages.
- `tidy_module`: a simple ros-melodic image with dependencies for the house tidying modules.
- `pipeline`: support for behavior trees and other messages.
- `coppeliasim`: runs the simulation.
For manipulation, refer to Jiaming's docker image.
Each docker container is also accompanied by a `catkin build` command that builds the respective ros packages needed for the module. Navigate to each directory inside the docker directory and run `bash build.sh` to build the different docker images.
To run, some tools are provided in the `docker/dockerlogin` directory:

- `bash cogrob_docker_create.sh <cont_name> <image_name>` to create a container
- `bash cogrob_docker_exec.sh <cont_name>` to enter a created container
The different modules in detail are:

The `fetch` directory contains the default fetch modules with the controller and its application packages. The packages of interest for this project are:

- `fetch_moveit_config`: loads the controllers and the `move_group` needed for manipulation
- `fetch_navigation`: launches the `move_base` node for navigation
- `fetch_description`: contains the URDF of the robot
The navigation package is responsible for localization and navigation; it requires extensive tuning. The different submodules of interest are:
Found inside the `fetch/fetch_ros/fetch_navigation` package. The main file of interest is `src/fetch/fetch_ros/fetch_navigation/launch/fetch_nav.launch`. This has been modified from the default package, and we run two planners:

- Navfn global planner, which runs in the `move_base_planner_navfn` namespace. This is the standard global planner: it takes any free location on the map as a goal and navigates towards it. This is used for room-to-room navigation, or navigation to a default location. The config files are named `*move_base*`.
- Carrot global planner, which runs in the `move_base_planner_carrot` namespace. This planner can plan a path only in straight lines, and is useful when the goal location is on an obstacle. This is good for receptacle navigation, where the goal point is on the receptacle and we navigate in a straight line to a point as close as possible to the receptacle. The config files are named `*move_base_carrot*`.
Some values to start tuning are:

- Local planner tolerance values in `config/move_base.yaml` and/or `config/move_base_carrot.yaml`
- Velocity and acceleration profiles in `config/fetch/move_base.yaml` and/or `config/fetch/move_base_carrot.yaml`
This module talks with the `move_base` node of `fetch_navigation`. It receives a Pose2D as a goal, and sends a Pose2D goal to either the Navfn or the Carrot planner depending on the input. It runs an ActionServer in the "move_fetch_robot_base" namespace.
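A minimal client sketch follows; the action type `MoveFetchRobotBaseAction`, its package `base_navigation`, and the goal field names are assumptions, since only the namespace and the Pose2D input are documented here.

```python
#!/usr/bin/env python
# Hypothetical client sketch: the action/package names and goal fields are
# assumed; only the "move_fetch_robot_base" namespace and the Pose2D goal
# are documented above.
import rospy
import actionlib
from geometry_msgs.msg import Pose2D
from base_navigation.msg import MoveFetchRobotBaseAction, MoveFetchRobotBaseGoal  # assumed

rospy.init_node("base_goal_client")
client = actionlib.SimpleActionClient("move_fetch_robot_base", MoveFetchRobotBaseAction)
client.wait_for_server()

goal = MoveFetchRobotBaseGoal()
goal.pose = Pose2D(x=1.0, y=0.5, theta=0.0)  # assumed field name
goal.use_carrot = False  # assumed switch between the Navfn and Carrot planners
client.send_goal(goal)
client.wait_for_result()
```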
The semantic localization package contains services that can perform:

- Map-to-pixel and pixel-to-map conversions, to go from a pixel map to a metric 2D pose (x, y) in the map frame and vice-versa. (This should probably be moved to a package called map_utils.)
- Semantic map modules. Loads an npy file containing semantic annotations of a map, along with its label.txt giving the number-to-category conversion. It offers two services:
  - "semantic_localize": provides the "room" and Pose2D of the robot, i.e. which room it is in, and the exact coordinates in the map frame.
  - "semantic_location_to_pose": given a room, provides the Pose2D center of the room.

Details can be found in `semantic_localization/scripts/localize.py`.
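A hedged usage sketch (the srv type names and response fields below are assumptions; the two service names come from this document):

```python
#!/usr/bin/env python
# Hypothetical sketch: srv types and field names are assumed; the two
# service names are documented above.
import rospy
from semantic_localization.srv import SemanticLocalize, SemanticLocationToPose  # assumed srv types

rospy.init_node("semantic_map_client")
rospy.wait_for_service("semantic_localize")
rospy.wait_for_service("semantic_location_to_pose")

localize = rospy.ServiceProxy("semantic_localize", SemanticLocalize)
room_to_pose = rospy.ServiceProxy("semantic_location_to_pose", SemanticLocationToPose)

here = localize()                      # -> room name + Pose2D of the robot (assumed fields)
print(here.room, here.pose)
target = room_to_pose(room="kitchen")  # -> Pose2D center of the room (assumed field)
print(target.pose)
```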
Contains a module that navigates to a room on request. Relevant file: `src/navigation/room_graph_navigator/scripts/object_room_navigator.py`.

- Action namespace: "object_room_navigator"
- Input: room (string)
- Output: None
- Effect: navigates to the room, by calling "semantic_location_to_pose" from the semantic localization package to get the Pose2D of the room, and navigating to it.
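A hedged client sketch, assuming an action type named `NavigateToRoomAction` (only the namespace and the string input are documented):

```python
#!/usr/bin/env python
# Hypothetical sketch -- the action type and goal field are assumed; only the
# "object_room_navigator" namespace and the string input are documented above.
import rospy
import actionlib
from room_graph_navigator.msg import NavigateToRoomAction, NavigateToRoomGoal  # assumed

rospy.init_node("room_nav_client")
client = actionlib.SimpleActionClient("object_room_navigator", NavigateToRoomAction)
client.wait_for_server()
client.send_goal(NavigateToRoomGoal(room="bedroom"))  # assumed goal field
client.wait_for_result()
```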
Contains a module that navigates to receptacles. Contains an action to navigate, and 2 services.

- `available_receptacles` service: given a list of candidate receptacles, returns the receptacles that are found in the scene and their locations.
  - Service namespace: available_receptacles
  - SRV name: GetReceptacleLocations
  - Input: list of receptacles to search for (list of strings)
  - Output: NamedLocation[] of receptacle names with their 2D poses
- `receptor_approach_pose` service: given a receptacle with its 2D pose, finds a free location that is closest to it.
  - Service namespace: receptor_approach_pose
  - SRV name: GetGoalPoseForReceptacle
  - Input: NamedLocation of receptacle with its 2D pose
  - Output: Pose2D to navigate to
- `receptacle_navigator` action: navigates to a receptacle. Given a receptacle name, calls the receptor_approach_pose service to get a 2D location, and calls the carrot planner to drive in a straight line to it.
  - Action namespace: receptacle_navigator
  - Action name: NavigateToReceptacleAction
  - Input: 2D pose
  - Output: Success or Failure
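A hedged sketch chaining the three interfaces above (the srv/action package and field names are assumptions; the namespaces and SRV names are from this document):

```python
#!/usr/bin/env python
# Hypothetical sketch -- package and field names are assumed; namespaces and
# SRV names are documented above.
import rospy
import actionlib
from receptacle_navigator.srv import GetReceptacleLocations, GetGoalPoseForReceptacle  # assumed
from receptacle_navigator.msg import NavigateToReceptacleAction, NavigateToReceptacleGoal  # assumed

rospy.init_node("receptacle_client")

rospy.wait_for_service("available_receptacles")
available = rospy.ServiceProxy("available_receptacles", GetReceptacleLocations)
found = available(receptacles=["table", "shelf"])  # assumed request field

rospy.wait_for_service("receptor_approach_pose")
approach = rospy.ServiceProxy("receptor_approach_pose", GetGoalPoseForReceptacle)
goal = approach(receptacle=found.receptacle_locations[0])  # assumed fields

nav = actionlib.SimpleActionClient("receptacle_navigator", NavigateToReceptacleAction)
nav.wait_for_server()
nav.send_goal(NavigateToReceptacleGoal(pose=goal.goal_pose))  # assumed fields
nav.wait_for_result()
```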
A package to launch the different navigation packages.
Responsible for creating ros services/actions for all perception-related modules. The main ones are:

Standard ROS message formats for 2D and 3D object detection. Important messages are Detection2D and Detection2DArray.
A server for detecting objects on demand.

- Service namespace: detector_2d
- SRV name: detect2DObject
- Input: None
- Output: Detection2DArray of 2D bounding boxes of detected objects in the scene
A server for detecting receptacles on demand.

- Service namespace: receptacle_detector
- SRV name: DetectReceptacles
- Input: None
- Output: NamedLocation of receptacles
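Both detectors follow the same on-demand service pattern. A hedged client sketch (the srv package `object_detector` and the response field names are assumptions; the service and SRV names are from this document):

```python
#!/usr/bin/env python
# Hypothetical client sketch for both detector servers; package and response
# field names are assumed.
import rospy
from object_detector.srv import detect2DObject, DetectReceptacles  # assumed package

rospy.init_node("perception_clients")

rospy.wait_for_service("detector_2d")
detect_objects = rospy.ServiceProxy("detector_2d", detect2DObject)
objects = detect_objects()  # no request fields, per the doc
for det in objects.detections.detections:  # Detection2DArray -> Detection2D list (assumed field)
    print(det.bbox)

rospy.wait_for_service("receptacle_detector")
detect_receptacles = rospy.ServiceProxy("receptacle_detector", DetectReceptacles)
receptacles = detect_receptacles()
for rec in receptacles.receptacles:  # assumed NamedLocation[] response field
    print(rec.name, rec.pose)
```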
Refer to Jiaming's modules
Contains services to detect objects out of place, and a service to get the correct placements. Services are:

- Service namespace: objects_out_of_place_service
- SRV name: IdentifyMisplacedObjects
- Input: None (implicitly should call the camera)
- Output: [(obj, room, recep)], i.e. a list of (obj, room, recep) tuples of objects that are out of place in the given camera view
It calls semantic_localize first to get its current location and the room it is in. Then it calls the receptacle detector to get the receptacle it is looking at, and the object detector to get the objects on the receptacle. This (obj, room, recep) information for all objects found on receptacles in the scene is sent to the internal tidy module, which finally outputs the objects that are out of place.
#TODO Split into two files: one containing just the tidy components, and the other with the ROS-related things, to be able to call semantic_localize only when needed.
- Service namespace: correct_object_placement_service
- SRV name: GetCorrectPlacements
- Input: (obj, room, recep) tuple
- Output: [room, [receptacles]]

Given an object and its current location (room, receptacle), gets the candidate placements it can go to.
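A hedged sketch of calling the two tidy services together (the srv package and field names are assumptions; the service names are from this document):

```python
#!/usr/bin/env python
# Hypothetical sketch -- srv package and field names are assumed; service
# names are documented above.
import rospy
from tidy_module.srv import IdentifyMisplacedObjects, GetCorrectPlacements  # assumed package

rospy.init_node("tidy_client")
rospy.wait_for_service("objects_out_of_place_service")
rospy.wait_for_service("correct_object_placement_service")

find_misplaced = rospy.ServiceProxy("objects_out_of_place_service", IdentifyMisplacedObjects)
get_placements = rospy.ServiceProxy("correct_object_placement_service", GetCorrectPlacements)

misplaced = find_misplaced()  # uses the current camera view
for obj in misplaced.objects:  # assumed response field: (obj, room, recep) tuples
    placements = get_placements(object=obj)  # assumed request field
    print(obj, "->", placements)
```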
Runs all the different parts in a sequence. There are two ways to do this:

- A standard python script with instructions one after another
- Behavior trees

The script version is in `src/service_robot_pipeline/scripts/main_pipeline.py`. NOT COMPLETE YET. It just calls one module after another and takes care of message passing.

The behavior tree version adds everything into a behavior tree. Individual behaviors are listed in `src/service_robot_pipeline/scripts/behaviors.py`.
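A structural sketch of the behavior-tree approach, assuming the `py_trees` library (the actual library used in `behaviors.py` is not stated here):

```python
#!/usr/bin/env python
# Hypothetical structure of the behavior-tree pipeline. Assumes py_trees 2.x;
# the real behaviors live in src/service_robot_pipeline/scripts/behaviors.py.
import py_trees

class IdentifyMisplacedObjects(py_trees.behaviour.Behaviour):
    """One pipeline step: would call objects_out_of_place_service and stash
    the result on the blackboard (omitted here)."""
    def update(self):
        # Returning SUCCESS lets the parent Sequence advance to the next child.
        return py_trees.common.Status.SUCCESS

root = py_trees.composites.Sequence("TidyPipeline", memory=True)
root.add_children([
    IdentifyMisplacedObjects("identify_misplaced"),
    # ...navigation, picking, and placing behaviors would follow the same pattern
])
tree = py_trees.trees.BehaviourTree(root)
tree.tick()  # one pass through the pipeline
```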
Open 6 terminals. In each terminal, create an instance of a docker image.

CoppeliaSim:

```
bash src/docker/dockerlogin/cogrob_docker_create.sh homerobot_coppeliasim coppeliasim:melodic
source devel/setup.bash
roslaunch fetch_coppeliasim launch_simulation.launch
```

Perception - 2D detector:

```
bash src/docker/dockerlogin/cogrob_docker_create.sh homerobot_perception_noetic ros_homerobot_perception:noetic
source devel/setup.bash
roslaunch detic_ros detic_detector.launch
```

Perception - melodic detectors:

```
bash src/docker/dockerlogin/cogrob_docker_create.sh homerobot_perception_melodic homerobot_perception:melodic
source devel/setup.bash
roslaunch perception_bringup perception_launch.launch
```

Navigation:

```
bash src/docker/dockerlogin/cogrob_docker_create.sh homerobot_navigation ros_navigation_homerobot:melodic
source devel/setup.bash
roslaunch navigation_bringup navigation_launch.launch
```

Manipulation (replace with whatever Jiaming says):

```
bash src/docker/dockerlogin/cogrob_docker_create.sh ros_sr_pipeline homerobot_pipeline:melodic
source devel/setup.bash
roslaunch manipulation_bringup manipulation_launch.launch
```

Tidy Services:

```
bash src/docker/dockerlogin/cogrob_docker_create.sh homerobot_tidyservices homerobot_tidy_services:melodic
source devel/setup.bash
roslaunch tidy_module tidy_services.launch
```

Main pipeline:

```
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_pipeline
source devel/setup.bash
rosrun service_robot_pipeline main_pipeline_bt.py
```
Open 5 terminals, and navigate to `home-robot-project` in each terminal.
Perception - 2D detector:

```
docker start homerobot_perception_noetic
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_perception_noetic
cd /root/detic_ws
source devel/setup.bash
roslaunch detic_ros detic_detector.launch
```

Perception - melodic detectors:

```
docker start homerobot_perception_melodic
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_perception_melodic
cd /catkin_ws
source devel/setup.bash
roslaunch perception_bringup perception_launch.launch
```

Navigation:

```
docker start homerobot_navigation
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_navigation
cd /catkin_ws
source devel/setup.bash
roslaunch navigation_bringup navigation_launch.launch
```

Tidy Services:

```
docker start homerobot_tidyservices
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_tidyservices
cd /catkin_ws
source devel/setup.bash
roslaunch tidy_module tidy_services.launch
```

Main pipeline:

```
docker start homerobot_pipeline
bash src/docker/dockerlogin/cogrob_docker_exec.sh homerobot_pipeline
cd /catkin_ws
source devel/setup.bash
rosrun service_robot_pipeline main_pipeline_bt.py
```
Run manipulation on your machine. (No need to run localization, I think, since this system is more powerful.)

In each terminal, you need to go to the `bashrc` and change the IP of the fetch, then source the `bashrc`, and then `source devel/setup.bash`. Also edit `/etc/hosts` in terminals 1, 2, 3, 4, and 5 if needed.
If navigation fails, you may sometimes need to run

```
rosservice call /move_base_planner_navfn/move_base/clear_costmaps "{}"
rosservice call /move_base_planner_carrot/move_base/clear_costmaps "{}"
```

in a new terminal. Everything should run.