An overview of this program.
Go to Introduction
A brief description of how the program works (pseudo-code).
Go to How it works
How to install and run this program on Linux.
Go to Installation and Execution
How the Jupyter Notebook interface could be improved.
Go to Improvements
The main aim of this repository is to provide an alternative, more modular implementation of the program that can be downloaded at this link:
https://github.com/simone-contorno/rt-assignment-3
Here, the User Interface, written in Python, is implemented in a separate node and also in Jupyter.
There are 3 files:
- rt2_robot_logic.cpp: simulates the 'logic' of the robot.
- rt2_robot_interface.py: provides the UI (User Interface) to the user.
- rt2_robot_jupyter_ui.ipynb: provides the graphical UI (User Interface) to the user.
By managing publishers, subscribers and server parameters, it can prevent the robot from crashing against a wall (if requested by the user) and record how many targets it reached and how many it did not. A target is considered 'not reached' when the robot does not reach it within 2 minutes; the goal is then cancelled.
There are 3 subscribers that run simultaneously thanks to the multi-thread architecture provided by the ROS class AsyncSpinner:
- sub_pos: subscribes to the topic /move_base/feedback through the function currentStatus, which continuously updates the current goal ID and checks whether the robot has reached the goal position.
- sub_goal: subscribes to the topic /move_base/goal through the function currentGoal, which continuously updates the current goal coordinates.
- sub_laser: subscribes to the topic /scan through the function drivingAssistance, which continuously takes data from the laser scanner and, if the driving assistance is enabled, helps the user drive the robot by stopping it if there is a wall too close in the current direction.
You can change 3 constant values to modify some aspects of the program:
- DIST: minimum distance from the wall with the driving assistance enabled.
- POS_ERROR: position range error.
- MAX_TIME: maximum time to reach a goal (microseconds).
#define DIST 0.35
#define POS_ERROR 0.5
#define MAX_TIME 120000000
FUNCTION drivingAssistance WITH (msg)
COMPUTE minimum distance on the right
COMPUTE minimum distance in the middle
COMPUTE minimum distance on the left
IF driving assistance is enabled AND
the robot is going against a wall THEN
SET robot velocity TO 0
PUBLISH robot velocity
ENDIF
IF a goal position is set THEN
COMPUTE the time elapsed
IF the time elapsed IS GREATER THAN 120 seconds THEN
DELETE current goal
INCREASE number of non-reached goals
ENDIF
ENDIF
ENDFUNCTION
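The stopping logic above can be sketched in plain Python. This is only an illustration: the three-sector split of the scan, the ROS sign convention for angular velocity, and the command dictionary are assumptions, not the repository's actual code.

```python
DIST = 0.35  # minimum allowed distance from a wall

def driving_assistance(ranges, assistance_enabled, cmd):
    """Decide whether to stop the robot.

    ranges: laser distances; index 0 is assumed to be the rightmost ray.
    cmd: dict with 'linear' and 'angular' velocities.
    Returns the command to publish (zeroed if blocked).
    """
    third = len(ranges) // 3
    min_right = min(ranges[:third])
    min_middle = min(ranges[third:2 * third])
    min_left = min(ranges[2 * third:])

    # The robot is "going against a wall" when the sector it is
    # moving towards contains an obstacle closer than DIST.
    blocked = (
        (cmd['linear'] > 0 and min_middle < DIST)
        or (cmd['angular'] > 0 and min_left < DIST)    # turning left
        or (cmd['angular'] < 0 and min_right < DIST)   # turning right
    )
    if assistance_enabled and blocked:
        return {'linear': 0.0, 'angular': 0.0}
    return cmd
```

With assistance disabled the user's command passes through unchanged, which matches the IF guard in the pseudo-code above.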
FUNCTION currentStatus WITH (msg)
SET current robot position
COMPUTE the difference between the current robot position and the current goal position
IF the difference IS LESS THAN 0.5 THEN
STOP computing the elapsed time
INCREASE number of reached goals
ENDIF
UPDATE the goal ID if there is a new one
ENDFUNCTION
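The reached-goal test in currentStatus boils down to comparing the current position with the goal position against POS_ERROR. A minimal sketch (whether the original check is per-axis or Euclidean is an assumption here):

```python
import math

POS_ERROR = 0.5  # position range error (metres)

def goal_reached(current_pos, goal_pos):
    """True when the robot is within POS_ERROR of the goal.

    Positions are (x, y) tuples; the Euclidean distance is used.
    """
    dx = current_pos[0] - goal_pos[0]
    dy = current_pos[1] - goal_pos[1]
    return math.hypot(dx, dy) < POS_ERROR
```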
FUNCTION currentGoal WITH (msg)
SET current goal position
ENDFUNCTION
FUNCTION main WITH (argc, argv)
INITIALIZE the node "rt2_robot_logic"
SET the first publisher TO "move_base/cancel"
SET the second publisher TO "cmd_vel"
SET the first subscriber TO "/move_base/feedback" WITH currentStatus
SET the second subscriber TO "/move_base/goal" WITH currentGoal
SET the third subscriber TO "/scan" WITH drivingAssistance
INITIALIZE spinner WITH 3 threads
START spinner
INPUT "If you want to quit, enter any key:"
PRINT "Bye."
STOP spinner
CALL ros::shutdown
CALL ros::waitForShutdown
RETURN 0
ENDFUNCTION
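The multi-threaded spinner layout of main can be illustrated in plain Python, with one worker thread per subscription draining a message queue. The queue-per-topic model is only an analogy for how AsyncSpinner dispatches callbacks concurrently; roscpp's internals differ.

```python
import queue
import threading

def spin(callbacks, topics):
    """Start one worker per topic, mimicking a 3-thread AsyncSpinner:
    each thread drains its own queue and hands every message to the
    matching callback. A None message stops that worker."""
    threads = []
    for name in topics:
        def worker(n=name):
            while True:
                msg = topics[n].get()
                if msg is None:
                    break
                callbacks[n](msg)
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
    return threads
```

With three queues standing in for /move_base/feedback, /move_base/goal and /scan, this reproduces the structure of the three subscribers described above.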
The user will be able to choose between two different modalities:
- Automatic goal reaching.
- Manual driving with or without the driving assistance.
FUNCTION manualDriving
WHILE user does not quit
TAKE user input through the keyboard
EXEC corresponding task
PUBLISH new robot velocity
ENDWHILE
ENDFUNCTION
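The manual-driving loop maps keystrokes to velocity commands before publishing them. A plain-Python sketch of one possible dispatch table (the key bindings and speed values are illustrative, not taken from the repository):

```python
# Illustrative key bindings: WASD-style driving, 'x' to stop, 'q' to quit.
COMMANDS = {
    'w': {'linear': 0.5, 'angular': 0.0},   # forward
    's': {'linear': -0.5, 'angular': 0.0},  # backward
    'a': {'linear': 0.0, 'angular': 0.5},   # turn left
    'd': {'linear': 0.0, 'angular': -0.5},  # turn right
    'x': {'linear': 0.0, 'angular': 0.0},   # stop
}

def manual_driving(keys):
    """Consume an iterable of keystrokes and yield the velocities
    that would be published, stopping when the user quits."""
    for key in keys:
        if key == 'q':
            break
        if key in COMMANDS:
            yield COMMANDS[key]
```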
FUNCTION userInterface
WHILE user does not quit
TAKE user input through the keyboard
EXEC corresponding task
ENDWHILE
ENDFUNCTION
FUNCTION main WITH (argc, argv)
INITIALIZE the node "rt2_robot_ui"
PRINT starting message
CALL userInterface
PRINT "Bye."
RETURN 0
ENDFUNCTION
In addition to what the 'rt2_robot_interface.py' file provides, in Jupyter it is possible to plot:
- the robot position (with or without tracking the whole history, by uncommenting or commenting Section 4.2 respectively);
- the laser scanner data;
- the number of reached and non-reached goals.
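For the laser-scanner plot, the notebook essentially draws one scan in polar coordinates. A rough matplotlib sketch of that idea (the 180° field of view and the sample count are assumptions about the scanner, not values from the notebook):

```python
import math
import matplotlib
matplotlib.use('Agg')  # headless backend for this illustration
import matplotlib.pyplot as plt

def plot_scan(ranges, fov=math.pi):
    """Plot one laser scan on a polar axis, rays spread over fov radians."""
    angles = [i * fov / (len(ranges) - 1) for i in range(len(ranges))]
    fig = plt.figure()
    ax = fig.add_subplot(projection='polar')
    ax.plot(angles, ranges)
    return fig

fig = plot_scan([1.0 + 0.001 * i for i in range(720)])
```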
To use the first modality, uncomment Section 1.1 and comment Section 1.2.
To use the second modality, comment Section 1.1 and uncomment Section 1.2.
The first one is used by default.
It is also possible to visualize the 3D map of Rviz by running Section 5.
To read the documentation of the files "rt2_robot_interface.py" and "rt2_robot_logic.cpp" go to this link:
https://simone-contorno.github.io/rt2-assignment/files.html.
Go into the src folder of your ROS workspace.
Download this repository:
git clone https://github.com/simone-contorno/rt2-assignment
Go into the root folder of your ROS workspace and build the workspace:
catkin_make
Afterwards refresh the package list:
rospack profile
Now, to launch all these nodes using the launch file rt2_robot.launch, install xterm:
sudo apt install xterm
Launch the launch file:
roslaunch rt2_robot rt2_robot.launch
Alternatively, if you don't want to install xterm, you can open 4 terminals and:
- In the first one launch the environment:
roslaunch rt2_robot simulation_gmapping.launch
- In the second one launch the action server move_base:
roslaunch rt2_robot move_base.launch
- In the third one run the logic of the robot:
rosrun rt2_robot rt2_robot_logic
- In the fourth one run the user interface:
rosrun rt2_robot rt2_robot_interface.py
In this second case, be careful to run rt2_robot_logic before starting to use the user interface; otherwise you may run into problems!
If you want to use the Jupyter Notebook interface, you need to start it:
jupyter notebook --allow-root --ip 0.0.0.0
Now you can go to the rt2_robot folder in your ROS workspace and open the file rt2_robot_jupyter_ui.ipynb.
Once open, run all the cells.
By default, you will find:
- The User Interface to control the robot in the Section 3.
- The data plots in the Section 4.
- Robot position data: Section 4.3.
- Laser scanner data: Section 4.4.
- Number of reached/non-reached targets: Section 4.5.
- The 3D map in the Section 5.
If you change the plot modality in Section 1, you will not find the data plots in Section 4, but you can plot them through the User Interface in Section 3.
In Jupyter Notebook, when data are plotted directly by the UI in Section 3, the kernel remains busy and it is not possible to keep using the interface until the plotting windows are closed. A good improvement would therefore be to allow the user to continue using the interface while the data are being plotted.
Also, with the default plot modality, updating the number of reached/non-reached targets dynamically, without clicking the update button, would be a good user-experience improvement.