
[IROS 2024] [ICML 2024 Workshop Differentiable Almost Everything] Physics-informed model to predict robot-terrain interactions from RGB images.


ctu-vras/monoforce


MonoForce: PINN for Traversability Estimation from RGB images


Examples of predicted trajectories and autonomous traversal through vegetation:

Input: onboard camera images:

Output: predicted trajectory, terrain shape and properties, interaction forces and contacts:

The three people are visible as pillars within the blue area.

Robot-terrain interaction prediction from only RGB images as input.


Running

The MonoForce pipeline consists of the Terrain Encoder and the Differentiable Physics modules. Given input RGB images and camera calibration, the Terrain Encoder predicts the robot's supporting terrain. The Differentiable Physics module then simulates the robot trajectory and interaction forces on the predicted terrain for a provided control sequence (linear and angular velocities). Refer to the monoforce/examples folder for implementation details.
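The two-stage structure above can be sketched in a few lines of Python. This is a minimal toy sketch, not the actual MonoForce code: `predict_heightmap` stands in for the neural Terrain Encoder, and the physics is reduced to a unicycle model with a height lookup; all function names and grid dimensions here are illustrative assumptions.

```python
import numpy as np

def predict_heightmap(images):
    """Stand-in for the Terrain Encoder: here a flat 12.8 x 12.8 m grid.
    (The real module is a neural network; this stub only fixes the interface.)"""
    return np.zeros((128, 128)), 0.1  # height grid [m], cell size [m]

def simulate_trajectory(heightmap, cell_size, controls, dt=0.1):
    """Toy stand-in for the Differentiable Physics module: integrate a
    unicycle model (x, y, yaw) and look up terrain height under the robot."""
    x = y = yaw = 0.0
    traj = []
    for v, w in controls:  # linear and angular velocity commands
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
        # grid is centred on the robot's starting pose
        i = int(np.clip(x / cell_size + heightmap.shape[0] // 2, 0, heightmap.shape[0] - 1))
        j = int(np.clip(y / cell_size + heightmap.shape[1] // 2, 0, heightmap.shape[1] - 1))
        traj.append((x, y, heightmap[i, j], yaw))
    return np.array(traj)

heightmap, cell = predict_heightmap(images=None)
controls = [(1.0, 0.0)] * 20  # drive straight at 1 m/s for 2 s
traj = simulate_trajectory(heightmap, cell, controls)
```

The essential point is the interface: the encoder turns images into a terrain grid, and the physics module turns that grid plus a control sequence into a trajectory.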

Please run the following command to explore the MonoForce pipeline:

cd monoforce/
python scripts/run.py --img-paths IMG1_PATH IMG2_PATH ... IMGN_PATH --cameras CAM1 CAM2 ... CAMN --calibration-path CALIB_PATH

For example, to test the model with the provided images from the ROUGH dataset:

cd monoforce/scripts/
./run.sh

Please refer to the Terrain Encoder documentation to download the pretrained model weights.

ROS Integration

If you have ROS and Docker installed, you can also run:

docker pull agishrus/monoforce
cd monoforce_demos/scripts/
./demo.sh

or equivalently:

roslaunch monoforce_demos monoforce_rough.launch

We provide ROS nodes for both the trained Terrain Encoder model and the Differentiable Physics module. They are integrated into the launch file:

roslaunch monoforce monoforce.launch

Terrain Properties Prediction

In addition to the terrain shape (elevation), we estimate the following terrain properties:

  • Friction: The friction coefficient between the robot and the terrain.
  • Stiffness: The terrain stiffness.
  • Damping: The terrain damping.
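To illustrate how these three properties enter a robot-terrain interaction model, here is a generic spring-damper contact with Coulomb-limited friction. This is a textbook sketch, not the force model used in the paper; the function name, the slip-regularization constant, and the example numbers are all assumptions.

```python
import numpy as np

def contact_force(penetration, normal_vel, tangential_vel,
                  stiffness, damping, friction):
    """Spring-damper normal force with Coulomb-limited tangential friction.
    penetration    -- depth of the contact point below the terrain [m]
    normal_vel     -- contact-point velocity along the terrain normal [m/s]
    tangential_vel -- contact-point velocity along the surface [m/s]
    """
    # normal force: the spring (stiffness) resists penetration,
    # the damper (damping) resists motion along the normal
    f_n = max(0.0, stiffness * penetration - damping * normal_vel)
    # tangential force opposes sliding, capped by mu * N (Coulomb cone);
    # the 1e3 factor is an arbitrary slip regularization for the demo
    f_t = -np.sign(tangential_vel) * min(friction * f_n, abs(tangential_vel) * 1e3)
    return f_n, f_t

f_n, f_t = contact_force(penetration=0.02, normal_vel=-0.1, tangential_vel=0.5,
                         stiffness=5000.0, damping=100.0, friction=0.6)
```

Stiffness and damping shape the normal force at each contact, while friction bounds the tangential force the terrain can transmit, which is what ultimately determines traction.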

An example of the predicted elevation and friction maps:

video link

One can see that the model predicts higher friction values for road areas and lower values for grass, where the robot has less traction.

Please refer to the train_friction_head_with_pretrained_terrain_encoder.ipynb notebook for an example of learning the terrain properties with a pretrained Terrain Encoder model and a differentiable physics loss.
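The core idea of a differentiable physics loss can be sketched independently of the notebook: simulate a rollout that depends on a physical parameter, compare it to a reference trajectory, and backpropagate through the simulation. The PyTorch example below is a deliberately tiny stand-in (a single scalar friction coefficient and a coasting robot decelerated by friction); the real model predicts a per-cell friction map and uses the full physics engine.

```python
import torch

# A scalar friction coefficient to learn
# (the real friction head predicts a per-cell friction map).
mu = torch.tensor(0.2, requires_grad=True)
opt = torch.optim.Adam([mu], lr=0.05)

def rollout(mu, v0=2.0, dt=0.1, steps=10, g=9.81):
    """Toy differentiable physics: a coasting robot decelerated by friction."""
    v, x, xs = v0, torch.tensor(0.0), []
    for _ in range(steps):
        v = torch.clamp(v - mu * g * dt, min=0.0)  # friction slows the robot
        x = x + v * dt
        xs.append(x)
    return torch.stack(xs)

# "ground-truth" trajectory generated with mu = 0.5
target = rollout(torch.tensor(0.5)).detach()

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(rollout(mu), target)
    loss.backward()  # gradients flow through the simulation steps
    opt.step()
```

After optimization, `mu` should approach the value used to generate the reference trajectory, which is exactly the mechanism that lets trajectory supervision train the friction head.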

Navigation

MonoForce can also be used for navigation: it predicts terrain properties and candidate robot trajectories from RGB images and control inputs, serving as a combined robot-terrain interaction and path-planning pipeline.

video link

We provide two differentiable physics models for robot-terrain interaction prediction.

Navigation consists of the following stages:

  • Height map prediction: The Terrain Encoder part of MonoForce estimates the terrain shape and properties.
  • Trajectory shooting: The Differentiable Physics part of MonoForce rolls out candidate robot trajectories on the predicted terrain.
  • Trajectory selection: The trajectory with the smallest cost, based on the predicted robot-terrain interaction forces, is selected.
  • Control: The robot is commanded to follow the selected trajectory.
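The shooting-and-selection stages above can be sketched as follows. This is a minimal illustration with made-up numbers: the unicycle rollout stands in for the physics module, and the cost is distance-to-goal rather than the interaction-force-based cost the pipeline actually uses.

```python
import numpy as np

def shoot(controls, dt=0.1):
    """Roll out a unicycle model for one (v, w) control sequence."""
    x = y = yaw = 0.0
    traj = []
    for v, w in controls:
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        traj.append((x, y))
    return np.array(traj)

def cost(traj, goal):
    """Toy cost: distance of the final pose to the goal (the real pipeline
    scores trajectories by predicted robot-terrain interaction forces)."""
    return np.linalg.norm(traj[-1] - goal)

goal = np.array([2.0, 1.0])
# candidate control sequences: constant forward speed, varying turn rates
candidates = [[(1.0, w)] * 25 for w in np.linspace(-1.0, 1.0, 11)]
trajs = [shoot(c) for c in candidates]
best = min(range(len(trajs)), key=lambda i: cost(trajs[i], goal))
```

Shooting a small bank of fixed control sequences and picking the cheapest rollout is a simple sampling-based planner; the physics-aware cost is what makes the selected trajectory traversability-aware.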

To run the navigation pipeline in the Gazebo simulator:

roslaunch monoforce_demos husky_gazebo_monoforce.launch

Citation

If you find this work relevant to your research, please consider citing the papers:

@inproceedings{agishev2024monoforce,
    title={MonoForce: Self-supervised Learning of Physics-informed Model for Predicting Robot-terrain Interaction},
    author={Ruslan Agishev and Karel Zimmermann and Vladimír Kubelka and Martin Pecka and Tomáš Svoboda},
    booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems - IROS},
    year={2024},
    eprint={2309.09007},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2309.09007},
}
@inproceedings{agishev2024endtoend,
    title={End-to-end Differentiable Model of Robot-terrain Interactions},
    author={Ruslan Agishev and Vladimír Kubelka and Martin Pecka and Tomáš Svoboda and Karel Zimmermann},
    booktitle={ICML 2024 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators},
    year={2024},
    url={https://openreview.net/forum?id=XuVysF8Aon}
}