
Learned HeyFusion

Welcome to the official GitHub repository for the open-source software and hardware associated with our article. This repository will host the code, hardware designs, and documentation needed to reproduce the results and experiments described in our paper.

About the Project

In this study, we introduce a novel shared-control system for keyhole docking operations, combining a commercial camera with occlusion-robust pose estimation and a hand-eye information fusion technique. This system is used to enhance docking precision and force-compliance safety. To train the hand-eye information fusion network, we generated a self-supervised dataset using this docking system. After training, our pose estimation method showed improved accuracy compared to traditional approaches, including observation-only estimation, hand-eye calibration, and conventional state-estimation filters. In real-world phantom experiments, our approach reduced position dispersion (1.23 ± 0.81 mm vs. 2.47 ± 1.22 mm) and force dispersion (0.78 ± 0.57 N vs. 1.15 ± 0.97 N) compared to the control group. These advancements enhance interaction and stability in semi-autonomous co-manipulation scenarios. The study presents an interference-resistant, stable, and precise solution with potential applications extending beyond laparoscopic surgery to other minimally invasive procedures.
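As a rough illustration of the fusion idea, the sketch below blends a camera-based pose estimate of the docking target with the pose predicted from robot kinematics and a nominal hand-eye transform, trusting the camera less when its observation is unreliable (e.g., under partial occlusion). This is a hedged example only, not the paper's learned network; `fuse_poses`, the `camera_confidence` weight, and the NumPy/SciPy dependency are assumptions made for this snippet.

```python
# Hedged sketch of pose-level hand-eye fusion (NOT the paper's trained network).
# Assumes numpy and scipy are available; fuse_poses and camera_confidence are
# illustrative names, not identifiers from the released code.
import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp


def fuse_poses(T_cam: np.ndarray, T_kin: np.ndarray, camera_confidence: float) -> np.ndarray:
    """Blend two 4x4 homogeneous poses of the same target.

    T_cam: target pose estimated from the camera (occlusion-prone).
    T_kin: target pose predicted from forward kinematics + hand-eye calibration.
    camera_confidence: weight in [0, 1]; 0 falls back to kinematics only.
    """
    w = float(np.clip(camera_confidence, 0.0, 1.0))

    # Translation: convex combination of the two position estimates.
    t = w * T_cam[:3, 3] + (1.0 - w) * T_kin[:3, 3]

    # Rotation: spherical interpolation from the kinematic to the camera orientation.
    rotations = R.from_matrix(np.stack([T_kin[:3, :3], T_cam[:3, :3]]))
    r = Slerp([0.0, 1.0], rotations)(w)

    T = np.eye(4)
    T[:3, :3] = r.as_matrix()
    T[:3, 3] = t
    return T


if __name__ == "__main__":
    T_kin = np.eye(4)                      # pose predicted from kinematics
    T_cam = np.eye(4)                      # pose observed by the camera
    T_cam[:3, 3] = [0.010, -0.005, 0.002]  # 1 cm offset seen by the camera
    print(fuse_poses(T_cam, T_kin, camera_confidence=0.7))
```

In the paper, the fusion is learned from a self-supervised dataset rather than fixed by a hand-tuned weight as above; the snippet only conveys the interface between the camera and kinematic estimates.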

Key Features

  • Software: Coming soon; it will be made available on this page.
  • Hardware: Coming soon; it will be made available on this page.

Paper Information

For a detailed explanation of the methods and results, please refer to our paper:

  • Title: Semi-Autonomous Laparoscopic Robot Docking with Learned Hand-Eye Information Fusion
  • Authors: Huanyu Tian, Martin Huber, Christopher E. Mower, Zhe Han, Changsheng Li, Xingguang Duan, and Christos Bergeles
  • Journal: Under review
  • arXiv Link: https://arxiv.org/abs/2405.05817

Video Demonstration

A video demonstration of the project is available at the following link:

Getting Started

Prerequisites

Before you begin, ensure the following requirements are installed (a quick environment check is sketched after this list):

  • Python 3
  • OpTaS
  • CasADi
  • OpenCV
  • ROS 2 Humble
  • LBR_FRI_LIB
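
The minimal sketch below checks for the listed prerequisites from Python. The module names `optas`, `casadi`, `cv2`, and `rclpy` are assumptions based on the list above and may differ from the packages required once the code is released.

```python
# Minimal prerequisite check (module names are assumptions based on the list above).
import importlib.util
import os

PYTHON_MODULES = ["optas", "casadi", "cv2", "rclpy"]  # OpTaS, CasADi, OpenCV, ROS 2 client

def main() -> None:
    for name in PYTHON_MODULES:
        found = importlib.util.find_spec(name) is not None
        print(f"{name:10s} {'OK' if found else 'MISSING'}")
    # ROS 2 Humble sets ROS_DISTRO once its setup script has been sourced.
    print(f"ROS_DISTRO {os.environ.get('ROS_DISTRO', 'MISSING (source /opt/ros/humble/setup.bash)')}")

if __name__ == "__main__":
    main()
```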
