Showing 61 changed files with 2,594 additions and 1 deletion.
@@ -0,0 +1 @@
*.m~
@@ -1 +1,178 @@
# Optimal Shape Generation

## Overview
This is a package for generating the optimal shape described in the paper: **Optimal Target Shape for LiDAR Pose Estimation** ([PDF](./OptimalShape_WOSolver_RAL_Longer.pdf)) ([arXiv](https://arxiv.org/abs/2109.01181)). This work has been submitted to IEEE Robotics and Automation Letters.

Both the simulation and the experimental results (verified by a motion capture system) confirm that by using the optimal shape and the global solver, we achieve **centimeter error in translation and a few degrees in rotation even when a partially illuminated target is placed 30 meters away.**

* Authors: Jiunn-Kai (Bruce) Huang, William Clark, and Jessy W. Grizzle
* Maintainer: [Bruce JK Huang](https://www.BrucebotStudio.com/), brucejkh[at]gmail.com
* Affiliation: [The Biped Lab](https://www.biped.solutions/), the University of Michigan

This package has been tested under MATLAB R2020a and Ubuntu 18.04.\
**[Note]** A more detailed introduction will be added shortly. Sorry for the inconvenience!\
**[Issues]** If you encounter _any_ issues, I would be happy to help. If you cannot find a related one in the existing issues, please open a new one. I will try my best to help!

Table of Contents
=================

* [Optimal Shape Generation](#optimal-shape-generation)
   * [Overview](#overview)
   * [Table of Contents](#table-of-contents)
   * [Abstract](#abstract)
   * [Video](#video)
   * [Quick View](#quick-view)
   * [Why Optimal Shape?](#why-optimal-shape)
   * [How is the Optimal Shape Generated?](#how-is-the-optimal-shape-generated)
   * [Pose Definition and Estimation](#pose-definition-and-estimation)
   * [Algorithm Evaluations](#algorithm-evaluations)
      * [Simulation Results](#simulation-results)
         * [Qualitative Evaluations](#qualitative-evaluations)
         * [Quantitative Evaluations](#quantitative-evaluations)
      * [Experimental Results](#experimental-results)
         * [Qualitative Evaluations](#qualitative-evaluations-1)
         * [Quantitative Evaluations](#quantitative-evaluations-1)
   * [Required Toolboxes / Packages](#required-toolboxes--packages)
   * [Paper Results for Target Shape Generation](#paper-results-for-target-shape-generation)
   * [Make your Own Optimal Target Shape](#make-your-own-optimal-target-shape)
   * [Paper Results for Pose Estimation](#paper-results-for-pose-estimation)
   * [Estimate the Target's Pose](#estimate-the-targets-pose)
   * [Citations](#citations)

## Abstract
Targets are essential in problems such as object tracking in cluttered or textureless environments, camera (and multi-sensor) calibration tasks, and simultaneous localization and mapping (SLAM). Target shapes for these tasks typically are symmetric (square, rectangular, or circular) and work well for structured, dense sensor data such as pixel arrays (i.e., images). However, symmetric shapes lead to pose ambiguity when using sparse sensor data such as LiDAR point clouds and suffer from the quantization uncertainty of the LiDAR. This paper introduces the concept of optimizing target shape to remove pose ambiguity for LiDAR point clouds. A target is designed to induce large gradients at edge points under rotation and translation relative to the LiDAR to ameliorate the quantization uncertainty associated with point cloud sparseness. Moreover, given a target shape, we present a means that leverages the target's geometry to estimate the target's vertices while globally estimating the pose. Both the simulation and the experimental results (verified by a motion capture system) confirm that by using the optimal shape and the global solver, we achieve centimeter error in translation and a few degrees in rotation even when a partially illuminated target is placed 30 meters away.

## Video
Please check out the introduction [video](https://www.brucebotstudio.com/optimal-shape). It highlights some key points in the paper!
[<img src="./graphics/VideoImg.png" width="960">](https://www.brucebotstudio.com/optimal-shape)

## Quick View
Both the simulation and the experimental results (verified by a motion capture system) confirm that by using the optimal shape and the global solver, we achieve centimeter error in translation and a few degrees in rotation even when a partially illuminated target is placed 30 meters away.
<img src="./graphics/FRBAtrium.png" width="960">

## Why Optimal Shape?
Symmetric shapes lead to pose ambiguity when using sparse sensor data such as LiDAR point clouds and suffer from the quantization uncertainty of the LiDAR. This paper introduces the concept of optimizing target shape to remove pose ambiguity for LiDAR point clouds. A target is designed to induce large gradients at edge points under rotation and translation relative to the LiDAR to ameliorate the quantization uncertainty associated with point cloud sparseness.

This target can be used in tandem with fiducial markers such as LiDARTag ([paper](https://ieeexplore.ieee.org/abstract/document/9392337), [GitHub](https://github.com/UMich-BipedLab/LiDARTag)) and with LiDAR-camera extrinsic calibration ([paper](https://ieeexplore.ieee.org/document/9145571), [GitHub](https://github.com/UMich-BipedLab/extrinsic_lidar_camera_calibration)).

The following shows the resulting optimal shape in arbitrary units.
<img src="./graphics/OptimalShape.png" width="960">

## How is the Optimal Shape Generated?
Projective transformations applied to a nominal quadrilateral generate candidate convex quadrilaterals (targets). Edge points are the intersections of the LiDAR rings with the target boundaries. The objective is to maximize the gradient of the edge points under actions of SE(2) applied to the target. To enhance robustness, the gradients are computed for n discrete rotations of the quadrilateral under partial illumination, and the score is the worst case.

<img src="./graphics/Optimization.png" width="960">
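The worst-case scoring idea can be sketched in a few lines of MATLAB. The following toy version is self-contained and purely illustrative: the candidate vertices, ring heights, and the edge-spread score are stand-ins for the paper's gradient-based objective, not this package's actual API.

```matlab
% Toy sketch of worst-case scoring: rotate a candidate quadrilateral,
% intersect its edges with horizontal "ring" lines, and keep the worst
% rotation's score. The edge-spread measure below is a simplified
% stand-in for the paper's gradient-based objective.
vertices = [0 1 1.2 -0.3; 0 0.2 1 0.9];   % candidate quadrilateral (2x4, counterclockwise)
rings = linspace(0.1, 0.9, 5);            % heights of the LiDAR rings
n_rot = 8;                                % number of discrete rotations tested
scores = inf(1, n_rot);
for k = 1:n_rot
    th = 2*pi*(k-1)/n_rot;
    R = [cos(th) -sin(th); sin(th) cos(th)];
    v = R * (vertices - mean(vertices, 2));            % rotate about the centroid
    xs = [];
    for i = 1:4                                        % walk the four edges
        p1 = v(:, i); p2 = v(:, mod(i, 4) + 1);
        for r = rings
            if (p1(2) - r) * (p2(2) - r) < 0           % edge crosses this ring
                t = (r - p1(2)) / (p2(2) - p1(2));
                xs(end+1) = p1(1) + t * (p2(1) - p1(1)); %#ok<AGROW>
            end
        end
    end
    if ~isempty(xs)
        scores(k) = max(xs) - min(xs);                 % crude proxy for edge-point sensitivity
    end
end
score = min(scores);   % worst case over rotations is the candidate's score
```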

## Pose Definition and Estimation
A coordinate frame for the template (target shown in black) is defined by aligning the plane of the template with the y-z plane of the LiDAR frame and aligning the mean of its vertices with the origin of the LiDAR.

Let <img src="https://latex.codecogs.com/svg.image?H_T^L" title="H_T^L" /> (blue arrow) be an estimate of the rigid-body transformation from target to LiDAR, which projects the edge points of the target back to the template. The estimated pose of the target is then given by the inverse transformation, <img src="https://latex.codecogs.com/svg.image?H_L^T" title="H_L^T" /> (green arrow). The optimal <img src="https://latex.codecogs.com/svg.image?H_T^L" title="H_T^L"/> is obtained by minimizing (18), which is based on point-to-line distance. This figure also shows the fitting result for a target at 2 meters in the Ford Robotics Building; the red frame is the template re-projected onto the LiDAR point cloud by <img src="https://latex.codecogs.com/svg.image?H_L^T" title="H_L^T" />.

<img src="./graphics/Pose.png" width="960">
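As a concrete illustration of the point-to-line residual behind (18), the following self-contained MATLAB sketch computes the distance from a few toy edge points to one template boundary line. The paper's solver searches globally over H_T^L and sums such squared residuals over all edges; this only shows the per-edge residual.

```matlab
% Minimal sketch of the point-to-line residual behind (18): the distance
% from each projected edge point to the template boundary line it should
% lie on. The edge endpoints and points below are toy values.
a = [0; 0]; b = [1; 0];              % one template edge (segment a-b)
pts = [0.2 0.5 0.9;                  % toy edge points (2xN), already
       0.1 -0.05 0.2];               % projected into the template frame
d = (b - a) / norm(b - a);           % unit direction of the edge
n = [-d(2); d(1)];                   % unit normal to the edge
residuals = n' * (pts - a);          % signed point-to-line distances (1xN)
cost = sum(residuals.^2);            % summed squared distance for this edge
```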

## Algorithm Evaluations
We evaluate the proposed algorithm in both simulation and experiments, providing both quantitative and qualitative results. We do not compare against standard targets, such as unpatterned rectangles, diamonds, or circles, because their symmetry properties result in pose ambiguity; moreover, at large distances a pattern would not be discernible.

### Simulation Results
Before carrying out experiments with the new target shape, we used a MATLAB-based LiDAR simulator ([GitHub](https://github.com/UMich-BipedLab/lidar_simulator)) to extensively evaluate the pose and vertex estimation of the optimal shape.

#### Qualitative Evaluations
Simulation results on the noise-free dataset: pose estimation at various distances (10, 20, 30, 40 m). LiDAR returns (blue dots) on the target are provided by the LiDAR simulator ([GitHub](https://github.com/UMich-BipedLab/lidar_simulator)). Black indicates the ground truth pose from the simulator, and red is the estimated pose and vertices. The top and bottom rows show the front view and a side view of the fitting results, respectively.
<img src="./graphics/SimQual.png" width="1280">

#### Quantitative Evaluations
* **Subset of Quantitative Results of the Pose and Vertex Estimation Using the Noise-Free Dataset:** Pose and vertex accuracy of the simulation results at various distances.
<img src="./graphics/SimTable.png" width="960">

* **Complete Quantitative Results over Distances and Noise Levels:** Simulation results with a target placed at distances from 2 to 40 meters, in 2 m increments, in the LiDAR simulator ([GitHub](https://github.com/UMich-BipedLab/lidar_simulator)). At each distance, data are collected with the target face-on to the LiDAR as an easy case (solid line), and with the target rotated by the Euler angles (roll = 20 deg, pitch = 30 deg, yaw = 30 deg) as a challenging case (dashed line). In addition, we add two different noise levels to each dataset, as indicated by the different colors.
<img src="./graphics/SimQuan2mIncrements.png" width="1280">

### Experimental Results
We now present experimental evaluations of the pose and vertex estimation of the optimal shape. All experiments are conducted with a 32-beam Velodyne ULTRA Puck LiDAR and an Intel RealSense camera rigidly attached to the torso of a Cassie-series bipedal robot. The Qualisys motion capture system in M-Air ([website](https://robotics.umich.edu/about/mair/)) is used as a proxy for ground truth poses and vertices. The setup consists of 33 motion capture cameras, with passive markers attached to the target, the LiDAR, and the camera.
<img src="./graphics/ExpSetup.png" width="1280">

#### Qualitative Evaluations
For distances beyond 16 meters (the distance limit in M-Air), we present qualitative results from the atrium to support the simulation-based analysis.

* **No Missing Points:** Fitting results of the optimal shape at various distances (20, 24, 28, 32 meters) in the atrium of the Ford Robotics Building at the University of Michigan. The blue dots are the LiDAR returns on the target and the red frames are the fitting results. The top and bottom rows show the front view and a side view of the results, respectively.
<img src="./graphics/ExpQual.png" width="1280">

* **Green Points are Considered Missing:**
Fitting results of the partially illuminated target at various distances (4, 10, 22, 30 meters) in the atrium of the Ford Robotics Building at the University of Michigan. The selected distances differ from [No Missing Points](#no-missing-points) to show more results. The red frames are the fitting results. The blue dots are the LiDAR returns on the targets, while the green dots are considered missing. The top and bottom rows show the front view and a side view of the fitting results, respectively.
<img src="./graphics/ExpQualMissing.png" width="1280">

#### Quantitative Evaluations
Pose and vertex accuracy of the experimental results. The ground truth is provided by a motion capture system with 33 cameras. As in the simulation, the optimal-shape target is placed at distances from 2 to 16 meters (the maximum possible in M-Air) in 2 m increments. At each distance, one dataset is collected with the target face-on to the LiDAR, and another with the target rotated roughly by the Euler angles (roll = 20 deg, pitch = 30 deg, yaw = 30 deg) as a challenging case.
<img src="./graphics/ExpQuanTable.png" width="960">

## Required Toolboxes / Packages
* Required MATLAB toolboxes:
  * Optimization Toolbox
  * Phased Array System Toolbox
* Required packages:
  * [matlab_utils](https://github.com/UMich-BipedLab/matlab_utils)
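
Before running the scripts, a quick check like the following (toolbox names as reported by `ver` on a standard MATLAB install) can confirm the required toolboxes are available:

```matlab
% Check that the required toolboxes are installed.
v = ver;
installed = {v.Name};
required = {'Optimization Toolbox', 'Phased Array System Toolbox'};
missing = required(~ismember(required, installed));
if isempty(missing)
    disp('All required toolboxes found.');
else
    warning('Missing toolboxes: %s', strjoin(missing, ', '));
end
```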

## Paper Results for Target Shape Generation
* To generate Fig. 3: double-click ``projective_vs_convexity2.m`` and run!
* To generate Fig. 4 and Fig. 7: double-click ``draw_optimal_shape.m`` and run!
* To re-optimize the optimal shape: double-click ``gen_optimal_shape.m`` and run!
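
Equivalently, the scripts can be run from the MATLAB command window. The paths below are assumptions about where you cloned the repositories, so adjust them to your setup:

```matlab
% Hypothetical session; adjust the paths to your clone locations.
addpath(genpath('~/git/matlab_utils'));   % required package
cd('~/git/optimal_shape_generation');     % this repository
run('projective_vs_convexity2.m');        % Fig. 3
run('draw_optimal_shape.m');              % Fig. 4 and Fig. 7
run('gen_optimal_shape.m');               % re-run the shape optimization
```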

## Make your Own Optimal Target Shape
If you want to build your own target for experiments, you will probably need a template with the shapes at 1:1 scale for cutting. I have provided several PDFs in [target_pdfs](./target_pdfs) that I used to get a quote from a vendor.

###### NOTE:
If you want to regenerate the files, double-click ``draw_selected_shape.m`` and run!

## Paper Results for Pose Estimation
To reproduce the pose estimation figures, use the pose estimation package referenced in the next section.

## Estimate the Target's Pose
If you want not only to generate/build the optimal target but also to estimate the target's pose, please refer to the companion repository: [global_pose_estimation_for_optimal_shape](https://github.com/UMich-BipedLab/global_pose_estimation_for_optimal_shape).

## Citations
The details are described in: **Optimal Target Shape for LiDAR Pose Estimation,** Jiunn-Kai Huang, William Clark, and Jessy W. Grizzle ([PDF](./OptimalShape_WOSolver_RAL_Longer.pdf)) ([arXiv](https://arxiv.org/abs/2109.01181)).

```
@article{huang2021optimal,
    title={Optimal Target Shape for LiDAR Pose Estimation},
    author={Jiunn-Kai Huang and William Clark and Jessy W. Grizzle},
    year={2021},
    journal={arXiv preprint arXiv:2109.01181}
}
```
@@ -0,0 +1,19 @@
function [intersection_t, horizontal_lines, num_ring, sorted_vertices_h] = compute2DIntersectionPoints(opts, dis_vertices_mat_h)
    colinear = checkColinear(dis_vertices_mat_h);
    if colinear
        % Degenerate (collinear) vertices: return empty results.
        intersection_t = struct('line', [], 'intersection_point', [], 'vertices', [], 'status', []);
        horizontal_lines = [];
        num_ring = 4; % any positive number; we use the default value
        sorted_vertices_h = [];
    else
        % Should be convex already; convhull is used only to sort the
        % vertices counterclockwise (it also returns the enclosed area).
        [indices, dis_area] = convhull(dis_vertices_mat_h(1,:), dis_vertices_mat_h(2,:));

        % Scale the quadrilateral so its area is one.
        cor_dis_vertices_mat_2d_h = sqrt(1/dis_area) * dis_vertices_mat_h(1:2, :);
        sorted_vertices_h = [cor_dis_vertices_mat_2d_h(:, indices(1:end-1)); ones(1, length(indices)-1)];
        [horizontal_lines, num_ring] = getHorizontalLines(opts, sorted_vertices_h);
        intersection_t = compute2DIntersectionPointsGivenLines(sorted_vertices_h, horizontal_lines);
    end
end
@@ -0,0 +1,29 @@
function [intersection_point, status] = compute2DIntersectionPointsGivenHorizontalLines(p1, p2, line)
    % status = 0: the edge p1-p2 is horizontal (no unique intersection)
    % status = 1: intersection point is outside the segment
    % status = 2: intersection point coincides with a vertex
    % status = 3: intersection point is strictly inside the segment

    status = 0;
    y_offset = p2(2) - p1(2);
    intersection_point = [];

    % The edge p1-p2 is not horizontal.
    if abs(y_offset) > 1e-5
        x = (p2(1) - p1(1))/(p2(2) - p1(2)) * (line - p1(2)) + p1(1);
        min_x = min(p1(1), p2(1));
        max_x = max(p1(1), p2(1));
        threshold = 1e-6;
        if x > min_x && x < max_x
            if abs(x - min_x) < threshold || abs(x - max_x) < threshold
                % On p1 or p2.
                status = 2;
            else
                status = 3;
            end
        else
            status = 1;
        end
        intersection_point = [x; line];
    end
end
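
A small usage sketch for the status codes above; the edge endpoints and ring height are toy values:

```matlab
% Toy example: one edge from (0,0) to (1,1) against the ring y = 0.5.
p1 = [0; 0]; p2 = [1; 1];
[pt, status] = compute2DIntersectionPointsGivenHorizontalLines(p1, p2, 0.5);
switch status
    case 0, disp('edge is horizontal; no unique intersection');
    case 1, disp('intersection lies outside the segment');
    case 2, disp('intersection coincides with a vertex');
    case 3, fprintf('interior intersection at (%.2f, %.2f)\n', pt(1), pt(2));
end
```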
@@ -0,0 +1,36 @@
function intersection_t = compute2DIntersectionPointsGivenLines(sorted_vertices_h, horizontal_lines)
    num_line = length(horizontal_lines); % number of horizontal lines (rings)
    num_vertices = size(sorted_vertices_h, 2);

    % Preallocate: a convex target yields at most num_line*2 edge points.
    intersection_t(num_line*2) = struct('line', [], 'intersection_point', [], 'vertices', [], 'status', []);

    count = 1;
    for i = 1:num_vertices
        p1 = sorted_vertices_h(:, i);                 % current vertex
        j = max(mod(i+1, num_vertices+1), 1);         % next vertex (wraps back to 1)
        p2 = sorted_vertices_h(:, j);
        for line = 1:num_line
            [intersection_point, status] = compute2DIntersectionPointsGivenHorizontalLines(p1, p2, horizontal_lines(line));
            if status == 0
                warning("horizontal line")
            elseif status == 1
                % Intersection point is outside the segment; skip it.
            elseif status == 2
                warning("intersection point overlaps with a vertex")
            elseif status == 3
                intersection_t(count).line = horizontal_lines(line);
                intersection_t(count).intersection_point = intersection_point;
                intersection_t(count).vertices = [p1, p2];
                intersection_t(count).status = status;
                count = count + 1;
            end
        end
    end
end
@@ -0,0 +1,51 @@
function shape = computeHeightWidthAnglesLengthOfQuadrilateral(vertices_unsorted, length_conversion)
    % Compute the height, width, side lengths (d1-d4), and interior angles
    % (ang1-ang4) of a quadrilateral; length_conversion scales the
    % dimensionless coordinates to physical units.
    shape.unsorted_mat = vertices_unsorted;
    vertices = sortVertices([], vertices_unsorted);
    shape.sorted_vertices = vertices;
    shape.height = (max(vertices(3,:)) - min(vertices(3,:))) * length_conversion;
    shape.width = (max(vertices(2,:)) - min(vertices(2,:))) * length_conversion;

    % Side 1-2 and the interior angle at vertex 1.
    cur_points = [1, 2];
    shape.d1 = norm(vertices(:, cur_points(1)) - vertices(:, cur_points(2))) * length_conversion;
    shape.d1_points = cur_points;
    shape.d1_mean = mean(vertices(:, cur_points), 2);

    cur_angle = [1, 2, 4];
    v1 = vertices(2:3, cur_angle(2)) - vertices(2:3, cur_angle(1));
    v2 = vertices(2:3, cur_angle(3)) - vertices(2:3, cur_angle(1));
    shape.ang1 = acosd(dot(v1, v2)/(norm(v1)*norm(v2)));
    shape.ang1_angles = cur_angle;

    % Side 2-3 and the interior angle at vertex 2.
    cur_points = [2, 3];
    shape.d2 = norm(vertices(:, cur_points(1)) - vertices(:, cur_points(2))) * length_conversion;
    shape.d2_points = cur_points;
    shape.d2_mean = mean(vertices(:, cur_points), 2);
    cur_angle = [2, 1, 3];
    v1 = vertices(2:3, cur_angle(2)) - vertices(2:3, cur_angle(1));
    v2 = vertices(2:3, cur_angle(3)) - vertices(2:3, cur_angle(1));
    shape.ang2 = acosd(dot(v1, v2)/(norm(v1)*norm(v2)));
    shape.ang2_angles = cur_angle;

    % Side 3-4 and the interior angle at vertex 3.
    cur_points = [3, 4];
    shape.d3 = norm(vertices(:, cur_points(1)) - vertices(:, cur_points(2))) * length_conversion;
    shape.d3_points = cur_points;
    shape.d3_mean = mean(vertices(:, cur_points), 2);
    cur_angle = [3, 2, 4];
    v1 = vertices(2:3, cur_angle(2)) - vertices(2:3, cur_angle(1));
    v2 = vertices(2:3, cur_angle(3)) - vertices(2:3, cur_angle(1));
    shape.ang3 = acosd(dot(v1, v2)/(norm(v1)*norm(v2)));
    shape.ang3_angles = cur_angle;

    % Side 4-1 and the interior angle at vertex 4.
    cur_points = [4, 1];
    shape.d4 = norm(vertices(:, cur_points(1)) - vertices(:, cur_points(2))) * length_conversion;
    shape.d4_points = cur_points;
    shape.d4_mean = mean(vertices(:, cur_points), 2);
    cur_angle = [4, 3, 1];
    v1 = vertices(2:3, cur_angle(2)) - vertices(2:3, cur_angle(1));
    v2 = vertices(2:3, cur_angle(3)) - vertices(2:3, cur_angle(1));
    shape.ang4 = acosd(dot(v1, v2)/(norm(v1)*norm(v2)));
    shape.ang4_angles = cur_angle;
end
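
A usage sketch, assuming the package's `sortVertices` is on the path; the unit square below is a toy input (columns are [x; y; z] vertices, consistent with the target lying in the y-z plane) and a conversion factor of 1 keeps arbitrary units:

```matlab
% Toy quadrilateral: a unit square in the y-z plane, columns are [x; y; z].
verts = [0 0 0 0;     % x
         0 1 1 0;     % y
         0 0 1 1];    % z
shape = computeHeightWidthAnglesLengthOfQuadrilateral(verts, 1);
fprintf('height = %.2f, width = %.2f, d1 = %.2f, ang1 = %.1f deg\n', ...
        shape.height, shape.width, shape.d1, shape.ang1);
```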
@@ -0,0 +1,15 @@
function ratio = computeHeightWidthRatio(p1, p2, p3, p4)
    % Height/width ratio of the quadrilateral's axis-aligned bounding box.
    max_height = max([p1(2), p2(2), p3(2), p4(2)]);
    min_height = min([p1(2), p2(2), p3(2), p4(2)]);
    delta_height = abs(max_height - min_height);

    max_width = max([p1(1), p2(1), p3(1), p4(1)]);
    min_width = min([p1(1), p2(1), p3(1), p4(1)]);
    delta_width = abs(max_width - min_width);
    if delta_width ~= 0
        ratio = delta_height/delta_width;
    else
        ratio = 0; % degenerate (zero-width) quadrilateral
    end
end
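
For example, a quadrilateral twice as tall as it is wide (points given as [x; y] columns):

```matlab
% 1x2 bounding box: the returned ratio is 2.
ratio = computeHeightWidthRatio([0; 0], [1; 0], [1; 2], [0; 2]);
```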
@@ -0,0 +1,3 @@
function cost_puffibility = computePuffibility(p1, p2)
    % Squared distance between two points, used as the "puffibility" cost.
    cost_puffibility = norm(p1 - p2)^2;
end