Now, let's compute the connection between geographical coordinates and image pixels using real data!
We will use Sophus to represent poses and orientations. It lets us perform operations on SO(3) and SE(3) matrix Lie groups with Eigen. Take a look at so3.hpp and se3.hpp for documentation. Ask the instructors when you have questions about Sophus!
In camera_pose_lab.cpp, we are given the geodetic position of a light pole:
We will create a local Cartesian NED coordinate system around this position using LocalCoordinateSystem.
The helicopter (body) navigation data is given as a set of geodetic positions with the corresponding NED attitude.
The LocalCoordinateSystem class lets us convert navigation data to poses in the local coordinate system Fl.
Take a look at the documentation in local_coordinate_system.h.
With LocalCoordinateSystem, we are able to represent the poses of the helicopter Fb in a common Cartesian coordinate system Fl.
The camera pose measurements in Fc are represented as Cartesian positions with corresponding NED attitudes relative to the body frame Fb.
For each image, we are given these data in a DataElement.
Take a look at DataElement in dataset.h.
Then, get an overview of the data structures in attitude.h, geodetic_position.h and cartesian_position.h.
The first step is to convert the navigation data to the poses Tlb (pose of Fb relative to Fl). To accomplish this, we need to convert the orientation from roll-pitch-yaw Euler angles (θ1, θ2, θ3) to a proper rotation matrix:
You will need to finish the Attitude::toQuaternion member function in attitude.cpp, which is used by Attitude::toSO3 to return a Sophus::SO3d orientation.
- Convert the attitude in Euler angles (roll-pitch-yaw) to a quaternion. Use Eigen::AngleAxisd, which lets us create principal rotations, combine them and return quaternions.
- Add body axes to the 3D viewer in camera_pose_lab.cpp using Viewer3D::addBodyAxes. Use the image number provided by the data element.
- Compile and run. You should see the body axes for each image. The forward axis (is that x, y or z?) should point approximately towards the next pose. Check also that the other directions (right and down) look reasonable.
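As a concrete reference, here is a dependency-free sketch of the composition that Eigen::AngleAxisd performs, assuming the common Rz(yaw) * Ry(pitch) * Rx(roll) convention for NED attitudes. The names (Quat, principal, rpyToQuaternion) are illustrative, not from the lab code:

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion (w, x, y, z).
struct Quat { double w, x, y, z; };

// Hamilton product q1 * q2.
Quat multiply(const Quat& a, const Quat& b) {
  return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
          a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
          a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
          a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// Quaternion for a principal rotation of `angle` radians about a unit axis.
Quat principal(double angle, double ax, double ay, double az) {
  const double s = std::sin(angle / 2.0);
  return {std::cos(angle / 2.0), ax * s, ay * s, az * s};
}

// Roll-pitch-yaw to quaternion: q = q_z(yaw) * q_y(pitch) * q_x(roll).
Quat rpyToQuaternion(double roll, double pitch, double yaw) {
  return multiply(principal(yaw, 0, 0, 1),
                  multiply(principal(pitch, 0, 1, 0),
                           principal(roll, 1, 0, 0)));
}
```

With Eigen this collapses to a single expression, since the product of AngleAxisd objects yields a quaternion: Eigen::Quaterniond q = Eigen::AngleAxisd(yaw, Eigen::Vector3d::UnitZ()) * Eigen::AngleAxisd(pitch, Eigen::Vector3d::UnitY()) * Eigen::AngleAxisd(roll, Eigen::Vector3d::UnitX());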
The next step is to convert the camera pose measurements to poses Tlc (pose of Fc relative to Fl).
You will need to finish CartesianPosition::toVector to return a translation vector between Fb and Fc.
You can then use this translation and the corresponding SO(3) orientation from the camera's attitude to compute the SE(3) pose Tbc.
This will let you compute and visualize the pose Tlc.
- Convert the Cartesian position to a translation vector between Fb and Fc by finishing CartesianPosition::toVector in cartesian_position.cpp.
- Construct the pose Tbc in the body coordinate system in camera_pose_lab.cpp.
- Compute the pose Tlc in the local coordinate system in camera_pose_lab.cpp.
- Add the camera axes to the 3D viewer in camera_pose_lab.cpp using Viewer3D::addCameraAxes.
- Compile and run. You should see the smaller camera axes together with the body axes for each image. The optical axis (is that x, y or z?) should point approximately towards the origin of the local coordinate system, shown by the big axes in the center of the grid plane. Does the result look reasonable?
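With Sophus, composing poses is simply the group product on Sophus::SE3d objects: Tlc = Tlb * Tbc. As a dependency-free sketch of what that product does on the underlying 4x4 homogeneous transformation matrices (illustrative code, not the lab API):

```cpp
#include <array>
#include <cassert>

// A 4x4 homogeneous transformation matrix, stored row-major.
using Mat4 = std::array<std::array<double, 4>, 4>;

// Pose composition as a plain matrix product: T_a_c = T_a_b * T_b_c.
Mat4 compose(const Mat4& T_a_b, const Mat4& T_b_c) {
  Mat4 T{};  // value-initialized to all zeros
  for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
      for (int k = 0; k < 4; ++k)
        T[i][j] += T_a_b[i][k] * T_b_c[k][j];
  return T;
}

// Identity rotation with a given translation, for building test poses.
Mat4 translation(double x, double y, double z) {
  return {{{1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}}};
}
```

The point of Sophus is that SE3d guarantees the result stays a valid rigid-body transform; the raw matrix product above is only the mechanical picture of what the group operation computes.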
Before we can project world points into the images, we need to undistort them. Why?
The calibration parameters have been read into Intrinsics objects.
To use them, we need to finish Intrinsics::toCalibrationMatrix to return the camera calibration matrix K and Intrinsics::toDistortionVector to return an OpenCV distortion coefficient vector.
We can then undistort the image using PerspectiveCameraModel.
Take a look at the documentation in perspective_camera_model.h.
- Finish Intrinsics::toCalibrationMatrix in intrinsics.cpp.
- Finish Intrinsics::toDistortionVector in intrinsics.cpp. You should return a vector of the form [k1, k2, 0, 0, k3], where the ki are the different radial distortion parameters.
- Use the camera model to undistort the image in camera_pose_lab.cpp.
- Compile and run. You will probably not notice any significant changes to the images, but if they still look reasonable, we are ready to continue!
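As an illustration of what these two functions return, here is a sketch using hypothetical field names (check intrinsics.h for the actual members of Intrinsics):

```cpp
#include <array>
#include <cassert>

// Pinhole intrinsics; the field names here are illustrative only.
struct IntrinsicsSketch {
  double fu, fv;      // focal lengths in pixels
  double s;           // skew
  double cu, cv;      // principal point in pixels
  double k1, k2, k3;  // radial distortion coefficients
};

// The 3x3 calibration matrix K, stored row-major:
//   [fu  s  cu]
//   [ 0 fv  cv]
//   [ 0  0   1]
std::array<double, 9> toCalibrationMatrix(const IntrinsicsSketch& in) {
  return {in.fu, in.s,  in.cu,
          0.0,   in.fv, in.cv,
          0.0,   0.0,   1.0};
}

// OpenCV orders its distortion coefficients [k1, k2, p1, p2, k3];
// with no tangential distortion, p1 = p2 = 0.
std::array<double, 5> toDistortionVector(const IntrinsicsSketch& in) {
  return {in.k1, in.k2, 0.0, 0.0, in.k3};
}
```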
We are now ready to project the position of the light pole into each image. Remember that the light pole is at the origin in the local coordinate system Fl.
First, we need to compute the camera projection matrix in PerspectiveCameraModel::computeCameraProjectionMatrix.
Make certain that you are using the correct pose when you extract R and t.
You might be interested in the se3::matrix3x4 member function.
We can then use the camera projection matrix to implement the perspective camera projection function in PerspectiveCameraModel::projectWorldPoint.
Remember to use the homogeneous representations in Eigen.
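The two steps above can be sketched without any library dependencies (in the lab you would use Eigen matrix types and the 3x4 pose matrix instead). Note that the R and t used here must describe the pose of the local frame relative to the camera, i.e. they come from the inverse of Tlc:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<double, 2>;
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;  // row-major 3x3
using Mat34 = std::array<std::array<double, 4>, 3>; // row-major 3x4

// Camera projection matrix P = K * [R | t].
Mat34 computeProjectionMatrix(const Mat3& K, const Mat3& R, const Vec3& t) {
  Mat34 P{};  // value-initialized to all zeros
  for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k)
        P[i][j] += K[i][k] * R[k][j];          // K * R part
    for (int k = 0; k < 3; ++k)
      P[i][3] += K[i][k] * t[k];               // K * t part
  }
  return P;
}

// Project a world point: u_h = P * [x; 1], then dehomogenize by u_h[2].
Vec2 projectWorldPoint(const Mat34& P, const Vec3& x) {
  std::array<double, 3> u{};
  for (int i = 0; i < 3; ++i)
    u[i] = P[i][0] * x[0] + P[i][1] * x[1] + P[i][2] * x[2] + P[i][3];
  return {u[0] / u[2], u[1] / u[2]};
}
```

In the lab code the same steps become Eigen one-liners; this sketch only spells out the arithmetic behind them.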
- Finish PerspectiveCameraModel::computeCameraProjectionMatrix in perspective_camera_model.cpp.
- Finish PerspectiveCameraModel::projectWorldPoint in perspective_camera_model.cpp.
- Add the camera frustum to the 3D viewer using Viewer3D::addCameraFrustum in camera_pose_lab.cpp.
- Compile and run. Do the camera frustums look reasonable? Is the light pole projected correctly into the images?
Congratulations! You have now established the connection between geographical coordinates and image pixels using the power of imaging geometry! Cool, right?
- Finish PerspectiveCameraModel::projectWorldPoints and project other geographical points into the images (see www.norgeskart.no).
- Create a virtual camera and project points into this camera.