
Sec. 4: Creating a light sensor plugin


Creating and adding a Custom ROS light sensor plugin

Introduction

Our goal is to mount a custom sensor on top of our robot and publish the sensor readings on a rostopic. To do that, we first create a ROS plugin for our camera-based light sensor.

In this section, we cover how to create a custom light sensor and its plugin to publish sensor readings on a ROS topic. The idea is that the sensor can then be mounted on the previously built mobile robot to monitor the environment (world) created in Gazebo.

Task

We build our simple light detector sensor based on a camera. The camera captures an image, we calculate the illuminance from that image, and the illuminance value is then published on a ROS topic.

The tutorial is divided into two sections.

  1. Create the light sensor plugin
  2. Test that it works

The next section creates the light sensor model and mounts it on our previously built robot.

Create the light sensor plugin

  1. Create a ROS package for the plugin in your catkin workspace so that the plugin can be compiled:

    > cd ~/catkin_ws/src
    > catkin_create_pkg gazebo_light_sensor_plugin gazebo_ros gazebo_plugins roscpp
    
  2. Since we are using a camera to capture the light, we create a plugin class that inherits from CameraPlugin. The code that follows was written using the original Gazebo ROS camera plugin as a guideline.

Navigate to your package and create a file called light_sensor_plugin.h in the include/gazebo_light_sensor_plugin/ directory. Paste the following code:

    #ifndef GAZEBO_ROS_LIGHT_SENSOR_HH
    #define GAZEBO_ROS_LIGHT_SENSOR_HH
    
    #include <string>
    
    // library for processing camera data for gazebo / ros conversions
    #include <gazebo/plugins/CameraPlugin.hh>
    
    #include <gazebo_plugins/gazebo_ros_camera_utils.h>
    
    namespace gazebo
    {
      class GazeboRosLight : public CameraPlugin, GazeboRosCameraUtils
      {
        /// \brief Constructor
        public: GazeboRosLight();
    
        /// \brief Destructor
        public: ~GazeboRosLight();
    
        /// \brief Load the plugin
        /// \param[in] _parent The parent sensor
        /// \param[in] _sdf The SDF root element of the plugin
        public: void Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf);
    
        /// \brief Update the controller
        protected: virtual void OnNewFrame(const unsigned char *_image,
        unsigned int _width, unsigned int _height,
        unsigned int _depth, const std::string &_format);
    
        /// Node handle used to create the ROS publisher
        ros::NodeHandle _nh;
        /// Publisher for the illuminance messages
        ros::Publisher _sensorPublisher;
    
        /// Side length (in pixels) of the image window used to compute the illuminance
        double _fov;
        /// Sensor range (currently unused)
        double _range;
      };
    }
    #endif

The above code includes a node handle to connect to the roscore. It also defines a publisher that will publish messages containing the illuminance value. Two parameters have been defined: fov (field of view) and range. At present only fov is used; it sets the side length, in pixels, of the square window around the center of the image that is taken into account when calculating the illuminance.
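
Note that the plugin as written hard-codes _fov and _range in its constructor (see the .cpp file in the next step). If you would rather set them from the model description, a minimal sketch of the extra lines that could be added inside Load() is shown below; the <fov> and <range> element names are an assumption for illustration, not part of the original plugin:

    // Hypothetical addition inside GazeboRosLight::Load(), after CameraPlugin::Load(_parent, _sdf):
    // read optional <fov> and <range> elements from the <plugin> block of the SDF.
    if (_sdf->HasElement("fov"))
      this->_fov = _sdf->Get<double>("fov");
    if (_sdf->HasElement("range"))
      this->_range = _sdf->Get<double>("range");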

  3. Create a file named light_sensor_plugin.cpp containing the following code in the src directory of your package:

    #include <gazebo/common/Plugin.hh>
    #include <ros/ros.h>
    #include "gazebo_light_sensor_plugin/light_sensor_plugin.h"
    
    #include "gazebo_plugins/gazebo_ros_camera.h"
    
    #include <string>
    
    #include <gazebo/sensors/Sensor.hh>
    #include <gazebo/sensors/CameraSensor.hh>
    #include <gazebo/sensors/SensorTypes.hh>
    
    #include <sensor_msgs/Illuminance.h>
    
    namespace gazebo
    {
      // Register this plugin with the simulator
      GZ_REGISTER_SENSOR_PLUGIN(GazeboRosLight)
    
      ////////////////////////////////////////////////////////////////////////////////
      // Constructor
      GazeboRosLight::GazeboRosLight():
      _nh("light_sensor_plugin"),
      _fov(6),
      _range(10)
      {
        _sensorPublisher = _nh.advertise<sensor_msgs::Illuminance>("lightSensor", 1);
      }
    
      ////////////////////////////////////////////////////////////////////////////////
      // Destructor
      GazeboRosLight::~GazeboRosLight()
      {
        ROS_DEBUG_STREAM_NAMED("camera","Unloaded");
      }
    
      void GazeboRosLight::Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf)
      {
        // Make sure the ROS node for Gazebo has already been initialized
        if (!ros::isInitialized())
        {
          ROS_FATAL_STREAM("A ROS node for Gazebo has not been initialized, unable to load plugin. "
            << "Load the Gazebo system plugin 'libgazebo_ros_api_plugin.so' in the gazebo_ros package)");
          return;
        }
    
        CameraPlugin::Load(_parent, _sdf);
        // copying from CameraPlugin into GazeboRosCameraUtils
        this->parentSensor_ = this->parentSensor;
        this->width_ = this->width;
        this->height_ = this->height;
        this->depth_ = this->depth;
        this->format_ = this->format;
        this->camera_ = this->camera;
    
        GazeboRosCameraUtils::Load(_parent, _sdf);
      }
    
      ////////////////////////////////////////////////////////////////////////////////
      // Update the controller
      void GazeboRosLight::OnNewFrame(const unsigned char *_image,
        unsigned int _width, unsigned int _height, unsigned int _depth,
        const std::string &_format)
      {
        static int seq=0;
    
        this->sensor_update_time_ = this->parentSensor_->GetLastUpdateTime();
    
        if (!this->parentSensor->IsActive())
        {
          if ((*this->image_connect_count_) > 0)
          // do this first so there's chance for sensor to run once after activated
            this->parentSensor->SetActive(true);
        }
        else
        {
          if ((*this->image_connect_count_) > 0)
          {
            common::Time cur_time = this->world_->GetSimTime();
            if (cur_time - this->last_update_time_ >= this->update_period_)
            {
              this->PutCameraData(_image);
              this->PublishCameraInfo();
              this->last_update_time_ = cur_time;
    
              sensor_msgs::Illuminance msg;
              msg.header.stamp = ros::Time::now();
              msg.header.frame_id = "";
              msg.header.seq = seq;
    
              // index of the top-left pixel of a (fov x fov) window centered on the image
              // (the buffer is indexed as one value per pixel)
              int startingPix = _width * ( (int)(_height/2) - (int)(_fov/2) ) + ( (int)(_width/2) - (int)(_fov/2) );
    
              double illum = 0;
              for (int i=0; i<_fov ; ++i)
              {
                int index = startingPix + i*_width;
                for (int j=0; j<_fov ; ++j)
                  illum += _image[index+j];
              }
    
              msg.illuminance = illum/(_fov*_fov);
              msg.variance = 0.0;
    
              _sensorPublisher.publish(msg);
    
              seq++;
            }
          }
        }
      }
    }
    

The above code calculates the illuminance in a very simple way: it sums the values of all the pixels inside the fov x fov window at the center of the image and divides by the total number of pixels, i.e. it takes a plain average.
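
For reference, the same arithmetic can be checked outside Gazebo. The short, self-contained program below is only an illustration (it is not part of the plugin or the package): it applies the identical window average to a synthetic grayscale buffer that stores one value per pixel.

    #include <iostream>
    #include <vector>
    
    // Average the values of a (fov x fov) window centered on a row-major
    // image buffer with one value per pixel.
    double windowAverage(const std::vector<unsigned char> &image,
                         int width, int height, int fov)
    {
      int start = width * (height / 2 - fov / 2) + (width / 2 - fov / 2);
      double sum = 0;
      for (int i = 0; i < fov; ++i)
      {
        int rowStart = start + i * width;
        for (int j = 0; j < fov; ++j)
          sum += image[rowStart + j];
      }
      return sum / (fov * fov);
    }
    
    int main()
    {
      // A 320x240 image filled with the value 128, so the average must be 128.
      std::vector<unsigned char> image(320 * 240, 128);
      std::cout << "illuminance: " << windowAverage(image, 320, 240, 6) << std::endl;
      return 0;
    }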

  4. Replace the contents of the automatically generated CMakeLists.txt with the code below:

    cmake_minimum_required(VERSION 2.8.3)
    project(gazebo_light_sensor_plugin)
    
    find_package(catkin REQUIRED COMPONENTS
      gazebo_plugins
      gazebo_ros
      roscpp
    )
    
    find_package (gazebo REQUIRED)
    
    catkin_package(
      INCLUDE_DIRS include
      CATKIN_DEPENDS gazebo_plugins gazebo_ros roscpp
    )
    
    ###########
    ## Build ##
    ###########
    
    set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
    
    link_directories(${GAZEBO_LIBRARY_DIRS})
    include_directories(include)
    include_directories( ${catkin_INCLUDE_DIRS} 
                         ${Boost_INCLUDE_DIR} 
                         ${GAZEBO_INCLUDE_DIRS}
    )
    
    add_library(${PROJECT_NAME} src/light_sensor_plugin.cpp)
    
    ## Specify libraries to link a library or executable target against
    target_link_libraries( ${PROJECT_NAME} ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} CameraPlugin )
    
  5. Update the package.xml file by adding the following line between its <export></export> tags:

    <gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" />

  6. Compile your plugin using the commands below. Compilation generates the library containing the plugin inside the devel/lib directory of your catkin workspace.

     > roscd
     > cd ..
     > catkin_make
    

Testing the Plugin before attaching it to the custom robot

  1. We first create a world file that includes the plugin. Create a worlds directory inside your plugin package and save the following code in a file named light.world. This world file simply loads a camera with the plugin attached.

    <?xml version="1.0" ?>
    <sdf version="1.4">
     <world name="default">
     <include>
       <uri>model://ground_plane</uri>
     </include>
    
     <include>
       <uri>model://sun</uri>
     </include>
    
     <!-- reference to your plugin -->
     <model name='camera'>
       <pose>0 -1 0.05 0 -0 0</pose>
       <link name='link'>
         <inertial>
           <mass>0.1</mass>
           <inertia>
            <ixx>1</ixx>
             <ixy>0</ixy>
             <ixz>0</ixz>
             <iyy>1</iyy>
             <iyz>0</iyz>
             <izz>1</izz>
           </inertia>
         </inertial>
         <collision name='collision'>
           <geometry>
             <box>
                <size>0.1 0.1 0.1</size>
             </box>
           </geometry>
           <max_contacts>10</max_contacts>
           <surface>
             <contact>
               <ode/>
             </contact>
             <bounce/>
             <friction>
               <ode/>
             </friction>
           </surface>
         </collision>
         <visual name='visual'>
           <geometry>
             <box>
               <size>0.1 0.1 0.1</size>
             </box>
           </geometry>
         </visual>
         <sensor name='camera' type='camera'>
           <camera name='__default__'>
             <horizontal_fov>1.047</horizontal_fov>
             <image>
               <width>320</width>
               <height>240</height>
             </image>
             <clip>
               <near>0.1</near>
               <far>100</far>
             </clip>
           </camera>
           <plugin name="gazebo_light_sensor_plugin" filename="libgazebo_light_sensor_plugin.so">
             <cameraName>camera</cameraName>
             <alwaysOn>true</alwaysOn>
             <updateRate>10</updateRate>
             <imageTopicName>rgb/image_raw</imageTopicName>
             <depthImageTopicName>depth/image_raw</depthImageTopicName>
             <pointCloudTopicName>depth/points</pointCloudTopicName>
             <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
             <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
             <frameName>camera_depth_optical_frame</frameName>
             <baseline>0.1</baseline>
             <distortion_k1>0.0</distortion_k1>
             <distortion_k2>0.0</distortion_k2>
             <distortion_k3>0.0</distortion_k3>
             <distortion_t1>0.0</distortion_t1>
             <distortion_t2>0.0</distortion_t2>
             <pointCloudCutoff>0.4</pointCloudCutoff>
             <robotNamespace>/</robotNamespace>
           </plugin>
         </sensor>
         <self_collide>0</self_collide>
         <kinematic>0</kinematic>
         <gravity>1</gravity>
       </link>
     </model>
     </world>
    </sdf>
    
  2. Next, we create a launch file for the plugin. A launch file is useful when you want to start several nodes at once without having to call rosrun for each one. In the package, create a directory named launch and create a main.launch file inside it with the following contents:

    <launch>
      <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
      <include file="$(find gazebo_ros)/launch/empty_world.launch">
        <arg name="verbose" value="true"/>
        <arg name="world_name" value="$(find gazebo_light_sensor_plugin)/worlds/light.world"/>
        <!-- more default parameters can be changed here -->
      </include>
    </launch>
    
  3. To run the light sensor plugin in Gazebo from ROS, start roscore and launch the world. Make sure the GAZEBO_PLUGIN_PATH environment variable includes the directory where the plugin library was built (typically ~/catkin_ws/devel/lib).

Now execute the following command:

    > roslaunch gazebo_light_sensor_plugin main.launch

Add a few objects to the world in front of the camera. You can see what the camera is observing by running the following command:

    > rosrun image_view image_view image:=/camera/rgb/image_raw

After running that command, a small window will appear on your screen showing what the camera is capturing.

Next we check the calculated illuminance by watching the published topic (/light_sensor_plugin/lightSensor). Type the following command to see the calculated illuminance value from the image:

    > rostopic echo /light_sensor_plugin/lightSensor

You should see the Illuminance messages being printed to the screen as they are published.
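
Besides rostopic echo, the readings can also be consumed from code. The following minimal subscriber node is only a sketch (it is not part of this package) that prints every illuminance value it receives, assuming the topic name used above:

    #include <ros/ros.h>
    #include <sensor_msgs/Illuminance.h>
    
    // Print each illuminance reading published by the light sensor plugin.
    void illuminanceCallback(const sensor_msgs::Illuminance::ConstPtr &msg)
    {
      ROS_INFO("seq %u: illuminance = %.2f", msg->header.seq, msg->illuminance);
    }
    
    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "light_sensor_listener");
      ros::NodeHandle nh;
      ros::Subscriber sub =
          nh.subscribe("/light_sensor_plugin/lightSensor", 10, illuminanceCallback);
      ros::spin();
      return 0;
    }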

Putting it all together

In this section we mount the light sensor on the custom mobile robot (SDF file) created previously, let the mobile robot move about, and watch the images being captured and the calculated illuminance values being published in real time.

  1. Create a model directory for the light sensor:

    mkdir -p ~/.gazebo/models/light_sensor
    
  2. Create a model config file similar to what we did for our robot:

    gedit ~/.gazebo/models/light_sensor/model.config
    
  3. Paste in the following contents:

    <?xml version="1.0"?>
    
     <model>
     <name>light_sensor</name>
      <version>1.0</version>
      <sdf version="1.4">model.sdf</sdf>
    
     <author>
      <name>My Name</name>
       <email>me@my.email</email>
     </author>
    
     <description>
       light sensor model based on camera
     </description>
     </model>
    
  4. Create a model.sdf file:

    gedit ~/.gazebo/models/light_sensor/model.sdf
    
  5. Paste in the following:

    <?xml version="1.0" ?>
    <sdf version="1.4">
    <model name='light_sensor'>
        <pose>0 -1 0.05 0 -0 0</pose>
        <link name='link'>
        <inertial>
          <mass>0.1</mass>
          <inertia>
            <ixx>1</ixx>
            <ixy>0</ixy>
            <ixz>0</ixz>
            <iyy>1</iyy>
            <iyz>0</iyz>
            <izz>1</izz>
          </inertia>
        </inertial>
       <collision name='collision'>
        <geometry>
         <box>
            <size>0.1 0.1 0.1</size>
         </box>
        </geometry>
       <max_contacts>10</max_contacts>
        <surface>
         <contact>
           <ode/>
         </contact>
         <bounce/>
         <friction>
           <ode/>
         </friction>
        </surface>
       </collision>
       <visual name='visual'>
        <geometry>
          <box>
           <size>0.1 0.1 0.1</size>
          </box>
        </geometry>
       </visual>
       <sensor name='camera' type='camera'>
       <camera name='__default__'>
         <horizontal_fov>1.047</horizontal_fov>
         <image>
           <width>320</width>
           <height>240</height>
         </image>
         <clip>
           <near>0.1</near>
           <far>100</far>
         </clip>
       </camera>
       <plugin name="gazebo_light_sensor_plugin" filename="libgazebo_light_sensor_plugin.so">
         <cameraName>camera</cameraName>
         <alwaysOn>true</alwaysOn>
         <updateRate>10</updateRate>
         <imageTopicName>rgb/image_raw</imageTopicName>
         <depthImageTopicName>depth/image_raw</depthImageTopicName>
         <pointCloudTopicName>depth/points</pointCloudTopicName>
         <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
         <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
         <frameName>camera_depth_optical_frame</frameName>
         <baseline>0.1</baseline>
         <distortion_k1>0.0</distortion_k1>
         <distortion_k2>0.0</distortion_k2>
         <distortion_k3>0.0</distortion_k3>
         <distortion_t1>0.0</distortion_t1>
         <distortion_t2>0.0</distortion_t2>
         <pointCloudCutoff>0.4</pointCloudCutoff>
         <robotNamespace>/</robotNamespace>
        </plugin>
      </sensor>
      <self_collide>0</self_collide>
      <kinematic>0</kinematic>
      <gravity>1</gravity>
      </link>
     </model>
    </sdf>
    

The above code is adapted from the light.world file we created in the previous section to test our light sensor. Instead of describing a whole world containing the rectangular box on an empty plane, here we define only the box-shaped sensor model itself.

Once the light_sensor model has been created, we place it on top of our custom robot model. Open the model.sdf file of the previously built custom robot 2wd_mr and add the <include> and sensor_joint blocks shown at the end of the listing below; the rest of the file is unchanged from before.

    <?xml version='1.0'?>
    <sdf version='1.4'>
      <model name="2wd_mr">

       <static>false</static>

        <link name='chassis'>
          <pose>0 0 .1 0 0 0</pose>
          <collision name='collision'>
            <geometry>
              <box>
                <size>.4 .2 .1</size>
              </box>
            </geometry>
          </collision>

          <visual name='visual'>
            <geometry>
              <box>
                <size>.4 .2 .1</size>
              </box>
            </geometry>
          </visual>

          <collision name='caster_collision'>
            <pose>-0.15 0 -0.05 0 0 0</pose>
            <geometry>
              <sphere>
              <radius>.05</radius>
            </sphere>
          </geometry>

          <surface>
            <friction>
              <ode>
                <mu>0</mu>
                <mu2>0</mu2>
                <slip1>1.0</slip1>
                <slip2>1.0</slip2>
              </ode>
            </friction>
          </surface>
        </collision>

        <visual name='caster_visual'>
          <pose>-0.15 0 -0.05 0 0 0</pose>
          <geometry>
            <sphere>
              <radius>.05</radius>
            </sphere>
          </geometry>
        </visual>
      </link>
      <link name="left_wheel">
        <pose>0.1 0.13 0.1 0 1.5707 1.5707</pose>
        <collision name="collision">
          <geometry>
            <cylinder>
              <radius>.1</radius>
              <length>.05</length>
            </cylinder>
          </geometry>
        </collision>
        <visual name="visual">
          <geometry>
            <cylinder>
              <radius>.1</radius>
              <length>.05</length>
            </cylinder>
          </geometry>
        </visual>
      </link>

      <link name="right_wheel">
        <pose>0.1 -0.13 0.1 0 1.5707 1.5707</pose>
        <collision name="collision">
          <geometry>
            <cylinder>
              <radius>.1</radius>
              <length>.05</length>
            </cylinder>
          </geometry>
        </collision>
        <visual name="visual">
          <geometry>
            <cylinder>
              <radius>.1</radius>
              <length>.05</length>
            </cylinder>
          </geometry>
        </visual>
      </link>

     <joint type="revolute" name="left_wheel_hinge">
       <pose>0 0 -0.03 0 0 0</pose>
       <child>left_wheel</child>
       <parent>chassis</parent>
       <axis>
         <xyz>0 1 0</xyz>
       </axis>
     </joint>

     <joint type="revolute" name="right_wheel_hinge">
       <pose>0 0 0.03 0 0 0</pose>
       <child>right_wheel</child>
       <parent>chassis</parent>
       <axis>
        <xyz>0 1 0</xyz>
       </axis>
     </joint>
        <include>
          <uri>model://light_sensor</uri>
          <pose>0.2 0 0.2 0 0 0</pose>
        </include>

        <joint name="sensor_joint" type="revolute">
          <child>light_sensor::link</child>
          <parent>chassis</parent>
          <axis>
            <xyz>0 0 1</xyz>
            <limit>
              <upper>0</upper>
              <lower>0</lower>
            </limit>
          </axis>
        </joint>
      </model>
    </sdf>

Once the model has been updated, we are ready to run our robot and capture the environment with our custom sensor. Launch Gazebo with an empty world through roslaunch:

    > roslaunch gazebo_ros empty_world.launch

Spawn a copy of the custom robot model in the empty world (for example, from the Insert tab of the Gazebo GUI). The previously built robot model now shows a rectangular box on top, which is our custom sensor. You can adjust its pose as necessary to align the sensor properly.

Place random objects around the robot for monitoring.

The plugin is now publishing its data on ROS topics, which can be listed with the following command:

    > rostopic list

If everything went well, the listed topics should include, among others, /camera/rgb/image_raw, /camera/rgb/camera_info and /light_sensor_plugin/lightSensor.

Run rqt and select the raw image topic (/camera/rgb/image_raw) from the drop-down menu of its Image View plugin to view the frames captured by the sensor.

Use the following command to watch the illuminance value calculated for each frame:

    > rostopic echo /light_sensor_plugin/lightSensor

Let the robot move around by applying forces to the wheel joints as shown previously. As the robot moves along, it captures video with the custom-built sensor, calculates the illuminance value, and publishes it on the ROS topic.
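
If you prefer to drive the wheels programmatically rather than from the command line or the GUI, the sketch below calls the /gazebo/apply_joint_effort service from a small C++ node. It assumes the spawned robot keeps the joint names from the model above (left_wheel_hinge and right_wheel_hinge); the torque value and duration are arbitrary illustrative choices.

    #include <ros/ros.h>
    #include <gazebo_msgs/ApplyJointEffort.h>
    
    // Ask Gazebo to apply a small torque to one joint for a fixed duration.
    bool pushJoint(ros::ServiceClient &client, const std::string &joint)
    {
      gazebo_msgs::ApplyJointEffort srv;
      srv.request.joint_name = joint;          // joint name as defined in the SDF
      srv.request.effort = 0.1;                // torque in N*m (illustrative value)
      srv.request.start_time = ros::Time(0);   // start immediately
      srv.request.duration = ros::Duration(5); // apply for five seconds
      return client.call(srv) && srv.response.success;
    }
    
    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "wheel_pusher");
      ros::NodeHandle nh;
      ros::ServiceClient client =
          nh.serviceClient<gazebo_msgs::ApplyJointEffort>("/gazebo/apply_joint_effort");
      client.waitForExistence();
      pushJoint(client, "left_wheel_hinge");
      pushJoint(client, "right_wheel_hinge");
      return 0;
    }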

Demonstration

A demo video of the outcome is available below.

Demo video