
# Gaze Dialogue Model


Gaze Dialogue Model system for the iCub humanoid robot.

Tested on Ubuntu 16.04 and Ubuntu 20.04.

## Dependencies

### For the controller App

Follow the instructions on the iCub website for:

- YCM
- YARP
- icub-main
- OpenCV (optional)

#### Ubuntu 16.04

```bash
git clone https://github.com/robotology/ycm.git -b v0.11.3
git clone https://github.com/robotology/yarp.git -b v2.3.72
git clone https://github.com/robotology/icub-main.git -b v1.10.0
git clone https://github.com/robotology/icub-contrib-common -b 7d9b7e4
```

#### Ubuntu 20.04

```bash
git clone https://github.com/robotology/ycm.git -b v0.11.3
git clone https://github.com/robotology/yarp.git -b v3.4.0
git clone https://github.com/robotology/icub-main.git -b v1.17.0
```

#### OpenCV (tested on v3.4.1 and v3.4.17)

Building with CUDA is recommended (tested on CUDA-8.0, CUDA-11.2, and CUDA-11.4). Please follow the official OpenCV documentation.
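As a quick sanity check, assuming you also built OpenCV's Python bindings, you can verify the CUDA-enabled build from Python:

```python
# Quick check of an OpenCV build (assumes the Python bindings were built).
import cv2

print(cv2.__version__)                       # e.g. 3.4.17
print(cv2.cuda.getCudaEnabledDeviceCount())  # > 0 when built with CUDA
```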

### For the detection App

Tested with Python 3.5.5 / TensorFlow 1.9 and Python 3.9 / TensorFlow 2.15.0.

Install the requirements, preferably inside a virtual environment such as Anaconda:

```bash
pip3 install -r requirements.txt
```

For our gaze fixations we use the TensorFlow models repository:

```bash
git clone https://github.com/tensorflow/models.git
```

The `utils` package comes from the TensorFlow Object Detection API (follow its instructions to install it). Then add it to your path:

```bash
cd models/research
export PYTHONPATH=$PYTHONPATH:$(pwd)/slim
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/object_detection
echo $PYTHONPATH
```
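To confirm the path is set correctly, a minimal import check (the module below belongs to the Object Detection API, not this repo):

```python
# If PYTHONPATH includes models/research, this import should succeed.
from object_detection.utils import label_map_util

print(label_map_util.__file__)  # should point into models/research
```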

`pylsl` needs liblsl (v1.13.0). Either install it under `/usr/` or point the `PYLSL_LIB` environment variable at the library:

```bash
cd liblsl && mkdir build && cd build && cmake .. && make
export PYLSL_LIB=/path/to/liblsl.so
```
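A quick way to check that pylsl found liblsl:

```python
# Prints the liblsl version pylsl loaded (from /usr/ or PYLSL_LIB);
# raises an error if liblsl cannot be located.
import pylsl

print(pylsl.library_version())  # e.g. 113 for liblsl v1.13
```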

You can test that the detection system is working by running `python main_offline.py`.

### For the connectivity App: PupilLabs 3.6.7

This app forwards the PupilLabs data to the detection App, which then sends it to the iCub (through YARP).

Either install the PupilLabs Capture app or build it from source. We use LabStreamingLayer (LSL) to stream the data and convert it to YARP. An alternative to LabStreamingLayer is ROS (not yet tested). A sketch of the LSL-to-YARP bridge follows.
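A minimal sketch of that bridge, assuming YARP's Python bindings are installed, a Pupil LSL outlet of type "Gaze", and a hypothetical output port name (the repo's actual bridge is `pupil_lsl_yarp.py`):

```python
# Sketch: pull Pupil gaze samples from LSL and republish them on a YARP
# port. Stream type and port name are assumptions, not the repo's exact ones.
import yarp
from pylsl import StreamInlet, resolve_byprop

yarp.Network.init()
out = yarp.BufferedPortBottle()
out.open("/pupil/gaze:o")  # hypothetical output port name

streams = resolve_byprop("type", "Gaze", timeout=5.0)  # Pupil LSL outlet
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()  # blocks until a sample arrives
    bottle = out.prepare()
    bottle.clear()
    bottle.addFloat64(timestamp)
    for value in sample:
        bottle.addFloat64(float(value))
    out.write()
```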

## Building the controller App

C++:

1. Clone the repository:
   ```bash
   git clone git@github.com:NunoDuarte/GazeDialogue.git
   ```
2. Start with the controller App:
   ```bash
   cd controller
   ```
3. Install the controller App dependencies.
4. Build:
   ```bash
   mkdir build && cd build
   ccmake ..
   make -j
   ```
5. Install the detection App dependencies.
6. Install the connectivity App dependencies (optional when using the iCub).
7. Jump to Setup for the first tests of the GazeDialogue pipeline.

## Demo

Test the detection App (pupil_data_test):

1. Go to the detection app:
   ```bash
   cd detection
   ```
2. Run the detection system offline:
   ```bash
   python3 main_offline.py
   ```

A video output window should appear. The detection system runs on the PupilLabs exported data (`pupil_data_test`), and the output is `[timestep, gaze fixation label, pixel_x, pixel_y]` for each detected gaze fixation.
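For instance, a downstream consumer of rows in that format might look like this (the values are made up for illustration):

```python
# Hypothetical rows in the detector's output format:
# [timestep, gaze fixation label, pixel_x, pixel_y]
fixations = [
    (0.033, "face", 512, 301),
    (0.066, "object", 420, 388),
]
for t, label, px, py in fixations:
    print(f"t={t:.3f}s  gaze on {label} at ({px}, {py})")
```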

## Setup

### Manual mode

Test the controller App (iCubSIM). There are three modes: manual robot leader, gazedialogue robot leader, and gazedialogue robot follower. Manual robot leader does not need the eye-tracker (PupilLabs), while both gazedialogue modes require it.

Open terminals:

```bash
yarpserver --write
yarpmanager
```

In yarpmanager:

1. Open controller/apps/iCub_startup.xml.
2. Open controller/apps/GazeDialogue_leader.xml.
3. Run all modules in iCub_startup. The iCubSIM simulator should open a window, plus a second window. Open more terminals:
   ```bash
   cd GazeDialogue/controller/build
   ./gazePupil-detector
   ```
4. Connect all modules in iCub_startup. You should now see the iCub's perspective in the second window. Then run:
   ```bash
   ./gazePupil-manual-leader
   ```
5. Connect all modules in GazeDialogue-Leader. Open a terminal:
   ```bash
   yarp rpc /service
   ```
6. Type `help` to see the available actions:
   ```
   >> look_down
   >> grasp_it
   >> pass or place
   ```
   (A Python sketch of sending these actions over RPC follows this list.)
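Equivalently, the same actions can be sent from Python through YARP's RPC client, assuming the Python bindings are installed (the local port name below is hypothetical; `/service` is the controller's port from the step above):

```python
# Sketch: send a manual-mode action to the controller over YARP RPC.
import yarp

yarp.Network.init()
client = yarp.RpcClient()
client.open("/manual/rpc:o")  # hypothetical local port name
yarp.Network.connect("/manual/rpc:o", "/service")

cmd, reply = yarp.Bottle(), yarp.Bottle()
cmd.addString("look_down")    # or: grasp_it, pass, place
client.write(cmd, reply)
print(reply.toString())
```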

### GazeDialogue mode

Open terminals:

```bash
yarpserver --write
yarpmanager
```

In yarpmanager:

1. Open controller/apps/iCub_startup.xml.
2. Open controller/apps/GazeDialogue_leader.xml.
3. Run all modules in iCub_startup. The iCubSIM simulator should open a window, plus a second window. Open more terminals:
   ```bash
   cd GazeDialogue/controller/build
   ./gazePupil-detector
   ```
4. Connect all modules in iCub_startup. You should now see the iCub's perspective in the second window.
5. Turn PupilLabs Capture on.
6. Make sure the streaming plugin is on.
7. Open a new terminal and start the detection app:
   ```bash
   python3 main.py
   ```
   A window with the eye-tracker output should open; it should highlight the objects, faces, and gaze.
8. Run Pupil_Stream_to_Yarp (`pupil_lsl_yarp.py`) to convert the messages to YARP (note: this should be improved).

Now, depending on whether you want the iCub or iCubSIM to act as a Leader or a Follower, the instructions change slightly.

### Robot as a Leader

Open a new terminal to run the main process for the leader:

```bash
./gazePupil-main-leader
```

1. Connect the GazeDialogue-Leader yarp port that receives the gaze fixations.
2. Press Enter; the robot will run the GazeDialogue system as leader.

### Robot as a Follower

Open a new terminal to run the main process for the follower:

```bash
./gazePupil-main-follower
```

1. Connect the GazeDialogue-Follower yarp port that receives the gaze fixations.
2. Press Enter; the robot will run the GazeDialogue system as follower.

## Run on the real robot (iCub)

You need to change the robot name in src/extras/configure.cpp:

```cpp
// Open cartesian solver for right and left arm
string robot = "icub";
```

i.e. from "icubSim" to "icub". Then recompile the build.

### Robot as a Follower

1. Open YARP: yarpserver.
2. Use `yarpnamespace /icub` (for more information check link).
3. Open Pupil-Labs (Capture App).
4. Open the detection project.
5. Run Pupil_Stream_to_Yarp to open LSL.
6. Check that /pupil_gaze_tracker is publishing gaze fixations (a reader sketch follows this list).
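One way to perform that check is reading a message from Python (the reader port name below is hypothetical; `/pupil_gaze_tracker` is the publisher named above):

```python
# Sketch: read one gaze-fixation message from /pupil_gaze_tracker.
import yarp

yarp.Network.init()
port = yarp.BufferedPortBottle()
port.open("/check/gaze:i")  # hypothetical reader port
yarp.Network.connect("/pupil_gaze_tracker", "/check/gaze:i")

bottle = port.read()  # blocking read
if bottle is not None:
    print(bottle.toString())
```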

Run on the real robot without the right arm (optional). First, start iCubStartup from the yarpmotorgui on the real iCub and run the following packages:

- yarprobotinterface --from yarprobotinterface_noSkinNoRight.ini
- iKinCartesianSolver -part left_arm
- iKinGazeCtrl
- wholeBodyDynamics icubbrain1 --headV2 --autocorrect --no_right_arm
- gravityCompensator icubbrain2 --headV2 --no_right_arm
- fingersTuner icub-laptop
- imuFilter pc104

## Structure

```
.
├── controller
│   ├── CMakeLists.txt
│   ├── app
│   │   ├── GazeDialogue_follower.xml
│   │   ├── GazeDialogue_leader.xml
│   │   └── iCub_startup.xml
│   ├── include
│   │   ├── compute.h
│   │   ├── configure.h
│   │   ├── helpers.h
│   │   └── init.h
│   └── src
│       ├── icub_follower.cpp
│       ├── icub_leader.cpp
│       └── extras
│           ├── CvHMM.h
│           ├── CvMC.h
│           ├── compute.cpp
│           ├── configure.cpp
│           ├── detector.cpp
│           └── helpers.cpp
└── detection
    ├── main.py | main_offline.py
    ├── face_detector.py | face_detector_gpu.py
    ├── objt_tracking.py
    ├── gaze_behaviour.py
    └── pupil_lsl_yarp.py
```

## Instructions for a dual-computer system

If the detection App and/or the connectivity App run on a different computer, do not forget to point YARP to where the iCub is running:

- `yarp namespace /icub` (in case /icub is the name of the yarp network)
- `yarp detect` (to check you are connected)
- `gedit /home/user/.config/yarp/_icub.conf`
- add the line `<ip of the computer you wish to connect to> 10000 yarp` (see the example below)
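For example, if the computer running the iCub's yarpserver had IP 192.168.1.10 (a hypothetical address), `_icub.conf` would contain:

```
192.168.1.10 10000 yarp
```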

## Extras

Read the camera output:

```bash
yarpdev --device grabber --name /test/video --subdevice usbCamera --d /dev/video0
yarp connect /test/video /icubSim/texture/screen
```

## Issues

To make it work on Ubuntu 16.04 with CUDA-11.2 and TensorFlow 2.7 you need to do the following:

1. Install nvidia driver 460.32.03 (cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb):
   ```bash
   wget https://developer.download.nvidia.com/compute/cuda/11.2.1/local_installers/cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
   sudo dpkg -i cuda-repo-ubuntu1604-11-2-local_11.2.1-460.32.03-1_amd64.deb
   sudo apt-key add /var/cuda-repo-ubuntu1604-11-2-local/7fa2af80.pub
   sudo apt-get install cuda-11-2
   ```
2. Check that apt-get is not removing any packages.
3. Install cuDNN 8.1 for CUDA-11.0, 11.1, and 11.2.
4. Test using deviceQuery in the cuda-11.0 samples/1_Utilities.
5. Follow the guidelines of Building and Instructions.
6. If, after installing TensorFlow, the system complains about a missing cudart.so.11.0, then do this (you can add it to ~/.bashrc):
   ```bash
   export PATH=$PATH:/usr/local/cuda-11.2/bin
   export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2/lib64
   ```

To make it work on TensorFlow 2.7 I needed to alter the code in ~/software/tensorflow/models/research/object_detection/utils/label_map_util.py (line 132):

```python
with tf.io.gfile.GFile(path, 'r') as fid:
```

instead of

```python
with tf.gfile.GFile(path, 'r') as fid:
```

## Citation

If you find this code useful in your research, please consider citing our paper:

M. Raković, N. F. Duarte, J. Marques, A. Billard and J. Santos-Victor, "The Gaze Dialogue Model: Nonverbal Communication in HHI and HRI," in IEEE Transactions on Cybernetics, doi: 10.1109/TCYB.2022.3222077.