(former student project)
- Lane Detection (missing)
- Lane Regression
- Lane Assist
- clone the repository to your ROS workspace:
~/catkin_ws/src$: git clone https://gitlab.com/zfinnolab/laneregression/laneregression
- clone the Lane Detection repo to your ROS workspace (check its requirements!):
~/catkin_ws/src$: git clone https://gitlab.com/zfinnolab/lane-detection-assist/detectionlane.git
- to install the required Python packages run:
~/catkin_ws$: pip install -r requirements.txt
- build the project:
~/catkin_ws$: catkin_make
~/catkin_ws$: source devel/setup.bash
input for Lane Detection from video:
~/catkin_ws$: roslaunch lane_keeping_assist all_video.launch
- check the launch file for the correct Lane Detection config file (.yaml)
- check the launch file for the correct video path
- check for correct parameters in /config/config_laneregression.yaml
input for Lane Detection from camera:
~/catkin_ws$: roslaunch lane_keeping_assist all_camera.launch
- check the launch file for the correct Lane Detection config file (.yaml)
- check for correct parameters in /config/config_laneregression.yaml
development setup with Lane Detection dummy:
~/catkin_ws$: roslaunch lane_keeping_assist all_dev.launch
- check for correct parameters in /config/config_laneregression.yaml
- start roscore:
~/catkin_ws$: roscore
for each step open a new terminal and first run source devel/setup.bash in ~/catkin_ws
- dummy Lane Detection:
~/catkin_ws$: rosrun lane_keeping_assist lanedetection_dummy.py
- Lane Detection:
~/catkin_ws$: rosrun detection detection
~/catkin_ws$: rosrun lane_keeping_assist laneregression.py
~/catkin_ws$: rosrun lane_keeping_assist laneassist.py
- TruckMaker video: Spurerkennungssimulation.avi
- Lab video: capture_webcam_lab.avi
save them in: ~/catkin_ws/src/detection
- use a capture card (Mira Box)
- use v4l2loopback + OBS: install v4l2loopback and OBS, run sudo modprobe v4l2loopback, then in OBS select Tools -> V4L2 Video Output
- use v4l2loopback + shell script: install v4l2loopback, then run ./tools/screen_capture.sh
with ./tools/send_udp.py you can send a steering angle to TruckMaker (make sure the IP address is correct)
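For reference, a minimal sketch of sending a single test steering angle over UDP; the IP address, port and message format below are assumptions, not taken from send_udp.py:

```python
import socket

TM_IP = "192.168.1.100"   # placeholder, replace with the TruckMaker host IP
TM_PORT = 5005            # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"10.0", (TM_IP, TM_PORT))  # send a test steering angle of 10.0
```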
(before Lane Detection was finished, a dummy Lane Detection node was used to publish data)
- in the original Lane Detection software an algorithm for Perspective Transformation was implemented (to get the top-down view)
- the cluster points of a few frames were written to a file
- a dummy publisher retrieves these test points from the file and successively publishes the frames with all cluster points on a topic
- the actual reduction of points with cv2.approxPolyDP() (Ramer–Douglas–Peucker algorithm) was run in Lane Regression after cluster_lane_segments (this code part still exists in Lane Regression but is switched off)
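As an illustration of the point reduction step, here is a minimal sketch using OpenCV's Ramer–Douglas–Peucker implementation (the point data is made up, not the project's file format):

```python
import numpy as np
import cv2

# made-up cluster points along one lane segment, shape (N, 1, 2)
cluster = np.array([[[10, 400]], [[12, 380]], [[15, 350]], [[20, 300]],
                    [[30, 250]], [[45, 200]], [[65, 150]]], dtype=np.int32)

epsilon = 5.0  # maximum allowed deviation from the original curve, in pixels
reduced = cv2.approxPolyDP(cluster, epsilon, False)  # False = open curve
print(reduced.reshape(-1, 2))  # far fewer points, roughly the same shape
```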
- subscribe to the topic
- look for related clusters (a dashed line consists of several clusters)
- order the new points
- calculate x(t) and y(t) functions (third-degree polynomials; see the sketch after this list)
- determine the order of the functions and look for the border lines
- calculate the ideal line and the offset
- publish the offset on a topic
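A minimal sketch of the parametric fit step (function and variable names are assumptions, not the project code): each lane is described by third-degree polynomials x(t) and y(t), with t being the normalized position along the ordered points.

```python
import numpy as np

def fit_lane(points):
    """points: ordered (x, y) lane points, shape (N, 2)."""
    points = np.asarray(points, dtype=float)
    # parameter t in [0, 1] from the cumulative distance along the points
    dists = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(dists)))
    t /= t[-1]
    # independent cubic fits for x(t) and y(t)
    return np.polyfit(t, points[:, 0], 3), np.polyfit(t, points[:, 1], 3)

# example: evaluate the fitted lane at 50 positions
xc, yc = fit_lane([(10, 400), (15, 350), (25, 300), (40, 250), (60, 200)])
ts = np.linspace(0.0, 1.0, 50)
lane_x, lane_y = np.polyval(xc, ts), np.polyval(yc, ts)
```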
- subscribe to the topic
- calculate the steering angle from the offset
- send a UDP message with steering angle and speed to the Arduino (see the sketch below)
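A minimal sketch of such a Lane Assist node; the topic name, message format, IP/port and gain are assumptions, not the project code:

```python
import socket
import rospy
from std_msgs.msg import Float32

ARDUINO_IP = "192.168.1.50"   # placeholder, replace with the Arduino's address
ARDUINO_PORT = 8888           # placeholder port
KP = -8.0                     # proportional gain (value is an assumption)
SPEED = 0.2                   # constant demo speed

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def offset_callback(msg):
    # simple P controller: steering angle proportional to the lane offset
    steering_angle = KP * msg.data
    payload = "{:.2f};{:.2f}".format(steering_angle, SPEED)  # assumed 'angle;speed' format
    sock.sendto(payload.encode("ascii"), (ARDUINO_IP, ARDUINO_PORT))

if __name__ == "__main__":
    rospy.init_node("laneassist_sketch")
    rospy.Subscriber("offset", Float32, offset_callback)  # assumed topic name
    rospy.spin()
```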
- project report (T3101) with videos: .zip file
- Result with TruckMaker Video: Truck Maker Video
- Result with TruckMaker closed loop (kp = -8 / 0.2 * EZ): TruckMaker Simulation
- other videos and screen recordings can be found under ~/Videos on the Lane Regression Intel NUC
Here is an example image of one frame (using Lane Detection dummy).