
Commit
Toward practicality: v3.0
guojianyang committed Jan 5, 2022
1 parent 2ac361f commit 3689044
Showing 106 changed files with 3,923 additions and 307 deletions.
33 changes: 32 additions & 1 deletion README.md
@@ -1,4 +1,3 @@

![enter image description here](https://img-blog.csdnimg.cn/7007a6ec9d584018bdf289bd8987c45d.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA6YOt5bu65rSL,size_20,color_FFFFFF,t_70,g_se,x_16)
# [中文] | [[English]](https://github.com/guojianyang/cv-detect-robot/blob/main/README_EN.md)
# CDR(cv-detect-robot)项目介绍-------(工业级视觉算法侧端部署)
@@ -10,6 +9,19 @@
> **Note (3)**: As I and our team (the MICRO_lab at Sun Yat-sen University) continue to learn and improve, this project will be maintained and updated from time to time. Given our limited abilities, the project surely contains errors and shortcomings; corrections are welcome, either directly or as a message in an `issue`.
> **Note (4)**: To make learning and discussion easier, a WeChat group for the **CDR (cv-detect-robot) project** has been set up. Please add the group manager `Xiao Guo` (WeChat ID `17370042325`) so you can be invited into the group.
***
***
# CDR — toward practicality: what's new in v3.0 🔥🔥🔥🔥🔥:
- Added sub-projects (6) and (7): a Python interface and a C++ interface for yolox, respectively.
- In sub-project (5), camera detection and video-file detection are now integrated into a single Python program; see the README.md in the same folder for usage.
- Fixed the issue in sub-projects (1)-(4) where live camera detection covered the whole screen.
- Fixed the frequently occurring `mmap err:Bad file descriptor` error.
- Cascaded a DCF object tracker after both the yolov5 and yolox detection models.
- All sub-projects can now detect and track only specified target classes.
- Published a README.md on points to note when running the CDR project on Jetson Nano and NX boards.
- Published a README.md on how to generate engine files.
- Fixed the issue in sub-project (2), yolov5-deepstream-python, where the ROS node kept reading a constant 24 target records.

***
***
# CDR sub-project (1) (yolov5-ros-deepstream)
@@ -50,5 +62,24 @@
> For the final video detection results, see [resnet10-ros-deepstream detection](https://www.bilibili.com/video/BV1Xg411w78P/)
# CDR sub-project (6) (yolox-deepstream-python)
- yolox-deepstream-python sub-project overview
> This sub-project combines the yolox visual detection algorithm with the TensorRT neural-network acceleration engine and runs under NVIDIA's DeepStream framework. It adopts a different engine-generation method, converting from ONNX to an engine file; this route is more flexible and increasingly stable, in line with the industry mainstream. In any software directory on the same hardware platform, you create a `client.py` script that reads physical memory (it contains nothing but a single memory-reading code segment) and pulls the data out of the designated physical-memory region. Once reading works, that code segment can be dropped into any Python project that needs object-detection data, so that project can obtain the detections directly (a minimal illustrative sketch of such a reader follows the links below).
> For a detailed tutorial, see [yolox-deepstream-python](https://github.com/guojianyang/cv-detect-robot/tree/main/yolox-ros-deepstream)
> For the final video detection results, see [yolox-deepstream-python detection](https://www.bilibili.com/video/BV1k34y1o7Ck/)
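Below is a minimal sketch of such a `client.py`-style reader, not the one shipped with the sub-project: the shared-memory path, the fixed-size record layout, and the empty-slot sentinel are illustrative assumptions, and the sub-project's README.md defines the actual format.

```python
# Hypothetical client.py-style reader. SHM_PATH and the record layout
# (x, y, w, h, track id, class id as six int32 values) are assumptions
# for illustration only -- consult the sub-project's README.md.
import mmap
import struct

SHM_PATH = "/dev/shm/yolox_detections"  # assumed location of the shared block
RECORD = struct.Struct("6i")            # assumed per-object record layout

def read_detections(max_objects=32):
    """Read up to max_objects fixed-size records from the mapped memory."""
    with open(SHM_PATH, "rb") as f:
        with mmap.mmap(f.fileno(), RECORD.size * max_objects,
                       access=mmap.ACCESS_READ) as mm:
            detections = []
            for i in range(max_objects):
                x, y, w, h, track_id, cls = RECORD.unpack_from(mm, i * RECORD.size)
                if w == 0 and h == 0:  # assumed sentinel for an empty slot
                    break
                detections.append({"bbox": (x, y, w, h),
                                   "id": track_id, "class": cls})
            return detections

if __name__ == "__main__":
    for det in read_detections():
        print(det)
```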
# CDR sub-project (7) (yolox-deepstream-cpp)
- yolox-deepstream-cpp sub-project overview
> This sub-project likewise combines the yolox visual detection algorithm with the TensorRT neural-network acceleration engine and runs under NVIDIA's DeepStream framework, generating the engine file from ONNX in the same flexible, increasingly stable way (a hedged sketch of that ONNX-to-engine conversion follows the links below). In any software directory on the same hardware platform, you create a `yolox_tensor.cpp` file that reads physical memory (it contains nothing but a single memory-reading code segment); once compiled, it pulls the data out of the designated physical-memory region. Once reading works, that code segment can be dropped into any C++ project that needs object-detection data, so that project can obtain the detections directly.
> For a detailed tutorial, see [yolox-deepstream-cpp](https://github.com/guojianyang/cv-detect-robot/tree/main/yolox-ros-deepstream)
> For the final video detection results, see [yolox-deepstream-cpp detection](https://www.bilibili.com/video/BV1k34y1o7Ck/)
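The ONNX-to-engine conversion both sub-projects rely on can be reproduced with the TensorRT Python API. Below is a minimal sketch assuming TensorRT 7.x (the version shipped with DeepStream 5.1); the file names `yolox.onnx` and `yolox.engine` are placeholders, and the sub-project tutorials remain the authoritative reference for the exact export settings.

```python
# Hypothetical ONNX-to-engine conversion sketch for TensorRT 7.x.
# File names and workspace size are illustrative assumptions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="yolox.onnx", engine_path="yolox.engine"):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX parsing requires an explicit-batch network definition
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse " + onnx_path)
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB of build scratch space
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 where the GPU supports it
    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())  # serialized engine consumed by DeepStream

if __name__ == "__main__":
    build_engine()
```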
# [CDR project: common problems and solutions](https://github.com/guojianyang/cv-detect-robot/wiki/CDR%E9%A1%B9%E7%9B%AE%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E5%8F%8A%E5%85%B6%E8%A7%A3%E5%86%B3%E6%96%B9%E6%A1%88(Common-problems-and-solutions))
# [Points to note when running the CDR project on Jetson Nano and NX](https://github.com/guojianyang/cv-detect-robot/wiki/Jetson-Nano%E5%92%8C-NX%E5%9C%A8%E8%BF%90%E8%A1%8CCDR%E9%A1%B9%E7%9B%AE%E6%97%B6%E6%B3%A8%E6%84%8F%E4%BA%8B%E9%A1%B9)
# [How to generate an engine file from a wts file](https://github.com/guojianyang/cv-detect-robot/wiki/wts%E6%96%87%E4%BB%B6%E7%94%9F%E6%88%90engine%E6%96%87%E4%BB%B6%E7%9A%84%E6%96%B9%E6%B3%95)


58 changes: 0 additions & 58 deletions resnet10-ros-deepstream/deepstream_python_apps/README.md

This file was deleted.

12 changes: 11 additions & 1 deletion resnet10-ros-deepstream/deepstream_python_apps/apps/deepstream-test7/README.md
100644 → 100755
@@ -1,3 +1,13 @@
# test7: build a playback-only example that prints object-detection data (coordinates, tracking ID, and class) in real time
# test7: build a playback-only example that prints object-detection data (coordinates, confidence, and class) in real time

The first argument after the python3 command selects the input: "1" means video-file input mode, "2" means live-camera input mode.

- video_file:
python3 deepstream-test_7_usb_file.py 1 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
(Note): on Jetson NX the command above may raise an error; in that case, in line 183, `source_file.set_property('location', args[2])`, replace args[2] with the absolute path of the video file


- real_video:
python3 deepstream-test_7_usb_file.py 2 /dev/video0


Empty file.
90 changes: 72 additions & 18 deletions ...pps/deepstream-test7/deepstream-test_7.py → ...tream-test7/deepstream-test_7_usb_file.py
100644 → 100755
@@ -17,6 +17,7 @@
fps_streams = {}
frame_count = {}
saved_count = {}
Detect_Mode = 0 # default "0"; "1" = video-file reading mode, "2" = live USB video detection mode
bounding_bboxes =[]
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
@@ -75,7 +76,7 @@ def osd_sink_pad_buffer_probe(pad, info, u_data):
            bounding_bboxes.append(int(width))
            bounding_bboxes.append(int(height))
            bounding_bboxes.append(int(object_id))
            obj_meta.rect_params.border_color.set(0.0, 0.0, 0.0, 1.0)
            obj_meta.rect_params.border_color.set(1.0, 0.0, 0.0, 0.0)  # red, green, blue, alpha
            try:
                l_obj = l_obj.next

@@ -133,15 +134,32 @@ def osd_sink_pad_buffer_probe(pad, info, u_data):
    return Gst.PadProbeReturn.OK


def main():
def main(args):
    if len(args) != 3:
        sys.stderr.write(" \n\n\nSorry, you have not supplied a video source!!!\n\n\n")
        sys.exit(1)
    else:
        if args[1] == "1":
            Detect_Mode = 1
            print("Entering video-file detection mode!!!")
        elif args[1] == "2":
            Detect_Mode = 2
            print("Entering live video detection mode!!!")
        else:  # guard against an unknown mode flag, which would leave Detect_Mode unset
            sys.stderr.write("The first argument must be 1 (video file) or 2 (USB camera)\n")
            sys.exit(1)
    GObject.threads_init()
    Gst.init(None)

print("Creating Pileline \n")
pipeline = Gst.Pipeline()
source = Gst.ElementFactory.make("filesrc", "file-source")
h264parser = Gst.ElementFactory.make("h264parse", "h264-parper") # h264的编解码
decoder = Gst.ElementFactory.make("nvv4l2decoder"," nvv4l2-decoder") # h264的编解码
if Detect_Mode ==1:
source_file = Gst.ElementFactory.make("filesrc", "file-source")
h264parser = Gst.ElementFactory.make("h264parse", "h264-parper") # h264的编解码
decoder = Gst.ElementFactory.make("nvv4l2decoder"," nvv4l2-decoder") # h264的编解码
if Detect_Mode == 2:
source_usb = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")
caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer") # autobatch 自动批量处理
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference") #第一级网络(主要的推理引擎engine)

@@ -153,22 +171,29 @@ def main():
    if not sink_real:
        sys.stderr.write(" Unable to create egl sink \n")


    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-inference")  # second-stage network
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")  # converter (the results are drawn onto the frame)
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")  # on-screen display element
    if Detect_Mode == 1:
        source_file.set_property('location', args[2])  # local video-file source
    if Detect_Mode == 2:
        source_usb.set_property('device', args[2])  # live USB video source
        caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
        caps_vidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM)"))

    source.set_property('location', "/opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264")  # input video source
    streammux.set_property('width', 1920)  # input width
    streammux.set_property("height", 1080)  # input height
    streammux.set_property("batch-size", 1)
    streammux.set_property('height', 1080)  # input height
    streammux.set_property('batch-size', 1)
    streammux.set_property("batched-push-timeout", 4000000)
    pgie.set_property('config-file-path', "dstest7_pgie_config.txt")
    sgie1.set_property('config-file-path', "dstest7_sgie1_config.txt")
    #pgie.set_property('config-file-path', "dstest3_pgie_config.txt")
    sink_real.set_property('sync', False)  # whether the live video display is synchronized

    config = configparser.ConfigParser()
    config.read('dstest7_tracker_config.txt')
@@ -195,25 +220,49 @@ def main():
    tracker.set_property('enable_batch_process', tracker_enable_batch_process)

print("Adding elements to Pipeline \n")
pipeline.add(source )
pipeline.add(h264parser)
pipeline.add(decoder)
if Detect_Mode==1:
pipeline.add(source_file )
pipeline.add(h264parser)
pipeline.add(decoder)
if Detect_Mode==2:
pipeline.add(source_usb)
pipeline.add(caps_v4l2src)
pipeline.add(vidconvsrc)
pipeline.add(nvvidconvsrc)
pipeline.add(caps_vidconvsrc)
pipeline.add(streammux)
pipeline.add(pgie)

pipeline.add(tracker)

pipeline.add(sgie1)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
if is_aarch64():
pipeline.add(transform)
pipeline.add(sink_real)

    source.link(h264parser)
    h264parser.link(decoder)
    sinkpad = streammux.get_request_pad("sink_0")
    srcpad = decoder.get_static_pad("src")
    srcpad.link(sinkpad)
    if Detect_Mode == 1:
        source_file.link(h264parser)
        h264parser.link(decoder)
        sinkpad_file = streammux.get_request_pad("sink_0")
        srcpad_file = decoder.get_static_pad("src")
        srcpad_file.link(sinkpad_file)
    if Detect_Mode == 2:
        source_usb.link(caps_v4l2src)
        caps_v4l2src.link(vidconvsrc)
        vidconvsrc.link(nvvidconvsrc)
        nvvidconvsrc.link(caps_vidconvsrc)
        sinkpad_usb = streammux.get_request_pad("sink_0")
        if not sinkpad_usb:
            sys.stderr.write(" Unable to get the sink pad of streammux \n")
        srcpad_usb = caps_vidconvsrc.get_static_pad("src")
        if not srcpad_usb:
            sys.stderr.write(" Unable to get source pad of caps_vidconvsrc \n")
        srcpad_usb.link(sinkpad_usb)
    streammux.link(pgie)


    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(nvvidconv)
@@ -223,6 +272,7 @@ def main():
        transform.link(sink_real)
    else:
        nvosd.link(sink_real)

    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
@@ -245,4 +295,8 @@ def main():
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    sys.exit(main())
    sys.exit(main(sys.argv))




@@ -60,26 +60,24 @@
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt

#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=2
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=5
interval=1
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
pre-cluster-threshold=0.4
eps=0.2
group-threshold=1

16 changes: 3 additions & 13 deletions ...et10-ros-deepstream/deepstream_python_apps/apps/deepstream-test7/dstest7_sgie1_config.txt
100644 → 100755
@@ -62,34 +62,24 @@ gpu-id=0
net-scale-factor=1
model-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_CarColor/resnet18.prototxt
model-engine-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp32.engine
model-engine-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
mean-file=../../../../samples/models/Secondary_CarColor/mean.ppm
labelfile-path=../../../../samples/models/Secondary_CarColor/labels.txt
int8-calib-file=../../../../samples/models/Secondary_CarColor/cal_trt.bin

#int8-calib-file=../../../../samples/models/Secondary_CarColor/cal_trt.bin
force-implicit-batch-dim=1
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=0
network-mode=1
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0

# unique ID of this model
gie-unique-id=2

# which inference model's output this one operates on
operate-on-gie-id=1

# class IDs to operate on (PGIE_CLASS_ID_VEHICLE = 0)
operate-on-class-ids=0
is-classifier=1

# output layer
output-blob-names=predictions/Softmax

classifier-async-mode=1
classifier-threshold=0.51
process-mode=2
@@ -28,11 +28,13 @@
# ll-config-file: required for NvDCF, optional for KLT and IOU
#
[tracker]
tracker-width=640
tracker-height=384
#tracker-width=640
#tracker-height=384
tracker-width=960
tracker-height=544
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
#enable-past-frame=1
enable-batch-process=1
