<!DOCTYPE html>
<!-- modified from url=(0049)https://eborboihuc.github.io/Mono-3DT -->
<html lang="en">
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
<meta content="IE=edge" http-equiv="X-UA-Compatible">
<meta content="width=device-width, initial-scale=1" name="viewport">
<meta content="shijieSun" name="author">
<title>Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking</title>
<!-- CSS includes -->
<link href="./asset/bootstrap.min.css" rel="stylesheet">
<link href="./asset/css" rel="stylesheet" type="text/css">
<link href="./asset/mystyle.css" rel="stylesheet">
</head>
<body>
<div class="topnav" id="myTopnav">
<a href="#header">Home</a>
<a href="#abstract">Abstract</a>
<a href="#video">Video</a>
<a href="#dataset">Code &amp; Dataset</a>
<a href="#paper">Paper</a>
<a href="#acknowledgement">Acknowledgements</a>
<a class="icon" href="javascript:void(0);" onclick="toggleTopNav()">☰</a>
</div>
<div class="container-fluid" id="header">
<div class="row">
<h1>Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking</h1>
<div class="authors">
<a href="https://scholar.google.com/citations?user=Jm8efcoAAAAJ&hl=en" target="_blank">ShiJie Sun</a>,
<a href="https://scholar.google.com/citations?user=Xqmlj18AAAAJ&hl=en&oi=sra" target="_blank">Naveed Akhtar</a>,
XiangYu Song,
Huansheng Song*,
<a href="https://scholar.google.com/citations?user=X589yaIAAAAJ&hl=en" target="_blank">Ajmal Mian</a>,
<a href="https://scholar.google.com/citations?hl=en&user=p8gsO3gAAAAJ" target="_blank">Mubarak Shah</a>
<br><br>
<p style="text-align:center;">
<!-- <a href="" target="_blank"><img height="100"
src="./ECCV2020/nthu.png"></a>
  -->
<a href="http://en.chd.edu.cn/" target="_blank"><img height="100" src="./ECCV2020/chd.jpg"></a>
 
<a href="http://staffhome.ecm.uwa.edu.au/~00053650/" target="_blank"><img height="100" src="./ECCV2020/uwa.png"></a>
 
<a href="https://www.crcv.ucf.edu/" target="_blank"><img height="100" src="./ECCV2020/ucf.png"></a>
</p>
</div>
</div>
</div>
<div class="container-fluid" id="teaser">
<div class="row">
<p style="text-align:center;">
<img height="400" src="./ECCV2020/framework.png">
</p>
<p style="text-align:center;">
<img height="400" src="./ECCV2020/DatasetDemos.png">
</p>
</div>
</div>
<div class="container" id="abstract">
<h2>Abstract</h2>
Deep learning based Multiple Object Tracking (MOT) currently relies on off-the-shelf detectors for tracking-by-detection. This results in deep models that are detector biased and evaluations that are detector influenced. To resolve this issue, we introduce the Deep Motion Modeling Network (DMM-Net), which estimates the motion parameters of multiple objects to perform joint detection and association in an end-to-end manner. DMM-Net models object features over multiple frames and simultaneously infers object classes, visibility, and motion parameters. These outputs are readily used to update the tracklets for efficient MOT. DMM-Net achieves a PR-MOTA score of 12.80 at over 120 fps on the popular UA-DETRAC challenge - better accuracy than existing methods at orders of magnitude higher speed. We also contribute a large-scale synthetic public dataset, <a href="https://github.com/shijieS/OmniMOTDataset" target="_blank">Omni-MOT</a>, for vehicle tracking, which provides precise ground-truth annotations to eliminate detector influence in MOT evaluation. This dataset of over 14M frames is extendable with <a href="https://github.com/shijieS/OMOTDRecorder" target="_blank">our public script</a>. We demonstrate the suitability of Omni-MOT for deep learning with DMM-Net, and also make the <a href="https://github.com/shijieS/DMMN" target="_blank">source code of our network</a> public.
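The pipeline above (predict per-object motion parameters over a window of frames, then use the resulting boxes to update tracklets) can be sketched roughly as follows. The quadratic motion model and the greedy IoU association below are illustrative assumptions of ours, not the actual DMM-Net implementation; see the linked repository for the real network.

```python
# Illustrative sketch only: a hypothetical quadratic motion model and a
# greedy IoU matcher, standing in for DMM-Net's learned motion modeling
# and association. Names and parameterization are our own assumptions.
import numpy as np

def boxes_from_motion(params, times):
    """Evaluate a quadratic motion model for the box center (cx, cy) with a
    constant width/height, over the given frame times.
    params: {'cx': [a, b, c], 'cy': [a, b, c], 'w': float, 'h': float},
    polynomial coefficients highest degree first.
    Returns an array of shape (len(times), 4) with [x1, y1, x2, y2] rows."""
    cx = np.polyval(params["cx"], times)
    cy = np.polyval(params["cy"], times)
    w, h = params["w"], params["h"]
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def update_tracklets(tracklets, detections, thresh=0.5):
    """Greedily attach each new detection to the tracklet whose last box
    overlaps it most (above thresh); unmatched detections open new tracklets.
    Each tracklet is simply a list of [x1, y1, x2, y2] boxes."""
    for det in detections:
        best, best_iou = None, thresh
        for tr in tracklets:
            s = iou(tr[-1], det)
            if s > best_iou:
                best, best_iou = tr, s
        if best is not None:
            best.append(det)
        else:
            tracklets.append([det])
    return tracklets
```

Because the motion model yields boxes for every frame in the window at once, association only needs to run once per window rather than once per frame, which is one way a joint detection-and-motion formulation can reach the high frame rates reported above.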
</div>
<div class="container" id="video">
<h2>Video Overview</h2>
<p style="text-align:center;">
<iframe width="375" height="200" src="https://www.youtube.com/embed/9isr0WQB_IA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="375" height="200" src="https://www.youtube.com/embed/pSJg135sZrY" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="375" height="200" src="https://www.youtube.com/embed/Ya3HwEYTwrE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<!--<iframe width="375" height="200" src="https://www.youtube.com/embed/9isr0WQB_IA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<video controls="controls" height="200" width="375">
<source src="https://www.dropbox.com/sh/nom2u2s7snivwhu/AABCywWoePDi0MnNXM9tXzyfa?dl=0&preview=omni_result.avi" type="video/mp4"></source>
</video>
<video controls="controls" height="200" width="375">
<source src="https://www.dropbox.com/sh/nom2u2s7snivwhu/AABCywWoePDi0MnNXM9tXzyfa?dl=0&preview=UA-DETREAC-MVI_39271.avi" type="video/mp4"></source>
</video>-->
</p>
</div>
<div class="container" id="dataset">
<h2>Code & Dataset</h2>
<!-- <p style="text-align:center;"> <img src="./ICCV2019/sports-360.png" width="100%"></p>-->
<!-- <p>Following resources are provided:</p> -->
<div class="row" style="alignment: center">
<div class="col-sm-3"></div>
<div class="col-sm-3">
<a href="https://github.com/shijieS/DMMN" target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/dmmn_icons.png" height="128"><br>
DMM-Net (GitHub)</p>
</a>
</div>
<div class="col-sm-3">
<a href="https://github.com/shijieS/OmniMOTDataset"
target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/omotd_icons.png" height="128"><br>
OMOT dataset (GitHub)</p>
</a>
</div>
<div class="col-sm-3">
<a href="https://github.com/shijieS/OMOTDRecorder"
target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/omotd_script_icons.png" height="128"><br>
OMOT Recording Script (GitHub)</p>
</a>
</div>
<!-- <div class="col-sm-3"></div> -->
</div>
</div>
<div class="container" id="paper">
<h2>Citation</h2>
<a href="https://drive.google.com/file/d/1qzYH4l0zJZXXz7eqgniaXrKS4F4FhYMC/view" target="_blank">
<div class="thumbs">
<!-- <img src="./ECCV2020/thumbs-0.png"> -->
</div>
</a>
<div>
<pre class="citation">@inproceedings{ShiJie20,
  author    = {Sun, Shijie and Akhtar, Naveed and Song, Xiangyu and Song, Huansheng and Mian, Ajmal and Shah, Mubarak},
  title     = {Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020}
}</pre>
</div>
<div class="row">
<div class="col-sm-2"></div>
<div class="col-sm-3">
<a href="./ECCV2020/paper.pdf" target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/pdf.png"><br>
Paper (High-resolution)</p>
</a>
</div>
<div class="col-sm-2">
<a href="./ECCV2020/supplementary.pdf" target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/pdf.png"><br>
Supplementary (High-resolution)</p>
</a>
</div>
<!--
<div class="col-sm-3">
<a href="https://arxiv.org/abs/1811.10742" target="_blank">
<p style="text-align:center;">
<img src="./ECCV2020/pdf.png"><br>
Paper (ArXiv)</p>
</a>
</div>-->
<div class="col-sm-2"></div>
</div>
<div class="container" id="acknowledgement">
<h2>Acknowledgements</h2>
<p style="text-align:center;">
<a href="http://en.chd.edu.cn/" target="_blank">
<img height="100" src="./ECCV2020/chd.jpg">
</a>
<a href="https://www.uwa.edu.au/" target="_blank">
<img height="100" src="./ECCV2020/uwa.png">
</a>
<a href="https://www.crcv.ucf.edu/" target="_blank">
<img height="100" src="./ECCV2020/ucf.png">
</a>
<a href="https://carla.org/" target="_blank">
<img height="100" src="./ECCV2020/carla.png">
</a>
<a href="https://www.nvidia.com/en-us/" target="_blank">
<img height="100" src="./ECCV2020/nvidia.jpg">
</a>
<a href="https://pytorch.org/" target="_blank">
<img height="100" src="./ECCV2020/pytorch.png">
</a>
</p>
</div>
<div id="footer">
<br>
<p style="text-align:center;">Copyright © Shijie Sun 2020. Based on Mono-3DT.</p>
</div>
</div>
<!-- Javascript includes -->
<script src="./asset/jquery-1.8.3.min.js"></script>
<script src="./asset/mystyle.js"></script>
<script src="./asset/bootstrap.min.js"></script>
<script async="" src="./asset/analytics.js"></script>
<script>
(function (i, s, o, g, r, a, m) {
i['GoogleAnalyticsObject'] = r;
i[r] = i[r] || function () {
(i[r].q = i[r].q || []).push(arguments)
}, i[r].l = 1 * new Date();
a = s.createElement(o),
m = s.getElementsByTagName(o)[0];
a.async = 1;
a.src = g;
m.parentNode.insertBefore(a, m)
})(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');
ga('create', 'UA-98479202-1', 'auto');
ga('send', 'pageview');
</script>
</body>
</html>