Multiview Detection with Cardboard Human Modeling

ACCV2024

(Teaser figure)

This project provides the official implementation of MvCHM (ACCV 2024). The paper introduces a multiview pedestrian detection method based on "cardboard human modeling", which aggregates 3D point clouds from multiple camera views. By taking human appearance and height into account, this approach reduces projection errors and improves accuracy compared to traditional 2D methods.
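As a rough illustration of the aggregation step (a minimal sketch, not the repository code; the homographies, detections, and function names below are hypothetical), each camera's detected standing points can be projected onto a shared ground plane and collected into one set of world-frame points:

import numpy as np

# Minimal sketch of multiview ground-plane aggregation (hypothetical names,
# not the MvCHM implementation).
def image_to_ground(point_xy, H_img_to_ground):
    # Project an image point (e.g. a detected standing point) onto the
    # world ground plane using a 3x3 homography.
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H_img_to_ground @ p
    return q[:2] / q[2]

def aggregate_views(detections_per_view, homographies):
    # Collect the ground-plane locations of all detections from all
    # cameras into one shared set of points.
    ground_points = []
    for detections, H in zip(detections_per_view, homographies):
        for foot_xy in detections:
            ground_points.append(image_to_ground(foot_xy, H))
    return np.array(ground_points)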

TODOs

  • Inference and Training Codes
  • Pretrained Models
  • Supplementary datasets Wildtrack+ and MultiviewX+

Prerequisites

This project has been tested in environments with:

  • CUDA > 11
  • torch 1.12.1, torchvision 0.13.1 (as installed below)
  • Windows 10/11, Ubuntu 20.04

Installation

  1. Create a new conda environment named mvchm to run this project:
conda create -n mvchm python=3.9.7
conda activate mvchm
  2. Make sure your system meets the CUDA requirements and install the core packages (a quick way to verify the install is shown after this list):
pip install easydict torch==1.12.1+cu113 torchvision==0.13.1+cu113 tqdm scipy opencv-python
  3. Clone this repository:
cd Your-Project-Folder
git clone git@github.com:Jiahao-Ma/MvCHM.git
  4. Download the pretrained checkpoints.
  • Download the standing point detection model checkpoint mspn_mx.pth and mspn_wt.pth from here and put them in \model\refine\checkpoint.
  • Download the human detection model checkpoint rcnn_mxp.pth and rcnn_wtp.pth from here and put them in \model\detector\checkpoint.
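After installation, a quick sanity check (the expected version strings are an assumption based on the pip command in step 2) is to confirm that PyTorch sees your GPU:

# Environment check: prints the installed torch/torchvision versions and
# whether a CUDA device is visible.
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 1.12.1+cu113
print("torchvision:", torchvision.__version__)  # expected: 0.13.1+cu113
print("CUDA available:", torch.cuda.is_available())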

Inference

Quick start for the project

  1. Download the pre-trained checkpoint Wildtrack.pth from here and put it in \checkpoints.
  2. Inference.
python inference.py --dataname Wildtrack --data_root /path/to/Wildtrack

Training

Train on the Wildtrack dataset, specifying the path to the dataset:

python train.py --dataname Wildtrack --data_root /path/to/Wildtrack

Train on the MultiviewX dataset, specifying the path to the dataset:

python train.py --dataname MultiviewX --data_root /path/to/MultiviewX

Evaluation

Evaluate a trained model. Pass the experiment's config file via --cfg_file:

# Example: --cfg_file: experiments\2022-10-23_19-53-52_wt\MvDDE.yaml
python evaluate.py --dataname Wildtrack --data_root /path/to/Wildtrack --cfg_file /path/to/cfg_file

Wildtrack+ and MultiviewX+ (Optional)

We provide the supplementary datasets Wildtrack+ and MultiviewX+, which add annotations for pedestrians located outside the predefined ground plane. You can download the datasets here.
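As a quick way to inspect the annotations (a minimal sketch assuming the per-frame JSON layout of the original Wildtrack annotations, i.e. a list of pedestrian records with a personID field; the "+" variants may organize the extra labels differently):

import json

# Count annotated pedestrians in one frame file (the path and field names
# are assumptions based on the original Wildtrack annotation format).
with open("annotations_positions/00000000.json") as f:
    frame = json.load(f)

print("pedestrians in frame:", len(frame))
print("person IDs:", [record.get("personID") for record in frame])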
