Source-free Depth for Object Pop-out, ICCV'23

Official PyTorch implementation of the ICCV'23 paper Source-free Depth for Object Pop-out, one of the first attempts at RGB-D camouflaged object detection (COD).

Abstract

Depth cues are known to be useful for visual perception. However, direct measurement of depth is often impracticable. Fortunately, though, modern learning-based methods offer promising depth maps by inference in the wild. In this work, we adapt such depth inference models for object segmentation using the objects' pop-out prior in 3D. The pop-out is a simple composition prior that assumes objects reside on the background surface. Such a compositional prior allows us to reason about objects in the 3D space. More specifically, we adapt the inferred depth maps such that objects can be localized using only 3D information. Such separation, however, requires knowledge about the contact surface, which we learn using the weak supervision of the segmentation mask. Our intermediate representation of the contact surface, and thereby reasoning about objects purely in 3D, allows us to better transfer the depth knowledge into semantics. The proposed adaptation method uses only the depth model, without needing the source data used for training, making the learning process efficient and practical. Our experiments on eight datasets of two challenging tasks, namely camouflaged object detection and salient object detection, consistently demonstrate the benefit of our method in terms of both performance and generalizability.

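For intuition, the following is a minimal sketch, not the repository's code, of how the pop-out prior can turn depth alone into object evidence: pixels that lie in front of the estimated contact surface are scored as likely object pixels. The function name, tensor shapes, the sign convention (larger value = farther from the camera), and the per-image normalization are illustrative assumptions.

import torch
import torch.nn.functional as F

def pop_out_score(depth, contact_depth, eps=1e-6):
    """Score how strongly each pixel pops out in front of the contact surface.

    depth:         (B, 1, H, W) depth inferred by an off-the-shelf monocular model
    contact_depth: (B, 1, H, W) estimated depth of the background (contact) surface
    Both assume the convention that larger values are farther from the camera.
    """
    # Pixels closer to the camera than the contact surface are object candidates.
    residual = F.relu(contact_depth - depth)
    # Per-image normalization so scores are comparable across scenes (illustrative choice).
    flat = residual.flatten(1)
    max_val = flat.max(dim=1, keepdim=True).values.clamp_min(eps)
    return (flat / max_val).view_as(residual)

In the paper, this purely geometric reasoning is combined with weak supervision from segmentation masks to learn the contact surface; the sketch only illustrates the compositional prior itself.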

Training/Testing Datasets

The RGB-D datasets with GT depth can be found at SPNet.

The COD dataset with source-free depth can be downloaded from here (Training/Testing).

Train and Test

Please follow the training, inference, and evaluation steps:

python train.py
python test_produce_maps.py
python test_evaluation_maps.py

Make sure you have updated the dataset paths in the config file and in the scripts listed above.

We use the same evaluation protocol as here.
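For orientation only, here is a minimal sketch of two measures commonly reported on SOD/COD benchmarks, mean absolute error (MAE) and the adaptive F-measure; it is not the linked evaluation code, which should be used for the official numbers, and the helper names are assumptions.

import numpy as np

def mae(pred, gt):
    """Mean absolute error between a prediction map in [0, 1] and a binary ground truth."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def adaptive_fmeasure(pred, gt, beta2=0.3):
    """F-measure at the adaptive threshold 2 * mean(pred), as commonly used in SOD/COD."""
    thr = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thr
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)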

Results

RGB-D SOD

Our results for RGB-D salient object detection (SOD) benchmarks can be downloaded here (Google Drive).

Quantitative comparison

(figure: quantitative comparison on RGB-D SOD benchmarks)

Qualitative comparison

(figure: qualitative comparison on RGB-D SOD benchmarks)

COD

Our results for camouflaged object detection (COD) benchmarks can be downloaded here (Google Drive).

The checkpoint can be downloaded here (Google Drive).

Quantitative comparison

(figure: quantitative comparison on COD benchmarks)

Qualitative comparison

(figure: qualitative comparison on COD benchmarks)

Towards urban applications

We take the pretrained and frozen COD checkpoint and find that our method also generalizes well to nighttime urban scenes:

(figure: qualitative results on nighttime urban scenes)

Citation

If you find this repo useful, please consider citing:

@INPROCEEDINGS{wu2023popnet,
  title={Source-free depth for object pop-out},
  author={Wu, Zongwei and Paudel, Danda Pani and Fan, Deng-Ping and Wang, Jingjing and Wang, Shuo and Demonceaux, Cédric and Timofte, Radu and Van Gool, Luc},
  booktitle={ICCV}, 
  year={2023},
}

Open Discussion

In the paper, we have made "an assumption that pixels in front of the contact surface belong to objects". We therefore compute Dpo - Dc for the depth-to-semantic transfer. It would arguably be more logical to use Dc - Dpo instead; however, this difference should not severely affect the final results.
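To make the sign issue concrete, here is a small illustrative snippet (variable names and the depth convention are assumptions): with either ordering, clamping keeps exactly the pixels on the object side of the contact surface, so the two choices carry the same information up to a sign flip.

import torch

def object_evidence(d_po, d_c):
    """d_po: adapted (pop-out) depth, d_c: contact-surface depth, both (B, 1, H, W).

    Assuming larger depth means farther away, pixels in front of the contact
    surface satisfy d_po < d_c, so clamping d_c - d_po keeps exactly those pixels.
    """
    return torch.clamp(d_c - d_po, min=0)

# Using the opposite ordering, d_po - d_c, only flips the sign, so clamping the
# negated difference recovers the same map:
d_po, d_c = torch.rand(1, 1, 4, 4), torch.rand(1, 1, 4, 4)
assert torch.allclose(object_evidence(d_po, d_c), torch.clamp(-(d_po - d_c), min=0))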

Related works

  • ACMMM 23 - Object Segmentation by Mining Cross-Modal Semantics [Code]
  • TIP 23 - HiDANet: RGB-D Salient Object Detection via Hierarchical Depth Awareness [Code]
  • 3DV 22 - Robust RGB-D Fusion for Saliency Detection [Code]
  • 3DV 21 - Modality-Guided Subnetwork for Salient Object Detection [Code]
