DynRefer: Delving into Region-level Multi-modality Tasks via Dynamic Resolution

This is the official implementation of the paper 'DynRefer: Delving into Region-level Multi-modality Tasks via Dynamic Resolution'. This repository contains the PyTorch training and evaluation code.

Python 3.8 | PyTorch 2.1 | License

1. Contents

2. Todo

  • Release training and evaluation code
  • Release demo code

3. Introduction

Region-level multi-modality methods translate referred image regions into human-preferred language descriptions. Unfortunately, most existing methods rely on fixed visual inputs and lack the resolution adaptability needed to produce precise language descriptions. In this study, we propose a dynamic resolution approach, referred to as DynRefer, which pursues high-accuracy region-level referring by mimicking the resolution adaptability of human visual cognition. DynRefer first performs stochastic vision-language alignment: it aligns the desired language descriptions of multi-modality tasks with images of stochastic resolution, constructed by nesting a set of views around the referred region. DynRefer then performs dynamic multi-modality referring, realized by selecting views based on image and language priors. This allows the visual information used for referring to better match human preferences, thereby improving the representational adaptability of region-level multi-modality models. Extensive experiments show that DynRefer brings mutual improvement across tasks including region-level captioning, open-vocabulary region recognition, and attribute detection. Last but not least, DynRefer achieves new state-of-the-art results on multiple region-level multi-modality tasks using a single model.
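To make the nested-view idea concrete, the snippet below is a minimal sketch of how views of different effective resolutions could be cropped around a referred region. It is a hypothetical illustration, not the repository's actual implementation: the function name `build_nested_views`, the expansion ratios, and the output size are all assumptions made for demonstration.

```python
import torch
import torchvision.transforms.functional as TF

def build_nested_views(image, region_box, ratios=(1.0, 2.0, 4.0), out_size=224):
    """Crop a set of nested views around a referred region and resize them
    to a shared resolution. Wider crops cover more context with the same
    number of output pixels, so each view effectively sees the region at a
    different resolution. All names and defaults here are illustrative.

    image:      (C, H, W) float tensor
    region_box: (x1, y1, x2, y2) in pixels
    """
    _, H, W = image.shape
    x1, y1, x2, y2 = region_box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = x2 - x1, y2 - y1

    views = []
    for r in ratios:
        # Expand the region box by factor r around its center, clamped to the image.
        nx1 = max(int(cx - r * w / 2), 0)
        ny1 = max(int(cy - r * h / 2), 0)
        nx2 = min(int(cx + r * w / 2), W)
        ny2 = min(int(cy + r * h / 2), H)
        crop = image[:, ny1:ny2, nx1:nx2]
        views.append(TF.resize(crop, [out_size, out_size], antialias=True))
    return torch.stack(views)  # (len(ratios), C, out_size, out_size)

# During training, a random subset of such views could be sampled to emulate
# "stochastic resolution"; at inference, views would instead be selected
# based on image and language priors.
views = build_nested_views(torch.rand(3, 480, 640), (200, 120, 360, 300))
```

The stochastic sampling and prior-based view selection themselves are defined by the method described in the paper; the sketch above only illustrates the nesting of views around the referred region.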

4. Results

5. Code Usage

6. Contacts

If you have any questions about our work or this repository, please don't hesitate to contact us by email or open an issue in this project.

7. Acknowledgment

  • Part of the code is borrowed from LAVIS, GlaMM, Osprey, RAM, and OVAD; we sincerely thank them for their contributions to the community.

8. Citation

@misc{zhao2024dynrefer,
      title={DynRefer: Delving into Region-level Multi-modality Tasks via Dynamic Resolution}, 
      author={Yuzhong Zhao and Feng Liu and Yue Liu and Mingxiang Liao and Chen Gong and Qixiang Ye and Fang Wan},
      year={2024},
      eprint={2405.16071},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@article{zhao2024controllable,
  title={Controllable Dense Captioner with Multimodal Embedding Bridging},
  author={Zhao, Yuzhong and Liu, Yue and Guo, Zonghao and Wu, Weijia and Gong, Chen and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2401.17910},
  year={2024}
}
