
# Autodistill Qwen-VL Module

This repository contains the code supporting the Qwen-VL base model for use with Autodistill.

Qwen-VL, introduced in the paper [Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond](https://arxiv.org/abs/2308.12966), is a multimodal vision model. Qwen-VL has visual grounding capabilities, which allow you to use the model for zero-shot object detection.

You can use Autodistill Qwen-VL to auto-label images for use in training a smaller, fine-tuned vision model.

Read the full Autodistill documentation.

Read the Qwen-VL Autodistill documentation.

## Installation

To use Qwen-VL with Autodistill, you need to install the following dependency:

```bash
pip3 install autodistill-qwen-vl
```

## Quickstart

```python
import cv2

from autodistill_qwen_vl import QwenVL
from autodistill.utils import plot
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our QwenVL prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label
# that will be saved for that caption in the generated annotations
# then, load the model
base_model = QwenVL(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    )
)

# run inference on a single image
results = base_model.predict("logistics.jpeg")

plot(
    image=cv2.imread("logistics.jpeg"),
    classes=base_model.ontology.classes(),
    detections=results
)

# label all images in a folder called `context_images`
base_model.label("./context_images", extension=".jpeg")
```
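Once labeling finishes, the annotated dataset can be used to train a smaller target model, which is the usual next step in an Autodistill workflow. Below is a minimal sketch using YOLOv8 as the target model; it assumes you have installed the separate `autodistill-yolov8` package and that `label()` wrote its output to the default `context_images_labeled` folder.

```python
# a minimal sketch: train a YOLOv8 target model on the auto-labeled data
# assumes `pip3 install autodistill-yolov8` and the default labeled-output folder
from autodistill_yolov8 import YOLOv8

target_model = YOLOv8("yolov8n.pt")
target_model.train("./context_images_labeled/data.yaml", epochs=200)
```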

## License

[add license information here]

## 🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!