YOLOv8 Car Counter is an optimized solution for counting cars in video streams or images using the YOLOv8 object detection model. This project improves performance by using a tracker to follow cars across frames, rather than re-counting detections in every frame, and by applying a mask so detection runs only on the relevant area.
## Features

- High Accuracy: Uses the YOLOv8 model for precise car detection.
- Optimized Performance: Uses a tracker to follow cars across frames instead of re-counting them in every frame, reducing the computational load (a sketch of this loop follows the list below).
- Area Masking: Restricts detection to a specific area of interest to improve efficiency.
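The detect-mask-track loop can be sketched roughly as follows, assuming the `ultralytics` and `opencv-python` packages; the weights path, mask image, and video filename are illustrative, not files shipped with this repository:

```python
# Rough sketch of the masked detect-and-track counting loop.
# Assumes ultralytics + opencv-python are installed, and mask.png is a
# white-on-black image the same size as the video frames (white = area of interest).
import cv2
from ultralytics import YOLO

model = YOLO("models/yolov8n.pt")            # any YOLOv8 weights file
mask = cv2.imread("mask.png")                # must match the frame dimensions
cap = cv2.VideoCapture("input/sample_video.mp4")

counted_ids = set()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    region = cv2.bitwise_and(frame, mask)    # blank out everything outside the mask
    # persist=True keeps tracker state across frames, so each car keeps one ID
    results = model.track(region, persist=True, classes=[2], verbose=False)
    for box in results[0].boxes:
        if box.id is not None:
            counted_ids.add(int(box.id))     # one unique track ID = one counted car

cap.release()
print(f"Cars counted: {len(counted_ids)}")
```

Counting unique track IDs rather than raw per-frame detections is what prevents the same car from being counted once per frame.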
## Prerequisites

- Python 3.8+
- pip (Python package installer)
- NVIDIA GPU with CUDA support (recommended for optimal performance)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/YOLOv8-Car-Counter.git
   cd YOLOv8-Car-Counter
   ```
2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
4. Download the YOLOv8 model weights:
   - Download the weights from the official YOLOv8 repository or use your own trained model (an automated option is sketched after this list).
   - Place the weights file in the `models` directory.
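If you use the `ultralytics` package, instantiating a model by name downloads the official weights automatically. A small helper sketch; the weights filename is an illustrative choice, and the `models` directory matches the layout above:

```python
# Fetch official YOLOv8 weights and move them into the models/ directory.
# Assumes the ultralytics package; yolov8n.pt is one of the published variants.
from pathlib import Path
from ultralytics import YOLO

YOLO("yolov8n.pt")                    # downloads yolov8n.pt on first use
Path("models").mkdir(exist_ok=True)
Path("yolov8n.pt").rename(Path("models") / "yolov8n.pt")
```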
## Usage

1. Prepare your input data:
   - Place your video files or images in the `input` directory.
2. Run the car counter:

   ```bash
   python car_counter.py --input_path ./input --output_path ./output
   ```
3. Check the output:
   - Processed videos or images with car count annotations will be saved in the `output` directory.
## Configuration

You can customize the behavior of the car counter by editing the `config.yaml` file. Key parameters include (a sample file is sketched below):

- `confidence_threshold`: Confidence threshold for YOLOv8 detections.
- `nms_threshold`: Non-maximum suppression threshold.
- `mask_area`: Coordinates of the area of interest.
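A sample `config.yaml` might look like this; the values are illustrative defaults, not the project's shipped settings:

```yaml
confidence_threshold: 0.4   # discard detections scoring below this
nms_threshold: 0.5          # IoU threshold for non-maximum suppression
mask_area:                  # polygon vertices (x, y) of the area of interest
  - [100, 300]
  - [1180, 300]
  - [1180, 650]
  - [100, 650]
```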
## Example

Process a single video:

```bash
python car_counter.py --input_path ./input/sample_video.mp4 --output_path ./output
```
## Contributing

Contributions are welcome! Please fork the repository and create a pull request with your changes. Ensure your code adheres to the project's coding standards and includes appropriate tests.
## License

This project is licensed under the MIT License. See the LICENSE file for details.