Most popular metrics used to evaluate object detection algorithms.
Object Detection Metrics. 14 object detection metrics: mean Average Precision (mAP), Average Recall (AR), Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats, as in COCO, PASCAL, ImageNet, etc.
A package to read and convert object detection datasets (COCO, YOLO, PascalVOC, LabelMe, CVAT, OpenImage, ...) and evaluate them with COCO and PascalVOC metrics.
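The format conversions such a package performs are mechanical; as a minimal sketch (hypothetical helper names, not the package's actual API), converting COCO and YOLO boxes to the PascalVOC corner format looks roughly like this:

```python
# Hypothetical converters between common bounding box formats.

def coco_to_voc(box):
    """COCO [x_min, y_min, width, height] -> VOC [x_min, y_min, x_max, y_max]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

def yolo_to_voc(box, img_w, img_h):
    """YOLO normalized [x_center, y_center, width, height] -> VOC absolute corners."""
    xc, yc, w, h = box
    return [(xc - w / 2) * img_w, (yc - h / 2) * img_h,
            (xc + w / 2) * img_w, (yc + h / 2) * img_h]

print(coco_to_voc([10, 20, 30, 40]))                 # [10, 20, 40, 60]
print(yolo_to_voc([0.5, 0.5, 0.2, 0.4], 640, 480))   # [256.0, 144.0, 384.0, 336.0]
```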
Online meter plotter for PyTorch. Real-time plotting of Accuracy, Loss, mAP, AUC, and Confusion Matrix
Information Retrieval with the Vector Space Model for news articles
A Query-Document pair ranking system using GloVe embeddings and RankCosine.
Python library for Object Detection metrics.
Understanding the use of mAP as a metric for object detection problems
Mean Average Precision from Scratch using PyTorch
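As a hedged illustration of what "from scratch" entails, per-class AP in PyTorch might look like the sketch below (the matching of detections to ground truth, e.g. at IoU >= 0.5, is assumed done upstream; mAP is then the mean of AP over classes):

```python
import torch

def average_precision(scores, is_tp, n_gt):
    """All-point interpolated AP for a single class.

    scores: detection confidences; is_tp: 1.0 where the detection matched a
    ground-truth box, else 0.0; n_gt: number of ground-truth boxes.
    """
    order = torch.argsort(scores, descending=True)
    tp = is_tp[order]
    cum_tp = torch.cumsum(tp, dim=0)
    cum_fp = torch.cumsum(1.0 - tp, dim=0)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Precision envelope: at each point, the best precision at any recall
    # level to the right (standard all-point interpolation).
    prec_env = torch.flip(torch.cummax(torch.flip(precision, [0]), 0).values, [0])
    # Area under the interpolated precision-recall curve.
    recall = torch.cat([torch.zeros(1), recall])
    return torch.sum((recall[1:] - recall[:-1]) * prec_env).item()

scores = torch.tensor([0.9, 0.8, 0.7, 0.6])
is_tp = torch.tensor([1.0, 0.0, 1.0, 1.0])
print(average_precision(scores, is_tp, n_gt=4))  # 0.625
```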
ArDoCo: Metrics for Classification & Ranking Tasks
All scripts related to YOLOv4 sliding-window detection
Evaluates a detector by calculating the mAP of its bounding box detections on a given test set. For now, this repo uses yolov3_detector as an example detector for illustration
This computer vision project constructs a dataset for 'Shoe' tracking using YOLOv8 models, emphasizing careful data organization, training, and evaluation. A final project for the Computer Vision course in the Ottawa Master's program (2023).
Implementing the training pipeline for YOLOv4 using PyTorch
Using Faster R-CNN to detect scratches, spots, and dents in damaged-car data
A Jupyter notebook demonstrating how to train and evaluate a YOLOv10 model for object detection on the Rock, Paper, Scissors dataset from Roboflow.
Evaluate a detection model's performance
Information retrieval system that returns ranked results for a given query
Information Retrieval with Lucene and the CISI dataset. Index documents and search them with IB, DFR, BM25, TF-IDF, Boolean, Axiomatic, and LM-Dirichlet similarities, and calculate Recall, Precision, MAP (Mean Average Precision), and F-Measure
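In the retrieval setting, MAP averages per-query average precision, where a query's AP is the mean of precision@k over the ranks k at which relevant documents appear. A minimal illustrative sketch (not tied to the Lucene project's code):

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean of precision@k at each relevant rank k."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: mean of AP over all queries. `runs` is [(ranking, relevant_set), ...]."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

print(mean_average_precision([
    (["d3", "d1", "d7"], {"d1", "d7"}),  # AP = (1/2 + 2/3) / 2
    (["d2", "d5"], {"d2"}),              # AP = 1.0
]))  # ~0.79
```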
Description of computing object tracking metrics.
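Tracking metrics build on detection counts; as one example, a hedged sketch of MOTA (Multiple Object Tracking Accuracy), which penalizes misses, false positives, and identity switches relative to the total number of ground-truth objects:

```python
def mota(frames):
    """MOTA = 1 - (misses + false positives + ID switches) / total ground truths.
    `frames` is a list of per-frame (fn, fp, idsw, n_gt) counts."""
    fn = sum(f[0] for f in frames)
    fp = sum(f[1] for f in frames)
    idsw = sum(f[2] for f in frames)
    gt = sum(f[3] for f in frames)
    return 1.0 - (fn + fp + idsw) / gt

# Two frames: (false negatives, false positives, ID switches, ground truths)
print(mota([(1, 0, 0, 5), (0, 1, 1, 5)]))  # 1 - 3/10 = 0.7
```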