# mAP

**Repository Path**: lpweb/mAP

## Basic Information

- **Project Name**: mAP
- **Description**: mean Average Precision - This code evaluates the performance of your neural net for object recognition.
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-03-15
- **Last Updated**: 2021-11-03

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# mAP (mean Average Precision)

[GitHub: Cartucho/mAP](https://github.com/Cartucho/mAP)

This code will evaluate the performance of your neural net for object recognition.
Using this matching criterion (a detection counts as a true positive when it has the same class as a ground-truth object and sufficient overlap with it), we calculate the precision/recall curve. E.g.:
Then we compute a version of the measured precision/recall curve with **precision monotonically decreasing** (shown in light red), by setting the precision for recall `r` to the maximum precision obtained for any recall `r' >= r`.
Finally, we compute the AP as the **area under this curve** (shown in light blue) by numerical integration.
No approximation is involved since the curve is piecewise constant.
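The sketch below is a minimal, self-contained illustration of that computation (it is not the repository's `main.py`): it takes a measured precision/recall curve, enforces monotonically decreasing precision, and sums the area of the resulting piecewise-constant curve.
```
# Minimal illustration of the AP computation described above (not main.py).
# `recalls` and `precisions` are parallel lists from the measured curve,
# with recall in increasing order.
def average_precision(recalls, precisions):
    # Pad the curve so the area covers the full [0, 1] recall range.
    rec = [0.0] + list(recalls) + [1.0]
    prec = [0.0] + list(precisions) + [0.0]

    # Make precision monotonically decreasing: each point takes the maximum
    # precision seen at any recall to its right.
    for i in range(len(prec) - 2, -1, -1):
        prec[i] = max(prec[i], prec[i + 1])

    # The curve is now piecewise constant, so summing rectangle areas is exact.
    ap = 0.0
    for i in range(1, len(rec)):
        ap += (rec[i] - rec[i - 1]) * prec[i]
    return ap

print(average_precision([0.2, 0.4, 0.4, 0.6], [1.0, 0.5, 0.66, 0.5]))
```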
#### 2. Calculate mAP
We calculate the mean of all the APs, resulting in an mAP value from 0 to 100%. E.g.:
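As a small illustration (the class names and AP values below are hypothetical, not output of this repo), the mAP is just the arithmetic mean of the per-class APs:
```
# Hypothetical per-class AP values, for illustration only.
ap_per_class = {"cat": 0.73, "dog": 0.58, "person": 0.91}

m_ap = sum(ap_per_class.values()) / len(ap_per_class)
print(f"mAP = {m_ap:.2%}")  # prints: mAP = 74.00%
```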
## Prerequisites
You need to install:
- [Python](https://www.python.org/downloads/)
Optional:
- **plot** the results by [installing Matplotlib](https://matplotlib.org/users/installing.html) - Linux, macOS and Windows:
1. `python -m pip install -U pip`
2. `python -m pip install -U matplotlib`
- show **animation** by installing [OpenCV](https://www.opencv.org/):
1. `python -m pip install -U pip`
2. `python -m pip install -U opencv-python`
## Quick-start
To start using mAP, clone the repo:
```
git clone https://github.com/Cartucho/mAP
```
## Running the code
Step by step:
1. [Create the ground-truth files](#create-the-ground-truth-files)
2. Copy the ground-truth files into the folder **input/ground-truth/**
3. [Create the detection-results files](#create-the-detection-results-files)
4. Copy the detection-results files into the folder **input/detection-results/**
5. Run the code:
```
python main.py
```
Optional (if you want to see the **animation**):
6. Insert the images into the folder **input/images-optional/**
#### PASCAL VOC, Darkflow and YOLO users
In the [scripts/extra](https://github.com/Cartucho/mAP/tree/master/scripts/extra) folder you can find additional scripts to convert **PASCAL VOC**, **darkflow** and **YOLO** files into the required format.
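For a one-off conversion you can also do it by hand; the sketch below shows the general idea for YOLO-style annotations. It assumes input lines of the form `<class_id> <x_center> <y_center> <width> <height>` with coordinates normalized to the image size, and produces `<class_name> <left> <top> <right> <bottom>` lines in absolute pixels; the class list here is hypothetical, so rely on the bundled scripts and the format description below for the exact expected layout.
```
# Hedged sketch: convert one YOLO-style annotation line into an absolute-pixel
# "<class_name> <left> <top> <right> <bottom>" line. The class list is a
# hypothetical example; use the scripts in scripts/extra for real conversions.
CLASS_NAMES = ["cat", "dog", "person"]

def yolo_line_to_gt(line, img_width, img_height):
    class_id, x_c, y_c, w, h = line.split()
    x_c = float(x_c) * img_width
    y_c = float(y_c) * img_height
    w = float(w) * img_width
    h = float(h) * img_height
    left = int(round(x_c - w / 2))
    top = int(round(y_c - h / 2))
    right = int(round(x_c + w / 2))
    bottom = int(round(y_c + h / 2))
    return f"{CLASS_NAMES[int(class_id)]} {left} {top} {right} {bottom}"

print(yolo_line_to_gt("1 0.5 0.5 0.2 0.4", img_width=640, img_height=480))
# prints: dog 256 144 384 336
```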
#### Create the ground-truth files
- Create a separate ground-truth text file for each image.
- Use **matching names** for the files (e.g. image: "image_1.jpg", ground-truth: "image_1.txt").
- In these files, each line should be in the following format:
```