
Parameters


Input images

  • (1) Template image: This is the image searched for in the target image. It must be smaller than the target image and can be of any bit depth. RGB images are automatically converted to 32-bit grayscale.

  • (2) Target image: This is the image in which the template is searched (and which must therefore be larger than the template). As for the template image, any bit depth is possible.
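
The sketch below illustrates these requirements, assuming Python and OpenCV (the library whose matching functions the plugin documentation refers to); the file names are hypothetical, and the point is only the grayscale conversion and the size constraint.

```python
import cv2

# hypothetical file names, for illustration only
template = cv2.imread("template.tif", cv2.IMREAD_UNCHANGED)
target = cv2.imread("target.tif", cv2.IMREAD_UNCHANGED)

# multi-channel (RGB) images are reduced to a single grayscale channel,
# similarly to what the plugin does internally
if template.ndim == 3:
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
if target.ndim == 3:
    target = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)

# work in 32-bit float, as the plugin converts images to 32-bit grayscale
template = template.astype("float32")
target = target.astype("float32")

# the template must fit inside the target image
assert template.shape[0] <= target.shape[0] and template.shape[1] <= target.shape[1]
```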

Template transformations

NB: The more transformations are searched, the longer the computation time, but the higher the probability of finding the object.

  • (3) Flip template vertically/horizontally: Performing additional searches with transformed templates maximises the probability of finding the object when it is expected to appear in different orientations in the image.
    This is because template matching only looks for translated versions of the provided templates.
    Possible transformations include flipping (also called mirroring) and rotation (below). Scaling is not proposed in the interface, but several templates at different scales can be provided in the multiple-template version of the plugin.
    If both vertical and horizontal flipping are selected, the plugin generates 2 additional templates, one for each transformation (see the transformation sketch after this list).

  • (4) Additional rotations: A list of clockwise rotations in degrees, separated by commas, can be provided, e.g. 45,90,180.
    As with flipping, performing searches with rotated versions of the template increases the probability of finding the object if it is expected to appear rotated.
    If flipping is selected, both the original and flipped versions of the template are rotated.
    NOTE: The template must remain rectangular, i.e. for angles that are not "square rotations" (not a multiple of 90°) the rotated template contains background areas, which are filled either with the modal gray value of the template (Fiji) or with the values of the border pixels of the initial template (KNIME). For higher performance, the non-square rotations can be generated manually before calling the plugin and saved as additional templates.

  • (5) Score Type (default: 0-mean Normalised Cross-Correlation): This is the formula used to compute the probability map (see the OpenCV documentation). The choice is limited to normalised scores so that the maps obtained with different templates can be compared when multiple templates are used (see the score-type sketch after this list).

    • Normalised Square Difference (NSD): Each pixel of the probability map is the sum of squared differences between the gray levels of the image patch and of the template, normalised by the square root of the product of their summed squared pixel values. A high probability of finding the object therefore corresponds to a low score value (this score usually performs less well than the correlation scores).
    • Normalised Cross-Correlation (NCC): Each pixel of the probability map is the sum of pixel-wise products between the template and the current image patch, normalised in the same way. A high probability of finding the object corresponds to a high score value.
    • 0-mean Normalised Cross-Correlation (0-mean NCC): The mean value of the template and of the image patch is subtracted from each pixel value before computing the cross-correlation as above. As for NCC, a high probability of finding the object corresponds to a high score value. This method is usually the most robust to changes of illumination.
  • (6) Expected number of objects (N): The number of objects expected in each image. The plugin returns at most N predicted locations of the object.
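
The sketch below shows one way to generate the flipped and rotated copies of the template described in (3) and (4); the function name, its arguments and the modal-value fill are illustrative assumptions, not the plugin's actual code. It reuses the `template` array loaded in the earlier sketch.

```python
import cv2
import numpy as np

def transformed_templates(tmpl, flip_v=False, flip_h=False, angles=()):
    """Return the list of templates to search: original, flipped and rotated copies."""
    templates = [tmpl]
    if flip_v:
        templates.append(cv2.flip(tmpl, 0))   # flip around the horizontal axis
    if flip_h:
        templates.append(cv2.flip(tmpl, 1))   # flip around the vertical axis

    # modal gray value, used to fill the corners created by non-square rotations
    # (mimicking the Fiji behaviour described above)
    values, counts = np.unique(tmpl, return_counts=True)
    fill = float(values[counts.argmax()])

    rotated = []
    for t in templates:                       # flipped copies are rotated as well
        for angle in angles:                  # clockwise angles in degrees, e.g. (45, 90, 180)
            h, w = t.shape[:2]
            m = cv2.getRotationMatrix2D((w / 2, h / 2), -angle, 1.0)  # negative = clockwise
            cos, sin = abs(m[0, 0]), abs(m[0, 1])
            nw, nh = int(h * sin + w * cos), int(h * cos + w * sin)   # enlarged canvas
            m[0, 2] += nw / 2 - w / 2
            m[1, 2] += nh / 2 - h / 2
            rotated.append(cv2.warpAffine(t, m, (nw, nh), borderValue=fill))
    return templates + rotated

# e.g. both flips plus a 45-degree rotation of every resulting template
all_templates = transformed_templates(template, flip_v=True, flip_h=True, angles=(45,))
```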
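
The three score types correspond to the following OpenCV matching methods. The short sketch below (reusing `target` and `template` from above) only illustrates that the best match is a minimum of the difference score and a maximum of the correlation scores.

```python
import cv2

# mapping between the score types above and the OpenCV methods
methods = {
    "Normalised Square Difference": cv2.TM_SQDIFF_NORMED,          # best match = lowest value
    "Normalised Cross-Correlation": cv2.TM_CCORR_NORMED,           # best match = highest value
    "0-mean Normalised Cross-Correlation": cv2.TM_CCOEFF_NORMED,   # best match = highest value
}

for name, method in methods.items():
    score_map = cv2.matchTemplate(target, template, method)  # one score per possible translation
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(score_map)
    best_xy = min_loc if method == cv2.TM_SQDIFF_NORMED else max_loc
    print(name, "- best match at (x, y) =", best_xy)
```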

If N>1

Parameters 7 and 8 are only used when several objects are expected in each image.

  • (7) Score Threshold (range 0-1, default 0.5): Used for the extrema detection on the score map(s).
    If the difference score (NSD) is used, only minima below this threshold are collected before NMS (i.e. increase the threshold to evaluate more hits).
    If a correlation score is used, only maxima above this threshold are collected before NMS (i.e. decrease the threshold to evaluate more hits).

  • (8) Maximal overlap (range 0-1, default 0.3): Typically in the range 0.1-0.5.
    This parameter is used for the Non-Maxima Suppression (NMS). It must be adjusted to prevent overlapping detections while still allowing detections of close objects. It is the maximal value allowed for the Intersection over Union (IoU), i.e. the ratio between the intersection area and the union area of two overlapping bounding boxes.
    If 2 bounding boxes overlap above this threshold, the one with the lower score is discarded (see the detection sketch after this list).
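
A minimal sketch of these two steps, assuming a correlation-type score map (maxima correspond to good matches), bounding boxes given as (x, y, width, height), and a simple 3x3 neighbourhood test for the local maxima; the function names are illustrative, not the plugin's exact implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(score_map, score_threshold=0.5):
    # keep pixels that are local maxima of their 3x3 neighbourhood and above the threshold
    is_local_max = score_map == maximum_filter(score_map, size=3)
    ys, xs = np.nonzero(is_local_max & (score_map >= score_threshold))
    return [((x, y), score_map[y, x]) for x, y in zip(xs, ys)]

def iou(a, b):
    # a, b: bounding boxes as (x, y, width, height)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(detections, max_overlap=0.3, n_objects=1):
    # detections: list of (bbox, score); keep the best-scoring, non-overlapping boxes
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, kept_box) <= max_overlap for kept_box, _ in kept):
            kept.append((box, score))
        if len(kept) == n_objects:
            break
    return kept
```

For example, with (w, h) the template size, the peaks can be turned into bounding boxes and filtered with `nms([((x, y, w, h), s) for (x, y), s in find_peaks(score_map, 0.5)], max_overlap=0.3, n_objects=N)`.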

Output

  • (9) Add detected ROI to ROI Manager: The predicted locations are added as rectangular bounding boxes to the ROI Manager and overlaid on the image (if they are not visible, tick "Display all" in the ROI Manager).

  • (10) Show result table: Displays a result table at the end of the execution with, for each match, the name of the image and of the template, the coordinates of the bounding box (centre and top-left corner), and the score of the detection.
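
As an illustration only (the actual column names of the plugin's table may differ), the information above could be assembled as follows, with a hypothetical list of detections:

```python
import pandas as pd

# hypothetical detections: (template name, bounding box (x, y, width, height), score)
hits = [("template.tif", (120, 80, 64, 48), 0.92)]

rows = [{
    "Image": "target.tif",                        # image in which the match was found
    "Template": name,                             # template that gave the match
    "Xcorner": x, "Ycorner": y,                   # top-left corner of the bounding box
    "Xcenter": x + w / 2, "Ycenter": y + h / 2,   # centre of the bounding box
    "Score": score,                               # score of the detection
} for name, (x, y, w, h), score in hits]

print(pd.DataFrame(rows))
```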