U-Net for Semantic Image Segmentation on Microscopic Drilling Tool Images for Wear Detection

This repo contains the code accompanying our paper "Evaluation of Data Augmentation and Loss Functions in Semantic Image Segmentation for Drilling Tool Wear Detection".

Data and Data preparation

The microscopic images of cutting inserts of drilling tools have dimensions ranging from 4750 x 1200 pixels up to 11500 x 1500 pixels. To process such high-resolution images with U-Net, they are partitioned into smaller tiles. For training, the script prepare/create_augmented_tiles.py cuts smaller (overlapping) tiles from the whole images and applies several augmentation techniques.
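
A minimal sketch of how overlapping tiles could be extracted with a sliding window is shown below; tile size and stride are placeholder values, and the actual parameters as well as the augmentation steps are defined in prepare/create_augmented_tiles.py.

```python
# Sketch of overlapping tile extraction (hypothetical tile size and stride).
import numpy as np

def extract_tiles(image: np.ndarray, tile_size: int = 512, stride: int = 256):
    """Yield square, overlapping tiles from an H x W x C image array."""
    h, w = image.shape[:2]
    for y in range(0, max(h - tile_size, 0) + 1, stride):
        for x in range(0, max(w - tile_size, 0) + 1, stride):
            yield image[y:y + tile_size, x:x + tile_size]
    # Note: the right and bottom borders are not fully covered here;
    # the real script handles image borders explicitly.

dummy = np.zeros((1200, 4750, 3), dtype=np.uint8)  # smallest image size in the data set
print(len(list(extract_tiles(dummy))))             # 3 x 17 = 51 overlapping tiles
```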

We distinguish two types of wear: abrasive wear, coloured in blue, and build-up-edge, coloured in yellow. Routines in src/data_loader.py and src/utils.py are specific to these colour labels and have to be adjusted when applied to differently coloured masks.
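
For illustration, such a colour-to-class mapping can look like the sketch below; the exact RGB values are assumptions, and the real routines live in src/data_loader.py and src/utils.py.

```python
# Sketch of mapping a colour-coded mask to integer class labels.
# The RGB values for the two wear colours are assumptions.
import numpy as np

ABRASIVE_BLUE = np.array([0, 0, 255], dtype=np.uint8)      # abrasive wear (blue)
BUILD_UP_YELLOW = np.array([255, 255, 0], dtype=np.uint8)  # build-up-edge (yellow)

def mask_to_classes(rgb_mask: np.ndarray) -> np.ndarray:
    """Return a 2-D label map: 0 = background, 1 = abrasive wear, 2 = build-up-edge."""
    classes = np.zeros(rgb_mask.shape[:2], dtype=np.uint8)
    classes[np.all(rgb_mask == ABRASIVE_BLUE, axis=-1)] = 1
    classes[np.all(rgb_mask == BUILD_UP_YELLOW, axis=-1)] = 2
    return classes
```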

Training

Training (with cross validation) is performed in train_unet.py.

Based on the two different wear types, the code can be used in four different modes (a sketch of the corresponding target construction follows the list):

  • mode 0: build-up-edge only (binary problem)
  • mode 1: abrasive wear only (binary problem)
  • mode 2: build-up-edge and abrasive wear as one class (binary problem)
  • mode 3: build-up-edge and abrasive wear as distinct classes (multi class problem)
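
The sketch below illustrates how a label map (0 = background, 1 = abrasive wear, 2 = build-up-edge) could be turned into a training target for each mode; the actual logic in train_unet.py may differ in detail.

```python
# Illustrative construction of training targets per mode (not the actual
# train_unet.py code); `classes` is a 2-D label map with
# 0 = background, 1 = abrasive wear, 2 = build-up-edge.
import numpy as np

def build_target(classes: np.ndarray, mode: int) -> np.ndarray:
    if mode == 0:  # build-up-edge only (binary)
        return (classes == 2).astype(np.float32)[..., None]
    if mode == 1:  # abrasive wear only (binary)
        return (classes == 1).astype(np.float32)[..., None]
    if mode == 2:  # both wear types merged into one class (binary)
        return (classes > 0).astype(np.float32)[..., None]
    if mode == 3:  # background / abrasive wear / build-up-edge as one-hot channels
        return np.eye(3, dtype=np.float32)[classes]
    raise ValueError(f"unknown mode: {mode}")
```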

Furthermore, three different loss functions can be used (the IoU-based variant is sketched after the list):

  • Cross Entropy
  • Focal Cross Entropy
  • IoU-based loss
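
As an example, the IoU-based loss can be written as a soft Jaccard loss; the sketch below assumes a TensorFlow/Keras setup and is not necessarily the exact implementation used in train_unet.py.

```python
# Soft IoU (Jaccard) loss sketch, assuming probabilities in y_pred
# and one-hot (or binary) targets in y_true.
import tensorflow as tf

def soft_iou_loss(y_true, y_pred, eps=1e-6):
    axes = (1, 2, 3)  # sum over height, width and channels
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    union = tf.reduce_sum(y_true + y_pred, axis=axes) - intersection
    return 1.0 - tf.reduce_mean((intersection + eps) / (union + eps))
```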

At the end of training, the evaluation script is started; it evaluates the trained models on an unseen development set, which has to be prepared beforehand using prepare/create_augmented_tiles.py.
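
Conceptually, the evaluation compares predicted and ground-truth label maps on the development set, for example via a per-class IoU as in the sketch below; the actual evaluation script may report additional metrics.

```python
# Per-class IoU from two integer label maps of equal shape (sketch).
import numpy as np

def per_class_iou(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    ious = []
    for c in range(n_classes):
        intersection = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious
```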

Predicting

For predicting whole images, the overlap-tile strategy is applied, as proposed by Ronneberger et al. in U-Net: Convolutional Networks for Biomedical Image Segmentation. The pipeline in predictor.py can be run in mode 0, which stores only the predicted masks, or in mode 1, which additionally overlays the predicted mask on the original image.
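
The sketch below outlines the overlap-tile idea: the image is mirror-padded, overlapping tiles are predicted, and only the centre crop of each tile prediction is kept; tile and margin sizes are placeholders, and predictor.py implements the full pipeline.

```python
# Overlap-tile prediction sketch (hypothetical tile and margin sizes).
import numpy as np

def predict_full_image(image: np.ndarray, predict_tile, tile: int = 512, margin: int = 64) -> np.ndarray:
    """Stitch centre crops of overlapping tile predictions into a full label map.

    `predict_tile` is assumed to map a (tile, tile, C) array to a (tile, tile)
    integer label map, e.g. the argmax over the U-Net output channels.
    """
    core = tile - 2 * margin                 # centre region kept from each tile
    h, w = image.shape[:2]
    n_y, n_x = -(-h // core), -(-w // core)  # ceil division
    # Mirror-pad so that every centre region is covered by a full tile.
    padded = np.pad(
        image,
        ((margin, margin + n_y * core - h), (margin, margin + n_x * core - w), (0, 0)),
        mode="reflect",
    )
    out = np.zeros((n_y * core, n_x * core), dtype=np.uint8)
    for i in range(n_y):
        for j in range(n_x):
            y0, x0 = i * core, j * core
            pred = predict_tile(padded[y0:y0 + tile, x0:x0 + tile])
            out[y0:y0 + core, x0:x0 + core] = pred[margin:tile - margin, margin:tile - margin]
    return out[:h, :w]
```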
