Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.
**This is an unofficial fork of the library, maintained by Eugene Khvedchenya (ex-Albumentations core team). Use at your own risk.**
On February 24th, 2022, Russia declared war and invaded peaceful Ukraine. After the annexation of Crimea and the occupation of the Donbas region, Putin's regime decided to destroy Ukrainian nationality. Ukrainians show fierce resistance and demonstrate to the entire world what it's like to fight for the nation's independence.
Ukraine's government launched a website to help russian mothers, wives & sisters find their beloved ones killed or captured in Ukraine - https://200rf.com & https://t.me/rf200_now (Telegram channel). Our goal is to inform those still in Russia & Belarus, so they refuse to assault Ukraine.
Help us get maximum exposure to what is happening in Ukraine, violence, and inhuman acts of terror that the "Russian World" has brought to Ukraine. This is a comprehensive Wiki on how you can help end this war: https://how-to-help-ukraine-now.super.site/
Official channels
- Official account of the Parliament of Ukraine
- Ministry of Defence
- Office of the president
- Cabinet of Ministers of Ukraine
- Center of strategic communications
- Minister of Foreign Affairs of Ukraine
Glory to Ukraine!
Albumentations applies augmentations to an original image to create new training images; a minimal code example appears below.
- Albumentations supports all common computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation.
- The library provides a simple unified API to work with all data types: images (RGB images, grayscale images, multispectral images), segmentation masks, bounding boxes, and keypoints.
- The library contains more than 70 different augmentations to generate new training samples from the existing data.
- Albumentations is fast. We benchmark each new release to ensure that augmentations provide maximum speed.
- It works with popular deep learning frameworks such as PyTorch and TensorFlow. By the way, Albumentations is a part of the PyTorch ecosystem.
- Written by experts. The authors have experience both working on production computer vision systems and participating in competitive machine learning. Many core team members are Kaggle Masters and Grandmasters.
- The library is widely used in industry, deep learning research, machine learning competitions, and open source projects.
- Authors
- Installation
- Documentation
- A simple example
- Getting started
- Who is using Albumentations
- List of augmentations
- A few more examples of augmentations
- Benchmarking results
- Contributing
- Comments
- Citing
Alexander Buslaev — Computer Vision Engineer at Mapbox | Kaggle Master
Alex Parinov — Tech Lead at SberDevices | Kaggle Master
Vladimir I. Iglovikov — Staff Engineer at Lyft Level5 | Kaggle Grandmaster
Eugene Khvedchenya — Computer Vision Research Engineer at Piñata Farms | Kaggle Grandmaster
Mikhail Druzhinin — Computer Vision Engineer at ID R&D | Kaggle Expert
Albumentations requires Python 3.6 or higher. To install the latest version from PyPI:
pip install -U albumentations
Other installation options are described in the documentation.
The full documentation is available at https://albumentations.ai/docs/.
import albumentations as A
import cv2
# Declare an augmentation pipeline
transform = A.Compose([
A.RandomCrop(width=256, height=256),
A.HorizontalFlip(p=0.5),
A.RandomBrightnessContrast(p=0.2),
])
# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]
Please start with the introductory articles about why image augmentation is important and how it helps to build better models.
If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles that have an in-depth description of that task. We also have a list of examples of applying Albumentations to different use cases.
We have examples of using Albumentations along with PyTorch and TensorFlow.
Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.
- A list of papers that cite Albumentations.
- A list of teams that used Albumentations and achieved top results in machine learning competitions.
- Open source projects that use Albumentations.
Pixel-level transforms change only the input image and leave any additional targets such as masks, bounding boxes, and keypoints unchanged. The list of pixel-level transforms:
- AdvancedBlur
- Blur
- CLAHE
- ChannelDropout
- ChannelShuffle
- ColorJitter
- Downscale
- Emboss
- Equalize
- FDA
- FancyPCA
- FromFloat
- GaussNoise
- GaussianBlur
- GlassBlur
- HistogramMatching
- HueSaturationValue
- ISONoise
- ImageCompression
- InvertImg
- MedianBlur
- MotionBlur
- MultiplicativeNoise
- Normalize
- PixelDistributionAdaptation
- Posterize
- RGBShift
- RandomBrightnessContrast
- RandomFog
- RandomGamma
- RandomRain
- RandomShadow
- RandomSnow
- RandomSunFlare
- RandomToneCurve
- RingingOvershoot
- Sharpen
- Solarize
- Superpixels
- TemplateTransform
- ToFloat
- ToGray
- ToSepia
- UnsharpMask
Spatial-level transforms simultaneously change both an input image and additional targets such as masks, bounding boxes, and keypoints. The following table shows which additional targets each transform supports.
Transform | Image | Masks | BBoxes | Keypoints |
---|---|---|---|---|
Affine | ✓ | ✓ | ✓ | ✓ |
CenterCrop | ✓ | ✓ | ✓ | ✓ |
CoarseDropout | ✓ | ✓ | ✓ | |
Crop | ✓ | ✓ | ✓ | ✓ |
CropAndPad | ✓ | ✓ | ✓ | ✓ |
CropNonEmptyMaskIfExists | ✓ | ✓ | ✓ | ✓ |
ElasticTransform | ✓ | ✓ | | |
Flip | ✓ | ✓ | ✓ | ✓ |
GridDistortion | ✓ | ✓ | | |
GridDropout | ✓ | ✓ | | |
HorizontalFlip | ✓ | ✓ | ✓ | ✓ |
Lambda | ✓ | ✓ | ✓ | ✓ |
LongestMaxSize | ✓ | ✓ | ✓ | ✓ |
MaskDropout | ✓ | ✓ | | |
NoOp | ✓ | ✓ | ✓ | ✓ |
OpticalDistortion | ✓ | ✓ | | |
PadIfNeeded | ✓ | ✓ | ✓ | ✓ |
Perspective | ✓ | ✓ | ✓ | ✓ |
PiecewiseAffine | ✓ | ✓ | ✓ | ✓ |
PixelDropout | ✓ | ✓ | ✓ | ✓ |
RandomCrop | ✓ | ✓ | ✓ | ✓ |
RandomCropNearBBox | ✓ | ✓ | ✓ | ✓ |
RandomGridShuffle | ✓ | ✓ | ✓ | |
RandomResizedCrop | ✓ | ✓ | ✓ | ✓ |
RandomRotate90 | ✓ | ✓ | ✓ | ✓ |
RandomScale | ✓ | ✓ | ✓ | ✓ |
RandomSizedBBoxSafeCrop | ✓ | ✓ | ✓ | |
RandomSizedCrop | ✓ | ✓ | ✓ | ✓ |
Resize | ✓ | ✓ | ✓ | ✓ |
Rotate | ✓ | ✓ | ✓ | ✓ |
SafeRotate | ✓ | ✓ | ✓ | ✓ |
ShiftScaleRotate | ✓ | ✓ | ✓ | ✓ |
SmallestMaxSize | ✓ | ✓ | ✓ | ✓ |
Transpose | ✓ | ✓ | ✓ | ✓ |
VerticalFlip | ✓ | ✓ | ✓ | ✓ |
To run the benchmark yourself, follow the instructions in benchmark/README.md.
Results for running the benchmark on the first 2000 images from the ImageNet validation set using an Intel(R) Xeon(R) Gold 6140 CPU. All outputs are converted to a contiguous NumPy array with the np.uint8 data type. The table shows how many images per second can be processed on a single core; higher is better.
 | albumentations 1.1.0 | imgaug 0.4.0 | torchvision (Pillow-SIMD backend) 0.10.1 | keras 2.6.0 | augmentor 0.2.8 | solt 0.1.9 |
---|---|---|---|---|---|---|
HorizontalFlip | 10220 | 2702 | 2517 | 876 | 2528 | 6798 |
VerticalFlip | 4438 | 2141 | 2151 | 4381 | 2155 | 3659 |
Rotate | 389 | 283 | 165 | 28 | 60 | 367 |
ShiftScaleRotate | 669 | 425 | 146 | 29 | - | - |
Brightness | 2765 | 1124 | 411 | 229 | 408 | 2335 |
Contrast | 2767 | 1137 | 349 | - | 346 | 2341 |
BrightnessContrast | 2746 | 629 | 190 | - | 189 | 1196 |
ShiftRGB | 2758 | 1093 | - | 360 | - | - |
ShiftHSV | 598 | 259 | 59 | - | - | 144 |
Gamma | 2849 | - | 388 | - | - | 933 |
Grayscale | 5219 | 393 | 723 | - | 1082 | 1309 |
RandomCrop64 | 163550 | 2562 | 50159 | - | 42842 | 22260 |
PadToSize512 | 3609 | - | 602 | - | - | 3097 |
Resize512 | 1049 | 611 | 1066 | - | 1041 | 1017 |
RandomSizedCrop_64_512 | 3224 | 858 | 1660 | - | 1598 | 2675 |
Posterize | 2789 | - | - | - | - | - |
Solarize | 2761 | - | - | - | - | - |
Equalize | 647 | 385 | - | - | 765 | - |
Multiply | 2659 | 1129 | - | - | - | - |
MultiplyElementwise | 111 | 200 | - | - | - | - |
ColorJitter | 351 | 78 | 57 | - | - | - |
Python and library versions: Python 3.9.5 (default, Jun 23 2021, 15:01:51) [GCC 8.3.0], numpy 1.19.5, pillow-simd 7.0.0.post3, opencv-python 4.5.3.56, scikit-image 0.18.3, scipy 1.7.1.
To create a pull request to the repository, follow the documentation at https://albumentations.ai/docs/contributing/
If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:
@Article{info11020125,
AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
TITLE = {Albumentations: Fast and Flexible Image Augmentations},
JOURNAL = {Information},
VOLUME = {11},
YEAR = {2020},
NUMBER = {2},
ARTICLE-NUMBER = {125},
URL = {https://www.mdpi.com/2078-2489/11/2/125},
ISSN = {2078-2489},
DOI = {10.3390/info11020125}
}