Releases: Project-AgML/AgML
AgML 0.7.0
This is a major update to AgML that introduces the new `agml.models.Detector` API, new datasets, new tools, and a suite of bugfixes and improvements across the board.
Major Changes
- The `agml.models.Detector` API has been introduced. It wraps the Ultralytics YOLO11 model (see the Ultralytics documentation for more information) and provides an easy-to-use interface for training and inference with state-of-the-art object detection models. A quick-start sketch is shown after this list.
  - Use `agml.models.Detector.train(loader, 'yolo_model')` to quickly train on an AgML dataset using your choice of YOLO model, all within the AgML API.
  - Use `agml.models.Detector.load('model_name')` to easily load a trained model stored directly in the AgML local model repository.
  - Use `agml.models.Detector.load_benchmark('dataset_name', 'yolo_name')` to access a pre-trained agricultural object detection model from a set of available benchmarks.
  - Note: if you want fine-grained transfer learning and fine-tuning capabilities, use the standard `agml.models.DetectionModel` API; the `agml.models.Detector` API is a more user-friendly quick start for training and inference.
- We have added four new datasets:
  - `tomato_ripeness_detection`: Object Detection
  - `corn_maize_leaf_disease`: Image Classification
  - `tomato_leaf_disease`: Image Classification
  - `vine_virus_photo_dataset`: Image Classification
- A new export tool, `agml.data.extensions.restructure_cvat_annotations`, has been added, which automatically reformats COCO annotations exported from the CVAT tool into the format used in AgML. This increases AgML's interoperability with existing machine learning frameworks.
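A quick-start sketch of the `Detector` workflow described above. The `'yolo11n'` model string, the `name` keyword, and the saved model name are illustrative assumptions rather than exact signatures; check the `Detector` docstrings for details.

```python
import agml

# Load one of the new object detection datasets from this release.
loader = agml.data.AgMLDataLoader('tomato_ripeness_detection')

# Train a YOLO model on the loader ('yolo11n' and the `name` keyword are assumed).
agml.models.Detector.train(loader, 'yolo11n', name='tomato_ripeness_yolo11n')

# Reload the trained model from the local AgML model repository,
# or grab one of the pre-trained benchmarks instead.
model = agml.models.Detector.load('tomato_ripeness_yolo11n')
benchmark = agml.models.Detector.load_benchmark('tomato_ripeness_detection', 'yolo11n')
```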
Major Bugfixes
- Helios compilation and execution now work on Windows systems: Windows users can now use the `agml.synthetic` API to generate their own simulated data.
Other Changes and Improvements
- The documentation for AgML datasets has been improved to show more representative samples, and documentation has been added for datasets that previously lacked it.
- Improvements have been made to `agml.data.ImageLoader`, and there is a new method, `agml.data.AgMLDataLoader.take_images()`, which enables you to extract just the images from an `AgMLDataLoader`, useful for inference and other image-only applications.
- A bug which prevented saving and loading splits for object detection has been fixed: you can now save and load object detection data splits successfully.
- Some small visualization errors have been fixed, including image orientation and the number of images displayed by certain methods.
- The `agml.data.tools.convert_bbox_format` tool now accepts new shorthand formats, `xyxy` and `yxyx`, which wrap the longer format strings for ease of use; a short sketch of this and `take_images()` follows below.
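A short sketch of the two conveniences above. The dataset name, the box values, and the exact call signature of `convert_bbox_format` (including whether a source format must also be supplied) are illustrative assumptions.

```python
import agml

loader = agml.data.AgMLDataLoader('tomato_ripeness_detection')

# Pull only the images out of the loader, e.g. for inference.
images = loader.take_images()

# Convert boxes using the new 'xyxy' shorthand format string
# (the argument order and format convention here are assumptions).
boxes = [[10, 20, 50, 80]]
converted = agml.data.tools.convert_bbox_format(boxes, 'xyxy')
```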
AgML v0.6.2
This release introduces a new export tool for AgML datasets, various expansions to methods, and some bugfixes.
Major Changes
- A new method, `agml.data.exporters.export_yolo()`, has been added (alongside a companion wrapper on the loader, `loader.export_yolo()`) that enables exporting datasets to the Ultralytics YOLO format. The resulting datasets can be integrated directly, in one line of code, into Ultralytics/Darknet YOLO training pipelines, which augments AgML's object detection resources. A brief sketch is shown below.
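A brief sketch of the export. The dataset name and the output-path argument are assumptions (the exporter may also default to a standard location).

```python
import agml

loader = agml.data.AgMLDataLoader('ghai_green_cabbage_detection')

# Export the dataset to the Ultralytics YOLO format; the path argument is an assumption.
loader.export_yolo('/path/to/yolo_export')
```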
Improvements
- Splitting datasets, specifically with multi-dataset loaders, now works properly.
Bugfixes
- Added an `export_tensorflow()` method for multi-loaders, which was previously missing.
- Updated Helios compilation instructions to be compatible with the C++17 standard needed for updated Helios versions.
- All example notebooks have been updated to be consistent with new AgML methods.
- AgML visualization methods now no longer double-display in Jupyter notebooks.
AgML 0.6.1
This is a small update.
- A new dataset, `vegann_multicrop_presence_segmentation`, has been added: this is the AgML incorporation of the VegAnn dataset.
- Fixed an issue with the class names in the `rangeland_weeds_australia` dataset: the erroneous "no_weeds" class has been removed, and the dataset now follows the original specification of the DeepWeeds dataset.
- A bug that prevented storing and loading custom splits has been fixed.
- A bug that cut off text when visualizing image classification data samples has been fixed.
Full Changelog: v0.6.0...v0.6.1
AgML 0.6.0
This release introduces tools for training custom agricultural machine learning models using AgML.
Main Changes
agml.models
- The `agml.models` API has been extended with new features for training, namely the `run_training` method, which enables quick training of image classification, semantic segmentation, and object detection models. A minimal sketch follows this list.
  - Simply instantiate a model with your number of classes, build an `AgMLDataLoader` with your data preprocessing, and pass it to the `run_training` method alongside other training hyperparameters to train a model.
  - Choose your level of customizability: for newer users, options like the optimizer, loss, and other hyperparameters are auto-selected, while experienced users can go as far as overriding `training_step`, `validation_step`, and other arguments for greater control over training.
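A minimal sketch of this workflow, assuming an image classification task. The dataset name, the split fractions, the `num_classes` keyword, and the `epochs` argument are illustrative assumptions rather than the exact signature.

```python
import agml

# Build a loader with your preprocessing and splits (dataset name is an example).
loader = agml.data.AgMLDataLoader('riseholme_strawberry_classification_2021')
loader.split(train=0.8, val=0.1, test=0.1)

# Instantiate a classification model with the dataset's number of classes,
# then hand the loader to `run_training`; unspecified hyperparameters
# (optimizer, loss, etc.) are auto-selected per the notes above.
model = agml.models.ClassificationModel(num_classes=loader.num_classes)
model.run_training(loader, epochs=20)
```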
Other Changes and Bugfixes
- A major bug which prevented recompilation of Helios without LiDAR has been fixed, enabling users to switch between LiDAR-compiled Helios and standard Helios.
- A bug which caused Helios to be installed upon basic inspection of the library has been patched: Helios will no longer auto-install unless the `agml.synthetic` module is actively used for data generation.
- You can now correctly display image samples when using `agml.viz.show_sample` with the `image_only` option.
- Bugfixes have been made to `agml.models.metrics.Accuracy` and `agml.models.metrics.MeanAveragePrecision` to ensure that they work with training.
Read the Full Changelog: v0.5.2...v0.6.0
AgML 0.5.2
New Features and Bugfixes
- Added the ability to save and load data splits for individual datasets using `loader.save_split` and `loader.load_split` (see the sketch after this list).
- Added the ability to load multiple datasets using a wildcard name instead of passing each individual dataset in a list (e.g., to load all of the apple detection datasets, use `apple_detection*`).
- Use `agml.models.get_benchmark` to get benchmark information for a certain model trained on a certain dataset.
- Fixed bugs with mAP calculation during training.
- Fixed bugs with image preprocessing for object detection and visualization of object detection predictions.
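A sketch of the new conveniences. The dataset name, split fractions, split name, and the argument names/order of `get_benchmark` are illustrative assumptions.

```python
import agml

# Create a split, save it under a name, and restore it later.
loader = agml.data.AgMLDataLoader('apple_detection_usa')
loader.split(train=0.8, val=0.1, test=0.1)
loader.save_split('my_split')
loader.load_split('my_split')

# Load every apple detection dataset at once via a wildcard name.
multi_loader = agml.data.AgMLDataLoader('apple_detection*')

# Look up benchmark information for a model trained on a dataset
# (the model identifier here is only an example).
info = agml.models.get_benchmark('apple_detection_usa', 'efficientdet')
```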
Full Changelog: v0.5.1...v0.5.2
AgML 0.5.1
Main Changes
- Added a new dataset, `ghai_strawberry_fruit_detection`.
- Added a new convenience method, `agml.io.read_image` (see the sketch after this list).
- Added the ability to generate both RGB and LiDAR data using the option `agml.synthetic.SimulationType.Both`.
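A small sketch of the new convenience method; the file path is only an example.

```python
import agml

# Read an image from disk into an array.
image = agml.io.read_image('/path/to/image.jpg')
print(image.shape)
```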
Bugfixes and Updates
- You can now instantiate a collection of synthetic datasets by passing a list of them to the `AgMLDataLoader` initializer.
- A bug which prevented the visualization of predictions when using a `SegmentationModel` has been patched.
Full Changelog: v0.5.0...v0.5.1
AgML 0.5.0
This release introduces synthetic LiDAR data generation through Helios, a rework of the `agml.viz` module with better functionality, and a number of new datasets, amongst other features.
Major Changes
- The `agml.io` module has been added, with a few convenience functions for working with file and directory structures (see the sketch below).
  - Currently available functions include `get_file_list` and `get_dir_list`, which also work with nested structures, as well as `recursive_dirname`, `parent_path`, and `random_file`.
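A small sketch of the `agml.io` helpers. The directory path is only an example, and the calling conventions shown (single path argument) are assumptions.

```python
import agml

# List files and directories beneath a path (works with nested structures).
files = agml.io.get_file_list('/path/to/data')
dirs = agml.io.get_dir_list('/path/to/data')

# Pick a random file from the same directory.
choice = agml.io.random_file('/path/to/data')
```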
agml.data
- Three new datasets have been introduced:
  - Object Detection: `ghai_iceberg_lettuce_detection`, `ghai_broccoli_detection`
  - Image Classification: `riseholme_strawberry_classification_2021`
- The `agml.data.ImageLoader` has been added; it is a simple loader designed only for images (see the sketch below).
  - Enables loading images from a nested directory structure.
  - Enables easy resizing and transforms of the loaded images.
- The `AgMLDataLoader` now has a new method, `show_sample`, which can be used to visualize samples from the dataset directly from the loader.
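A sketch of the two additions. The directory path, the `image_size` keyword, and calling `show_sample` with no arguments are illustrative assumptions.

```python
import agml

# Load raw images from a (possibly nested) directory for image-only use.
images = agml.data.ImageLoader('/path/to/images', image_size=(512, 512))

# Visualize a sample directly from a dataset loader.
loader = agml.data.AgMLDataLoader('ghai_broccoli_detection')
loader.show_sample()
```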
agml.synthetic
- LiDAR Data Generation: you can now generate LiDAR-based synthetic data using `opt.simulation_type = agml.synthetic.SimulationType.LiDAR`.
- You can recompile Helios with LiDAR enabled, as well as in parallel (on Linux and macOS systems), using `recompile_helios(lidar_enabled = True)` and `recompile_helios(parallel = True)` (note that parallel compilation is enabled by default).
- A new loader, `agml.synthetic.LiDARLoader`, has been added, which can be used to load point clouds from a generated directory in the same format as `ImageLoader`, and to retrieve and visualize those point clouds. A short sketch follows this list.
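A short sketch of the LiDAR workflow. The generated-data path, the single-argument `LiDARLoader` call, and indexing the loader for a single point cloud are illustrative assumptions.

```python
import agml

# Recompile Helios with LiDAR support (parallel compilation is on by default).
agml.synthetic.recompile_helios(lidar_enabled=True)

# Load point clouds from a directory generated with
# `opt.simulation_type = agml.synthetic.SimulationType.LiDAR`.
point_clouds = agml.synthetic.LiDARLoader('/path/to/generated_lidar')

# Visualize one point cloud (Open3D if installed, otherwise matplotlib).
agml.viz.show_point_cloud(point_clouds[0])
```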
agml.viz
- The entire `agml.viz` module has been reworked with new methods and a functional visualization backend which enables both `cv2` and `matplotlib` displays, depending on what the user desires.
- The following is a mapping of the old functions to the new functions:
  - `visualize_images_and_labels` -> `show_images_and_labels`
  - `output_to_mask` -> `convert_mask_to_colored_image`
  - `overlay_segmentation_masks` -> `annotate_semantic_segmentation`
  - `visualize_image_and_mask` -> `show_image_and_mask`
  - `visualize_overlaid_masks` -> `show_image_and_overlaid_mask`
  - `visualize_image_mask_and_predicted` -> `show_semantic_segmentation_truth_and_prediction`
  - `annotate_bboxes_on_image` -> `annotate_object_detection`
  - `visualize_image_and_boxes` -> `show_image_and_boxes`
  - `visualize_real_and_predicted_bboxes` -> `show_object_detection_truth_and_prediction`
  - `visualize_images` -> `show_images`
- To swap between viz backends, use `get_viz_backend` and `set_viz_backend` (see the sketch after this list).
- To simply display an image, use `display_image`.
- To visualize a point cloud (in Open3D if installed, otherwise matplotlib), use `show_point_cloud`.
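A brief sketch of the reworked `agml.viz` usage. The backend string value, the unpacking of a loader sample, and the exact arguments to the `show_*` methods are assumptions for illustration.

```python
import agml

# Swap the visualization backend and check which one is active.
agml.viz.set_viz_backend('matplotlib')
print(agml.viz.get_viz_backend())

# Display a single image, or an image with its detection annotations.
loader = agml.data.AgMLDataLoader('ghai_broccoli_detection')
image, annotations = loader[0]
agml.viz.display_image(image)
agml.viz.show_image_and_boxes(image, annotations)
```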
Other Changes
- Major Change for New Users: `torch`, `torchvision`, and other related modeling packages for the `agml.models` package are no longer distributed with the AgML requirements -- you must install these on your own if you want to use them.
Minor/Functional Changes + Bugfixes
- Fixed backend swapping between TensorFlow and PyTorch when using clashing transforms.
- Added the ability to run prediction with a classification or segmentation model without normalizing the input image, using `model.predict(..., normalize = False)`.
- Images no longer auto-show as matplotlib figures when using `agml.viz.show_*` methods; instead, they are returned as image arrays and can be displayed in whatever format you prefer.
- Improved getting and setting of information in the backend `config.json` file, so information is not accidentally overwritten.
Full Changelog: v0.4.7...v0.5.0
AgML 0.4.7
Main Changes
- Introduced two new public object detection datasets from the Western Growers Global Harvest Automation Initiative (https://github.com/AxisAg/GHAIDatasets): `ghai_romaine_detection` and `ghai_green_cabbage_detection`.
- Added the ability to generate multiple different types of annotations when generating synthetic datasets with Helios.
Other Changes
- Fixed errors with different visualization utilities as well as synthetic data annotation conversions.
Full Changelog: v0.4.6...v0.4.7
AgML 0.4.6
Main Changes
- You can now manually generate synthetic data with Helios using your own C++ generation file and CMake parameters/XML style file, via the method `agml.synthetic.manual_generate_data`. This allows for a greater degree of customizability and control over how Helios is compiled, as well as which methods are used.
- A convenience method, `agml.viz.show_sample`, has been added which enables quick visualization of an image and its annotations: it works for all annotation types; simply pass a loader as the argument.
Other Improvements
- If you want to overwrite files when generating synthetic data with the `HeliosDataGenerator`, you can now pass the argument `clear_existing_files = True` to the `generate` method.
- You can now compile Helios using the standard API in both 'Release' and 'Debug' modes.
AgML 0.4.5
This release adds a couple of new features as well as bugfixes for the existing API.
Main Changes
- You can now use image classification models (`agml.models.ClassificationModel`) as image regression models by passing `regression = True` upon instantiation. This drops the final `argmax` computation and returns the softmax regression values (see the sketch after this list).
- Pass a custom set of RGB values to `agml.viz.set_colormap` to use a custom colormap.
- A new preprocessing function has been added to `agml.models.preprocessing`: `agml.models.preprocessing.imagenet_preprocess`, which prepares images for input to an ImageNet-backend model (image classification, semantic segmentation).
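A minimal sketch of regression mode. The `num_classes = 1` choice (a single regression output) and the dummy input image are illustrative assumptions.

```python
import numpy as np
import agml

# Classification model repurposed for regression: no final argmax is applied,
# so `predict` returns continuous values instead of class labels.
model = agml.models.ClassificationModel(num_classes=1, regression=True)

# A dummy RGB image stands in for a real input.
image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
prediction = model.predict(image)
```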
Bugfixes
- The `MeanAveragePrecision` metric has been fixed and no longer throws errors for empty predictions (or for early-stage training results).
- Custom object detection datasets can now be auto-loaded, with classes automatically inferred, without throwing an error.