Releases: Trusted-AI/adversarial-robustness-toolbox
ART 1.10.1
This release of ART 1.10.1 provides updates to ART 1.10.
Added
[None]
Changed
- Changed `AdversarialTrainerMadryPGD.fit` to support arguments `nb_epochs` and `batch_size` (#1612) (see the sketch after this list)
- Changed `GradientMatchingAttack` to add support for models with undefined input shape by abstracting the shape information from the input data (#1624)
- Changed `PyTorchObjectDetector` to support inputs with a number of channels other than 1 and 3 (#1633)
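As an illustration of the new `fit` arguments, here is a minimal sketch. The tiny PyTorch model, the random data and the `eps`/`eps_step` values are placeholders chosen for brevity, and the constructor arguments are assumptions about the trainer's API rather than an excerpt from it.

```python
import numpy as np
import torch

from art.defences.trainer import AdversarialTrainerMadryPGD
from art.estimators.classification import PyTorchClassifier

# Toy classifier on 4 features and 2 classes; model and data are placeholders.
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=0.01),
    input_shape=(4,),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# eps/eps_step values are illustrative only.
trainer = AdversarialTrainerMadryPGD(classifier, eps=0.1, eps_step=0.02)

x_train = np.random.rand(64, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=64)

# New in ART 1.10.1: nb_epochs and batch_size can be passed directly to fit().
trainer.fit(x_train, y_train, nb_epochs=3, batch_size=16)
```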
Removed
[None]
Fixed
ART 1.10.0
This release of ART 1.10.0 introduces multiple poisoning attacks on image classification and deep generative models, the first attack with dynamic patches on object tracking in videos, classification certification based on zonotope representations, EoT support for object detection in image rotation and center cropping, new features for attribute inference attacks and more.
Added
- Added Gradient Matching (Witches' Brew) attack `art.attacks.poisoning.GradientMatchingAttack` in TensorFlow (#1587)
- Added functions `projection_l1_1` and `projection_l1_2` to `art.utils` for two algorithms computing orthogonal projections on L1-norm balls (#1586)
- Added perspective transformations to the `art.attacks.evasion.AdversarialTexturePyTorch` attack to enable dynamic textures/patches (#1557)
- Added support for object detection in `art.attacks.evasion.AdversarialPatchPyTorch` (#1535)
- Added new features to attribute inference attacks, including support for optional use of true labels in black-box attribute inference attacks, automatic calculation of values in the `fit()` method, an additional scaling method for labels/predictions, and an additional attack model type (random forest) (#1534)
- Added estimator `art.estimators.certification.PytorchDeepZ`, based on DeepZ, for robustness certification using zonotope representations of datapoints (#1531)
- Added Expectation over Transformation (EoT) for rotation and centre crop with support for classification and object detection (#1516)
- Added support for SummaryWriter in `art.attacks.evasion.RobustDpatch` (#1513)
- Added PGD L-Inf optimizer to `art.attacks.evasion.AdversarialPatch*` attacks (#1495)
- Added two backdoor poisoning attacks targeting deep generative models, ReD in `art.attacks.poisoning.BackdoorAttackDGMReD` and Trail in `art.attacks.poisoning.BackdoorAttackDGMTrail` (#1490)
- Added Hidden Trigger Backdoor Poisoning Attack in Keras and PyTorch in `art.attacks.poisoning.HiddenTriggerBackdoor` (#1487)
- Added Feature Collision Poisoning Attack in PyTorch in `art.attacks.poisoning.FeatureCollisionAttack` (#1435)
Changed
- Changed imports of TensorFlow v2 in `TensorFlowClassifier` to support TensorFlow v1 compatibility mode (#1560)
- Changed the Python versions used for unit testing to newer releases, upgraded style checks and improved code quality (#1517)
Removed
[None]
Fixed
- Fixed import of SciPy in the `PixelThreshold` attack to support `scipy>=1.8` (#1589)
- Fixed bug of missing attribute in `PixelAttack` for scaled images (#1574)
- Fixed use of `torchaudio.functional.magphase` in `PyTorchDeepSpeech` to support Deep Speech 2 version 3 with `torch>=1.10` (#1550)
- Fixed method `fit` of `ScikitlearnRegressor` to process labels correctly (#1537)
- Fixed scalar names of Indicators of Attack Failure 2 and 3 for aggregated losses (#1512)
- Fixed raising of DataConversionWarning when fitting black-box membership inference attacks with `attack_model_type` 'rf' or 'gb' (#1488) (see the sketch after this list)
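For context on the last item, here is a minimal sketch of fitting a black-box membership inference attack with the random-forest attack model type. The target model, data and wrapper choice are placeholders, and the exact label handling of `SklearnClassifier` and `MembershipInferenceBlackBox.fit` may differ slightly across ART versions; only the `attack_model_type="rf"` option is taken from the release notes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

from art.attacks.inference.membership_inference import MembershipInferenceBlackBox
from art.estimators.classification import SklearnClassifier

# Placeholder data: 200 samples, 4 features, 2 classes.
rng = np.random.default_rng(0)
x = rng.random((200, 4)).astype(np.float32)
y = rng.integers(0, 2, size=200)
x_train, y_train, x_test, y_test = x[:100], y[:100], x[100:], y[100:]

# Train the target model with scikit-learn and wrap it as an ART estimator.
target_model = RandomForestClassifier(n_estimators=10).fit(x_train, y_train)
target = SklearnClassifier(model=target_model)

# Black-box membership inference using the random-forest attack model type ("rf").
attack = MembershipInferenceBlackBox(target, attack_model_type="rf")
attack.fit(x_train, y_train, x_test, y_test)

# 1 = predicted member of the training set, 0 = predicted non-member.
membership = attack.infer(x_test, y_test)
```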
ART 1.9.1
This release of ART 1.9.1 provides updates to ART 1.9.
Added
- Added support for TensorFlow 1.15 as backend in `KerasClassifier.compute_loss`. (#1466)
- Added support for input range [0, 1] in `art.defences.preprocessor.VideoCompression*`. (#1470)
Changed
[None]
Removed
[None]
Fixed
- Fixed bug in `art.utils.load_nursery` for loading the nursery dataset with argument `raw=True`. (#1460)
- Fixed import of `matplotlib` to keep it an optional dependency. (#1467)
- Fixed bug to allow preprocessing defences to be applied in `PyTorchGoturn.predict` by adding back the missing sample dimension. (#1470)
- Fixed bug in `PyTorchClassifier.get_activations` to also apply preprocessing if argument `framework=True`. This fix likely changes the results obtained with `BullseyePolytopeAttackPyTorch`, the main attack using `framework=True`. (#1471)
ART 1.9.0
This release of ART 1.9.0 introduces the first evasion attack specifically designed against object tracking applications and able to distinguish foreground and background objects, the first evasion attack against image classifiers simulating attacks with laser beams on target objects, the new Summary Writer API to collect attack internal custom metrics, a defense against general poisoning attacks and tools for shadow model training to support membership inference attacks.
Added
- Added tools for training shadow models and generating shadow datasets in support of membership inference attacks in `art.attacks.inference.membership_inference.shadow_models`. (#1345, #1395)
- Added hill-climbing synthetic data generation algorithm (Shokri et al., 2017) to train shadow models without access to actual data. (#1345, #1395)
- Added experimental estimator for classification models in JAX in `art.experimental.estimators.classification.JaxClassifier` (#1360)
- Added Deep Partition Aggregation as classification estimator in `art.estimators.classification.DeepPartitionEnsemble` to defend against general poisoning attacks (#1397)
- Added Adversarial Laser Beam attack in `art.attacks.evasion.LaserAttack` as an easy-to-realize physical evasion attack (#1398)
- Added customizable Summary Writer API in `art.summary_writer.SummaryWriter` to collect attack-internal metrics in supported attacks, providing the collected metrics in TensorBoard format for analysis (#1416) (see the sketch after this list)
- Added Indicators of Attack Failure (Pintor et al., 2021) as metrics in the default summary writer `art.summary_writer.SummaryWriterDefault` (#1416)
- Added Adversarial Texture Attack against object tracking models in `art.attacks.evasion.AdversarialTexturePyTorch`. The attack distinguishes foreground and background objects to create textures/patches that work even if partially covered. (#1430)
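As a usage sketch for the Summary Writer API: in supported attacks, enabling the default summary writer is, to our reading, a matter of passing a `summary_writer` argument at construction time. The parameter name, the `PyTorchClassifier` setup and the attack hyperparameters below are assumptions for illustration, not taken verbatim from the release notes.

```python
import numpy as np
import torch

from art.attacks.evasion import ProjectedGradientDescentPyTorch
from art.estimators.classification import PyTorchClassifier

# Small placeholder classifier on 8 features and 3 classes.
model = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(8,),
    nb_classes=3,
    clip_values=(0.0, 1.0),
)

# summary_writer=True is assumed to activate art.summary_writer.SummaryWriterDefault,
# which records attack-internal metrics in TensorBoard format during generate().
attack = ProjectedGradientDescentPyTorch(
    estimator=classifier, eps=0.1, eps_step=0.02, max_iter=10, summary_writer=True
)

x = np.random.rand(16, 8).astype(np.float32)
x_adv = attack.generate(x=x)
```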
Changed
- Changed implementation of the Carlini & Wagner L_inf attack in `art.attacks.evasion.CarliniLInfMethod` to exactly reproduce the performance of the reference implementation (#1380)
- Changed `art.defences.preprocessor.preprocessor.PreprocessorPyTorch` to accept `device_type` in `__init__` to set attribute `_device` for all PyTorch preprocessors in a single location (#1444) (see the sketch after this list)
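To illustrate the new `device_type` argument, here is a minimal sketch of a custom PyTorch preprocessor passing `device_type` through to the base class. The required abstract methods of a `PreprocessorPyTorch` subclass may differ slightly between ART versions, so treat the `forward` signature and the toy clipping logic as assumptions.

```python
from typing import Optional, Tuple

import torch

from art.defences.preprocessor.preprocessor import PreprocessorPyTorch


class ClipPreprocessorPyTorch(PreprocessorPyTorch):
    """Toy preprocessor that clips inputs to [0, 1]; for illustration only."""

    def __init__(self, device_type: str = "cpu"):
        # New in ART 1.9.0: device_type is accepted by PreprocessorPyTorch.__init__
        # and sets the internal _device attribute in a single place.
        super().__init__(device_type=device_type)

    def forward(
        self, x: torch.Tensor, y: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
        return torch.clamp(x, 0.0, 1.0), y


preprocessor = ClipPreprocessorPyTorch(device_type="cpu")
```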
Removed
- Removed deprecated NumPy scalar type names (#1296)
- Removed outdated comments in `tests.attacks.test_simba` stating that SimBA would not support PyTorch (#1423)
Fixed
- Fixed missing support for inputs with more than one image in `art.attacks.evasion.SimBA.generate`; previously only the first sample was attacked if more than one image was provided. (#1422)
- Fixed `art.attacks.poisoning.perturbations.insert_image` to preserve the dtype of input images in the returned output images (#1441)
- Fixed missing transformation of binary index labels to one-hot encoded labels in `art.utils.check_and_transform_label_format` for argument `return_one_hot=True` (#1443) (see the sketch after this list)
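A quick sketch of the label transformation touched by the last fix; the behaviour shown (binary index labels expanded to one-hot when `return_one_hot=True`) reflects our reading of the utility and may vary slightly across versions.

```python
import numpy as np

from art.utils import check_and_transform_label_format

# Binary index labels of shape (4,).
y = np.array([0, 1, 1, 0])

# With the fix, binary index labels are expanded to one-hot encoding when
# return_one_hot=True; expected output shape is (4, 2).
y_one_hot = check_and_transform_label_format(y, nb_classes=2, return_one_hot=True)
print(y_one_hot.shape)
```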
ART 1.8.1
This release of ART 1.8.1 provides updates to ART 1.8.
Added
- Added support for `torch.Tensor` inputs and required argument `input_shape` to `art.estimators.object_tracking.PyTorchGoturn`. (#1348)
Changed
- Changed the supported PyTorch version check to include `torch==1.9` and `torchvision==0.10` in the exception in `art.estimators.object_detection.PyTorchObjectDetector`. (#1356)
Removed
[None]
Fixed
- Fixed docstring and CUDA device support in `art.attacks.evasion.AdversarialPatchPyTorch`. (#1333)
ART 1.8.0
This release of ART v1.8.0 introduces the first estimators for object tracking and regression, adds a general model-independent object detection estimator and new membership inference attacks.
Added
- Added estimator for the object tracker GOTURN in PyTorch in `art.estimators.object_tracking.PyTorchGoturn` (#1318)
- Added estimator for scikit-learn DecisionTreeRegressor in `art.estimators.regression.ScikitlearnDecisionTreeRegressor` and added compatibility in attacks `AttributeInferenceBlackBox` and `MembershipInferenceBlackBox` (#1272)
- Added general estimator for all object detection models of `torchvision` in `art.estimators.object_detection.PyTorchObjectDetector` (#1295) (see the sketch after this list)
- Added membership inference attack based on boundary attacks with general threshold selection by Li and Zhang (#1197)
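As a usage sketch for the general torchvision object detection estimator: wrapping a standard torchvision detection model is expected to look roughly as below. The constructor arguments shown (in particular `clip_values` and `attack_losses` with torchvision's Faster R-CNN loss names) are assumptions based on ART's object detection API and may need adjusting for a given version.

```python
import numpy as np
import torchvision

from art.estimators.object_detection import PyTorchObjectDetector

# Any torchvision detection model can be wrapped; Faster R-CNN is used as an example.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

detector = PyTorchObjectDetector(
    model=model,
    clip_values=(0.0, 1.0),
    # Loss components returned by torchvision detection models in training mode.
    attack_losses=("loss_classifier", "loss_box_reg", "loss_objectness", "loss_rpn_box_reg"),
)

# Float images in [0, 1]; predictions follow the torchvision label format with
# "boxes", "labels" and "scores" per image.
x = np.random.rand(2, 224, 224, 3).astype(np.float32)
predictions = detector.predict(x)
```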
Changed
- Changed `art.estimators.classification.BlackboxClassifier*` to also accept recorded input/prediction data pairs, instead of a callable providing predictions by evaluating the attacked model, enabling attacks on prediction data only without the necessity for direct access to the attacked model (#1247) (see the sketch after this list)
- Moved the patched Lingvo decoder to `art.contrib` (#1261)
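To illustrate the black-box classifier change, here is a minimal sketch. It assumes that `BlackBoxClassifier` accepts a pair of recorded inputs and predictions in place of the usual predict callable, which is our reading of #1247; the parameter name `predict_fn` and the random arrays are illustrative assumptions.

```python
import numpy as np

from art.estimators.classification import BlackBoxClassifier

# Recorded queries to some remote model: inputs and the predictions they produced.
x_recorded = np.random.rand(100, 28, 28, 1).astype(np.float32)
y_recorded = np.eye(10)[np.random.randint(0, 10, size=100)].astype(np.float32)

# Instead of a callable that queries the live model, pass the recorded
# (input, prediction) pairs; attacks can then run without access to the model itself.
classifier = BlackBoxClassifier(
    predict_fn=(x_recorded, y_recorded),
    input_shape=(28, 28, 1),
    nb_classes=10,
)

predictions = classifier.predict(x_recorded[:5])
```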
Removed
- Removed `art.classifiers` and `art.wrappers`; both modules have been replaced with tools in `art.preprocessing.expectation_over_transformation`, `art.estimators.classification` and `art.estimators.classification.QueryEfficientGradientEstimationClassifier` (#1256)
Fixed
[None]
ART 1.7.2
This release of ART 1.7.2 provides updates to ART 1.7.
Added
[None]
Changed
[None]
Removed
[None]
Fixed
ART 1.7.1
This release of ART 1.7.1 provides updates to ART 1.7.
Added
- Added wrapper `Mp3CompressionPyTorch` for `Mp3Compression` to make it compatible with PyTorch-specific attack implementations. (#1210)
- Added new install option `non-framework` to `setup.py` to install all non-framework dependencies of ART. (#1209)
- Added wrapper `VideoCompressionPyTorch` for `VideoCompression` to make it compatible with PyTorch-specific attack implementations. (#1210)
Changed
- Changed `Mp3Compression` to add back reapplication of normalization to the compressed result. (#1210)
- Changed `KerasClassifier.fit` to use the batching provided by the method `fit` of the Keras model. (#1182)
Removed
[None]
Fixed
- Fixed bug of not passing the user-provided device type to the standardisation preprocessor in all `PyTorchEstimator`s, which instead always used the default `gpu`. (#1223)
- Fixed bug in method `BaseEstimator.fit_generator` so that preprocessing is not applied twice when fitting generators with preprocessing defined. (#1219)
- Fixed bug in `ImperceptibleASRPyTorch` to prevent NaN loss values for batch sizes larger than 1 by removing unnecessary zero-padding. (#1198)
- Fixed two bugs in `OverTheAirFlickeringPyTorch` by making sure that the regularization norms are computed over the whole batch of perturbations rather than per sample's perturbation, and that the "roll" operations are performed over the batch samples rather than over the frames. (#1192)
- Fixed bug in `SpectralSignatureDefense` that led to rejection of all clean images, by correctly indexing the label data. (#1189)
- Fixed bug of accidentally removed checks for the `apply_fit` and `apply_predict` properties of framework-independent `Preprocessor` tools in `PyTorchEstimator` and `TensorFlowV2Estimator`. With the bug, the `Preprocessor` tools were always applied in methods `fit` and `predict`, independent of the values of `apply_fit` and `apply_predict`. (#1181)
- Fixed bug in `MembershipInferenceBlackBoxRemove.infer` by removing unnecessary shuffling of the test data. (#1173)
- Fixed bug in `PixelAttack` and `ThresholdAttack` by casting input data to the correct dtype. (#1175)
ART 1.7.0
This release of ART v1.7.0 introduces many new evasion and inference attacks providing support for the evaluation of malware or tabular data classification, and new query-efficient black-box (GeoDA) and strong white-box (Feature Adversaries) evaluation methods. Furthermore, this release introduces an easy-to-use estimator for Espresso ASR models to facilitate ASR research and connect Espresso and ART. This release also introduces support for binary classification with single outputs in neural network classifiers and selected attacks. Many more new features and details can be found below:
Added
- Added LowProFool evasion attack for imperceptible attacks on tabular data classification in `art.attacks.evasion.LowProFool`. (#1063)
- Added Over-the-Air-Flickering attack in PyTorch for evasion on video classifiers in `art.attacks.evasion.OverTheAirFlickeringPyTorch`. (#1077, #1102)
- Added API for speech recognition estimators compatible with the Imperceptible ASR attack in PyTorch. (#1052)
- Added Carlini & Wagner evasion attack with perturbations in L0-norm in `art.attacks.evasion.CarliniL0Method`. (#844, #1109)
- Added support for Deep Speech v3 in the `PyTorchDeepSpeech` estimator. (#1107)
- Added support for TensorBoard collecting the evolution of norms (L1, L2, and Linf) of loss gradients per batch, the adversarial patch, and the total loss and its model-specific components where available (e.g. `PyTorchFasterRCNN`) in `AdversarialPatchPyTorch`, `AdversarialPatchTensorFlow`, `FastGradientMethod`, and all `ProjectedGradientDescent*` attacks. (#1071)
- Added `MalwareGDTensorFlow` attack for evasion on malware classification of portable executables, supporting append-based, section insertion, slack manipulation, and DOS header attacks. (#1015)
- Added Geometric Decision-based Attack (GeoDA) in `art.attacks.evasion.GeoDA` for query-efficient black-box attacks on decision labels using DCT noise. (#1001)
- Added framework-specific Feature Adversaries attacks in PyTorch and TensorFlow v2 as efficient white-box attacks generating adversarial examples imitating intermediate representations at multiple layers in `art.attacks.evasion.FeatureAdversaries*`. (#1128, #1142, #1156)
- Added attribute inference attack based on membership inference in `art.attacks.inference.AttributeInferenceMembership`. (#1132)
- Added support for binary classification with neural networks with a single output neuron in `FastGradientMethod` and all `ProjectedGradientDescent*` attacks. Neural network binary classifiers with a single output require setting `nb_classes=2` and labels `y` in shape (nb_samples, 1) or (nb_samples,) containing 0 or 1. Backward compatibility for binary classifiers with two outputs is guaranteed with `nb_classes=2` and labels `y` one-hot-encoded in shape (nb_samples, 2). (#1118) (See the sketch after this list.)
- Added estimator for Espresso ASR models in `art.estimators.speech_recognition.PyTorchEspresso` with support for attacks with `FastGradientMethod`, `ProjectedGradientDescent` and `ImperceptibleASRPyTorch`. (#1036)
- Added deprecation warnings for `art.classifiers` and `art.wrappers`, to be replaced with `art.estimators`. (#1154)
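The single-output binary classification support described above can be sketched roughly as follows. The tiny model, the use of `BCEWithLogitsLoss` and the random data are illustrative assumptions; the `nb_classes=2` setting and the label shape follow the release note.

```python
import numpy as np
import torch

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Binary classifier with a single output neuron (one logit per sample).
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.BCEWithLogitsLoss(),
    input_shape=(10,),
    nb_classes=2,  # nb_classes=2 is required for single-output binary classifiers
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 10).astype(np.float32)
y = np.random.randint(0, 2, size=(8, 1)).astype(np.float32)  # labels in shape (nb_samples, 1)

attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x, y=y)
```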
Changed
- Changed `art.utils.load_iris` to use the Iris dataset from `sklearn.datasets` instead of `archive.ics.uci.edu`. (#1097) (See the sketch after this list.)
- Changed `HopSkipJump` to check for NaN in the adversarial example candidates and return the original (benign) sample if at least one NaN is detected. (#1124)
- Changed `SquareAttack` to accept user-defined loss and adversarial criterion definitions to enable black-box attacks on all machine learning tasks on images beyond classification. (#1127)
- Changed `PyTorchFasterRCNN.loss_gradient` to process each sample separately to avoid issues with gradient propagation with `torch>=1.7`. (#1138)
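For reference, loading the dataset through the utility is expected to look like this; the tuple structure of the return value is an assumption based on ART's other `load_*` utilities and may differ by version.

```python
from art.utils import load_iris

# Loads the Iris dataset via sklearn.datasets; no download from archive.ics.uci.edu.
(x_train, y_train), (x_test, y_test), min_value, max_value = load_iris()

print(x_train.shape, y_train.shape)
```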
Removed
[None]
Fixed
ART 1.6.2
This release of ART 1.6.2 provides updates to ART 1.6.
Added
- Added targeted option to `RobustDpatch` (#1069)
- Added option `standardise_output` to define the provided label format (#1069)
- Added property `native_label_is_pytorch_format` to object detectors to define the label format expected by the model (#1069)
Changed
- Changed `Dpatch` and `RobustDpatch` to work internally with `PyTorchFasterRCNN`'s object detection label format and to convert labels accordingly if provided in `TensorFlowFasterRCNN`'s format, using option `standardise_output` (#1069)
- Changed `setup.py` to only contain core dependencies in `install_requires` and added additional install options `tensorflow_image`, `tensorflow_audio`, `pytorch_image`, and `pytorch_audio` (#1116)
- Changed the check for versions of `torch` and `torchvision` in `AdversarialPatchPyTorch` to account for suffixes like `+cu102` (#1115)
- Changed `art.utils.load_iris` to use `sklearn.datasets.load_iris` instead of downloading from https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data (#1097)
Removed
- Removed unnecessary requirement for `scores` in labels `y` for `TensorFlowFasterRCNN.loss_gradient` and `PyTorchFasterRCNN.loss_gradient` (#1069)
Fixed
- Fixed docstrings of methods `predict` and `loss_gradient` to correctly describe the expected and provided label format (#1069)
- Fixed bug of missing transfer of a tensor to the device in `ProjectedGradientDescentPyTorch` (#1076)
- Fixed bug resulting in wrong loss gradients calculated with `ScikitlearnLogisticRegression.loss_gradient` (#1065)