Add an example of Anomalib adversary #256
Conversation
anomalib[full] @ git+https://github.com/openvinotoolkit/anomalib.git@v1.0.1
mart @ git+https://github.com/IntelLabs/MART@main
Can probably just throw these in setup.py (or ideally pyproject.toml).
Thanks for the suggestion. I replaced setup.py and requirements.txt with pyproject.toml in e0117ae.
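For reference, a minimal sketch of what such a pyproject.toml might look like, assuming the two pinned dependencies shown above; the project name and version here are illustrative, not taken from the PR:

[project]
# Name and version are illustrative; only the dependencies come from the PR.
name = "anomalib-adversary-example"
version = "0.0.1"
dependencies = [
    "anomalib[full] @ git+https://github.com/openvinotoolkit/anomalib.git@v1.0.1",
    "mart @ git+https://github.com/IntelLabs/MART@main",
]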
examples/anomalib_adversary/setup.py (outdated)
I would consider converting this to pyproject.toml.
Actually, what is the point of this? Seems like there is no code to install anyway?
setup.py is replaced with pyproject.toml in e0117ae for managing the dependencies.
mart/attack/gain.py (outdated)
from torch.nn import BCELoss as BCELoss_

class BCELoss(BCELoss_):
    def forward(self, input, target):
        # We clone target because that tensor can be made in inference mode.
        return super().forward(input, target.clone())
This is a complete hack and needs to go away. The problem is anomalib creates tensors in inference_mode and we cannot use those tensors to compute a loss.
Why can't we use those tensors to compute a loss? I wrote a minimal example that works.
>>> import torch
>>> x = torch.tensor(3.0, requires_grad=True)
>>>
>>> with torch.inference_mode():
... y = torch.tensor(2.0)
...
>>> (x + y).backward()
>>> assert x.grad == 1.0
>>> y._version
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Inference tensors do not track version counter.
What do you mean by "works"? You can get rid of the .clone() call above, run the example, and see an error.
>>> import torch
>>> x = torch.tensor(3.0, requires_grad=True)
>>> with torch.inference_mode():
... y = torch.tensor(2.0)
...
>>> torch.nn.functional.binary_cross_entropy(x, y)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ccorneli/Projects/MART/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 3122, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
You are right. I was not aware that PyTorch is more restrictive with inference tensors in the C++ domain.
They are reasonably specific that "Tensors created while the [inference] mode is enabled can[not] be used in grad-mode later": https://pytorch.org/docs/stable/notes/autograd.html#grad-modes.
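For completeness, a minimal sketch of the documented workaround, with illustrative values: cloning the inference tensor yields a normal tensor that autograd is allowed to save. (The earlier x + y example succeeded only because addition does not need to save its operands for backward, whereas binary_cross_entropy saves the target.)

>>> import torch
>>> x = torch.tensor(0.3, requires_grad=True)
>>> with torch.inference_mode():
...     y = torch.tensor(1.0)
...
>>> loss = torch.nn.functional.binary_cross_entropy(x, y.clone())  # clone -> normal tensor
>>> loss.backward()  # gradient now flows to x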
@@ -62,6 +62,7 @@ def __init__(
     val_adversary: Callable = None,
     test_adversary: Callable = None,
     batch_c15n: Callable = None,
+    module_step_fn: str = "training_step",
This enables us to use anomalib's implementation of WinClip, which only implements a validation_step function.
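As a rough sketch of the idea (the wrapper class and names below are hypothetical; only the module_step_fn: str = "training_step" parameter appears in the diff), the dispatch could be as simple as a getattr lookup on the wrapped module:

import torch

class StepFnWrapper(torch.nn.Module):
    """Hypothetical sketch: resolve a LightningModule step function by name,
    so models such as anomalib's WinClip, which only define validation_step,
    can still be driven by the adversary."""

    def __init__(self, module, module_step_fn: str = "training_step"):
        super().__init__()
        self.module = module
        # Look up training_step / validation_step / test_step by name.
        self.step_fn = getattr(module, module_step_fn)

    def forward(self, batch, batch_idx: int = 0):
        return self.step_fn(batch, batch_idx)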
What does this PR do?
This PR adds an example of constructing adversaries in Anomalib.
See examples/anomalib_adversary/README.md for details.
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- pytest
- CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
- CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting
- Did you run the pre-commit run -a command without errors?

Did you have fun?
Make sure you had fun coding 🙃