Semantic Robustness #259
Conversation
…local configs folder.
mart/callbacks/anomalib.py
Outdated
```python
best_batch = None
batch_history = defaultdict(list)

for step in (pbar := trange(self.steps, desc="Attack", position=1)):
```
This `:=` is a bit of an exotic syntax; it doesn't seem to appear anywhere else in the MART repo.
There's this:
MART/mart/callbacks/progress_bar.py, line 16 in 36609c7:

```python
class ProgressBar(TQDMProgressBar):
```
Not sure how helpful it is to re-use
Ah, you should learn about the walrus operator (`:=`, a Python 3.8 feature)! It's super useful in some circumstances.
I think using ProgressBar will be better once this code becomes MART native.
Good article here about the walrus operator: https://realpython.com/python-walrus-operator/
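For anyone following along, here is a minimal sketch of the pattern under discussion: the walrus operator binds a name inside the `for` expression, so the loop keeps a handle to the iterable it is iterating over (with tqdm, that handle is the progress bar itself, e.g. for `pbar.set_postfix(...)`). `fake_trange` below is a hypothetical stand-in for `tqdm.trange`, used only so the sketch has no dependencies.

```python
def fake_trange(n, desc=""):
    # Hypothetical stand-in for tqdm.trange; just yields the step indices.
    return list(range(n))

results = []
# `pbar` is bound to the iterable itself while we iterate over it.
for step in (pbar := fake_trange(3, desc="Attack")):
    results.append(step)

print(pbar)     # [0, 1, 2] -- still accessible after (and inside) the loop
print(results)  # [0, 1, 2]
```

The same two lines without the walrus would need a separate `pbar = trange(...)` statement before the loop; the operator just folds that assignment into the expression.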
mart/callbacks/anomalib.py
Outdated
```python
image_hsv = rgb_to_hsv(image_adv)
image_hue, image_sat, image_val = torch.unbind(image_hsv, dim=-3)

# Additive hue perturbation with STE clipping
```
The comment is inaccurate: we never did STE for `torch.remainder`. I think it's fine not to do it, though; it should just be a comment reword.
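For context, the straight-through-estimator (STE) clipping trick that the comment refers to looks like the sketch below: the forward pass produces the clipped value, but because the correction term is `.detach()`ed, the backward pass treats the operation as the identity, so gradients are not zeroed out at the clip boundaries. This is an illustration of the general pattern, not the exact code in the PR.

```python
import torch

x = torch.tensor([-0.5, 0.5, 1.5], requires_grad=True)

# Forward value equals clip(x, 0, 1); the detached term contributes no gradient.
y = x + (torch.clip(x, 0.0, 1.0) - x).detach()

y.sum().backward()
print(y)       # tensor([0.0000, 0.5000, 1.0000], ...)
print(x.grad)  # tensor([1., 1., 1.]) -- identity gradient, even where clipped
```

A plain `torch.clip` would give zero gradient for the out-of-range entries, which is why the STE form is used for the saturation update but was never needed for the periodic `torch.remainder` on hue.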
Fixed in faefb8e.
mart/callbacks/anomalib.py
Outdated
```python
image_sat = image_sat + sat[:, None, None]
image_sat = image_sat + (torch.clip(image_sat, 0.0, 1.0) - image_sat).detach()

# Convert image fro HSV to RGB space
```
"from"
Fixed in faefb8e.
mart/callbacks/anomalib.py
Outdated
```python
loss = negative_loss + positive_loss

# decrease negatives and increase positives
```
Given that you removed `maximize=True` in Adam, it may be worth adding an extra comment here that this loss will be minimized, so the intended effect is exactly the opposite of what the comment says.
I assume losses are meant to be minimized and gains to be maximized, but fixed in 85e85bb, I think.
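The sign convention being discussed can be sketched in a few lines: minimizing `loss = -gain` with plain gradient descent is equivalent to maximizing `gain`, which is what `maximize=True` in Adam would have done directly. The quadratic `gain` below is a made-up stand-in for the attack objective.

```python
# Hypothetical objective: gain(x) = -(x - 3)^2 is maximized at x = 3.
# Minimizing loss = -gain = (x - 3)^2 drives x to the same point.
x, lr = 0.0, 0.1
for _ in range(200):
    grad_loss = 2.0 * (x - 3.0)  # d/dx of (x - 3)^2
    x -= lr * grad_loss          # standard (minimizing) descent step

print(round(x, 3))  # 3.0
```

So once `maximize=True` is removed, any term you want to increase must enter the summed loss with a negative sign.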
@mzweilin: Would be nice to get this merged. Really need help on how to package this.
LGTM! Thanks for the README updates!
LGTM
What does this PR do?
This PR implements a semantic robustness callback for anomalib, as described in https://arxiv.org/abs/2405.07969. We make one critical design choice that differs from the paper: we supply the model with an augmented image and mask for computing metrics. Additionally, anomalib does not implement patching, so all images are square by default. We believe these changes shouldn't significantly alter results, since rotating masks is fine so long as we impute rotated-in pixel mask values as "non-anomalous" (0-valued).
Type of change
Please check all relevant options.
Testing
Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.
- `pytest`
- `CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16` reports 70% (21 sec/epoch).
- `CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2` reports 70% (14 sec/epoch).

Before submitting
- Ran the `pre-commit run -a` command without errors

Did you have fun?
Make sure you had fun coding 🙃