
Semantic Robustness #259

Merged
merged 109 commits into from
Jun 6, 2024

Conversation

dxoigmn
Contributor

@dxoigmn dxoigmn commented May 21, 2024

What does this PR do?

This PR implements a semantic robustness callback for anomalib as described in https://arxiv.org/abs/2405.07969. We make one critical design choice that differs from the paper: we supply the model with an augmented image and mask when computing metrics. Additionally, anomalib does not implement patching, so all images are square by default. We believe these changes shouldn't significantly alter the results, since 1) rotating masks is fine so long as we impute the rotated-in pixel mask values as "non-anomalous", i.e. 0-valued.

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
  • CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

mzweilin added 30 commits May 2, 2024 09:14
best_batch = None
batch_history = defaultdict(list)

for step in (pbar := trange(self.steps, desc="Attack", position=1)):


This := is a bit of an exotic syntax; it doesn't seem to appear anywhere else in the MART repo.

@mariusarvinte mariusarvinte Jun 4, 2024

There's this:

class ProgressBar(TQDMProgressBar):

Not sure how helpful it is to re-use

Contributor Author

Ah, you should learn about the walrus operator (:=, a Python 3.8 feature)! It's super useful in some circumstances.

I think using ProgressBar will be better once this code becomes MART native.

Contributor Author

Good article here about the walrus operator: https://realpython.com/python-walrus-operator/
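For readers new to it, here is a minimal, self-contained sketch of the assignment-expression (walrus) pattern used in the attack loop above; iter() stands in for trange so the example needs no third-party dependency.

```python
# The walrus operator (:=) assigns and yields a value in one expression
# (Python 3.8+). In the loop above, `(pbar := trange(...))` binds the
# progress bar while iterating over it, so the loop body can call methods
# like pbar.set_postfix(). Here `it` plays the role of the progress bar.
steps = []
for step in (it := iter(range(3))):
    steps.append(step)  # `it` remains accessible inside the loop body

# The same operator also works inside conditions:
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    message = f"list has {n} items"
```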

image_hsv = rgb_to_hsv(image_adv)
image_hue, image_sat, image_val = torch.unbind(image_hsv, dim=-3)

# Additive hue perturbation with STE clipping
@mariusarvinte mariusarvinte Jun 4, 2024

The comment is inaccurate; we never did STE for torch.remainder. I think it's fine not to do it, though; the comment just needs rewording.

Contributor Author

@dxoigmn dxoigmn Jun 4, 2024

Fixed in faefb8e.
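For context, here is a minimal sketch of the straight-through-estimator (STE) clipping pattern the comment refers to, assuming PyTorch is installed: the forward pass sees the clipped value, while the detach() trick makes the backward pass treat the operation as the identity.

```python
import torch

# Forward: y equals torch.clip(x, 0, 1). Backward: the clipped difference
# is detached, so dy/dx is 1 everywhere and gradients are not zeroed at
# the clip boundaries (unlike plain torch.clip).
x = torch.tensor([-0.5, 0.5, 1.5], requires_grad=True)
y = x + (torch.clip(x, 0.0, 1.0) - x).detach()
y.sum().backward()
# y is [0.0, 0.5, 1.0]; x.grad is [1.0, 1.0, 1.0]
```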

image_sat = image_sat + sat[:, None, None]
image_sat = image_sat + (torch.clip(image_sat, 0.0, 1.0) - image_sat).detach()

# Convert image fro HSV to RGB space


"from"

Contributor Author

Fixed in faefb8e.


loss = negative_loss + positive_loss

# decrease negatives and increase positives
@mariusarvinte mariusarvinte Jun 4, 2024

Given that you removed maximize=True in Adam, it may be worth adding an extra comment here that this loss will be minimized, so the intended effect is exactly the opposite of what the comment says.

Contributor Author

I assume losses are made to be minimized and gains are made to be maximized, but fixed in 85e85bb, I think.
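To illustrate the sign convention being discussed, assuming PyTorch: with a standard (minimizing) optimizer, negating a gain turns maximization into minimization, which is what removing maximize=True requires.

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
gain = (2.0 * w).sum()  # quantity we would like to increase
loss = -gain            # minimize the negated gain instead
loss.backward()
# d(loss)/dw = -2, so a descent step w -= lr * w.grad increases w,
# i.e. minimizing `loss` maximizes `gain`.
```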

@dxoigmn
Contributor Author

dxoigmn commented Jun 4, 2024

@mzweilin: Would be nice to get this merged. Really need help on how to package this.

@dxoigmn
Contributor Author

dxoigmn commented Jun 6, 2024

LGTM! Thanks for the README updates!

Base automatically changed from mzweilin/example_anomalib_adversary to main June 6, 2024 15:58
Contributor

@mzweilin mzweilin left a comment


LGTM

@mzweilin mzweilin merged commit bec6508 into main Jun 6, 2024
5 checks passed
@mzweilin mzweilin deleted the dxoigmn/semantic_robustness branch June 6, 2024 16:18