
Modality dispatch #136

Open · mzweilin wants to merge 41 commits into main
Conversation

@mzweilin (Contributor) commented Apr 18, 2023

What does this PR do?

  • Implement modality_dispatch() using the singledispatch pattern (see the sketch after this list).
  • Use modality_dispatch() in Perturber for Initializer and Projector.
    • Remove the list-based dispatch from Initializer and Projector.
  • Make Enforcer and Composer modality-aware with modality_dispatch().
    • GradientModifier is handled differently because of the way we manage trainable parameters.
  • Annotate modality-aware parameters in Optimizer for GradientModifier.
  • Add a modality-aware adversary example.
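
A minimal sketch of what the singledispatch-based modality_dispatch() could look like (editor's sketch, not the exact code in mart/utils/modality_dispatch.py; the recursion details and the dict-of-callables convention for modality_func are assumptions):

    from functools import singledispatch

    import torch

    @singledispatch
    def modality_dispatch(data, *, input, target, modality_func, modality="default"):
        raise NotImplementedError(f"Unsupported data type: {type(data)}.")

    @modality_dispatch.register
    def _(data: torch.Tensor, *, input, target, modality_func, modality="default"):
        # Leaf case: apply the modality-specific callable to the tensor.
        # Assumes modality_func is a dict of callables indexed by modality;
        # the plain-callable variant is omitted here.
        return modality_func[modality](data, input=input, target=target)

    @modality_dispatch.register
    def _(data: dict, *, input, target, modality_func, modality="default"):
        # Dict case: keys are modality names; recurse per modality.
        # Assumes input is a dict with the same keys as data.
        return {
            mod: modality_dispatch(
                d, input=input[mod], target=target, modality_func=modality_func, modality=mod
            )
            for mod, d in data.items()
        }

    @modality_dispatch.register
    def _(data: list, *, input, target, modality_func, modality="default"):
        # Iterable case: recurse element-wise; assumes parallel lists.
        return [
            modality_dispatch(
                d, input=i, target=t, modality_func=modality_func, modality=modality
            )
            for d, i, t in zip(data, input, target)
        ]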

Benign:

CUDA_VISIBLE_DEVICES=0 \
python -m mart \
experiment=ArmoryCarlaOverObjDet_TorchvisionFasterRCNN \
trainer=gpu \
fit=false \
+trainer.limit_test_batches=1 \
+model.load_state_dict.losses_and_detections.model=/home/weilinxu/coder/GARD-with-MART/oscar/model_zoo/carla_rgb_weights_eval6.pt
│      test_metrics/map_50       │       0.6349384784698486       │

Adversarial:

CUDA_VISIBLE_DEVICES=0 \
python -m mart \
experiment=ArmoryCarlaOverObjDet_TorchvisionFasterRCNN \
trainer=gpu \
fit=false \
+trainer.limit_test_batches=1 \
+model.load_state_dict.losses_and_detections.model=/home/weilinxu/coder/GARD-with-MART/oscar/model_zoo/carla_rgb_weights_eval6.pt \
+attack@model.modules.input_adv_test=object_detection_rgb_mask_adversary \
+model.test_sequence.seq005=input_adv_test \
model.test_sequence.seq010.preprocessor=["input_adv_test"]
│      test_metrics/map_50       │       0.4633878767490387       │

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

@mzweilin mzweilin requested a review from dxoigmn April 18, 2023 17:57
@dxoigmn (Contributor) left a comment:

See comments.

Comment on lines 21 to 24:

    data: torch.Tensor | dict[str, torch.Tensor] | Iterable[Any],
    *,
    input: torch.Tensor | dict[str, torch.Tensor] | Iterable[Any],
    target: torch.Tensor | Iterable[Any] | None,
@dxoigmn (Contributor):

These types should match whatever is in Enforcer above.

@mzweilin (Author):

Um, shall we make some changes, since it is a recursive function?

@dxoigmn (Contributor):

Huh? Whatever Enforcer takes should be the same things this function takes? If you believe that, then why are the type annotations different?

@mzweilin (Author):

Because modality_dispatch() is a recursive function that would see sub-components of the original input at some depth.
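
(Editor's illustration, not from the PR: with the hypothetical signature sketched in the description above, the top-level call dispatches on a dict keyed by modality, while the recursive calls dispatch on the bare tensors beneath it. initializer_by_modality is an assumed per-modality dict of callables.)

    import torch

    # Hypothetical multi-modal input: keys are modality names.
    input = {"rgb": torch.rand(3, 224, 224), "depth": torch.rand(1, 224, 224)}
    perturbation = {"rgb": torch.empty(3, 224, 224), "depth": torch.empty(1, 224, 224)}

    # Assumed per-modality callables, matching the leaf-case signature.
    initializer_by_modality = {
        "rgb": lambda data, *, input, target: data.uniform_(0, 255),
        "depth": lambda data, *, input, target: data.zero_(),
    }

    # Depth 0 dispatches on dict; depth 1 dispatches on torch.Tensor.
    modality_dispatch(
        perturbation, input=input, target=None, modality_func=initializer_by_modality
    )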

@dxoigmn (Contributor):

But so does Enforcer? I can pass a tensor, an Iterable[Tensor], or an Iterable[dict[str, Tensor]] to Enforcer. I don't get why the function being recursive matters. The type annotations tell you what the function accepts.

@mzweilin (Author):

I made it the same as in Enforcer.

@mzweilin (Author):

Well, it is now different from Enforcer, because other modules such as Initializer also use modality_dispatch() and they may have different argument types.

Comment on lines 104 to 109:

  -  input_adv: torch.Tensor | Iterable[torch.Tensor],
  +  input_adv: torch.Tensor | Iterable[torch.Tensor] | Iterable[dict[str, torch.Tensor]],
     *,
  -  input: torch.Tensor | Iterable[torch.Tensor],
  -  target: torch.Tensor | Iterable[torch.Tensor | dict[str, Any]],
  +  input: torch.Tensor | Iterable[torch.Tensor] | Iterable[dict[str, torch.Tensor]],
  +  target: torch.Tensor | Iterable[torch.Tensor] | Iterable[dict[str, Any]],
@dxoigmn (Contributor):

Can you be more specific than Any with the types?

@mzweilin (Author):

Fixed. I think it can be a tensor or a string (a file name, for example).

@mzweilin (Author):

I changed it back to Any because it doesn't really matter; other modules that use modality_dispatch() also annotate that argument as Any.

mart/attack/enforcer.py (outdated, resolved)
mart/attack/enforcer.py (outdated, resolved)
mart/utils/modality_dispatch.py (outdated, resolved)
Comment on lines 37 to 43:

    if isinstance(input, torch.Tensor):
        if isinstance(modality_func, dict):
            # A dictionary of Callable indexed by modality.
            return modality_func[modality](data, input=input, target=target)
        else:
            # A Callable with modality=? as a keyword argument.
            return modality_func(data, input=input, target=target, modality=modality)
@dxoigmn (Contributor):

I think it's clearer to explicitly check the type of each input instead of nesting if statements; the else branch should raise an exception.

@mzweilin (Author):

I made it a singledispatch function.

    The function returns an object that is homomorphic to input and data.
    """

    assert type(data) == type(input)
@dxoigmn (Contributor):

Note that this will break the universal perturbation idea, since I want to apply the same perturbation to every input. Instead, you should dispatch on the types below and raise a NotImplementedError if you don't support that combination of types.

@mzweilin (Author):

The new singledispatch implementation does not have the assertion.
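
(Editor's illustration of the dispatch-on-type-combinations pattern suggested above; hypothetical code, with apply_perturbation() standing in for the real composer logic.)

    import torch

    def apply_perturbation(data, input):
        # Stand-in for the real per-example composition.
        return input + data

    def dispatch(data, input):
        # Pairwise case: one perturbation per input.
        if isinstance(data, torch.Tensor) and isinstance(input, torch.Tensor):
            return apply_perturbation(data, input)
        # Universal case: broadcast a single perturbation to every input.
        if isinstance(data, torch.Tensor) and isinstance(input, (list, tuple)):
            return [dispatch(data, inp) for inp in input]
        # Batched case: recurse element-wise over parallel sequences.
        if isinstance(data, (list, tuple)) and isinstance(input, (list, tuple)):
            return [dispatch(d, inp) for d, inp in zip(data, input)]
        raise NotImplementedError(f"Unsupported combination: {type(data)} and {type(input)}.")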

Base automatically changed from iterable_instead_of_tuple to main May 16, 2023 15:57
@mzweilin mzweilin marked this pull request as draft June 21, 2023 03:35
@mzweilin mzweilin marked this pull request as ready for review June 21, 2023 23:12
@mzweilin mzweilin requested a review from dxoigmn June 22, 2023 17:50
@dxoigmn (Contributor) left a comment:

Did we ever discuss the merits of making Composer and GradientModifier modality-aware instead of making Adversary modality-aware? What if I want to generate a multi-modal universal attack?

Comment on lines 18 to 21:

  -  perturbation: torch.Tensor | Iterable[torch.Tensor],
  +  perturbation: torch.Tensor,
     *,
  -  input: torch.Tensor | Iterable[torch.Tensor],
  -  target: torch.Tensor | Iterable[torch.Tensor] | Iterable[dict[str, Any]],
  +  input: torch.Tensor,
  +  target: torch.Tensor | dict[str, Any],
@dxoigmn (Contributor):

These changes make me think this change is a regression. Why do we not accept Iterable[torch.Tensor] anymore?

@mzweilin (Author):

OK. I reverted changes to Composer and created a new Modality(Composer).
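
(Editor's sketch of one plausible shape for the new Modality composer; the constructor convention, the import path, and the Additive/Overlay per-modality composers are assumptions, not the merged code.)

    from mart.utils import modality_dispatch  # assumed import path

    class Modality:
        """Route per-modality perturbations to per-modality composers."""

        def __init__(self, **modality_composers):
            # e.g. Modality(rgb=Additive(), depth=Overlay())
            self.composers = modality_composers

        def __call__(self, perturbation, *, input, target):
            # modality_dispatch() selects self.composers[modality] at each leaf,
            # so the per-modality composers keep their tensor-only signature.
            return modality_dispatch(
                perturbation, input=input, target=target, modality_func=self.composers
            )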

@dxoigmn (Contributor) commented Jun 22, 2023:

Do you think we should wait to merge this until we change MART to bound inputs between [0, 1] instead of [0, 255]? See #70.
