
fix: ONNX exportability compatibility test and fix #275

Draft
wants to merge 1 commit into base: main

Conversation


@nimiiit nimiiit commented Nov 14, 2024

fix: Fix ONNX export of the MaxBlurPool layer, and optimize performance by registering the blur kernel as a buffer so it does not have to be copied to the GPU on every forward pass.

Description

  • What: Converting the pretrained models to ONNX format fails in the MaxBlurPool layer used in the N2V2 architecture. This is mainly because the convolution kernel is dynamically expanded to match the number of channels in the input, while the number of channels should be constant within the model.
  • Why: Users can convert the PyTorch models to ONNX for inference on their own platforms.
  • How:
    -- Instead of using the symbolic variable x.size(1), explicitly cast it to an integer so it is traced as a constant.
    -- Register the kernel as a buffer to avoid the overhead of copying it to the GPU on every forward pass.
    -- Add tests for ONNX exportability.
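The two code changes above can be sketched as follows. This is a minimal illustration of the technique, not the actual CAREamics implementation; the class name, kernel size, and layer hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaxBlurPool2d(nn.Module):
    """Max pooling followed by an anti-aliasing blur convolution (sketch)."""

    def __init__(self) -> None:
        super().__init__()
        # 2D binomial blur kernel built from [1, 2, 1], normalized to sum to 1.
        coeffs = torch.tensor([1.0, 2.0, 1.0])
        kernel = coeffs[:, None] * coeffs[None, :]
        kernel = kernel / kernel.sum()
        # Registering as a buffer means the kernel moves with the module
        # (.to(device)/.cuda()) instead of being copied to the GPU on every
        # forward pass; persistent=False keeps it out of the state_dict so
        # pretrained checkpoints still load.
        self.register_buffer("kernel", kernel[None, None], persistent=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.max_pool2d(x, kernel_size=2, stride=1)
        # Cast the channel count to a plain Python int so the ONNX trace
        # records a constant rather than the symbolic x.size(1).
        channels = int(x.size(1))
        kernel = self.kernel.expand(channels, 1, -1, -1)
        return F.conv2d(x, kernel, stride=2, padding=1, groups=channels)
```

For example, `MaxBlurPool2d()(torch.randn(1, 16, 32, 32))` produces a tensor of shape `(1, 16, 16, 16)`: max pooling with stride 1 shrinks the spatial size to 31, and the stride-2 blur convolution halves it.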

Changes Made

  • Added:
    -- onnx as a test dependency in pyproject.toml
    -- 'test_lightning_module_onnx_exportability.py' with tests for ONNX export
  • Modified: Maxblurpool module in 'layers.py'

Please ensure your PR meets the following requirements:

  • Code builds and passes tests locally, including doctests
  • New tests have been added (for bug fixes/features)
  • Pre-commit passes
  • PR to the documentation exists (for bug fixes / features)

dynamic axes in all dimensions

remove from state_dict

pre-commit ruff fix