enable ops to support span #4

Merged: 1 commit merged into upscayl:master on Aug 1, 2024

Conversation

TNTwise

This is from my fork, https://github.com/TNTwise/SPAN-ncnn-vulkan/tree/master. SPAN can offer faster render times while beating even some transformer-based super-resolution architectures, and is similar in PSNR to SwinIR. It works fine with ncnn as long as certain ops are enabled in the ncnn build. After these changes, the models in the SPAN repo should just work with Upscayl, and they can be exported with some versions of chaiNNer nightly: https://github.com/chaiNNer-org/chaiNNer-nightly/releases/tag/2024-04-07.
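One quick way to sanity-check which ops a converted model needs is to list the distinct layer types in its exported .param file; a minimal sketch follows (the file name is only a placeholder, not a file from this repo):

# Minimal sketch: list the distinct layer types used in an ncnn .param file.
# In the ncnn param format, line 0 is the magic number, line 1 is
# "layer_count blob_count", and every following line starts with a layer type.
def ncnn_layer_types(param_path):
    with open(param_path) as f:
        lines = f.read().splitlines()
    return sorted({line.split()[0] for line in lines[2:] if line.strip()})

# Placeholder file name; point this at a real exported .param file.
print(ncnn_layer_types("2x_ModernSpanimationV1.5.param"))

Every type printed by this helper has to be enabled when ncnn is compiled, otherwise loading the param file fails.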

@TNTwise commented May 28, 2024

Clarification: chaiNNer does not work for converting SPAN PyTorch models to ncnn; you would need to use a script like https://github.com/TNTwise/SPAN-ncnn-vulkan/blob/master/export_span_spandrel.py
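Roughly, a spandrel-based export looks like this (a hedged sketch of the general approach, not the contents of export_span_spandrel.py; the file names, input shape, and the pnnx conversion step are assumptions):

# Rough sketch of a spandrel -> TorchScript -> ncnn export path.
# Assumes torch and spandrel are installed; file names are placeholders.
import torch
from spandrel import ModelLoader

desc = ModelLoader().load_from_file("2x_ModernSpanimationV1.5.pth")
model = desc.model.eval()

# Trace to TorchScript so the pnnx converter can turn it into .param/.bin.
example = torch.rand(1, 3, 256, 256)
traced = torch.jit.trace(model, example)
traced.save("2x_ModernSpanimationV1.5.pt")

# Then convert the TorchScript file with the pnnx tool, e.g.:
#   pnnx 2x_ModernSpanimationV1.5.pt inputshape=[1,3,256,256]
# which emits ncnn .param/.bin files among its outputs.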

@aaronliu0130 (Member)

@NayamAmarshe @TGS963 just changes the config, LGTM

@TGS963 (Member) commented Jul 28, 2024

Yeah, looks fine to me as well, waiting for @NayamAmarshe's input.

@TGS963 (Member) commented Jul 28, 2024

We should probably test a build with this branch, though.

@NayamAmarshe (Member)

Thanks for the PR! Could you please explain in more detail how this works? Is SPAN a different algorithm from Real-ESRGAN? If so, how can the Real-ESRGAN code run inference on SPAN?

@TNTwise commented Jul 28, 2024

SPAN is a different algorithm from ESRGAN, but ncnn can run inference on any layer that is enabled at compile time, and the network definition is just in the param file. The only things that need to line up are the input layer name and the output layer name; if all the ops are enabled, it doesn't matter what is in the middle.
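For illustration, a minimal sketch using ncnn's Python bindings, assuming placeholder file names and blob names ("data" and "output" stand in for whatever the exported param file actually declares):

# Minimal sketch with ncnn's Python bindings: the extractor only refers to the
# input and output blob names from the .param file; the layers in between can
# be any ops the ncnn build has enabled.
import numpy as np
import ncnn

net = ncnn.Net()
net.load_param("span-model.param")   # placeholder file names
net.load_model("span-model.bin")

img = np.random.rand(3, 256, 256).astype(np.float32)  # CHW float input

ex = net.create_extractor()
ex.input("data", ncnn.Mat(img))      # input blob name from the param file
ret, out = ex.extract("output")      # output blob name from the param file
print(np.array(out).shape)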

Here is the SPAN code: https://github.com/hongyuanyu/SPAN. You can export models using my script https://github.com/TNTwise/Universal-Pytorch-Upscaler with this example command:

./UniversalTorchUpscaler -m . -n 2x_ModernSpanimationV1.5.pth -e ncnn --half

The -m flag is the path leading to the model, -n is the name of the model, -e is the export option, and --half means half precision (this is usually what is exported when it comes to ncnn, as fp32 and fp16 show no real difference).

NayamAmarshe merged commit 8f8babf into upscayl:master on Aug 1, 2024