How to load custom ONNX fp16 models (for example Real DRCT GAN)? That would be a great improvement! #107
Comments
Hi my friend, the onnx file alone is not enough to implement the model. I would need the GitHub project to better understand how to implement it.
With this script it is working fine.
Is the output blurred?
Could you add support for Real DRCT?
Hi my friend, I was trying to replicate the project to convert it to ONNX. Where did you find the onnx file you posted? I can't find it in the project's GitHub repo.
The author removed the finetuned model from Google Drive.
Hi, @Djdefrag |
Hi my friend, I tried to replicate the DRCT torch model and convert it to ONNX, but without success. In any case, if you already have the onnx model, you can make it compatible with QualityScaler. Essentially, if you have the onnx model in fp32 mode, you are already well on your way. To do this you can use the following code:
where selected_AI_model = "-DCRT-something"
https://mega.nz/file/0gJwyIBA#fTdbXWb6zbWrQApg2VgNRbY_fh3wdy5f-mP4Oz1jVbU
Please add support for this model for the super-resolution task, since it is SOTA.