System Info
Who can help?
@michaelbenayoun, @JingyaHuang, @echarlaix
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)
from optimum.onnxruntime import ORTModelForVision2Seq
model = ORTModelForVision2Seq.from_pretrained("/content/swin-xlm-image-recognition", export=True, use_cache=False)
model.save_pretrained("swin-xlm-image-recognition-onnx")
Expected behavior
How can I solve this issue? I am trying to convert my VisionEncoderDecoderModel to ONNX using Optimum, but I am getting this error:
KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. Only albert, bart, bert, blenderbot, blenderbot-small, bloom, falcon, camembert, codegen, cvt, deberta, deberta-v2, deit, distilbert, donut-swin, electra, encoder-decoder, gemma, gpt2, gpt-bigcode, gpt-neo, gpt-neox, gptj, imagegpt, llama, longt5, marian, markuplm, mbart, mistral, mixtral, mpnet, mpt, mt5, m2m-100, nystromformer, opt, pegasus, pix2struct, phi, phi3, phi3small, poolformer, regnet, resnet, roberta, segformer, speech-to-text, splinter, t5, trocr, vision-encoder-decoder, vit, whisper, xlm-roberta, yolos, qwen2, granite are supported. If you want to support swinv2 please propose a PR or open up an issue.'
The encoder is "swinv2" and the decoder is "xlm-roberta".
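For anyone hitting the same error, here is a possible workaround sketch (untested, not an official fix): the KeyError comes from Optimum's NormalizedConfigManager registry, so registering "swinv2" there before exporting may unblock the conversion. This assumes swinv2 can reuse the generic NormalizedVisionConfig the way donut-swin does; that assumption has not been verified.

from optimum.utils.normalized_config import NormalizedConfigManager, NormalizedVisionConfig
from optimum.onnxruntime import ORTModelForVision2Seq

# Assumption: swinv2 can be treated like the other Swin-style vision encoders
# already registered with NormalizedVisionConfig. This only patches the
# in-memory registry for the current process; it is not an upstream fix.
NormalizedConfigManager._conf["swinv2"] = NormalizedVisionConfig

# Retry the export with the patched registry.
model = ORTModelForVision2Seq.from_pretrained("/content/swin-xlm-image-recognition", export=True, use_cache=False)
model.save_pretrained("swin-xlm-image-recognition-onnx")

If this works, a proper fix would be a PR adding swinv2 to NormalizedConfigManager, as the error message suggests.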