Is the eva_giant_patch14_224 model no longer available? #586

Closed
EIFY opened this issue Jul 28, 2023 · 3 comments


EIFY commented Jul 28, 2023

I just followed the instructions to run `make install-test` and `make test`, but then I got the following error:

tests/test_inference.py::test_inference_with_data[EVA01-g-14-False] FAILED

==================================================================================================================== FAILURES =====================================================================================================================
___________________________________________________________________________________________________ test_inference_with_data[EVA01-g-14-False] ____________________________________________________________________________________________________

model_name = 'EVA01-g-14', jit = False, pretrained = None, pretrained_hf = False, precision = 'fp32', force_quick_gelu = False

    @pytest.mark.regression_test
    @pytest.mark.parametrize("model_name,jit", models_to_test_fully)
    def test_inference_with_data(
            model_name,
            jit,
            pretrained = None,
            pretrained_hf = False,
            precision = 'fp32',
            force_quick_gelu = False,
    ):
        util_test.seed_all()
>       model, _, preprocess_val = open_clip.create_model_and_transforms(
                model_name,
                pretrained = pretrained,
                precision = precision,
                jit = jit,
                force_quick_gelu = force_quick_gelu,
                pretrained_hf = pretrained_hf
        )

tests/test_inference.py:63: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/open_clip/factory.py:308: in create_model_and_transforms
    model = create_model(
src/open_clip/factory.py:190: in create_model
    model = CustomTextCLIP(**model_cfg, cast_dtype=cast_dtype)
src/open_clip/model.py:272: in __init__
    self.visual = _build_vision_tower(embed_dim, vision_cfg, quick_gelu, cast_dtype)
src/open_clip/model.py:101: in _build_vision_tower
    visual = TimmModel(
src/open_clip/timm_model.py:60: in __init__
    self.trunk = timm.create_model(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

model_name = 'eva_giant_patch14_224', pretrained = False, pretrained_cfg = None, checkpoint_path = '', scriptable = None, exportable = None, no_jit = None, kwargs = {'global_pool': 'token', 'num_classes': 1024}, model_source = 'timm'

    def create_model(
            model_name,
            pretrained=False,
            pretrained_cfg=None,
            checkpoint_path='',
            scriptable=None,
            exportable=None,
            no_jit=None,
            **kwargs):
        """Create a model
    
        Args:
            model_name (str): name of model to instantiate
            pretrained (bool): load pretrained ImageNet-1k weights if true
            checkpoint_path (str): path of checkpoint to load after model is initialized
            scriptable (bool): set layer config so that model is jit scriptable (not working for all models yet)
            exportable (bool): set layer config so that model is traceable / ONNX exportable (not fully impl/obeyed yet)
            no_jit (bool): set layer config so that model doesn't utilize jit scripted layers (so far activations only)
    
        Keyword Args:
            drop_rate (float): dropout rate for training (default: 0.0)
            global_pool (str): global pool type (default: 'avg')
            **: other kwargs are model specific
        """
        # Parameters that aren't supported by all models or are intended to only override model defaults if set
        # should default to None in command line args/cfg. Remove them if they are present and not set so that
        # non-supporting models don't break and default args remain in effect.
        kwargs = {k: v for k, v in kwargs.items() if v is not None}
    
        model_source, model_name = parse_model_name(model_name)
        if model_source == 'hf-hub':
            # FIXME hf-hub source overrides any passed in pretrained_cfg, warn?
            # For model names specified in the form `hf-hub:path/architecture_name@revision`,
            # load model weights + pretrained_cfg from Hugging Face hub.
            pretrained_cfg, model_name = load_model_config_from_hf(model_name)
    
        if not is_model(model_name):
>           raise RuntimeError('Unknown model (%s)' % model_name)
E           RuntimeError: Unknown model (eva_giant_patch14_224)

(...)

====== short test summary info ======
FAILED tests/test_inference.py::test_inference_with_data[EVA01-g-14-False] - RuntimeError: Unknown model (eva_giant_patch14_224)
!!!!!! stopping after 1 failures !!!!!!
====== 1 failed, 12 passed, 6 warnings in 9.33s ======

As one can see, tests up to that point passed.


rwightman commented Aug 6, 2023

@EIFY what version of timm do you have installed? The EVA CLIP models need a recent timm...

FYI I can run eval just fine on EVA01-g-14 from the main branch w/ timm 0.9.5 installed, but any 0.9.x timm should be fine.
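A quick way to catch this mismatch before running the test suite is a minimal version check. The sketch below is illustrative only (the `0.9.0` floor and the helper names are my assumptions, not part of open_clip); it uses the standard-library `importlib.metadata` to read the installed version:

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical minimum; per the comment above, any 0.9.x timm should be fine.
MIN_TIMM = (0, 9, 0)

def version_tuple(v):
    """Parse a 'major.minor[.patch]' string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split(".")[:3])

def timm_is_recent_enough():
    """Return True if an installed timm meets the assumed minimum version."""
    try:
        installed = version("timm")  # e.g. '0.6.11' or '0.9.5'
    except PackageNotFoundError:
        return False
    return version_tuple(installed) >= MIN_TIMM
```

With this, `version_tuple("0.6.11")` compares below the floor while `version_tuple("0.9.5")` passes, matching the behavior observed in this issue.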


EIFY commented Aug 6, 2023

@rwightman I was running timm==0.6.11, which came from the pin in requirements-test.txt:

timm==0.6.11

Updating to timm==0.9.5 indeed solved the issue. Perhaps we should update the pinned timm version in requirements-test.txt? Past that, the test suite still failed with the test_inference_simple[roberta-ViT-B-32-laion2b_s12b_b32k-False-False] error

E           RuntimeError: Error(s) in loading state_dict for CustomTextCLIP:
E           	Unexpected key(s) in state_dict: "text.transformer.embeddings.position_ids".

Just like #584, but that's a separate issue.
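For anyone hitting the same "Unexpected key(s)" error, the usual workaround is to drop the stale key before loading the checkpoint. A minimal sketch (the helper name is mine, not open_clip's API; older transformers versions saved `position_ids` as a persistent buffer that newer model classes no longer expect):

```python
def strip_position_ids(state_dict):
    """Drop stale '*.position_ids' entries from a checkpoint dict.

    Older transformers versions persisted position_ids as a buffer; newer
    model classes no longer expect the key, so load_state_dict(strict=True)
    raises 'Unexpected key(s) in state_dict'. Filtering it out is a common
    workaround (this helper is illustrative, not open_clip API).
    """
    return {k: v for k, v in state_dict.items()
            if not k.endswith(".position_ids")}

# With torch, one would then load the filtered dict, e.g.:
#   model.load_state_dict(strip_position_ids(checkpoint), strict=True)
```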


EIFY commented Aug 18, 2023

@rwightman I went ahead and put out a PR to update timm in requirements. Could you take a look? #601
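For reference, a loosened pin in requirements-test.txt might look like the following (the exact constraint chosen in #601 may differ; timm>=0.9.5 is simply the version confirmed to work above):

```
timm>=0.9.5
```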

EIFY closed this as completed Sep 11, 2023