1. FileNotFoundError: [Errno 2] No such file or directory: './models/visualcla_merged-7b/pytorch_model.bin'
For the merged-weights case, I ran:
cp visualcla/pytorch_model.bin models/visualcla_merged-7b/
I am not sure whether this is correct.
2.OSError: Can't load the configuration of './models/visualcla_merged-7b/vision_encoder'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './models/visualcla_merged-7b/vision_encoder' is the correct path to a directory containing a config.json file
For the merged-weights case, I ran:
cp -r ./visualcla/vision_encoder/ ./models/visualcla_merged-7b/
I am not sure whether this is correct.
3.OSError: ./models/visualcla_merged-7b does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./models/visualcla_merged-7b/main' for available files.
For the merged-weights case, I ran:
cp ./visualcla/preprocessor_config.json models/visualcla_merged-7b/
I am not sure whether this is correct.
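The three copy steps above can be sketched as one small script. The helper name `copy_merged_artifacts` is mine, and the paths assume the merge script wrote its output to `./visualcla`, as in the `cp` commands above:

```python
import shutil
from pathlib import Path

def copy_merged_artifacts(src: str, dst: str) -> None:
    """Copy the files text-generation-webui expects into the model dir.

    Mirrors the three `cp` commands above: model weights, the vision
    encoder directory, and the image preprocessor config.
    """
    src_dir, dst_dir = Path(src), Path(dst)
    shutil.copy(src_dir / "pytorch_model.bin", dst_dir)
    shutil.copytree(src_dir / "vision_encoder",
                    dst_dir / "vision_encoder", dirs_exist_ok=True)
    shutil.copy(src_dir / "preprocessor_config.json", dst_dir)
```
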
4. KeyError: 'visual_resampler_config'
After the steps above, I re-ran server.py:
$ python server.py --model=visualcla_merged-7b --multimodal-pipeline=visualcla-7b --chat --settings=settings-visualcla.yaml --share --load-in-8bit
2023-07-27 09:31:45 WARNING:The gradio "share link" feature uses a proprietary executable to create a reverse tunnel. Use it with care.
2023-07-27 09:31:47 INFO:Loading settings from settings-visualcla.yaml...
2023-07-27 09:31:47 INFO:Loading visualcla_merged-7b...
2023-07-27 09:38:36 WARNING:models/visualcla_merged-7b/special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-27 09:38:36 INFO:Loaded the model in 408.25 seconds.
2023-07-27 09:38:36 INFO:Loading the extension "multimodal"...
2023-07-27 09:38:36 INFO:VisualCLA - Loading CLIP from ./models/visualcla_merged-7b/vision_encoder as torch.float32 on cuda:0...
2023-07-27 09:38:38 INFO:VisualCLA - Loading visual resampler from ./models/visualcla_merged-7b/ as torch.float32 on cuda:0...
Traceback (most recent call last):
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/server.py", line 1179, in <module>
create_interface()
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/server.py", line 1086, in create_interface
extensions_module.create_extensions_block()
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/modules/extensions.py", line 175, in create_extensions_block
extension.ui()
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/script.py", line 119, in ui
multimodal_embedder = MultimodalEmbedder(params)
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/multimodal_embedder.py", line 27, in __init__
pipeline, source = load_pipeline(params)
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipeline_loader.py", line 30, in load_pipeline
pipeline = getattr(pipeline_modules[k], 'get_pipeline')(shared.args.multimodal_pipeline, params)
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/pipelines.py", line 11, in get_pipeline
return VisualCLA_7B_Pipeline(params)
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 140, in __init__
super().__init__(params)
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 30, in __init__
self.image_processor, self.vision_tower, self.visual_resampler, self.image_projection_layer = self._load_models()
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 47, in _load_models
visual_resampler_config = VisualResamplerConfig.from_dict(json.load(open(os.path.join(shared.settings['visualcla_merged_model'], 'config.json')))['visual_resampler_config'])
KeyError: 'visual_resampler_config'
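The failing line in visualcla.py simply indexes `config.json` by the `visual_resampler_config` key. A small sketch (the helper name `load_resampler_config` is hypothetical) reproduces that lookup with a clearer error message:

```python
import json
import os

def load_resampler_config(model_dir: str) -> dict:
    """Reproduce the lookup that fails in visualcla.py's _load_models.

    Raises a more descriptive KeyError when config.json lacks the
    'visual_resampler_config' section the pipeline expects.
    """
    path = os.path.join(model_dir, "config.json")
    with open(path) as f:
        cfg = json.load(f)
    if "visual_resampler_config" not in cfg:
        raise KeyError(
            f"{path} has no 'visual_resampler_config' key; it looks like "
            "a plain LLaMA config rather than one produced by the "
            "VisualCLA merge script"
        )
    return cfg["visual_resampler_config"]
```
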
The config file config.json is as follows:
$ more models/visualcla_merged-7b/config.json
{
"_name_or_path": "chinese-alpaca-plus-7b/",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.2",
"use_cache": true,
"vocab_size": 49954
}
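Parsing the pasted config confirms what the traceback says: this is a plain LLaMA config with no `visual_resampler_config` section. (I am assuming that section is normally written by the VisualCLA merge script.)

```python
import json

# The config.json pasted above, verbatim
config = json.loads("""
{
  "_name_or_path": "chinese-alpaca-plus-7b/",
  "architectures": ["LlamaForCausalLM"],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.30.2",
  "use_cache": true,
  "vocab_size": 49954
}
""")

# The key the pipeline indexes is absent, which is exactly the KeyError above.
print("visual_resampler_config" in config)  # False
```
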
Could you please take a look? Thanks.