Error when running multimodal_understanding.py; the only change was downloading the model from ModelScope instead #36
Comments
Could you pull the latest code and model? I checked all the latest code on both GitHub and ModelScope, and in none of it is processing_emu3.py line 159 a meaningful line of code.
It does run now, but it ran out of memory. Could the project consider splitting execution into chunks across multiple GPUs? Running on a single card requires too many resources. Emu3 really is the only model our industry can rely on; thanks to the Beijing Academy of Artificial Intelligence (BAAI).
The model is fully compatible with the optimization methods in transformers. You can directly use the automatic model sharding supported by transformers or accelerate (for the multimodal understanding model only); see the Emu2 demo code for reference, or use the int4 quantization built into transformers. If it is only the KV cache that overflows, you can also try the KV-cache offloading supported by the transformers library.
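For reference, a minimal sketch of the options mentioned above, assuming the stock transformers/accelerate APIs; the exact arguments for Emu3 may differ, and the project points to the Emu2 demo code as its own reference:

```python
# Minimal sketch, assuming standard transformers/accelerate APIs;
# not the project's official snippet.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Option 1: automatic multi-GPU sharding (requires `accelerate`);
# device_map="auto" splits the layers across all visible GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "BAAI/Emu3-Chat",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Option 2: int4 quantization via bitsandbytes.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "BAAI/Emu3-Chat",
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)

# For KV-cache overflow, recent transformers versions can also offload
# the cache to CPU during generation:
# outputs = model.generate(**inputs, cache_implementation="offloaded")
```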
Even after pulling the latest code, the error when running multimodal_understanding.py persists:

```
Traceback (most recent call last):
  ...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ...
```
```
Name: torch
Name: numpy
```
Could you try a different numpy version? It looks like the numpy-to-tensor conversion is what fails, yet the detected numpy.dtype seems fine. Our environment runs the same versions.
```python
print(pixel_values.shape)
print(pixel_values.dtype)
```
Please double-check your environment. From the information provided so far, this does not look like a problem in our code; rather, the numpy.array to torch.tensor conversion itself is failing.
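One way to confirm this is a minimal repro (a hypothetical sketch, not from this thread) that exercises the same numpy-to-tensor path outside Emu3:

```python
# Minimal repro sketch: if this fails with "Could not infer dtype of
# numpy.float32", the numpy/torch pairing is broken, not the Emu3 code.
import numpy as np
import torch

print("numpy:", np.__version__)
print("torch:", torch.__version__)

arr = np.zeros((2, 3), dtype=np.float32)
t = torch.from_numpy(arr)  # same kind of conversion BatchFeature performs
print(t.shape, t.dtype)    # expected: torch.Size([2, 3]) torch.float32
```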
|
Got it running successfully on dual 4090s:

```python
# -*- coding: utf-8 -*-
from PIL import Image
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, GenerationConfig)
from emu3.mllm.processing_emu3 import Emu3Processor
from modelscope import snapshot_download

# model path
EMU_HUB = snapshot_download("BAAI/Emu3-Chat")

# Quantization configuration
quantization_config = BitsAndBytesConfig(
    ...)  # arguments truncated in the original post

# prepare model and processor
model = AutoModelForCausalLM.from_pretrained(
    ...)  # arguments truncated in the original post
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True,
                                          padding_side="left")

# prepare input
text = ["Please describe the image", "Please describe the image"]
inputs = processor(
    ...)  # arguments truncated in the original post

# prepare hyper parameters
GENERATION_CONFIG = GenerationConfig(pad_token_id=tokenizer.pad_token_id,
                                     bos_token_id=tokenizer.bos_token_id,
                                     eos_token_id=tokenizer.eos_token_id)

# generate
outputs = model.generate(
    ...)  # arguments truncated in the original post
outputs = outputs[:, inputs.input_ids.shape[-1]:]
```
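The decoding step is cut off above; a hedged continuation following the standard transformers pattern (assuming the `tokenizer` and `outputs` defined in the script) would be:

```python
# Hypothetical continuation (not in the original comment): decode the
# generated token ids back to text with the standard transformers API.
answers = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for ans in answers:
    print(ans)
```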
Exception has occurred: ValueError
Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.

```
Traceback (most recent call last):
  ...
RuntimeError: Could not infer dtype of numpy.float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/lizhaorui/DL/Emu3/multimodal_understanding.py", line 35, in <module>
    inputs = processor(
  File "/home/lizhaorui/DL/Emu3/emu3/mllm/processing_emu3.py", line 159, in __call__
    image_tokens = self.tokenize_image(image, padding_image=padding_image)
  File "/home/lizhaorui/DL/Emu3/emu3/mllm/processing_emu3.py", line 274, in tokenize_image
    image_inputs = self.image_processor(image, return_tensors="pt")["pixel_values"]
  File "/home/lizhaorui/.cache/huggingface/modules/transformers_modules/Emu3-VisionTokenizer/image_processing_emu3visionvq.py", line 349, in preprocess
    return BatchFeature(data=data, tensor_type=return_tensors)
ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.
```
The images used are the example images from the project.