_pickle.UnpicklingError: pickle data was truncated #97

Open
nitinmukesh opened this issue Jun 30, 2024 · 0 comments

@nitinmukesh
After hours of struggling with the installation, I am getting this error. Is there any solution, please?

(audiogpt) C:\sd\AudioGPT>python audio-chatgpt.py
Initializing AudioGPT
Initializing T2I to cuda:0
C:\Users\nitin\miniconda3\envs\audiogpt\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
unet\diffusion_pytorch_model.safetensors not found
Initializing ImageCaptioning to cuda:0
Initializing Make-An-Audio to cuda:0
LatentDiffusion_audio: Running in eps-prediction mode
DiffusionWrapper has 160.22 M params.
making attention of type 'vanilla' with 256 in_channels
making attention of type 'vanilla' with 256 in_channels
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 106, 106) = 44944 dimensions.
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 512 in_channels
making attention of type 'vanilla' with 256 in_channels
making attention of type 'vanilla' with 256 in_channels
making attention of type 'vanilla' with 256 in_channels
C:\Users\nitin\miniconda3\envs\audiogpt\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████| 48.0/48.0 [00:00<?, ?B/s]
config.json: 100%|████████████████████████████████████████████████████████████████████| 570/570 [00:00<00:00, 36.6kB/s]
vocab.txt: 100%|█████████████████████████████████████████████████████████████████████| 232k/232k [00:00<00:00, 531kB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████| 466k/466k [00:00<00:00, 735kB/s]
model.safetensors: 100%|████████████████████████████████████████████████████████████| 440M/440M [00:35<00:00, 12.3MB/s]
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
TextEncoder comes with 111.32 M params.
Traceback (most recent call last):
  File "audio-chatgpt.py", line 1377, in <module>
    bot = ConversationBot()
  File "audio-chatgpt.py", line 1057, in __init__
    self.t2a = T2A(device="cuda:0")
  File "audio-chatgpt.py", line 144, in __init__
    self.sampler = self._initialize_model('text_to_audio/Make_An_Audio/configs/text_to_audio/txt2audio_args.yaml', 'text_to_audio/Make_An_Audio/useful_ckpts/ta40multi_epoch=000085.ckpt', device=device)
  File "audio-chatgpt.py", line 150, in _initialize_model
    model.load_state_dict(torch.load(ckpt, map_location='cpu')["state_dict"], strict=False)
  File "C:\Users\nitin\miniconda3\envs\audiogpt\lib\site-packages\torch\serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\nitin\miniconda3\envs\audiogpt\lib\site-packages\torch\serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: pickle data was truncated
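
For context, this `UnpicklingError` usually means the `.ckpt` file on disk is incomplete (for example, an interrupted download), so `torch.load` hits the end of the file before it can finish unpickling. A minimal sketch to sanity-check the checkpoint before launching `audio-chatgpt.py` (the checkpoint path is taken from the traceback; the "expected size" is an assumption you would compare against the size published for `ta40multi_epoch=000085.ckpt`):

```python
# Sketch (not part of the AudioGPT repo): verify the Make-An-Audio checkpoint
# loads cleanly. If it fails with UnpicklingError/EOFError, the file is
# truncated and re-downloading it is the usual fix.
import os
import torch

ckpt = "text_to_audio/Make_An_Audio/useful_ckpts/ta40multi_epoch=000085.ckpt"

# Compare this against the file size reported by the checkpoint host.
print(f"size on disk: {os.path.getsize(ckpt) / 1e6:.1f} MB")

try:
    state = torch.load(ckpt, map_location="cpu")
    print("checkpoint loads fine; top-level keys:", list(state.keys())[:5])
except Exception as e:
    print("checkpoint appears corrupt or truncated, re-download it:", e)
```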

