
Error when starting the project on Windows 11 #135

Open
lmq886 opened this issue Jun 30, 2024 · 0 comments
lmq886 commented Jun 30, 2024


NeRFNetwork(
  (audio_net): AudioNet(
    (encoder_conv): Sequential(
      (0): Conv1d(44, 32, kernel_size=(3,), stride=(2,), padding=(1,))
      (1): LeakyReLU(negative_slope=0.02, inplace=True)
      (2): Conv1d(32, 32, kernel_size=(3,), stride=(2,), padding=(1,))
      (3): LeakyReLU(negative_slope=0.02, inplace=True)
      (4): Conv1d(32, 64, kernel_size=(3,), stride=(2,), padding=(1,))
      (5): LeakyReLU(negative_slope=0.02, inplace=True)
      (6): Conv1d(64, 64, kernel_size=(3,), stride=(2,), padding=(1,))
      (7): LeakyReLU(negative_slope=0.02, inplace=True)
    )
    (encoder_fc1): Sequential(
      (0): Linear(in_features=64, out_features=64, bias=True)
      (1): LeakyReLU(negative_slope=0.02, inplace=True)
      (2): Linear(in_features=64, out_features=32, bias=True)
    )
  )
  (audio_att_net): AudioAttNet(
    (attentionConvNet): Sequential(
      (0): Conv1d(32, 16, kernel_size=(3,), stride=(1,), padding=(1,))
      (1): LeakyReLU(negative_slope=0.02, inplace=True)
      (2): Conv1d(16, 8, kernel_size=(3,), stride=(1,), padding=(1,))
      (3): LeakyReLU(negative_slope=0.02, inplace=True)
      (4): Conv1d(8, 4, kernel_size=(3,), stride=(1,), padding=(1,))
      (5): LeakyReLU(negative_slope=0.02, inplace=True)
      (6): Conv1d(4, 2, kernel_size=(3,), stride=(1,), padding=(1,))
      (7): LeakyReLU(negative_slope=0.02, inplace=True)
      (8): Conv1d(2, 1, kernel_size=(3,), stride=(1,), padding=(1,))
      (9): LeakyReLU(negative_slope=0.02, inplace=True)
    )
    (attentionNet): Sequential(
      (0): Linear(in_features=8, out_features=8, bias=True)
      (1): Softmax(dim=1)
    )
  )
  (encoder_xy): GridEncoder: input_dim=2 num_levels=12 level_dim=1 resolution=64 -> 512 per_level_scale=1.2081 params=(163584, 1) gridtype=hash align_corners=False  
  (encoder_yz): GridEncoder: input_dim=2 num_levels=12 level_dim=1 resolution=64 -> 512 per_level_scale=1.2081 params=(163584, 1) gridtype=hash align_corners=False  
  (encoder_xz): GridEncoder: input_dim=2 num_levels=12 level_dim=1 resolution=64 -> 512 per_level_scale=1.2081 params=(163584, 1) gridtype=hash align_corners=False  
  (eye_att_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=36, out_features=16, bias=False)
      (1): Linear(in_features=16, out_features=1, bias=False)
    )
  )
  (sigma_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=69, out_features=64, bias=False)
      (1): Linear(in_features=64, out_features=64, bias=False)
      (2): Linear(in_features=64, out_features=65, bias=False)
    )
  )
  (encoder_dir): SHEncoder: input_dim=3 degree=4
  (color_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=84, out_features=64, bias=False)
      (1): Linear(in_features=64, out_features=3, bias=False)
    )
  )
  (unc_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=36, out_features=32, bias=False)
      (1): Linear(in_features=32, out_features=1, bias=False)
    )
  )
  (aud_ch_att_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=36, out_features=64, bias=False)
      (1): Linear(in_features=64, out_features=32, bias=False)
    )
  )
  (torso_deform_encoder): FreqEncoder: input_dim=2 degree=8 output_dim=34
  (anchor_encoder): FreqEncoder: input_dim=6 degree=3 output_dim=42
  (torso_deform_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=84, out_features=32, bias=False)
      (1): Linear(in_features=32, out_features=32, bias=False)
      (2): Linear(in_features=32, out_features=2, bias=False)
    )
  )
  (torso_encoder): GridEncoder: input_dim=2 num_levels=16 level_dim=2 resolution=16 -> 2048 per_level_scale=1.3819 params=(555520, 2) gridtype=tiled align_corners=False
  (torso_net): MLP(
    (net): ModuleList(
      (0): Linear(in_features=116, out_features=32, bias=False)
      (1): Linear(in_features=32, out_features=32, bias=False)
      (2): Linear(in_features=32, out_features=4, bias=False)
    )
  )
)
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' 
are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Traceback (most recent call last):
  File "E:\work\metahuman-stream-main\app.py", line 376, in <module>
    trainer = Trainer('ngp', opt, model, device=device, workspace=opt.workspace, criterion=criterion, fp16=opt.fp16,
  File "E:\work\metahuman-stream-main\ernerf\nerf_triplane\utils.py", line 655, in __init__
    self.criterion_lpips_alex = lpips.LPIPS(net='alex').to(self.device)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\lpips\lpips.py", line 84, in __init__
    self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\lpips\pretrained_networks.py", line 59, in __init__
    alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\_utils.py", line 142, in wrapper
    return fn(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\_utils.py", line 228, in inner_wrapper
    return builder(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\alexnet.py", line 114, in alexnet
    model.load_state_dict(weights.get_state_dict(progress=progress))
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torchvision\models\_api.py", line 63, in get_state_dict
    return load_state_dict_from_url(self.url, progress=progress)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torch\hub.py", line 731, in load_state_dict_from_url
    return torch.load(cached_file, map_location=map_location)
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torch\serialization.py", line 705, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "D:\ProgramData\anaconda3\envs\nerfstream\lib\site-packages\torch\serialization.py", line 242, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
Exception ignored in: <function Trainer.__del__ at 0x000001283CE60940>
Traceback (most recent call last):
  File "E:\work\metahuman-stream-main\ernerf\nerf_triplane\utils.py", line 708, in __del__
    if self.log_ptr:
AttributeError: 'Trainer' object has no attribute 'log_ptr'
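Note: the trailing AttributeError is only a side effect of the crash above. Trainer.__init__ raised before self.log_ptr was assigned, so the destructor fails during interpreter cleanup. It can be silenced with a defensive __del__; a minimal sketch, assuming log_ptr is the open log file handle that the traceback suggests:

```python
class Trainer:
    # ... rest of the class unchanged ...

    def __del__(self):
        # If __init__ raised before self.log_ptr was assigned (as in the
        # traceback above), there is nothing to close; getattr avoids the
        # AttributeError that otherwise masks the real error.
        log_ptr = getattr(self, "log_ptr", None)
        if log_ptr:
            log_ptr.close()
```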

What should I do about this? I can't find where the problem is.
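For reference, "PytorchStreamReader failed reading zip archive: failed finding central directory" usually means the AlexNet checkpoint that torch.hub downloaded for LPIPS was truncated or corrupted (for example by an interrupted download). A minimal sketch, assuming the default torch.hub cache layout and checkpoint filename prefix, that clears the cached file and re-downloads it:

```python
import glob
import os

import torch
import torchvision

# torch.hub caches downloaded weights under <hub_dir>/checkpoints
# (by default %USERPROFILE%\.cache\torch\hub\checkpoints on Windows).
ckpt_dir = os.path.join(torch.hub.get_dir(), "checkpoints")

# Delete any cached AlexNet checkpoint; a truncated download produces exactly
# the "failed finding central directory" error when torch.load opens it.
for path in glob.glob(os.path.join(ckpt_dir, "alexnet-*.pth")):
    print("removing", path)
    os.remove(path)

# Trigger a fresh download of the same weights file that LPIPS loads internally.
torchvision.models.alexnet(weights=torchvision.models.AlexNet_Weights.IMAGENET1K_V1)
print("AlexNet weights re-downloaded")
```

Deleting the checkpoints folder by hand and re-running app.py should have the same effect, since torch.hub re-downloads any missing file.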
