Question of Demo or Inference in the app.py #12

Closed
HaoqianSong opened this issue May 30, 2024 · 3 comments

Comments

HaoqianSong commented May 30, 2024

Hello, I have some questions.

HaoqianSong (author) commented May 30, 2024

Can app.py only run on a single GPU, or can it also run on multiple GPUs? In addition, because my graphics card has too little memory, I modified the following two places so that the model is loaded on the CPU, and some problems occurred.
import torch
from ldm.util import instantiate_from_config  # import paths as in the stock Stable Diffusion scripts

def load_model_from_config(config, ckpt, device, verbose=False):
    global closure_device
    closure_device = device
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location=device)  # "cpu"
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    model = instantiate_from_config(config.model)
    m, u = model.load_state_dict(sd, strict=False)
    if len(m) > 0 and verbose:
        print("missing keys:")
        print(m)
    if len(u) > 0 and verbose:
        print("unexpected keys:")
        print(u)
    model.to(device)  # "cpu"
    model.eval()
    return model
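
For reference, a minimal sketch of driving this loader for CPU-only inference, assuming placeholder config/checkpoint paths and the standard OmegaConf-based setup; half precision and any torch.autocast("cuda") blocks elsewhere in app.py would also need to be disabled, since CPU inference generally requires float32:

import torch
from omegaconf import OmegaConf

# Placeholder paths; the repository's actual file names may differ.
config = OmegaConf.load("configs/demo.yaml")
model = load_model_from_config(config, "checkpoints/model.ckpt", device="cpu")

# Make sure no half-precision weights remain, and skip gradient tracking.
model = model.float()
with torch.no_grad():
    # Run the demo's sampling pipeline here (whatever app.py invokes
    # after loading); on CPU this will be slow.
    ...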

egeozguroglu (Collaborator) commented Aug 28, 2024

Hi, the current Gradio demo script is for a single GPU, but it can easily be adapted to leverage parallelization across multiple smaller GPUs.
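
One way to do that, sketched below: keep the denoising UNet on one GPU and move the autoencoder and conditioning encoder to a second one. The attribute names (model.model, model.first_stage_model, model.cond_stage_model) and decode_first_stage follow the standard LatentDiffusion layout and are assumptions about this codebase, not code from the repository:

import torch

def split_across_gpus(model):
    # Denoising UNet, usually the largest component, stays on GPU 0.
    model.model.to("cuda:0")
    # VAE and conditioning encoder move to GPU 1 to lower peak memory.
    model.first_stage_model.to("cuda:1")
    model.cond_stage_model.to("cuda:1")
    return model

# During sampling, intermediate tensors must then be shuttled between
# devices by hand, e.g. decoding latents produced on cuda:0:
#   imgs = model.decode_first_stage(latents.to("cuda:1"))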

Kris0823 commented

Can the demo script be run on multiple GPUs?
