
llama3-8B generates answers that repeat too many times #333

Open
hktk07 opened this issue Sep 22, 2024 · 0 comments
hktk07 commented Sep 22, 2024

Describe the bug

I prompted Llama 3 with the question "Hey how are you doing today?". Llama 3 answers the question, then keeps repeating similar sentences about 400 times until the output is cut off; see the output below.

import transformers
import torch
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"
model_id = "meta-llama/Meta-Llama-3-8B"

# Build a text-generation pipeline with fp16 weights, sharded across the
# available GPUs via device_map="auto".
pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.float16}, device_map="auto"
)
generated_text = pipeline("Hey how are you doing today?")[0]["generated_text"]
print(generated_text)

Output

"Hey how are you doing today? I hope you are doing well. I am not doing well today. I am doing really badly. I am really depressed. I feel like I have no purpose in life. I feel like I am not going anywhere. I feel like I am stuck in this hell hole of a town and I have no way of getting out. I feel like I am not good enough for anything. I feel like I am not smart enough for anything. "...

Runtime Environment

  • Model: meta-llama/Meta-Llama-3-8B
  • Using via Hugging Face? yes
  • Hardware: 4× RTX 2080 Ti
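Separately, for a conversational prompt like this one the instruction-tuned variant is usually the better fit. The sketch below assumes meta-llama/Meta-Llama-3-8B-Instruct is accessible to your account and that the transformers version is recent enough for the pipeline to accept chat messages and apply the model's chat template, which handles the end-of-turn token so generation stops on its own.

import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.float16}, device_map="auto"
)

# Passing a list of chat messages lets the pipeline apply the chat template,
# so generation stops at the assistant's end-of-turn token instead of rambling.
messages = [{"role": "user", "content": "Hey how are you doing today?"}]
output = pipeline(messages, max_new_tokens=128)
# The pipeline returns the conversation with the assistant reply appended.
print(output[0]["generated_text"][-1]["content"])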