[vLLM] metadata script #959
Conversation
Yes!
Looks dope! I didn't test the script locally, but went through the exported dataset - looks amazing!
```python
source_code = response.text

models_dict = extract_models_dict(source_code)
architectures = [item for tup in models_dict.values() for item in tup]
```
Suggested change:
```diff
-architectures = [item for tup in models_dict.values() for item in tup]
+architectures = sorted(list({item for tup in models_dict.values() for item in tup}))
```
Maybe, if we want to remove duplicates and assuming tuple order does not matter (i.e., `llama` does not have to appear before `LlamaForCausalLM`).
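To illustrate the suggestion, a toy example (the dict contents below are hypothetical, not vLLM's actual registry):

```python
models_dict = {
    "llama": ("llama", "LlamaForCausalLM"),
    "gpt2": ("gpt2", "GPT2LMHeadModel", "LlamaForCausalLM"),  # hypothetical duplicate
}

# Original: flattens with duplicates, in dict insertion order.
[item for tup in models_dict.values() for item in tup]
# -> ['llama', 'LlamaForCausalLM', 'gpt2', 'GPT2LMHeadModel', 'LlamaForCausalLM']

# Suggested: the set comprehension drops duplicates, and sorted() makes the
# result deterministic (sets are unordered).
sorted(list({item for tup in models_dict.values() for item in tup}))
# -> ['GPT2LMHeadModel', 'LlamaForCausalLM', 'gpt2', 'llama']
```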
Very cool 🔥
```yaml
- name: Execute Python script
  env:
    HF_VLLM_METADATA_PUSH: ${{ secrets.HF_VLLM_METADATA_PUSH }}
  run: |
```
Can we move this code to a script rather than having the Python code in the YAML? It will be easier to maintain, update, and review.
> it will be easier to review

Agree with this point.

> it will be easier to maintain, update

I think maintaining a separate Python script would be painful. We would need to find a place to put the script and tell the YAML job to download and run it (which can introduce other security issues, since we would be running whatever is downloaded).
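For reference, a minimal sketch of the alternative being discussed, assuming the script is committed to the same repository (the path below is hypothetical), which would avoid downloading anything at run time:

```yaml
- name: Checkout repository
  uses: actions/checkout@v4

- name: Execute Python script
  env:
    HF_VLLM_METADATA_PUSH: ${{ secrets.HF_VLLM_METADATA_PUSH }}
  run: python scripts/get_vllm_metadata.py  # hypothetical path
```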
```python
from huggingface_hub import HfApi


def extract_models_sub_dict(parsed_code, sub_dict_name):
    class MODELS_SUB_LIST_VISITOR(ast.NodeVisitor):
```
Suggested change:
```diff
-    class MODELS_SUB_LIST_VISITOR(ast.NodeVisitor):
+    class ModelsSubListVisitor(ast.NodeVisitor):
```
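Since the diff only shows the first lines of `extract_models_sub_dict`, here is a minimal sketch of how such an `ast.NodeVisitor` could work; the matching logic and the example variable name `_MODELS` are assumptions for illustration, not the PR's actual code:

```python
import ast


def extract_models_sub_dict(parsed_code, sub_dict_name):
    """Return the literal dict assigned to `sub_dict_name` in a parsed module."""

    class ModelsSubListVisitor(ast.NodeVisitor):
        def __init__(self):
            self.models_dict = None

        def visit_Assign(self, node):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == sub_dict_name:
                    # Assumes the assigned value is a plain literal
                    # (e.g. a dict mapping strings to tuples of strings).
                    self.models_dict = ast.literal_eval(node.value)

    visitor = ModelsSubListVisitor()
    visitor.visit(parsed_code)
    return visitor.models_dict


# Hypothetical usage:
# extract_models_sub_dict(ast.parse(source_code), "_MODELS")
```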
Follow-up to #957. To show the vLLM snippet on HF models precisely, we need a record of which models are supported by vLLM. vLLM provides its supported models list in two places: the doc page and the python src.
This PR adds a script, run by a GitHub cron job once a day, that extracts the supported model architectures from the vLLM source and exports them as a dataset.
Find the Python script here: get_vlmm_metadata.py.zip (you can test-run it locally if you want).
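For a sense of the overall flow, a minimal sketch of the pipeline the diff fragments above suggest; the registry URL, dataset repo id, and file name are assumptions, and `extract_models_dict` refers to the function from the PR's script (see the diff excerpts above):

```python
import json
import os

import requests
from huggingface_hub import HfApi

# Hypothetical URL: the exact file the script fetches isn't shown in this excerpt.
VLLM_MODELS_URL = "https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/model_executor/models/__init__.py"

response = requests.get(VLLM_MODELS_URL)
response.raise_for_status()
source_code = response.text

# extract_models_dict is defined in the PR's script (see the diff above).
models_dict = extract_models_dict(source_code)
architectures = sorted({item for tup in models_dict.values() for item in tup})

with open("architectures.json", "w") as f:
    json.dump(architectures, f)

# Push the exported metadata to the Hub; the dataset repo id is illustrative.
api = HfApi(token=os.environ["HF_VLLM_METADATA_PUSH"])
api.upload_file(
    path_or_fileobj="architectures.json",
    path_in_repo="architectures.json",
    repo_id="vllm/supported-models",
    repo_type="dataset",
)
```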