Fix INIT_YAML embeddings default settings (#1039)
Co-authored-by: Thanh Long Phan <long.phan@dida.do>
Co-authored-by: Alonso Guevara <alonsog@microsoft.com>
3 people authored Aug 28, 2024
Parent: 22df2f8 · Commit: 1b51827
Showing 2 changed files with 6 additions and 2 deletions.
.semversioner/next-release/patch-20240827203354884800.json (4 additions, 0 deletions)
@@ -0,0 +1,4 @@
+{
+    "type": "patch",
+    "description": "Fix default settings for embedding"
+}
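(For context: semversioner gathers one JSON note like this per change under .semversioner/next-release/; at release time the "type" field, "patch" here, sets the version bump, and the descriptions are rolled up into the changelog.)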
graphrag/index/init_content.py (2 additions, 2 deletions)
@@ -38,6 +38,8 @@
   ## parallelization: override the global parallelization settings for embeddings
   async_mode: {defs.ASYNC_MODE.value} # or asyncio
   # target: {defs.EMBEDDING_TARGET.value} # or all
+  # batch_size: {defs.EMBEDDING_BATCH_SIZE} # the number of documents to send in a single request
+  # batch_max_tokens: {defs.EMBEDDING_BATCH_MAX_TOKENS} # the maximum number of tokens to send in a single request
   llm:
     api_key: ${{GRAPHRAG_API_KEY}}
     type: {defs.EMBEDDING_TYPE.value} # or azure_openai_embedding
@@ -52,8 +54,6 @@
     # max_retry_wait: {defs.LLM_MAX_RETRY_WAIT}
     # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
     # concurrent_requests: {defs.LLM_CONCURRENT_REQUESTS} # the number of parallel inflight requests that may be made
-    # batch_size: {defs.EMBEDDING_BATCH_SIZE} # the number of documents to send in a single request
-    # batch_max_tokens: {defs.EMBEDDING_BATCH_MAX_TOKENS} # the maximum number of tokens to send in a single request
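Net effect: the commented-out batch_size and batch_max_tokens defaults move out of the llm sub-block and up to the embeddings level, which is where the embeddings pipeline actually reads them. As a sketch, the relevant part of the generated settings.yaml would render roughly as below; the concrete values (threaded, required, 16, 8191, the model name) are illustrative stand-ins for the defs constants, not values confirmed by this diff.

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  # target: required # or all
  # batch_size: 16 # the number of documents to send in a single request
  # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small

With the old placement, uncommenting batch_size would have landed it under embeddings.llm, where the embeddings config does not look for it; at the embeddings level it now takes effect as intended.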
