Feature/sambanovacloud integration dify #11918

Open · wants to merge 13 commits into base: main
Empty file.
@@ -0,0 +1,31 @@
model: Meta-Llama-3.1-405B-Instruct
label:
  en_US: Meta-Llama-3.1-405B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 16000
pricing:
  input: '0.000005'
  output: '0.000010'
  unit: '1'
  currency: USD
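A quick way to sanity-check a model file like the one above is to parse it and assert the invariants Dify's model loader depends on (valid mode, positive context size, and parameter maxima that fit inside the context window). The sketch below uses PyYAML and inlines an abbreviated copy of the definition; the `validate_model_schema` helper is a hypothetical name for illustration, not part of the Dify codebase.

```python
import yaml

# Abbreviated copy of the Meta-Llama-3.1-405B-Instruct definition above.
MODEL_YAML = """
model: Meta-Llama-3.1-405B-Instruct
label:
  en_US: Meta-Llama-3.1-405B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 16000
pricing:
  input: '0.000005'
  output: '0.000010'
  unit: '1'
  currency: USD
"""

def validate_model_schema(text: str) -> dict:
    """Parse a model YAML and check basic consistency of its fields."""
    spec = yaml.safe_load(text)
    assert spec["model_type"] == "llm"
    assert spec["model_properties"]["mode"] == "chat"
    assert spec["model_properties"]["context_size"] > 0
    for rule in spec["parameter_rules"]:
        if "max" in rule:
            # A token limit larger than the context window would be unusable.
            assert rule["max"] <= spec["model_properties"]["context_size"]
    return spec

spec = validate_model_schema(MODEL_YAML)
```

Running this against each of the seven files in the PR would catch copy-paste mismatches between `max` and `context_size` before review.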
@@ -0,0 +1,31 @@
model: Meta-Llama-3.1-70B-Instruct
label:
  en_US: Meta-Llama-3.1-70B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 8192
    min: 1
    max: 128000
pricing:
  input: '0.0000006'
  output: '0.0000012'
  unit: '1'
  currency: USD
@@ -0,0 +1,31 @@
model: Meta-Llama-3.1-8B-Instruct
label:
  en_US: Meta-Llama-3.1-8B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 16000
pricing:
  input: '0.0000001'
  output: '0.0000002'
  unit: '1'
  currency: USD
@@ -0,0 +1,31 @@
model: Meta-Llama-3.2-1B-Instruct
label:
  en_US: Meta-Llama-3.2-1B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 16000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 16000
pricing:
  input: '0.0000004'
  output: '0.0000008'
  unit: '1'
  currency: USD
@@ -0,0 +1,31 @@
model: Meta-Llama-3.2-3B-Instruct
label:
  en_US: Meta-Llama-3.2-3B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 4000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 2000
    min: 1
    max: 4000
pricing:
  input: '0.0000008'
  output: '0.0000016'
  unit: '1'
  currency: USD
@@ -0,0 +1,31 @@
model: Qwen2.5-72B-Instruct
label:
  en_US: Qwen2.5-72B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 8000
pricing:
  input: '0.000002'
  output: '0.000004'
  unit: '1'
  currency: USD
@@ -0,0 +1,31 @@
model: Qwen2.5-Coder-32B-Instruct
label:
  en_US: Qwen2.5-Coder-32B-Instruct
model_type: llm
features:
  - agent-thought
model_properties:
  mode: chat
  context_size: 8000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      en_US: Top k
    type: int
    help:
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    default: 4096
    min: 1
    max: 8000
pricing:
  input: '0.0000015'
  output: '0.000003'
  unit: '1'
  currency: USD
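The `pricing` blocks in these files give per-token prices (`unit: '1'`) in USD, so the cost of a request is simply price times token count for each direction. A minimal sketch of that arithmetic, using `Decimal` to avoid float rounding on the small per-token prices; the prices are copied from the files above, and `estimate_cost` is an illustrative helper, not a Dify API.

```python
from decimal import Decimal

# (input price, output price) per token, USD, from the pricing blocks above.
PRICING = {
    "Meta-Llama-3.1-405B-Instruct": (Decimal("0.000005"), Decimal("0.000010")),
    "Meta-Llama-3.1-8B-Instruct": (Decimal("0.0000001"), Decimal("0.0000002")),
    "Qwen2.5-Coder-32B-Instruct": (Decimal("0.0000015"), Decimal("0.000003")),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> Decimal:
    """USD cost for one request: price-per-token times tokens, each direction."""
    input_price, output_price = PRICING[model]
    return input_price * prompt_tokens + output_price * completion_tokens

# 1000 prompt tokens + 500 completion tokens on the 405B model:
# 1000 * 0.000005 + 500 * 0.000010 = 0.005 + 0.005 = 0.010 USD
cost = estimate_cost("Meta-Llama-3.1-405B-Instruct", 1000, 500)
```

Output tokens are priced at twice the input rate across all seven files, which the example makes easy to verify.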
Empty file.