
Commit
Merge pull request #25 from moe-mizrak/fix/int-fields-to-float
int fields in ChatData fixed as float
moe-mizrak authored Nov 15, 2024
2 parents d8bd272 + 3b91399 commit 1fe7833
Showing 2 changed files with 12 additions and 12 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -89,12 +89,12 @@ The `ChatData` class is used to encapsulate the data required for making chat re
 #### LLM Parameters
 These properties control various aspects of the generated response (more [info](https://openrouter.ai/docs#parameters)):
 - **max_tokens** (int|null): The maximum number of tokens that can be generated in the completion. Default is 1024.
-- **temperature** (int|null): A value between 0 and 2 controlling the randomness of the output.
-- **top_p** (int|null): A value between 0 and 1 for nucleus sampling, an alternative to temperature sampling.
-- **top_k** (int|null): A value between 1 and infinity for top-k sampling (not available for OpenAI models).
-- **frequency_penalty** (int|null): A value between -2 and 2 for penalizing new tokens based on their existing frequency.
-- **presence_penalty** (int|null): A value between -2 and 2 for penalizing new tokens based on whether they appear in the text so far.
-- **repetition_penalty** (int|null): A value between 0 and 2 for penalizing repetitive tokens.
+- **temperature** (float|null): A value between 0 and 2 controlling the randomness of the output.
+- **top_p** (float|null): A value between 0 and 1 for nucleus sampling, an alternative to temperature sampling.
+- **top_k** (float|null): A value between 1 and infinity for top-k sampling (not available for OpenAI models).
+- **frequency_penalty** (float|null): A value between -2 and 2 for penalizing new tokens based on their existing frequency.
+- **presence_penalty** (float|null): A value between -2 and 2 for penalizing new tokens based on whether they appear in the text so far.
+- **repetition_penalty** (float|null): A value between 0 and 2 for penalizing repetitive tokens.
 - **seed** (int|null): A value for deterministic sampling (OpenAI models only, in beta).
 #### Function-calling
 Only natively supported by OpenAI models. For others, we submit a YAML-formatted string with these tools at the end of the prompt.
12 changes: 6 additions & 6 deletions src/DTO/ChatData.php
@@ -108,12 +108,12 @@ private function validateXorFields(array $params): void
      * See LLM Parameters (https://openrouter.ai/docs#parameters) for following:
      */
     public ?int $max_tokens = 1024; // Range: [1, context_length) The maximum number of tokens that can be generated in the completion. Default 1024.
-    public ?int $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-    public ?int $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-    public ?int $top_k; // Range: [1, Infinity) Not available for OpenAI models
-    public ?int $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-    public ?int $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
-    public ?int $repetition_penalty; // Range: (0, 2]
+    public ?float $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
+    public ?float $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
+    public ?float $top_k; // Range: [1, Infinity) Not available for OpenAI models
+    public ?float $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
+    public ?float $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
+    public ?float $repetition_penalty; // Range: (0, 2]
     public ?int $seed; // OpenAI only. This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

     // Function-calling
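With these properties typed as `float`, fractional sampling values now pass the DTO's type declarations instead of being rejected as non-integers. A minimal usage sketch (the named-argument construction, namespace, model slug, and parameter values here are illustrative assumptions, not taken from this diff):

```php
<?php

use MoeMizrak\LaravelOpenrouter\DTO\ChatData;

// Fractional values such as 0.7 or 0.9 would have violated the old
// `?int` declarations; with `?float` they are accepted as-is.
$chatData = new ChatData(
    model: 'mistralai/mistral-7b-instruct:free', // hypothetical model slug
    max_tokens: 100,        // still int: token budget for the completion
    temperature: 0.7,       // float in [0, 2]
    top_p: 0.9,             // float in (0, 1]
    frequency_penalty: 0.5, // float in [-2, 2]
);
```

Note that `max_tokens` and `seed` remain `?int`, since token counts and seeds are inherently integral; only the sampling knobs gain fractional precision.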
