From 15e5f53d115dd687c7b906cd4514d725b2a563da Mon Sep 17 00:00:00 2001
From: daisyfaithauma
Date: Thu, 26 Sep 2024 12:26:41 +0100
Subject: [PATCH] Update
 src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx

Caps

Co-authored-by: Jun Lee
---
 .../ai-gateway/observability/evaluations/set-up-evaluations.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx b/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
index 397b77c61fe7db8..d3ada4849ecc015 100644
--- a/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
+++ b/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
@@ -31,7 +31,7 @@ After creating a dataset, choose the evaluation parameters:
 - Cost: Calculates the average cost of inference requests within the dataset (only for requests with [cost data](/ai-gateway/observability/costs/)).
 - Speed: Calculates the average duration of inference requests within the dataset.
 - Performance:
-  - Human feedback: Measures performance based on human feedback, calculated by the % of thumbs up on the logs, annotated from the Logs tab.
+  - Human feedback: measures performance based on human feedback, calculated by the % of thumbs up on the logs, annotated from the Logs tab.

 :::note[Note]