diff --git a/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx b/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
index 397b77c61fe7db..d3ada4849ecc01 100644
--- a/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
+++ b/src/content/docs/ai-gateway/observability/evaluations/set-up-evaluations.mdx
@@ -31,7 +31,7 @@ After creating a dataset, choose the evaluation parameters:
 
 - Cost: Calculates the average cost of inference requests within the dataset (only for requests with [cost data](/ai-gateway/observability/costs/)).
 - Speed: Calculates the average duration of inference requests within the dataset.
 - Performance:
-  - Human feedback: Measures performance based on human feedback, calculated by the % of thumbs up on the logs, annotated from the Logs tab.
+  - Human feedback: measures performance based on human feedback, calculated by the % of thumbs up on the logs, annotated from the Logs tab.
 
 :::note[Note]
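For context, the human-feedback metric touched by this hunk is a simple percentage: thumbs-up annotations divided by all annotated logs in the dataset. A minimal TypeScript sketch of that arithmetic, assuming a hypothetical `LogEntry` shape and `feedback` field (the diff does not show AI Gateway's actual log schema):

```ts
// Illustrative only: LogEntry and its feedback field are assumptions,
// not AI Gateway's real API.
interface LogEntry {
	feedback?: "thumbs-up" | "thumbs-down"; // annotation set from the Logs tab
}

// Returns the % of thumbs-up among annotated logs, per the docs' description.
function humanFeedbackScore(logs: LogEntry[]): number {
	const annotated = logs.filter((log) => log.feedback !== undefined);
	if (annotated.length === 0) return 0; // no annotations yet
	const thumbsUp = annotated.filter((log) => log.feedback === "thumbs-up");
	return (thumbsUp.length / annotated.length) * 100;
}
```

Note that unannotated logs are excluded from the denominator in this sketch; whether AI Gateway counts them is not specified in the changed line.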