From 82b33deecd47ef0e7c7aef50ef0bbd567763685e Mon Sep 17 00:00:00 2001
From: Giulio Starace
Date: Fri, 15 Mar 2024 15:04:28 +0100
Subject: [PATCH] remove extra linebreak

---
 README.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/README.md b/README.md
index c5326e603e..04cac96347 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,3 @@
-
 # OpenAI Evals
 
 Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models and the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals which represent the common LLMs patterns in your workflow without exposing any of that data publicly.