LLM Alchemist is an open-source tool for evaluating Large Language Models (LLMs) through prompt testing. With a simple configuration you can batch-evaluate prompts, and all configurations are stored locally in your browser's IndexedDB. The project is built with modern web technologies (React, TypeScript, Vite, and the OpenAI JS API) for fast performance and easy deployment.
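The project's actual storage code isn't reproduced here, but as a rough illustration of keeping configurations in the browser's IndexedDB, a record could be persisted like this (the database name, store name, and `ApiConfig` shape are made up for the example):

```ts
// Illustrative sketch only: persisting an API configuration with the browser's IndexedDB API.
interface ApiConfig {
  id: string;
  baseURL: string;
  apiKey: string;
  model: string;
}

function openConfigDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('llm-alchemist-demo', 1);
    request.onupgradeneeded = () => {
      // Create the object store on first open (or on a version bump).
      request.result.createObjectStore('configs', { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function saveConfig(config: ApiConfig): Promise<void> {
  const db = await openConfigDb();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction('configs', 'readwrite');
    tx.objectStore('configs').put(config);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```

Because the data never leaves the browser, no server-side storage is involved.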
- Batch Prompt Evaluation: Efficiently test multiple prompts with just a few clicks.
- Secure Local Storage: All configurations are securely stored in the browser's IndexedDB, ensuring data privacy.
- Modern Tech Stack: Built with Vite, React, and TypeScript, offering a fast and reliable development experience.
- OpenAI JS API Integration: Seamless interaction with the OpenAI API to obtain model responses (see the sketch after this list).
- Vercel Deployment: One-click deployment to Vercel, making it easy to bring your tool online.
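As a sketch of what the OpenAI JS API integration can look like in the browser, a call might resemble the following; the key, BaseURL, model name, and function name are placeholders, not the project's actual code:

```ts
import OpenAI from 'openai';

// Illustrative only: the client runs in the browser, so the key stays on the
// user's machine and is sent only to the configured BaseURL.
const client = new OpenAI({
  apiKey: 'sk-...',                     // placeholder; supplied by the user
  baseURL: 'https://api.openai.com/v1', // placeholder; any OpenAI-compatible endpoint
  dangerouslyAllowBrowser: true,        // required by the openai package for browser use
});

async function evaluatePrompt(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',               // placeholder model name
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0]?.message?.content ?? '';
}
```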
- Clone the repository:

  ```bash
  git clone https://github.com/houjiazong/LLM-Alchemist.git
  cd LLM-Alchemist
  ```
- Install dependencies:

  ```bash
  pnpm install
  ```
- Copy the development proxy configuration file (an illustrative sketch of one possible shape follows these steps):

  ```bash
  cp example.dev.proxy.config.js dev.proxy.config.js
  ```
- Start the development server:

  ```bash
  pnpm dev
  ```
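The authoritative shape of `dev.proxy.config.js` is whatever `example.dev.proxy.config.js` in the repository defines. Purely as an illustration (shown in TypeScript-style syntax for consistency with the other sketches, although the real file is plain JavaScript), a Vite-style `server.proxy` mapping could look like this; the path, target, and rewrite rule are hypothetical:

```ts
// Hypothetical example of a Vite server.proxy mapping for local development.
export default {
  // Forward browser requests under /api to an upstream LLM endpoint.
  '/api': {
    target: 'https://api.openai.com',                   // placeholder upstream; use your own BaseURL
    changeOrigin: true,                                 // rewrite the Host header to match the target
    rewrite: (path: string) => path.replace(/^\/api/, ''),
  },
};
```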
To build for production, run:

```bash
pnpm build
```

The output will be located in the `dist` folder, ready for deployment.
You can deploy this project directly to Vercel using its one-click Deploy button. A few notes on deployment:
- After the project is deployed, you may run into cross-origin resource sharing (CORS) issues when making API requests to other domains. To avoid this, make sure the server behind your BaseURL is configured to allow requests from the domain where the app is deployed.
- You can also route requests through your own proxy service by setting the environment variable `VITE_PROXY_URL`, for example `VITE_PROXY_URL=https://your-proxy-url.com?target=`. At runtime, the original request URL is appended after the `target` parameter.
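For example, with `VITE_PROXY_URL=https://your-proxy-url.com?target=`, a request originally aimed at `https://api.openai.com/v1/chat/completions` would be sent to `https://your-proxy-url.com?target=https://api.openai.com/v1/chat/completions`. A proxy that understands this convention is not part of the project; as a rough illustration (the handler name and deployment runtime are assumptions), a minimal fetch-based handler could look like this:

```ts
// Hypothetical proxy handler: reads the original request URL from the `target`
// query parameter, forwards the call, and relaxes CORS for the deployed front end.
async function proxyRequest(incoming: Request): Promise<Response> {
  const target = new URL(incoming.url).searchParams.get('target');
  if (!target) {
    return new Response('Missing target parameter', { status: 400 });
  }

  // Buffer the body so the forwarded request works across runtimes.
  const body =
    incoming.method === 'GET' || incoming.method === 'HEAD'
      ? undefined
      : await incoming.arrayBuffer();

  const upstream = await fetch(target, {
    method: incoming.method,
    headers: incoming.headers,
    body,
  });

  // Relay the upstream response and allow cross-origin reads.
  const headers = new Headers(upstream.headers);
  headers.set('Access-Control-Allow-Origin', '*');
  return new Response(upstream.body, { status: upstream.status, headers });
}
```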
This project is licensed under the MIT License. See the LICENSE file for details.