
Text analytics for LLM apps. Cluster messages to detect use cases, outliers, and power users. Detect intents and run evals with an LLM (OpenAI, MistralAI, Ollama, etc.)


The LLM app backoffice for busy builders

Badges: phospho npm package · phospho Python package on PyPI · Y Combinator W24

🧪 phospho is the backoffice for your LLM app.

Detect issues and extract insights from your users' text messages.

Gather feedback and measure success. Create the best conversational experience for your users. Analyze logs effortlessly and finally make sense of all your data.

Learn more in the documentation.

phospho platform

Demo 🧪

phospho_demo_socials.mp4

Key Features 🚀

  • Clustering: Group similar conversations and identify patterns
  • A/B Testing: Compare different versions of your LLM app
  • Data Labeling: Efficiently categorize and annotate your data
  • User Analytics: Gain insights into user behavior and preferences
  • Integration: Sync data with LangSmith/Langfuse, Argilla, PowerBI
  • Data Visualization: Powerful tools to understand your data
  • Multi-user Experience: Collaborate with your team seamlessly

Quick start: How to set up phospho SaaS

Quickly import, analyze, and label data on the phospho platform.

  1. Create an account
  2. Install our SDK:
  • Python: pip install phospho
  • JavaScript: npm i phospho
  3. Set environment variables (you can find these on your phospho account):
  • PHOSPHO_API_KEY
  • PHOSPHO_PROJECT_ID
  4. Initialize phospho: phospho.init()
  5. Start logging to phospho with phospho.log(input="question", output="answer")

Follow this guide to get started.

Note:

  • You can also import data through a CSV or Excel file directly on the platform
  • If you use the Python version, you might want to disable auto-logging with phospho.init(auto_log=False)

Deploy with docker compose

Create a .env.docker using this guide. Then, run:

docker compose up

Go to localhost:3000 to see the platform frontend. The backend documentation is available at localhost:8000/v3/docs.

Development guide

Contributing

We welcome contributions from the community. Please refer to our contributing guidelines for more information.

Running locally

This project uses Python3.11+ and NextJS.

To work on it locally,

  1. Make sure you have properly added .env files in ai-hub, extractor, backend, platform.
  2. Install the Temporal CLI: brew install temporal
  3. Create a Python virtual environment:
python -m venv .venv
source .venv/bin/activate
  4. Then, the quickest way to get started is to use the makefile to install dependencies and launch everything:
# Install dependencies
make install
# Launch everything
make up
  5. Go to localhost:3000 to see the platform frontend. The backend documentation is available at localhost:8000/api/docs, localhost:8000/v2/docs and localhost:8000/v3/docs.
  6. To stop everything, run:
make stop


License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.

About us

We are a team of passionate AI builders; feel free to reach out here. With love and baguettes from Paris, the phospho team 🥖💚

Star History

Star History Chart