Langfuse is open-source observability and analytics for LLM apps: trace executions, debug issues caused by model interactions, and analyze token usage, latency, and quality. It integrates with Langchain and the OpenAI SDK, and is optimized for production.
Langfuse is an open-source observability and analytics solution tailored for production applications built on large language models (LLMs). It provides developer-friendly tools to monitor, debug, and optimize complex systems orchestrating multiple LLMs, agents, and external services.
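To make the tracing model concrete, here is a minimal sketch using the Langfuse Python SDK. The keys are placeholders, the token counts are made up, and exact method names and signatures may differ between SDK versions:

```python
from langfuse import Langfuse

# Initialize the client with project keys from the Langfuse UI
# (placeholder values shown here).
langfuse = Langfuse(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com",
)

# A trace groups everything that happened for one request or session.
trace = langfuse.trace(name="support-answer", user_id="user-123")

# A generation records one LLM call inside the trace: model, prompt,
# completion, and token usage for cost attribution.
trace.generation(
    name="draft-answer",
    model="gpt-3.5-turbo",
    input=[{"role": "user", "content": "How do I reset my password?"}],
    output="You can reset it from the account settings page.",
    usage={"input": 12, "output": 11},
)

# Events are batched in the background; flush before the process exits.
langfuse.flush()
```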
🔎 End-to-end traceability connecting prompts to responses
⚙️ Quantify model quality via custom scores and metrics (see the scoring sketch after this list)
💰 Pinpoint cost drivers across executions with detailed attribution
📈 Prebuilt analytics dashboards for token usage, latency, throughput, and more
🤝 Integrates natively with Langchain, OpenAI SDK, LiteLLM and more
🔧 Friendly to local development, with Docker images and a one-click Railway deploy
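Continuing the sketch above, custom scores attach quality signals (user feedback, automated evals) to a trace, which the dashboards can then aggregate. Again, the score name and signature are illustrative:

```python
# "helpfulness" is an arbitrary example score name; values can be
# booleans encoded as 0/1 or continuous numbers, depending on your eval.
langfuse.score(
    trace_id=trace.id,
    name="helpfulness",
    value=1,
    comment="User marked the answer as helpful",
)
```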
Whether you want to identify bad model versions in production, reduce inference costs, or simply understand what drives your LLM application's reliability and performance, Langfuse provides the missing visibility.
Its batteries-included capabilities like tracing, scoring, and analytics help developers build and operate LLM-based systems with confidence. The open source architecture ensures full customizability for diverse use cases.
- 📈 Langfuse provides end-to-end observability into large language model (LLM) based applications, enabling engineers to debug issues and understand how changes impact metrics like quality, cost, and latency. Monitoring production systems matters.
- 📊 The ability to trace executions connecting prompts to responses, add custom scoring, and segment data by numerous parameters delivers granular analytics on LLM apps. Analytics drives optimization.
- 🔌 Integrations with popular frameworks like Langchain, the OpenAI SDK, and LiteLLM, combined with Langfuse's batteries-included capabilities, simplify instrumenting model-serving systems (see the sketch after this list). Easy instrumentation means more monitoring.
- 🤝 Langfuse's open-source architecture means engineers can customize tracing, visualizations, and analytics to their specific needs. Open source empowers innovation.
- 🚀 Deployment options encompassing Docker, Railway, and a fully managed cloud service cater to teams at different scales and levels of production readiness. Flexible deployment accelerates adoption.
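As an example of how lightweight the integrations are, the OpenAI SDK integration is designed as a drop-in import replacement. This sketch assumes credentials are supplied via environment variables, and details may vary by SDK version:

```python
# Swapping the import is the only change: calls keep the standard
# OpenAI SDK shape, and each request is traced automatically.
# Langfuse and OpenAI credentials are read from environment variables
# (e.g. LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, OPENAI_API_KEY).
from langfuse.openai import openai

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize Langfuse in one line."}],
)
print(completion.choices[0].message.content)
```

The Langchain integration follows the same spirit: a Langfuse callback handler is passed to the chain, and nested runs show up as a single trace.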
In summary, by providing an integrated solution for end-to-end observability and actionable analytics, Langfuse enables engineers to operate LLM apps reliably at scale and to improve them continuously through data-driven development.
- 📅 (13/11/23) - (02/12/23)
- 👷🏽‍♀️ Builders: Marc Klingen, Max Deichmann, Clemens Rawert
- 👩🏽‍💼 Builders on LinkedIn: https://www.linkedin.com/in/marcklingen/, https://www.linkedin.com/in/maxdeichmann/, https://www.linkedin.com/in/rawert/
- 👩🏽‍🏭 Builders on X: @marcklingen, @maxdeichmann, @rawert
- 👩🏽‍💻 Contributors: 17
- 💫 GitHub Stars: 1.3k - 1.5k
- 🍴 Forks: 92 - 117
- 👁️ Watch: 12 - 14
- 🪪 License: MIT (Expat)
- 🔗 Links: Below 👇🏽
- GitHub Repository: https://github.com/langfuse/langfuse
- Official Website: https://langfuse.com/
- X Page: https://twitter.com/langfuse
- LinkedIn Page: https://www.linkedin.com/company/langfuse/
- Profile in The AI Engineer: https://github.com/theaiengineer/awesome-opensource-ai-engineering/blob/main/libraries/langfuse.md
🧙🏽 Follow The AI Engineer for more about Langfuse and daily insights tailored to AI engineers. Subscribe to our newsletter. We are the AI community for hackers!