Commit

Merge pull request #105 from SylphAI-Inc/main
[Beta Release 0.0.0b1]
Sylph-AI authored Jul 11, 2024
2 parents 1049549 + b595066 commit 682e66a
Showing 36 changed files with 1,109 additions and 455 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python-test.yml
@@ -11,7 +11,7 @@ jobs:

strategy:
matrix:
python-version: ['3.10', '3.11', '3.12']
python-version: ['3.9', '3.10', '3.11', '3.12']

steps:
- uses: actions/checkout@v3 # Updated to the latest version
6 changes: 3 additions & 3 deletions README.md
@@ -10,7 +10,7 @@
[![](https://dcbadge.vercel.app/api/server/zt2mTPcu?compact=true&style=flat)](https://discord.gg/zt2mTPcu)


### ⚡ The PyTorch Library for Large Language Model Applications ⚡
### ⚡ The Lightning Library for Large Language Model Applications ⚡

*LightRAG* helps developers with both building and optimizing *Retriever-Agent-Generator* pipelines.
It is *light*, *modular*, and *robust*, with a 100% readable codebase.
@@ -20,7 +20,7 @@ It is *light*, *modular*, and *robust*, with a 100% readable codebase.

# Why LightRAG?

LLMs are like water; they can almost do anything, from GenAI applications such as chatbots, translation, summarization, code generation, and autonomous agents to classical NLP tasks like text classification and named entity recognition. They interact with the world beyond the model’s internal knowledge via retrievers, memory, and tools (function calls). Each use case is unique in its data, business logic, and user experience.
LLMs are like water; they can be shaped into anything, from GenAI applications such as chatbots, translation, summarization, code generation, and autonomous agents to classical NLP tasks like text classification and named entity recognition. They interact with the world beyond the model’s internal knowledge via retrievers, memory, and tools (function calls). Each use case is unique in its data, business logic, and user experience.

Because of this, no library can provide out-of-the-box solutions. Users must build toward their own use case. This requires the library to be modular, robust, and have a clean, readable codebase. The only code you should put into production is code you either 100% trust or are 100% clear about how to customize and iterate.
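
As a sketch of what building toward your own use case looks like with this API, here is a minimal pipeline mirroring the `developer_notes/react_note.py` example added later in this commit (the model name, kwargs, and prompt are illustrative only, and an API key is assumed to be available to `setup_env()`):

```python
from lightrag.core import Generator, ModelClientType
from lightrag.utils import setup_env

setup_env()  # load API keys from the environment, as in react_note.py

# A minimal Generator pipeline; swap the client and model kwargs for your own use case.
generator = Generator(
    model_client=ModelClientType.OPENAI(),  # or ModelClientType.GROQ(), as in react_note.py
    model_kwargs={"model": "gpt-3.5-turbo", "temperature": 0.0},
)

response = generator.call(prompt_kwargs={"input_str": "Summarize LightRAG in one sentence."})
print(response)
```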

@@ -240,7 +240,7 @@ LightRAG full documentation available at [lightrag.sylph.ai](https://lightrag.sy
```bibtex
@software{Yin2024LightRAG,
author = {Li Yin},
title = {{LightRAG: The PyTorch Library for Large Language Model (LLM) Applications}},
title = {{LightRAG: The Lightning Library for Large Language Model (LLM) Applications}},
month = {7},
year = {2024},
doi = {10.5281/zenodo.12639531},
74 changes: 74 additions & 0 deletions developer_notes/react_note.py
@@ -0,0 +1,74 @@
from lightrag.components.agent import ReActAgent
from lightrag.core import Generator, ModelClientType, ModelClient
from lightrag.utils import setup_env

setup_env()


# Define tools
def multiply(a: int, b: int) -> int:
"""
Multiply two numbers.
"""
return a * b


def add(a: int, b: int) -> int:
"""
Add two numbers.
"""
return a + b


def divide(a: float, b: float) -> float:
"""
Divide two numbers.
"""
return float(a) / b


llama3_model_kwargs = {
"model": "llama3-70b-8192", # llama3 70b works better than 8b here.
"temperature": 0.0,
}
gpt_model_kwargs = {
"model": "gpt-3.5-turbo",
"temperature": 0.0,
}


def test_react_agent(model_client: ModelClient, model_kwargs: dict):
tools = [multiply, add, divide]
queries = [
"What is the capital of France? and what is 465 times 321 then add 95297 and then divide by 13.2?",
"Give me 5 words rhyming with cool, and make a 4-sentence poem using them",
]
# define a generator without tools for comparison

generator = Generator(
model_client=model_client,
model_kwargs=model_kwargs,
)

react = ReActAgent(
max_steps=6,
add_llm_as_fallback=True,
tools=tools,
model_client=model_client,
model_kwargs=model_kwargs,
)
# print(react)

for query in queries:
print(f"Query: {query}")
agent_response = react.call(query)
llm_response = generator.call(prompt_kwargs={"input_str": query})
print(f"Agent response: {agent_response}")
print(f"LLM response: {llm_response}")
print("")


if __name__ == "__main__":
test_react_agent(ModelClientType.GROQ(), llama3_model_kwargs)
# test_react_agent(ModelClientType.OPENAI(), gpt_model_kwargs)
print("Done")
3 changes: 2 additions & 1 deletion docs/Makefile
@@ -21,7 +21,8 @@ help:

apidoc:
@sphinx-apidoc -o $(APIDOCOUTDIR)/core ../lightrag/lightrag/core --separate --force
@sphinx-apidoc -o $(APIDOCOUTDIR)/components ../lightrag/lightrag/components --separate --force --templatedir=$(SOURCEDIR)/_templates
@sphinx-apidoc -o $(APIDOCOUTDIR)/components ../lightrag/lightrag/components --separate --force
#--templatedir=$(SOURCEDIR)/_templates
@sphinx-apidoc -o $(APIDOCOUTDIR)/eval ../lightrag/lightrag/eval --separate --force
@sphinx-apidoc -o $(APIDOCOUTDIR)/optim ../lightrag/lightrag/optim --separate --force
@sphinx-apidoc -o $(APIDOCOUTDIR)/utils ../lightrag/lightrag/utils --separate --force
Binary file modified docs/source/_static/images/LightRAG_dataflow.png
Binary file added docs/source/_static/images/query_1.png
Binary file added docs/source/_static/images/query_2.png
6 changes: 0 additions & 6 deletions docs/source/apis/components/index.rst
@@ -31,7 +31,6 @@ Retriever
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:

components.retriever.bm25_retriever
components.retriever.faiss_retriever
@@ -46,23 +45,20 @@ Output Parsers
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:

components.output_parsers.outputs

Agent
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:

components.agent.react

Data Process
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:


components.data_process.text_splitter
@@ -73,15 +69,13 @@ Memory
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:

components.memory.memory

Reasoning
~~~~~~~~~~~~~~~~~~~~

.. autosummary::
:nosignatures:

components.reasoning.chain_of_thought

3 changes: 2 additions & 1 deletion docs/source/apis/index.rst
@@ -104,10 +104,11 @@ Utils
.. autosummary::

utils.logger
utils.setup_env
utils.lazy_import
utils.serialization
utils.config
utils.registry
utils.setup_env


.. toctree::