Commit

Update articles + Css
jphetphoumy committed May 6, 2024
1 parent 64b6890 commit a6275be
Showing 5 changed files with 71 additions and 8 deletions.
3 changes: 3 additions & 0 deletions assets/css/main.css
@@ -68,6 +68,7 @@ article {
margin: 20px 0;
padding: 10px;
border-bottom: 1px solid #fff;
padding-bottom: 100px;
}

article h1::before {
@@ -102,6 +103,8 @@ h2 {
footer {
text-align: center;
box-sizing: border-box;
position: fixed;
bottom: 0;
padding: 10px 20px;
background-color: #000;
color: #fff;
18 changes: 18 additions & 0 deletions content/about.md
@@ -0,0 +1,18 @@
# About me

I am a fullstack developer, passionate about hacking, DevOps, and LLMs.

Here are the languages and tools I use:

- Python
- Javascript
- PHP
- Vue.js
- Reaper
- Langchain
- Ollama
- Lua

I'm currently learning NixOS and getting back into exploitation (buffer overflows, etc.).


@@ -13,7 +13,7 @@ ___

## What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation, or RAG, is a technique that supercharges large language models (LLMs) like the ones we use in AI chatbots. Basically, it makes AI responses smarter by pulling in fresh, relevant info from external sources right when you need it.

### Why RAG Rocks for LLMs!

@@ -25,7 +25,9 @@ Here's why you might want to consider using RAG if you're messing around with AI

In short, RAG is like giving your AI a mini-upgrade with each query, ensuring it's always on its A-game when answering questions or helping out users. It's a game-changer for making AI interactions a lot more reliable and useful.
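To make the idea concrete, here is a minimal, self-contained sketch of the RAG pattern in plain Python (no LLM involved — the function names and the naive word-overlap scoring are my own illustration, not from this post): retrieve the most relevant document, then stuff it into the prompt before the model ever sees the question.

```python
def retrieve(query, documents, top_k=1):
    """Score each document by word overlap with the query; return the best matches."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query, documents):
    """Stuff the retrieved context into the prompt before it reaches the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "Ollama runs large language models locally.",
    "Paris is the capital of France.",
]
prompt = build_prompt("What can Ollama do?", docs)
print(prompt)
```

A real RAG system would replace the word-overlap scoring with embedding similarity over a vector store, but the shape is the same: retrieve, augment, generate.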

## Installation of the tools: Ollama and Langchain

Let's play a bit with Python and local LLMs. We will create a simple Python application that uses **Ollama** and **Langchain** to build your own custom chatbot.

### Creating the python project

@@ -115,10 +117,45 @@ load_dotenv()
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL")
```
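For reference, the `.env` file this reads from could look like the following (a sketch — `11434` is Ollama's default port; adjust the host to your setup):

```shell
# .env — read by load_dotenv()
OLLAMA_BASE_URL=http://localhost:11434
```
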

Now let's create the LLM and the prompt.

With the code below, we can have a simple chatbot with a custom System prompt.

```python
llm = Ollama(model="llama3", base_url=OLLAMA_BASE_URL)

# Create the prompt template
prompt_template = """<|system|>You are a helpful assistant. Your goal is to answer the user as best as you can.<|end|>
<|user|>What is the capital of France?<|end|><|assistant|>
The capital of France is Paris.<|end|>
<|user|>{question}<|end|><|assistant|>
"""

prompt = PromptTemplate.from_template(prompt_template)
chain = prompt | llm

while True:
    question = input("Ask me anything: ")
    chunks = []
    for chunk in chain.stream({"question": question}):
        print(chunk, end="")
        chunks.append(chunk)
```
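The `prompt | llm` line is LangChain's pipe-style composition: the prompt's output becomes the LLM's input. As a rough mental model only (plain Python, not LangChain's real classes), the pattern can be sketched like this:

```python
class Runnable:
    """Tiny stand-in for LangChain's composition pattern (illustrative only)."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b returns a new Runnable that runs a, then feeds the result to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# A "prompt" step that formats the question, and a fake "llm" step
prompt_step = Runnable(lambda d: f"Question: {d['question']}")
fake_llm = Runnable(lambda text: text.upper())

chain = prompt_step | fake_llm
print(chain.invoke({"question": "hi"}))  # → QUESTION: HI
```

The real classes do much more (streaming, batching, async), but composition via `|` is the core idea.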

The full code would be the following:

```python
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate
import os
from dotenv import load_dotenv

load_dotenv()

# Define the base url for ollama
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL")

llm = Ollama(model="phi3", base_url=OLLAMA_BASE_URL)

# Create the prompt template
prompt_template = """<|system|>You are a helpful assistant. Your goal is to answer the user as best as you can.<|end|>
<|user|>What is the capital of France?<|end|><|assistant|>
The capital of France is Paris.<|end|>
<|user|>{question}<|end|><|assistant|>
"""
prompt = PromptTemplate.from_template(prompt_template)
chain = prompt | llm

while True:
    question = input("Ask me anything: ")
    chunks = []
    for chunk in chain.stream({"question": question}):
        print(chunk, end="")
        chunks.append(chunk)
```
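To try it out, Ollama needs to be running and the model pulled first. A sketch, assuming the script is saved as `main.py` (the filename is mine, not from the commit; on many installs the Ollama server already runs as a background service):

```shell
ollama pull phi3    # download the model used above
ollama serve &      # start the server if it isn't already running
python main.py
```
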
4 changes: 3 additions & 1 deletion content/index.md
@@ -13,6 +13,8 @@ If you like the following topics:
- **Devops**
- **LLM**

This place is for you!

You can read my latest post here: [articles](/articles)

Happy hacking ;)
1 change: 1 addition & 0 deletions layouts/default.vue
@@ -5,6 +5,7 @@ import Article from '~/components/Article.vue'
const links = [
{ name: 'Home', href: '/' },
{ name: 'Articles', href: '/articles' },
{ name: 'About', href: '/about' },
{ name: 'Contact', href: '/contact' },
];
</script>