First article
jphetphoumy committed May 6, 2024
1 parent 69d2af2 commit 59ff7df
Showing 11 changed files with 234 additions and 58 deletions.
3 changes: 3 additions & 0 deletions app.vue
@@ -6,6 +6,9 @@
<Meta http-equiv="expires" content="0" />
<Meta http-equiv="pragma" content="no-cache" />
</Head>
<pre>
{{ data }}
</pre>
<NuxtLayout>
<NuxtPage />
</NuxtLayout>
39 changes: 38 additions & 1 deletion assets/css/main.css
@@ -36,10 +36,31 @@ nav ul li {
margin-right: 10px;
}

/* Nav for mobile Hamburger */
@media (max-width: 850px) {
nav ul {
display: flex;
flex-direction: column;
align-items: center;
}

nav ul li {
display: block;
margin: 5px 0;
}
}

/* Main Content Area */
main {
margin: 20px auto;
max-width: 800px;
max-width: 850px;
}

/* Main for mobile */
@media (max-width: 850px) {
main {
margin: 1.5em;
}
}

/* Article and Text Styling */
@@ -49,6 +70,14 @@ article {
border-bottom: 1px solid #fff;
}

article h1::before {
content: '.| ';
}

article h1::after {
content: ' |.';
}

article h3::before {
content: '> ';
}
@@ -57,6 +86,10 @@ article h2::before {
content: '// ';
}

article h4::before {
content: '>> ';
}

h1 {
font-size: 1.5em;
}
@@ -91,3 +124,7 @@ input[type="text"] {
border-bottom: 2px solid #fff;
animation: blink 1s step-end infinite;
}

:not(pre) > code {
color: #0BF40F;
}
6 changes: 5 additions & 1 deletion components/AppHeader.vue
@@ -14,7 +14,11 @@ const props = defineProps<{
const route = useRoute();
const isActive = (href: string) => {
return route.path === href;
console.log(route);
if (href === '/') {
return route.path === '/';
}
return route.path.includes(href);
};
</script>
<template>
15 changes: 15 additions & 0 deletions components/content/ArticleHeader.vue
@@ -0,0 +1,15 @@
<script setup lang="ts">
const props = defineProps<{
url: string,
version: string,
}>();
const route = useRoute();
const { data } = await useAsyncData('page-data', () => queryContent(route.path).findOne());
</script>
<template>
<p>date: {{ data.date }}</p>
<a v-if="url" :href="url"> {{ version === "french" ? "Lire l'article en anglais" : "Read this article in french" }}</a>
</template>

30 changes: 29 additions & 1 deletion components/content/ProseCode.vue
@@ -34,10 +34,37 @@ const { copy, copied, text } = useClipboard()
position: relative;
margin-top: 1rem;
margin-bottom: 1rem;
overflow: hidden;
overflow: auto;
/* Style scrollbar */
scrollbar-width: thin;
border-radius: 0.5rem;
}
.container::-webkit-scrollbar {
width: 8px; /* width of the entire scrollbar */
}
.container::-webkit-scrollbar-track {
background: #2e2e2e; /* color of the track (part the scrollbar can move within) */
border-radius: 10px; /* roundness of the track */
}
.container::-webkit-scrollbar-thumb {
background-color: #555; /* color of the scrollbar itself */
border-radius: 10px; /* roundness of the scrollbar */
border: 2px solid #1e1e1e; /* Creates a border around the scrollbar */
}
.container::-webkit-scrollbar-thumb:hover {
background: #666; /* color of the scrollbar when hovered */
}
.container {
scrollbar-width: thin; /* "auto" or "thin" */
scrollbar-color: #555 #2e2e2e; /* thumb and track color */
}
.filename-text {
position: absolute;
top: .5em;
@@ -65,6 +92,7 @@ const { copy, copied, text } = useClipboard()
:deep(pre) {
padding-left: 1em;
padding-right: 1em;
}
</style>

9 changes: 0 additions & 9 deletions content/articles/LLM/index.md

This file was deleted.

@@ -0,0 +1,139 @@
---
draft: true
date: 06-05-2024
---

# Creating a simple chatbot with Langchain and Ollama
___

::ArticleHeader

::
---

## What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation, or RAG, is this super cool tech that supercharges large language models (LLMs) like the ones we use in AI chatbots and search engines. Basically, it makes AI responses smarter by pulling in fresh, relevant info from external sources right when you need it.

### Why RAG Rocks for LLMs!

Here's why you might want to consider using RAG if you're messing around with AI models:

1. **Keeps Things Fresh:** RAG enables LLMs to pull in the latest data beyond their training datasets, diminishing the likelihood of delivering outdated or inaccurate responses.
2. **Keeps it Real:** Instead of making up answers (yeah, AI does that sometimes), RAG helps keep the AI’s responses grounded in actual, factual data.
3. **Easy to Implement:** You don't need to be a coding wizard to get RAG rolling, and it doesn't cost an arm and a leg to update your AI model.

In short, RAG is like giving your AI a mini-upgrade with each query, ensuring it's always on its A-game when answering questions or helping out users. It's a game-changer for making AI interactions a lot more reliable and useful.
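
If you want to picture the flow, here is a tiny, self-contained sketch of the retrieve-then-generate idea. The three documents and the keyword-overlap retriever are made up for illustration; a real setup would use embeddings and a vector store instead.

```python
# Toy RAG flow: retrieve relevant context, then build an augmented prompt for the LLM.
# The corpus and the keyword-overlap scoring are placeholders for illustration only.
documents = [
    "Ollama runs large language models locally and exposes an HTTP API.",
    "Langchain provides abstractions such as prompts, chains and retrievers.",
    "Gradio makes it easy to build a small web UI around a Python function.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(documents, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend the retrieved context to the user question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does Ollama expose locally?"))
```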

## Installation of the tools: Ollama, Langchain and Gradio

### Creating the Python project

First, we will use `poetry` to create our Python project.

Use the following command to initialize your project:

```bash
poetry new ai-chatbot-rag
```

Next, let's understand the tools we will use for our application. Let's start with Ollama!

### Ollama

Ollama is the easiest way to run LLMs locally. It's as simple as running `docker pull`. To get started with Ollama, download it from the [official website](https://ollama.com/download).

To use a local LLM with Ollama, simply run `ollama pull <model-name>`. If we want to use the new **Llama 3** model, we can run the following command:

```bash
ollama pull llama3
```

When the model download is complete, you can use the model with the following command:

```bash
ollama run llama3
```

You can also serve an API endpoint, and this is what we will use for our chatbot!

To do this, run the following command:

```bash
ollama serve
```
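
Before plugging anything into Langchain, you can check that the endpoint actually answers. This is a minimal sketch that assumes Ollama's default port (11434), its `/api/generate` route, and a model you have already pulled (here `llama3`):

```python
# Quick sanity check of the Ollama HTTP endpoint (default port assumed).
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",  # must match a model you have pulled
    "prompt": "Say hello in one short sentence.",
    "stream": False,    # ask for a single JSON object instead of a stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```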

### Langchain

To supercharge our AI app and make things easier, we will use the **Langchain** framework.

This framework has some nice abstractions that will make it easier to create our RAG application.

#### Getting Started with Langchain

Here's a quick guide to getting Langchain up and running.

1. **Installation:** Since we already have our project initialized, let's add the Langchain library! In your Python project, use the following command:

```bash
poetry add langchain
```

This command adds the Langchain library to your project. Depending on your Langchain version, the `langchain_community` integrations used below may ship as a separate package; if the import fails, adding `langchain-community` with poetry should fix it.

#### Simple Chatbot using Langchain & Ollama

Let's create a simple chatbot.

First, we import what we need from Langchain:

```python
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate
```

We will use an environment variable to make our application more flexible. Let's install python-dotenv:

```bash
poetry add python-dotenv
```

Create a `.env` file and add your local IP:

```bash [.env]
OLLAMA_BASE_URL="http://<local-ip>:11434"
```

We can read the environment variable with the following code:

```python
import os
from dotenv import load_dotenv
load_dotenv()

# Define the base url for ollama
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL")
```

Now let's create the LLM.

```python

llm = Ollama(model="phi3", base_url=OLLAMA_BASE_URL)

# Create the prompt template (this one follows the phi3 chat format,
# so pull that model first with `ollama pull phi3`)
prompt_template = """<|system|>You are a helpful assistant. Your goal is to answer the user as best as you can.<|end|>
<|user|>What is the capital of France?<|end|><|assistant|>
The capital of France is Paris.<|end|>
<|user|>{question}<|end|><|assistant|>
"""

prompt = PromptTemplate.from_template(prompt_template)
chain = prompt | llm

# Stream the answer chunk by chunk
chunks = []
for chunk in chain.stream({"question": "What is the color of the sky?"}):
    print(chunk, end="")
    chunks.append(chunk)
```
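
To make this feel more like a chatbot, you can wrap the chain in a small interactive loop. This is just a sketch reusing the `chain` defined above, and it keeps no conversation history:

```python
# Minimal interactive loop around the chain defined above (no chat history kept).
while True:
    question = input("You: ").strip()
    if question.lower() in {"exit", "quit"}:
        break
    print("Assistant: ", end="", flush=True)
    for chunk in chain.stream({"question": question}):
        print(chunk, end="", flush=True)
    print()
```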
@@ -1,3 +1,6 @@
---
draft: true
---
# Mettre en place une application RAG avec Ollama, Langchain et Gradio
___

2 changes: 1 addition & 1 deletion content/articles/index.md
@@ -6,4 +6,4 @@ title: "Latest articles"

## LLM

[Setting up a simple RAG application with Ollama, Langchain and Gradio](/articles/setting-up-a-simple-rag-application-with-ollama-langchain-and-gradio)
[Creating a simple chatbot with langchain and ollama](/articles/creating-a-simple-chatbot-with-langchain-and-ollama)

This file was deleted.

2 changes: 1 addition & 1 deletion layouts/default.vue
@@ -10,7 +10,7 @@ const links = [
];
</script>
<template>
<AppHeader title="Hackandpwned" :links="links" />
<AppHeader title=".: Hackandpwned :." :links="links" />
<main>
<Article />
</main>
