Commit
✨ Introduce new feature(s): new benchmarks page
k33g committed Oct 23, 2024
1 parent a60be7c commit 0accab1
Showing 4 changed files with 46 additions and 43 deletions.
72 changes: 37 additions & 35 deletions README.md
@@ -1,51 +1,53 @@
# Awesome SMLs
# Awesome SLMs

<div align="center">
<img width="50%" height="50%" src="female-borg.jpg" alt="Description of the image">
</div>



This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with Ollama
This is the list of the SLMs I use on my Raspberry Pi5 (8GB RAM) with Ollama

| Name | Size | Tag | Remark | Kind | URL | Good on Pi5 | Usable on Pi5 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CodeGemma 2b | 1.6GB | 2B | Fill-in-the-middle code completion | code | [Link](https://ollama.com/library/codegemma:2b) |||
| Gemma 2b | 1.7GB | 2B | | | [Link](https://ollama.com/library/gemma:2b) |||
| Gemma2 2b | 1.6GB | 2B | | | [Link](https://ollama.com/library/gemma2:2b) |||
| All-Minilm 22m | 46MB | 22M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:22m) |||
| All-Minilm 33m | 67MB | 33M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:33m) |||
| DeepSeek Coder 1.3b | 776MB | 1.3B | Trained on both 87% code and 13% natural language | code | [Link](https://ollama.com/library/deepseek-coder) |||
| TinyLlama 1.1b | 638MB | 1.1B | | | [Link](https://ollama.com/library/tinyllama) |||
| TinyDolphin 1.1b | 637MB | 1.1B | | | [Link](https://ollama.com/library/tinydolphin) |||
| Phi3 Mini | 2.4GB | 3B | | | [Link](https://ollama.com/library/phi3:mini) |||
| Phi3.5 | 2.2GB | 3B | | | [Link](https://ollama.com/library/phi3.5) |||
| Granite-code 3b | 2.0GB | 3B | | code | [Link](https://ollama.com/library/granite-code) |||
| Qwen2.5 0.5b | 398MB | 0.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:0.5b) |||
| Qwen2.5 1.5b | 986MB | 1.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:1.5b) |||
| Qwen2.5 3b | 1.9GB | 3B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:3b) |||
| Qwen2.5 Coder 1.5b | 986MB | 1.5B | | code, tools | [Link](https://ollama.com/library/qwen2.5-coder:1.5b) |||
| Qwen2 0.5b | 352MB | 0.5B | | | [Link](https://ollama.com/library/qwen2:0.5b) |||
| Qwen2 1.5b | 934MB | 1.5B | | | [Link](https://ollama.com/library/qwen2:1.5b) |||
| Qwen 0.5b | 395MB | 0.5B | | | [Link](https://ollama.com/library/qwen:0.5b) |||
| Qwen2 Math 1.5b | 935MB | 1.5B | Specialized math language model | math | [Link](https://ollama.com/library/qwen2-math:1.5b) |||
| StarCoder 1b | 726MB | 1B | Code generation model | code | [Link](https://ollama.com/library/starcoder:1b) |||
| StarCoder2 3b | 1.7GB | 3B | | code | [Link](https://ollama.com/library/starcoder2:3b) |||
| Stable LM 2 1.6b | 983MB | 1.6B | LLM trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. | | [Link](https://ollama.com/library/stablelm2) |||
| Stable Code 3b | 1.6GB | 3B | Coding model | code | [Link](https://ollama.com/library/stable-code:3b) |||
| Replete-Coder Qwen2 1.5b | 1.9GB | 1.5B | Coding capabilities + non-coding data, fully cleaned and uncensored (math + tools? to be tested) | code | [Link](https://ollama.com/rouge/replete-coder-qwen2-1.5b:Q8) | ||
| Dolphin-Phi 2.7b | 1.6GB | 2.7B | uncensored | | [Link](https://ollama.com/library/dolphin-phi:2.7b) |||
| Dolphin gemma2 2b | 1.6GB | 2B | | | [Link](https://ollama.com/CognitiveComputations/dolphin-gemma2:2b) |||
| codegemma:2b | 1.6GB | 2B | Fill-in-the-middle code completion | code | [Link](https://ollama.com/library/codegemma:2b) |||
| gemma:2b | 1.7GB | 2B | | chat | [Link](https://ollama.com/library/gemma:2b) |||
| gemma2:2b | 1.6GB | 2B | | chat | [Link](https://ollama.com/library/gemma2:2b) |||
| all-minilm:22m | 46MB | 22M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:22m) |||
| all-minilm:33m | 67MB | 33M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:33m) |||
| deepseek-coder:1.3b | 776MB | 1.3B | Trained on both 87% code and 13% natural language | code | [Link](https://ollama.com/library/deepseek-coder:1.3b) |||
| tinyllama | 638MB | 1.1B | | chat | [Link](https://ollama.com/library/tinyllama) |||
| tinydolphin | 637MB | 1.1B | | chat | [Link](https://ollama.com/library/tinydolphin) |||
| phi3:mini | 2.4GB | 3B | | chat | [Link](https://ollama.com/library/phi3:mini) |||
| phi3.5 | 2.2GB | 3B | | chat | [Link](https://ollama.com/library/phi3.5) |||
| granite-code:3b | 2.0GB | 3B | | code | [Link](https://ollama.com/library/granite-code:3b) |||
| qwen2.5:0.5b | 398MB | 0.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:0.5b) |||
| qwen2.5:1.5b | 986MB | 1.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:1.5b) |||
| qwen2.5:3b | 1.9GB | 3B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:3b) |||
| qwen2.5-coder:1.5b | 986MB | 1.5B | | code, tools | [Link](https://ollama.com/library/qwen2.5-coder:1.5b) |||
| qwen2:0.5b | 352MB | 0.5B | | chat | [Link](https://ollama.com/library/qwen2:0.5b) |||
| qwen2:1.5b | 934MB | 1.5B | | chat | [Link](https://ollama.com/library/qwen2:1.5b) |||
| qwen:0.5b | 395MB | 0.5B | | chat | [Link](https://ollama.com/library/qwen:0.5b) |||
| qwen2-math:1.5b | 935MB | 1.5B | Specialized math language model | math | [Link](https://ollama.com/library/qwen2-math:1.5b) |||
| starcoder:1b | 726MB | 1B | Code generation model | code | [Link](https://ollama.com/library/starcoder:1b) |||
| starcoder2:3b | 1.7GB | 3B | | code | [Link](https://ollama.com/library/starcoder2:3b) |||
| stablelm2:1.6b | 983MB | 1.6B | LLM trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. | chat | [Link](https://ollama.com/library/stablelm2:1.6b) |||
| stable-code:3b | 1.6GB | 3B | Coding model | code | [Link](https://ollama.com/library/stable-code:3b) |||
| rouge/replete-coder-qwen2-1.5b:Q8 | 1.9GB | 1.5B | Coding capabilities + non-coding data, fully cleaned and uncensored (math + tools? to be tested) | code | [Link](https://ollama.com/rouge/replete-coder-qwen2-1.5b:Q8) | ||
| dolphin-phi:2.7b | 1.6GB | 2.7B | uncensored | chat | [Link](https://ollama.com/library/dolphin-phi:2.7b) |||
| CognitiveComputations/dolphin-gemma2:2b | 1.6GB | 2B | | chat | [Link](https://ollama.com/CognitiveComputations/dolphin-gemma2:2b) |||
| allenporter/xlam:1b | 873MB | 1B | | tools | [Link](https://ollama.com/allenporter/xlam:1b) |||
| sam4096/qwen2tools:0.5b | 352MB | 0.5B | | tools | [Link](https://ollama.com/sam4096/qwen2tools:0.5b) |||
| sam4096/qwen2tools:1.5b | 935MB | 1.5B | | tools | [Link](https://ollama.com/sam4096/qwen2tools:1.5b) |||
| mxbai-embed-large | 670MB | 335M | Only Embeddings | embedding | [Link](https://ollama.com/library/mxbai-embed-large:335m) |||
| nomic-embed-text | 274MB | 137M | Only Embeddings | embedding | [Link](https://ollama.com/library/nomic-embed-text:v1.5) |||
| Yi Coder 1.5b | 866MB | 1.5B | Code | code | [Link](https://ollama.com/library/yi-coder:1.5b) |||
| mxbai-embed-large:335m | 670MB | 335M | Only Embeddings | embedding | [Link](https://ollama.com/library/mxbai-embed-large:335m) |||
| nomic-embed-text:v1.5 | 274MB | 137M | Only Embeddings | embedding | [Link](https://ollama.com/library/nomic-embed-text:v1.5) |||
| yi-coder:1.5b | 866MB | 1.5B | Code | code | [Link](https://ollama.com/library/yi-coder:1.5b) |||
| bge-m3 | 1.2GB | 567M | Only Embeddings | embedding | [Link](https://ollama.com/library/bge-m3) |||
| reader-lm:0.5b | 352MB | 0.5b | convert HTML to Markdown | | [Link](https://ollama.com/library/reader-lm:0.5b) |||
| reader-lm:1.5b | 935MB | 1.5b | convert HTML to Markdown | | [Link](https://ollama.com/library/reader-lm:1.5b) |||
| shieldgemma:2b | 1.7GB | 2b | evaluate the safety of text | | [Link](https://ollama.com/library/shieldgemma:2b) |||
| llama-guard3:1b | 1.6GB | 1b | evaluate the safety of text | | [Link](https://ollama.com/library/llama-guard3:1b) |||
| reader-lm:0.5b | 352MB | 0.5b | convert HTML to Markdown | conversion | [Link](https://ollama.com/library/reader-lm:0.5b) |||
| reader-lm:1.5b | 935MB | 1.5b | convert HTML to Markdown | conversion | [Link](https://ollama.com/library/reader-lm:1.5b) |||
| shieldgemma:2b | 1.7GB | 2b | evaluate the safety of text | safety | [Link](https://ollama.com/library/shieldgemma:2b) |||
| llama-guard3:1b | 1.6GB | 1b | evaluate the safety of text | safety | [Link](https://ollama.com/library/llama-guard3:1b) |||
| granite3-dense:2b | 1.6GB | 2b | | chat, tools, embedding | [Link](https://ollama.com/library/granite3-dense:2b) |||
| granite3-moe:1b | 822MB | 1b | | chat, tools, embedding | [Link](https://ollama.com/library/granite3-moe:1b) |||
| llama3.2:1b | 1.3GB | 1b | | chat, tools | [Link](https://ollama.com/library/llama3.2:1b) |||
| llama3.2:3b | 2.0GB | 3b | | chat, tools | [Link](https://ollama.com/library/llama3.2:3b) |||
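
Each entry above maps to a tag that Ollama can pull directly, e.g. `ollama pull qwen2.5:0.5b`. As a minimal sketch of how the local collection can be inspected programmatically (run with Node 18+ as an ES module; it assumes Ollama is listening on its default port, 11434), the list of already-pulled models can be read from Ollama's `/api/tags` endpoint:

```javascript
// Sketch: list the models already pulled into the local Ollama instance.
// Assumes Ollama is running locally on its default port (11434).
const response = await fetch("http://localhost:11434/api/tags");
const { models } = await response.json();

for (const model of models) {
  // `size` is reported in bytes; convert to MB for readability.
  const sizeMB = (model.size / (1024 * 1024)).toFixed(0);
  console.log(`${model.name}: ${sizeMB} MB`);
}
```
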
4 changes: 2 additions & 2 deletions benchmarks.html
@@ -107,7 +107,7 @@


<div class="header-section">
<h1>SMLs Benchmarks</h1>
<h1>SLMs Benchmarks</h1>
<img src="male-borg.jpg" alt="AI Robot" class="header-image">
</div>

@@ -116,7 +116,7 @@ <h1>SMLs Benchmarks</h1>
</div>

<p class="description">
<b>Chat completion:</b> benchmarks for different <b>SMLs</b> running on Ollama.<br>
<b>Chat completion:</b> benchmarks for different <b>SLMs</b> running on Ollama.<br>
<pre><code>
System instructions: You are a useful AI agent.
User question: Who is Jean-Luc Picard?
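
The benchmark page itself is only the HTML shell; below is a hedged sketch of how the chat-completion measurement quoted above could be reproduced against Ollama's `/api/chat` endpoint. The timing fields (`eval_count`, `eval_duration`) come from Ollama's non-streaming response; the model choice and port are assumptions.

```javascript
// Sketch: time a single chat completion for one SLM, mirroring the
// benchmark prompt shown above. Assumes a local Ollama on port 11434
// and that the example model (qwen2.5:0.5b) has already been pulled.
const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5:0.5b",
    messages: [
      { role: "system", content: "You are a useful AI agent." },
      { role: "user", content: "Who is Jean-Luc Picard?" }
    ],
    stream: false
  })
});

const result = await response.json();

// Ollama reports durations in nanoseconds; eval_count is the number of
// generated tokens, so tokens/s = eval_count / (eval_duration in seconds).
const tokensPerSecond = result.eval_count / (result.eval_duration / 1e9);
console.log(result.message.content);
console.log(`~${tokensPerSecond.toFixed(1)} tokens/s`);
```
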
4 changes: 2 additions & 2 deletions gen-readme.js
@@ -38,8 +38,8 @@ function generateMarkdownTable(data, title, description, picture) {
return markdown;
}

let title = "Awesome SMLs"
let description = "This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with Ollama"
let title = "Awesome SLMs"
let description = "This is the list of the SLMs I use on my Raspberry Pi5 (8GB RAM) with Ollama"
let picture = "female-borg.jpg"

let mdContent = ``
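
Only the tail of `generateMarkdownTable` is visible in this hunk. As a rough sketch of what such a helper might look like (the field names `name`, `size`, `tag`, `remark`, `kind`, and `url` are assumptions, not the actual gen-readme.js schema), it could build the README table roughly like this:

```javascript
// Hedged sketch of a markdown table generator in the spirit of gen-readme.js.
// Field names (name, size, tag, remark, kind, url) are assumptions; the real
// script may use different ones.
function generateMarkdownTable(data, title, description, picture) {
  let markdown = `# ${title}\n\n`;
  markdown += `<div align="center">\n  <img width="50%" height="50%" src="${picture}" alt="${title}">\n</div>\n\n`;
  markdown += `${description}\n\n`;
  markdown += `| Name | Size | Tag | Remark | Kind | URL |\n`;
  markdown += `| --- | --- | --- | --- | --- | --- |\n`;

  for (const model of data) {
    markdown += `| ${model.name} | ${model.size} | ${model.tag} | ${model.remark ?? ""} | ${model.kind ?? ""} | [Link](${model.url}) |\n`;
  }

  return markdown;
}

// Example usage with a single entry taken from the table above:
console.log(generateMarkdownTable(
  [{ name: "qwen2.5:0.5b", size: "398MB", tag: "0.5B", kind: "chat, tools", url: "https://ollama.com/library/qwen2.5:0.5b" }],
  "Awesome SLMs",
  "This is the list of the SLMs I use on my Raspberry Pi5 (8GB RAM) with Ollama",
  "female-borg.jpg"
));
```
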
9 changes: 5 additions & 4 deletions index.html
@@ -108,7 +108,7 @@
text-align: center;
margin-bottom: 2rem;
}

.nav-links {
margin: 1rem 0;
padding: 1rem;
@@ -130,14 +130,15 @@
<body>
<div class="container">
<div class="header-section">
<h1>Awesome SMLs</h1>
<h1>Awesome SLMs</h1>
<img src="female-borg.jpg" alt="AI Robot" class="header-image">
</div>

<p class="description">This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with Ollama</p>
<p class="description">This is the list of the SLMs I use on my Raspberry Pi5 (8GB RAM) with Ollama</p>

<div class="nav-links">
📊 <a href="benchmarks.html">View Model Benchmarks</a>
📊 <a href="benchmarks.html">View Model Benchmarks</a><br>
📝 <a href="https://github.com/parakeet-nest/awesome-slms">Awesome SLMs on GitHub</a>
</div>


