
Commit 0e8a362
πŸ›Ÿ Updated.
k33g committed Oct 21, 2024
1 parent 57c8231 · commit 0e8a362
Showing 3 changed files with 167 additions and 33 deletions.
4 changes: 2 additions & 2 deletions .vscode/settings.json
@@ -1,8 +1,8 @@
  {
    "workbench.iconTheme": "material-icon-theme",
    "workbench.colorTheme": "GitHub Dark Colorblind (Beta)",
-   "terminal.integrated.fontSize": 10,
-   "editor.fontSize": 10,
+   "terminal.integrated.fontSize": 14,
+   "editor.fontSize": 14,
    "files.autoSave": "afterDelay",
    "files.autoSaveDelay": 1000,
    "editor.insertSpaces": true,
72 changes: 41 additions & 31 deletions README.md
@@ -1,35 +1,45 @@
  # Awesome SMLs

- This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with [Ollama](https://ollama.com/):
+ This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with [Ollama](https://ollama.com/):# Awesome SMLs

- | Name | Size | tag | Remark | kind | URL | Good on Pi5 | Usable on Pi5 |
+ This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with Ollama

+ | Name | Size | Tag | Remark | Kind | URL | Good on Pi5 | Usable on Pi5 |
  | --- | --- | --- | --- | --- | --- | --- | --- |
- | CodeGemma 2b | 1.6GB | 2B | Fill-in-the-middle code completion | code | https://ollama.com/library/codegemma:2b | | x |
- | Gemma 2b | 1.7GB | 2B | | | https://ollama.com/library/gemma:2b | | x |
- | Gemma2 2b | 1.6GB | 2B | | | https://ollama.com/library/gemma2:2b | | x |
- | All-Minilm 22m | 46MB | 22M | Only Embeddings | embedding | https://ollama.com/library/all-minilm:22m | x | x |
- | All-Minilm 33m | 67MB | 33M | Only Embeddings | embedding | https://ollama.com/library/all-minilm:33m | x | x |
- | DeepSeek Coder 1.3b | 776MB | 1.3B | Trained on both 87% code and 13% natural language | code | https://ollama.com/library/deepseek-coder | x | x |
- | TinyLlama 1.1b | 638MB | 1.1B | | | https://ollama.com/library/tinyllama | x | x |
- | TinyDolphin 1.1b | 637MB | 1.1B | | | https://ollama.com/library/tinydolphin | x | x |
- | Phi3 Mini | 2.4GB | 3B | | | https://ollama.com/library/phi3:mini | | x |
- | Phi3.5 | 2.2GB | 3B | | | https://ollama.com/library/phi3.5 | | x |
- | Granite-code 3b | 2.0GB | 3B | | code | https://ollama.com/library/granite-code | | x |
- | Qwen2 0.5b | 352MB | 0.5B | | | https://ollama.com/library/qwen2:0.5b | x | x |
- | Qwen2 1.5b | 934MB | 1.5B | | | https://ollama.com/library/qwen2:1.5b | | x |
- | Qwen 0.5b | 395MB | 0.5B | | | https://ollama.com/library/qwen:0.5b | x | x |
- | Qwen2 Math 1.5b | 935MB | 1.5B | Specialized math language model | math | https://ollama.com/library/qwen2-math:1.5b | | x |
- | StarCoder 1b | 726MB | 1B | Code generation model | code | https://ollama.com/library/starcoder:1b | x | x |
- | StarCoder2 3b | 1.7GB | 3B | | code | https://ollama.com/library/starcoder2:3b | | x |
- | Stable LM 2 1.6b | 983MB | 1.6B | LLM trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. | | https://ollama.com/library/stablelm2 | x | x |
- | Stable Code 3b | 1.6GB | 3B | Coding model | code | https://ollama.com/library/stable-code:3b | | x |
- | Replete-Coder Qwen2 1.5b | 1.9GB | 1.5B | Coding capabilities + non-coding data, fully cleaned and uncensored (mat+tool? to be tested) | code | https://ollama.com/rouge/replete-coder-qwen2-1.5b:Q8 | x | x |
- | Dolphin-Phi 2.7b | 1.6GB | 2.7B | uncensored | | https://ollama.com/library/dolphin-phi:2.7b | | x |
- | Dolphin gemma2 2b | 1.6GB | 2B | | | https://ollama.com/CognitiveComputations/dolphin-gemma2:2b | | x |
- | allenporter/xlam:1b | 873MB | 1B | | tools | https://ollama.com/allenporter/xlam:1b | | x |
- | sam4096/qwen2tools:0.5b | 352MB | 0.5B | | tools | https://ollama.com/sam4096/qwen2tools:0.5b | x | x |
- | sam4096/qwen2tools:1.5b | 935MB | 1.5B | | tools | https://ollama.com/sam4096/qwen2tools:1.5b | | x |
- | mxbai-embed-large | 670MB | 335M | Only Embeddings | embedding | https://ollama.com/library/mxbai-embed-large:335m | x | x |
- | nomic-embed-text | 274MB | 137M | Only Embeddings | embedding | https://ollama.com/library/nomic-embed-text:v1.5 | x | x |
- | Yi Coder 1.5b | 866MB | 1.5B | Code | code | https://ollama.com/library/yi-coder:1.5b | | x |
- | bge-m3 | 1.2GB | 567M | Only Embeddings | embedding | https://ollama.com/library/bge-m3 | | x |
+ | CodeGemma 2b | 1.6GB | 2B | Fill-in-the-middle code completion | code | [Link](https://ollama.com/library/codegemma:2b) | ❌ | βœ… |
+ | Gemma 2b | 1.7GB | 2B | | | [Link](https://ollama.com/library/gemma:2b) | ❌ | βœ… |
+ | Gemma2 2b | 1.6GB | 2B | | | [Link](https://ollama.com/library/gemma2:2b) | ❌ | βœ… |
+ | All-Minilm 22m | 46MB | 22M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:22m) | βœ… | βœ… |
+ | All-Minilm 33m | 67MB | 33M | Only Embeddings | embedding | [Link](https://ollama.com/library/all-minilm:33m) | βœ… | βœ… |
+ | DeepSeek Coder 1.3b | 776MB | 1.3B | Trained on both 87% code and 13% natural language | code | [Link](https://ollama.com/library/deepseek-coder) | βœ… | βœ… |
+ | TinyLlama 1.1b | 638MB | 1.1B | | | [Link](https://ollama.com/library/tinyllama) | βœ… | βœ… |
+ | TinyDolphin 1.1b | 637MB | 1.1B | | | [Link](https://ollama.com/library/tinydolphin) | βœ… | βœ… |
+ | Phi3 Mini | 2.4GB | 3B | | | [Link](https://ollama.com/library/phi3:mini) | ❌ | βœ… |
+ | Phi3.5 | 2.2GB | 3B | | | [Link](https://ollama.com/library/phi3.5) | ❌ | βœ… |
+ | Granite-code 3b | 2.0GB | 3B | | code | [Link](https://ollama.com/library/granite-code) | ❌ | βœ… |
+ | Qwen2.5 0.5b | 398MB | 0.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:0.5b) | βœ… | βœ… |
+ | Qwen2.5 1.5b | 986MB | 1.5B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:1.5b) | ❌ | βœ… |
+ | Qwen2.5 3b | 1.9GB | 3B | | chat, tools | [Link](https://ollama.com/library/qwen2.5:3b) | ❌ | βœ… |
+ | Qwen2.5 Coder 1.5b | 986MB | 1.5B | | code, tools | [Link](https://ollama.com/library/qwen2.5-coder:1.5b) | ❌ | βœ… |
+ | Qwen2 0.5b | 352MB | 0.5B | | | [Link](https://ollama.com/library/qwen2:0.5b) | βœ… | βœ… |
+ | Qwen2 1.5b | 934MB | 1.5B | | | [Link](https://ollama.com/library/qwen2:1.5b) | ❌ | βœ… |
+ | Qwen 0.5b | 395MB | 0.5B | | | [Link](https://ollama.com/library/qwen:0.5b) | βœ… | βœ… |
+ | Qwen2 Math 1.5b | 935MB | 1.5B | Specialized math language model | math | [Link](https://ollama.com/library/qwen2-math:1.5b) | ❌ | βœ… |
+ | StarCoder 1b | 726MB | 1B | Code generation model | code | [Link](https://ollama.com/library/starcoder:1b) | βœ… | βœ… |
+ | StarCoder2 3b | 1.7GB | 3B | | code | [Link](https://ollama.com/library/starcoder2:3b) | ❌ | βœ… |
+ | Stable LM 2 1.6b | 983MB | 1.6B | LLM trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. | | [Link](https://ollama.com/library/stablelm2) | βœ… | βœ… |
+ | Stable Code 3b | 1.6GB | 3B | Coding model | code | [Link](https://ollama.com/library/stable-code:3b) | ❌ | βœ… |
+ | Replete-Coder Qwen2 1.5b | 1.9GB | 1.5B | Coding capabilities + non-coding data, fully cleaned and uncensored (mat+tool? to be tested) | code | [Link](https://ollama.com/rouge/replete-coder-qwen2-1.5b:Q8) | βœ… | βœ… |
+ | Dolphin-Phi 2.7b | 1.6GB | 2.7B | uncensored | | [Link](https://ollama.com/library/dolphin-phi:2.7b) | ❌ | βœ… |
+ | Dolphin gemma2 2b | 1.6GB | 2B | | | [Link](https://ollama.com/CognitiveComputations/dolphin-gemma2:2b) | ❌ | βœ… |
+ | allenporter/xlam:1b | 873MB | 1B | | tools | [Link](https://ollama.com/allenporter/xlam:1b) | ❌ | βœ… |
+ | sam4096/qwen2tools:0.5b | 352MB | 0.5B | | tools | [Link](https://ollama.com/sam4096/qwen2tools:0.5b) | βœ… | βœ… |
+ | sam4096/qwen2tools:1.5b | 935MB | 1.5B | | tools | [Link](https://ollama.com/sam4096/qwen2tools:1.5b) | ❌ | βœ… |
+ | mxbai-embed-large | 670MB | 335M | Only Embeddings | embedding | [Link](https://ollama.com/library/mxbai-embed-large:335m) | βœ… | βœ… |
+ | nomic-embed-text | 274MB | 137M | Only Embeddings | embedding | [Link](https://ollama.com/library/nomic-embed-text:v1.5) | βœ… | βœ… |
+ | Yi Coder 1.5b | 866MB | 1.5B | Code | code | [Link](https://ollama.com/library/yi-coder:1.5b) | ❌ | βœ… |
+ | bge-m3 | 1.2GB | 567M | Only Embeddings | embedding | [Link](https://ollama.com/library/bge-m3) | ❌ | βœ… |
+ | reader-lm:0.5b | 352MB | 0.5b | convert HTML to Markdown | | [Link](https://ollama.com/library/reader-lm:0.5b) | βœ… | βœ… |
+ | reader-lm:1.5b | 935MB | 1.5b | convert HTML to Markdown | | [Link](https://ollama.com/library/reader-lm:1.5b) | βœ… | βœ… |
+ | shieldgemma:2b | 1.7GB | 2b | evaluate the safety of text | | [Link](https://ollama.com/library/shieldgemma:2b) | ❌ | βœ… |
+ | llama-guard3:1b | 1.6GB | 1b | evaluate the safety of text | | [Link](https://ollama.com/library/llama-guard3:1b) | ❌ | βœ… |
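
Any of the models listed in the updated table can be exercised locally once pulled with Ollama. A minimal Node.js sketch, assuming Ollama is serving on its default port 11434, Node 18+ (for the global fetch), and that qwen2.5:0.5b from the table has already been pulled; the model name and prompt are placeholders:

// Minimal sketch (illustrative): query a model from the table through Ollama's
// local REST API. Requires Node 18+ (global fetch) and a running Ollama server.
async function ask(model, prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false })
  });
  const payload = await res.json();
  return payload.response; // the generated completion text
}

ask("qwen2.5:0.5b", "Give me one fact about the Raspberry Pi 5.")
  .then(answer => console.log(answer))
  .catch(err => console.error(err));

Swapping the model string for any other tag in the table is enough to try a different entry.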
124 changes: 124 additions & 0 deletions data.js
@@ -113,6 +113,46 @@ const smlData =
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "Qwen2.5 0.5b",
"size": "398MB",
"tag": "0.5B",
"remark": "",
"kind": "chat, tools",
"url": "https://ollama.com/library/qwen2.5:0.5b",
"good_on_pi5": true,
"usable_on_pi5": true
},
{
"name": "Qwen2.5 1.5b",
"size": "986MB",
"tag": "1.5B",
"remark": "",
"kind": "chat, tools",
"url": "https://ollama.com/library/qwen2.5:1.5b",
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "Qwen2.5 3b",
"size": "1.9GB",
"tag": "3B",
"remark": "",
"kind": "chat, tools",
"url": "https://ollama.com/library/qwen2.5:3b",
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "Qwen2.5 Coder 1.5b",
"size": "986MB",
"tag": "1.5B",
"remark": "",
"kind": "code, tools",
"url": "https://ollama.com/library/qwen2.5-coder:1.5b",
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "Qwen2 0.5b",
"size": "352MB",
@@ -292,6 +332,90 @@ const smlData =
"url": "https://ollama.com/library/bge-m3",
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "reader-lm:0.5b",
"size": "352MB",
"tag": "0.5b",
"remark": "convert HTML to Markdown",
"kind": "",
"url": "https://ollama.com/library/reader-lm:0.5b",
"good_on_pi5": true,
"usable_on_pi5": true
},
{
"name": "reader-lm:1.5b",
"size": "935MB",
"tag": "1.5b",
"remark": "convert HTML to Markdown",
"kind": "",
"url": "https://ollama.com/library/reader-lm:1.5b",
"good_on_pi5": true,
"usable_on_pi5": true
},
{
"name": "shieldgemma:2b",
"size": "1.7GB",
"tag": "2b",
"remark": "evaluate the safety of text",
"kind": "",
"url": "https://ollama.com/library/shieldgemma:2b",
"good_on_pi5": false,
"usable_on_pi5": true
},
{
"name": "llama-guard3:1b",
"size": "1.6GB",
"tag": "1b",
"remark": "evaluate the safety of text",
"kind": "",
"url": "https://ollama.com/library/llama-guard3:1b",
"good_on_pi5": false,
"usable_on_pi5": true
}
]
}


function generateMarkdownTable(data) {
const headers = [
'Name', 'Size', 'Tag', 'Remark', 'Kind', 'URL', 'Good on Pi5', 'Usable on Pi5'
];

let markdown = `# ${data.title}\n\n${data.description}\n\n`;
markdown += `| ${headers.join(' | ')} |\n`;
markdown += `| ${headers.map(() => '---').join(' | ')} |\n`;

data.models.forEach(model => {
const row = [
model.name,
model.size,
model.tag,
model.remark,
model.kind,
`[Link](${model.url})`,
model.good_on_pi5 ? 'βœ…' : '❌',
model.usable_on_pi5 ? 'βœ…' : '❌'
];
markdown += `| ${row.join(' | ')} |\n`;
});

return markdown;
}



function generate() {
mdContent = `# Awesome SMLs
This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with [Ollama](https://ollama.com/):`

mdContent += generateMarkdownTable(smlData)
const fs = require('fs')
fs.writeFileSync("README.md", mdContent);
}

// Only run the main function if this script is run directly (not imported)
if (require.main === module) {
generate();
}
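
The new generate() function rebuilds README.md from smlData, so running node data.js from the repository root rewrites the file. Two things to note: mdContent is assigned without a const or let declaration (so it becomes an implicit global), and the title is emitted twice, once in generate()'s template literal and once by generateMarkdownTable, which is why the regenerated README opens with the duplicated "# Awesome SMLs" heading visible in the README diff above. A small smoke test for generateMarkdownTable, as a sketch: the sample object below is illustrative and assumes the function is available in the same scope.

// Illustrative only: a one-model dataset in the same shape as smlData.
const sample = {
  title: "Awesome SMLs",
  description: "This is the list of the SMLs I use on my Raspberry Pi5 (8GB RAM) with Ollama",
  models: [
    {
      name: "Qwen2.5 0.5b",
      size: "398MB",
      tag: "0.5B",
      remark: "",
      kind: "chat, tools",
      url: "https://ollama.com/library/qwen2.5:0.5b",
      good_on_pi5: true,
      usable_on_pi5: true
    }
  ]
};

// Prints the title, the description, and a one-row table ending in "| βœ… | βœ… |".
console.log(generateMarkdownTable(sample));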
