Replies: 8 comments
-
Still using WSL
To show that Ollama is working on my Windows box, here is the output from inside the WSL Ubuntu session:
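(The pasted output didn't survive this copy of the thread; a typical check of this kind, assuming Ollama is reachable on its default port 11434 from that shell, looks like this:)

curl http://localhost:11434            # should answer "Ollama is running"
curl http://localhost:11434/api/tags   # JSON list of the locally pulled models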
And in Windows PowerShell, the only real difference is using
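(The exact snippet is missing above; one plausible reading, offered only as a guess, is that in Windows PowerShell the built-in curl alias maps to Invoke-WebRequest, so the same check would be:)

Invoke-RestMethod http://localhost:11434/api/tags
# or call the real curl binary explicitly:
curl.exe http://localhost:11434/api/tags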
-
@danielmiessler Any ideas?
-
Fabric Running Natively on Windows 11
Still no luck with local models 😢
I installed fabric (latest from git commit c3df1e7) from scratch on Windows 11, natively, using the new
I also have ollama 0.1.29 on the same machine.
And it's working correctly:
And here is the local model responding:
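(The model's reply was dropped from this copy; the kind of check meant here, assuming a model such as mistral has already been pulled, is:)

ollama pull mistral                               # fetch a local model if you don't have one yet
ollama run mistral "Say hello in one sentence."   # prompt it directly through the Ollama CLI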
However, fabric still does not see the local models:
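(The comparison that would normally go here — what fabric lists versus what the Ollama API returns — was also lost in the copy; a hedged reconstruction, treating the --listmodels flag of the legacy Python fabric as an assumption:)

fabric --listmodels                    # what fabric thinks is available
curl http://localhost:11434/api/tags   # what Ollama itself is serving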
Anyone know what I can do to debug this? Respond here or in this issue: #178
-
This is starting to get interesting. This is running on Windows 11 using PowerShell. Activated the
Next, the same code with an explicit server address.
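(The code blocks themselves were stripped from this copy; the point being made — the same request, but against an explicit address instead of localhost — can be reproduced from PowerShell like this, with a placeholder IP:)

$OLLAMA = "http://192.168.1.50:11434"   # placeholder; use your Windows host's actual address
curl.exe "$OLLAMA/api/tags"             # should return the same model list as the localhost call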
-
Similarly, in my WSL-installed fabric:
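(Again the output is missing; from inside WSL2 in the default NAT networking mode, the equivalent probe has to target the Windows host rather than localhost, for example:)

HOST_IP=$(grep -m1 nameserver /etc/resolv.conf | awk '{print $2}')   # Windows host as seen from WSL2
curl "http://${HOST_IP}:11434/api/tags"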
-
Answer

After a few fixes in this PR: #315

Install Ollama on Windows (not in WSL)

Once it's installed, set the OLLAMA_HOST environment variable. Set it to 0.0.0.0. This will make sure that ollama serves all interfaces, including the WSL network interface.

Install Fabric on Windows

Install git and Python on Windows. Once these are installed, clone the repository and install pipx:

git clone https://github.com/danielmiessler/fabric.git
python.exe -m pip install pipx

Then follow the usual process to install fabric:

cd fabric
pipx install .
fabric --setup

During the fabric setup process, add your API keys. Additionally,

Install Fabric on Windows WSL

In the WSL Linux container, go to the directory on Windows where you cloned fabric and do this:

cd /mnt/c/Users/YourUser/src/fabric
sudo apt install pipx
pipx install .
fabric --setup

You can skip the steps of adding your API keys here since you can

Add the following snippet to your
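For completeness, here is one way to carry out the OLLAMA_HOST step from the top of this answer; the setx approach is my own suggestion, not quoted from the original:

setx OLLAMA_HOST 0.0.0.0   # persists the variable for your account (System Properties > Environment Variables also works)
# Quit and restart the Ollama app afterwards so the new value is picked up.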
-
Thank you. I have the same problem, and I tried all these steps on two machines, but I still can't see any local models from Ollama. Any help would be appreciated!
-
A few things to keep in mind:
So, here's how I got it working: Set a system-level environment variable in Windows for OLLAMA_HOST to 0.0.0.0:11434. Then configure a firewall rule to allow inbound connections on port 11434. You can restrict this rule's scope to your private network or the host's IP address, or whatever you decide. You can verify that Ollama is now listening on the correct interfaces by typing this command in your PowerShell:
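(The command itself is missing from this copy; either of the following, supplied here as examples, will show whether the port is bound to 0.0.0.0 rather than just 127.0.0.1:)

netstat -an | findstr 11434
# or the PowerShell-native equivalent:
Get-NetTCPConnection -LocalPort 11434 -State Listen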
Next, you need to configure your fabric .env file by adding this line:
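(The actual line was lost in the copy; judging from the explanation that follows, it pointed fabric at the Windows host via the Docker-style hostname, something like this — the variable name is my assumption, not confirmed from the thread:)

OLLAMA_HOST=http://host.docker.internal:11434   # hypothetical .env entry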
Again, this is the workaround that Docker in WSL uses to connect to services on the host. You can verify that you can connect to the Ollama port we opened in the firewall rule by running this command in WSL:
And to check that Ollama is running from WSL:
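(Neither command survived the copy; both checks amount to hitting the Ollama port from inside WSL, for example, assuming host.docker.internal resolves in your distro:)

curl http://host.docker.internal:11434            # reachability check; prints "Ollama is running"
curl http://host.docker.internal:11434/api/tags   # lists the models Ollama is serving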
Once you set this up, your local models should show in fabric:
-
First Try: Using Fabric in WSL
I have a Windows 11 machine with an NVIDIA GeForce RTX 4090 installed. Fabric is updated to the latest from Git.
I read about the Ollama networking issue here: ollama/ollama#703
So, in PowerShell, I do this:
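(The command itself was lost in the paste; based on the networking issue linked above, it was presumably along these lines:)

$env:OLLAMA_HOST = "0.0.0.0"   # bind Ollama to all interfaces for this session
ollama serve                   # start serving; this is what triggers the firewall prompt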
Windows firewall then asks me to give permission to OLLAMA.EXE so it can serve the other interfaces.
Now in the WSL Ubuntu environment, I can see this:
Now, using the default gateway as the address for the Ollama server URL:
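(The commands and output are missing here; the default-gateway approach typically looks like this from the WSL side — a sketch, not the author's exact invocation:)

GATEWAY=$(ip route show default | awk '{print $3}')   # in WSL2 NAT mode, the default gateway is the Windows host
curl "http://${GATEWAY}:11434/api/tags"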
I'm still not seeing the local models in fabric:
Any ideas for how to get local models working in Windows/WSL?