Replies: 1 comment 1 reply
-
Got to say, I think you might be the first using Podman, or at least the first to mention it. I've never used Podman myself. As far as the DB config goes, everything seems correct. Not sure, but you could also try specifying a network to attach the containers to.
-
Hey everyone! 👋
I've hit a bit of a snag with my speedtest-tracker, and I could use some insights.
Let me set the stage: I'm dealing with a rootless Podman setup, using podman-compose. Interestingly, I've got a similar setup with wiki.js (application + database) in the same compose file, so the issue I'm facing caught me off guard.
Here's my current compose file:
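(The file itself got eaten by the paste, so here is a hypothetical reconstruction following the documented two-service example — image tags, ports, and credentials are placeholders, not my real values:)

```yaml
# Hypothetical reconstruction, not the exact file.
# The point is the shape: the app reaches the database by its
# service name ("db") over the stack's own network.
services:
  speedtest-tracker:
    image: lscr.io/linuxserver/speedtest-tracker:latest  # placeholder tag
    ports:
      - "8080:80"
    environment:
      - DB_CONNECTION=pgsql
      - DB_HOST=db          # resolved by container DNS (aardvark-dns)
      - DB_PORT=5432
      - DB_DATABASE=speedtest_tracker
      - DB_USERNAME=speedtest
      - DB_PASSWORD=changeme  # placeholder
    depends_on:
      - db
  db:
    image: postgres:16      # placeholder version
    environment:
      - POSTGRES_DB=speedtest_tracker
      - POSTGRES_USER=speedtest
      - POSTGRES_PASSWORD=changeme  # placeholder
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```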
I've experimented with both mariadb and postgresql. Initially, I had a dedicated network with a specific subnet, reverse-proxy, etc., but since that wasn't playing nice, I switched to the example from the documentation, which is pretty straightforward. The idea is that both containers should communicate via their names within the same stack using this simplified setup.
Now, here's where the plot thickens. When I run "podman-compose up -d," the application container throws an error: 'network could not translate host name "db" to address,' even though everything seems peachy in the db container.
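A few commands that help narrow this kind of failure down from inside the stack (container and network names here are examples — adjust to your own setup):

```shell
# Which resolver did the app container actually get?
podman exec speedtest-tracker cat /etc/resolv.conf

# Can the app container resolve the peer by service name?
podman exec speedtest-tracker nslookup db

# Are both containers really on the same network?
podman ps --format '{{.Names}} {{.Networks}}'
```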
I also gave it a shot with an older version (0.11.22) using SQLite, and all was smooth sailing. I tried the same version with mariadb and postgres without success.
Now, I'm pondering if this is related to Laravel and my specific configuration. The error message reads: "[object] (PDOException(code: 0): PDO::__construct(): php_network_getaddresses: getaddrinfo for db failed: Name or service not known at /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connectors/Connector.php:65)".
Has anyone else encountered a similar issue or have insights into what might be going on here? And do you think Laravel might be up to some funky business with the application-to-database connection?
Cheers!
I continued my tests today with a clear mind. I identified the conflict preventing the application from functioning. However, I still don't understand why.
Podman is at version 4.6.2 with netavark as the network backend and rootlesskit, which creates a tunnel from the host to the container by forwarding traffic. Since version 4, netavark can do DNS resolution via aardvark-dns, so containers can resolve each other by name or alias.
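Two quick checks that the netavark/aardvark-dns pair is actually in play for a given network (the network name is an example):

```shell
# DNS must be enabled on the network for name resolution to work;
# note the rootless default "podman" network usually has it off,
# while compose-created networks have it on.
podman network inspect mynet | grep dns_enabled

# Is an aardvark-dns process running for this user?
pgrep -a aardvark-dns
```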
On the same machine and Podman user, I have an Adguardhome container. My router hands out this container's address as the DNS server for my network, to filter requests and redirect internal domains. Adguardhome sits on a Podman network called "lan_network", which gives containers internal DNS resolution and eases communication between my containers while using Traefik as a reverse proxy.
Initially the plan was to place speedtest-tracker on this same network and on a network dedicated to communication with its database. As mentioned earlier, it didn't work. Worse, I didn't mention it yesterday, but all containers that depended on internal resolution from adguardhome to point to other services were no longer able to access them (e.g., Homepage). In reality, I couldn't even ping an external service from within the adguardhome container.
So, I disconnected speedtest-tracker from the lan_network to avoid a potential conflict, but it still didn't work.
So I stopped adguardhome and performed the same test, and now there's no issue; it works fine.
I still don't understand why. The exposed ports are different, the networks are separate, and the containers are isolated. The only lead I have involves DNS resolution: the speedtest-tracker container seems to resolve names differently. Instead of using aardvark-dns, it queries adguardhome on port 53 and conflicts with it (to the point of crashing its network layer).
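One way to see the overlap directly (the gateway IP below is an assumption — check `podman network inspect` for the real one on your network):

```shell
# Who is listening on port 53 on the host side?
sudo ss -tulnp | grep ':53 '

# From inside the failing container, query aardvark-dns on the
# network gateway explicitly instead of whatever resolv.conf says:
podman exec speedtest-tracker nslookup db 10.89.0.1  # gateway IP is an assumption
```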
The main difference I can see is that speedtest-tracker is built on Alpine, which is based on BusyBox and handles certain network tools differently (for example, ping requires a setuid bit). To confirm this, I will test my wiki.js stack (currently on another server) to see if the behavior is the same.
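Another Alpine detail worth knowing (a plausible contributor, not something I've confirmed): musl's resolver queries all nameservers in /etc/resolv.conf in parallel and takes the first answer, while glibc tries them sequentially, so two resolvers giving different answers makes lookups racy on musl. A way to compare the two libcs on the same network (network name is an example):

```shell
# Same network, same lookup, two libcs. If the glibc image resolves
# "db" while the Alpine (musl) one fails, the resolver behaviour
# is implicated.
podman run --rm --network mynet debian:stable getent hosts db
podman run --rm --network mynet alpine nslookup db
```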
And it's confirmed. wiki.js which is also built with Alpine is not able to connect to its db when running on the same VM:
error: Database Initialization Error: getaddrinfo ENOTFOUND wikijs_db
I have another cluster with Docker Swarm. Adguardhome, Wiki.js, and Speedtest-tracker are cohabiting without any problems. So it's probably related to rootless network management and Alpine/BusyBox.
So the culprit seems to be the "rootlesskit" mode (https://docs.podman.io/en/latest/markdown/podman-run.1.html#network-mode-net), which is the default in my use case.
Trying slirp4netns works (but with other trade-offs). Rootlesskit with DNS management plus Alpine/BusyBox seems to be the magic recipe for breakage. I guess I will wait for Ubuntu 24.04 so I can install and use passt/pasta, which will probably fix all of these problems.
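For reference, the network mode can be forced per container, or pinned for the whole user (the containers.conf key is from recent Podman releases — double-check against containers.conf(5) for your version):

```shell
# Force slirp4netns for a single container
podman run --rm --network slirp4netns alpine ip addr

# Or pin it as the rootless default for this user
mkdir -p ~/.config/containers
printf '[network]\ndefault_rootless_network_cmd = "slirp4netns"\n' \
  >> ~/.config/containers/containers.conf
```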
Any idea how to investigate this further?
Cheers.