How to avoid the auto-generated numbered suffix in the node name (i.e. `mynode-1`) #74
Comments
That said, I'm curious about why the specific device name matters. If it's related to tailnet ACLs, have you looked at tagging the nodes and using those tags for ACLs? Or is there some other reason the specific device name matters, particularly in a CI environment?
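In case it helps, here's a rough sketch of what tag-based ACLs could look like in the tailnet policy file. The `tag:preview` name and the port are just placeholders, not anything from this project:

```jsonc
{
  // Who is allowed to apply the tag to devices (or mint tagged auth keys).
  "tagOwners": {
    "tag:preview": ["autogroup:admin"]
  },
  // Allow any tailnet member to reach tagged preview nodes over HTTPS,
  // regardless of what the individual machine names end up being.
  "acls": [
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["tag:preview:443"]
    }
  ]
}
```

Nodes brought up with an auth key carrying `tag:preview` would then match this rule no matter how their machine names get suffixed.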
I'll give it a shot and report back with the results.
The tailscale machine names are significant because of the declarative approach (nix) I'm following in my project. One prerequisite for this approach is that we need to know, at build time (typically in CI), what the final tailscale node machine names will be.

Example

To illustrate with an example, say the project exposes 3 services that need to be consumed by end-users (other tailnet members):

```
{
	tailscale {
		ephemeral
	}
}

https://{$SITE_TS_NODE}.tail123.ts.net {
	bind tailscale/{$SITE_TS_NODE}
	respond "hello from site"
}

https://{$GRAFANA_TS_NODE}.tail123.ts.net {
	bind tailscale/{$GRAFANA_TS_NODE}
	respond "hello from grafana"
}

https://{$LOKI_TS_NODE}.tail123.ts.net {
	bind tailscale/{$LOKI_TS_NODE}
	respond "hello from loki"
}
```

For each PR, a separate instance is deployed. To namespace the tailscale nodes, every node name is suffixed with the git branch name. So if a PR is opened for a git branch called `fix-some-bug`, the resulting node names would be `site-fix-some-bug`, `grafana-fix-some-bug`, and `loki-fix-some-bug`.
Now that we can rely on a deterministic naming scheme, we can use nix to define our development environments and infrastructure. This then unlocks some pretty cool stuff, including the ability to:
Issue

Say we wanted our site to display a clickable link to our grafana service. The source code would look like this (using a svelte template or similar):

```html
<a href="https://{$GRAFANA_TS_NODE}.{$TAILNET_NAME}">
  Go to Grafana
</a>
```

And the compiled html would look like this:

```html
<a href="https://grafana-fix-some-bug.tail123.ts.net">
  Go to Grafana
</a>
```

When our tailscale node name ends up with an auto-generated suffix (e.g. `grafana-fix-some-bug-1`), this compiled link no longer points at the actual node.

In a similar fashion, the VPS machine we deploy may include system environment variables that reference these node names:

```nix
nixosConfigurations = {
  vps = nixpkgs.lib.nixosSystem {
    modules = [
      {
        environment.variables = {
          SITE_TS_NODE = "site-${GIT_BRANCH}";
          GRAFANA_TS_NODE = "grafana-${GIT_BRANCH}";
          # ...
        };

        # This grafana systemd service will not be able to reach loki if
        # the loki machine name ends up being `loki-fix-some-bug-1`
        systemd.user.services.grafana = {
          wantedBy = [ "multi-user.target" ];
          serviceConfig = {
            Restart = "on-failure";
            ExecStart = "start-grafana.sh";
          };
          environment = {
            LOKI_TS_NODE = "loki-${GIT_BRANCH}";
          };
        };
      }
    ];
  };
};
```
It indeed does not work. When setting
I have the exact same issue, although for a slightly different use-case. I'm self-hosting a number of services on a NAS device, and whenever I re-deploy a new configuration to caddy-tailscale (e.g. a new service needs to be reverse proxied), all the Tailnet DNS names change and I have to go and rename/remove the ones that no longer work. Being able to re-up caddy-tailscale and have it attach to existing machines for known entries, and only create new ones for first-time entries, would definitely solve this problem.
I think I managed to solve my issue by simply persisting the state. I added this directive in my Caddyfile:
And then mounted a persistent docker volume to that location. Initially I just ran a script to remove the machines via API calls during deploy, but that no longer seems needed.
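For anyone hitting the same thing, a minimal sketch of that kind of setup, assuming the plugin's `state_dir` global option (double-check the option name against the caddy-tailscale README) and an example path of `/var/lib/caddy-tailscale`:

```
{
	tailscale {
		state_dir /var/lib/caddy-tailscale
	}
}
```

With a persistent Docker volume mounted at that path, the node state survives container recreation, so the nodes should re-attach under their existing machine names instead of registering as new `-1` devices.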
Hi. I'm using this plugin in a CI pipeline, where it creates some nodes in the tailnet for per-PR preview environments. Sometimes, while iterating on the project, the underlying VPS is destroyed and immediately recreated. This results in undesirable behavior where node names are automatically suffixed with an integer (e.g. `mycaddynode` becomes `mycaddynode-1`).
This breaks the project's naming scheme. To fix it, I log on to the tailscale admin UI, remove `mycaddynode` from the tailnet, then rename `mycaddynode-1` to `mycaddynode`. The fix is simple, but it can get tedious at times as it first requires identifying that this event has occurred in the first place.

Therefore, I'd like to automate the fix. After searching around for a bit, I found that tailscaled's `--state=mem:` flag essentially works around this auto-suffix naming issue.

After trying it, `--state=mem:` does indeed fix the issue, but only for "regular" tailscale nodes and not for nodes that are generated by this caddy-tailscale plugin.

Is there a way to get the same behavior for the "caddy nodes" too?
Workarounds
For now, I think my best bet would be to write a script that leverages tailscale's REST API to remove the stale offline nodes and rename the replacement nodes (`POST /device/{deviceId}/name`), i.e. `mycaddynode-1` -> `mycaddynode`.
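If it's useful to anyone, here is a rough, untested sketch of what that cleanup script could look like against Tailscale's v2 API. The tailnet name, API key, and node name below are placeholders; it assumes the documented endpoints for listing devices, deleting a device, and renaming a device:

```python
import requests

API = "https://api.tailscale.com/api/v2"
TAILNET = "example.com"      # placeholder: your tailnet name
API_KEY = "tskey-api-..."    # placeholder: an API access token
WANTED = "mycaddynode"       # the machine name the node should end up with

# The API key is passed as the basic-auth username with an empty password.
auth = (API_KEY, "")

# List all devices in the tailnet.
resp = requests.get(f"{API}/tailnet/{TAILNET}/devices", auth=auth)
resp.raise_for_status()
devices = resp.json()["devices"]

def machine_name(device):
    # "mycaddynode-1.tail123.ts.net" -> "mycaddynode-1"
    return device["name"].split(".")[0]

# The stale node still holds the desired name; the replacement got the "-1" suffix.
stale = next((d for d in devices if machine_name(d) == WANTED), None)
replacement = next((d for d in devices if machine_name(d) == f"{WANTED}-1"), None)

if stale and replacement:
    # NOTE: a real script should also confirm the stale device is actually
    # offline (e.g. by checking its lastSeen timestamp) before deleting it.
    requests.delete(f"{API}/device/{stale['id']}", auth=auth).raise_for_status()

    # Rename the replacement back to the original machine name.
    requests.post(
        f"{API}/device/{replacement['id']}/name",
        auth=auth,
        json={"name": WANTED},
    ).raise_for_status()
```

Running something like this at the start of each deploy would keep the naming scheme stable without manual clicks in the admin UI.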