I think I'm running into asymmetric routing with inbound connections.
I have a public IP configured as a frontend IP on the public load balancer.
On the FortiGates I have a virtual IP: the external IP is the PIP, the mapped IP is the VM's private IP, with port forwarding enabled (80 → 80).
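For reference, the VIP described above would look roughly like this in the FortiOS CLI. The object name and the IP addresses are placeholders, not values from my environment:

```
config firewall vip
    edit "vip-nginx"
        set extip 203.0.113.10       # the public load balancer frontend IP (PIP)
        set mappedip "10.0.2.4"      # the backend VM's private IP
        set portforward enable
        set extport 80
        set mappedport 80
    next
end
```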
IPv4 policy: from any to any, source all, destination the virtual IP, action accept, NAT disabled.
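And the matching firewall policy, again as a sketch — interface names and the VIP object name are assumptions:

```
config firewall policy
    edit 0
        set name "inbound-nginx"
        set srcintf "port1"          # external interface (assumption)
        set dstintf "port2"          # internal interface (assumption)
        set srcaddr "all"
        set dstaddr "vip-nginx"      # the VIP object from above
        set action accept
        set schedule "always"
        set service "HTTP"
        set nat disable
    next
end
```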
I have a simple VM with nginx installed.
Route table on the VM's subnet: 0.0.0.0/0 → internal load balancer.
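That route would be created along these lines with the Azure CLI. The resource group, route table name, and the internal load balancer frontend IP are placeholders:

```shell
# UDR sending all traffic from the VM subnet to the internal LB frontend,
# which distributes it across the FortiGate instances
az network route-table route create \
  --resource-group my-rg \
  --route-table-name vm-subnet-rt \
  --name default-via-fgt \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.100
```

With two FortiGates behind the internal load balancer, the return path depends on which instance the LB picks, which is why the inbound flow can come back through a different firewall than the one that saw the SYN.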
Hitting the public IP from a bunch of locations, some connections succeed and some time out.
Enabling NAT on the policy solves the issue, but then the nginx logs show the FortiGate IP instead of the client IP, which is not ideal. Am I missing a setting?