Reach host port from container when userland proxy is disabled
I have a container in a bridged network. It can reach the host without problems and can connect to every port on the host EXCEPT for ports that were published by other containers.
I have the userland proxy disabled, so I think it might have something to do with how Docker sets up the iptables rules.
Is there a simple way to allow the container to reach ports published by other containers (running in different bridged networks)?
I would like to avoid putting both containers in the same network or switching to host networking.
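For reference, a minimal sketch of how the userland proxy is typically disabled: the "userland-proxy" key in /etc/docker/daemon.json, followed by a daemon restart (the restart command assumes a systemd host):
{
  "userland-proxy": false
}
$ sudo systemctl restart docker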
I can’t reproduce your problem.
I have two Docker bridged networks defined:
eecf335e7600 net1 bridge local
5c7ca74637b0 net2 bridge local
I start container c1 in network net1, publishing container port 8080 to host port 8081:
$ docker run -d --rm --net net1 -p 8081:8080 --name c1 --add-host host.docker.local:host-gateway alpinelinux/darkhttpd
I start container c2 in network net2, publishing container port 8080 to host port 8082:
$ docker run -d --rm --net net2 -p 8082:8080 --name c2 --add-host host.docker.local:host-gateway alpinelinux/darkhttpd
Now from within container c1 I can access the web service in container c2 by accessing host port 8082:
$ docker exec c1 wget -O- http://host.docker.local:8082
Connecting to host.docker.local:8082 (172.17.0.1:8082)
writing to stdout
- 100% |********************************| 191 0:00:00 ETA
written to stdout
<html>
<head>
...
And from container c2 I can access the web service in container c1 by accessing host port 8081:
$ docker exec c2 wget -O- http://host.docker.local:8081
Connecting to host.docker.local:8081 (172.17.0.1:8081)
writing to stdout
- 100% |********************************| 191 0:00:00 ETA
written to stdout
<html>
<head>
...
It all seems to work as advertised. If you get different results when repeating the same steps, the first thing I would check is whether you have firewall rules on your host that could be interfering with the traffic.
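For example, assuming an iptables-based host, something along these lines will dump the chains Docker manages plus any custom rules in DOCKER-USER that could be dropping the traffic:
$ sudo iptables -S DOCKER-USER
$ sudo iptables -S FORWARD
$ sudo iptables -t nat -S DOCKER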
Update
As expected, the behavior is the same if I just pick a host address rather than using the --add-host option. E.g., if I have:
$ ip -o addr |grep -o '.*inet [^ ]*'
1: lo inet 127.0.0.1/8
2: eth0 inet 192.168.123.106/24
4: docker_gwbridge inet 172.23.0.1/16
8: br-70f125dc5b7d inet 192.168.208.1/20
15: br-a87cc1462629 inet 172.29.0.1/16
18: docker0 inet 172.17.0.1/16
35: br-eecf335e7600 inet 172.18.0.1/16
36: br-5c7ca74637b0 inet 172.19.0.1/16
Then these all work:
$ docker exec c2 wget -O- 192.168.123.106:8081 | head -3
Connecting to 192.168.123.106:8081 (192.168.123.106:8081)
writing to stdout
- 100% |********************************| 191 0:00:00 ETA
written to stdout
<html>
<head>
<title>/</title>
$ docker exec c2 wget -O- 172.29.0.1:8081 | head -3
Connecting to 172.29.0.1:8081 (172.29.0.1:8081)
writing to stdout
- 100% |********************************| 191 0:00:00 ETA
written to stdout
<html>
<head>
<title>/</title>
$ docker exec c2 wget -O- 172.18.0.1:8081 | head -3
Connecting to 172.18.0.1:8081 (172.18.0.1:8081)
writing to stdout
- 100% |********************************| 191 0:00:00 ETA
written to stdout
<html>
<head>
<title>/</title>
Etc.
I did not mention the userland proxy in the question because I was pretty sure I already had this problem before I disabled it…
That’s a pretty substantial configuration change from the default :).
If I disable the userland proxy, it all stops working.
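A quick sanity check (assuming a container with a published port is running) is to look for docker-proxy processes on the host:
$ pgrep -af docker-proxy
With the proxy enabled there is one docker-proxy process per published port mapping; with it disabled, the command prints nothing.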
This article may be of interest:
In the previous section we identified two scenarios where Docker cannot use iptables NAT rules to map a published port to a container service:
- When a container connected to another Docker network tries to reach the service (Docker is blocking direct communication between Docker networks);
- When a local process tries to reach the service through loopback interface.
In both cases, Docker uses a userland (Linux process) TCP or UDP proxy. You can easily identify the proxy with netstat command after starting a container with a published port (we’ll yet again use our standard Flask application):
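The article's output is not reproduced above, but as an illustration of my own (not taken from the article), with the proxy enabled you would see the docker-proxy listeners with something like:
$ sudo netstat -tlpn | grep docker-proxy
With the userland proxy disabled there are no such listener processes, which is presumably why both scenarios above stop working in that configuration.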