When using Docker containers for development environments (also known as the opposite of micro-services), a common scenario is that you are working on a laptop but want to run the containers on a more powerful machine, such as a workstation or build server.
This can be done by exposing the Docker service API to the network and connecting to it from a different host. Here’s how to do it.
Docker Service API
Enable the Docker API by editing /lib/systemd/system/docker.service as follows:
$ sudo nano /lib/systemd/system/docker.service
...
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd -H=fd:// -H=tcp://0.0.0.0:4243
...
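As an alternative to editing the packaged unit file (which a package upgrade may overwrite), systemd also supports drop-in overrides. A sketch of /etc/systemd/system/docker.service.d/override.conf — the empty ExecStart= line is required to clear the original value before replacing it:

```
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H=fd:// -H=tcp://0.0.0.0:4243
```

The daemon-reload and restart steps below apply unchanged either way.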
In case the Docker service is not enabled yet, enable and start it now (usually it already is):
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
Reload the system daemons and restart the Docker service so the changes take effect.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.service
Finally, test whether the Docker service is running and accessible over the REST API.
# Test if the Docker API is accessible from the local host
$ curl http://localhost:4243/images/json
...<json dump here>

# Test if the Docker API is accessible from a remote host
# Replace remote-host-name with your machine's host name or IP address
$ curl http://remote-host-name:4243/images/json
...<json dump here>
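Since the API speaks plain JSON, its responses can be post-processed with standard tools. A minimal sketch, using an illustrative sample in place of a live daemon (field names follow the Docker Engine API; the image ID and tag are made up):

```shell
# Illustrative sample of what GET /images/json returns (trimmed). In a real
# setup this would be: sample="$(curl -s http://remote-host-name:4243/images/json)"
sample='[{"Id":"sha256:abc123","RepoTags":["ubuntu:22.04"]}]'

# Extract the repo tags with python3, so no extra tooling is assumed
echo "$sample" | python3 -c 'import json,sys; print(*[t for i in json.load(sys.stdin) for t in i["RepoTags"]])'
```

Against a real daemon the same pipeline lists every tag of every image on the remote host.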
Docker Context
Now that the Docker API is accessible it is time to use it.
The way to do this is by using a Docker context.
This feature wasn’t always available. In older Docker versions the DOCKER_HOST
environment variable had to be set to achieve the same effect, which was more cumbersome to manage.
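For comparison, the legacy approach can be sketched like this (remote-host-name is a placeholder as above):

```shell
# Legacy approach: point the CLI at the remote daemon via an environment
# variable. This only affects the current shell session.
export DOCKER_HOST=tcp://remote-host-name:4243
echo "$DOCKER_HOST"
# Every docker command in this shell now targets the remote daemon.
# unset DOCKER_HOST   # revert to the local daemon
```

Because the variable has to be exported in every shell (or baked into shell profiles), contexts are the nicer mechanism.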
First have a look at the existing Docker contexts.
$ docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
Now create a new context which will be used to run containers on the remote host that has the API enabled.
$ docker context create "new-context-name" \
    --description "Docker API on remote-host-name" \
    --docker "host=tcp://remote-host-name:4243"
BTW: Whenever a host name is used, it is also possible to use an IPv4 address instead.
Now it is time to switch to the new context:
$ docker context use "new-context-name"
$ docker context ls
NAME                 DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default              Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
new-context-name *   Docker API on remote-host-name            tcp://remote-host-name:4243
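Switching the default is not the only option: the Docker CLI also accepts a context per command via the global --context flag, which leaves the default untouched:

```
$ docker --context "new-context-name" ps
$ docker --context default ps
```

This is handy for one-off commands against the remote host while your default context stays local.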
If everything was set up correctly, from now on the docker
command will refer to the Docker service API running on the remote host and all Docker commands will be executed on the remote host.
Test this by pulling an image or running a container in the new context. Use docker image ls
to check whether an image was pulled to the local or the remote host, and docker container ls --all
to check where a container has been spawned. You get the gist.
Note that this is merely a test setup for use in a protected local environment.
The API as configured here is unauthenticated and unencrypted, so anyone who can reach the port has full control over the Docker host. It is strongly advisable to add additional security precautions, e.g. firewall rules that limit access to the Docker API to hosts on the local network.
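As one example of such a precaution, a firewall rule restricting the API port to the local subnet might look like this, assuming ufw is the firewall in use (192.168.0.0/24 is a placeholder for your network):

```
$ sudo ufw allow from 192.168.0.0/24 to any port 4243 proto tcp
$ sudo ufw deny 4243/tcp
```

The allow rule is added first so it takes precedence over the general deny.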
That’s it.
References:
- https://docs.docker.com/engine/context/working-with-contexts/
- https://github.com/mtabishk/expose-docker-api
- https://collabnix.com/how-to-connect-to-remote-docker-using-docker-context-cli/
- https://pitstop.manageengine.com/portal/en/kb/articles/how-to-enable-docker-remote-api-on-docker-host
- https://medium.com/xebia-engineering/using-docker-containers-as-jenkins-build-slaves-a0bb1c9190d