Docker is amazing, but it has its own way of handling things.
Docker-Compose is amazing too, and it also needs to be used the right way.
This post is about explaining how to attach new containers to existing containers already set up via docker-compose.
Background:
Docker-Compose, when it brings up a bunch of containers or 'services', puts all of them on the same network with a common subnet and DNS, so each service can talk to the others by service name.
Example:
mongodb://root:mongopwd@mongo:27017/
The 'mongo' in that line is the service name, so to connect, say, Mongo-Express to that container we can simply use the service name. Without the DNS component of the networking layer, we would need to know the container's specific IP address to connect. The 27017 is the port, which is MongoDB's default.
Normally with Docker-Compose, all the services or containers defined in the config file - normally docker-compose.yml - are given their own subnet and DNS, so they can talk to one another by service name.
Example:
version: '3.1'
services:
  mongo:
    image: mongo
    container_name: mongodb
    restart: unless-stopped
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: mongopwd
  websvr:
    build: ./work
    container_name: web4mongo
    restart: unless-stopped
    volumes:
      - ./work/src:/var/www/html
    ports:
      - 80:80
    links:
      - mongo
    depends_on:
      - mongo
The services are denoted using indentation, and shown here as 'mongo' and 'websvr'. Those two can then talk to one another using 'mongo' or 'websvr' rather than the IP address of each individual container. This is really powerful since it makes the networking portion more easily understood, and also more easily repeatable, since containers brought up on one system may get assigned different IP ranges than on another system. From a code perspective, just call the service name each time and all of that is handled.
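Once the stack is up, that service-name DNS is easy to verify from inside a container. A quick sketch using the container names from the compose file above, assuming the web image includes the getent utility (most Debian-based images do):

```shell
# Start the stack in the background
sudo docker-compose up -d

# Resolve the 'mongo' service name from inside the web container;
# Docker's embedded DNS answers with the mongo container's current IP
sudo docker exec web4mongo getent hosts mongo
```

The same lookup fails from the host itself, since the host is not attached to the compose network.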
Adding another container:
However, say we want to add a container. Often you need to adjust the docker-compose config, stop and remove the containers, and restart them. If the containers are removed and don't have a mapped volume storing the data already created, then that data could be lost.
Less than ideal.
However, with Docker you can attach new containers to the existing network!
First find the network. Run 'docker network list'
NETWORK ID     NAME                       DRIVER    SCOPE
50711*******   bridge                     bridge    local
ca9cc*******   host                       host      local
118ef*******   lamp_default               bridge    local
1887b*******   local-debug_default        bridge    local
9299d*******   none                       null      local
06023*******   ournoteorganizer_default   bridge    local
186fc*******   phps3object_default        bridge    local
2209e*******   wordpress_default          bridge    local
Output like the above is what you may see, depending on how many Docker projects you are running or have set up. This gives us the network names, which we can use to attach a new container to an existing network.
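Before attaching anything, it can help to confirm the network's subnet and which containers are already on it. A short sketch, using the ournoteorganizer_default network from the listing above:

```shell
# Show the full details (subnet, gateway, attached containers)
sudo docker network inspect ournoteorganizer_default

# Or just list the names of the attached containers
sudo docker network inspect ournoteorganizer_default \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```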
For this example we are adding Mongo Express, a web-based graphical UI that makes it easier to view and interact with a MongoDB database. It's not necessary and should be stopped (docker stop mongoex) when not in use, but it is clearer, and often faster, than running through all the mongosh commands.
Here is what we run:
sudo docker run -d --name mongoex \
  -p 8081:8081 \
  --network ournoteorganizer_default \
  -e ME_CONFIG_MONGODB_URL=mongodb://root:mongopwd@mongo:27017/ \
  mongo-express
We create the new container using the run command, give it a name, define the port mapping between the host and the container (8081 is mongo-express's default), then assign it to the same network as the original Docker-Compose group of containers - placing it in the same subnet and DNS namespace. Then we set the necessary environment variables to match the existing ones shown in the previous docker-compose.yml file. Finally, we specify the Docker image as mongo-express.
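A quick check that the new container actually joined the compose network (using the mongoex name from the start/stop commands below):

```shell
# Print the networks the mongoex container is attached to;
# ournoteorganizer_default should appear in the output
sudo docker inspect mongoex \
  --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'
```

If a host port was published (e.g. -p 8081:8081, mongo-express's default), the UI is then reachable at http://localhost:8081.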
This worked a treat for me, since I had commented out Mongo Express in my published OurNoteOrganizer application due to security concerns. Now I have a Mongo Express container I can call up when I need it and turn off when I don't.
Start: sudo docker start mongoex
Stop: sudo docker stop mongoex
Docker Compose is amazingly powerful, but understanding how it works is important to take full advantage of it. Knowing that it is essentially an automated networking stack plus container creation makes it possible to work with it without having to destroy containers and possibly lose data.