Managing services
This tutorial is a continuation of a series on this topic. If you do not have prior experience with initializing a swarm, you can refer to the previous post.
A service is Docker's abstraction of a microservice. In this tutorial, we will learn how to manage services with Swarm.
Step 2: Managing Services
Service management commands must be executed on a swarm manager node. You can find the Docker documentation on this topic here. In addition to basic operations like creating and deleting services, Docker Swarm allows you to monitor (inspect) and scale your services as necessary.
Step 2.1: Create a Service
Before creating the service, we should clearly understand its requirements. Here is the set of requirements the deployed service should meet.
- It should be restarted if it crashes.
- It should start with 2 replicas, which means two independent containers should run the service, ideally on two nodes.
- Finally, the service should be exposed to external parties through port 80 of the external network interface, even though the application itself listens on port 8080.
After identifying the basic requirements of the service, we can deploy it with the following command. You can refer here for the full set of options, because real-life requirements are certainly more complicated than these.

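The original snippet is not reproduced here, but a minimal sketch that satisfies the requirements above could look like the following. The image name my-app:latest is a hypothetical placeholder for your application's image.

```
# Create a service called main-service that:
#  - restarts its containers when they fail
#  - starts with 2 replicas
#  - publishes container port 8080 on port 80 of the swarm's routing mesh
docker service create \
  --name main-service \
  --replicas 2 \
  --restart-condition on-failure \
  --publish published=80,target=8080 \
  my-app:latest
```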
The command spits out the id of the service, which in this case is lhulncit7xw51mmokpsvho2z1.
Step 2.2: Inspect a Service
After creating the service, we can inspect its status by looking at the logs. Docker Swarm allows us to observe the logs of all the instances of the service with the following command. (The -f option instructs Swarm to follow the logs in real time.)
Complete reference can be found here.

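A sketch of the logs command, using the service name from the example above:

```
# Stream the logs of every task in main-service and follow new output
docker service logs -f main-service
```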
If you would rather look at the configuration of the service, you can use the following command. Again, a full command reference can be found here.

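A sketch of the inspect command; the --pretty flag prints a human-readable summary instead of the raw JSON:

```
# Show the configuration of main-service in a readable format
docker service inspect --pretty main-service
```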
One important thing to notice here is the Endpoint mode. The value is VIP, which stands for Virtual IP. This means that Docker Swarm's routing mesh automatically assigns a load balancer to our service, operating on this VIP. Cool, right?
Step 2.3: Scale a Service
Now, the interesting part. We need to either scale up our service to meet the demands of our users or scale it down to minimize resource usage. Let's take a look at how we can achieve scaling with Swarm.
Scaling up or down is trivial. Let's scale up the service main-service to run 5 instances. Command reference can be found here.

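A sketch of the scale command:

```
# Scale main-service from 2 replicas to 5
docker service scale main-service=5
```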
A task is Docker's abstraction of the unit of scheduling. We can inspect how the service is deployed task-wise.

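The task-level view comes from docker service ps; a minimal sketch:

```
# List the tasks of main-service and the nodes they are scheduled on
docker service ps main-service
```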
We can see that the service is running in 5 different containers distributed across both nodes. Using the same approach, we can scale the service back down.
Step 2.4: Stop a Service
Stopping a service is easy if you know the id or the name of the service. If you don't know the id, let's find it using the ls command.

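A sketch of the listing command:

```
# List all services in the swarm with their ids, names, and replica counts
docker service ls
```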
We can identify the id of the service by looking at the service name. Running the following command will simply remove your service from the swarm.

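A sketch of the removal command; either the service name or the id shown by docker service ls works:

```
# Remove main-service; all of its tasks are stopped and cleaned up
docker service rm main-service
```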
Great! Now we know how to manage individual services with Docker Swarm. Let's move on to the most important part of this tutorial: managing a service stack.