Microservices Continuous Delivery with Docker and Jenkins
Source: 126kr.com
Docker, microservices, and Continuous Delivery are currently some of the most popular topics in the world of programming. In an environment consisting of dozens of microservices communicating with each other, automating the testing, building, and deployment process becomes particularly important. Docker is an excellent fit for microservices because it can create and run an isolated container for each service. Today, I'm going to show you how to create a basic Continuous Delivery pipeline for sample microservices using the most popular software automation tool: Jenkins.
Sample Microservices
Before I get into the main topic of the article, a few words about the structure and tools used to create the sample microservices. The sample application consists of two microservices communicating with each other (account and customer), a discovery server (Eureka), and an API gateway (Zuul). It was implemented using the Spring Boot and Spring Cloud frameworks, and its source code is available on GitHub. Spring Cloud supports microservices discovery and gateways out of the box – we only have to define the right dependencies in the Maven project configuration file (pom.xml).
The picture below illustrates the adopted architecture. The customer and account REST API services, the discovery server, and the gateway each run inside a separate Docker container. The gateway is the entry point to the microservices system: it interacts with all the other services, proxying requests to the selected microservice after looking up its address in the discovery service. If more than one instance of the account or customer microservice exists, requests are load balanced with the Ribbon and Feign clients. The account and customer services register themselves with the discovery server on startup. They can also interact with each other, for example when we want to find and return all of a customer's account details.
I won't go into the details of implementing those microservices with the Spring Boot and Spring Cloud frameworks. If you are interested in a detailed description of the sample application's development, you can read it in my blog post here. Generally, the Spring framework has full support for microservices with all the Netflix OSS tools like Ribbon, Hystrix, and Eureka. In that post I described how to implement service discovery, distributed tracing, load balancing, trace ID propagation in logs, and an API gateway for microservices with those solutions.
Dockerfiles
Each service in the sample source code has a Dockerfile with its Docker image build definition. It's really simple. Here's the Dockerfile for the account service. We use openjdk as the base image; the JAR file from target is added to the image and then run with the java -jar command. The service runs on port 2222, which is exposed outside the container.
```dockerfile
FROM openjdk
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/account-service.jar account-service.jar
ENTRYPOINT ["java", "-jar", "/account-service.jar"]
EXPOSE 2222
```
We also had to set the main class in the JAR manifest. We achieve this with the spring-boot-maven-plugin in the module's pom.xml; the fragment is visible below. We also set the build finalName to cut the version number off the target JAR file name. The Dockerfile and Maven build definition are quite similar for all the other microservices.
```xml
<build>
  <finalName>account-service</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>1.5.2.RELEASE</version>
      <configuration>
        <mainClass>pl.piomin.microservices.account.Application</mainClass>
        <addResources>true</addResources>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```
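As a quick illustration of what finalName buys us, here is a small sketch using a hypothetical, stripped-down pom.xml: the setting is what lets the Dockerfile reference a stable target/account-service.jar path instead of a name containing the version.

```shell
# Create a throwaway directory with a minimal pom.xml (illustrative only)
cd "$(mktemp -d)"
cat > pom.xml <<'EOF'
<project>
  <version>1.0-SNAPSHOT</version>
  <build>
    <finalName>account-service</finalName>
  </build>
</project>
EOF

# Without finalName the JAR would be named target/<artifactId>-<version>.jar;
# with it, the name stays fixed no matter which version is built:
VERSION=$(sed -n 's:.*<version>\(.*\)</version>.*:\1:p' pom.xml)
FINAL=$(sed -n 's:.*<finalName>\(.*\)</finalName>.*:\1:p' pom.xml)
echo "version=$VERSION jar=target/$FINAL.jar"
```

With the real project, mvn clean install produces target/account-service.jar, which is exactly the path the Dockerfile's ADD instruction picks up.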
Jenkins pipelines
We use the Pipeline Plugin for building continuous delivery pipelines for our microservices. In addition to the standard set of Jenkins plugins, we also need the Docker Pipeline Plugin by CloudBees. There are four pipelines defined, as you can see in the picture below.
Here's the pipeline definition, written in Groovy, for the discovery service. It has five stages. In the Checkout stage we pull changes from the project's remote Git repository. Then the project is built with the mvn clean install command, and the Maven version is read from pom.xml. In the Image stage we build a Docker image from the discovery service's Dockerfile and push that image to the local registry. In the fourth stage we run the built image with its default port exposed and a hostname visible to linked Docker containers. Finally, the account pipeline is started with the no-wait option, which means the current pipeline finishes without waiting for the account pipeline to complete.
```groovy
node {
    withMaven(maven: 'maven') {
        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git',
                credentialsId: 'github-piomin', branch: 'master'
        }
        stage('Build') {
            sh 'mvn clean install'
            def pom = readMavenPom file: 'pom.xml'
            print pom.version
            env.version = pom.version
        }
        stage('Image') {
            dir('discovery-service') {
                def app = docker.build "localhost:5000/discovery-service:${env.version}"
                app.push()
            }
        }
        stage('Run') {
            docker.image("localhost:5000/discovery-service:${env.version}")
                  .run('-p 8761:8761 -h discovery --name discovery')
        }
        stage('Final') {
            build job: 'account-service-pipeline', wait: false
        }
    }
}
```
The account pipeline is very similar. The main difference is in the fourth stage, where the account service container is linked to the discovery container. We need to link the containers because account-service registers itself with the discovery server and must be able to connect to it using its hostname.
```groovy
node {
    withMaven(maven: 'maven') {
        stage('Checkout') {
            git url: 'https://github.com/piomin/sample-spring-microservices.git',
                credentialsId: 'github-piomin', branch: 'master'
        }
        stage('Build') {
            sh 'mvn clean install'
            def pom = readMavenPom file: 'pom.xml'
            print pom.version
            env.version = pom.version
        }
        stage('Image') {
            dir('account-service') {
                def app = docker.build "localhost:5000/account-service:${env.version}"
                app.push()
            }
        }
        stage('Run') {
            docker.image("localhost:5000/account-service:${env.version}")
                  .run('-p 2222:2222 -h account --name account --link discovery')
        }
        stage('Final') {
            build job: 'customer-service-pipeline', wait: false
        }
    }
}
```
Similar pipelines are also defined for the customer and gateway services. They are available in the main directory of each microservice as a Jenkinsfile. Every image built during pipeline execution is also pushed to the local Docker registry. To enable a local registry on our host, we need to pull and run the Docker registry image, and use the registry address as an image name prefix when pulling or pushing. The local registry is exposed on its default port, 5000. You can see the list of images pushed to the local registry by calling its REST API, for example http://localhost:5000/v2/_catalog.
```shell
docker run -d --name registry -p 5000:5000 registry
```
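The registry-prefix convention can be seen in the image names used by the pipelines. A small sketch (registry host and version values taken from the article; docker push uses the prefix to decide which registry to talk to):

```shell
# The local registry address becomes a prefix of the image name.
REGISTRY=localhost:5000
SERVICE=discovery-service
VERSION=1.0-SNAPSHOT
IMAGE="$REGISTRY/$SERVICE:$VERSION"
echo "$IMAGE"

# With the registry container running, you could then tag, push and inspect:
#   docker tag "$SERVICE:$VERSION" "$IMAGE"
#   docker push "$IMAGE"
#   curl http://localhost:5000/v2/_catalog
```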
Testing
Start by launching a build of discovery-service-pipeline. This pipeline not only runs the build for the discovery service, but also triggers the next pipeline (account-service-pipeline) at the end. The same rule is configured for account-service-pipeline, which calls customer-service-pipeline, and for customer-service-pipeline, which calls gateway-service-pipeline. So, after all the pipelines finish, you can check the list of running Docker containers with the docker ps command. You should see five containers: the local registry and our four microservices. You can also check the logs of each container with docker logs, for example docker logs account. If everything works fine, you should be able to call a service like http://localhost:2222/accounts, or go via the Zuul gateway at http://localhost:8765/account/account.
```shell
CONTAINER ID  IMAGE                                          COMMAND                 CREATED            STATUS            PORTS                   NAMES
fa3b9e408bb4  localhost:5000/gateway-service:1.0-SNAPSHOT    "java -jar /gatewa..."  About an hour ago  Up About an hour  0.0.0.0:8765->8765/tcp  gateway
cc9e2b44fe44  localhost:5000/customer-service:1.0-SNAPSHOT   "java -jar /custom..."  About an hour ago  Up About an hour  0.0.0.0:3333->3333/tcp  customer
49657f4531de  localhost:5000/account-service:1.0-SNAPSHOT    "java -jar /accoun..."  About an hour ago  Up About an hour  0.0.0.0:2222->2222/tcp  account
fe07b8dfe96c  localhost:5000/discovery-service:1.0-SNAPSHOT  "java -jar /discov..."  About an hour ago  Up About an hour  0.0.0.0:8761->8761/tcp  discovery
f9a7691ddbba  registry
```
Conclusion
I have presented a basic sample of a Continuous Delivery environment for microservices using Docker and Jenkins. You can easily spot the limitations of the presented solution: for example, we had to link the Docker containers with each other to enable communication between them, and all the tools and microservices run on the same machine. For a more advanced setup we could use Jenkins slaves running on different machines or in Docker containers (more here), tools like Kubernetes for orchestration and clustering, and maybe Docker-in-Docker containers to simulate multiple Docker machines. I hope this article is a fine introduction to Continuous Delivery for microservices and helps you understand the basics of the idea. You can expect more advanced articles from me on this subject in the near future.
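One way to remove the container-linking limitation mentioned above is a user-defined bridge network, the usual replacement for the legacy --link flag: containers attached to the same network resolve each other by container name. A sketch, not part of the original setup (the docker commands need a running daemon, so they are shown commented out; names match the article's containers):

```shell
# Replace --link with a shared user-defined network:
#
#   docker network create microservices
#   docker run -d --name discovery --network microservices \
#       -p 8761:8761 localhost:5000/discovery-service:1.0-SNAPSHOT
#   docker run -d --name account --network microservices \
#       -p 2222:2222 localhost:5000/account-service:1.0-SNAPSHOT
#
# account-service could then reach Eureka at a name-based URL:
EUREKA_URL="http://discovery:8761/eureka/"
echo "$EUREKA_URL"
```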