We now want to focus on the machines on which Jenkins executes jobs. So far, all jobs have been executed on the Jenkins server itself. But this is far from optimal. Why?
Instead, let's introduce a distributed build architecture by delegating job execution to other machine(s), called Jenkins agents.
Jenkins agents (formerly known as Jenkins slaves) are a fundamental part of the Jenkins automation system. They serve as the execution environment for Jenkins builds and tasks.
(Image by https://foxutech.com/author/motoskia/)
The first step is to run jobs within Docker containers. The containers will still run on the Jenkins server itself; later on, we will integrate other agents and execute the containers on them.
🧐 Question: Considering the factors mentioned above (performance, build environments, and isolation), which of them are achieved by running jobs in containers?
Let's create a Docker image that will be used as a build agent for our existing pipelines: one image for all pipelines. The image will be based on the `jenkins/agent` image, which is suitable for running Jenkins jobs (it has Java and the other executables Jenkins uses preinstalled).
What else do we need in this image to run our pipelines? We need the `aws` CLI, `docker`, `snyk`, `python`, and maybe more… a very rich and colorful Docker image!
We are going to utilize Docker multistage builds for that.
Take a look at the following Dockerfile:
```dockerfile
FROM ubuntu:latest as installer

# curl and unzip are not preinstalled in the ubuntu image
RUN apt-get update \
    && apt-get install -y curl unzip

RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install --bin-dir /aws-cli-bin/

# this is an example demonstrating how to install a tool in one Docker image, then copy its artifacts to another image
RUN mkdir /snyk && cd /snyk \
    && curl https://static.snyk.io/cli/v1.666.0/snyk-linux -o snyk \
    && chmod +x ./snyk

FROM jenkins/agent

# Copy the `docker` binary (client only!!!) from the official `docker` image into this image.
COPY --from=docker /usr/local/bin/docker /usr/local/bin/
COPY --from=installer /usr/local/aws-cli/ /usr/local/aws-cli/
COPY --from=installer /aws-cli-bin/ /usr/local/bin/
COPY --from=installer /snyk/ /usr/local/bin/
```
The Dockerfile starts with `ubuntu:latest` as an `installer` stage, in which we install the `aws` CLI and `snyk`.
After the `installer` stage is built, we copy the relevant artifacts into the main image, the one starting with `FROM jenkins/agent`.
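Once the image builds, push it to a registry Jenkins can pull from; the resulting URL is what the pipelines below reference. A sketch (registry, repository, and tag are placeholders):

```
docker build -t <registry>/<user>/jenkins-agent:latest .
docker push <registry>/<user>/jenkins-agent:latest
```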
In `build.Jenkinsfile`, replace `agent any` by:

```groovy
agent {
    docker {
        image '<image-url>'
        args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
    }
}
```
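In context, a minimal declarative pipeline using this agent block might look like the following sketch (the stage and its step are illustrative, not part of your actual pipeline):

```groovy
pipeline {
    agent {
        docker {
            image '<image-url>'  // the agent image you built and pushed
            args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Check tools') {
            steps {
                // runs inside the agent container, so all preinstalled tools are available
                sh 'aws --version && snyk --version'
            }
        }
    }
}
```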
This directive tells Jenkins to run the pipeline in a Docker container.
Now pay attention!!! Since we build a Docker image as part of the build pipeline (`RobertaBuild`), we face the problem of building a Docker image from within a Docker container, also known as DinD (Docker in Docker).
To solve this, we want to use the Docker client within the agent container to send the `docker build` command to the daemon that resides outside the container, on the Jenkins server. This way we bypass the Docker-in-Docker problem, as the image is actually built outside the container; only the build command is sent from within the container. How do we do that?
- `-v` mounts the socket file that the Docker client uses to talk to the Docker daemon. This way, the Docker client within the container talks to the Docker daemon operating outside the container, on the Jenkins machine.
- `--user root` runs the container as the `root` user, which is necessary to access `/var/run/docker.sock`.

Source reference: https://www.jenkins.io/doc/book/managing/nodes/
The Jenkins controller is the Jenkins service itself, running where Jenkins is installed. It is also a web server that acts as a "brain" for deciding how, when, and where to run tasks.
Nodes are the “machines” on which build agents run. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync, and response time. A node is taken offline if any of these values go outside the configured threshold. Jenkins supports two types of nodes:
- Agents, which run a small (`jar`) Java client process that connects to a Jenkins controller.
- The built-in node, which exists within the Jenkins controller itself.

An executor is a slot for the execution of tasks. Effectively, it is a thread in the agent. The number of executors on a node defines the number of concurrent tasks that can run.
You can either choose to install a Jenkins agent on Windows or macOS, or deploy an agent on an EC2 instance (below).
Let's create an EC2 instance and connect it to your Jenkins controller as an agent.
- Create a `*.micro` EC2 instance in the same VPC as your Jenkins server. Your instance has to have Java 11 and Docker installed. Make sure you have enough disk space to execute your pipelines. It's recommended to create an AMI from this instance for later usage.
- In Jenkins, create a new node named `EC2 agent 1`.
- Set the remote root directory to a path the `ubuntu` user has access to, e.g. `~/jenkins` or any other path.
- Set the node's label to `general`. The label will later be used to assign jobs specifically to an agent having this label.
- Click Save to save the configuration.
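After saving, the new node's page in Jenkins shows the exact command for launching the agent on the EC2 instance. It typically looks like the following sketch (server URL and secret are placeholders taken from your own node page):

```
curl -sO http://<jenkins-server>:8080/jnlpJars/agent.jar
java -jar agent.jar -url http://<jenkins-server>:8080/ \
     -secret <secret> -name "EC2 agent 1" -workDir ~/jenkins
```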
Configure the `build.Jenkinsfile` pipeline to be executed on agents labeled according to the label you chose. For example:

```groovy
agent {
    docker {
        label 'general'
        image '<image-url>'
        args '--user root -v /var/run/docker.sock:/var/run/docker.sock'
    }
}
```
Trigger the pipeline and make sure it's running on the agent machine.
Complete the above work to configure the `build.Jenkinsfile`, `deploy.Jenkinsfile`, and `pr-testing.Jenkinsfile` pipelines to run on the Jenkins agent, in a Docker container.
Notes:
- Since `pr-testing.Jenkinsfile` requires Python as part of the pipeline execution, you have to rebuild the agent Docker image with Python in it. You are highly encouraged to use another "installer" image to create a Python virtual environment (`venv`) within the image, and then copy the created `venv` into the `jenkins/agent` image.

The Jenkins EC2 plugin allows Jenkins to start agents on EC2 on demand and kill them as they become unused. It starts instances using the EC2 API and automatically connects them as Jenkins agents. When the load goes down, excess EC2 instances are terminated.
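The multistage `venv` approach suggested for the Python requirement can be sketched as follows. The stage name, package list, and Debian release are assumptions; a copied `venv` only carries site-packages, so the final image still needs a compatible interpreter, and the interpreter versions of both stages must match (check the base of the `jenkins/agent` tag you use):

```dockerfile
# Hypothetical "installer" stage: build the venv on the same Debian release
# as the jenkins/agent base, so the venv's interpreter symlinks resolve later.
FROM debian:bookworm-slim as python-installer
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-venv python3-pip
RUN python3 -m venv /opt/venv \
    && /opt/venv/bin/pip install --no-cache-dir pytest  # example package

FROM jenkins/agent
USER root
# the venv does not carry the interpreter or stdlib, so install python3 here too
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
COPY --from=python-installer /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:${PATH}"
USER jenkins
```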
Install the Amazon EC2 Jenkins plugin.

In this exercise, you will set up Jenkins on a Kubernetes cluster using Helm and configure Jenkins agents as pods to dynamically provision build environments.
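As a starting point, installing Jenkins with the official Helm chart can be sketched as follows (the release and namespace names are assumptions; this requires a running cluster and `kubectl` context):

```
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins --namespace jenkins --create-namespace
```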