Creating a DevOps container


To begin developing any DevOps use case we will need to deploy a common set of DevOps tools.

In this exercise, we shall set up a containerized DevOps environment using Docker, called "The DevOps Launchpad", which will include essential DevOps tools such as Git, Jenkins, Ansible, Terraform, Prometheus, and Grafana.

The DevOps world is large and diverse, and in practice you will probably need many more tools. Consider this exercise a template to which you can add or remove tools to fit your requirements.

The DevOps Launchpad - Prerequisites

In this example, we are going to use a Docker container to build and run our DevOps tools. We chose Docker because it is lightweight, easy to install, highly configurable, and portable. However, if you prefer, you can install these tools on a standalone VM instead.

Before diving into the setup, ensure you have the following prerequisites in place:

  1. A Linux VM running on AWS to host the Docker container. This VM should have:

    • A modern Linux distribution (such as Ubuntu 20.04 or RHEL 8)

    • Docker installed

    • Sufficient disk space, CPU, and memory resources to accommodate the containerized DevOps tools

  2. A Windows VM to act as a jump host from which you can access the Jenkins URL.

  3. Necessary ports opened on the AWS security group to allow connections to Jenkins, Prometheus, and Grafana:

    • Jenkins: 8080 (default port)

    • Prometheus: 9090 (default port)

    • Grafana: 3000 (default port)

Installing Docker on the Linux VM

Before creating the Docker container, make sure Docker is installed on the Linux VM.
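On Ubuntu, for example, one common approach is to install the distribution package (package names and steps differ on other distributions; consult the official Docker documentation for your platform):

```shell
# Install Docker from the Ubuntu repositories (Ubuntu-specific sketch)
sudo apt-get update
sudo apt-get install -y docker.io

# Start Docker now and on every boot
sudo systemctl enable --now docker

# Verify the installation
docker --version
```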

Creating the Dockerfile

Next, create a Dockerfile with the following contents:
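The original Dockerfile is not reproduced here, but a sketch along these lines would cover the tools listed above. The base image, package sources, tool versions, and the start.sh launcher script are all illustrative assumptions; pin versions and verify download URLs before using this:

```dockerfile
# Illustrative all-in-one image -- a sketch, not production advice
FROM jenkins/jenkins:lts

USER root

# Git, Ansible, Prometheus, and helper utilities from the Debian repositories
RUN apt-get update && \
    apt-get install -y git ansible wget unzip prometheus && \
    rm -rf /var/lib/apt/lists/*

# Terraform -- replace the version with the release you want
RUN wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip && \
    unzip terraform_1.5.7_linux_amd64.zip -d /usr/local/bin && \
    rm terraform_1.5.7_linux_amd64.zip

# Grafana from its official APT repository
RUN wget -q -O /usr/share/keyrings/grafana.key https://apt.grafana.com/gpg.key && \
    echo "deb [signed-by=/usr/share/keyrings/grafana.key] https://apt.grafana.com stable main" \
      > /etc/apt/sources.list.d/grafana.list && \
    apt-get update && apt-get install -y grafana && \
    rm -rf /var/lib/apt/lists/*

# Jenkins: 8080, Prometheus: 9090, Grafana: 3000
EXPOSE 8080 9090 3000

# Hypothetical launcher script that starts Prometheus, Grafana, and Jenkins
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]
```

Because all three services run in one container, something has to supervise them; the hypothetical start.sh stands in for that (a tool such as supervisord is a common choice).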

Building and Running the Docker Container

With the Dockerfile ready, build the image containing the DevOps Launchpad:
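Assuming the Dockerfile sits in the current directory, the build command looks like this (the image tag devops-launchpad is our choice):

```shell
docker build -t devops-launchpad .
```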

Now, run the all-in-one DevOps container:
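A run command consistent with the description below, publishing the three ports and persisting Jenkins data in a named volume:

```shell
docker run -d --name devops-launchpad \
  -p 8080:8080 \
  -p 9090:9090 \
  -p 3000:3000 \
  -v jenkins_home:/var/jenkins_home \
  devops-launchpad
```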

This command will create a Docker container named devops-launchpad, exposing the necessary ports for Jenkins, Prometheus, and Grafana, and storing Jenkins data in a Docker volume named jenkins_home.

With this setup, we now have a containerized DevOps environment with essential tools, ready for developing automation use cases and managing infrastructure.

Accessing the DevOps Tools

Once the Docker container is running, you can access the tools through their respective URLs:

  1. Jenkins: http://<Linux_VM_IP>:8080

  2. Prometheus: http://<Linux_VM_IP>:9090

  3. Grafana: http://<Linux_VM_IP>:3000

Remember to replace <Linux_VM_IP> with the actual IP address of your Linux VM running on AWS.

Next we shall see how to use these URLs to configure each application.

Configuring Jenkins

When you access Jenkins for the first time, you'll need to unlock it using an initial admin password. You can obtain this password from the Docker container logs:
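Assuming the container name used in this article, view the logs with:

```shell
docker logs devops-launchpad
```

Alternatively, Jenkins writes the same password to /var/jenkins_home/secrets/initialAdminPassword inside the container.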

Search for a message similar to this one:
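Jenkins typically prints a banner of roughly this form (the password shown here is a placeholder):

```
*************************************************************
Jenkins initial setup is required. An admin user has been
created and a password generated.

Please use the following password to proceed to installation:

<32-character hexadecimal password>

This may also be found at:
/var/jenkins_home/secrets/initialAdminPassword
*************************************************************
```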

Copy the initial admin password and paste it into the unlock page at the Jenkins URL. Then, follow the on-screen instructions to complete the setup, install suggested plugins, and create an admin user when prompted.

After creating an admin user, you can configure Jenkins further according to your specific needs. Below are some of the standard configurations:

  1. System configuration: Navigate to "Manage Jenkins" > "Configure System" to adjust global settings, such as the number of executors, system message, and environment variables.

  2. Security: Configure authentication and authorization settings by going to "Manage Jenkins" > "Configure Global Security". Here, you can enable security, select a security realm, and configure authorization strategies.

  3. Tool installations: Configure the tools you'll use in your build jobs by visiting "Manage Jenkins" > "Global Tool Configuration". You can set up JDK, Maven, Gradle, Git, and other tools with their respective installations and configurations.

  4. Plugins: Extend the functionality of Jenkins by installing additional plugins. To manage plugins, go to "Manage Jenkins" > "Manage Plugins". Here, you can install, update, and remove plugins.

Configuring Prometheus

To configure Prometheus, you'll need to edit the prometheus.yml configuration file, which is typically located in /etc/prometheus/. Inside the Docker container, you can use an editor like vim or nano to update this file.

First, exec into the running Docker container:
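Assuming the container name used earlier, open an interactive shell inside it:

```shell
docker exec -it devops-launchpad /bin/bash
```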

Next, edit the prometheus.yml file to add monitoring targets, such as applications or infrastructure components, and to configure alerting rules. Below is a sample prometheus.yml configuration file:
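The following sketch matches the three sections described below; the values in angle brackets are placeholders to be replaced with your own addresses and ports:

```yaml
global:
  scrape_interval: 15s        # how often targets are scraped by default
  evaluation_interval: 15s    # how often rules are evaluated

scrape_configs:
  # Prometheus scraping its own metrics
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # A Node Exporter instance (9100 is its default port)
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['<node_exporter_IP>:9100']

  # A custom application with its own metrics path and scrape interval
  - job_name: 'custom_app'
    metrics_path: /custom-metrics
    scrape_interval: 30s
    static_configs:
      - targets: ['<custom_app_IP>:<custom_app_port>']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['<alertmanager_IP>:<alertmanager_port>']
```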

As we can see in the example above, there are three main sections in this file.

First, the global section sets the default scrape interval and evaluation interval to 15 seconds.

Second, the scrape_configs section includes three jobs:

  1. The Prometheus job scrapes metrics from the Prometheus server itself.

  2. The node_exporter job scrapes metrics from a Node Exporter instance. Replace <node_exporter_IP> with the actual IP address of the Node Exporter host.

  3. The custom_app job scrapes metrics from a custom application. Replace <custom_app_IP> and <custom_app_port> with the actual IP address and port of your custom application. In this job, the metrics_path and scrape_interval settings are customized, which override the defaults.

And finally, the alerting section configures the Alertmanager instance by specifying the target IP address and port. Replace <alertmanager_IP> and <alertmanager_port> with the actual IP address and port of your Alertmanager instance.

After editing the configuration, Prometheus must reload it. You can restart the Prometheus process (from within the container, of course) or signal it to reload.
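As a sketch: Prometheus re-reads its configuration file on SIGHUP, so from a shell inside the container you can signal the running process (this assumes pgrep is available and the process is named prometheus):

```shell
# Ask the running Prometheus process to reload prometheus.yml
kill -HUP "$(pgrep prometheus)"
```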

Now you can exit the Docker container.

Configuring Grafana

Grafana is a powerful, flexible, and customizable data visualization and monitoring tool. After installing Grafana and setting up data sources, you can create and manage dashboards to visualize your data:

  1. Dashboard settings: Configure general dashboard settings, such as the dashboard title, description, and time zone.

  2. Panels: Add, edit, and arrange panels to display data in various visualization formats, such as line charts, bar graphs, and gauges. You can customize panel settings, such as axes, legend, and colors.

  3. Queries: Use PromQL (for Prometheus data source) or other query languages (depending on your data source) to fetch the desired metric data.

  4. Variables: Create dynamic dashboards by using variables that allow users to change the data displayed on the dashboard. Variables can be used in panel titles, queries, and other dashboard settings.

  5. Alerts: Configure alert rules for your panels to trigger notifications when specific conditions are met. You can set up alert conditions, evaluate time intervals, and configure notification channels, such as email, Slack, or PagerDuty.

  6. Annotations: Add annotations to your dashboards to provide additional context for your data. Annotations can be based on events, such as deployments or outages, or on specific metric values.

To configure Grafana, open the Grafana web interface at http://<Linux_VM_IP>:3000 and log in using the default credentials (username: admin, password: admin). You'll be prompted to change the admin password upon your first login.

To start visualizing data from Prometheus, follow these steps:

  1. Add Prometheus as a data source:

    • Click on the gear icon in the left sidebar and select "Data Sources."

    • Click "Add data source" and select "Prometheus."

    • Enter the Prometheus URL (http://localhost:9090, since Prometheus runs in the same container) and click "Save & Test."

  2. Create dashboards to visualize metrics:

    • Click on the "+" icon in the left sidebar and select "Dashboard."

    • Click "Add new panel" and choose a visualization type.

    • In the "Query" tab, select the "Prometheus" data source and enter a PromQL query to fetch the desired metric data.

    • Customize the panel settings, such as axes, legend, and colors, then click "Apply."

    • Arrange and resize panels as needed, and save the dashboard.

You can also import pre-built Grafana dashboards for specific use cases by visiting the Grafana Dashboards page.


Configuring Git

Git is a distributed version control system, and its configuration is usually handled on a per-user or per-repository basis. To set up global configurations for the Git user, run the following commands:
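The standard commands for setting your global Git identity (substitute your own name and email address):

```shell
# Set the name and email recorded in your commits, for all repositories
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```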

These settings will apply to all repositories that you interact with on your system. You can also configure settings for a specific repository by running the same commands without the --global flag while inside the repository directory.


Configuring Ansible

As part of this DevOps Launchpad setup, we have also installed Ansible.

Ansible is an open-source automation tool for configuration management, application deployment, and task automation. The primary configuration file for Ansible is the ansible.cfg file, which is typically located in /etc/ansible/. To configure Ansible, you can edit this file and adjust settings as needed. Some common settings include:

  • inventory: The path to your inventory file, which lists the hosts you want to manage with Ansible.

  • remote_user: The default remote user for connecting to managed hosts.

  • ask_pass: Whether to prompt for a password when connecting to managed hosts.

  • ask_sudo_pass: Whether to prompt for a sudo password when executing tasks with elevated privileges.
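A sketch of an ansible.cfg using the settings above; the inventory path and remote user are illustrative, and note that ask_sudo_pass is a legacy option superseded by become_ask_pass in recent Ansible releases:

```ini
[defaults]
; Path to the inventory file listing managed hosts
inventory     = /etc/ansible/hosts
; Default remote user for SSH connections
remote_user   = devops
; Prompt for an SSH password when connecting
ask_pass      = False
; Legacy option; newer Ansible uses become_ask_pass instead
ask_sudo_pass = False
```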

Additionally, you can create Ansible playbooks to define automation tasks for your infrastructure. Playbooks are written in YAML and specify a series of tasks to be executed on specified hosts.
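As an illustration, a minimal playbook might look like this (the webservers host group and the nginx package are assumptions for the example):

```yaml
# Illustrative playbook: install and start nginx on the "webservers" group
- name: Ensure nginx is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```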


Configuring Terraform

HashiCorp's Terraform is also deployed as part of the DevOps Launchpad.

Terraform is an infrastructure-as-code tool for managing and provisioning infrastructure across various cloud providers. To configure Terraform, you create a configuration file (usually with a .tf extension) that describes the desired infrastructure resources.

A Terraform configuration file typically consists of:

  • Provider: The cloud provider that Terraform will use to provision resources. Examples include AWS, Azure, and Google Cloud Platform.

  • Resource blocks: Define individual infrastructure components, such as virtual machines, network interfaces, and storage volumes.

  • Variables: Store values that can be passed to your Terraform configuration, allowing for greater flexibility and reusability.

  • Outputs: Define values that should be returned after Terraform has finished provisioning resources.
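A minimal sketch covering all four elements; the region, AMI ID, and instance type are placeholders, not recommendations:

```hcl
# Provider: which cloud Terraform talks to
provider "aws" {
  region = var.region
}

# Variable: a value passed into the configuration
variable "region" {
  type    = string
  default = "us-east-1"
}

# Resource block: an individual infrastructure component
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # replace with a real AMI ID
  instance_type = "t3.micro"
}

# Output: a value returned after provisioning
output "web_public_ip" {
  value = aws_instance.web.public_ip
}
```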

You can organize Terraform configurations into modules to create reusable components and simplify your infrastructure management.


You now have a containerized DevOps environment with essential tools, including Git, Jenkins, Ansible, Terraform, Prometheus, and Grafana. This setup allows you to develop automation use cases, manage infrastructure, and monitor applications effectively. However, this is only a high-level overview; we shall explore each of these tools in detail in upcoming articles.

Now that you have this foundation in place, you can focus on delivering high-quality software and infrastructure use cases, improving collaboration, and streamlining your DevOps processes.
