
Build a Continuous Delivery pipeline - Part I


Tags: aws, continuous delivery, devops, cloudformation, infrastructure


The DevOps methodology has grown in importance with the advent of the agile movement and new technology stacks, which demand more focus on configuration management, provisioning of agile infrastructures and continuous deployment of applications. This extension of agile methodologies brought software and systems engineers closer together. However, some companies, mainly startups, decided to hand developers the responsibilities of a systems engineer to avoid hiring dedicated people for the role, in an attempt to reduce costs. Unsurprisingly, this bold move came with consequences.

Despite all the benefits of multi-skilled individuals, each discipline requires years of training (people like Leonardo da Vinci are rare), and the added responsibility took time away from what developers were trained to do: writing code. To remove that burden, products like Elastic Beanstalk, Heroku and Google App Engine became available. They work fine for simple applications, but they don’t provide the flexibility that more complex systems demand.

AWS, one of the most mature cloud providers to date, has built useful tools that solve some of the most time-consuming tasks of an automation engineer, shifting the focus of those overloaded developers back to writing code.

This post is the first in a series of three that explains how to build an infrastructure on AWS with multiple environments, a Continuous Delivery (CD) pipeline and zero-downtime deployments. The main objective is to build it in an easy and modular way that reduces costs and shortens the time needed to bootstrap application and delivery infrastructures.

Objectives:

  • Reduce costs and time of bringing infrastructures up
  • Modular infrastructure to maximize re-use
  • Horizontal scaling & fault tolerant configuration
  • CD pipeline with no downtime deployments
  • Ability to roll back builds

Tools:

  • AWS for cloud services
  • Jenkins for orchestrating the delivery pipeline
  • CloudFormation to provision environments and network configuration
  • CodeDeploy for blue/green deployments
  • Chef-solo for managing application configuration on each instance
  • Vert.x to implement a Java Hello World application
  • Maven to compile and package the application

Prerequisites:

  • Basic knowledge of Chef cookbooks
  • Familiarity with AWS
  • An AWS account
  • An AWS access key configured for the AWS CLI
  • A key pair for logging in to EC2 instances

Part I - Design the application infrastructure

To design the application infrastructure, we’re going to use CloudFormation, which allows systems administrators to manage AWS resources declaratively. It helps to provision and manage those resources predictably by using text-based templates written in JSON or YAML. One advantage of declaring the infrastructure this way is the ability to version-control changes and optionally automate the provisioning of resources, managing them as we typically manage code. Automating infrastructure changes is out of scope for this article, but if you’re interested in doing so, please read this blog post from AWS.

The easiest way to design a CloudFormation template is to use the CloudFormation Designer. This tool is best used to define the skeleton of the AWS resources in the infrastructure, but it doesn’t eliminate the need to know the configuration details and to edit the template file manually. However, I found the documentation easy to read and the learning curve relatively gentle if you’re already familiar with AWS.

To maximize re-use, let’s design the application template to be as modular as possible by using CloudFormation’s nested stacks capability and by defining input parameters, which will be used for inter-stack communication at creation time.
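
To illustrate how nested stacks and input parameters work together, a parent template could instantiate this application template as an AWS::CloudFormation::Stack resource and pass the parameters down to it. The snippet below is only a sketch: the TemplateURL, the parameter values and the Vpc/InternetGateway resource names are placeholders belonging to a hypothetical parent template, and most parameters are omitted.

"QaApplicationStack": {
  "Type": "AWS::CloudFormation::Stack",
  "Properties": {
    "TemplateURL": "https://s3.amazonaws.com/your-bucket/simple-http-app.template",
    "Parameters": {
      "ApplicationName": "simple-http-app",
      "Environment": "qa",
      "Region": {
        "Ref": "AWS::Region"
      },
      "VpcId": {
        "Ref": "Vpc"
      },
      "InternetGatewayId": {
        "Ref": "InternetGateway"
      }
    }
  }
}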

The following image depicts the application infrastructure, designed with CloudFormation Designer.

Application infrastructure

The first thing we’re going to define are the template input parameters, in a way that avoids environment-specific configuration. Writing the template as generically as possible lets us replicate stacks and, for example, create multiple environments from a single template. Our template will be used to create the QA and Production environments inside a single VPC. A dedicated VPC per environment would be the ideal choice, since it would separate the environments logically and satisfy the production environment’s stricter security requirements, and the template could be changed to fit that scenario. However, for simplicity and to keep the budget as low as possible, we’re going to place all the resources inside the same VPC.
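
To make this concrete, here is a rough sketch of how the same template could be launched twice from the AWS CLI, once per environment, by changing only the parameter values. The stack names and CIDR blocks below are illustrative, and most parameters are omitted:

$ aws cloudformation create-stack --stack-name simple-http-app-qa \
    --template-body file:///Users/you/aws/cf_templates/simple-http-app.template \
    --parameters ParameterKey=Environment,ParameterValue=qa \
                 ParameterKey=PrivateSubnet1CidrBlock,ParameterValue=10.0.1.0/24

$ aws cloudformation create-stack --stack-name simple-http-app-production \
    --template-body file:///Users/you/aws/cf_templates/simple-http-app.template \
    --parameters ParameterKey=Environment,ParameterValue=production \
                 ParameterKey=PrivateSubnet1CidrBlock,ParameterValue=10.0.3.0/24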

Stack input parameters

The template input parameters are defined in the Parameters block, which should be placed at the top of the template for better readability. Here’s an example of the ApplicationName parameter:

"Parameters": {
  "ApplicationName": {
    "Description": "The name of the Application",
    "Type": "String"
  }
}

The following list defines all the parameters needed to make this template generic enough to be reusable (a sketch of a few of these declarations follows the list):

  • ApplicationName - The name of the application, which will be helpful to tag the AWS resources and later for deployment.
  • Environment - The name of the environment. This is also helpful to tag resources and later for deployment.
  • Region - The name of the AWS region where the stack will be instantiated.
  • VpcId - The ID of the VPC to attach the resources to.
  • InternetGatewayId - Internet Gateway ID for the load balancers to be accessible from the Internet.
  • VpcCidrBlock - VPC CIDR block. This seems redundant but it’s helpful for routing and security configuration.
  • PrivateSubnet1CidrBlock - CIDR block for private subnet in Availability Zone 1.
  • PrivateSubnet2CidrBlock - CIDR block for private subnet in Availability Zone 2.
  • LoadBalancerSubnet1CidrBlock - CIDR block for load balancer subnet in Availability Zone 1.
  • LoadBalancerSubnet2CidrBlock - CIDR block for load balancer subnet in Availability Zone 2.
  • HealthCheckTarget - Health check target for the load balancer to run against the nodes.
  • ApplicationPort - The HTTP port where the application is listening.
  • KeyName - The key-pair name used to log in to EC2 instances.
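
As referenced above, here is a sketch of how a few of these parameters might be declared with explicit types, defaults and constraints. The default value and the allowed values are illustrative, not prescriptive:

"Parameters": {
  "Environment": {
    "Description": "The name of the environment",
    "Type": "String",
    "AllowedValues": ["qa", "production"]
  },
  "ApplicationPort": {
    "Description": "The HTTP port where the application is listening",
    "Type": "Number",
    "Default": "8080"
  },
  "KeyName": {
    "Description": "Key-pair name used to log in to EC2 instances",
    "Type": "AWS::EC2::KeyPair::KeyName"
  }
}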

Private networks configuration

To achieve a fault-tolerant configuration, we will configure two private networks in the same region but in different Availability Zones (AZs), which gives us another layer of logical separation since a subnetwork doesn’t span AZs. This configuration can’t keep the application running after a catastrophe in the whole deployment region, but two AZs guarantee a minimum level of fault tolerance: if one datacenter goes down, the instances in the other AZ are kept in service by the Load Balancer (LB). This protects against improbable but very real situations, like the cleaning person accidentally unplugging the rack power to plug in a vacuum cleaner and sweep the datacenter room.

There are three properties we need to configure in each private subnetwork: the VPC ID, the CIDR block and the AZ. Since the VPC ID and the subnet CIDR block are both template input parameters, we reference them inside each AWS::EC2::Subnet resource for CloudFormation to resolve at runtime. Note that each subnet CIDR block must fall within the VPC CIDR block. The AZ name depends on the region, so we need to construct it from the input region name. One way of doing this is with CloudFormation’s built-in Fn::Join intrinsic function. For example, for the region us-west-2, the AZ below would be constructed as us-west-2a.

The following code block shows the private subnet 1 configuration:

"PrivateSubnet1": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "VpcId": {
      "Ref": "VpcId"
    },
    "CidrBlock": {
      "Ref": "PrivateSubnet1CidrBlock"
    },
    "AvailabilityZone": {
      "Fn::Join": [
        "-",
        [
          {
            "Ref": "Region"
          },
          "a"
        ]
      ]
    }
  }
}
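
The second private subnet (PrivateSubnet2, referenced later by the Auto Scaling Group) is not shown in the listings, but it mirrors the first one, using the PrivateSubnet2CidrBlock parameter and the second AZ. A sketch of what it would look like:

"PrivateSubnet2": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "VpcId": {
      "Ref": "VpcId"
    },
    "CidrBlock": {
      "Ref": "PrivateSubnet2CidrBlock"
    },
    "AvailabilityZone": {
      "Fn::Join": [
        "",
        [
          {
            "Ref": "Region"
          },
          "b"
        ]
      ]
    }
  }
}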

Private subnetworks routes

For our private networks we also need to configure the routing table. We’re just going to define a single route for the application subnetworks to reach the Internet, using the AWS::EC2::Route resource type. For now, the VPC’s Internet Gateway ID is used as the route target, as a placeholder for the NAT instance we’ll configure in Part II. The EC2 instances inside these two private networks won’t be exposed directly to the Internet; configuring a NAT to translate outbound requests is how we’re going to secure our networks. Otherwise, we would have to attach an Internet Gateway and assign an Elastic IP (EIP) to each instance, exposing them to the outside world.

The following snippet refers to the private subnet’s route to the Internet. The internet gateway ID destination is just a placeholder for now:

"InternetRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "RouteTableId": {
      "Ref": "PrivateRouteTable"
    },
    "GatewayId": {
      "Ref": "InternetGatewayId"
    },
    "DestinationCidrBlock": "0.0.0.0/0"
  }
}
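
The PrivateRouteTable referenced above is a plain AWS::EC2::RouteTable attached to the VPC, and each private subnet needs an AWS::EC2::SubnetRouteTableAssociation to use it (the load balancer subnets get an analogous route table later on). These resources are omitted from the listings for brevity; the resource names in the sketch below are illustrative:

"PrivateRouteTable": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {
    "VpcId": {
      "Ref": "VpcId"
    }
  }
},
"PrivateSubnet1RouteTableAssociation": {
  "Type": "AWS::EC2::SubnetRouteTableAssociation",
  "Properties": {
    "RouteTableId": {
      "Ref": "PrivateRouteTable"
    },
    "SubnetId": {
      "Ref": "PrivateSubnet1"
    }
  }
}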

Load balancer

Now, it’s time to configure the LB properties. By default, the application LB forwards outside requests to the application nodes using a round-robin algorithm. This design helps achieve two of the requirements we defined for our application stack: fault tolerance and horizontal scalability. With the setup we designed, the LB balances requests between the healthy instances across the two AZs. If the application needs to be scaled out, the Auto Scaling Group (ASG) will instantiate new nodes and place them inside those AZs, distributing the load across more instances and reducing the pressure on the original set.

The LB is responsible for keeping healthy nodes in service. If a node becomes unhealthy, the LB removes it from rotation automatically. This behaviour must be configured in the LB properties, and every instance must expose a health check endpoint that the LB can reach.

The following code shows the LB’s HealthCheck property pointing to the HealthCheckTarget input parameter. An example of a HealthCheckTarget value is HTTP:8080/health, which means the instances must expose an HTTP endpoint on port 8080 under the /health path; a healthy node responds with HTTP 200 OK. The LB connection listeners must also be configured with the front-end (LB) port and the back-end (instance) port. For more information please check the user guide.

"ApplicationLoadBalancer": {
  "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
  "Properties": {
    "CrossZone": true,
    "Listeners": [
      {
        "LoadBalancerPort": "80",
        "InstancePort": {
          "Ref": "ApplicationPort"
        },
        "Protocol": "HTTP"
      }
    ],
    "HealthCheck": {
      "Target": {
        "Ref": "HealthCheckTarget"
      },
      "HealthyThreshold": "3",
      "UnhealthyThreshold": "5",
      "Interval": "30",
      "Timeout": "5"
    },
(...)
  }
}

Load balancer subnetwork configuration

With the LB properly configured, we now need to set up the networking that exposes it to the Internet so it can handle outside requests. For that, public subnetworks for the LB nodes need to be configured: one per AZ, in the same AZs used for the application’s private networks. The VPC ID, a CIDR block and an AZ are passed as references, as the following block shows:

"LoadBalancerSubnet1": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "VpcId": {
      "Ref": "VpcId"
    },
    "CidrBlock": {
      "Ref": "LoadBalancerSubnet1CidrBlock"
    },
    "AvailabilityZone": {
      "Fn::Join": [
        "-",
        [
          {
            "Ref": "Region"
          },
          "a"
        ]
      ]
    }
  }
}
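
These public subnets still need to be attached to the LB itself, in the part of the ApplicationLoadBalancer resource elided earlier with (...). Assuming a LoadBalancerSubnet2 resource that mirrors the one above, and using the HttpIngressSecurityGroup defined later in this post, the missing properties would look roughly like this:

"Subnets": [
  {
    "Ref": "LoadBalancerSubnet1"
  },
  {
    "Ref": "LoadBalancerSubnet2"
  }
],
"SecurityGroups": [
  {
    "Ref": "HttpIngressSecurityGroup"
  }
]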

Load balancer route table

A single route will be configured in the LB route table for accessing the Internet. In this case, the Internet Gateway (IGW) is the target of the route and is responsible for translating between the LB’s public and private addresses.

Here’s the route configuration:

"LoadBalancerInternetRoute": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "RouteTableId": {
      "Ref": "LoadBalancerRouteTable"
    },
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": {
      "Ref": "InternetGatewayId"
    }
  }
}

Auto scaling group

An Auto Scaling Group (ASG) is a group of EC2 instances that share the same characteristics, aimed at automatic scaling and management. The stack we’re building will just have a simple ASG that keeps a fixed number of healthy instances up and running. If you want to configure an ASG with more complex policies, based for example on a CPU percentage threshold or memory usage, please check the quick reference (a rough sketch of such a policy appears after the ASG walkthrough below). For demonstration purposes, we’re configuring the ASG with a single instance using the AWS::AutoScaling::AutoScalingGroup resource type, as follows:

"ApplicationAutoScalingGroup": {
  "Type": "AWS::AutoScaling::AutoScalingGroup",
  "Properties": {
    "AvailabilityZones": [
      {
        "Fn::Join": [
          "-",
          [
            {
              "Ref": "Region"
            },
            "a"
          ]
        ]
      },
      {
        "Fn::Join": [
          "-",
          [
            {
              "Ref": "Region"
            },
            "b"
          ]
        ]
      }
    ],
    "MinSize": "1",
    "MaxSize": "1",
    "VPCZoneIdentifier": [
      {
        "Ref": "PrivateSubnet2"
      },
      {
        "Ref": "PrivateSubnet1"
      }
    ],
    "LaunchConfigurationName": {
      "Ref": "ApplicationLaunchConfiguration"
    },
    "LoadBalancerNames": [
      {
        "Ref": "ApplicationLoadBalancer"
      }
    ]
  }
}

Note the AvailabilityZones and VPCZoneIdentifier configurations, which define where the application nodes will be instantiated. The subnetworks listed in VPCZoneIdentifier must reside in the configured AZs, and LoadBalancerNames references the LB configured earlier (ApplicationLoadBalancer).
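
As mentioned above, scaling policies are out of scope for this stack (MinSize and MaxSize would also have to allow more than one instance), but for reference a simple CPU-based scale-up rule might look roughly like the sketch below. The resource names (ScaleUpPolicy, CpuHighAlarm) and the thresholds are illustrative only:

"ScaleUpPolicy": {
  "Type": "AWS::AutoScaling::ScalingPolicy",
  "Properties": {
    "AdjustmentType": "ChangeInCapacity",
    "AutoScalingGroupName": {
      "Ref": "ApplicationAutoScalingGroup"
    },
    "ScalingAdjustment": "1",
    "Cooldown": "300"
  }
},
"CpuHighAlarm": {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "AlarmDescription": "Scale up when average CPU is above 70% for 10 minutes",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": "300",
    "EvaluationPeriods": "2",
    "Threshold": "70",
    "ComparisonOperator": "GreaterThanThreshold",
    "Dimensions": [
      {
        "Name": "AutoScalingGroupName",
        "Value": {
          "Ref": "ApplicationAutoScalingGroup"
        }
      }
    ],
    "AlarmActions": [
      {
        "Ref": "ScaleUpPolicy"
      }
    ]
  }
}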

LaunchConfigurationName points to a Launch Configuration (LC), which the ASG uses to launch EC2 instances. A simple LC defines an instance type, an Amazon Machine Image (AMI) ID, a key pair for logging in to the instances and a Security Group (SG), which we’ll cover in the next section. You can review the LC properties in the following resource block:

"ApplicationLaunchConfiguration": {
  "Type": "AWS::AutoScaling::LaunchConfiguration",
  "Properties": {
    "InstanceType": "t2.micro",
    "ImageId": "ami-f173cc91",
    "KeyName": { "Ref": "KeyName" },
    "SecurityGroups": [
      {
        "Ref": "PrivateSubnetSecurityGroup"
      }
    ]
  }
}

The AMI ID “ami-f173cc91” is just a stock Amazon Linux AMI, but you can use others, such as AMIs you have baked yourself.
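
AMI IDs differ per region and get deprecated over time, so it’s worth looking up a current one instead of hard-coding the ID above. One possible way to do that with the AWS CLI (the name filter is just an example for the Amazon Linux AMI family) is:

$ aws ec2 describe-images --owners amazon \
    --filters "Name=name,Values=amzn-ami-hvm-*-x86_64-gp2" "Name=state,Values=available" \
    --query "Images[*].[CreationDate,ImageId,Name]" --output text | sort | tail -n 1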

Security groups

To control traffic to the LB and the application instances, we’re going to configure two Security Groups (SGs), attached to the VPC given by the VpcId input parameter. We could also use Network Access Control Lists (ACLs) in conjunction with SGs, or even as an alternative, but SGs give us protection at the instance level and are a good fit for defining security in an application-centric way.

The table below summarizes the differences between SGs and ACLs, but you can read more details in the VPC security user guide.

Security Group                               | Access Control List
Return traffic is always allowed (stateful)  | Return traffic must be explicitly allowed (stateless)
Applied at the instance level                | Applied at the subnetwork level
Rules to allow traffic only                  | Rules to allow and deny traffic

The following resource block is the LB security group configuration, allowing ingress Internet traffic only on port 80:

"HttpIngressSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Allows only HTTP ingress",
    "VpcId": {
      "Ref": "VpcId"
    },
    "SecurityGroupIngress": [
      {
        "IpProtocol": "tcp",
        "FromPort": "80",
        "ToPort": 80,
        "CidrIp": "0.0.0.0/0"
      }
    ]
  }
}

The following SG rules limit ingress traffic to the application instances to the VPC’s network and allow all outbound traffic to the Internet:

"PrivateSubnetSecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Allows only private traffic and allows all egress traffic.",
    "VpcId": {
      "Ref": "VpcId"
    },
    "SecurityGroupIngress": [
      {
        "IpProtocol": "-1",
        "CidrIp": {
          "Ref": "VpcCidrBlock"
        }
      }
    ],
    "SecurityGroupEgress": [
      {
        "IpProtocol": "-1",
        "CidrIp": "0.0.0.0/0"
      }
    ]
  }
}

Test and debug the stack

To test the stack, you can use the AWS CLI as shown below and watch it being created in the AWS Console:

$ aws cloudformation create-stack --stack-name simple-http-app --template-body file:///Users/you/aws/cf_templates/simple-http-app.template --disable-rollback --capabilities CAPABILITY_IAM

If something goes wrong, the --disable-rollback flag is useful for debugging: it freezes the state of the stack upon error so you can check the error messages and logs. Please read the documentation reference for more details.
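
A quick way to see what failed is to list the stack events and look at the status reasons. The --query expression below is just one possible way to trim the output:

$ aws cloudformation describe-stack-events --stack-name simple-http-app \
    --query "StackEvents[*].[ResourceStatus,LogicalResourceId,ResourceStatusReason]" \
    --output table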

Conclusion

This concludes Part I. We created a template for our application infrastructure using CloudFormation Designer. The tool doesn’t exempt you from knowing how to write a template manually, since there’s no such thing as a free lunch, but it reduces the effort of creating one from scratch. On top of all the other benefits, you also end up with a documented infrastructure. How cool is that?

You can check out the complete template in BytePitch’s GitHub account.

So far, we have addressed three of our objectives:

  • Reduce costs and time of bringing infrastructures up
  • Modular infrastructure to maximize re-use
  • Horizontal scaling & fault tolerant configuration (against vacuum cleaners)

In the next post of this series, we’re going to:

  • Bootstrap QA and Production environments using the template we just created
  • Configure networking and security and provide secured Internet access to the private instances
  • Start the pipeline infrastructure by provisioning a Jenkins instance

See you next week.
