AWS vs Azure w/ Packer

What is Packer? Who better to describe it than the people who created it:

Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.

Packer allows you to target a variety of cloud providers, as well as tools such as Docker and VirtualBox, so you can also run the image locally to develop and test against.
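For instance, a single template can declare several builders and Packer will produce an artifact for each. A minimal sketch (the Docker builder values are illustrative, and the amazon-ebs builder is abbreviated):

```json
{
    "builders": [
        { "type": "amazon-ebs", "source_ami": "ami-base-id", "instance_type": "t2.micro" },
        { "type": "docker", "image": "ubuntu:16.04", "commit": true }
    ],
    "provisioners": [
        { "type": "shell", "scripts": ["config/provision.sh"] }
    ]
}
```

The same provisioners run against every builder, which is what makes the "single source configuration" claim useful in practice.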

AWS vs Azure

Recently we evaluated the performance of Packer across two large cloud providers, AWS and Azure, and wanted to share the results, as build time could be a deciding factor in how you deploy.

The timings stated here will vary depending on the provisioners you use and the scripts you execute to provision the VHDs/AMIs.

The provisioning script we execute creates an NGINX box with various other components that we use internally. We also generate some SSL parameters, which can vary in how long they take to execute, typically by a minute either side.

Below is a chart of the performance differences between cloud providers, with some variants on the Azure platform.

[Chart: cloud comparison]

AWS

The creation of an AMI in AWS is fast, incredibly fast. 7 minutes to provision a custom AMI is more than adequate, so we didn't feel the need to tinker too much.

The configuration to build an AMI in AWS is simple, as you can see below:

{
    "builders": [{
        "type": "amazon-ebs",
        "communicator": "ssh",
        "ssh_username": "ubuntu",
        "access_key": "access-key",
        "secret_key": "secret-key",
        "region": "eu-west-1",
        "source_ami": "ami-base-id",
        "instance_type": "t2.micro",
        "vpc_id": "vpc-id",
        "subnet_id": "subnet-id",
        "ami_name": "packer-example-{{timestamp}}"
    }],
    "provisioners": [
        {
            "type": "shell",
            "scripts": [
                "config/provision.sh"
            ]
        }
    ]
}
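One detail worth noting from the template: AMI names must be unique, so the `{{timestamp}}` function interpolates the Unix epoch (in seconds) at build time. In Python terms, the rendered name is roughly:

```python
import time

# Packer's {{timestamp}} renders as the Unix epoch in seconds,
# which keeps each build's AMI name unique.
ami_name = "packer-example-{}".format(int(time.time()))
print(ami_name)  # e.g. packer-example-1489662265
```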

It's important to remember that AMIs are regional. So if you create an AMI in eu-west-1, you won't be able to access it from any other region.
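An AMI can, however, be copied into other regions after the build. A sketch with the AWS CLI (the image ID and regions are placeholders):

```shell
# Copy the freshly built AMI from eu-west-1 into eu-central-1.
aws ec2 copy-image \
    --source-region eu-west-1 \
    --source-image-id ami-base-id \
    --region eu-central-1 \
    --name "packer-example-copy"
```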

Azure

Using Packer with Azure required a little more work to get running. We will go through each of the variants we tried and how they saved time for our VHD creation.

First iteration (31 minutes)

For the first iteration of this process we used a Basic_A1 instance. Packer created a temporary resource group with the various components it needed to create a VHD. The configuration looked like this:

{
    "builders": [{
        "type": "azure-arm",
        "client_id": "client-id",
        "client_secret": "client-secret",
        "resource_group_name": "storage-account-resource-group",
        "storage_account": "imgs",
        "subscription_id": "subscription-id",
        "tenant_id": "tenant-id",
        "capture_container_name": "images",
        "capture_name_prefix": "prefix",
        "os_type": "Linux",
        "image_publisher": "Canonical",
        "image_offer": "UbuntuServer",
        "image_sku": "16.04-LTS",
        "temp_resource_group_name": "temp-build",
        "location": "North Europe",
        "vm_size": "Basic_A1"
    }],
    "provisioners": [
        {
            "type": "shell",
            "scripts": [
                "config/provision.sh"
            ]
        }
    ]
}
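Builds for either provider are kicked off the same way; assuming the template above is saved as azure.json:

```shell
packer validate azure.json   # catch template errors before a ~30 minute build
packer build azure.json
```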

Second iteration (26 minutes)

We believed we could get a little more performance by upping the instance size, since the SSL parameter generation is CPU-intensive. We managed an improvement, albeit not a substantial one.

You can adjust the size of the VM by adjusting the vm_size property.
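For the second iteration, the only change from the template above was that one property (Basic_A3 is the size our final template settles on):

```json
"vm_size": "Basic_A3"
```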

Third iteration (21 minutes)

During the second iteration we noticed that Packer was creating a few expensive resources in the temporary resource group. This was adding precious minutes to the provisioning and deprovisioning of the temporary resource group and its resources.

With that, we decided to set additional Azure Packer parameters that allow you to specify pre-existing VNETs and subnets, thus removing the need to create them.

You can do this by setting virtual_network_name, virtual_network_resource_group_name and virtual_network_subnet_name.

Our configuration file now looks like this:

{
    "builders": [{
        "type": "azure-arm",
        "client_id": "client-id",
        "client_secret": "client-secret",
        "resource_group_name": "storage-account-resource-group",
        "storage_account": "imgs",
        "subscription_id": "subscription-id",
        "tenant_id": "tenant-id",
        "capture_container_name": "images",
        "capture_name_prefix": "prefix",
        "os_type": "Linux",
        "image_publisher": "Canonical",
        "image_offer": "UbuntuServer",
        "image_sku": "16.04-LTS",
        "temp_resource_group_name": "temp-build",
        "virtual_network_name": "vnet",
        "virtual_network_resource_group_name": "networking-resource-group",
        "virtual_network_subnet_name": "subnet",
        "azure_tags": {
            "dept": "operations"
        },
        "location": "North Europe",
        "vm_size": "Basic_A3"
    }],
    "provisioners": [
        {
            "type": "shell",
            "scripts": [
                "config/nginx.sh"
            ]
        }
    ]
}

Conclusion

Creating machine images in Azure is much slower than in AWS, and the difference comes down to the speed of infrastructure provisioning and deprovisioning.
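Putting the numbers above together: our tuning cut roughly a third off the Azure build time, yet the tuned build still takes three times as long as the untuned AWS one:

```python
# Build times in minutes, taken from the runs above.
aws = 7
azure_first, azure_final = 31, 21

# The instance-size and pre-existing-VNET tweaks cut about a third off the Azure build...
reduction = (azure_first - azure_final) / azure_first
print(f"Azure build time reduced by {reduction:.0%}")  # → 32%

# ...but the tuned Azure build is still 3x slower than AWS.
print(f"Azure/AWS ratio: {azure_final / aws:.0f}x")    # → 3x
```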

It would be interesting to know what AWS is doing under the hood to achieve such performance, or why Azure is so much slower.

Credit for Header

Nathan Smith

An engineer with a passion for simplicity.

Manchester
