I recently decided to run a mostly useless but fun experiment that I've wanted to do for a while - checking latencies between AWS availability zones in the same region.

Cloudping is a great site for seeing the latencies between AWS regions - but no similar site exists for availability zones.

AWS doesn't officially commit to any cross-AZ latency figures - but the phrasing I found in both their blog and their official documentation is "single-digit millisecond latency".

Seeing is believing, so I want to run this experiment on all pairs of AWS availability zones. What really piques my curiosity is seeing whether there are any significant differences in cross-AZ latencies between different regions. Do us-east-1a and us-east-1f have lower latency than ap-south-1a and ap-south-1b? Let's find out:

Background

AWS's infrastructure is spread across regions throughout the world - at the time of writing there are 32 regions. Regions are central to the AWS experience - most services operate at region granularity, meaning that if you're logged in to the AWS console in us-west-2, you won't see any EC2 instances you created in sa-east-1.

Each region has at least three - and in us-east-1's case as many as six - availability zones. Availability zones are the data centers themselves - not necessarily a single building, but a single physical cluster of infrastructure. Each AZ runs completely independently of the others, and AWS spreads AZs geographically so that if one AZ goes down (because of, say, an earthquake or a fire), the others should remain intact.

Regions and availability zones are very interesting from a cost perspective - network traffic that stays inside an availability zone is free, but network traffic that crosses between two availability zones in the same region costs you, and network traffic that crosses between two regions costs you even more. But that's a story for another day.

There's one more wrinkle worth noting: the familiar AZ names (us-east-1a, us-east-1b, and so on) are account-specific - AWS shuffles the mapping between names and physical AZs for every account, to spread load evenly across AZs. This means that if we want our latency measurements to be meaningful across different AWS accounts, we have to use AZ IDs rather than AZ names. So we won't be interested in the latency between us-east-1a and us-east-1b - because these are arbitrary labels - but we will be interested in the latency between use1-az1 and use1-az2.
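To see your own account's mapping, the EC2 API exposes both names and IDs - here's a small boto3 snippet using the real describe_availability_zones call (the region choice is just an example):

import boto3

# Print this account's mapping between AZ names and account-independent AZ IDs.
ec2 = boto3.client("ec2", region_name="us-east-1")
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    # Prints e.g. "us-east-1a -> use1-az6" - the name half differs between accounts.
    print(f"{zone['ZoneName']} -> {zone['ZoneId']}")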

Implementation

I want to automate bringing up the measurement environment - and for the fun of the implementation, I want the measurements to be taken in a very controlled manner, such that at any given time at most one measurement is running in a given region.

As such, we will implement the following flow (sketched in code right after the list):

  1. Bring up a t3.micro EC2 instance in every single AZ across all regions.
  2. Each EC2 instance will have its own SQS queue - it will poll this queue for a "Go" command.
  3. Upon being given a "Go" command, the EC2 instance will query a DynamoDB table for this information:
    1. A list of AZs to test network performance against
    2. The next SQS queue to send a "Go" command to. If this instance is the last instance in the region to be tested, we will mark this - and the instance will instead send a "Done" message for the region to a central control queue.
  4. The instance will run ping (to measure latency) and then iperf3 (to measure bandwidth, for fun) against each of the availability zones it's been told to test against.
  5. The instance will write its results to a DynamoDB table.
  6. The instance will either send "Go" to the next SQS queue or "Done" to the central control queue.
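Here's a trimmed-down sketch of the instance-side logic (the real script is user-data.py in the repository; the table schemas, attribute names, and message bodies below are my illustrative assumptions, and iperf3 is omitted for brevity):

import subprocess

import boto3

# Simplified sketch of the instance agent. Queue URLs and table names are
# read from the files written by the user-data script; the DynamoDB
# attribute names are illustrative assumptions, not the exact schema.
sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

my_az = open("/az").read().strip()
my_queue = open("/sqs-queue").read().strip()
control_queue = open("/control-sqs-queue").read().strip()
read_table = dynamodb.Table(open("/dynamodb-read-table").read().strip())
write_table = dynamodb.Table(open("/dynamodb-write-table").read().strip())

def average_latency_ms(target_ip):
    # 10 ICMP pings; the "rtt min/avg/max/mdev = a/b/c/d ms" summary line's
    # fifth slash-separated field is the average round-trip time.
    out = subprocess.check_output(["ping", "-c", "10", target_ip], text=True)
    summary = next(line for line in out.splitlines() if line.startswith("rtt"))
    return float(summary.split("/")[4])

# Step 2: poll our own queue until a "Go" command arrives.
while True:
    messages = sqs.receive_message(QueueUrl=my_queue, WaitTimeSeconds=20).get("Messages", [])
    if any(message["Body"] == "Go" for message in messages):
        break

# Step 3: fetch our instructions - which AZs to measure, and who goes next.
instructions = read_table.get_item(Key={"az": my_az})["Item"]

# Steps 4 and 5: measure each target and record the result.
for target in instructions["targets"]:
    write_table.put_item(Item={
        "az_from": my_az,
        "az_to": target["az"],
        "latency_ms": str(average_latency_ms(target["ip"])),
    })

# Step 6: pass the baton to the next AZ, or report the region as finished.
if instructions.get("next_queue"):
    sqs.send_message(QueueUrl=instructions["next_queue"], MessageBody="Go")
else:
    sqs.send_message(QueueUrl=control_queue, MessageBody="Done")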

We will write three classes of Terraform templates for the experiment:

  • Global - these will create centralized resources used by all instances. Namely, the two DynamoDB tables used in steps 3 and 5, the central SQS queue used in step 6, and the IAM roles attached to the instances.
  • Per Region - these will create the needed Terraform AWS providers and the EC2 security groups.
  • Per AZ - these will create the EC2 instances and their corresponding queues.

For example, this is the Terraform template used to create the per-AZ EC2 instances:

resource "aws_instance" "ec2_instance_REGION_AZ_REPLACE_ME" {
  provider          = aws.REGION_ALIAS_REPLACE_ME
  instance_type     = "t3.micro"
  ami               = "REGION_AMI_REPLACE_ME"
  availability_zone = "REGION_AZ_REPLACE_ME"
  key_name          = aws_key_pair.development_server_key_pair_REGION_ALIAS_REPLACE_ME.key_name
  vpc_security_group_ids = [
    aws_security_group.allow_ssh_security_group_REGION_ALIAS_REPLACE_ME.id,
    aws_security_group.allow_all_outbound_traffic_security_group_REGION_ALIAS_REPLACE_ME.id,
    aws_security_group.allow_iperf3_traffic_REGION_ALIAS_REPLACE_ME.id,
    aws_security_group.allow_ping_traffic_REGION_ALIAS_REPLACE_ME.id
  ]
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_iam_profile.name
 
  user_data = <<-EOF
              #!/bin/bash
              # Write the queue URLs and table names where the agent script can find them.
              echo "${aws_sqs_queue.sqs_queue_REGION_AZ_REPLACE_ME.url}" > /sqs-queue
              echo "${aws_sqs_queue.sqs_control_queue.url}" > /control-sqs-queue
              echo "${aws_dynamodb_table.ec2_instance_metrics.name}" > /dynamodb-write-table
              echo "${aws_dynamodb_table.ec2_instance_instructions.name}" > /dynamodb-read-table
              echo "REGION_NAME_REPLACE_ME" > /region
              echo "REGION_AZ_REPLACE_ME" > /az

              chmod 777 /sqs-queue /control-sqs-queue /dynamodb-read-table /dynamodb-write-table /region /az
              apt update -y
              apt install iperf3 python3-pip -y

              # Run an iperf3 server in the background so other AZs can measure bandwidth against us.
              iperf3 -s &

              # Fetch and run the agent script as the ubuntu user.
              su - ubuntu -c "pip3 install boto3"
              su - ubuntu -c "git clone https://github.com/danielkleinstein/aws-availability-zones-latencies.git repo"
              su - ubuntu -c "cd repo && python3 user-data.py > log.txt 2>&1"
              EOF
 
  root_block_device {
    volume_type           = "gp3"
    volume_size           = 16
    delete_on_termination = true
    tags = {
      Name = "Instance_REGION_AZ_REPLACE_ME-volume"
    }
  }
 
  tags = {
    Name = "Instance_REGION_AZ_REPLACE_ME"
  }
}

All of the *REPLACE_ME placeholders will be replaced by a Python script that - in addition to generating the Terraform files - will also write the initial instructions to the DynamoDB table (for step 3) and poll the control queue until all regions are complete.
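A minimal sketch of that script's shape might look like the following (the template file name, instruction-table schema, and helper names are assumptions for illustration; the real script is in the repository):

import boto3

# Stamp out one per-AZ Terraform file from the template shown above.
TEMPLATE = open("per-az-template.tf").read()  # assumed file name

def render_az_template(region, region_alias, az, ami):
    return (TEMPLATE
            .replace("REGION_AZ_REPLACE_ME", az)
            .replace("REGION_ALIAS_REPLACE_ME", region_alias)
            .replace("REGION_AMI_REPLACE_ME", ami)
            .replace("REGION_NAME_REPLACE_ME", region))

def seed_instructions(table_name, plan):
    # plan maps each AZ to its measurement targets and successor queue (step 3).
    table = boto3.resource("dynamodb").Table(table_name)
    for az, instructions in plan.items():
        table.put_item(Item={"az": az, **instructions})

def wait_for_regions(control_queue_url, num_regions):
    # Block until every region's last instance has reported "Done" (step 6).
    sqs = boto3.client("sqs")
    done = 0
    while done < num_regions:
        response = sqs.receive_message(QueueUrl=control_queue_url, WaitTimeSeconds=20)
        for message in response.get("Messages", []):
            if message["Body"] == "Done":
                done += 1
            sqs.delete_message(QueueUrl=control_queue_url,
                               ReceiptHandle=message["ReceiptHandle"])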

My PhD mathematician friends tell me that for a region with $n$ availability zones, there are $\binom{n}{2}$ distinct pairs of availability zones. For sanity and a baseline, I also want to compare each availability zone to itself - as such, in every region we will run $\binom{n}{2} + n$ measurements (for us-east-1's six AZs, that's $\binom{6}{2} + 6 = 21$).
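Conveniently, Python's itertools generates exactly this measurement plan - combinations_with_replacement yields each unordered pair once, self-pairs included:

from itertools import combinations_with_replacement

# Each unordered pair exactly once, including (az, az) self-pairs:
# C(n, 2) + n items in total. Example AZ IDs below.
azs = ["use1-az1", "use1-az2", "use1-az4"]
pairs = list(combinations_with_replacement(azs, 2))
print(len(pairs))  # 6, i.e. C(3, 2) + 3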

The source code for this experiment is available in this repository - in fact the experiment itself clones this repository for the EC2 instances' user-data script.

Results

I ran the experiment on 28 out of 32 regions (the four missing regions are unavailable to me - two in China and two GovCloud regions). I was surprised to see that the vast majority of AZ pairs had sub-millisecond latency - a twist on Amazon's "single-digit millisecond" claim.

There are 188 results in total; for brevity I won't reproduce the full tables here - instead, here are the ten slowest and the ten fastest pairs (not including self-pairs):

Top Ten Slowest Pairs

availability_zone_from   availability_zone_to   Latency (ms)
sa-east-1a               sa-east-1b             2.42
me-central-1a            me-central-1c          1.95
ap-northeast-1a          ap-northeast-1c        1.87
il-central-1a            il-central-1b          1.70
eu-north-1b              eu-north-1c            1.64
ap-northeast-1a          ap-northeast-1d        1.54
eu-west-3a               eu-west-3b             1.40
sa-east-1b               sa-east-1c             1.39
ap-northeast-2b          ap-northeast-2c        1.38
eu-west-3b               eu-west-3c             1.36

The São Paulo region has significantly higher cross-AZ latencies than all other regions - not too far behind are the UAE, Tokyo, and Israel regions.

Top Ten Fastest Pairs

availability_zone_from   availability_zone_to   Latency (ms)
ap-northeast-3b          ap-northeast-3c        0.39
ap-southeast-4a          ap-southeast-4c        0.40
ap-south-1a              ap-south-1c            0.43
us-east-1b               us-east-1c             0.44
us-east-1a               us-east-1f             0.45
us-west-2a               us-west-2c             0.46
us-east-1d               us-east-1e             0.46
ca-central-1a            ca-central-1b          0.46
me-south-1a              me-south-1c            0.46
ap-southeast-3b          ap-southeast-3c        0.47

Tokyo might have some of the highest latencies - but not too far away in Japan is the Osaka region with the fastest cross-AZ latencies. Coming in very close behind are the Melbourne, Mumbai, and North Virginia regions. The differences between the fastest regions are much smaller here, and the ranking would probably vary somewhat from measurement to measurement.

The results are provided twice - once using the familiar AZ codes, whose pair latencies are meaningful only within my account, and once using AZ IDs, which are comparable across all accounts.

Final Thoughts and Conclusions

We verified Amazon's claim that all AZs in a given region are connected with "single-digit millisecond latency". In fact this claim turns out to be conservative - we observed sub-millisecond latencies between most AZ pairs.

We also saw that there are some fairly significant differences between cross-AZ latencies in different regions - from 0.39 milliseconds in Osaka to 2.42 milliseconds in São Paulo. But it's worth remembering that even though the relative difference is significant, the absolute difference is still small - a worst case of 2.42 milliseconds is impressive. When I set out to run this experiment, I expected most cross-AZ latencies to be higher than that.

Sometimes it's nice to detach and look at the big picture: With relatively little development effort I was able to quickly bring up dozens of compute instances in 23 countries across six continents, run automated tests on them, and collect their results into a serverless database service. The experiment - including development costs - didn't cost more than a couple of dollars. Moreover, modern tooling allows me to recreate this experiment in minutes whenever I want. It can be pretty cool living in the future.