Testing AWS network performance

Updated: May 15


Our customers often ask us about different aspects of network performance in AWS: how architecture and configuration affect it, what to expect, and how to optimize it.

In this blog post I will explain the basic factors that influence network performance, run several tests, and demonstrate the outcomes.

Bandwidth is the maximum rate of transfer over the network, measured in bits per second (bps, commonly Mbps or Gbps). Network bandwidth defines the maximum possible rate, but the actual user or application transfer rate is also affected by latency, protocol overhead, and packet loss.


Latency is the delay between two points in a network. It can be measured as one-way delay or as Round-Trip Time (RTT) between two points. Ping is a common way to test RTT. Delays include propagation delays for signals to travel across different media such as copper or fiber optics, often at speeds close to the speed of light. There are also processing delays for packets to move through physical or virtual network devices, such as the Amazon Virtual Private Cloud (Amazon VPC) virtual router. Network drivers and operating systems can also be optimized to minimize processing latency on the host system.


Jitter is the variation in inter-packet delay, that is, the variance of delay over time between two points in the network. It is often caused by variations in processing and queueing delays, which increase with higher network load. For example, if the one-way delay between two systems varies from 10 ms to 100 ms, then there is 90 ms of jitter. This kind of varying delay causes issues for voice and other real-time media systems, which have to decide whether to buffer data longer or continue without it.


Throughput is the rate of successful data transfer, measured in bits per second. Bandwidth, latency, and packet loss all affect the throughput rate. Bandwidth defines the maximum possible rate, while latency limits the throughput of protocols such as Transmission Control Protocol (TCP) that must wait for round-trip acknowledgements.
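
As a rough rule of thumb (a simplified estimate that ignores loss and window scaling), a single TCP flow cannot exceed its window size divided by the RTT. With a 64 KB window and the roughly 68 ms RTT measured between Ireland and Northern Virginia later in this post, one flow tops out around 64 KB × 8 / 0.068 s ≈ 7.7 Mbit/s, which is why the iperf3 tests below use many parallel streams.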


Packet loss is typically stated as the percentage of packets that are dropped in a flow or on a circuit. Packet loss affects applications differently; TCP applications are generally sensitive to loss because of congestion control.
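
For a quick first look at loss on a path, the summary line that ping prints after a fixed number of probes is often enough (the address below is just a placeholder; -q suppresses the per-packet lines and keeps only the summary, which includes the packet loss percentage):

# ping -c 100 -q 203.0.113.10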


Packets per second (PPS) refers to how many packets are processed in one second, and PPS limits are a common bottleneck in network performance testing. Every processing point in the network must handle each packet, which consumes computing resources. Particularly for small packets, per-packet processing can limit throughput well before the bandwidth limit is reached.
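
One way to expose a packets-per-second limit (a sketch reusing the iperf3 setup shown later in this post; the server IP is a placeholder) is to force small datagrams with the -l option, so that the same bit rate requires far more packets:

### Small 64-byte datagrams stress per-packet processing ###
# iperf3 -c <server IP> --port 5001 -u -l 64 -b 10g

### Large datagrams at the same target rate need far fewer packets ###
# iperf3 -c <server IP> --port 5001 -u -l 1400 -b 10g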


The Maximum Transmission Unit (MTU) defines the largest packet that can be sent over the network. The maximum on most Internet and Wide Area Network (WAN) links is 1,500 bytes, and jumbo frames are packets larger than that. AWS supports 9,001-byte jumbo frames within a VPC. VPC peering and traffic leaving a VPC, including Internet and AWS Direct Connect traffic, support packets of up to 1,500 bytes. Increasing the MTU increases throughput when the packet-per-second processing rate is the performance bottleneck.
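
On Linux you can check and change the interface MTU with the ip utility, and a “do not fragment” ping verifies that jumbo frames actually fit on the path (8,972 bytes of ICMP payload plus 28 bytes of headers gives a 9,000-byte packet; the interface name and peer address are placeholders):

### Show the current MTU ###
# ip link show eth0

### Set a jumbo-frame MTU within a VPC ###
# ip link set dev eth0 mtu 9001

### Verify that the path passes jumbo frames without fragmentation ###
# ping -M do -s 8972 <private IP of the peer instance>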

Enhanced networking uses single root I/O virtualization (SR-IOV) as a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.


You can check whether enhanced networking is enabled using the “ethtool” utility on Linux.

### Debian 8.1 ###
#  ethtool -i eth0
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

The “vif” driver means that enhanced networking is disabled.

### Amazon Linux 2 ###
# ethtool -i eth0
driver: ixgbevf
version: 4.1.0-k
firmware-version: 
expansion-rom-version: 
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

### Debian ###
# ethtool -i eth0
driver: ena
version: 1.0.0
firmware-version: 
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

The “ixgbevf” and “ena” drivers mean that enhanced networking is enabled. Which driver is used depends on the instance type and OS version.
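
Besides ethtool on the instance, the AWS CLI can report and change the corresponding instance attributes (a sketch; the instance ID below is a placeholder, and the instance must be stopped before modifying the attribute):

### Check ENA support on an instance ###
# aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].EnaSupport"

### Check SR-IOV (ixgbevf) support ###
# aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute sriovNetSupport

### Enable ENA (the instance must be stopped first) ###
# aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ena-support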

The first test was run between Ireland (eu-west-1) and US Northern Virginia (us-east-1) on EC2 instances with enhanced networking enabled and disabled. The instance type is r4.large, with 2 vCPUs, 15.25 GiB of RAM, and “Up to 10 Gbps” of network bandwidth.



Ping was used to test latency; iperf3 was used to test bandwidth and jitter for TCP and UDP traffic. The VPCs were not peered, so the EC2 instances communicated over public IPs through AWS Internet Gateways.

# iperf3 -s -p 5001
-----------------------------------------------------------
Server listening on 5001
-----------------------------------------------------------

Instances without enhanced networking

# ping 54.159.47.240
PING 54.159.47.240 (54.159.47.240) 56(84) bytes of data.
64 bytes from 54.159.47.240: icmp_seq=1 ttl=44 time=68.0 ms
64 bytes from 54.159.47.240: icmp_seq=2 ttl=44 time=68.1 ms

### TCP traffic ###
# iperf3 -c 54.159.47.240 --port 5001 --parallel 128
Connecting to host 54.159.47.240, port 5001
. . . . .
[SUM]   0.00-10.00  sec   717 MBytes   601 Mbits/sec  5697             sender
[SUM]   0.00-10.00  sec   676 MBytes   567 Mbits/sec                  receiver
iperf Done.

### UDP traffic ###
# iperf3 -c 54.159.47.240 --port 5001 -u -b 10g
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   912 MBytes   765 Mbits/sec 0.143 ms  0/173 (0%)  
[  4] Sent 173 datagrams
iperf Done.

Instances with enhanced networking

# iperf3 -c 23.20.188.12 --port 5001 --parallel 128
. . . . .
[SUM]   0.00-10.00  sec  5.36 GBytes  4.61 Gbits/sec  7205             sender
[SUM]   0.00-10.00  sec  4.52 GBytes  3.88 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 23.20.188.12 --port 5001 -u -b 10g
. . . . .
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  7.99 GBytes  6.87 Gbits/sec 0.012 ms  0/171 (0%)  
[  4] Sent 171 datagrams
iperf Done.

We can see a significant improvement in bandwidth, jitter, and packets per second with enhanced networking. Ping latency remains the same.

The next test was run between two different but geographically close AWS Regions, US Ohio (us-east-2) and US Northern Virginia (us-east-1). The EC2 instance type is the same as in the previous test.


Compared with the previous test, the ping results are better due to geographic proximity. Bandwidth, jitter, and packets per second are, predictably, much better with enhanced networking.

The next test was run between two instances within one VPC in US Northern Virginia (us-east-1), communicating via the Internet Gateway and public IPs. The EC2 instance type is the same as in the previous test. The MTU in this case is 1,500 bytes, and I will compare the results with a connection over private IPs, where jumbo frames can be used.


The “tracepath” utility can be used to check the path MTU between hosts. The AWS Internet Gateway does not support jumbo frames, so packets will be fragmented even if we try to use an MTU larger than 1,500 bytes. I will also test the influence of enhanced networking here.

# ping 54.159.47.240
PING 54.159.47.240 (54.159.47.240) 56(84) bytes of data.
64 bytes from 54.159.47.240: icmp_seq=1 ttl=63 time=0.542 ms
64 bytes from 54.159.47.240: icmp_seq=2 ttl=63 time=0.541 ms


# tracepath 54.159.47.240
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-172-31-96-1.ec2.internal                           0.139ms pmtu 1500
 1:  no reply
 2:  ec2-54-159-47-240.compute-1.amazonaws.com             0.719ms reached
     Resume: pmtu 1500 hops 2 back 2

Instances without enhanced networking
# iperf3 -c 54.159.47.240 --port 5001 --parallel 128
[SUM]   0.00-10.00  sec   701 MBytes   588 Mbits/sec  39419             sender
[SUM]   0.00-10.00  sec   693 MBytes   581 Mbits/sec                  receiver
iperf Done.

# iperf3 -c 54.159.47.240 --port 5001  -u -b 10g
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.42 GBytes  1.22 Gbits/sec 0.031 ms  177930/185994 (96%) 
[  4] Sent 185994 datagrams
iperf Done.

Instances with enhanced networking
# iperf3 -c 23.20.188.12 --port 5001 --parallel 128
[SUM]   0.00-10.00  sec  6.29 GBytes  5.40 Gbits/sec  31205             sender
[SUM]   0.00-10.00  sec  5.11 GBytes  4.39 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 23.20.188.12 --port 5001 -u -b 10g
Connecting to host 23.20.188.12, port 5001 
- - - - - - - - - - - - - -