Testing AWS network performance


Our customers often ask us about various aspects of network performance in AWS: how architecture and configuration affect it, what to expect, and how to optimize it.

In this blog post I will explain the basic factors that influence network performance, run several tests, and discuss the outcomes.

Bandwidth is the maximum rate of transfer over the network, measured in bits per second (bps; also Mbps, Gbps, and so on). Bandwidth defines the ceiling, but the actual user or application transfer rate is also affected by latency, protocol overhead, and packet loss.


Latency is the delay between two points in a network. Latency can be measured in one-way delay or Round-Trip Time (RTT) between two points. Ping is a common way to test RTT delay. Delays include propagation delays for signals to travel across different mediums such as copper or fiber optics, often at speeds close to the speed of light. There are also processing delays for packets to move through physical or virtual network devices, such as the Amazon Virtual Private Cloud (Amazon VPC) virtual router. Network drivers and operating systems can be optimized to minimize processing latency on the host system as well.
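As a rough sanity check, propagation delay alone puts a floor under cross-region RTT. The sketch below is a back-of-the-envelope estimate (the ~5,500 km Dublin-to-Virginia distance is my assumption, and real fiber paths are longer), showing why a transatlantic ping can never be just a few milliseconds:

```python
# Light in fiber travels at roughly 2/3 the speed of light in vacuum,
# i.e. about 200,000 km/s.
SPEED_IN_FIBER_KM_S = 200_000

def min_rtt_ms(path_km: float) -> float:
    """Lower bound on RTT (ms) from propagation delay alone."""
    one_way_s = path_km / SPEED_IN_FIBER_KM_S
    return 2 * one_way_s * 1000

# ~5,500 km great-circle Dublin-to-Northern-Virginia (illustrative figure):
print(round(min_rtt_ms(5500), 1))  # 55.0 -- a hard floor; measured RTT adds routing and processing delay
```

Any measured RTT above this floor comes from longer physical paths, queueing, and processing delays.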


Jitter is the variation in inter-packet delays. Jitter is caused by a variance in delay over time between two points in the network. Jitter is often caused by variations in processing delays and queueing delays in the network, which increase with higher network load. For example, if the one-way delay between two systems varies from 10 ms to 100 ms, then there is 90 ms of jitter. This type of varying delay causes issues with voice and real-time systems that process media because the systems have to decide to buffer data longer or continue without the data.
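The example above can be computed two ways: peak-to-peak jitter (maximum minus minimum delay), and the smoothed mean-deviation estimator from RFC 3550 that RTP stacks and, to my understanding, iperf3's UDP mode are based on. A minimal sketch (function names are mine):

```python
def peak_to_peak_jitter(delays_ms):
    """Jitter as the spread between the largest and smallest delay."""
    return max(delays_ms) - min(delays_ms)

def rfc3550_jitter(delays_ms):
    """Smoothed mean-deviation jitter in the style of RFC 3550:
    J = J + (|D| - J) / 16 for each consecutive delay difference D."""
    j = 0.0
    for prev, cur in zip(delays_ms, delays_ms[1:]):
        j += (abs(cur - prev) - j) / 16
    return j

# The example from the text: one-way delay varying between 10 ms and 100 ms.
print(peak_to_peak_jitter([10, 35, 100, 60]))  # 90 ms of peak-to-peak jitter
```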


Throughput is the rate of successful data transfer, measured in bits per second. Bandwidth, latency, and packet loss all affect the throughput rate: bandwidth defines the maximum possible rate, while latency limits the throughput of protocols like Transmission Control Protocol (TCP) that rely on round-trip handshakes.
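For a single TCP stream this effect is easy to quantify: with at most one window of data in flight per round trip, throughput is bounded by window size divided by RTT. A quick sketch (the 256 KiB window is an illustrative assumption, not a measured value):

```python
def tcp_throughput_mbits(window_bytes: int, rtt_ms: float) -> float:
    """Ceiling for a single TCP stream: one window of data per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A 256 KiB window over a 68 ms transatlantic RTT:
print(round(tcp_throughput_mbits(256 * 1024, 68), 1))  # 30.8 Mbit/s per stream
```

This per-stream ceiling is why iperf3 is often run with many parallel streams, as in the tests later in this post: the aggregate of many streams can fill a pipe that a single stream cannot.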


Packet loss is typically stated as the percentage of packets dropped in a flow or on a circuit. Packet loss affects applications differently; TCP applications are generally sensitive to loss because of congestion control.
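A classic way to quantify this sensitivity is the Mathis et al. approximation, which bounds loss-limited TCP throughput by MSS/(RTT·√p). A sketch under stated assumptions (the MSS, RTT, and loss rate below are illustrative values):

```python
import math

def mathis_throughput_mbits(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation for loss-limited TCP throughput:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22."""
    rate_bps = (mss_bytes * 8 / (rtt_ms / 1000)) * (1.22 / math.sqrt(loss_rate))
    return rate_bps / 1e6

# 1460-byte MSS, 68 ms RTT, 0.01% loss:
print(round(mathis_throughput_mbits(1460, 68, 0.0001), 1))  # ~21 Mbit/s
```

Even a tiny loss rate combined with a long RTT caps a single TCP stream well below link bandwidth.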


Packets per second refers to how many packets are processed in one second, and it is a common bottleneck in network performance testing. Every processing point in the network must handle each packet, which requires computing resources. Particularly for small packets, per-packet processing can limit throughput before bandwidth limits are reached.
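The relationship is simple arithmetic: throughput equals packets per second times packet size. The sketch below (the 1M pps limit is an illustrative figure, not a quoted AWS number) shows how the same processing rate yields very different bandwidth depending on packet size:

```python
def throughput_gbits(pps: int, packet_bytes: int) -> float:
    """Throughput achievable at a given packet rate and packet size."""
    return pps * packet_bytes * 8 / 1e9

# At a fixed 1M packets/s processing limit, packet size decides throughput:
print(throughput_gbits(1_000_000, 64))    # 0.512 Gbit/s with tiny packets
print(throughput_gbits(1_000_000, 1500))  # 12.0 Gbit/s with full-size packets
```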


The Maximum Transmission Unit (MTU) defines the largest packet that can be sent over the network. The maximum on most Internet and Wide Area Network (WAN) links is 1,500 bytes. Jumbo frames are packets larger than 1,500 bytes; AWS supports 9,001-byte jumbo frames within a VPC. Cross-region VPC peering and traffic leaving a VPC, including Internet and AWS Direct Connect traffic, support packets up to 1,500 bytes. Increasing the MTU increases throughput when the packets-per-second processing rate is the performance bottleneck.
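The payload math behind jumbo frames, as a small sketch (40 bytes of IPv4+TCP headers assumed, no options):

```python
def per_packet_payload(mtu: int, headers: int = 40) -> int:
    """TCP payload per packet after IPv4 (20 B) + TCP (20 B) headers."""
    return mtu - headers

# At the same packets-per-second rate, a 9001-byte MTU moves ~6x more data:
print(per_packet_payload(1500))                              # 1460 bytes
print(per_packet_payload(9001))                              # 8961 bytes
print(round(per_packet_payload(9001) / per_packet_payload(1500), 2))  # 6.14
```

When per-packet processing is the bottleneck, that factor translates almost directly into throughput.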

Enhanced networking uses single root I/O virtualization (SR-IOV) as a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.


You can check whether enhanced networking is enabled using “ethtool” on Linux.

### Debian 8.1 ###
#  ethtool -i eth0
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

The “vif” driver means that enhanced networking is disabled.

### Amazon Linux 2 ###
# ethtool -i eth0
driver: ixgbevf
version: 4.1.0-k
firmware-version: 
expansion-rom-version: 
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

### Debian ###
# ethtool -i eth0
driver: ena
version: 1.0.0
firmware-version: 
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

The “ixgbevf” and “ena” drivers mean that enhanced networking is enabled. Which driver is used depends on the instance type and OS version.
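The driver check above is easy to script. A small sketch (the helper name and the driver list are mine, taken from the examples above):

```python
# Drivers that indicate enhanced networking, per the ethtool examples above.
ENHANCED_DRIVERS = {"ena", "ixgbevf"}

def enhanced_networking_enabled(ethtool_output: str) -> bool:
    """Classify `ethtool -i` output by its `driver:` line."""
    for line in ethtool_output.splitlines():
        if line.startswith("driver:"):
            return line.split(":", 1)[1].strip() in ENHANCED_DRIVERS
    return False

print(enhanced_networking_enabled("driver: ena\nversion: 1.0.0"))  # True
print(enhanced_networking_enabled("driver: vif\nversion:"))        # False
```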

The first test was run between Ireland (eu-west-1) and US Northern Virginia (us-east-1) on EC2 instances with enhanced networking enabled and disabled. The instance type is r4.large, with 2 vCPUs, 15.25 GiB of RAM, and network bandwidth of “Up to 10 Gbps”.



Ping was used to test latency; iperf3 was used to test bandwidth and jitter for TCP and UDP traffic. The VPCs were not peered, so the EC2 instances communicated via public IPs and AWS Internet Gateways.

# iperf3 -s -p 5001
-----------------------------------------------------------
Server listening on 5001
-----------------------------------------------------------
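The transcripts below show iperf3's human-readable output; for scripted runs, iperf3 can also emit JSON via its --json flag, which is convenient to post-process. A sketch (the field paths reflect iperf3's JSON layout as I understand it; verify against your iperf3 version):

```python
import json

def summarize_iperf3(report_json: str) -> dict:
    """Pull headline numbers out of an `iperf3 --json` report.
    Assumes iperf3's layout: `end.sum_sent`/`end.sum_received` for TCP,
    `end.sum` (with jitter/loss fields) for UDP."""
    end = json.loads(report_json)["end"]
    if "sum_sent" in end:  # TCP run
        return {"send_mbps": end["sum_sent"]["bits_per_second"] / 1e6,
                "recv_mbps": end["sum_received"]["bits_per_second"] / 1e6}
    s = end["sum"]         # UDP run
    return {"mbps": s["bits_per_second"] / 1e6,
            "jitter_ms": s["jitter_ms"],
            "lost_percent": s["lost_percent"]}

# A trimmed UDP report matching the first test's numbers:
sample = '{"end": {"sum": {"bits_per_second": 765e6, "jitter_ms": 0.143, "lost_percent": 0}}}'
print(summarize_iperf3(sample))
```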

Instances without enhanced networking

# ping 54.159.47.240
PING 54.159.47.240 (54.159.47.240) 56(84) bytes of data.
64 bytes from 54.159.47.240: icmp_seq=1 ttl=44 time=68.0 ms
64 bytes from 54.159.47.240: icmp_seq=2 ttl=44 time=68.1 ms

### TCP traffic ###
# iperf3 -c 54.159.47.240 --port 5001 --parallel 128
Connecting to host 54.159.47.240, port 5001
. . . . .
[SUM]   0.00-10.00  sec   717 MBytes   601 Mbits/sec  5697             sender
[SUM]   0.00-10.00  sec   676 MBytes   567 Mbits/sec                  receiver
iperf Done.

### UDP traffic ###
# iperf3 -c 54.159.47.240 --port 5001 -u -b 10g
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   912 MBytes   765 Mbits/sec 0.143 ms  0/173 (0%)  
[  4] Sent 173 datagrams
iperf Done.

Instances with Enhanced networking

# iperf3 -c 23.20.188.12 --port 5001 --parallel 128
. . . . .
[SUM]   0.00-10.00  sec  5.36 GBytes  4.61 Gbits/sec  7205             sender
[SUM]   0.00-10.00  sec  4.52 GBytes  3.88 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 23.20.188.12 --port 5001 -u -b 10g
. . . . .
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  7.99 GBytes  6.87 Gbits/sec 0.012 ms  0/171 (0%)  
[  4] Sent 171 datagrams
iperf Done.

We can see a significant improvement in bandwidth, jitter, and packets per second with enhanced networking. Ping remains the same.

The next test was run between two different but geographically close AWS regions, US Ohio (us-east-2) and US Northern Virginia (us-east-1). The EC2 instance type is the same as in the previous test.


Compared with the previous test, ping results look better due to geographic proximity. Bandwidth, jitter and packets per second are predictably much better for enhanced networking.

The next test was run between two instances within one VPC in US Northern Virginia (us-east-1), communicating via the Internet Gateway and public IPs. The EC2 instance type is the same as in the previous test. The MTU in this case is 1500, and I will compare the results with a connection via private IPs, where jumbo frames can be used.


The “tracepath” utility can be used to check the possible MTU between hosts. The AWS Internet Gateway does not support jumbo frames, so packets will be fragmented if we try to use an MTU larger than 1500. I will also test the influence of enhanced networking here.

# ping 54.159.47.240
PING 54.159.47.240 (54.159.47.240) 56(84) bytes of data.
64 bytes from 54.159.47.240: icmp_seq=1 ttl=63 time=0.542 ms
64 bytes from 54.159.47.240: icmp_seq=2 ttl=63 time=0.541 ms


# tracepath 54.159.47.240
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-172-31-96-1.ec2.internal                           0.139ms pmtu 1500
 1:  no reply
 2:  ec2-54-159-47-240.compute-1.amazonaws.com             0.719ms reached
     Resume: pmtu 1500 hops 2 back 2

Instances without enhanced networking
# iperf3 -c 54.159.47.240 --port 5001 --parallel 128
[SUM]   0.00-10.00  sec   701 MBytes   588 Mbits/sec  39419             sender
[SUM]   0.00-10.00  sec   693 MBytes   581 Mbits/sec                  receiver
iperf Done.

# iperf3 -c 54.159.47.240 --port 5001  -u -b 10g
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.42 GBytes  1.22 Gbits/sec 0.031 ms  177930/185994 (96%) 
[  4] Sent 185994 datagrams
iperf Done.

Instances with Enhanced networking
# iperf3 -c 23.20.188.12 --port 5001 --parallel 128
[SUM]   0.00-10.00  sec  6.29 GBytes  5.40 Gbits/sec  31205             sender
[SUM]   0.00-10.00  sec  5.11 GBytes  4.39 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 23.20.188.12 --port 5001 -u -b 10g
Connecting to host 23.20.188.12, port 5001 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  9.34 GBytes  8.02 Gbits/sec 0.021 ms  17/172 (9.9%)  
[  4] Sent 172 datagrams
iperf Done.

The next test is the same, but via private IPs with MTU 9001.


# ping 172.31.104.176
PING 172.31.104.176 (172.31.104.176) 56(84) bytes of data.
64 bytes from 172.31.104.176: icmp_seq=1 ttl=64 time=0.429 ms
64 bytes from 172.31.104.176: icmp_seq=2 ttl=64 time=0.436 ms

# tracepath 172.31.104.176
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-172-31-104-176.ec2.internal                        0.586ms reached
 1:  ip-172-31-104-176.ec2.internal                        0.506ms reached
     Resume: pmtu 9001 hops 1 back 1

Instances without enhanced networking
# iperf3 -c 172.31.104.176 --port 5001 --parallel 128
Connecting to host 172.31.104.176, port 5001
. . . . .
[SUM]   0.00-10.00  sec   923 MBytes   774 Mbits/sec  16743             sender
[SUM]   0.00-10.00  sec   888 MBytes   745 Mbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.104.176 --port 5001  -u -b 10g
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  2.83 GBytes  2.43 Gbits/sec 0.042 ms  256730/370574 (69%) 
[  4] Sent 370574 datagrams
iperf Done.

Instances with Enhanced networking
# iperf3 -c 172.31.107.106 --port 5001 --parallel 128
[SUM]   0.00-10.02  sec  12.6 GBytes  9.58 Gbits/sec  263             sender
[SUM]   0.00-10.02  sec  10.7 GBytes  8.87 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.107.106 --port 5001 -u -b 10g
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  5.77 GBytes  7.98 Gbits/sec  0.010 ms  51/756031 (0.0067%)  
[  4] Sent 756031 datagrams
iperf Done.

The TCP bandwidth result with jumbo frames is twice as good as with the standard MTU: 8.87 Gbit/s for MTU 9001 vs. 4.39 Gbit/s for MTU 1500 (both with enhanced networking).

The next test compares a cross-region VPC peering connection with a regular Internet connection between instances with enhanced networking enabled.



### Via Internet gateway ###
# ping 18.188.115.105
PING 18.188.115.105 (18.188.115.105) 56(84) bytes of data.
64 bytes from 18.188.115.105: icmp_seq=1 ttl=42 time=93.5 ms
64 bytes from 18.188.115.105: icmp_seq=2 ttl=42 time=93.8 ms
64 bytes from 18.188.115.105: icmp_seq=3 ttl=42 time=93.6 ms


# tracepath 54.217.52.160
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-10-0-0-1.us-east-2.compute.internal                0.073ms pmtu 1500
23:  ec2-54-217-52-160.eu-west-1.compute.amazonaws.com    93.719ms reached
     Resume: pmtu 1500 hops 23 back 22 


# iperf3 -c 54.217.52.160 -p 5001 -P 128
[SUM]   0.00-10.00  sec  3.92 GBytes  3.37 Gbits/sec  11063             sender
[SUM]   0.00-10.00  sec  3.10 GBytes  2.66 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 54.217.52.160 -p 5001 -P 12 -u -b 10g
[SUM]   0.00-10.00  sec  5.96 GBytes  5.12 Gbits/sec  0.190 ms  0/171 (0%)  
iperf Done.


### Via VPC peering ### 
# ping 10.0.0.171
PING 10.0.0.171 (10.0.0.171) 56(84) bytes of data.
64 bytes from 10.0.0.171: icmp_seq=1 ttl=64 time=87.2 ms
64 bytes from 10.0.0.171: icmp_seq=2 ttl=64 time=87.2 ms
64 bytes from 10.0.0.171: icmp_seq=3 ttl=64 time=87.3 ms

# tracepath 172.31.102.120
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-10-0-0-1.us-east-2.compute.internal                0.077ms pmtu 1500
 1:  ip-172-31-102-120.us-east-2.compute.internal         85.885ms reached
     Resume: pmtu 1500 hops 1 back 1

# iperf3 -c 172.31.102.120 -p 5001 -P 128
[SUM]   0.00-10.43  sec  5.38 GBytes  4.43 Gbits/sec  22187             sender
[SUM]   0.00-10.43  sec  4.97 GBytes  4.09 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.102.120 -p 5001 -P 12 -u -b 10g
[SUM]   0.00-10.00  sec  7.62 GBytes  6.55 Gbits/sec  0.336 ms  0/139 (0%)  
iperf Done.

Jumbo frames are not supported for cross-region VPC peering.


Despite the better bandwidth via the peering connection, I got worse results for jitter and packets per second. This confirms that cross-region connectivity is not perfectly consistent, whether the VPCs are peered or communicate via Internet Gateways. VPC peering makes the connection more secure, but it cannot guarantee better performance.

In the next test I compare communication between two VPCs within one AWS region, connected either via AWS Internet Gateways or via VPC peering.



Jumbo frames are used in the peering case.

### Via VPC peering ###
# tracepath 10.1.0.218
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-10-1-0-218.us-east-2.compute.internal              1.057ms reached
 1:  ip-10-1-0-218.us-east-2.compute.internal              1.014ms reached
     Resume: pmtu 9001 hops 1 back 1 

# ping 10.1.0.218
PING 10.1.0.218 (10.1.0.218) 56(84) bytes of data.
64 bytes from 10.1.0.218: icmp_seq=1 ttl=64 time=1.01 ms
64 bytes from 10.1.0.218: icmp_seq=2 ttl=64 time=1.00 ms
64 bytes from 10.1.0.218: icmp_seq=3 ttl=64 time=1.00 ms


# iperf3 -c 10.1.0.218 -p 5001 -P 12
[SUM]   0.00-10.00  sec  11.8 GBytes  10.2 Gbits/sec  426             sender
[SUM]   0.00-10.00  sec  11.7 GBytes  10.0 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 10.1.0.218 -p 5001 -P 12 -u -b 10g
[SUM]   0.00-10.00  sec  11.8 GBytes  10.1 Gbits/sec  0.099 ms  1089039/1542885 (71%)  
iperf Done.


### Via Internet gateway ###
# tracepath 18.191.216.137
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-10-0-0-1.us-east-2.compute.internal                0.079ms pmtu 1500
 1:  no reply
 2:  ec2-18-191-216-137.us-east-2.compute.amazonaws.com    1.124ms reached
     Resume: pmtu 1500 hops 2 back 2 

# ping 18.191.216.137
PING 18.191.216.137 (18.191.216.137) 56(84) bytes of data.
64 bytes from 18.191.216.137: icmp_seq=1 ttl=63 time=1.03 ms
64 bytes from 18.191.216.137: icmp_seq=2 ttl=63 time=1.00 ms
64 bytes from 18.191.216.137: icmp_seq=3 ttl=63 time=0.995 ms

# iperf3 -c 18.191.216.137 -p 5001 -P 12
[SUM]   0.00-10.00  sec  5.69 GBytes  4.89 Gbits/sec  1693             sender
[SUM]   0.00-10.00  sec  5.59 GBytes  4.80 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 18.191.216.137 -p 5001 -P 12 -u -b 10g
[SUM]   0.00-10.00  sec  11.8 GBytes  10.2 Gbits/sec  8.556 ms  0/97 (0%)  
iperf Done.

VPC peering within one region shows much better jitter results. The packets-per-second value is lower, but here we use jumbo frames, which is why TCP bandwidth is twice as high. Ping is the same, so the AWS Internet Gateway does not add latency.

To achieve maximum network performance on instances with enhanced networking, you may need to modify the default operating system configuration. As noted above, most networks, including the Internet, use a 1,500-byte MTU, and this is also the maximum in AWS except within a VPC, where the MTU can be 9,001 bytes; anything over 1,500 bytes is a jumbo frame. Increasing the MTU increases throughput because each packet carries more data at the same packets-per-second rate.

In the next test I compare the same pair of instances for two cases, MTU 1500 and MTU 9001.

root@ip-172-31-107-106:/# ifconfig 
eth0      Link encap:Ethernet  HWaddr 0e:4d:af:fe:41:01  
          inet addr:172.31.107.106  Bcast:172.31.111.255  Mask:255.255.240.0
          inet6 addr: fe80::c4d:afff:fefe:4101/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

# tracepath 172.31.109.49
 1?: [LOCALHOST]   pmtu 1500
 1:  ip-172-31-109-49.ec2.internal                         0.153ms reached
 1:  ip-172-31-109-49.ec2.internal                         0.097ms reached
     Resume: pmtu 1500 hops 1 back 1 


# iperf3 -c 172.31.109.49 -p 5001 -P 128
[SUM]   0.00-10.00  sec  7.48 GBytes  6.42 Gbits/sec  3354             sender
[SUM]   0.00-10.00  sec  6.23 GBytes  5.35 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.109.49 -p 5001 -u -b 10g
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  9.55 GBytes  8.04 Gbits/sec 0.014 ms  30/171 (18%)  
[  4] Sent 171 datagrams
iperf Done.

# sudo ifconfig eth0 mtu 9001

# tracepath 172.31.109.49
 1?: [LOCALHOST]                                         pmtu 9001
 1:  ip-172-31-109-49.ec2.internal                         0.192ms reached
 1:  ip-172-31-109-49.ec2.internal                         0.127ms reached
     Resume: pmtu 9001 hops 1 back 1 

# iperf3 -c 172.31.109.49 -p 5001 -P 128
[SUM]   0.00-10.03  sec  11.1 GBytes  9.50 Gbits/sec  420             sender
[SUM]   0.00-10.03  sec  10.4 GBytes  8.94 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.109.49 -p 5001 -u -b 10g
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  9.29 GBytes  7.98 Gbits/sec 0.007 ms  292/1217272 (0.024%)  
[  4] Sent 1217272 datagrams
iperf Done.

Using jumbo frames shows better TCP bandwidth results, while UDP is about the same.

You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. A cluster placement group packs instances close together inside an Availability Zone, enabling workloads to achieve the low-latency network performance necessary for tightly coupled node-to-node communication.

In the next test I compare network performance between two instances in a cluster placement group and two instances in different Availability Zones. The c5n.large instance type was used, with network bandwidth of up to 25 Gbps.

### Cluster placement group ###
# iperf3 -c 172.31.100.77 -p 5001 -P 12
. . . . . . . . . . .  
[SUM]   0.00-10.00  sec  28.1 GBytes  24.5 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec  28.1 GBytes  24.5 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.100.77 -p 5001 -u -b 25g -P 4
. . . . . . . . . . .  
[SUM]   0.00-10.00  sec  28.8 GBytes  24.7 Gbits/sec 0.023 ms  1837361/3454542 (53%)  
iperf Done.

# ping  172.31.100.77
PING 172.31.100.77 (172.31.100.77) 56(84) bytes of data.
64 bytes from 172.31.100.77: icmp_seq=1 ttl=255 time=0.095 ms
64 bytes from 172.31.100.77: icmp_seq=2 ttl=255 time=0.093 ms
64 bytes from 172.31.100.77: icmp_seq=3 ttl=255 time=0.090 ms

### No Placement group, different AZs ###

# ping 172.31.127.39
PING 172.31.127.39 (172.31.127.39) 56(84) bytes of data.
64 bytes from 172.31.127.39: icmp_seq=1 ttl=255 time=0.768 ms
64 bytes from 172.31.127.39: icmp_seq=2 ttl=255 time=0.764 ms
64 bytes from 172.31.127.39: icmp_seq=3 ttl=255 time=0.754 ms

# iperf3 -c 172.31.127.39 -p 5001 -P 12
. . . . . . . . . . .  
[SUM]   0.00-10.00  sec  28.9 GBytes  24.8 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec  28.8 GBytes  24.8 Gbits/sec                  receiver
iperf Done.

# iperf3 -c 172.31.127.39 -p 5001 -P 12 -u -b 25g
. . . . . . . . . . .  
[SUM]   0.00-10.00  sec  29.7 GBytes  25.5 Gbits/sec 0.044 ms  1471584/3559155 (41%)  
iperf Done.

Results for ping and jitter are much better for the connection within the cluster placement group, but keep in mind that this scheme is not highly available.

Amazon Elastic Block Store (EBS) provides disks that are attached to compute instances over the network. By default, the network is shared between disk I/O operations and other traffic.

An Amazon EBS–optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.


For the next test I used the r3.xlarge instance type, which allows EBS optimization to be enabled or disabled.

I also created an EFS file system with provisioned throughput of 300 MiB/s to simulate network load.

You can use the AWS CLI to check whether your instance is EBS-optimized. In the tests below I generate network load with the “dd” command, writing to a file on a mounted EFS file system. At the same time I use “dd” to write to a file on an EBS volume and monitor network traffic with “nload”.

# aws ec2 describe-instance-attribute --instance-id i-07b867f8****  --attribute ebsOptimized
{
    "InstanceId": "i-07b867f8****", 
    "EbsOptimized": {
        "Value": false
    }
}

# sudo mount -t efs -o tls fs-edbba218:/ /efs
# df -h
Filesystem      Size  Used Avail Use% Mounted on
. . . . . . . . . . . . . . . . . . . . . . . .
/dev/xvda1      100G  7.5G   93G   8% /
127.0.0.1:/     8.0E     0  8.0E   0% /efs

### Not EBS-optimized: network load only ###

# dd if=/dev/zero of=/efs/20G-output bs=1M count=20480 conv=fsync

# nload eth0
Device eth0 [172.31.98.73] (1/2):
==================================================================================
Incoming:
                                                        Curr: 2.64 MBit/s
                                                        Avg: 1.82 MBit/s
Outgoing:
######################################################  Curr: 669.41 MBit/s
######################################################  Avg: 490.73 MBit/s

When I use only EFS, the current network throughput is about 670 Mbit/s.

### Not EBS-optimized: network load + disk load ###

# dd if=/dev/zero of=/efs/20G-output bs=1M count=20480 conv=fsync

# dd if=/dev/zero of=/ebs/output bs=1M count=20480 conv=fsync

# nload eth0
Device eth0 [172.31.98.73] (1/1):
==================================================================================
Incoming:
                                                        Curr: 929.56 kBit/s
                                                        Avg: 770.98 kBit/s
Outgoing:
           ###########################################  Curr: 134.14 MBit/s
           ###########################################  Avg: 90.56 MBit/s

When I create simultaneous load on EBS and EFS (the network), the current network throughput is about 130 Mbit/s, so we can see how network capacity is divided between different workloads.

Next is the same test, but for an EBS-optimized instance. 

### EBS-optimized: network load only ###
# aws ec2 describe-instance-attribute --instance-id i-0e6e6e7f****  --attribute ebsOptimized
{
    "InstanceId": "i-0e6e6e7f****", 
    "EbsOptimized": {
        "Value": true
    }
}

# dd if=/dev/zero of=/efs/20G-output bs=1M count=20480 conv=fsync

# nload eth0
Device eth0 [172.31.99.175] (1/1):
==================================================================================
Incoming:
                                                        Curr: 2.68 MBit/s
                                                        Avg: 1.86 MBit/s
Outgoing:
######################################################  Curr: 666.41 MBit/s
######################################################  Avg: 515.44 MBit/s

### EBS-optimized: network load + disk load ###
# dd if=/dev/zero of=/efs/20G-output bs=1M count=20480 conv=fsync

# dd if=/dev/zero of=/ebs/output bs=1M count=20480 conv=fsync

# nload eth0
Device eth0 [172.31.99.175] (1/1):
==================================================================================
Incoming:
                                                        Curr: 2.69 MBit/s
                                                        Avg: 2.42 MBit/s
Outgoing:
######################################################  Curr: 666.06 MBit/s
######################################################  Avg: 623.51 MBit/s

Here we can see that network performance does not suffer from disk load.

AWS provides a set of tools and features that allow you to improve network performance significantly. Use the newest instance types and AMIs that support the latest enhanced networking drivers, tune the EC2 operating system if performance needs further improvement, and use EBS-optimized instances if you expect disk load. And, of course, keep in mind that distance and the number of intermediate devices affect network performance everywhere, not only in AWS.

