AWS Announces Nine New Compute and Networking Innovations for Amazon EC2
Business Wire | December 03, 2019
Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced nine new Amazon Elastic Compute Cloud (EC2) innovations. AWS already offers more compute and networking capabilities than any other cloud provider, including the most powerful GPU instances, the fastest processors, and the only cloud with 100 Gbps connectivity for standard instances. Today, AWS added to its industry-leading compute and networking portfolio with new Arm-based instances (M6g, C6g, R6g) powered by AWS-designed Graviton2 processors, machine learning inference instances (Inf1) powered by AWS-designed Inferentia chips, a new Amazon EC2 feature that uses machine learning to optimize the cost and performance of Amazon EC2 usage, and networking enhancements that make it easier for customers to scale, secure, and manage their workloads on AWS.

Since their introduction a year ago, Arm-based Amazon EC2 A1 instances (powered by the first generation of AWS Graviton chips) have delivered significant cost savings for customers running scale-out workloads such as containerized microservices and web-tier applications. Encouraged by these savings, and by growing support for Arm across a broad ecosystem of operating system vendors (OSVs) and independent software vendors (ISVs), customers now want to run more demanding workloads with varying characteristics on AWS Graviton-based instances, including compute-heavy data analytics and memory-intensive data stores. These workloads require capabilities beyond what A1 instances support, such as faster processing, higher memory capacity, increased networking bandwidth, and larger instance sizes.