Inside Advanced Scale Challenges|Thursday, July 27, 2017

AWS Embraces FPGAs, ‘Elastic’ GPUs 

(Profit_Image/Shutterstock)

A new instance type rolled out this week by Amazon Web Services is based on customizable field programmable gate arrays that promise to strike a balance between performance and cost as emerging workloads create requirements often unmet by general-purpose processors.

The public cloud giant (NASDAQ: AMZN) announced the new "F1" instance type, along with a separate GPU service, on Wednesday (Nov. 30) during its annual re:Invent conference. The new FPGA-based instance is available now as a developer preview in the US East region of the AWS cloud computing service and is scheduled to be generally available by the end of the year. Pricing was not announced.

Meanwhile, the company also announced a separate preview of its "elastic GPU" service running on its Elastic Compute Cloud. "There [are] times we need just a little GPU but… don't need all the GPU you'd get in a GPU instance," noted AWS CEO Andy Jassy. "Elastic GPUs will allow [customers] to attach GPU to any of our nine instances."
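The service described above exposes GPU attachment as an instance launch option. As a rough sketch only — the option name, the `eg1.medium` size and the AMI ID below are assumptions based on the preview announcement, not confirmed syntax — an Elastic GPU might be requested at launch like so:

```shell
# Hypothetical sketch: launch an EC2 instance with an Elastic GPU attached.
# The --elastic-gpu-specification option and the eg1.medium size name are
# assumptions from the preview announcement; the AMI ID is a placeholder.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m4.large \
    --elastic-gpu-specification Type=eg1.medium
```

The idea is that the GPU is sized independently of the instance, so customers pay only for the graphics capacity they actually need.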

The new F1 instance, for its part, attempts to address the continuing data explosion generated by the Internet of Things, video streaming and other demanding workloads. The F1 instances pair Intel Broadwell Xeon E5-2686 v4 processors with up to 976 Gbytes of memory, up to 4 Tbytes of storage and one to eight FPGAs.

"The FPGAs are dedicated to the instance and are isolated for use in multi-tenant environments," Amazon's Jeff Barr noted in a blog post. Up to eight FPGAs are dedicated to each F1 instance, based on the following specs:

  • Xilinx UltraScale+ VU9P fabricated using a 16-nanometer process.
  • 64 Gbytes of memory on a 288-bit wide bus, including four DDR4 channels.
  • Dedicated PCIe x16 interface to the CPU.
  • Approximately 6,800 digital signal-processing engines.
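Since the developer preview runs in US East, requesting an instance with the specs above would follow the standard EC2 launch flow. A minimal sketch, assuming the instance-type names that accompanied the announcement (one FPGA on the smallest size, eight on the largest) and a placeholder AMI ID:

```shell
# Hypothetical sketch: request an F1 developer-preview instance in US East.
# The f1.2xlarge (1 FPGA) and f1.16xlarge (8 FPGAs) size names are taken
# from the announcement; the AMI ID is a placeholder, and preview access
# required signing up with AWS first.
aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-xxxxxxxx \
    --instance-type f1.2xlarge
```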

For instances with more than one FPGA, Barr said, a dedicated PCIe fabric allows the FPGAs to share the same memory address space and to communicate with one another at up to 12 Gbps in each direction.

AWS also said it would supply an F1 hardware development kit with preconfigured I/O interfaces for linking FPGAs, CPUs and main memory. Sample applications included with the kit address communications between host and FPGA, FPGA and memory as well as links between individual FPGAs, Barr said.
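AWS subsequently distributed this development kit through a public GitHub repository. A sketch of fetching it — the `aws/aws-fpga` repository and its `hdk_setup.sh` script are where the kit was later published, though the exact layout during the preview may have differed:

```shell
# Sketch: fetch the AWS FPGA hardware development kit (HDK).
# The aws/aws-fpga repository is where AWS published the HDK and sample
# applications; hdk_setup.sh configures the build environment and
# requires the Xilinx toolchain to be installed.
git clone https://github.com/aws/aws-fpga.git
cd aws-fpga
source hdk_setup.sh
```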

Amazon partners also said they are working with the public cloud giant to roll out the new F1 instance. The "AWS marketplace will embrace heterogeneous computing principles with the advent of its new F1 instance, opening up a whole new set of data analytics capabilities," predicts Des Wilson, CEO of data analytics specialist Ryft Systems Inc. The Rockville, Md., company, which specializes in hybrid FPGA/x86 configurations, said it is providing APIs that can be used to integrate the F1 instance with various analytics platforms.

About the author: George Leopold

George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.
