TechWeb

Amazon Adds PostgreSQL, Big C3 Servers

Nov 15, 2013 (04:11 AM EST)

Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=240163967


Amazon Web Services CTO Werner Vogels this week unveiled a new database service based on PostgreSQL, a service for mobile developers that allows application streaming, new C3 and I2 instance types, and a cross-region snapshot capture capability for its Redshift data warehouse users.

Amazon officials are fond of tallying a total for their innovations each year. At Amazon's Re:Invent show this week in Las Vegas, they kept adding to the total as the show unfolded. Last year's event drew 6,000 attendees, a mix of developers, partners and customers. This year, registration closed 2.5 weeks before the event, after the maximum of 9,000 attendees had pre-registered.

The creation of the new instance types reflects the growing size of workloads being placed in the AWS Elastic Compute Cloud. C3 will be Amazon's most compute-intensive instance type. It will come in five sizes, powered by modern Intel Xeon E5-2680 v2 chips running at 2.8 GHz. C3 instances come with solid-state disk storage, making for rapid storage access when needed.

The base unit will be the C3 large, with two virtual CPUs, 3.75 GB of RAM and 32 GB of solid-state disk. The two virtual CPUs supply seven of what Amazon calls EC2 compute units, or ECUs. Amazon's ECUs don't relate directly to present-day hardware; instead, they are a unit of measure based on the performance of a 2007-8 Xeon chip running at 1 GHz. The "large" C3 is available at 15 cents an hour.
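
For readers who want to try the new type, here is a minimal sketch of launching a C3 large through Amazon's Python SDK, boto3 (which postdates this article). The AMI ID, region and credentials are placeholder assumptions, not values from Amazon's announcement.

    import boto3

    # Launch one c3.large instance (2 vCPUs, 3.75 GB RAM, 32 GB SSD).
    # The AMI ID and region below are placeholders; AWS credentials are
    # assumed to be configured in the environment.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-00000000",   # placeholder AMI
        InstanceType="c3.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])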

[ Want to learn more about another major new service from AWS? See Amazon Launches Workspaces For Desktop Users. ]

The high end of the C3 instances is equipped with 32 virtual CPUs, 60 GB of RAM and 640 GB of solid-state disk. The 32 virtual CPUs are the equivalent of 108 EC2 compute units. It's available at $2.40 an hour.

"These are highest performance processors on EC2," said CTO Werner Vogels in a Thursday keynote address. For sheer compute cycles, they give the most bang for the buck over any other instance type, he added.

Amazon added an I/O instance type, the I2, which also depends on large amounts of solid state disk to yield very high I/O rates. Vogels said I2s are capable of reaching up to 175,000 read I/O operations per second and 160,000 write I/O operations per second. In turning to SSDs to speed performance, Amazon is following Rackspace, which announced new cloud servers using SSDs just before Re:Invent, and DigitalOcean, a New York cloud startup that boasts fast start-up times and operations with its SSD-equipped virtual servers.

The I/O instance type reflects a concerted effort on Amazon's part to improve I/O performance for customer workloads, something that has proven hard to predict for many customers. The chief culprit is believed to be the variable efficiency of Elastic Block Store, the storage that supports running applications. Customers complain that an application that normally runs fast is unaccountably slow at certain times, reflecting contention for I/O channels.

For more than a year, Amazon has offered PIOPS, or provisioned I/O operations per second. Customers pay more, but they can designate an I/O level they wish to achieve at any time, and Amazon guarantees they will come within 10% of that mark 99.9% of the time. For example, if the customer sought 4,000 I/Os per second, Amazon guarantees at least 3,600 I/O operations per second 99.9% of the time, according to figures released by Miles Ward, senior manager of solutions architecture, in a Re:Invent session Wednesday. I/O operations are clearly difficult to architect with assurance in the complexity of a multi-tenant cloud, and Amazon has left itself a little wiggle room in case its best effort to deliver exactly what's ordered doesn't quite work out.
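
To make the mechanics concrete, here is a minimal sketch of requesting a provisioned-IOPS volume through boto3. The availability zone and volume size are placeholder assumptions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    # Request an EBS volume with 4,000 provisioned IOPS ("io1" volume type).
    # The zone and size below are placeholders.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=400,            # GB
        VolumeType="io1",
        Iops=4000,
    )
    # Per the guarantee described above, at least 3,600 IOPS (90% of 4,000)
    # should be delivered 99.9% of the time.
    print(volume["VolumeId"])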

AWS also added the ability to create Redshift data warehouse snapshots and save them to a region other than the one in which they were taken. The service allows data warehouse applications to operate in multiple Amazon regions without falling out of sync. It also gives Redshift users a disaster recovery strategy based on the simple process of capturing snapshots and keeping them updated in a region outside the one where they're generated. Vogels noted that interest in cross-region updates peaked after Hurricane Sandy blacked out much of the East Coast. Amazon's operations at U.S. East in Ashburn, Va., were not affected, but many data centers lost power and lost the ability to operate when backup systems failed or ran out of fuel.
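
Operationally, the feature amounts to a one-time configuration per cluster. The sketch below uses boto3's Redshift client; the cluster name, regions and retention period are placeholder assumptions.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")
    # Turn on automatic copying of this cluster's snapshots to a second region.
    redshift.enable_snapshot_copy(
        ClusterIdentifier="my-warehouse",  # placeholder cluster name
        DestinationRegion="us-west-2",
        RetentionPeriod=7,                 # days to keep copied snapshots
    )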

In addition, AWS told mobile developers they can use its new AppStream service, which allows them to stream an application to a variety of end-user devices. Amazon officials are trying to attract developers to their service by making it easier for smartphone and tablet application developers to produce an application once, then run it on EC2, using AppStream to reach multiple devices. Amazon will take care of the hardware and of posting updates to the application, while developers concentrate on function and features.

Vogels said availability of the open source PostgreSQL database as a service was one of the most requested additions from the customer base. Amazon already offers Oracle, MySQL and Microsoft's SQL Server. But PostgreSQL adds an open source, ANSI-standard database system, something that MySQL doesn't claim to be. So PostgreSQL can be used with full-blown relational database applications that require strict data consistency.
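
Spinning up the new engine follows the same pattern as Amazon's other RDS databases. Below is a minimal sketch using boto3; the identifier, instance class, storage size and credentials are placeholder assumptions.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    # Create a PostgreSQL instance on the Relational Database Service.
    db = rds.create_db_instance(
        DBInstanceIdentifier="my-postgres-db",  # placeholder name
        DBInstanceClass="db.m1.large",          # placeholder instance class
        Engine="postgres",
        AllocatedStorage=100,                   # GB; placeholder
        MasterUsername="masteruser",            # placeholder credentials
        MasterUserPassword="change-me",
    )
    print(db["DBInstance"]["DBInstanceStatus"])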