Amazon Web Services (AWS) has launched two new EC2 instance types for application and analytics workloads, as well as AWS Data Pipeline, a web service that lets enterprises move data between various systems.
Launching the new Elastic Compute Cloud (EC2) instance types at re:Invent, its first user conference, AWS chief technology officer Werner Vogels said they will help enterprises build high-quality applications quickly.
The two new instance types designed for analytics are the cluster high memory EC2 instance and the high storage EC2 instance.
“For those enterprises that have been struggling to do very large-scale analytics until now, the high storage EC2 instance type is for you,” Vogels said. The new storage instance provides users with 48TB of capacity.
The cluster high memory instance is aimed at enterprises building applications that require large amounts of memory, he added.
Vogels also launched AWS Data Pipeline, which will help enterprises create automated and scheduled data flows.
Data Pipeline is a data integration cloud service for business intelligence and will help organisations automate big data workflows, Vogels said.
“Data Pipeline is pre-integrated with existing AWS data sources and connects with third-party and on-premises sources,” Vogels said.
The service was demonstrated by AWS chief data scientist Matt Wood at the re:Invent conference. Wood showed the simple drag-and-drop interface that allows users to create a data pipeline and schedule data-intensive programs.
According to Kyle Hilgendorf, the principal research analyst at Gartner, the Data Pipeline user interface is “clean and simple”. “I hope the AWS Data Pipeline GUI is a look into the future of the AWS management console,” he said.
The service can also be used to create daily and weekly analytics reports.
“One common customer request we get is ‘how do I automate replication of DynamoDB to Amazon S3’, and Data Pipeline will help enterprises do that,” Wood said.
With disparate data stores in the cloud such as DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon Elastic MapReduce (EMR) and now Amazon Redshift, the company's new data warehouse service, integrating the data from all these sources is a challenge, Wood explained.
“Data Pipeline would help enterprises overcome that big data challenge and consolidate all the disparate data into one place,” he said.
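As a rough illustration, the sketch below builds a minimal pipeline definition for the kind of daily DynamoDB-to-S3 export Wood described, in the JSON object-and-reference shape Data Pipeline definitions use. The table name, bucket path, dates and object fields here are hypothetical and simplified for illustration; a real deployment would submit a full definition through the Data Pipeline API or console.

```python
import json

# Illustrative only: "Orders" and "example-bucket" are made-up stand-ins,
# and the object fields are a simplified sketch of a pipeline definition.
pipeline_definition = {
    "objects": [
        {   # Run the pipeline once a day
            "id": "DailySchedule",
            "type": "Schedule",
            "period": "1 day",
            "startDateTime": "2012-11-30T00:00:00",
        },
        {   # Source: a DynamoDB table
            "id": "OrdersTable",
            "type": "DynamoDBDataNode",
            "tableName": "Orders",
            "schedule": {"ref": "DailySchedule"},
        },
        {   # Destination: a folder in S3
            "id": "OrdersBackup",
            "type": "S3DataNode",
            "directoryPath": "s3://example-bucket/orders/",
            "schedule": {"ref": "DailySchedule"},
        },
        {   # The copy step tying source to destination
            "id": "ExportOrders",
            "type": "CopyActivity",
            "input": {"ref": "OrdersTable"},
            "output": {"ref": "OrdersBackup"},
            "schedule": {"ref": "DailySchedule"},
        },
    ]
}

print(json.dumps(pipeline_definition, indent=2))
```

The point of the format is visible even in this sketch: data nodes, activities and schedules are declared as separate objects and wired together by reference, which is what lets the service automate and re-run the flow on a schedule rather than relying on hand-written scripts.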
21st-century IT architecture and applications
In his keynote, Vogels shared his vision of 21st-century applications and IT architecture. “New applications must be resilient, data-driven, adaptable and controllable,” he said.
Elaborating on these four components, Vogels said that 21st-century IT architecture must be “cost-aware” and be built with cost in mind (controllable). He also said that enterprises must constantly inspect the whole application distribution chain and put everything in logs (data-driven).
“There are always code failures. Don’t treat failures as exceptions; that’s why you should think of resiliency in the applications you build,” Vogels added.
No room for sentiment
And lastly, he urged enterprises to make no assumptions and advised them to be adaptable at a time when technology is changing at a fast pace.
“Don't become attached to your IT infrastructure. Servers won't hug you back,” he said.
He also spoke about cloud security. “When Amazon.com decided to move all its services to AWS cloud, we decided to encrypt everything – the data that is in transit as well as the data that is at rest,” he said.
Enterprises should think about integrating security into their applications from the ground up, according to Vogels.
“In the old world, everything was resource-focused. In the new world, everything is business-focused and enterprises must think of IT architecture and applications from a business point of view,” he said.