In this guest post, Jon Topper, CTO of DevOps consultancy The Scale Factory, flags his four favourite announcements from Amazon Web Services’ (AWS) week-long re:Invent partner and customer conference, which took place last week in Las Vegas.
AWS re:Invent has just wrapped up for 2017. Held in Las Vegas for over 40,000 people, the conference is an impressive piece of event management at scale, bringing together customers, partners and Amazon employees for five days of keynotes, workshops and networking.
I attended re:Invent 2016, and gave myself repetitive strain injury trying to live-tweet the keynotes, barely keeping up with the pace of new service and feature announcements.
This year’s show saw the cloud giant debut a total of 70 new products and services. I’ve picked out four of my favourites, and discuss below what they mean for enterprise technology teams.
Introducing Inter-region VPC peering
At the start of November, AWS announced it would now be possible for users of its Direct Connect service to route traffic to almost every AWS datacentre region on the planet from a single Direct Connect circuit.
On the back of this, I predicted other region restrictions would be lifted, and AWS made good on that expectation at re:Invent this week when it announced support for peering two virtual private clouds (VPCs) across regions.
VPC peering is a mechanism by which two separate private clouds are linked together so traffic can pass between them. We use it to link staging and production networks to a central shared management network, for example, but other use cases include allowing vendors to join their networks with clients’ to enable a private exchange of traffic between them.
Until now, when working with customers who require a presence in multiple regions, we have had to build and configure VPN networking infrastructure to support it, infrastructure which also needs monitoring, patching and so forth.
With inter-region VPC peering, all that goes away: we’ll be able simply to configure a peering relationship between two VPCs in different regions, and Amazon will take care of the networking for us, handling both security and availability.
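One prerequisite worth planning for: AWS will reject a peering request, same-region or inter-region, if the two VPCs’ CIDR blocks overlap. A minimal sketch, using Python’s standard ipaddress module and hypothetical address ranges, shows how to validate this before requesting a peering connection:

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two VPC CIDR blocks do not overlap.

    AWS rejects a VPC peering request when the address ranges
    overlap, so it is worth checking this up front.
    """
    net_a = ipaddress.ip_network(cidr_a)
    net_b = ipaddress.ip_network(cidr_b)
    return not net_a.overlaps(net_b)

# Hypothetical ranges: a management VPC in one region and a
# production VPC in another.
print(can_peer("10.0.0.0/16", "10.1.0.0/16"))    # True: peering possible
print(can_peer("10.0.0.0/16", "10.0.128.0/17"))  # False: ranges overlap
```

Distinct, non-overlapping address plans per VPC are worth establishing early, precisely so that peering relationships like these remain possible later.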
GuardDuty makes its debut at re:Invent
AWS also debuted a new threat detection service for its public cloud offering, called GuardDuty, which customers can use to monitor traffic flow and API logs across their accounts.
This lets users establish a baseline for “normal” behaviour within their infrastructure, and watch for security anomalies. These are reported with a severity rating, and remediation for certain types of events can be automated using existing AWS tools.
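To give a flavour of how that automation might hang together: GuardDuty reports each finding with a type and a numeric severity, commonly bucketed as low, medium and high. The sketch below uses a simplified, hypothetical sample of findings (real findings carry far more detail) to pick out those severe enough to trigger a remediation workflow:

```python
import json

# Simplified, hypothetical GuardDuty-style findings. Real findings
# are much richer; severity is a number, with 7.0 and above
# generally treated as "high".
findings_json = """
[
  {"type": "Recon:EC2/PortProbeUnprotectedPort", "severity": 2.0},
  {"type": "UnauthorizedAccess:EC2/SSHBruteForce", "severity": 5.0},
  {"type": "CryptoCurrency:EC2/BitcoinTool.B", "severity": 8.0}
]
"""

def high_severity(findings, threshold=7.0):
    """Return the types of findings severe enough to auto-remediate."""
    return [f["type"] for f in findings if f["severity"] >= threshold]

findings = json.loads(findings_json)
print(high_severity(findings))  # ['CryptoCurrency:EC2/BitcoinTool.B']
```

In practice a team might wire findings like these into an alerting or remediation pipeline using existing AWS tooling, rather than polling and filtering by hand as above.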
Last year, AWS announced Shield, a managed DDoS protection service made available, for free, to all AWS customers, with CTO Werner Vogels acknowledging that this is something Amazon should have provided a long time ago.
AWS employees often say that security is job zero, and that if they don’t get security right, then there’s no point doing anything else. It’s no surprise, therefore, that we’re seeing more security-focused product releases this year.
AWS GuardDuty is a welcome announcement for both customers and systems integrators. The incumbent vendors in this space offer clumsy solutions, based on past generations of on-premises hardware appliances.
These had the luxury of connecting to a network tap port, where they could passively observe and report on traffic as it went by without affecting network or host performance.
Since network taps aren’t available in the cloud, suppliers have had to resort to host-based agents that capture and forward packets to virtual appliances, affecting host performance and bandwidth bills.
AWS GuardDuty lives in the fabric of the cloud itself, and other vendors will find it hard to compete with this level of access.
It’s likely that over time, existing security vendors will pivot their business model further towards becoming AWS partners, adding value to Amazon services rather than providing their own – a move we’ve seen from traditional hosting providers such as Claranet and Rackspace over the years.
EKS: Kubernetes container support on AWS
In the last three years, Kubernetes has become the de facto industry standard for container orchestration, a major industry hot topic, and an important consideration in the running of microservices architectures.
This open source project was created by engineers at Google, who based their solution on their experiences of operating production workloads at the search giant.
Google has offered a hosted Kubernetes solution for some time as part of their public cloud offering, with Microsoft adding support for Azure earlier this year.
At re:Invent 2017, AWS announced their managed Kubernetes offering, EKS (Amazon Elastic Container Service for Kubernetes).
While this announcement shows AWS playing catch-up against the other providers, research by the Cloud Native Computing Foundation (CNCF) shows that 63% of Kubernetes workloads were already deployed to the AWS cloud, by people who were prepared to build and operate the orchestration software themselves.
EKS will now make this much easier, with Amazon taking care of the Kubernetes master cluster as a service; keeping it available, patched and appropriately scaled.
Like the existing ECS service, this is a “bring your own node” solution: users will need to provide, manage and scale their own cluster of worker instances.
EKS will take care of scheduling workloads onto them, and provide integration with other Amazon services such as those provided for identity management and load balancing.
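To make that division of labour concrete: with EKS running the masters, a team’s day-to-day interaction with the cluster is through ordinary Kubernetes objects. A minimal, illustrative Deployment manifest (names and image are hypothetical) that the EKS-managed control plane would schedule onto the customer-managed workers might look like this:

```yaml
# Illustrative only: EKS hosts the control plane that accepts this
# manifest; the three replicas land on worker EC2 instances that the
# customer provides and manages.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.13
          ports:
            - containerPort: 80
```

The appeal of EKS is that nothing in a manifest like this is AWS-specific, so existing Kubernetes workloads should carry across largely unchanged.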
Expanding the container proposition with Fargate
Alongside EKS, CEO Andy Jassy announced another new container service: AWS Fargate. This one is potentially much more game-changing: Fargate users won’t need to provide their own worker fleet, as these nodes too will be managed entirely by AWS, elevating the container to a first-class primitive on the Amazon platform, on a par with EC2 instances. Initially supporting just ECS, Fargate will add support for Kubernetes via EKS during 2018.
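The shift shows up in how an ECS task is described: under Fargate, CPU and memory are declared at the task level and there is no instance fleet to size. A trimmed, hypothetical task definition (names and values illustrative) might look like this:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.13",
      "portMappings": [{"containerPort": 80}]
    }
  ]
}
```

With a definition like this, capacity planning moves from “how many instances?” to “how much CPU and memory per task?”, which is the essence of the container-as-primitive model.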
It’s an exciting time for AWS users: with the ability to adopt the latest in container scheduling technology, but without the challenges of operating the ecosystem, enterprise tech teams can now spend more of their valuable time generating business value.