AWS CEO: Why we call it re:Invent

AWS CEO Adam Selipsky kicked off AWS re:Invent 2023 in Las Vegas this week after the customary 7am hard-rock tribute band that you didn’t think your ears (or various other parts of you) were ready for – actually, they were really good.

More than 50,000 attendees were in Las Vegas and Selipsky said that the best thing about this event is the variety of use cases in which he sees AWS cloud services being applied. From banking to healthcare to automotive and every other industry you can think of, Selipsky reeled off a list of brand-name customers that everyone would recognise.

It’s not just global enterprises that choose AWS, said the CEO (and he wasn’t just referring to start-ups) – a large number of unicorns are also now embracing cloud services. But it is ‘customers of every size’, he insists… an expert guitar maker in a tiny village in Ireland is using AWS cloud to create a digital passport service to help ship its products around the world.

But why, asks Selipsky, is all this happening on AWS?

He talks about security and breadth of service as key differentiators, but really, the AWS CEO says, it all comes down to the fact that the company thinks differently. In the early days of Amazon, there was a strong focus on shouldering the heavy lifting associated with the datacentres that organisations had to operate, maintain and manage in the pre-cloud era.

Looking back to the Amazon S3 storage service, Selipsky told stories of how AWS has progressed technologies like this with tiering elements, bringing forward new functions, new cost savings and new ways to serve the changing workloads that exist in real-world deployment environments with massive analytics functions applied to them. Amazon S3 Express One Zone was newly showcased at this year’s event – it is all about serving frequently accessed data with the fastest performance and lowest latency.

Why it’s called re:Invent

It is at this point that Selipsky really moved to clarify why the event itself is called re:Invent – because the company keeps trying to change the way some core computing standards work.

Moving to talk about general-purpose computing, Selipsky explained the evolution of the AWS Graviton processor line, which has increased performance with improved energy efficiency over its years of development.

Moving (perhaps inevitably) to generative AI, Selipsky explained the structure of what he now calls the generative AI stack.

At the base, we find the infrastructure technologies that are devoted to training foundation models and creating enough model knowledge to be able to deliver inference. AWS has been working with capitalisation-fanatics Nvidia to bring Graphics Processing Unit (GPU)-driven cloud services forward for some years now. These GPUs need to be deployed into really high-performance clusters (with up to 20,000 GPUs in a single cluster) at a scale comparable to the exaflop-level power we would typically find in a supercomputer.

Clustering capacity

Amazon EC2 Capacity Blocks for ML address the need for users to access surges in capacity when building foundation models (as a key example, although this cloud delivery format applies to other use cases as well), then perhaps pause for a while to analyse the state, structure and worth of the foundation model during its construction (a period when cloud capacity needs are necessarily lower), before increasing capacity again.

AWS Neuron is the company’s Software Development Kit (SDK) for optimising machine learning development – a compiler, runtime and profiling tools that unlock high-performance, cost-effective deep learning (DL) acceleration. Selipsky points to this as a key factor for developers now looking to build foundation models and work directly in the new and emerging generative AI application space.

It supports high-performance training on AWS Trainium-based Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances. 

Amazon Bedrock

Amazon Bedrock made its first appearance at this stage of the show.

Amazon Bedrock is a service for building and scaling generative AI applications – applications that can generate text, images, audio and synthetic data in response to prompts.

Amazon Bedrock gives users the ability to work with foundation models (FMs) – those ultra-large ML models that generative AI relies on – from top AI startup model providers, including AI21, Anthropic and Stability AI, plus access to the Titan family of foundation models developed by AWS. No single model does everything, AWS reminds us… so Amazon Bedrock opens up an array of FMs from leading providers, giving AWS users the flexibility and choice to use the best models for their specific needs.
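The call pattern behind this is straightforward: pick a model ID, send a JSON request body in that model’s expected shape, and parse the JSON response. A minimal sketch in Python using boto3’s `bedrock-runtime` client – the model ID, region and Anthropic-style prompt format here are illustrative assumptions, not details from the keynote:

```python
import json

# NB: model ID, region and request shape are illustrative assumptions.
MODEL_ID = "anthropic.claude-v2"


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an invoke_model request body (Anthropic text-completion shape)."""
    return {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }


def invoke(prompt: str) -> str:
    """Call Amazon Bedrock -- requires AWS credentials and model access."""
    import boto3  # deferred import: building a request needs no AWS SDK

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_request(prompt)),
    )
    return json.loads(response["body"].read())["completion"]
```

Because the body shape differs per provider (Anthropic, AI21, Stability AI and Titan each expect different JSON), swapping models means swapping the request builder, not the service call.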

Guardrails for Amazon Bedrock has now come out of preview and is generally available.

This technology can help developers build customer service applications with generative AI functions (for example) that reduce (and hopefully eradicate) bias and the risk of hallucinations cropping up in working functions. It promotes safe interactions between users and generative AI applications by implementing safeguards customised to specific use cases and an organisation’s responsible AI policies.

According to the company, “AWS is committed to developing generative AI in a responsible, people-centric way by focusing on education and science and helping developers to integrate responsible AI across the AI lifecycle. With Guardrails for Amazon Bedrock, you can consistently implement safeguards to deliver relevant and safe user experiences aligned with your company policies and principles. Guardrails help you define denied topics and content filters to remove undesirable and harmful content from interactions between users and your applications. This provides an additional level of control on top of any protections built into foundation models.”
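In practice, a guardrail is created and versioned separately (with its denied topics and content filters), then attached to model invocations by identifier. A hedged sketch of how those invocation parameters might be assembled – the model and guardrail IDs below are hypothetical placeholders, and the parameter names assume the guardrail options boto3’s `invoke_model` exposes:

```python
import json


def build_guarded_invoke(model_id: str, body: dict,
                         guardrail_id: str,
                         guardrail_version: str = "1") -> dict:
    """Assemble keyword arguments for a bedrock-runtime invoke_model call
    with a pre-created guardrail attached (IDs here are placeholders)."""
    return {
        "modelId": model_id,
        "body": json.dumps(body),
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
    }

# Usage with a real client and a guardrail created in the Bedrock console:
#   client.invoke_model(**build_guarded_invoke(model_id, body, guardrail_id))
```

The point of the design is the layering the company describes: the guardrail filters sit on top of whatever protections the foundation model itself ships with, and can be tightened per use case without touching the model choice.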

A key announcement this week (and one that we hope to analyse separately) is Amazon Q.

Amazon Q

Amazon Q is said to be a new type of generative AI-powered assistant that is specifically for work and can be tailored to a customer’s business. Customers can get fast, relevant answers to pressing questions, generate content and take actions… all informed by a customer’s information repositories, code and enterprise systems. Amazon Q provides information and advice to employees to streamline tasks, accelerate decision-making and problem-solving and help spark creativity and innovation at work. Designed to meet enterprise customers’ stringent requirements, Amazon Q can personalise its interactions to each individual user based on an organisation’s existing identities, roles and permissions.

“Generative AI has the potential to spur a technological shift that will reshape how people do everything from searching for information and exploring new ideas to writing and building applications,” said Dr. Swami Sivasubramanian, vice president of data and AI. “AWS is helping customers harness generative AI with solutions at all three layers of the stack, including purpose-built infrastructure, tools and applications. Amazon Q builds on AWS’s history of taking complex, expensive technologies and making them accessible to customers of all sizes and technical abilities, with a data-first approach and enterprise-grade security and privacy built-in from the start.”

AWS has a lot more in the pipeline and this is ‘just’ the day-one intro – we will dive deeper throughout the week.

Adam Selipsky keynote at AWS re:Invent 2023
Photo by Noah Berger
