Amazon.com chief technology officer (CTO) Werner Vogels has declared 1 November 2018 as the happiest day of his working year at the firm, as that is when the online retail giant finally switched off its Oracle data warehouse.
Vogels made the declaration during the technical keynote at the Amazon Web Services (AWS) re:Invent user conference in Las Vegas, where he provided attendees with a comprehensive insight into the inner workings of the database technologies that underpin the Amazon.com platform.
He revealed that the database setup Amazon.com runs on has undergone a significant overhaul over the past 14 years, prompted by an outage incident on 12 December 2004, which he described as his worst day working to date at Amazon.
The date in question coincided with the cut-off deadline for when Amazon.com customers could take advantage of the firm’s free shipping offer on Christmas orders, making it a very busy time for the retailer.
However, a database bug occurred on 12 December 2004 that took out the entire Amazon.com website for 12 hours, Vogels recalled, marking the system out as a significant single point of failure for its operations.
“That particular failure drove a lot of the development that you now see coming back in AWS,” said Vogels.
The database in question was effectively a “black box” of code, said Vogels, and from that point on the firm started taking steps to reduce the “blast radius” for such incidents, to minimise the number of customers who would be affected should another database issue occur in future.
“The first decision was to start using databases like everyone else in the world,” he said, which meant introducing the concept of partitioning and making use of cell-based architectures. “These create independent units, so if there is a failure it is limited to that unit, not the whole thing,” he said.
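The cell-based approach Vogels describes can be illustrated with a minimal sketch: customers are deterministically assigned to one of a fixed number of independent cells, so an outage in a single cell only affects the customers routed to it. The cell count, function names and customer IDs below are hypothetical, not taken from Amazon's actual implementation.

```python
import hashlib

NUM_CELLS = 8  # hypothetical number of independent cells


def cell_for_customer(customer_id: str) -> int:
    """Deterministically map a customer to one cell, so a failure in
    that cell is limited to the customers assigned to it (a reduced
    "blast radius") rather than taking out the whole system."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_CELLS


# Every request for the same customer lands in the same cell,
# while different customers are spread across the available cells.
cell = cell_for_customer("customer-42")
```

Because the mapping is a pure function of the customer ID, no shared lookup service is needed on the request path, and the unaffected cells keep serving their customers during an incident in another cell.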
This work also led to the realisation that traditional, relational database technologies are ill-equipped to cope with the demands of web-scale and born-in-the-cloud companies, he added.
“These [relational] databases are not cloud native. They are not good, fundamental building blocks for database innovation, and are definitely not [equipped] for really massive scale,” he said.
“So when we started thinking about how can we build a database that would be the future of database innovation, basically we needed to move away from the models we created in the 80s and 90s for databases and go to [a] true cloud-native modern database.”
This work went on to inform the development of Amazon’s own relational database technology called Aurora, which has become the fastest-growing service in AWS history since it went on general release in 2015. “It can support internet workloads that no other database can support,” he added.
As a counterpoint to this, Vogels then went on to discuss his happiest working day of 2018, which is when the firm finally completed its migration onto Amazon’s own Redshift data warehousing technology.
“My happiest day of this year was actually November 1st. This was the moment we switched off one of [the] world’s largest, if not the largest, Oracle data warehouse,” he said.
The move has generated considerable performance benefits for Amazon.com, as the firm is now able to take advantage of the improvements continually being made to AWS platforms in direct response to user feedback, he said.
“[There are] massive improvements in performance because we know how our customers are using our systems and that can drive the way we do innovation forward,” he said.
“Even in the past six months, Redshift has become 3.5 times faster. It is amazing that we can do that because we have that feedback loop in how our customers are using our systems.”
In addition to Vogels’ comments, re:Invent 2018 has seen numerous barbs fired Oracle’s way by various members of the AWS senior leadership team, as the firm made a series of announcements about the expanding capabilities of its growing database portfolio.
These include the launch of the firm’s first blockchain-focused offerings, coupled with the emergence of a number of new features within its incumbent Aurora and DynamoDB services.
The firm’s CEO Andy Jassy made a passing reference to Oracle during the main event keynote, where he talked about “old guard” databases that are expensive to run, carry a high risk of lock-in, and are not built with customers in mind.
“Old guard databases have been a miserable world over the past couple of decades for most enterprises, and that’s because these old guard databases – like Oracle – and SQL Server are expensive, they are high lock-in, they’re proprietary and they’re not customer-focused,” he said.
“Forget the fact both of them will constantly audit and fine you, but they also make capricious decisions overnight that are good for them, but not so good for you. If Oracle decides they want to double the cost of Oracle software running on AWS or Azure, that’s what they do,” he added.
Read more about AWS database technologies
- Months after Oracle CTO Larry Ellison cited Amazon’s use of its database tech as proof of its superiority, AWS CEO confirms its parent company is now migrating off it.
- Amazon Web Services (AWS) has upped the ante in its ongoing war of words with Oracle by taking a series of pot-shots at its rival while showcasing a growing database portfolio at re:Invent 2017.