Porsche revs up Formula E team with NetApp hybrid cloud

Porsche’s electric car racing team has to set up on city street courses in short order, and does so with a portable server rack and hybrid cloud storage managed by NetApp Global File Cache

Porsche’s Formula E team takes its cars to 16 races in a season, and has just 24 hours to set up for events held on city streets, where conditions vary widely and course layouts can change.

During races, data is collected from hundreds of telemetry points on the electric racing cars, processed in real time during the event, and stored for later access and analysis.

Porsche uses a heavily cloud-based model of working: hot data is cached locally to ensure rapid access and to guard against outages, some data sits on NetApp Ontap flash in the cloud, and cold data is held in Blob storage on Microsoft Azure.

Even though the setup is almost entirely cloud-dependent, the model can scale well to enterprise use cases, said Friedemann Kurz, head of motorsport IT at Porsche, speaking at NetApp’s Insight event at the Hockenheimring in Germany last week.

But Formula E is arguably quite a special use case. The format sees races run on city street courses – from Mexico City and LA to Jakarta and Seoul – with constraints on headcount, roles and budgets. The venues add further constraints: the team must arrive – usually on a Wednesday – and set up within 24 hours to run a race over the weekend.

It’s an electric vehicle format, with cars that generate as much as 350kW (476PS). The challenge is not only to drive the race course and take conditions into account, but also to maximise the efficiency of battery power. Batteries weigh 285kg and have 38.5kWh capacity. No charging is allowed during the race, but power can be recouped – in other words, the batteries recharged – under braking, for example, where up to 350kW can be regenerated.
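
A quick arithmetic check of those figures, using nothing beyond the numbers quoted above, shows why recuperation is so central to race strategy: at sustained full power, the pack alone would last only a few minutes. The short Python sketch below simply runs that conversion and endurance calculation, and is illustrative only.

```python
# Sanity check on the Formula E figures quoted above.
# 1 bhp = 745.7 W; 1 metric horsepower (PS) = 735.5 W.

power_kw = 350.0      # peak power, and peak regeneration under braking
battery_kwh = 38.5    # battery capacity
battery_kg = 285.0    # battery mass

print(f"350 kW is about {power_kw * 1000 / 745.7:.0f} bhp "
      f"({power_kw * 1000 / 735.5:.0f} PS)")                  # ~469 bhp (~476 PS)

# At sustained full power, the battery alone empties in minutes,
# hence the emphasis on recovering energy under braking.
print(f"Full-power endurance: about {battery_kwh / power_kw * 60:.1f} minutes")  # ~6.6
print(f"Specific energy: about {battery_kwh * 1000 / battery_kg:.0f} Wh/kg")     # ~135
```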

The aim, said Kurz, is to finish the race with a discharged battery. “You get power back when braking, and that might be useful when you want to overtake a little later in the race,” he said. “You hope to be as efficient as possible and that your competitor isn’t able to do the same thing. Finishing the race with an empty battery means you’ve had maximum efficiency. And, obviously, you don’t want it to run out before that.”

With constraints on setup and practice time, the Porsche team can use data from past races to run simulations prior to the event to familiarise drivers with courses. 

On the Wednesday before a race weekend, the Porsche team arrives and spends a full day on setup, bringing a ruggedised server rack with storage managed by NetApp Global File Cache (GFC), which handles the movement of data between the local hardware, the Azure cloud and the Porsche datacentre.

“Engineers arrive and need to plug in and have data ready for them,” said Kurz. “To prepare for the race, they need to apply data into the car and all the updates regarding the track, surface, to the weather, even the course layout, which can change in these types of venues. It’s not purely about the amount of data. It only maxes out at about 50GB per race. The challenge is to handle it in real time. So, to be in the middle of nowhere and bring in everything our engineers need to work.”

Hybrid cloud is core to the way the Porsche Formula E team works, but the local cache is integral. The cloud enables portability, while the local cache allows rapid access and keeps data available in the event of an outage, which would be near-fatal for such a data-driven operation.

Local caching and the management of data between remote sites and the cloud are handled by NetApp GFC, which consolidates distributed file servers into a global storage pool in the public cloud. This creates a globally accessible file system, built on Azure Blob and Ontap Cloud, that remote locations can use as if it were local.

GFC monitors and learns usage patterns in the data and assigns it to the cloud (Azure Blob, as well as dedicated flash disk in the cloud), to the local cache, and/or to the Porsche datacentre.

“It’s hybrid cloud and it works seamlessly,” said Kurz. “100% of data is in the cloud, but local servers keep 10% of frequently used data on the local machine. And if they need data that’s not held locally, the file structure is still there. If the user clicks and views it, the system knows and starts to make it available.”
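
What Kurz describes maps onto a familiar edge-caching pattern: the full file namespace is always visible at the trackside rack, a small hot subset is kept on local disk, and anything else is pulled from the cloud the moment a user opens it. The sketch below illustrates that general pattern only; it is not NetApp GFC code, and the class, method and parameter names are hypothetical.

```python
import os


class EdgeFileCache:
    """Illustrative fetch-on-demand cache: not NetApp GFC, just the pattern."""

    def __init__(self, cache_dir: str, cloud_store):
        self.cache_dir = cache_dir      # local SSD on the trackside rack
        self.cloud_store = cloud_store  # assumed object with list() and fetch(name, dest)
        os.makedirs(cache_dir, exist_ok=True)

    def list_files(self):
        # The namespace ("the file structure is still there") is always
        # available, even for files whose contents are not cached locally.
        return self.cloud_store.list()

    def open(self, name: str, mode: str = "rb"):
        local_path = os.path.join(self.cache_dir, name)
        if not os.path.exists(local_path):
            # Cache miss: pull the object down from the cloud tier on demand.
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            self.cloud_store.fetch(name, local_path)
        # Cache hit, or freshly fetched: serve from fast local storage.
        return open(local_path, mode)

    def evict_cold(self, keep_fraction: float = 0.1):
        # Keep roughly the most recently used ~10% of cached files locally,
        # mirroring the "10% of frequently used data" figure quoted above.
        cached = [os.path.join(dp, f)
                  for dp, _, fs in os.walk(self.cache_dir) for f in fs]
        cached.sort(key=os.path.getatime, reverse=True)
        for path in cached[max(1, int(len(cached) * keep_fraction)):]:
            os.remove(path)
```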

Azure Blob

Azure Blob forms the bulk of the shared data pool and retains infrequently accessed data. “It is really, really cheap, and compressed, and charges are mainly for any I/O that occurs,” said Kurz.

“GFC moves it there when it makes sense, when it hasn’t been accessed for two weeks, for example.”
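
As an illustration of that kind of age-based policy (and not of how GFC implements it), the same rule could be expressed directly against Azure Blob storage with the standard Python SDK, demoting blobs untouched for two weeks from the Hot tier to the cheaper Cool tier. The connection string and container name below are placeholders, and last-modified time stands in for last access.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobServiceClient

# Placeholders: real values would come from the team's Azure configuration.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("race-telemetry")

cutoff = datetime.now(timezone.utc) - timedelta(weeks=2)

for blob in container.list_blobs():
    # last_modified is used here as a stand-in for "last accessed".
    if blob.last_modified < cutoff:
        # Move cold objects to the cheaper Cool access tier.
        container.get_blob_client(blob.name).set_standard_blob_tier("Cool")
```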

What about response times from the cloud? Blob access time is in the “low seconds”, he said. “It’s not something the user feels as unusual. Overall, the benefit is that it’s low maintenance. One guy on-site is able to connect our operations equipment to the cloud. It doesn’t need a whole team of experts to build.”

Effectively, however, this is a small, SME-sized operation. Does Kurz think it would scale to an enterprise level?

“There’s no reason it can’t be scalable,” he said. “If you’re operating worldwide and have several branches with these kinds of requirements and optimised management of storage hosted in the cloud, the savings can get bigger as it scales. The main benefit is the saving.”
