Hitachi Vantara: Cementing the building blocks of ‘physical AI’

Keen to know what makes the software-hardware fusion point really forge, fizz and flourish, the Computer Weekly Developer Network (CWDN) team sat down with Jason Hardy, CTO of AI at Hitachi Vantara.

The objective here was to examine some key trends in the technology industry as seen by practitioners at the company, known as a player in the data platform market for its competencies in hybrid cloud infrastructure and data analytics for sustainable business growth.

Hardy and team are highly focused on what they call physical AI, i.e. AI systems that can perceive, understand and act in the physical world, integrating digital intelligence with the physicality of robotics and sensors to improve human and industrial collaboration and efficiency.

CWDN: From Hitachi’s perspective, what makes capitalisation-focused company Nvidia’s RTX PRO 6000 Blackwell GPU and RTX PRO server architecture especially suited to the “physical AI” workloads you’re developing (such as digital twins)?

Hardy: As we look at how the industrial AI ecosystem will start to evolve, we need a platform that can support AI inferencing at the very edge. Think of it as a conveyor belt to the cloud. As we get closer to the device, we still need an instrument with a considerable level of compute, but it needs to be factory-friendly and allow AI workloads to impact real-time decision-making.

The RTX Pro 6000 gives us high-powered compute in a format that fits factory or edge environments, meeting the demands of inferencing and industrial AI workloads.
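To make the edge inferencing idea concrete, here is a minimal sketch of the kind of loop such a deployment might run, written in plain PyTorch. The model, input shape and pass/fail decision are illustrative assumptions, not Hitachi’s actual stack:

```python
# Minimal sketch of an edge inferencing loop (illustrative only; the model,
# input shape and decision logic are assumptions, not Hitachi's stack).
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for a trained defect-detection model deployed near the line.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # e.g. a pass/fail classification head
).to(device).eval()

with torch.inference_mode():
    for _ in range(10):  # stand-in for a stream of camera frames
        frame = torch.rand(1, 3, 224, 224, device=device)
        start = time.perf_counter()
        scores = model(frame)
        if device.type == "cuda":
            torch.cuda.synchronize()  # wait for the GPU before timing
        latency_ms = (time.perf_counter() - start) * 1000
        decision = scores.argmax(dim=1).item()
        print(f"decision={decision} latency={latency_ms:.2f} ms")
```

The point of measuring per-frame latency is exactly the real-time decision-making constraint Hardy describes: if the loop cannot keep up with the line rate, the workload does not belong at that tier of the conveyor belt.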

On the digital twin side, physics-based simulations, especially for robotics training, require fidelity to the real world. They need to be photo-real from a rendering perspective and include full physics simulation capabilities. The RTX Pro 6000 is able to do this, helping us train robots in digital twin and Omniverse spaces before moving them into the real world as fully trained models.

As we expand robotics training, site planning and digital physics-based simulations, the RTX Pro 6000 powers all these workloads in a very flexible manner. It provides ray tracing to generate photo-real renderings, while also supporting AI inferencing and physics simulation. It’s a well-rounded platform that allows us to address multiple scenarios across industrial and physical AI workflows as we expand the portfolio.

CWDN: Which frameworks and libraries (e.g., Nvidia Omniverse, PhysX, Modulus, PyTorch, TensorFlow) is Hitachi Vantara prioritising for developers building physical AI applications… and how are you packaging or exposing them to customers in your platform?

Hardy: We’ve built our platform to be extremely flexible. It’s not a black box; it’s designed from a compute perspective to meet customer demands. At Hitachi Vantara, we are adopting many of the listed technologies: Nvidia Omniverse, Modulus and open source frameworks like TensorFlow and PyTorch for various workloads.

From our perspective, it’s about enabling these workloads through compute capabilities, then using our expertise to build Omniverse simulations, integrate physics modules like Modulus and deliver end-to-end solutions. It’s Vantara powering the platform, combined with our One Hitachi partners across industrial sectors.
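For a flavour of the physics-informed approach that frameworks like Modulus package up, here is a toy sketch in plain PyTorch that trains a small network to satisfy the ODE du/dt = -u with u(0) = 1. Modulus’s real APIs are higher level; this is purely illustrative of the underlying pattern:

```python
# Toy sketch of physics-informed training in plain PyTorch (illustrative;
# Nvidia Modulus provides its own higher-level APIs for this pattern).
import torch
import torch.nn as nn

# Small network approximating u(t) for the ODE du/dt = -u, u(0) = 1.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(t)
    # Physics residual via autograd: du/dt + u should vanish everywhere.
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    physics_loss = ((du_dt + u) ** 2).mean()
    # Boundary condition u(0) = 1.
    u0 = net(torch.zeros(1, 1))
    bc_loss = ((u0 - 1.0) ** 2).mean()
    loss = physics_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained net should approximate exp(-t); compare at t = 0.5.
t_test = torch.tensor([[0.5]])
print(net(t_test).item(), torch.exp(-t_test).item())
```

The design point is that the loss encodes the physics itself rather than labelled data, which is why this family of techniques suits digital twins where ground-truth measurements are sparse.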

We offer customers a turnkey capability. It’s not just a compute problem: it’s about how software is consumed and delivered and how we package it all up for practical use.

CWDN: How is Hitachi helping developers scale workloads across GPUs and clusters to achieve real-time performance?

Hardy: We’re heavily focused on scaling AI workloads and ensuring our software can efficiently utilise GPUs. This includes reducing footprint requirements and making existing assets go further to maximise ROI.

We also bring decades of experience and collaborate with partners, including Nvidia, to make general improvements. Our in-house capabilities, product development and external collaborations all focus on improving efficiency and ensuring the best ROI for customers.
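Multi-GPU scaling of this kind typically builds on standard primitives. As one illustration (not Hitachi’s internal tooling), here is a minimal PyTorch DistributedDataParallel sketch that shards a synthetic dataset across ranks, launched with e.g. `torchrun --nproc_per_node=<gpus> train.py`:

```python
# Minimal sketch of data-parallel scaling across GPUs with PyTorch DDP
# (illustrative; dataset and model are synthetic stand-ins).
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    dist.init_process_group("nccl")               # one process per GPU
    rank = dist.get_rank()
    local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun
    torch.cuda.set_device(local_rank)

    # Synthetic stand-in for a sensor dataset.
    data = TensorDataset(torch.randn(4096, 64), torch.randn(4096, 1))
    sampler = DistributedSampler(data)            # shards data across ranks
    loader = DataLoader(data, batch_size=128, sampler=sampler)

    model = DDP(nn.Linear(64, 1).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                  # reshuffle shards per epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()       # grads all-reduced by DDP
            opt.step()
        if rank == 0:
            print(f"epoch {epoch} done")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because DDP overlaps gradient all-reduce with the backward pass, throughput scales close to linearly with GPU count, which is the efficiency-per-asset outcome Hardy is pointing at.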

CWDN: What deployment models are you seeing most in demand… and how is Hitachi enabling developers to integrate these environments into their existing DevOps workflows?

Hardy: Deployment models range from CapEx investment to OpEx, including as-a-service or managed service models. Customers consume technology from us in different ways, so we meet their needs and provide comprehensive offerings across all products – not just GPU or Hitachi iQ infrastructure, but also storage and managed services.

For developers, our platform is not a black box or closed environment. If customers already have DevOps pipelines to support AI, we plug into them… and in some cases help optimise them. If they don’t, we provide frameworks and tools to build from scratch.

It depends on customer needs, but the principle is to align with how each customer operates and innovates as a company, and to integrate with that. As part of One Hitachi, we deliver comprehensive, rounded offerings across industries and customer types.

CWDN: Physical AI depends on real-time data from assets and sensors. How is Hitachi helping developers connect live data streams into digital twins and what ingestion or processing frameworks are you building into your solutions?

Hardy: We’ve been working with sensor-based workloads for decades, since we build products with sensors embedded. We understand the many ways this data is transmitted and consumed.

We draw upon that legacy to understand these environments and data types… and to determine how best to align them with predictive analytics, forecasting, guided repair or other outcomes. We build the sensors and products, integrate their data streams through our frameworks and process them close to the sensors to enable real-time decisions.
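As a sketch of the general pattern (the asset names, transport and alert threshold here are assumptions, not Hitachi’s framework), consider a stream of vibration readings updating a digital twin’s state while a rolling window drives an edge-side decision:

```python
# Illustrative pattern for feeding a live sensor stream into digital twin
# state and deciding near the edge (not Hitachi's actual framework; the
# asset name, metric and threshold are assumptions).
import random
import statistics
import time
from collections import deque

def sensor_stream(n=50):
    """Stand-in for an OT data feed, e.g. readings arriving over MQTT or OPC UA."""
    for _ in range(n):
        yield {"asset": "pump-07",
               "vibration_mm_s": random.gauss(2.0, 0.5),
               "ts": time.time()}

twin_state = {}                # latest known state of each physical asset
window = deque(maxlen=20)      # rolling window for edge-side analytics

for reading in sensor_stream():
    twin_state[reading["asset"]] = reading        # update the twin's state
    window.append(reading["vibration_mm_s"])
    if len(window) == window.maxlen:
        mean = statistics.fmean(window)
        if mean > 2.5:                            # assumed alert threshold
            print(f"{reading['asset']}: vibration trend {mean:.2f} mm/s, "
                  "flag for guided repair")
```

The separation matters: the twin holds the authoritative state for simulation and planning, while the rolling-window check runs close to the sensor so decisions are not gated on a round trip to the cloud.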

We understand both IT and OT and design solutions that bridge the two. We also support third-party integrations and guide our customers through this.

CWDN: Looking ahead, how is Hitachi hoping to ensure that developers’ code and workflows are future-ready, for example, as physical AI extends to edge deployments, modular AI frameworks, or containerised pipelines?

Hardy: We plan with a long-term view: what does 2030 or 2035 look like? Our products run continuously for decades, not on three-year refresh cycles. As physical AI extends, we’re innovating through simulation and platforms like Omniverse, while also bringing in our own software to augment capabilities. Our perspective combines IT and OT, ensuring we plan for future impacts while continuously innovating to improve outcomes.

The market is evolving rapidly, so we align with its direction and help drive it. That’s why we partner closely with Nvidia – for example, with our Hitachi iQ portfolio of AI solutions – because they are shaping what this market looks like. Our mission is to provide our point of view and guidance, applying this mutual understanding to our customer offerings in the physical AI and enterprise AI realm, as well as other markets.

This is new for everyone, so our practices in product development, coding and workflows apply very widely, benefiting all industries.