PyTorch Foundation welcomes Helion & Safetensors
The PyTorch Foundation has welcomed Helion as a PyTorch Foundation-hosted project.
The foundation is an organisation dedicated to promoting open source development of the PyTorch AI ecosystem; PyTorch itself is, of course, an open source machine learning framework for building and training neural networks.
Helion is a Python-embedded domain-specific language (DSL) for authoring machine learning kernels. In other words, Helion acts as a specialised coding toolset that lets developers working in Python create kernels, the low-level instructions that power AI services and software.
Still early in its rise, Helion is designed to compile down to multiple backends (in terms of both hardware and kernel-level substrate technologies) for what the team calls "hardware heterogeneity", currently spanning Triton and TileIR, with more to come.
Abstraction expansion
Helion aims to raise the level of abstraction compared to kernel languages, making it easier to write correct and efficient kernels while enabling more automation in the autotuning process.
“Helion joins the PyTorch Foundation as AI model development expands from training to an inference boom, elevating the importance of serving models at scale. In this landscape, in which hardware, software and model architectures are shifting simultaneously, engineering teams face significant hurdles in cross-platform compatibility. Helion eliminates bottlenecks associated with model architectures and execution, providing developers with radically simpler kernels, automated ahead-of-time autotuning and greater hardware performance portability,” noted the team, in a celebratory technical product statement.
Helion can be viewed today as "PyTorch with tiles" (tiles being smaller, more manageable blocks of data that are processed individually to improve hardware efficiency), or as a higher-level abstraction over kernel languages such as Triton.
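The tiling idea itself can be shown without any GPU machinery. The sketch below is plain Python, not Helion syntax: it processes a vector one fixed-size tile at a time, which is the pattern a tile-based kernel language maps onto parallel hardware.

```python
# Illustrative sketch (plain Python, not Helion syntax): element-wise
# addition computed one tile at a time, the core idea behind
# "PyTorch with tiles".
def add_tiled(x, y, tile_size=4):
    """Add two equal-length sequences, one tile (block) per iteration."""
    assert len(x) == len(y)
    out = [0] * len(x)
    # Each iteration handles one independent tile; on a GPU these tiles
    # would map to parallel thread blocks.
    for start in range(0, len(x), tile_size):
        end = min(start + tile_size, len(x))
        for i in range(start, end):
            out[i] = x[i] + y[i]
    return out

print(add_tiled([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
# → [11, 22, 33, 44, 55]
```

Because each tile is independent, a compiler is free to schedule tiles in parallel and to tune `tile_size` per device, which is where the efficiency gains come from.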
“Helion makes kernel authoring a first-class part of PyTorch. Developers can write custom kernels in high-level Python, autotune them and deploy across multiple hardware – all within PyTorch. As a PyTorch Foundation project, kernel-level performance stays open, portable and accessible to the entire community,” adds the team.
Compared to Triton and other backends, Helion reduces manual coding effort through autotuning.
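Autotuning, in essence, means searching over candidate configurations (such as tile sizes) and keeping the fastest, rather than having an engineer hand-pick them per device. The hypothetical sketch below shows the shape of that search loop; it is an illustration of the general technique, not Helion's actual autotuner.

```python
import time

def autotune(fn, configs, *args):
    """Time fn under each candidate config and return the fastest one.
    A toy version of the search an ahead-of-time autotuner performs."""
    best_cfg, best_time = None, float("inf")
    for cfg in configs:
        start = time.perf_counter()
        fn(*args, **cfg)  # run the kernel once with this config
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg

# Hypothetical usage: pick the best tile size for some kernel `work`.
def work(n, tile_size=4):
    return sum(range(n))  # stand-in for a real kernel launch

best = autotune(work, [{"tile_size": 4}, {"tile_size": 8}], 10_000)
```

A real autotuner averages over many runs and caches the winning configuration per hardware target, so the search cost is paid once, ahead of time.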
Safetensors – tensor serialisation situation
The PyTorch Foundation also welcomed Safetensors as a PyTorch Foundation-hosted project. Developed and maintained by Hugging Face, Safetensors has become one of the most widely adopted tensor serialisation formats in the open source ML ecosystem.
“Safetensors joining the PyTorch Foundation is an important step towards using a safe serialisation format everywhere by default. The new ecosystem and exposure the library will gain from this move will solidify its security guarantees and usability,” noted both Luc Georges and Lysandre Debut, in a release statement. Georges is a Co-Maintainer and Debut is Chief Open Source Officer at Hugging Face.
Safetensors is a secure, high-performance file format for storing and distributing machine learning tensors.
Deep integration
Safetensors integrates deeply with PyTorch and is designed to slot directly into PyTorch-based workflows as a drop-in replacement for torch.load and torch.save. It binds to PyTorch’s native loading APIs, including torch.UntypedStorage, Tensor.narrow and Tensor.to, enabling zero-copy loading and lazy deserialization without changes to existing model code.
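The on-disk layout that makes this lazy loading cheap is simple: an 8-byte little-endian header length, a JSON header mapping each tensor name to its dtype, shape and byte offsets, then the raw tensor bytes. Below is a minimal, stdlib-only sketch of that layout for illustration; it is not the official safetensors library, which should be used in practice.

```python
import json
import struct

def save_safetensors(path, tensors):
    """Write {name: (dtype, shape, raw_bytes)} in the safetensors layout:
    u64-LE header length, JSON header, then concatenated tensor bytes."""
    header, body, offset = {}, b"", 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        body += raw
        offset += len(raw)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)) + header_bytes + body)

def load_tensor_bytes(path, name):
    """Read one tensor's raw bytes by seeking straight to its offsets,
    without deserialising the rest of the file."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        meta = json.loads(f.read(header_len))[name]
        begin, end = meta["data_offsets"]
        f.seek(8 + header_len + begin)
        return meta["shape"], f.read(end - begin)
```

Because the header describes every tensor's byte range up front, a loader can memory-map or seek to just the tensors it needs, which is why pickle-based checkpoints (which must be executed to be read) lose on both speed and safety.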
For distributed PyTorch workloads, Safetensors provides meaningful practical benefits: large models hosted on the Hugging Face Hub in Safetensors format load significantly faster than their pickle-based equivalents, particularly in multi-GPU and multi-node configurations where redundant I/O is a bottleneck.
The format is already the default checkpoint format across the Hugging Face ecosystem, meaning the vast majority of publicly available PyTorch models, including Llama, Gemma, Cohere models and thousands of fine-tuned variants, are all distributed in the Safetensors format.
Developers can learn more about Helion and Safetensors on their respective project pages.

