Open Neural Network Exchange
Open-source artificial intelligence ecosystem
From Wikipedia, the free encyclopedia
The Open Neural Network Exchange (ONNX, /ˈɒnɪks/)[2] is an open-source artificial intelligence ecosystem[3] of technology companies and research organizations that establishes open standards for representing machine learning algorithms and software tools, providing a common format for machine learning models. ONNX is available on GitHub.
| Open Neural Network Exchange (ONNX) | |
|---|---|
| Original authors | Facebook, Microsoft |
| Developer | Linux Foundation |
| Initial release | September 2017 |
| Stable release | 1.20.0[1] / 1 December 2025 |
| Written in | C++, Python |
| Operating system | Windows, Linux |
| Type | Artificial intelligence ecosystem |
| License | initially MIT License; later changed to Apache License 2.0 |
| Website | onnx |
History
ONNX was originally named Toffee[4] and was developed by the PyTorch team at Facebook.[5] In September 2017 it was renamed to ONNX and announced by Facebook and Microsoft.[6] Later, IBM, Huawei, Intel, AMD, Arm and Qualcomm announced support for the initiative.[3]
In October 2017, Microsoft announced that it would add its Cognitive Toolkit and Project Brainwave platform to the initiative.[3]
In November 2019, ONNX was accepted as a graduate project in Linux Foundation AI.[7]
In October 2020 Zetane Systems became a member of the ONNX ecosystem.[8]
Intent
The initiative targets:
Framework interoperability
Enable developers to move machine learning models between different frameworks, which may be used at different stages of the development process, such as training, architecture design, or deployment on mobile devices.[6]
Shared optimization
Provide a common representation that can be used by hardware vendors and other developers to apply optimizations to artificial neural network models across multiple machine learning frameworks.[6]
Contents
ONNX provides definitions of an extensible computation graph model, built-in operators and standard data types, focused on inferencing (evaluation).[6]
Each computation dataflow graph is a list of nodes that form an acyclic graph. Each node is a call to an operator and has inputs and outputs. Metadata documents the graph. The built-in operators are expected to be available in every framework that supports ONNX.[6]
ONNX models can be trained in a single framework, such as PyTorch or TensorFlow, and then exported to ONNX. This format allows models to be transferred from the training framework to other environments for testing or deployment. Once a model is in ONNX format, it can be executed in different runtime systems or on various hardware platforms, such as GPUs or specialized AI accelerators. Using a common format enables the same model representation to be used across multiple systems and frameworks.[9]
See also
- Neural Network Exchange Format
- Comparison of deep learning software
- Predictive Model Markup Language—an XML-based predictive model interchange format
- PicklingTools—an open-source collection of tools for allowing C++ and Python systems to share information quickly and easily.