Open Neural Network Exchange

Open-source artificial intelligence ecosystem

From Wikipedia, the free encyclopedia

The Open Neural Network Exchange (ONNX, /ˈɒnɪks/)[2] is an open-source artificial intelligence ecosystem[3] of technology companies and research organizations that establishes open standards for representing machine learning algorithms and software tools, with the goal of providing a common format for machine learning models. ONNX is available on GitHub.

Open Neural Network Exchange (ONNX)
Original authors: Facebook, Microsoft
Developer: Linux Foundation
Initial release: September 2017
Stable release: 1.20.0[1] / 1 December 2025
Written in: C++, Python
Operating system: Windows, Linux
Type: Artificial intelligence ecosystem
License: initially MIT License; later changed to Apache License 2.0
Website: onnx.ai
Repository

History

ONNX was originally named Toffee[4] and was developed by the PyTorch team at Facebook.[5] In September 2017 it was renamed ONNX and jointly announced by Facebook and Microsoft.[6] Later, IBM, Huawei, Intel, AMD, Arm and Qualcomm announced support for the initiative.[3]

In October 2017, Microsoft announced that it would add its Cognitive Toolkit and Project Brainwave platform to the initiative.[3]

In November 2019, ONNX was accepted as a graduated project in Linux Foundation AI.[7]

In October 2020 Zetane Systems became a member of the ONNX ecosystem.[8]

Intent

The initiative targets:

Framework interoperability

Enable developers to move machine learning models between different frameworks, which may be used at different stages of the development process, such as training, architecture design, or deployment on mobile devices.[6]

Shared optimization

Provide a common representation that can be used by hardware vendors and other developers to apply optimizations to artificial neural network models across multiple machine learning frameworks.[6]

Contents

ONNX provides definitions of an extensible computation graph model, built-in operators and standard data types, focused on inferencing (evaluation).[6]

Each computation dataflow graph is structured as a list of nodes that together form an acyclic graph. Each node is a call to an operator and has inputs and outputs. Metadata documents the graph. The built-in operators are expected to be available in every framework that supports ONNX.[6]

ONNX models can be trained in a single framework, such as PyTorch or TensorFlow, and then exported to ONNX. This format allows models to be transferred from the training framework to other environments for testing or deployment. Once a model is in ONNX format, it can be executed in different runtime systems or on various hardware platforms, such as GPUs or specialized AI accelerators. Using a common format enables the same model representation to be used across multiple systems and frameworks.[9]

See also

References
