TensorFlow Hub
TensorFlow Hub (also styled TF Hub) is an open-source machine learning library and online repository that provides TensorFlow model components, called modules.[2]
| TensorFlow Hub | |
|---|---|
| Developer | Google |
| Initial release | March 6, 2018 |
| Stable release | 0.16.1[1] / January 30, 2024 |
| Written in | Python |
| Operating system | Cross-platform |
| Platform | TensorFlow |
| Type | Machine learning, Artificial intelligence |
| License | Apache-2.0 |
| Website | tensorflow |
| Repository | |
It is maintained by Google as part of the TensorFlow ecosystem and allows developers to discover, publish, and reuse pretrained models for tasks such as computer vision, natural language processing, and transfer learning.[3]
Overview
TensorFlow Hub provides a central platform where developers and researchers can access pre-trained models and integrate them into TensorFlow workflows.[4] Each module encapsulates a computation graph and its trained weights, with standardized input and output signatures. Modules can be loaded with the hub.load() function or, through Keras integration, as a hub.KerasLayer, enabling users to perform transfer learning or feature extraction.[4]
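The module abstraction described above can be illustrated with a minimal, self-contained Python sketch (hypothetical names, no TensorFlow dependency): a module pairs trained weights with a computation behind a fixed call signature, and a loader retrieves a ready-to-use module by handle, analogous in spirit to hub.load().

```python
# Hypothetical sketch of the "module" idea: a module bundles trained
# weights with a computation and exposes a standardized call signature.

class EmbeddingModule:
    """Maps token ids to fixed-size embedding vectors."""

    def __init__(self, weights):
        # weights[i] is the pretrained embedding for token id i.
        self.weights = weights

    def __call__(self, token_ids):
        # Standardized signature: list of ids -> list of vectors.
        return [self.weights[i] for i in token_ids]


# A toy in-memory "repository" keyed by handle, standing in for
# the URLs that identify real TensorFlow Hub modules.
_REPOSITORY = {
    "toy/embedding/1": EmbeddingModule(
        weights=[[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
    ),
}


def load(handle):
    """Fetch a ready-to-use module by handle (mirrors the loading pattern)."""
    return _REPOSITORY[handle]


module = load("toy/embedding/1")
vectors = module([0, 2])  # -> [[0.0, 1.0], [0.5, 0.5]]
```

In real TensorFlow Hub usage the handle is a model URL and the returned object carries a full computation graph (or is wrapped as a Keras layer); the sketch only mirrors the load-then-call pattern.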
History
TensorFlow Hub was announced by Google in March 2018, with the first public version released shortly after. Its introduction coincided with the growing adoption of transfer learning techniques, which increased demand for standardized model packaging. Over time, the hub expanded to include models such as the BERT family, MobileNet, EfficientNet, and the Universal Sentence Encoder.[5]
In 2020, research on “Regret selection in TensorFlow Hub” explored the problem of identifying optimal models for downstream tasks given a large repository of alternatives.[citation needed]
Applications
TensorFlow Hub hosts a variety of models across machine learning domains:[citation needed]
- Natural language processing: BERT, ALBERT, and the Universal Sentence Encoder.
- Computer vision: ResNet, Inception, MobileNet, EfficientNet.
- Speech and audio: spectrogram feature extractors and automatic speech recognition models.
- Multilingual embeddings: cross-lingual and sentence-level representations for machine translation and semantic similarity.
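As an illustration of the semantic-similarity use case above, sentence embeddings produced by an encoder model are typically compared with cosine similarity. The sketch below uses hand-written placeholder vectors (a real encoder such as the Universal Sentence Encoder outputs 512-dimensional embeddings):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # near 1.0 = same direction (similar), near 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder 4-dimensional "sentence embeddings"; a real sentence
# encoder would produce these vectors from input text.
emb_cat = [0.9, 0.1, 0.0, 0.1]
emb_kitten = [0.8, 0.2, 0.1, 0.1]
emb_car = [0.0, 0.1, 0.9, 0.2]

sim_related = cosine_similarity(emb_cat, emb_kitten)
sim_unrelated = cosine_similarity(emb_cat, emb_car)
# Semantically related sentences score higher than unrelated ones.
```

The same comparison works for cross-lingual embeddings, where sentences in different languages that share a meaning map to nearby vectors.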
Modules are used in education, academic research, and industry for prototyping and production deployment.[6]