TensorFlow 2.21 has been released! You'll find a full list of all changes in the release notes on GitHub.
What's new in the LiteRT stack?
At Google I/O '25, we shared a preview of the evolution to LiteRT: a high-performance runtime designed specifically for advanced hardware acceleration. Today, we're excited to announce that these advanced acceleration capabilities have fully graduated into the LiteRT production stack, available now for all developers.
This milestone solidifies LiteRT as the universal on-device inference framework for the AI era, representing a significant leap over TFLite by being:
- Faster: delivers 1.4x faster GPU performance than TFLite, and introduces new, state-of-the-art NPU acceleration.
- Simpler: provides a unified, streamlined workflow for GPU and NPU acceleration across edge platforms.
- Powerful: supports advanced cross-platform GenAI deployment for popular open models like Gemma.
- Flexible: offers first-class PyTorch/JAX support via seamless model conversion.
All of this is delivered while maintaining the same reliable, cross-platform deployment you have trusted since TFLite.
Read the full announcement and get started.
tf.lite
- Several operators now support lower-precision data types for better performance and efficiency, including int8 and int16x8 for the SQRT operator as well as int16x8 for comparison operators.
- Support for smaller data types has been extended across several operators: tfl.cast now supports conversions involving INT2 and INT4, tfl.slice has added support for INT4, and tfl.fully_connected now includes support for INT2.
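The int16x8 scheme mentioned above pairs 8-bit weights with 16-bit activations. A tiny, self-contained sketch can show why the finer 16-bit activation grid reduces round-trip error; the scales below are made up for illustration and this is not LiteRT's actual kernel code.

```python
def quantize(x, scale, bits):
    """Symmetrically quantize a float to a signed integer with `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    q = round(x / scale)
    return max(-qmax - 1, min(qmax, q))  # clamp to the signed integer range

def dequantize(q, scale):
    return q * scale

# Quantize the same activation value on an int8 grid vs an int16 grid.
# Both grids cover roughly [-1, 1]; only the step size differs.
x = 0.1234
scale8 = 2.0 / 255     # int8 step size (example value)
scale16 = 2.0 / 65535  # int16 step size (example value)

x8 = dequantize(quantize(x, scale8, 8), scale8)
x16 = dequantize(quantize(x, scale16, 16), scale16)

err8 = abs(x - x8)
err16 = abs(x - x16)
print(f"int8  round-trip error: {err8:.6f}")
print(f"int16 round-trip error: {err16:.6f}")
```

Weights tolerate 8-bit precision well, but activation error compounds through a network, which is why keeping activations at 16 bits often recovers near-float accuracy at a modest cost.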
Community updates
We've also heard from the community about the need for fixing bugs quickly and providing more timely dependency updates, so we're increasing resources toward these efforts. Going forward, we'll focus exclusively on:
- Security and bug fixes: We're increasing our efforts to quickly address security vulnerabilities and critical bugs, releasing minor and patch versions as required.
- Dependency updates: We will release minor versions as required to support dependency updates, including new Python releases.
- Community contributions: We will continue to review and accept critical bug fixes as relevant from the open source community.
These commitments will apply to tf.data, TensorFlow Serving, TFX, TensorFlow Data Validation, TensorFlow Transform, TensorFlow Model Analysis, TensorFlow Recommenders, TensorFlow Text, TensorBoard, and TensorFlow Quantum.
Note: The TF Lite project has been renamed to LiteRT and is under active development separately.
While TensorFlow continues to provide stability for production, we recommend exploring our latest updates for Keras 3, JAX, and PyTorch for new work in Generative AI.