OpenXLA is an open ecosystem of performant, portable, and extensible machine learning (ML) infrastructure components that simplify ML development by defragmenting the tools between frontend frameworks and hardware backends. It is built by industry leaders in AI modeling, software, and hardware.
How is the community using OpenXLA? This page consolidates links to repositories and projects using OpenXLA to provide inspiration and code pointers!
Have a project that uses OpenXLA? Send us a pull request and add it to this page!
## Frameworks
- JAX is an ML framework with a
NumPy-like API for writing high-performance ML models
- PyTorch/XLA provides a bridge from PyTorch
to OpenXLA and StableHLO
- TensorFlow is a long-standing ML
framework with a large ecosystem
- Reactant.jl is a framework for
optimizing and executing Julia code via OpenXLA, StableHLO, and
MLIR
- GoMLX is an ML framework for the Go language
- gopjrt is a raw XlaBuilder+PJRT wrapper for Go,
tested on CPU, GPU, and TPU
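As a concrete taste of the NumPy-like API mentioned in the JAX entry above, here is a minimal sketch of jit-compiling a function through XLA (the function name, shapes, and values are illustrative, not taken from any listed project):

```python
import jax
import jax.numpy as jnp

# jax.jit traces the function once and compiles it via XLA
@jax.jit
def predict(w, x):
    # NumPy-like operations: matrix multiply plus an elementwise nonlinearity
    return jnp.tanh(x @ w)

x = jnp.ones((2, 3))
w = jnp.ones((3, 4))
y = predict(w, x)
print(y.shape)  # (2, 4)
```

The same code runs unchanged on CPU, GPU, or TPU; JAX dispatches to whichever XLA backend is available.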
## PJRT Plugins
- libTPU allows models to execute on Google's Cloud TPUs
## Edge Compilation
- Google AI Edge uses StableHLO as an input format
to deploy to mobile devices using LiteRT
- AI Edge Torch exports
PyTorch models for mobile deployment via StableHLO
- IREE uses StableHLO as an input format to deploy across
a range of devices and accelerators
- IREE also includes a PJRT plugin
- StableHLO to CoreML
converts StableHLO models to Apple's CoreML
for deploying to Apple devices
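To see what StableHLO looks like as the interchange format these tools consume, one option (among several) is to lower a JAX function and print its StableHLO module. This is a sketch assuming a reasonably recent JAX install; the function itself is arbitrary:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) + 1.0

# Lower the jitted function for a sample input and emit its
# StableHLO representation as an MLIR module
lowered = jax.jit(f).lower(jnp.ones((4,), dtype=jnp.float32))
stablehlo_module = lowered.compiler_ir(dialect="stablehlo")
print(stablehlo_module)  # textual MLIR using stablehlo.* ops
```

The printed module is what a consumer such as LiteRT, IREE, or the StableHLO-to-CoreML converter would take as input.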
## Tooling and Visualization
- Model Explorer offers
hierarchical graph visualization with support for StableHLO models