Convert Machine Learning Code Between Frameworks
Website | Docs | Demos | Design | FAQ
Ivy enables you to:

- Convert framework-specific code to a target framework of choice with `ivy.transpile`
- Trace efficient, fully-functional graphs from functions with `ivy.trace_graph`
The easiest way to set up Ivy is to install it using pip:
```bash
pip install ivy
```
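Once installed, a quick import is enough to confirm the setup (a minimal check; `ivy.__version__` is assumed to be exposed, as in most releases):

```python
import ivy

# A successful import confirms the install; the version attribute is an assumption
print(ivy.__version__)
```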
Alternatively, you can pull the Docker image for Ivy:

```bash
docker pull ivyllc/ivy:latest
```
You can also install Ivy from source if you want to take advantage of
the latest changes, but we can’t ensure everything will work as
expected 😅
```bash
git clone https://github.com/ivy-llc/ivy.git
cd ivy
pip install --user -e .
```
If you want to set up testing and various frameworks, it's probably best
to check out the Setting Up
page, where OS-specific and IDE-specific instructions are available!
These are the frameworks that `ivy.transpile`
currently supports as conversion sources and targets.
We're working hard on adding support for more frameworks; let us know on Discord if there are source or target frameworks that would be useful for you!
Framework | Source | Target |
---|---|---|
PyTorch | ✅ | 🚧 |
TensorFlow | 🚧 | ✅ |
JAX | 🚧 | ✅ |
NumPy | 🚧 | ✅ |
Ivy's transpiler allows you to convert code between different ML frameworks. Have a look at our Quickstart notebook to get a brief idea of the features!
Beyond that, there are a few more examples further down this page 👇 covering a number of models and libraries transpiled between PyTorch, JAX, TensorFlow, and NumPy.
Here are some examples to help you get started using Ivy! The examples page also features a wide range of
demos and tutorials showcasing more use cases for Ivy.
```python
import ivy
import torch
import tensorflow as tf

def torch_fn(x):
    a = torch.mul(x, x)
    b = torch.mean(x)
    return x * a + b

# Convert the torch function to tensorflow
tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")

# The converted function can be called with tensorflow tensors
tf_x = tf.convert_to_tensor([1., 2., 3.])
ret = tf_fn(tf_x)
```
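Continuing the example above, a quick way to sanity-check a conversion is to compare the converted function's output against the original (a minimal sketch; the tolerance is an arbitrary choice):

```python
import numpy as np

# Run the original torch function on the same input values
torch_ret = torch_fn(torch.tensor([1., 2., 3.]))

# The transpiled tensorflow output should agree up to floating-point tolerance
np.testing.assert_allclose(torch_ret.numpy(), ret.numpy(), rtol=1e-5)
```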
```python
import ivy
import torch

def torch_fn(x):
    a = torch.mul(x, x)
    b = torch.mean(x)
    return x * a + b

# Trace an efficient graph of the torch function
torch_x = torch.tensor([1., 2., 3.])
graph = ivy.trace_graph(torch_fn, to="torch", args=(torch_x,))

# The traced graph can be called like the original function
ret = graph(torch_x)
```
Let’s take a look at how Ivy works as a transpiler in more detail to get an idea of why and where to use it.
When is Ivy's transpiler useful?
If you want to use building blocks published in other frameworks (neural
networks, layers, array computing libraries, training pipelines…),
integrate code developed in various frameworks, or simply
migrate code from one framework to another (or even between versions of the same framework!), the transpiler is
definitely the tool for the job! You can use the converted code just
as if it were code originally developed in that framework, applying
framework-specific optimizations or tools and instantly exposing your
project to all of the unique perks of a different framework.
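As a concrete illustration of applying framework-specific tooling after conversion: code transpiled to JAX is ordinary JAX code, so something like `jax.jit` should compose with it directly (a minimal sketch; `torch_fn` here is a made-up toy function):

```python
import ivy
import jax
import jax.numpy as jnp
import torch

# A toy torch function, used only to illustrate the workflow
def torch_fn(x):
    return torch.nn.functional.relu(x) * 2.0

# Convert to JAX, then apply JAX-native tooling to the converted code
jax_fn = ivy.transpile(torch_fn, source="torch", target="jax")
jit_fn = jax.jit(jax_fn)

ret = jit_fn(jnp.array([-1., 2., 3.]))
```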
Ivy’s transpiler allows you to use code from any other framework (or
from any other version of the same framework!) in your own code, by just
adding one line of code.
This way, Ivy makes all ML-related projects available for you,
independently of the framework you want to use to research, develop, or
deploy systems. Feel free to head over to the docs for the full API
reference, but the functions you’d most likely want to use are:
```python
# Converts framework-specific code to a target framework of choice. See usage in the documentation
ivy.transpile()

# Traces an efficient fully-functional graph from a function, removing all wrapping and redundant code. See usage in the documentation
ivy.trace_graph()
```
`ivy.transpile` will eagerly transpile if a class or function is provided:

```python
import ivy
import torch
import tensorflow as tf

def torch_fn(x):
    x = torch.abs(x)
    return torch.sum(x)

x1 = tf.convert_to_tensor([1., 2.])

# Transpilation happens eagerly
tf_fn = ivy.transpile(torch_fn, source="torch", target="tensorflow")

# tf_fn is now tensorflow code and runs efficiently
ret = tf_fn(x1)
```
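Since eager transpilation covers classes as well as functions, the same call should work on a `torch.nn.Module` subclass (a minimal sketch; `Normalize` is a hypothetical module, and the exact form of the returned class may differ):

```python
import ivy
import torch
import tensorflow as tf

# A hypothetical torch module, used only to illustrate class transpilation
class Normalize(torch.nn.Module):
    def forward(self, x):
        return (x - x.mean()) / x.std()

# Classes are transpiled eagerly, just like functions
TfNormalize = ivy.transpile(Normalize, source="torch", target="tensorflow")

# The converted class can then be instantiated and called with tensorflow tensors
layer = TfNormalize()
out = layer(tf.convert_to_tensor([1., 2., 3.]))
```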
`ivy.transpile` will lazily transpile if a module (library) is provided:

```python
import ivy
import kornia
import tensorflow as tf

x2 = tf.random.uniform((5, 3, 4, 4))

# Module is provided -> transpilation happens lazily
tf_kornia = ivy.transpile(kornia, source="torch", target="tensorflow")

# The transpilation is initialized here, and this function is converted to tensorflow
ret = tf_kornia.color.rgb_to_grayscale(x2)

# Transpilation has already occurred, the tensorflow function runs efficiently
ret = tf_kornia.color.rgb_to_grayscale(x2)
```
`ivy.trace_graph` can be used eagerly or lazily. If you pass the necessary arguments for function tracing, the graph tracing step will
happen instantly (eagerly). Otherwise, the graph tracing
will happen only when the returned function is first invoked.
```python
import ivy
import jax

ivy.set_backend("jax")

# Simple JAX function to trace
def test_fn(x):
    return jax.numpy.sum(x)

x1 = ivy.array([1., 2.])

# Arguments are available -> tracing happens eagerly
eager_graph = ivy.trace_graph(test_fn, to="jax", args=(x1,))

# eager_graph now runs efficiently
ret = eager_graph(x1)

# Arguments are not available -> tracing happens lazily
lazy_graph = ivy.trace_graph(test_fn, to="jax")

# The traced graph is initialized, tracing will happen here
ret = lazy_graph(x1)

# Tracing has already happened, traced graph runs efficiently
ret = lazy_graph(x1)
```
If you want to learn more, you can find more information in the Ivy as
a transpiler section of the
docs!
You can find Ivy's documentation on the Docs page.
We believe that everyone can contribute and make a difference. Whether
it’s writing code, fixing bugs, or simply sharing feedback,
your contributions are definitely welcome and appreciated 🙌
Check out all of our Open Tasks,
and find out more in our Contributing guide
in the docs! Or, to dive straight into a useful task, look for any failing tests on our Test Dashboard!
Join our growing community on a mission to make conversions between frameworks simple and accessible to all!
Whether you are a seasoned developer or just starting out, you’ll find a place here! Join the Ivy community on
our Discord 👾 server, which is the
perfect place to ask questions, share ideas, and get help from both
fellow developers and the Ivy Team directly.
See you there!
If you use Ivy for your work, please don’t forget to give proper credit
by including the accompanying paper
📄 in your references. It’s a small way to show appreciation and help
to continue to support this and other open source projects 🙌
```bibtex
@article{lenton2021ivy,
  title={Ivy: Templated deep learning for inter-framework portability},
  author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
  journal={arXiv preprint arXiv:2102.02886},
  year={2021}
}
```