When Google first talked about its Tensor Processing Unit, the strategy seemed clear enough: Speed up machine learning at scale by throwing custom hardware at the problem. Use commodity GPUs to train machine-learning models; use custom TPUs to deploy those trained models.

The new, second-generation TPU is designed to handle both of those duties, training and deployment, on the same chip. That new generation is also faster, both on its own and when scaled out with others in what’s called a “TPU pod.”

But faster machine learning isn’t the only benefit of such a design. The TPU, especially in this new form, is another piece of the end-to-end machine-learning pipeline Google is assembling, covering everything from intake of data to deployment of the trained model.

Machine learning: A pipeline runs through it

One of the largest obstacles to using machine learning right now is how tough it can be to put together a full pipeline for the data—intake, normalization, model training, and model deployment. The pieces are still highly disparate and uncoordinated. Companies like Baidu have talked about wanting to create a single, unified, unpack-and-go solution, but so far that’s just a notion.
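To make those stages concrete, here is a minimal, hypothetical sketch of such a pipeline in TensorFlow. The file name, feature scaling, and model are placeholders invented for illustration, not anything Google ships.

```python
import tensorflow as tf

# Intake: read labeled records from a CSV file (the path is a placeholder).
dataset = tf.data.experimental.make_csv_dataset(
    "training_data.csv", batch_size=32, label_name="label", num_epochs=1)

# Normalization: cast features to floats and scale them into a common range.
def normalize(features, label):
    values = [tf.cast(v, tf.float32) / 255.0 for v in features.values()]
    return tf.stack(values, axis=-1), label

dataset = dataset.map(normalize)

# Model training: a small feed-forward network, purely illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=5)

# Deployment: export a SavedModel that a serving system can load and query.
model.save("exported_model")
```

Each of those steps today tends to live in a different tool, which is exactly the fragmentation the article describes.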

Google already has major pieces of that puzzle in hand: the TensorFlow framework and the machine-learning services in its cloud.

TensorFlow is Google’s own framework for building and training machine-learning models, and it is the one TPUs are built to accelerate. But it’s only one framework of many, each suited to different needs and use cases. So TPUs’ limitation of supporting just TensorFlow means you have to use it, regardless of its fit, if you want to squeeze maximum performance out of Google’s hardware. Another framework might be more convenient to use for a particular job, but it might not train or serve predictions as quickly, because it’ll be consigned to running only on GPUs.
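As a rough illustration of what squeezing that performance out of a TPU looks like in practice, the sketch below uses TensorFlow’s distribution APIs (from later TensorFlow 2.x releases, not the TPU’s 2017-era tooling) to point a throwaway model at a Cloud TPU. The point is that this path runs through TensorFlow itself, which is why other frameworks can’t take it.

```python
import tensorflow as tf

# Locate the TPU worker; an empty address finds a Colab- or Cloud-provisioned TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Any model built inside this strategy's scope is compiled for the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(...) and model.predict(...) now dispatch their work to the TPU;
# the same model written in another framework would be limited to CPUs or GPUs.
```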

None of this rules out the possibility that Google could introduce other hardware down the line to give frameworks not directly sponsored by Google an edge as well.

But for most people, the inconvenience of TPUs accelerating only TensorFlow work will be far outweighed by the convenience of having a managed, cloud-based, everything-in-one-place pipeline for machine-learning work. So, like it or not, prepare to use TensorFlow.