Running Distributed TensorFlow on Compute Engine

1 hour 15 minutes | Credits: 7


Google Cloud Self-Paced Labs


This lab shows you how to use a distributed configuration of TensorFlow 1.x on multiple Compute Engine instances to train a convolutional neural network model on the MNIST dataset. MNIST is a collection of labeled handwritten digits that is widely used in machine learning as a training set for image recognition.

TensorFlow is Google's open source library for machine learning, developed by researchers and engineers in Google's Machine Intelligence organization, which is part of Research at Google. TensorFlow is designed to run on multiple computers so that training workloads can be distributed. In this lab you will run TensorFlow 1.x on multiple Compute Engine virtual machine instances to train the model. You can use Cloud Machine Learning Engine instead, which manages resource allocation for you and can host your trained models; we recommend Cloud ML Engine unless you have a specific reason not to use it. You can learn more in the lab that uses Cloud ML Engine and Cloud Datalab.
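To make the distributed setup concrete, here is a minimal sketch of the cluster specification that TensorFlow 1.x uses to tie the VM instances together. The hostnames below are hypothetical placeholders; a real cluster would list the addresses of your own Compute Engine instances.

```python
# Hypothetical cluster layout for a TF 1.x distributed training job.
# "ps" (parameter server) tasks hold the shared model weights;
# "worker" tasks compute gradients; "master" coordinates the session.
cluster_spec = {
    "master": ["master-0:2222"],
    "ps": ["ps-0:2222"],
    "worker": ["worker-0:2222", "worker-1:2222"],
}

# On each VM, TensorFlow 1.x would turn this dict into a running server, e.g.:
#   server = tf.train.Server(tf.train.ClusterSpec(cluster_spec),
#                            job_name="worker", task_index=0)
print(sorted(cluster_spec))  # the three job types in this cluster
```

Each VM starts a server with its own `job_name` and `task_index`, so every instance knows both the full cluster layout and its own role in it.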

The following diagram describes the architecture for running a distributed configuration of TensorFlow 1.x on Compute Engine, and using Cloud ML Engine with Cloud Datalab to execute predictions with your trained model.


This lab shows you how to set up and use this architecture, and explains some of the concepts along the way. In this lab, you will:


  • Set up Compute Engine to create a cluster of virtual machines (VMs) to run TensorFlow 1.x.

  • Learn how to run the distributed TensorFlow 1.x sample code on your Compute Engine cluster to train a model.

  • Deploy the trained model to Cloud ML Engine to create a custom API for predictions and then execute predictions using a Cloud Datalab notebook.
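Once the trained model is deployed, predictions are requested by sending JSON instances to the model's endpoint. The sketch below builds such a request body in pure Python; the input key name `"image"` is an assumption and depends on how the model's serving signature was exported.

```python
import json

# One flattened 28x28 MNIST image (784 pixel values scaled to [0, 1]);
# all zeros here, purely for illustration.
image = [0.0] * 784

# Hypothetical request body for the deployed model's prediction API.
# The key "image" is assumed; match it to your exported serving signature.
request_body = {"instances": [{"image": image}]}
payload = json.dumps(request_body)

print(len(image))  # 784 = 28 * 28 input features per digit
```

A Cloud Datalab notebook would send this payload to the Cloud ML Engine prediction endpoint and receive the predicted digit probabilities in response.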
