
TensorFlow on NVIDIA Jetson TX2 Development Kit

In this article, we will work through installing TensorFlow v1.0.1 on the Jetson TX2. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT, NVIDIA's runtime for running trained models. TensorRT is what is called an “Inference Engine”: the idea is that large machine learning systems train models elsewhere, and the trained models are then transferred over and “run” on the Jetson.

However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. The first reason is that TensorFlow binaries aren’t generally available for ARM-based processors like the Tegra in the Jetson TX2. The second reason is that actually compiling TensorFlow takes more system resources than are normally available on the Jetson TX2. The third reason is that TensorFlow itself is changing rapidly (it’s only about a year old), and the experience has been a little like building on quicksand.

In this article, we’ll go over the steps to build TensorFlow v1.0.1 on the Jetson TX2. The build takes about three and a half hours.

Note: Please read through this article before starting installation. This is not a simple installation; you may want to tailor it to your needs.

Preparation

This article assumes that JetPack 3.0 is used to flash the Jetson TX2. At a minimum, install:

  • L4T 27.1, an Ubuntu 16.04 64-bit variant (aarch64)
  • CUDA 8.0
  • cuDNN 5.1.10

TensorFlow will use CUDA and cuDNN in this build.
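
As a quick sanity check before starting, you can verify that CUDA and cuDNN are visible. The paths below assume a standard JetPack install and may differ on your system:

$ /usr/local/cuda/bin/nvcc --version
$ ls /usr/lib/aarch64-linux-gnu/libcudnn*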

In order to get TensorFlow to compile on the Jetson TX2, a swap file is needed for virtual memory. A good amount of disk space (> 6 GB) is also needed to actually build the program. If you’re unfamiliar with setting up the Jetson TX2 this way, the procedure is similar to that described in the article: Jetson TX1 Swap File and Development Preparation.
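
If you only need the short version, a minimal swap file setup looks something like the sketch below. The 8 GB size and the /mnt/swapfile location are assumptions; adjust them for your storage:

$ sudo fallocate -l 8G /mnt/swapfile
$ sudo chmod 600 /mnt/swapfile
$ sudo mkswap /mnt/swapfile
$ sudo swapon /mnt/swapfile
$ swapon -s
# the last command confirms the swap file is active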

There is a repository on the JetsonHacks account on Github named installTensorFlowTX2. Clone the repository and switch over to that directory.

$ git clone https://github.com/jetsonhacks/installTensorFlowTX2
$ cd installTensorFlowTX2

Prerequisites

There is a convenience script which will install the required prerequisites such as Java and Bazel. The script also patches the source files appropriately for ARM 64.

$ ./installPrerequisites.sh

From the video, installation of the prerequisites takes a little over 30 minutes; the exact time will depend on your internet connection speed.
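
Before moving on, you can confirm that the prerequisites landed correctly by checking the Java and Bazel versions; the exact version numbers will depend on what the script installed:

$ java -version
$ bazel version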

Building TensorFlow

First, clone the TensorFlow repository and patch it for ARM 64 operation:

$ ./cloneTensorFlow.sh

Then set up the TensorFlow environment variables. This is a semi-automated way of running the TensorFlow configure script. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script; the locations are determined by the JetPack installation.

$ ./setTensorFlowEV.sh
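
For reference, the configure step boils down to answering TensorFlow's configure questions through environment variables. The sketch below shows the kind of values involved on a JetPack 3.0 install; the names come from the upstream configure script and are illustrative, not copied from setTensorFlowEV.sh, so check the script itself for the exact settings:

$ export TF_NEED_CUDA=1
$ export TF_CUDA_VERSION=8.0
$ export CUDA_TOOLKIT_PATH=/usr/local/cuda
$ export TF_CUDA_COMPUTE_CAPABILITIES=6.2
# the cuDNN version and install path are also set; see the script for those values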

We’re now ready to build TensorFlow:

$ ./buildTensorFlow.sh

This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:

$ ./packageTensorFlow.sh

The wheel file will be placed in the $HOME directory: tensorflow-1.0.1-cp27-cp27mu-linux_aarch64.whl
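
Under the hood, packaging follows TensorFlow's usual pip-package flow; a sketch of the standard upstream command is shown below (the repository script may differ in detail):

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package $HOME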

Installation

Pip can be used to install the wheel file:

$ pip install $HOME/tensorflow-1.0.1-cp27-cp27mu-linux_aarch64.whl

Validation

To validate the installation, you can go through the validation procedure on the TensorFlow installation page.
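
A quick smoke test from the command line (a minimal check, not the full procedure from the TensorFlow docs):

$ python -c "import tensorflow as tf; print(tf.__version__)"
# should print 1.0.1 with no import errors if the wheel installed correctly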
