
Skydio R1 – Jetson TX1 Based Self-Flying Camera


So you’ve been working on your Jetson TX1 project for a while, and need a little inspiration. It turns out that there’s a group of people at Skydio that put together a very impressive product around the Jetson TX1. Here’s a very nice and informative preview video of the Skydio R1 in action from Tested.com. Looky here:

Technology!

The Skydio R1 is state of the art in its implementation of tracking and path planning algorithms. The hardware platform consists of a Jetson TX1 module with six pairs of stereo cameras. Including the 4K video imager, that makes for 13 (!) cameras altogether. Here’s a look inside:

Skydio cutaway showing the Jetson TX1 module

There’s lots of good stuff inside (the Jetson TX1 is the circuit board in the middle), but the ‘secret sauce’, so to speak, is the software stack which runs the drone. Advances in machine learning over the last few years enable the autonomous drone to use eight different algorithms to track a person in a scene, including the difficult “fly in front of me and get the shot”.

The stereo cameras help build a 360 degree view of the surrounding environment. There is a pair for each of the front, back, left, right, top and bottom of the aircraft. These camera views were used to train a neural network for identifying objects in the environment. This information allows the robot to not only follow the subject, but also to avoid obstacles while flying.

An equally impressive feat is the implementation of the path and motion planning algorithms that get the high quality cinematic shots. If you think about it, the Skydio R1 has to think about the future (in this case about 4 seconds ahead) to calculate not only where the subject is, but where they are going. Then the robot has to plan a path to get where it needs to be to get the shot. Since the autonomous drone can fly at about 25 miles per hour, you can understand that this is quite a challenge!

The Skydio R1 is available for pre-order, and begins shipping in March.

Color me impressed!

The post Skydio R1 – Jetson TX1 Based Self-Flying Camera appeared first on JetsonHacks.


RACECAR/J – Hokuyo UST-10LX Lidar


Adding a Hokuyo UST-10LX Scanning Range Finder to RACECAR/J is probably the most difficult task in the build when replicating the MIT RACECAR. With a little patience, it’s not too difficult. Looky here:

Background

One popular approach to vehicle autonomy is to add lidar and use its output to map the surrounding environment. On full size cars, a 360 degree lidar builds this map, which in turn goes to the onboard computers; they use the information for path planning, object segmentation, obstacle avoidance and so on.

In order to avoid “blind spots”, supplemental sensors such as additional lidars, radars, and cameras may be in use on the vehicle. There are several challenges with this approach. One of the major challenges is “sensor fusion”: combining all of the information from the sensors into one overall operating picture.

Another challenge is that the lidars themselves have a couple of things going against them. First, current implementations tend to be rather pricey (a good 3D lidar can cost ~75,000 USD). Second, most of the lidars available today are mechanical implementations with spinning mirrors, motors, and other bits to produce the magic.

The magical future promises that once the devices start mass production and the inevitable “solid state revolution” occurs, the price of the devices will drop to an affordable level. As technology folks, we all know the “Future Starts Slow”, and that it might take a while for this to happen.

The lidar on the MIT RACECAR is at a smaller scale, yet presents some of the same fundamental issues when combined with other onboard sensors. For this application, the lidar is 2D and comes in a small package. The Hokuyo UST-10LX has impressive specs:

  • Scanning Range: 0.02 to 10m, 270 degrees
  • Measurement Accuracy: ±40mm
  • Angular Resolution: 0.25 degrees (360°/1,440)
  • Scanning Frequency: 40Hz (2,400rpm)

There’s some math involved, but one of the issues is the minimum scan rate in relation to the speed of the vehicle. For a vehicle traveling over 20 miles per hour, you need this kind of performance!
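As a rough back-of-the-envelope check (our arithmetic, not a Hokuyo figure): 20 miles per hour is about 8.9 m/s, so at a 40Hz scan rate the car travels roughly 8.9 / 40 ≈ 0.22 meters between scans. A slower 10Hz scanner would leave nearly a meter between updates at the same speed, which gives a sense of why the scan rate matters.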

Installation

Installing the Hokuyo on RACECAR/J has one tricky bit. The wires that provide power to the Hokuyo need to connect with the battery that drives the car’s electronics. In the case of the MIT RACECAR configuration of RACECAR/J, the battery is an Energizer 18000. See the Note below.

Wiring

In this installation, we use the full length wiring of the Hokuyo. You may want to shorten the Hokuyo wiring for better cable management.

The important thing to remember is that the blue and brown wires are the power input to the lidar. As shown in the video, prepare the cable by cutting the control wires to a length of about 1/2″. The control wires are everything but the blue and brown wire. Bend the control wires over the existing heat shrink tubing and use some electrical tape to both insulate the wires and keep them in place.

Then cut the power wires to about 1″ in length. Strip off a 1/4″ of insulation. Twist each wire separately, and tin them with solder.

In the video, a 1.35mm x 3.5mm M/F 3′ Jack to Plug cable is cut in half. The plug side is cut to 6″, and about 1/4″ of insulation is removed. Twist each wire separately, and tin them with solder.

Hokuyo wiring preparation

Add some heat shrink tubing to each of the plug wires, and slip a larger piece of shrink tubing over the Hokuyo side. The large piece of heat shrink should be enough to cover both the bent over wires and the actual connection itself.

Once the heat shrink tubing is in place, solder the plug wires to the Hokuyo power wires. A simple butt joint should be sufficient. In this example, remember that the blue wire is ground (-) and should go to the black wire from the plug. The brown wire is positive (+), and should go to the wire on the plug with the white stripe.

Heat Shrink

This is the best part! Heat shrink tubing is always fun!

Cover the power wires with the heat shrink tubing. Do the small heat shrink tubing first. Once the heat shrink tubing is in place, apply heat. The tubing will magically shrink to fit the wire.

Once the inner heat tubing is done, place the larger heat shrink tubing over the joint. Then apply heat. Here’s how the sequence should look:

Heat shrink sequence: preparation, cover the power wires, apply heat, finished cable

Battery Connection

You will also need a cable to connect to the battery. If you are using an Energizer 18000 in the MIT RACECAR configuration, make a Y cable. One cable comes from the battery (you can use the “blue” wire from the Energizer kit); you will plug this cable into the 19V jack on the battery. The cable then splits into a Y. One side of the Y goes to the 1.35mm x 3.5mm jack wire from the previous step. The other side of the Y goes to the Jetson. The Jetson plug is 2.5mm ID, 5.5mm OD. I’ve found the right angle version helps remove some cable clutter.

Your wiring may vary, but make sure that the wiring of positive and ground across the 3.5mm jack and plug is consistent when wiring the Hokuyo.

We use two outputs on the Energizer. The 19V powers both the Jetson and the Hokuyo. The 12V output powers the ‘green’ cable, which connects with the USB hub. You may use the green cable in the Energizer kit, and splice it to a 1.7mm ID, 4.75mm OD plug connector. Again, right angle is a good choice here.

Before installing the Hokuyo in the robot, connect it to the battery to make sure that it is getting power.

RACECAR/J Installation

The Hokuyo generates quite a bit of heat during operation. Hokuyo recommends an aluminum plate to help with heat dissipation. The RACECAR/J store carries a LIDAR Aluminum Heat Sink for Hokuyo UST-10LX. If you remember, in one of our previous articles, RACECAR/J Platform Preparation, we mounted a Delrin plate for the lidar. In this application, substitute the aluminum version. Note: Full RACECAR/J kits contain the aluminum plate; there is no need to buy an additional one.

As shown in the video, attach the 1″ aluminum standoffs to the lidar plate using four 7/16″ 4-40 machine screws. Next, use two M3x10mm machine screws to attach the Hokuyo to the lidar plate. Once secured, attach the assembly to the lower platform deck using four 7/16″ 4-40 machine screws.

The cable management is more an art form than science at the moment. Use some zip ties to help route the wiring on the bottom side of the platform appropriately. You can watch the video to get some hints.

Once everything is assembled, you are ready for testing. Off to the testing article!

Note

The Energizer 18000 battery is currently in stock. However, over the last six months or so it has been difficult to get. We will have some alternatives featured here soon.

The post RACECAR/J – Hokuyo UST-10LX Lidar appeared first on JetsonHacks.

RACECAR/J – Hokuyo UST-10LX Configuration


In the first part of our Hokuyo UST-10LX installation article, we made a wiring harness and installed the lidar into RACECAR/J. After connecting the Jetson on the RACECAR to an HDMI monitor, keyboard and mouse we are ready to configure the Hokuyo and test it under ROS. Looky here:

Hokuyo Network Configuration

Once the wiring and installation of the Hokuyo is complete it is time to set the Jetson up to recognize the Hokuyo UST-10LX. A stock UST-10LX is set to an IP address of 192.168.0.10.

In order for the Jetson to recognize the UST-10LX, the Jetson must be on the same ethernet subnet. In the video above, we walk through setting up a static IP connection for talking to the Hokuyo. Because there are many ways that you may configure your network and robot for testing and deployment, that approach is just one example. Typically, for deployment, you will need to get your hands a little dirtier and add something like the following to your configuration in /etc/network/interfaces:

auto eth0
iface eth0 inet static
address 192.168.0.15
netmask 255.255.255.0
gateway 192.168.0.1
metric 800

This is the standard Linux way to set up a static IP of 192.168.0.15 on a machine. Network configuration is a very large subject and will not be covered in any depth here. Google is your friend on this one.
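After editing the file, restart the interface and verify that the address took effect. A minimal sketch, assuming the classic ifupdown tools that ship with Ubuntu 16.04 (your robot may be using Network Manager instead):

$ sudo ifdown eth0 && sudo ifup eth0
$ ip addr show eth0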

Once you have your static IP setup, you should be able to ping the Hokuyo:

$ ping 192.168.0.10

You will receive back the bytes that you sent it. If you do not, make sure that the blue light on the Hokuyo is on (indicating that the device has power). If the light is on, you most likely have network configuration issues.

UST-10LX Under ROS

In an earlier article RACECAR/J Software Install we cover installing the software drivers, ROS and MIT RACECAR ROS packages. Included in the installation is the ROS urg_node, which is a ROS wrapper for the Hokuyo urg_c library. The urg_node allows ROS to communicate with the Hokuyo.

You will need to setup your .bashrc file to reflect the new network configuration, i.e.

export ROS_MASTER_URI=http://192.168.0.15:11311
export ROS_IP=192.168.0.15

Note that ROS_IP is a bare IP address, without the http:// prefix.

Again, these are just example settings and should be changed to match your network.
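After editing, source the file (or open a new terminal) and check that the variables are set:

$ source ~/.bashrc
$ echo $ROS_MASTER_URI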

We can examine the information that the lidar is producing. First open a terminal and start roscore:

$ roscore

Open another terminal. You can list all of the ROS topics:

$ rostopic list

You should see the ‘/scan’ topic. You can then examine the data stream from the lidar:

$ rostopic echo /scan

The data stream displays.
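To confirm the lidar is publishing at its rated 40Hz, you can also check the topic rate (not shown in the video):

$ rostopic hz /scan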

At this point, the Hokuyo is good to go!

In the rest of the video, there is a quick rviz demo. This is more dependent on how you have your robot set up. For the demo, rviz was installed and the LaserScan displayed. The setup for the demo is not covered in this article, as you will probably want to run visualizations from a base station connected to the robot over WiFi.

Conclusion

The Hokuyo UST-10LX is a central part of the MIT RACECAR configuration. Installation is a little more challenging than the rest of the build, but with a little patience there should not be any issues.

The post RACECAR/J – Hokuyo UST-10LX Configuration appeared first on JetsonHacks.

Build Kernel and Modules – NVIDIA Jetson TX2


In this article, we cover building the kernel and modules for the NVIDIA Jetson TX2 running L4T 28.2. We also build the CH341 module, which is a USB to serial converter. Looky here:

Background

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

Note: The kernel source must match the version of L4T that has been flashed onto the Jetson. For example, here we use the kernel source for L4T 28.2 with L4T 28.2. Kernel versions are not compatible across releases.

NVIDIA recommends using a host PC when building a kernel from source. See the Linux for Tegra R28.2 web page where you can get the required information about GCC Tool Chain for 64-bit BSP.

If you are building systems which require a large amount of kernel development, that is a good option. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or three as modules to support some extra hardware with the TX2.

Presented here are scripts which download the kernel source onto the Jetson TX2 itself, build the kernel image and modules, install the modules, and copy the kernel Image to the boot directory. The video above shows how to select the CH341 module and build it as an external module.

Installation

The script files to build the kernel on the Jetson TX2 are available on the JetsonHacks Github account in the buildJetsonTX2Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX2Kernel.git
$ cd buildJetsonTX2Kernel

There are three main scripts. The first script, getKernelSources.sh, gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file.

Note: The local version needs to be set to match the release that you are building. For example, if you are building modules for a stock kernel, then the local version should be -tegra, which makes the kernel release name 4.4.38-tegra. If you are building a custom kernel, then you should add your own local version. In the video above, we used -jetsonbotv0.1, which results in 4.4.38-jetsonbotv0.1.

Make sure to save the configuration file when done editing. Note that if you want to just compile a module or two for use with a stock kernel, you should set the local version identifier to match the stock version.
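To check the release name of the currently running kernel before editing (so your local version matches), you can use:

$ uname -r
4.4.38-tegra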

The second script, makeKernel.sh, prepares the kernel and modules for building, and then builds the kernel and modules specified.

$ ./makeKernel.sh

The modules are then installed in /lib/modules/

The third script, copyImage.sh, copies over the newly built Image file into the /boot directory.

$ ./copyImage.sh

Note: This is probably overly simplistic. In most development cases, you will want to duplicate the stock kernel Image file, and modify the file /boot/extlinux/extlinux.conf so that you have the option to boot from the stock image or the newly created image through the serial console. This is in case something, ahem, goes wrong. Because this is related to your specific development needs, this exercise is left to the reader.
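As a sketch of what such an extlinux.conf might look like (the labels and file names here are illustrative; copy the APPEND arguments from your own stock entry rather than from this example):

TIMEOUT 30
DEFAULT primary

LABEL primary
      MENU LABEL custom kernel
      LINUX /boot/Image
      APPEND ${cbootargs}

LABEL stock
      MENU LABEL stock kernel
      LINUX /boot/Image.stock
      APPEND ${cbootargs}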

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Note: The copyImage.sh script simply copies the Image file to the /boot directory of the current device. If you are using an external device such as a SSD as your root directory and still using the eMMC to boot from, you will need to copy the Image file to the /boot directory of the eMMC.
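A minimal sketch of that copy, assuming the eMMC root partition is /dev/mmcblk0p1 and is not currently mounted:

$ sudo mount /dev/mmcblk0p1 /mnt
$ sudo cp /boot/Image /mnt/boot/Image
$ sudo umount /mnt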

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src.

After building the kernel, you may want to save the sources off-board to free up some space (they take up about 3GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.
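One way to archive the sources before removing them (assuming they were unpacked to /usr/src/kernel as by getKernelSources.sh):

$ tar -czf $HOME/kernelSources.tar.gz -C /usr/src kernel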

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. Hopefully this article helps along this path.

Note

  • The video above was made directly after flashing the Jetson TX2 with L4T 28.2 using JetPack 3.2.
  • If you encounter the error ‘cannot stat:’ when you run the copyImage.sh script, it means that the Image file did not build. You should check for error messages generated in the makeKernel.sh step.
  • For L4T 28.1, please visit the earlier article which tells you to git checkout vL4T28.1 after cloning the repository.

The post Build Kernel and Modules – NVIDIA Jetson TX2 appeared first on JetsonHacks.

Build TensorFlow on NVIDIA Jetson TX Development Kits


We build TensorFlow 1.6 on the Jetson TX with some new scripts written by Jason Tichy over at NVIDIA. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models which are then transferred over and “run” on the Jetson.

In the vast majority of cases, you will want to install the associated .whl files for TensorFlow and not build from source. You can find the latest set of .whl files in the NVIDIA Jetson Forums.
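Installing from a wheel is then a one-liner. The filename below is hypothetical; substitute the name of the wheel you downloaded from the forum:

$ sudo pip install tensorflow-1.6.0-cp27-cp27mu-linux_aarch64.whl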

Note: We previously built TensorFlow for both the Jetson TX2 and Jetson TX1 for L4T 28.1. Because of changes to the Java environment, these have been deprecated.

Some people would like to use the entire TensorFlow system on a Jetson. In this article, we’ll go over the steps to build TensorFlow r1.6 on a Jetson TX Dev Kit from source. These scripts work on both the Jetson TX1 and Jetson TX2. This should take about three hours to build on a Jetson TX2, longer on a Jetson TX1.

You will need ~10GB of free space in your build area. Typically the smart move is to freshly flash your Jetson with L4T 28.2, CUDA 9.0 Toolkit and cuDNN 7.0.5 and then start your build.

Installation

The TensorFlow scripts are located in the JasonAtNvidia account on Github in the JetsonTFBuild repository. You can simply check out the entire repository:

$ git clone https://github.com/JasonAtNvidia/JetsonTFBuild.git

which will clone the repository including the TensorFlow .whl files. The .whl files take up several hundred megabytes of space. You may want to delete the .whl files.

As an alternative, here’s a script which will download the repository without the wheels directory:

Save the gist to a file (for example getJetsonTFBuild.sh) and then execute it:

$ bash getJetsonTFBuild.sh

This will download everything except the wheel directory.

Next, switch over to the repository directory:

$ cd JetsonTFBuild

Building

To execute the build file:

$ sudo bash BuildTensorFlow.sh

There are three parameters which you may pass to the script:

  • -b | --branch <branchname> Github branch to clone, e.g. r1.6 (default: master)
  • -s | --swapsize <size> Size in GB of the swap file to create to assist the build process, e.g. 8
  • -d | --dir <directory> Directory in which to download files and build, default: pwd/TensorFlow_install

Because the Jetson TX1 and Jetson TX2 do not have enough physical memory to build TensorFlow, a swap file is used.

Note: On a Jetson TX1, make sure that you set the directory to point to a device which has enough space for the build. The TX1 does not have enough eMMC memory to hold the swap file. The faster the external memory the better. The Jetson TX2 eMMC does have enough extra room for the build.
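The script creates and mounts the swap file for you; for reference, a minimal sketch of what that involves on Linux (the size and path are illustrative):

$ sudo fallocate -l 8G /mnt/swapfile
$ sudo chmod 600 /mnt/swapfile
$ sudo mkswap /mnt/swapfile
$ sudo swapon /mnt/swapfile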

For example, to compile TensorFlow release 1.6 on a Jetson TX2 (as shown in the video):

$ sudo bash BuildTensorFlow.sh -b r1.6

After the TensorFlow build (which will take between 3 and 6 hours), you should do a validation check.

Validation

You can go through the procedure on the TensorFlow installation page: Tensorflow: Validate your installation

Validate your TensorFlow installation by doing the following:

Start a Terminal.
Change directory (cd) to any directory on your system other than the tensorflow subdirectory from which you invoked the configure command.
Invoke python or python3 as appropriate. For Python 2.X, for example:

$ python

Enter the following short program inside the python interactive shell:

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))

If the Python program outputs the following, then the installation is successful and you can begin writing TensorFlow programs.

Hello, TensorFlow!

This is not very thorough, of course. However, it does show that what you have built is installed.
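A quick additional check is to print the version number, which should report 1.6.0 for this build:

$ python -c "import tensorflow as tf; print(tf.__version__)"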

Conclusion

Building TensorFlow this way is a pretty straightforward process. At the same time, you should spend some time reading through the scripts to get an understanding of how they operate.

Make sure to report any issues on the JasonAtNvidia account in the JetsonTFBuild repository.

Special thanks again to Jason Tichy over at NVIDIA for the repository!

Notes

  • The install in the video was performed directly after flashing the Jetson TX2 with JetPack 3.2
  • The install is lengthy; however, it should take less than 4 hours on a TX2 and less than 6 hours on a TX1 once all the files are downloaded. If it takes longer, something is wrong.
  • In the video, TensorFlow 1.6.0 is installed

The post Build TensorFlow on NVIDIA Jetson TX Development Kits appeared first on JetsonHacks.

Intel RealSense D400 librealsense2 – NVIDIA Jetson TX Dev Kits


Intel has recently begun shipping the RealSense D435 and D415 depth cameras. Let’s start working on running them on the NVIDIA Jetson TX kits. Looky here:

Background

As you may recall, we were using librealsense with the previous generation of RealSense cameras. With the advent of the new D400 series RealSense cameras, Intel has upgraded librealsense to version 2.0 to support the new camera family and its features.

The new hardware introduces a couple of different video modes, as well as support for an on-board accelerometer and gyroscope. While the D435 in the video does not have the additional hardware, other cameras in the range do. As a result, librealsense requires modifications to the Jetson kernel Image and additional modules to support the new features.

The Jetson TX kits are embedded systems, so they don’t quite line up with the way that most developers think about the Linux desktop. In the regular installers for librealsense, there are several assumptions made about how devices attach to the system. Also, some assumptions are made about the kernel configuration that do not match the Jetson.

The bottom line is that we need to build a new kernel to support the RealSense cameras. We’ll break this installation into two parts. The first part is installing librealsense itself. The second part will build a kernel that supports the cameras.

Librealsense2 Installation

On the JetsonHacks Github account, there is a repository named buildLibrealsense2TX. To download the repository:

$ cd $HOME
$ git clone https://github.com/jetsonhacks/buildLibrealsense2TX
$ cd buildLibrealsense2TX

Next, make sure that the RealSense camera is not attached to the system. Then install the library:

$ ./installLibrealsense.sh

Looking inside the script file, you will see that there are a couple of patches for the Jetson. The first patch is a work around for some code related to an Intel specific instruction set, the other is a workaround for the Industrial I/O (IIO) device detection.

The stock librealsense code appears to assume that an IIO device reports with a device and bus number. On the Jetson, the ina3221x Power Monitors do not follow this protocol. The result is that a series of warnings is issued continuously as the library scans for HID devices that have been added (plugged in) to the system.

The library is looking for IIO HID devices (the accelerometer and gyroscope on a RealSense camera). The ina3221x is not a HID device, but appears during the IIO scanning. The library scans, but because it does not find a device or bus number for the power monitor, it issues a warning to stderr (the console). The result is that the console gets spammed, which in turn results in a performance penalty.

The workaround patch checks to see if the device detected is the power monitor before issuing a warning.

Similarly, the built-in camera module control is via CSI/I2C, not USB as expected by librealsense. Again, a warning is sent to stderr by librealsense. There may be a clever way to determine if the warning is about the on-board Jetson camera, but in this case the patch just comments out the warning.

After applying the patches, the script compiles the library, examples and tools:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The examples and tools are located in /usr/local/bin

The script also sets up a udev rule so that the RealSense camera is available in user space.

Once the library is installed, plug the camera into the Jetson, or into the Jetson through a powered USB 3.0 hub. You can then go and execute the tools and examples. For example:

$ cd /usr/local/bin
$ ./realsense-viewer

As shown in the video, you will be able to use the camera. However, if you examine the console from where the app is launched, you will notice that there are a couple of issues. First, some of the video modes are not recognized. Second, some of the frame meta-data is absent.

The video modes are identified in the Linux kernel. The frame meta-data is too. In order for this information to become available to librealsense, patches must be applied to the kernel and the kernel rebuilt.

You will need to determine whether the added information is important for your application.
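One way to see which video modes the kernel currently recognizes is to list the camera’s formats. v4l2-ctl is part of the v4l-utils package, and the device node may differ on your system:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext -d /dev/video0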

Kernel and Modules

The changes that librealsense needs are spread across several files. Some of the changes relate to the video formats and frame meta-data. These changes are in the UVC video and V4L2 modules.

In previous versions of librealsense, we would build the UVC module as an external module. This was relatively simple. However, things have changed a little internally in the way that L4T 28.2 is configured. The V4L2 module is now built into the kernel Image file (it is an ‘internal’ module). The UVC module can still be compiled as an external module.

The other new HID modules that the library uses are part of the IIO device tree. These modules rely on the internal module for IIO, as well as a couple of other support modules which must be enabled as internal modules.

As a result, this is a little tricky to handle in a general purpose manner for development. First, there are some modules which need to be enabled, and they are a little picky in that some must be internal modules. There are also patches that need to be applied to the stock kernel sources.

There are two ways to go about this. The first is the recommended way, where you work it all into your development process. The second way is to use a provided script which will attempt to build a stock kernel with the addition of the kernel support needed for librealsense. If something goes wrong during this second method, most likely you will be forced to reflash your Jetson because it is in a bad state.

In either case, you should be building your kernel on a freshly flashed Jetson. You will need ~3GB of free space for building the kernel.

Build steps

We’ve talked about building the kernel with modules before. Basically the steps are:

  • Get the kernel sources
  • Configure the kernel
  • Apply any needed patches
  • Make the kernel and modules
  • Install
  • Cross your fingers and hope

Typically you’re working on a development kit, where you have your own kernel modification and configuration in place. You will need to do some configuration of the kernel for librealsense. Located in:

buildLibrealsense2TX/config/ (e.g. buildLibrealsense2TX/config/TX2)

there is a stock .config file with the changes that are needed to the configuration for the librealsense library. You should diff this with a stock kernel configuration, and then add the changes to your development .config file.
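A sketch of that comparison: the configuration of the running kernel can be read from /proc/config.gz (available on the stock L4T kernels), and the exact file name under config/TX2 may differ on your checkout:

$ zcat /proc/config.gz > /tmp/stock.config
$ diff /tmp/stock.config config/TX2/.config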

You can then apply the kernel patches:

$ ./applyKernelPatches.sh

These patches will fix up the camera formats, add the meta-data and so on.

At this point, you are ready to build the kernel and install it. As usual, you should make a backup of the stock image and modify /boot/extlinux/extlinux.conf to add an option to boot to either the stock image or the new image. Also, remember to set the local version in the .config file!
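For example, a simple backup of the stock kernel image (the backup name is illustrative):

$ sudo cp /boot/Image /boot/Image.stock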

Just go for it!

You’re the type of person who lives on the edge. Doesn’t care about what others think. You just want it, and you want it now. Have I got the script for you!

If you are not concerned about kernel development and just want the camera up and running properly, you can run a script which will rebuild the kernel with all of the changes needed and install it. Just to be clear, this will install a stock kernel with the librealsense changes in place of whatever is currently there. If you have kernel modifications already installed, they will disappear.

Be forewarned though, sometimes when you live on the edge, you can fall over the edge. If something untoward happens during the build, it can render your Jetson brickly; you will need to reflash it.

For the install on a Jetson TX2:

$ ./buildPatchedKernelTX2.sh

After the installation, reboot, and you should be ready for goodness.

Note: We’ll provide a TX1 script soon.

If something does go wrong during the build, you may want to try to debug it. As part of its cleanup process, the buildPatchedKernel script erases all of the source files and build files that it has downloaded and built. You can pass nocleanup as a command line flag so it keeps those files around. Hopefully you can fix everything.

$ ./buildPatchedKernelTX2.sh --nocleanup

Actually, this script is more useful as a template for rebuilding your kernel with the librealsense changes.

Performance

As we note in the video, there appear to be some performance issues with the realsense-viewer application. It seems rather suspicious that, when the program starts, 100% of one of the CPU cores is being used. This usually indicates that the GUI is not yielding at the bottom of its event loop.

Another issue is that most of librealsense is built for optimization of x86 code. This is to be expected, after all it is an Intel product. Because the Jetson is ARM based, the code defaults to some generic drivel. If only there was a way to exploit parallel processing on a Jetson which has 256 CUDA cores …

We’ll have to come back and revisit these issues. This is still a work in progress.

Notes

  • In the video, installation was performed on a Jetson TX2 running L4T 28.2 (JetPack 3.2)
  • Librealsense 2.10.2
  • Intel RealSense D435 camera

The post Intel RealSense D400 librealsense2 – NVIDIA Jetson TX Dev Kits appeared first on JetsonHacks.

Build Kernel and Modules – NVIDIA Jetson TX1


In this article, we cover building the kernel and modules for the NVIDIA Jetson TX1 running L4T 28.2. We also build the CH341 module, which is a USB to serial converter. Looky here:

Background

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

Note: The kernel source must match the version of L4T that has been flashed onto the Jetson. For example, here we use the kernel source for L4T 28.2 with L4T 28.2. Kernel versions are not compatible across releases.

NVIDIA recommends using a host PC when building a kernel from source. See the Linux for Tegra R28.2 web page where you can get the required information about GCC Tool Chain for 64-bit BSP.

If you are building systems which require a large amount of kernel development, that is a good option. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or three as modules to support some extra hardware with the TX1.

Presented here are scripts which download the kernel source onto the Jetson TX1 itself, build the kernel image and modules, install the modules, and copy the kernel Image to the boot directory. The video above shows how to select the CH341 module and build it as an external module.

Installation

Note: On the Jetson TX1, space is at a premium. We advise that you build the kernel and modules immediately after flashing the Jetson.

The script files to build the kernel on the Jetson TX1 are available on the JetsonHacks Github account in the buildJetsonTX1Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX1Kernel.git
$ cd buildJetsonTX1Kernel
$ git checkout v1.0-L4T28.2

There are five main scripts.

getKernelSources.sh

The first script, getKernelSources.sh, gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file.

Note: The local version needs to be set to match the release that you are building. For example, if you are building modules for a stock kernel, then the local version should be -tegra, which makes the kernel release name 4.4.38-tegra. If you are building a custom kernel, then you should add your own local version. In the video above, we used -jetsonbotv0.1, which results in 4.4.38-jetsonbotv0.1.

Make sure to save the configuration file when done editing. Note that if you want to just compile a module or two for use with a stock kernel, you should set the local version identifier to match the stock version.

getKernelSourcesNoGUI.sh

getKernelSourcesNoGUI.sh gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSourcesNoGUI.sh

This script is useful for scripting purposes, SSH purposes, and in the case where you want to use an alternate method to edit the .config file.

makeKernel.sh

makeKernel.sh prepares the kernel and modules for building, and then builds the kernel and modules as specified.

$ ./makeKernel.sh

The modules are then installed in /lib/modules/

copyImage.sh

copyImage.sh copies over the newly built Image file into the /boot directory.

$ ./copyImage.sh

Note: This is probably overly simplistic. In most development cases (and as shown in the video), you will want to duplicate the stock kernel Image file, and modify the file /boot/extlinux/extlinux.conf so that you have the option to boot from the stock image or the newly created image through the serial console. This is in case something, ahem, goes wrong. Because this is related to your specific development needs, this exercise is left to the reader.

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Note: The copyImage.sh script simply copies the Image file to the /boot directory of the current device. If you are using an external device such as a SSD as your root directory and still using the eMMC to boot from, you will need to copy the Image file to the /boot directory of the eMMC.

removeAllKernelSources.sh

removeAllKernelSources.sh removes all of the kernel sources and compressed source files. You may want to make a backup of the files before deletion.

$ ./removeAllKernelSources.sh

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src.

After building the kernel, you may want to save the sources off-board to free up some space (they take up about 3GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. Hopefully this article helps along this path.

Note

  • The video above was made directly after flashing the Jetson TX1 with L4T 28.2 using JetPack 3.2.
  • For earlier versions of L4T, look in the tags of the buildJetsonTX1Kernel repository on Github.

The post Build Kernel and Modules – NVIDIA Jetson TX1 appeared first on JetsonHacks.

Now with CUDA! Intel RealSense D400 cameras – NVIDIA Jetson TX


In our previous article, Intel RealSense D400 librealsense2, we began work on making the RealSense SDK work more smoothly on the NVIDIA Jetson TX Development Kits. We now add CUDA support! Looky here:

Background

As you may recall, there are a few issues with running the RealSense SDK, librealsense 2, on the NVIDIA Jetson TX2. For example, the application “realsense-viewer” pegged a CPU at 100%, which resulted in poor performance. We also want to add support for the Jetson TX1.

As we had guessed, there was a wild render loop which did not have a sleep call. After fixing that, we added CUDA support. This is a little more of a challenge than one would like. Intel switched over to using CMake in version 2 of librealsense, so the first task is to add a flag to tell CMake to use CUDA. We chose the appropriately named “USE_CUDA” flag.
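A sketch of how the flag is passed at configure time (the surrounding steps are illustrative; the install script handles all of this for you):

$ cd librealsense
$ mkdir build && cd build
$ cmake .. -DUSE_CUDA=true
$ make -j4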

Next, we add the actual CUDA code itself. Fortunately, we have a template to work from. It turns out that Mehran Maghoumi (who is currently working at NVIDIA!) had done some work in his Github repository culibrealsense on adding CUDA support for converting the native image format (YUY2) to the more display friendly RGB. A few nips and tucks later, we now have support for RGB, BGR, RGBA and BGRA in CUDA. BGR is the preferred OpenCV format. The ‘A’ refers to an alpha channel for the image.

There is some speedup using the new conversion code. In a few tests the conversion time of an image went from ~14ms to ~8ms. This is a healthy speedup. The code can probably be improved, but for our purposes it’s a nice speedup for the amount of work involved.

During the development process, we also upgraded to librealsense 2.10.4. The library is under heavy development; unfortunately, the challenge is discovering new “features” which don’t translate well into the Jetson environment. These tend to be things such as code written for Intel x86 processors and the myriad idiocies that go along with multi-platform libraries. To be fair, this is an Intel product; it makes sense that there are optimizations for their processors. It just takes a while to figure out what breaks when the optimizations are added.

The kernel patching routines are rewritten to support the Jetson TX1, and to be slightly more intelligent about configuring the kernel.

Note: Because space is tight on the Jetson TX1, you should first flash the Jetson TX1 with L4T 28.2 using JetPack 3.2, download the buildLibrealsense2TX repository, and then build the patched kernel. Once the kernel is in place, remove the kernel sources to give you enough space to operate.

Installation

On the JetsonHacks Github account, there is a repository buildLibrealsense2TX. To download the repository:

$ cd $HOME
$ git clone https://github.com/jetsonhacks/buildLibrealsense2TX
$ cd buildLibrealsense2TX
$ git checkout v0.8

The instructions are much the same as in the previous article. More details are available on the README.md file in the buildLibrealsense2TX directory.

Explain please

First, a word about what we’re doing. There are several RealSense camera formats that the standard UVC (video) module in the kernel does not support. librealsense provides patches to add those video modes to the appropriate kernel modules. The patches also add support for properly adding timestamps to the incoming video stream. In addition, there are several patches that modify parts of the Industrial I/O (IIO) tree. These patches provide support for supplemental hardware on a RealSense camera, such as a 3D gyro or 3D accelerometer.

Note: Some of the patches apply to modules that are built into the kernel itself. Because these modules are required to be in the Image, and not built as external, you will need to compile the kernel itself along with the modules.

If you do not modify the Linux kernel, librealsense will mostly appear to work. You will probably experience issues a little further along when you are trying to get better precision and accuracy from the video and depth streams. As the RealSense D435 camera does not have supplemental hardware, the IIO patches don’t make any difference.

If you’re just looking to play around with the camera, you may be able to get away with not compiling the kernel and skip over it. If you’re more serious, you’ll have to start patchin’ !

Building the Kernel

Note: If you built your kernel as in the previous article, you must rebuild it again!

Building the kernel can be taxing; there are many little things that can go wrong. Plus, you can make your Jetson become brickly if something happens at an untoward moment. If you’re a go-for-it kind of person:

$ ./buildPatchedKernelTX2.sh

If something does go wrong during the build, you may want to try to debug it. As part of its cleanup process, the buildPatchedKernel script erases all of the source files and build files that it has downloaded and built. You can pass nocleanup as a command line flag so it keeps those files around. Hopefully you can fix everything.

$ ./buildPatchedKernelTX2.sh --nocleanup

Actually, this script is more useful as a template for rebuilding your own kernel with the librealsense changes. There are two scripts in the ‘scripts’ directory for helping in the kernel build:

patchKernel.sh applies the librealsense kernel patches.

configureKernel.sh configures the kernel to add the appropriate modules needed by librealsense.

Since you’re a developer, you should be able to figure out what the scripts are doing and modify them to match your needs.

librealsense Installation

Make sure that the RealSense camera is not attached to the system. Then install the library:

$ ./installLibrealsense.sh

Looking inside the script file, you will see that there are a couple of patches for the Jetson. The first patch is a work around for some code related to an Intel specific instruction set, the other is a workaround for the Industrial I/O (IIO) device detection.

The stock librealsense code appears to assume that an IIO device reports with a device and bus number. On the Jetson, the ina3221x Power Monitors do not follow this protocol. The result is that a series of warnings is issued continuously as the library scans for HID devices that have been added (plugged in) to the system.

The library is looking for IIO HID devices (the accelerometer and gyroscope on a RealSense camera). The ina3221x is not a HID device, but appears during the IIO scanning. The library scans, but because it does not find a device or bus number for the power monitor, it issues a warning to stderr (the console). The result is that the console gets spammed, which in turn results in a performance penalty.

The workaround patch checks to see if the device detected is the power monitor before issuing a warning.

Similarly, the built-in camera module control is via CSI/I2C, not USB as expected by librealsense. Again, a warning is sent to stderr by librealsense. There may be a clever way to determine if the warning is about the on-board Jetson camera, but in this case the patch just comments out the warning.

After applying the patches, the script compiles the library, examples and tools:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The examples and tools are located in /usr/local/bin

The script also sets up a udev rule so that the RealSense camera is available in user space.

Once the library is installed, plug the camera into the Jetson, or into the Jetson through a powered USB 3.0 hub. You can then go and execute the tools and examples. For example:

$ cd /usr/local/bin
$ ./realsense-viewer

Conclusion

So there you have it. This is the first pass at getting the RealSense D400 cameras working with the Jetson TX Dev Kits. Because the RealSense SDK is under heavy development, we will have to keep our eye out for improvements in the weeks ahead!

Notes

  • L4T 28.2 installed by JetPack 3.2
  • A Jetson TX2 was used in the video
  • buildLibrealsense2TX is version v0.8

The post Now with CUDA! Intel RealSense D400 cameras – NVIDIA Jetson TX appeared first on JetsonHacks.


Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits


With the advent of L4T 28.2, we freshened up the ROS installers for the Jetson TX1 and Jetson TX2. Looky here:

Background

In a previous article, Robot Operating System (ROS) on NVIDIA Jetson TX2, we discuss the history of ROS and why it has become the most popular operating system for robots in the world.

The L4T 28.2 release now allows the Jetson TX1 and Jetson TX2 installers to share the same code base. Due to the legacy nature of this project, we still maintain two repositories on the JetsonHacks account on Github. The first is installROSTX2, which is for the Jetson TX2. The second is installROSTX1, which is for the Jetson TX1.

Installation

The main script, installROS.sh, is a straightforward implementation of the install instructions taken from the ROS Wiki. The instructions install ROS Kinetic on the Jetson.

The installation for both Jetsons is similar. On the Jetson TX2, grab the repository:

$ git clone https://github.com/jetsonhacks/installROSTX2.git
$ cd installROSTX2

on the Jetson TX1:

$ git clone https://github.com/jetsonhacks/installROSTX1.git
$ cd installROSTX1

installROS.sh

Usage: ./installROS.sh  [[-p package] | [-h]]
 -p | --package <packagename>  ROS package to install
                               Multiple Usage allowed
                               The first package should be a base package. One of the following:
                                 ros-kinetic-ros-base
                                 ros-kinetic-desktop
                                 ros-kinetic-desktop-full
 

Default is ros-kinetic-ros-base if no packages are specified.

Example Usage:

$ ./installROS.sh -p ros-kinetic-desktop -p ros-kinetic-rgbd-launch

This script installs a baseline ROS environment. There are several tasks:

  • Enable repositories universe, multiverse, and restricted
  • Adds the ROS sources list
  • Sets the needed keys
  • Loads specified ROS packages (defaults to ros-kinetic-ros-base if none specified)
  • Initializes rosdep

You can edit this file to add the ROS packages for your application.

setupCatkinWorkspace.sh

Usage:

$ ./setupCatkinWorkspace.sh [optionalWorkspaceName]

where optionalWorkspaceName is the name of the workspace to be used. The default workspace name is catkin_ws. This script also sets up some ROS environment variables. Refer to the script for details.
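For example, to create a workspace named jetsonbot (the name is illustrative):

$ ./setupCatkinWorkspace.sh jetsonbot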

Notes

  • In the video, the Jetson TX2 was flashed with L4T 28.2 using JetPack 3.2. L4T 28.2 is derived from Ubuntu 16.04.

The post Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits appeared first on JetsonHacks.

Intel RealSense Package for ROS on NVIDIA Jetson TX


Intel provides an open source ROS Package for their RealSense D400 series cameras. Here we install the package on a NVIDIA Jetson TX development kit. Looky here:

Background

The RealSense D400 cameras are the next generation of the Intel RealSense camera product line. In the video, we install a ROS driver for a RealSense D435 camera, a device well suited to robotic applications. The size of the D435, along with convenient mounting options, provides a nice mechanical packaging solution for adding an RGBD camera to your project.

Installation

There are two prerequisites for installing the RealSense ROS package on the Jetson. The first is to install the librealsense 2 camera driver library on the Jetson. We cover the procedure in this article: Install Intel RealSense D400 camera driver.

The second prerequisite is to install Robot Operating System (ROS). A short article Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits is available.

Install RealSense Package for ROS

There is a convenience script, installRealSense2ROSTX, on the JetsonHacks account on Github that installs the RealSense ROS package.

After the prerequisites above are installed:

$ git clone https://github.com/jetsonhacks/installRealSense2ROSTX
$ cd installRealSense2ROSTX
$ ./installRealSense2ROSTX.sh <catkin workspace name>

where <catkin workspace name> is the name of the catkin workspace in which to place the RealSense ROS package. If the workspace name is not specified, the installation takes place in the default catkin workspace, catkin_ws.

The install script works with either a Jetson TX1 or a Jetson TX2.

There are several launch files available. Please refer to the Intel-ROS RealSense repository on Github for examples.
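As a quick smoke test, a launch file along these lines starts the camera node (the launch file name is from the version current at the time of writing; check the repository if it has changed):

$ roslaunch realsense2_camera rs_camera.launch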

Notes

  • In the video, the installation takes place on a Jetson TX2 running L4T 28.2 (JetPack 3.2).
  • An Intel RealSense D435 is being used in the video.
  • Version v2.0.3 of the intel-ros realsense package is installed
  • Version v2.10.4 of librealsense 2 is installed on the Jetson
  • ROS Kinetic

The post Intel RealSense Package for ROS on NVIDIA Jetson TX appeared first on JetsonHacks.

RACECAR/J – Hokuyo UST-10LX Configuration

$
0
0

In the first part of our Hokuyo UST-10LX installation article, we made a wiring harness and installed the lidar into RACECAR/J. After connecting the Jetson on the RACECAR to an HDMI monitor, keyboard and mouse we are ready to configure the Hokuyo and test it under ROS. Looky here:

Hokuyo Network Configuration

Once the wiring and installation of the Hokuyo is complete it is time to set the Jetson up to recognize the Hokuyo UST-10LX. A stock UST-10LX is set to an IP address of 192.168.0.10.

In order for the Jetson to recognize the UST-10LX, the Jetson must be on the same ethernet subnet. In the video above, we walk through setting up a static IP connection for talking to the Hokuyo. Because there are many ways that you may configure your network and robot for testing and deployment, that approach is just one example. Typically for deployment you will need to get your hands a little bit dirtier and add something like the following to your configuration in /etc/network/interfaces
For example you might add:

auto eth0
iface eth0 inet static
address 192.168.0.15
netmask 255.255.255.0
gateway 192.168.0.1
metric 800

This is standard Linux talk about how to setup a static IP of 192.168.0.15 on a machine. This is a very large subject, and will not be covered in any depth here. Google is your friend on this one.

Once you have your static IP setup, you should be able to ping the Hokuyo:

$ ping 192.168.0.10

You will receive back the bytes that you sent it. If you do not, make sure that the blue light on the Hokuyo is on (indicating that the device has power). If the light is on, you most likely have network configuration issues.

UST-10LX Under ROS

In an earlier article RACECAR/J Software Install we cover installing the software drivers, ROS and MIT RACECAR ROS packages. Included in the installation is the ROS urg_node, which is a ROS wrapper for the Hokuyo urg_c library. The urg_node allows ROS to communicate with the Hokuyo.

You will need to setup your .bashrc file to reflect the new network configuration, i.e.

export ROS_MASTER_URI=http://192.168.0.15:11311
export ROS_IP=http://192.168.0.15

Again, these are just example settings and should be changed to match your network.

We can examine the information that the lidar is producing. First open a terminal and start roscore:

$ roscore

Open another terminal. You can list all of the ROS topics:

$ rostopic list

You should see the ‘/scan’ topic. You can then examine the data stream from the lidar:

$ rostopic echo /scan

The data stream displays.

At this point, the Hokuyo is good to go!

In the rest of the video, there is a quick rviz demo. This is more dependent on how you have your robot setup. For the demo, rviz was installed and the LaserScan displayed. The setup for the demo is not covered in this article as you will probably want to run visualizations from a base station connected to the robot over WiFi.

Conclusion

The Hokuyo UST-10LX is a central part of the MIT RACECAR configuration. Installation is a little more challenging than the rest of the build, but with a little patience there should not be any issues.

The post RACECAR/J – Hokuyo UST-10LX Configuration appeared first on JetsonHacks.

Build Kernel and Modules – NVIDIA Jetson TX2

$
0
0

In this article, we cover building the kernel and modules for the NVIDIA Jetson TX2 running L4T 28.2. We also build the CH341 module, which is a USB to serial converter. Looky here:

Background

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

Note: The kernel source must match the version of L4T that has been flashed onto the Jetson. For example, here we use the kernel source for L4T 28.2 with L4T 28.2. Kernel versions are not compatible across releases.

NVIDIA recommends using a host PC when building a kernel from source. See the Linux for Tegra R28.2 web page where you can get the required information about GCC Tool Chain for 64-bit BSP.

If you are building systems which require a large amount of kernel development, that is a good option. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or three as modules to support some extra hardware with the TX2.

Presented here are scripts which download the kernel source on to the Jetson TX2 itself, builds the kernel image and modules, and installs the modules and copies the kernel Image to the boot directory. The video above shows how to select the CH341 module and build it as an external module.

Installation

The script files to build the kernel on the Jetson TX2 are available on the JetsonHacks Github account in the buildJetsonTX2 repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX2Kernel.git
$ cd buildJetsonTX2Kernel

There are three main scripts. The first script, getKernelSources.sh gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file.

Note: The local version needs to be set to match the release that you are building. For example, if you are building modules for a stock kernel, then the local versions should be -tegra which makes the kernel release name 4.4.38-tegra. If you are building a custom kernel, then you should add your own local version. In the video above, we used -jetsonbotv0.1 which results in 4.4.38-jetsobotv0.1.

Make sure to save the configuration file when done editing. Note that if you want to just compile a module or two for use with a stock kernel, you should set the local version identifier to match the stock version.

The second script, makeKernel.sh, prepares the kernel and modules for building, and then builds the kernel and modules specified.

$ ./makeKernel.sh

The modules are then installed in /lib/modules/

The third script, copyImage.sh, copies over the newly built Image file into the /boot directory.

$ ./copyImage.sh

Note: This is probably overly simplistic. In most development cases, you will want to duplicate the stock kernel Image file, and modify the file /boot/extlinux/extlinux.conf so that you have the option to boot from the the stock image or the newly created image through the serial console. This is in case something, ahem, goes wrong. Because this is related to your specific development needs, this exercise is left to the reader.

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Note: The copyImage.sh script simply copies the Image file to the /boot directory of the current device. If you are using an external device such as a SSD as your root directory and still using the eMMC to boot from, you will need to copy the Image file to the /boot directory of the eMMC.

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src

After building the kernel, you may want to save the sources off-board to save some space (they take up about 3GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. Hopefully this article helps along this path.

Note

  • The video above was made directly after flashing the Jetson TX2 with L4T 28.2 using JetPack 3.2.
  • If you encounter the error ‘cannot stat:’ when you run the copyImage.sh script, it means that the Image file did not build. You should check for error messages generated in the makeKernel.sh step.
  • For L4T 28.1, please visit the earlier article which tells you to git checkout vL4T28.1 after cloning the repository.

The post Build Kernel and Modules – NVIDIA Jetson TX2 appeared first on JetsonHacks.

Build TensorFlow on NVIDIA Jetson TX Development Kits

We build TensorFlow 1.6 on the Jetson TX with some new scripts written by Jason Tichy over at NVIDIA. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT. TensorRT is what is called an "Inference Engine", the idea being that large machine learning systems can train models which are then transferred over and "run" on the Jetson.

In the vast majority of cases, you will want to install the associated .whl files for TensorFlow and not build from source. You can find the latest set of .whl files in the NVIDIA Jetson Forums.

Note: We previously built TensorFlow for both the Jetson TX2 and Jetson TX1 for L4T 28.1. Because of changes to the Java environment, these have been deprecated.

Some people would like to use the entire TensorFlow system on a Jetson. In this article, we’ll go over the steps to build TensorFlow r1.6 on a Jetson TX Dev Kit from source. These scripts work on both the Jetson TX1 and Jetson TX2. This should take about three hours to build on a Jetson TX2, longer on a Jetson TX1.

You will need ~10GB of free space in your build area. Typically the smart move is to freshly flash your Jetson with L4T 28.2, CUDA 9.0 Toolkit and cuDNN 7.0.5 and then start your build.

Installation

The TensorFlow scripts are located in the JasonAtNvidia account on Github in the JetsonTFBuild repository. You can simply check out the entire repository:

$ git clone https://github.com/JasonAtNvidia/JetsonTFBuild.git

which will clone the repository, including the TensorFlow .whl files. The .whl files take up several hundred megabytes of space, so you may want to delete them if you need the room.

As an alternative, here’s a script which will download the repository without the wheels directory:
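(The gist itself is embedded on the original page. As a rough sketch of the idea, assuming the wheel files live in a directory named wheel and using git's sparse checkout; the actual gist may differ:)

#!/bin/bash
# Sketch: clone JetsonTFBuild without the wheel directory (directory name assumed)
git clone --depth 1 --no-checkout https://github.com/JasonAtNvidia/JetsonTFBuild.git
cd JetsonTFBuild
git config core.sparseCheckout true
# Check out everything except the wheel directory
echo '/*' > .git/info/sparse-checkout
echo '!wheel/' >> .git/info/sparse-checkout
git checkout master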

Save the gist to a file (for example getJetsonTFBuild.sh) and then execute it. For example:

$ bash getJetsonTFBuild.sh

This will download everything except the wheel directory.

Next, switch over to the repository directory:

$ cd JetsonTFBuild

Building

To execute the build file:

$ sudo bash BuildTensorFlow.sh

There are three parameters which you may pass to the script:

  • -b | --branch <branchname> Github branch to clone, e.g. r1.6 (default: master)
  • -s | --swapsize <size> Size of the swap file to create to assist the build process, in GB, e.g. 8
  • -d | --dir <directory> Directory to download files and use for the build process (default: pwd/TensorFlow_install)

Because the Jetson TX1 and Jetson TX2 do not have enough physical memory to build TensorFlow, a swap file is used.

Note: On a Jetson TX1, make sure that you set the directory to point to a device which has enough space for the build. The TX1 does not have enough eMMC memory to hold the swap file. The faster the external memory the better. The Jetson TX2 eMMC does have enough extra room for the build.
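For reference, the swap file setup that the script performs is along these lines (a sketch; the actual size and location come from the -s and -d options):

$ sudo fallocate -l 8G /mnt/swapfile
$ sudo chmod 600 /mnt/swapfile
$ sudo mkswap /mnt/swapfile
$ sudo swapon /mnt/swapfile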

For example, to compile TensorFlow release 1.6 on a Jetson TX2 (as shown in the video):

$ sudo bash BuildTensorFlow.sh -b r1.6

After the TensorFlow build (which will take between 3 and 6 hours), you should do a validation check.

Validation

You can go through the procedure on the TensorFlow installation page: Tensorflow: Validate your installation

Validate your TensorFlow installation by doing the following:

Start a Terminal.
Change directory (cd) to any directory on your system other than the tensorflow subdirectory from which you invoked the configure command.
Invoke python or python3 accordingly; for Python 2.x, for example:

$ python

Enter the following short program inside the python interactive shell:

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))

If the Python program outputs the following, then the installation is successful and you can begin writing TensorFlow programs.

Hello, TensorFlow!

This is not very thorough, of course. However, it does show that what you have built is installed.

Conclusion

This is a pretty straightforward process to build TensorFlow. At the same time, you should spend some time reading through the scripts to get an understanding of how they operate.

Make sure to report any issues on the JasonAtNvidia account in the JetsonTFBuild repository.

Special thanks again to Jason Tichy over at NVIDIA for the repository!

Notes

  • The install in the video was performed directly after flashing the Jetson TX2 with JetPack 3.2
  • The install is lengthy; however, it certainly should take much less than 4 hours on a TX2 and less than 6 hours on a TX1 once all the files are downloaded. If it takes longer, something is wrong.
  • In the video, TensorFlow 1.6.0 is installed

The post Build TensorFlow on NVIDIA Jetson TX Development Kits appeared first on JetsonHacks.

Intel RealSense D400 librealsense2 – NVIDIA Jetson TX Dev Kits

Intel has recently begun shipping the RealSense D435 and D415 depth cameras. Let’s start working on running them on the NVIDIA Jetson TX kits. Looky here:

Background

As you may recall, we were using librealsense with the previous generation R200 RealSense cameras. With the advent of the new D400 series RealSense cameras, Intel has upgraded librealsense to version 2.0 to support the new camera family and its features.

The new hardware introduces a couple of different video modes, as well as support for the camera's on-board accelerometer and gyroscope. While the D435 in the video does not have the additional hardware, other cameras in the range do. As a result, librealsense requires modifications to the Jetson kernel Image and additional modules to support the new features.

The Jetson TX kits are embedded systems, so they don’t quite line up with the way that most developers think about the Linux desktop. In the regular installers for librealsense, there are several assumptions made about how devices attach to the system. Also, some assumptions are made about the kernel configuration that do not match the Jetson.

The bottom line is that we need to build a new kernel to support the RealSense cameras. We’ll break this installation into two parts. The first part is installing librealsense itself. The second part will build a kernel that supports the cameras.

Librealsense2 Installation

On the JetsonHacks Github account, there is a repository named buildLibrealsense2TX. To download the repository:

$ cd $HOME
$ git clone https://github.com/jetsonhacks/buildLibrealsense2TX
$ cd buildLibrealsense2TX

Next, make sure that the RealSense camera is not attached to the system. Then install the library:

$ ./installLibrealsense.sh

Looking inside the script file, you will see that there are a couple of patches for the Jetson. The first patch is a work around for some code related to an Intel specific instruction set, the other is a workaround for the Industrial I/O (IIO) device detection.

The stock librealsense code appears to assume that an IIO device reports with a device and bus number. On the Jetson, the ina3221x Power Monitors do not follow this protocol. The result is that a series of warnings is issued continuously as the library scans for HID devices that have been added (plugged in) to the system.

The library is looking for IIO HID devices (the accelerometer and gyroscope on a RealSense camera). The ina3221x is not a HID device, but appears during the IIO scanning. The library scans, but because it does not find a device or bus number for the power monitor, it issues a warning to stderr (the console). The result is that the console gets spammed, which in turn results in a performance penalty.

The workaround patch checks to see if the device detected is the power monitor before issuing a warning.

Similarly, the built-in camera module control is via CSI/I2C, not USB as expected by librealsense. Again, a warning is sent to stderr by librealsense. There may be a clever way to determine if the warning is about the on-board Jetson camera, but in this case the patch just comments out the warning.

After applying the patches, the script compiles the library, examples and tools:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The examples and tools are located in /usr/local/bin

The script also sets up a udev rule so that the RealSense camera is available in user space.

Once the library is installed, plug the camera into the Jetson, or into the Jetson through a powered USB 3.0 hub. You can then go and execute the tools and examples. For example:

$ cd /usr/local/bin
$ ./realsense-viewer

As shown in the video, you will be able to use the camera. However, if you examine the console from where the app is launched, you will notice that there are a couple of issues. First, some of the video modes are not recognized. Second, some of the frame meta-data is absent.

The video modes are identified in the Linux kernel. The frame meta-data is too. In order for this information to become available to librealsense, patches must be applied to the kernel and the kernel rebuilt.

You will need to determine whether the added information is important for your application.

Kernel and Modules

The changes that librealsense needs are spread across several files. Some of the changes relate to the video formats and frame meta-data. These changes are in the UVC video and V4L2 modules.

In previous versions of librealsense, we would build the UVC module as an external module. This was relatively simple. However, things have changed a little internally in the way that L4T 28.2 is configured. The V4L2 module is built into the kernel Image file (it is an 'internal' module). The UVC module can still be compiled as an external module.

The other new HID modules that the library uses are part of the IIO device tree. These modules rely on the internal module for IIO, as well as a couple of other support modules which must be enabled as internal modules.

As a result, this is a little tricky to handle in a general-purpose manner during development. First, there are some modules which need to be enabled. They are a little picky, in that some need to be internal modules. There are also patches that need to be applied to the stock kernel sources.

There are two ways to go about this. The first is the recommended way, where you work it all into your development process. The second way is to use a provided script which will attempt to build a stock kernel with the addition of the kernel support needed for librealsense. If something goes wrong during this second method, most likely you will be forced to reflash your Jetson because it is in a bad state.

In either case, you should be building your kernel on a freshly flashed Jetson. You will need ~3GB of free space for building the kernel.

Build steps

We’ve talked about building the kernel with modules before. Basically the steps are:

  • Get the kernel sources
  • Configure the kernel
  • Apply any needed patches
  • Make the kernel and modules
  • Install
  • Cross your fingers and hope

Typically you’re working on a development kit, where you have your own kernel modification and configuration in place. You will need to do some configuration of the kernel for librealsense. Located in:

buildLibrealsense2TX/config (e.g. buildLibrealsense2TX/config/TX2)

there is a stock .config file with the changes that are needed to the configuration for the librealsense library. You should diff this with a stock kernel configuration, and then add the changes to your development .config file.
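One way to generate that diff, assuming your running kernel exposes its configuration through /proc/config.gz:

$ zcat /proc/config.gz > stock.config
$ diff stock.config buildLibrealsense2TX/config/TX2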

You can then apply the kernel patches:

$ ./applyKernelPatches.sh

These patches will fix up the camera formats, add the meta-data and so on.

At this point, you are ready to build the kernel and install it. As usual, you should make a backup of the stock image and modify /boot/extlinux/extlinux.conf to add an option to boot to either the stock image or the new image. Also, remember to set the local version in the .config file!

Just go for it!

You’re the type of person who lives on the edge. Doesn’t care about what others think. You just want it, and you want it now. Have I got the script for you!

If you are not concerned about kernel development and just want the camera up and running properly, you can run a script which will rebuild the kernel with all of the changes needed and install it. Just to be clear, this will install a stock kernel with the librealsense changes in place of whatever is currently there. If you have kernel modifications already installed, they will disappear.

Be forewarned though, sometimes when you live on the edge, you can fall over the edge. If something untoward happens during the build, it can render your Jetson brickly; you will need to reflash it.

For the install on a Jetson TX2:

$ ./buildPatchedKernelTX2.sh

After the installation, reboot, and you should be ready for goodness.

Note: We’ll provide a TX1 script soon.

If something does go wrong during the build, you may want to try to debug it. As part of its cleanup process, the buildPatchedKernel script erases all of the source files and build files that it has downloaded and built. You can pass nocleanup as a command line flag so it keeps those files around. Hopefully you can fix everything.

$ ./buildPatchedKernelTX2.sh --nocleanup

Actually, this script is more useful as a template for rebuilding your kernel with the librealsense changes.

Performance

As we note in the video, there appear to be some performance issues with the realsense-viewer application. It seems rather suspicious that when the program starts, 100% of one of the CPU cores is being used. This usually indicates that the GUI is not yielding at the bottom of its event loop.

Another issue is that most of librealsense is built for optimization of x86 code. This is to be expected, after all it is an Intel product. Because the Jetson is ARM based, the code defaults to some generic drivel. If only there was a way to exploit parallel processing on a Jetson which has 256 CUDA cores …

Making progress! Off to the next installment of the series.

Notes

  • In the video, installation was performed on a Jetson TX2 running L4T 28.2 (JetPack 3.2)
  • Librealsense 2.10.2
  • Intel RealSense D435 camera

The post Intel RealSense D400 librealsense2 – NVIDIA Jetson TX Dev Kits appeared first on JetsonHacks.

Build Kernel and Modules – NVIDIA Jetson TX1

In this article, we cover building the kernel and modules for the NVIDIA Jetson TX1 running L4T 28.2. We also build the CH341 module, which is a USB to serial converter. Looky here:

Background

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

Note: The kernel source must match the version of L4T that has been flashed onto the Jetson. For example, here we use the kernel source for L4T 28.2 with L4T 28.2. Kernel versions are not compatible across releases.

NVIDIA recommends using a host PC when building a kernel from source. See the Linux for Tegra R28.2 web page, where you can find the required information about the GCC tool chain for the 64-bit BSP.

If you are building systems which require a large amount of kernel development, that is a good option. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or three as modules to support some extra hardware with the TX1.

Presented here are scripts which download the kernel source onto the Jetson TX1 itself, build the kernel image and modules, install the modules, and copy the kernel Image to the boot directory. The video above shows how to select the CH341 module and build it as an external module.

Installation

Note: On the Jetson TX1, space is at a premium. We advise that you build the kernel and modules immediately after flashing the Jetson.

The script files to build the kernel on the Jetson TX1 are available on the JetsonHacks Github account in the buildJetsonTX1Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX1Kernel.git
$ cd buildJetsonTX1Kernel
$ git checkout v1.0-L4T28.2

There are five main scripts.

getKernelSources.sh

The first script, getKernelSources.sh, gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file.

Note: The local version needs to be set to match the release that you are building. For example, if you are building modules for a stock kernel, then the local version should be -tegra, which makes the kernel release name 4.4.38-tegra. If you are building a custom kernel, then you should add your own local version. In the video above, we used -jetsonbotv0.1 which results in 4.4.38-jetsonbotv0.1.

Make sure to save the configuration file when done editing. Note that if you want to just compile a module or two for use with a stock kernel, you should set the local version identifier to match the stock version.

getKernelSourcesNoGUI.sh

getKernelSourcesNoGUI.sh gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSourcesNoGUI.sh

This script is useful for scripting purposes, SSH purposes, and in the case where you want to use an alternate method to edit the .config file.

makeKernel.sh

makeKernel.sh prepares the kernel and modules for building, and then builds the kernel and modules as specified.

$ ./makeKernel.sh

The modules are then installed in /lib/modules/

copyImage.sh

copyImage.sh copies over the newly built Image file into the /boot directory.

$ ./copyImage.sh

Note: This is probably overly simplistic. In most development cases (and as shown in the video), you will want to duplicate the stock kernel Image file, and modify the file /boot/extlinux/extlinux.conf so that you have the option to boot from either the stock image or the newly created image through the serial console. This is in case something, ahem, goes wrong. Because this is related to your specific development needs, this exercise is left to the reader.

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Note: The copyImage.sh script simply copies the Image file to the /boot directory of the current device. If you are using an external device such as a SSD as your root directory and still using the eMMC to boot from, you will need to copy the Image file to the /boot directory of the eMMC.

removeAllKernelSources.sh

removeAllKernelSources.sh removes all of the kernel sources and compressed source files. You may want to make a backup of the files before deletion.

$ ./removeAllKernelSources.sh
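If you want a backup first, something along these lines works (the destination path is just an example):

$ tar -czf $HOME/kernelSourcesBackup.tar.gz -C /usr/src kernel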

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src

After building the kernel, you may want to save the sources off-board to save some space (they take up about 3GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. Hopefully this article helps along this path.

Note

  • The video above was made directly after flashing the Jetson TX1 with L4T 28.2 using JetPack 3.2.
  • For earlier versions of L4T, look at the tags in the Github buildJetsonTX1Kernel repository.

The post Build Kernel and Modules – NVIDIA Jetson TX1 appeared first on JetsonHacks.


Now with CUDA! Intel RealSense D400 cameras – NVIDIA Jetson TX

In our previous article, Intel RealSense D400 librealsense2, we began work on making the RealSense SDK work more smoothly on the NVIDIA Jetson TX Development kits. We now add CUDA support! Looky here:

Background

As you may recall, there are a few issues with running the RealSense SDK, librealsense 2, on the NVIDIA Jetson TX2. For example, the application "realsense-viewer" pegged a CPU core at 100%, which resulted in poor performance. We also want to add support for the Jetson TX1.

As we had guessed, there was a wild render loop which did not have a sleep call. After fixing that, we added CUDA support. This is a little more of a challenge than one would like. Intel switched over to using CMake in version 2 of librealsense, so the first task is to add flags to tell CMake to use CUDA. We chose the appropriately named "USE_CUDA" flag.
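For the curious, a configure invocation along those lines might look like this (a sketch only; the install script below handles this for you):

$ cd librealsense
$ mkdir build && cd build
$ cmake .. -DUSE_CUDA=true
$ make && sudo make install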

Next we add the actual CUDA code itself. Fortunately we have a template to work from. It turns out that Mehran Maghoumi (who is currently working at NVIDIA!) had done some work in his Github repository culibrealsense on adding CUDA support for converting the native image format (YUY2) to the more display friendly RGB. A few nips and tucks later, we now have support for RGB, BGR, RGBA and BGRA in CUDA. BGR is the preferred OpenCV format. The 'A' refers to an alpha channel for the image.

There is some speedup using the new conversion code. In a few tests the conversion time of an image went from ~14ms to ~8ms. This is a healthy speedup. The code can probably be improved, but for our purposes it’s a nice speedup for the amount of work involved.

During the development process, we also upgraded to librealsense 2.10.4. The library is under heavy development; unfortunately, the challenge is discovering newly added "features" which don't translate well into the Jetson environment. These tend to be things such as code written for Intel x86 processors and the myriad of idiocy that goes along with multi-platform libraries. To be fair, this is an Intel product. It makes sense that there are optimizations for their processors. It just takes a while to figure out what breaks when the optimizations are added.

The kernel patching routines are rewritten to support the Jetson TX1, and to be slightly more intelligent about configuring the kernel.

Note: Because space is tight on the Jetson TX1, you should first flash the Jetson TX1 with L4T 28.2 using JetPack 3.2, download the buildLibrealsense2TX repository, and then build the patched kernel. Once the kernel is in place, remove the kernel sources to give you enough space to operate.

Installation

On the JetsonHacks Github account, there is a repository buildLibrealsense2TX. To download the repository:

$ cd $HOME
$ git clone https://github.com/jetsonhacks/buildLibrealsense2TX
$ cd buildLibrealsense2TX
$ git checkout v0.8

The instructions are much the same as in the previous article. More details are available on the README.md file in the buildLibrealsense2TX directory.

Explain please

First, a word about what we're doing. There are several RealSense camera formats that the standard UVC (video) module in the kernel does not support. librealsense provides patches to add those video modes to the appropriate kernel modules. The patches also add support for properly attaching timestamps to the incoming video stream. In addition, there are several patches that modify some of the Industrial I/O (IIO) tree. These patches provide support for supplemental hardware on a RealSense camera, such as a 3D Gyro or 3D Accelerometer.

Note: Some of the patches apply to modules that are built into the kernel itself. Because these modules are required to be in the Image, and not built as external modules, you will need to compile the kernel itself along with the modules.

If you do not modify the Linux kernel, librealsense will mostly appear to work. You will probably experience issues a little further along when you are trying to get better precision and accuracy from the video and depth streams. As the RealSense D435 camera does not have supplemental hardware, the IIO patches don’t make any difference.

If you’re just looking to play around with the camera, you may be able to get away with not compiling the kernel and skip over it. If you’re more serious, you’ll have to start patchin’ !

Building the Kernel

Note: If you built your kernel as in the previous article, you must rebuild it again!

Building the kernel can be taxing; there are many little things that can go wrong. Plus, you can make your Jetson become brickly if something happens at an untoward moment. If you're a go for it kind of person:

$ ./buildPatchedKernelTX2.sh

If something does go wrong during the build, you may want to try to debug it. As part of its cleanup process, the buildPatchedKernel script erases all of the source files and build files that it has downloaded and built. You can pass nocleanup as a command line flag so it keeps those files around. Hopefully you can fix everything.

$ ./buildPatchedKernelTX2.sh --nocleanup

Actually, this script is more useful as a template for rebuilding your own kernel with the librealsense changes. There are two scripts in the 'scripts' directory to help with the kernel build:

patchKernel.sh applies the librealsense kernel patches.

configureKernel.sh Configures the kernel to add the appropriate modules needed by librealsense.

Since you’re a developer, you should be able to figure out what the scripts are doing and modify them to match your needs.

librealsense Installation

Make sure that the RealSense camera is not attached to the system. Then install the library:

$ ./installLibrealsense.sh

Looking inside the script file, you will see that there are a couple of patches for the Jetson. The first patch is a work around for some code related to an Intel specific instruction set, the other is a workaround for the Industrial I/O (IIO) device detection.

The stock librealsense code appears to assume that an IIO device reports with a device and bus number. On the Jetson, the ina3221x Power Monitors do not follow this protocol. The result is that a series of warnings is issued continuously as the library scans for HID devices that have been added (plugged in) to the system.

The library is looking for IIO HID devices (the accelerometer and gyroscope on a RealSense camera). The ina3221x is not a HID device, but appears during the IIO scanning. The library scans, but because it does not find a device or bus number for the power monitor, it issues a warning to stderr (the console). The result is that the console gets spammed, which in turn results in a performance penalty.

The workaround patch checks to see if the device detected is the power monitor before issuing a warning.

Similarly, the built-in camera module control is via CSI/I2C, not USB as expected by librealsense. Again, a warning is sent to stderr by librealsense. There may be a clever way to determine if the warning is about the on-board Jetson camera, but in this case the patch just comments out the warning.

After applying the patches, the script compiles the library, examples and tools:

  • The library is installed in /usr/local/lib
  • The header files are in /usr/local/include
  • The examples and tools are located in /usr/local/bin

The script also sets up a udev rule so that the RealSense camera is available in user space.

Once the library is installed, plug the camera into the Jetson, or into the Jetson through a powered USB 3.0 hub. You can then go and execute the tools and examples. For example:

$ cd /usr/local/bin
$ ./realsense-viewer

Conclusion

So there you have it. This is the first pass through of getting the RealSense D400 cameras working with the Jetson TX Dev kits. Because the RealSense SDK is under heavy development, we will have to keep our eye out for improvements in the weeks ahead!

Notes

  • L4T 28.2 installed by JetPack 3.2
  • A Jetson TX2 was used in the video
  • buildLibrealsense2TX is version v0.8

The post Now with CUDA! Intel RealSense D400 cameras – NVIDIA Jetson TX appeared first on JetsonHacks.

Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits

With the advent of L4T 28.2, we freshened up the ROS installers for the Jetson TX1 and Jetson TX2. Looky here:

Background

In a previous article, Robot Operating System (ROS) on NVIDIA Jetson TX2, we discuss the history of ROS and why it has become the most popular operating system for robots in the world.

The L4T 28.2 release now allows us to use the same installer code base for the Jetson TX1 as for the Jetson TX2. Due to the legacy nature of this project, we still maintain two repositories on the JetsonHacks account on Github. The first is installROSTX2, which is for the Jetson TX2. The second is installROSTX1, which is for the Jetson TX1.

Installation

The main script, installROS.sh, is a straightforward implementation of the install instructions taken from the ROS Wiki. The instructions install ROS Kinetic on the Jetson.

The installation for both Jetsons is similar. On the Jetson TX2:

You can grab the repository:

$ git clone https://github.com/jetsonhacks/installROSTX2.git
$ cd installROSTX2

on the Jetson TX1:

$ git clone https://github.com/jetsonhacks/installROSTX1.git
$ cd installROSTX1

installROS.sh

Usage: ./installROS.sh  [[-p package] | [-h]]
 -p | --package <packagename>  ROS package to install
                               Multiple Usage allowed
                               The first package should be a base package. One of the following:
                                 ros-kinetic-ros-base
                                 ros-kinetic-desktop
                                 ros-kinetic-desktop-full
 

Default is ros-kinetic-ros-base if no packages are specified.

Example Usage:

$ ./installROS.sh -p ros-kinetic-desktop -p ros-kinetic-rgbd-launch

This script installs a baseline ROS environment. There are several tasks:

  • Enables the universe, multiverse, and restricted repositories
  • Adds the ROS sources list
  • Sets the needed keys
  • Loads the specified ROS packages (defaults to ros-kinetic-ros-base if none are specified)
  • Initializes rosdep

You can edit this file to add the ROS packages for your application.
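Once the script finishes, a quick sanity check (assuming the default ros-kinetic-ros-base install) is to source the ROS environment and start the master:

$ source /opt/ros/kinetic/setup.bash
$ roscore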

setupCatkinWorkspace.sh

Usage:

$ ./setupCatkinWorkspace.sh [optionalWorkspaceName]

where optionalWorkspaceName is the name of the workspace to be used. The default workspace name is catkin_ws. This script also sets up some ROS environment variables. Refer to the script for details.
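For example, to create a workspace named jetsonbot (a hypothetical name, assuming the workspace is created in your home directory) and then use it in a new shell:

$ ./setupCatkinWorkspace.sh jetsonbot
$ source ~/jetsonbot/devel/setup.bash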

Notes

  • In the video, the Jetson TX2 was flashed with L4T 28.2 using JetPack 3.2. L4T 28.2 is derived from Ubuntu 16.04.

The post Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits appeared first on JetsonHacks.

Intel RealSense Package for ROS on NVIDIA Jetson TX

Intel provides an open source ROS Package for their RealSense D400 series cameras. Here we install the package on a NVIDIA Jetson TX development kit. Looky here:

Background

The RealSense D400 cameras are the next generation of the Intel RealSense camera product line. In the video, we install a ROS driver for a RealSense D435 camera, a device well suited towards robotic applications. The size of the D435, along with convenient mounting options, provide for a nice mechanical packaging solution for adding a RGBD camera to your project.

Installation

There are two prerequisites for installing the RealSense ROS package on the Jetson. The first is to install the librealsense 2 camera driver library on the Jetson. We cover the procedure in this article: Install Intel RealSense D400 camera driver.

The second prerequisite is to install Robot Operating System (ROS). A short article Robot Operating System (ROS) on NVIDIA Jetson TX Development Kits is available.

Install RealSense Package for ROS

There is a convenience repository on the JetsonHacks account on Github, installRealSense2ROSTX, which contains a script to install the RealSense ROS package.

After the prerequisites above are installed:

$ git clone https://github.com/jetsonhacks/installRealSense2ROSTX
$ cd installRealSense2ROSTX
$ ./installRealSense2ROSTX.sh [catkin workspace name]

Where catkin workspace name is the name of the catkin workspace in which to place the RealSense ROS package. If the workspace name is not specified, the installation takes place in the default catkin workspace, catkin_ws.

The install script works with either a Jetson TX1 or a Jetson TX2.

There are several launch files available. Please refer to the Intel-ROS RealSense repository on Github for examples.
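For example, a typical way to bring up the camera node (launch file names vary between realsense2_camera releases, so check the repository for the one matching your version):

$ roslaunch realsense2_camera rs_camera.launch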

Notes

  • In the video, the installation takes place on a Jetson TX2 running L4T 28.2 (JetPack 3.2).
  • An Intel RealSense D435 is being used in the video.
  • Version v2.0.3 of the intel-ros realsense package is installed
  • Version v2.10.4 of librealsense 2 is installed on the Jetson
  • ROS Kinetic

The post Intel RealSense Package for ROS on NVIDIA Jetson TX appeared first on JetsonHacks.

Build OpenCV 3.4 with CUDA on NVIDIA Jetson TX2

In order for OpenCV to get access to CUDA acceleration on the NVIDIA Jetson TX2 running L4T 28.2 (JetPack 3.2), you need to build the library from source. Looky here:

Background

With the latest release of L4T, 28.2, OpenCV version 3.3 may be installed through the JetPack installer. At the time of the L4T release, OpenCV did not provide support for CUDA 9.0, with which L4T 28.2 ships. Over the next couple of months, OpenCV 3.4 added CUDA 9.0 support.

So what does that mean? Well, if you want OpenCV CUDA support under L4T 28.2 you need to compile it from source. Fortunately we have some convenience scripts to help with that task in the JetsonHacks repository buildOpenCVTX2 on Github.

Installation

You should note that OpenCV is a rich environment, and can be custom tailored to your needs. The build command includes some of the more common options, but it is not comprehensive. Modify the options to suit your needs.

Library location

With this script release, the script now installs OpenCV in /usr/local. Earlier versions of this script installed in /usr. You may have to set your include paths, library paths, and/or PYTHONPATH to point to the new location. See the Examples folder. Alternatively, you may want to change the script to install into the /usr directory.

All of this may lead to a conflict. You may consider removing OpenCV installed by JetPack before performing this script installation:

$ sudo apt-get purge libopencv*

Options

Make sure to read through the install script. Here are some of the options included in the script:

  • CUDA
  • Fast Math (cuBLAS)
  • OpenGL
  • GStreamer 1.0
  • Video 4 Linux (V4L)
  • Python 2.7 and Python 3.5 support

Build and Install

To download the source, build and install OpenCV:

$ git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
$ cd buildOpenCVTX2
$ ./buildOpenCV.sh
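After the build completes, a quick way to verify the installed version and confirm that CUDA support was compiled in is from Python:

$ python -c 'import cv2; print(cv2.__version__)'
$ python -c 'import cv2; print(cv2.getBuildInformation())' | grep -i cuda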

You can remove the sources and build files after you are done:

$ ./removeOpenCVSources.sh

This will remove the OpenCV source, as well as the opencv_extras directories.

Examples

There are a couple of demos in the Examples folder.

Both programs require OpenCV to be installed with GStreamer support enabled. Both of these examples were last tested with L4T 28.2 and OpenCV 3.4.1.

The first is a simple C++ program to view the onboard camera feed from the Jetson Dev Kit.

To compile gstreamer_view.cpp:

$ gcc -std=c++11 `pkg-config --cflags opencv` `pkg-config --libs opencv` gstreamer_view.cpp -o gstreamer_view -lstdc++ -lopencv_core -lopencv_highgui -lopencv_videoio

to run the program:

$ ./gstreamer_view

The second is a Python program that reads the onboard camera feed from the Jetson Dev Kit and does Canny Edge Detection.

To run the Canny detection demo (Python 2.7):

$ python cannyDetection.py

With Python 3:

$ python3 cannyDetection.py

With the Canny detection demo, use the less than (<) and greater than (>) keys to adjust the edge detection parameters. You can pass the command line flag --video_device=<videoDeviceNumber> to use a USB camera instead of the built-in camera.

The post Build OpenCV 3.4 with CUDA on NVIDIA Jetson TX2 appeared first on JetsonHacks.

GPU Activity Monitor – NVIDIA Jetson TX Dev Kit

Sometimes you just want to see how hard the GPU is working. Enter the GPU Activity Monitor. Looky here:

Background

Traditionally, if you want to see how busy a Linux system is, you can use a graphical tool like System Monitor. The CPU, memory, networking, and a wide variety of other good innards are on display. However, one thing that is missing is GPU utilization.

Most developers use the tegrastats tool to get a feel for GPU utilization, which it reports as a percentage of maximum. tegrastats prints this along with a large number of other system parameters every second. For my particular use case, I am only interested in a graph of how the GPU is being utilized over time.

Simple Stupid Good

After fiddling around a little bit, I figured out how to get the GPU utilization and wrote a simple Python script to graph utilization against time. Similar to the way that System Monitor works, the graph shows utilization over a 60 second interval.
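On the Jetson, the GPU load is exposed through sysfs; the script polls a load file along these lines (the TX2 path is shown here; the exact path can vary by platform, and the raw value appears to be scaled from 0 to 1000; see the script for the exact files it reads):

$ cat /sys/devices/gpu.0/load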

This is a dead simple implementation, rather brute force. It would have been nice if I knew Python, but hey! The Python script utilizes the Matplotlib library. You can use the script with Python 2 or Python 3.

Installation

The graph is implemented as an animated Python Matplotlib graph. The first step is to install the appropriate Matplotlib library.

For Python 2.7, Matplotlib may be installed as follows:

$ sudo apt-get install python-matplotlib

For Python 3, Matplotlib may be installed as follows:

$ sudo apt-get install python3-matplotlib

Next, on the JetsonHacks account on Github there is a repository named gpuGraphTX. Clone the repository:

$ git clone https://github.com/jetsonhacks/gpuGraphTX

and switch over to the repository’s directory:

$ cd gpuGraphTX

The Fun Part, Run the Script

You can run the app:

$ ./gpuGraph.py

or:

$ python gpuGraph.py

or:

$ python3 gpuGraph.py

After a little time spent loading fonts, the graph appears:

GPU Activity Monitor

You can resize the window to get a better view of the activity, as well as use the toolbar to do actions like zoom in on any given section or save the graph to a file.

Notes

In the video, the script was installed on a Jetson TX2 directly after flashing L4T 28.2 using JetPack 3.2. The script has been tested with both the Jetson TX1 and Jetson TX2, and using Python 2.7 and Python 3.5.

The post GPU Activity Monitor – NVIDIA Jetson TX Dev Kit appeared first on JetsonHacks.
