
Select Boot Device from Serial Console – NVIDIA Jetson Development Kit


Just a short article on how you can select the boot device from the serial console of an NVIDIA Jetson Development Kit. Looky here:

Background

In the last few articles, we covered attaching external storage to the Jetson TX1 and then pointing the root directory at the external device so that the Jetson runs from it at boot time. This expands the amount of storage on the root device beyond the 16 GB of internal eMMC.

The serial console is wired into the Jetson TX1 using a USB-TO-TTL serial cable. On a Jetson TK1, the wiring to the serial console is different, but the process on the connected host is the same.

Modifying the extlinux.conf file on the Jetson allows for different booting options to be made available over the serial console. Each device entry has a MENU LABEL which represents the device and is presented to the user during boot over the serial console. On the connected host machine, tapping a key while the Jetson is booting allows us to select the boot device option.
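
To make this concrete, here is a trimmed, illustrative sketch of what a two-entry extlinux.conf might look like. The exact APPEND arguments should be copied from the stock entry already on your Jetson, and the root devices shown (/dev/mmcblk0p1 for internal eMMC, /dev/sda1 for a SATA or USB drive) are only examples:

TIMEOUT 30
DEFAULT sata

MENU TITLE Jetson Boot Options

LABEL primary
      MENU LABEL Internal eMMC
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait

LABEL sata
      MENU LABEL SATA Drive
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/sda1 rw rootwait

During boot, the MENU LABEL for each entry is what appears in the serial console menu, and the DEFAULT entry is used if no key is pressed.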

And I would do this why?

There are a few reasons you may want to use this capability. First, it is handy if you need to debug your extlinux.conf file after you change it. The next couple of reasons apply when external storage is being used as the root device.

The second reason to use this method is in the case where the root device gets buggered, but you still need to be able to access the Jetson. By having a clean internal eMMC installation, you should be able to boot to the internal eMMC and still be able to examine the other external devices. This provides a margin of comfort in case there are issues and the Jetson won’t boot.

Another reason is that some boot devices are removable media, such as USB drives. Especially when the Jetson is in a space where different people use it, it’s not uncommon for the USB drive to be mistakenly removed and taken away. Without the USB device, the Jetson won’t boot, and the console output may make it look like the Jetson is hopelessly broken. Using the serial console, you can at least boot to the internal eMMC.

Having an SD card as a boot device is another safety backup, as the Jetson will attempt to boot from the SD card before it tries to boot from other devices.

The post Select Boot Device from Serial Console – NVIDIA Jetson Development Kit appeared first on JetsonHacks.


Install Qt Creator on NVIDIA Jetson TX1


Here we have a short article on installing Qt Creator on the NVIDIA Jetson TX1. Looky here:

Note: This article is about installing Qt Creator 3.3.1 for Qt 5.5.1 on a Jetson TX1. The Jetson TX1 is flashed using JetPack 2.3.1 and is running L4T 24.2.1.

Background

There are a couple of tricks to installing Qt Creator on the Jetson TX1 from the Ubuntu repositories. Some folks have reported issues installing Qt 5.5 on the TX1, so I revisited the installation we had done for the Jetson TK1.

Installation

First, install Qt Creator from the repositories. Open a Terminal and execute:

$ sudo apt-get install qt5-default qtcreator -y

Second, the compiler needs to be set up. Open Qt Creator, and go to:

Tools->Options->Build & Run->Compilers

Click the ‘Add’ button and select ‘GCC’. In the ‘Compiler path:’ text box, place the path to the gcc compiler. On a standard installation the path is: /usr/bin/gcc.

Here’s where the first issue comes into play. When GCC is added to the compiler list, it does not set the processor architecture flag correctly. As shown in the video, remedy this issue by modifying the ABI section of the GCC compiler dialog. Change the setting to:

custom – arm – linux – generic – elf – 64 bit

Then save the modifications by clicking ‘Apply’.

The third and final step is to add a kit which supports the GCC compiler. Click the ‘Kits‘ tab. The ‘Desktop‘ kit appears to have an issue with setting the compiler. This means that you can either find the Desktop kit configuration file and manually modify it, or create a new Kit altogether. In the video, a new Kit called ‘JetsonTX1’ is created and set to be the default.

Qt Creator is now ready for development. Make sure that the JetsonTX1 Kit is selected when creating a new project.

Examples

In the video, the standard Qt examples were loaded for demonstration purposes. Also, Qt documentation was loaded. In the Terminal, execute:

$ sudo apt-get install qt5-doc qt5-doc-html qtbase5-doc-html qtbase5-examples -y

The examples are now available.

Conclusion

Getting Qt Creator up and running on the Jetson TX1 requires a couple of tricks, but fortunately we were able to figure them out.

Note: If you are running a version of L4T 24.X before 24.2.1, you may encounter errors associated with a soft link issue with the Mesa OpenGL drivers. In 24.2.1, these have been resolved. For previous versions, you may have to:

$ cd /usr/lib/aarch64-linux-gnu
$ sudo rm libGL.so
$ sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libGL.so libGL.so

The post Install Qt Creator on NVIDIA Jetson TX1 appeared first on JetsonHacks.

Speech Recognition – Smart Microphone – Jetson Development Kits


This article starts a new series on Speech Recognition. A “smart microphone” is an array of microphones with special signal processing hardware to locate and isolate speech, even in noisy environments. Looky here:

Background

All the cool kids now have in-home, voice-activated devices like Amazon Echo or Google Home. These devices can play your favorite music, answer questions, read books, control home automation, and all those other things people thought the future was about in the 1960s. For the most part, the speech recognition on these devices works well, although you may find yourself with an extra dollhouse or two occasionally.

One of the enabling technologies of these devices is what is called a microphone array. Several microphones are placed in a circle, with the output being sent to a Digital Signal Processor, or DSP for short. The DSP runs several special algorithms which help detect where a voice originates (localization) and uses audio beamforming to process the signal, reducing echo and reverberation. The result is an audio stream that is an accurate representation of the original voice.

Once a suitable audio stream has been acquired, the stream can be either processed locally or sent to a server for further processing. In the case of something like an Amazon Echo, a local processor “listens” to the incoming audio stream for a keyword trigger, e.g. “Alexa”. Once the keyword has been identified, the rest of the audio stream is sent to online servers which do speech recognition on the stream, and then parse the audio into “actions”. The service then sends the action back to the device. These actions vary from device to device, but typically allow the user to request the device to play music, control home automation devices, or ask/answer questions. Amazon, Google and Microsoft all have APIs to interface their online services with audio.

The online services have large databases which they have used, along with machine learning techniques, to train their speech recognizers. You may have noticed that many of the online services have become significantly better at recognizing speech over the last couple of years. This improvement is mostly due to advances in machine learning.

Speech Recognition for the rest of us

The consumer devices are interesting, and now the technology for smart microphones is available separately from several manufacturers. In the video, a Seeed Studio ReSpeaker is shown. There are several other manufacturers; the ReSpeaker in the video was ordered through a Kickstarter campaign.

The Far Field Microphone Array is built around an XVSM-2000 chip from XMOS. Watch the video for a rundown of the rest of the fun hardware that is available on the ReSpeaker, with sprinkles like RGB LEDs and an Arduino type of processor. The Jetson can talk to either the ReSpeaker Core or the Microphone Array over USB.

Conclusion

Over the course of the next few articles, we’ll figure out how to interface with the Microphone Array, gather the audio stream, and then perform speech recognition both locally and through online services.

The post Speech Recognition – Smart Microphone – Jetson Development Kits appeared first on JetsonHacks.

Daniel Tobias’ CAR – Cherry Autonomous Racecar


Daniel Tobias created a Jetson based autonomous vehicle named the Cherry Autonomous Racecar (CAR) for his senior design project at North Carolina A&T State University. Daniel was gracious enough to share the in-depth details of his project in this interview.

Daniel Tobias with the Cherry Autonomous Racecar

NCAT Cherry Autonomous Racecar – Tear down

Interview

JetsonHacks (JH): What is your background?

Daniel Tobias (Daniel): I am from Greensboro, North Carolina. My first exposure to robotics was back in high school, where I was the founding president of the school’s FIRST-based robotics club. After graduation I originally pursued a BS in chemistry but later switched to computer engineering.

Currently I am a senior in the Computer Engineering Undergraduate program at North Carolina A&T State University, where I also help to oversee and run the day-to-day operations of the school’s Robotics Club/Makerspace. My current interests include the application of machine learning to robotics and music.

JH: Why did you choose to make the “Cherry Autonomous Racecar”?

Daniel: The idea came to me in March 2016 while I was walking through Cherry Hall on the way to the University’s robotics club. The hallways of Cherry have parallel blue and gold stripes that look like lane markers. So I thought it would be cool to see if a scale car could drive down them like a road. With a bit of Googling and YouTubing I found the MIT RACECAR. Their car was able to navigate the tunnels of MIT using a LIDAR and SLAM. I liked their project, but the Hokuyo LIDAR was way out of my budget at $1800.

After more research I figured a vision based approach could get similar results without spending a lot. So the plan was to recreate their platform but use OpenCV with GPU support to do lane recognition, and eventually bring other sensors into the fold to help it navigate. That was how my project started out and where the name Cherry Autonomous Racecar comes from.

While researching techniques for lane centering, I became more interested in how current approaches to self-driving cars work. So I figured I would try reproducing them at scale. I also wanted to get experience with sensors similar to those being used on autonomous cars in the industry, which is why I chose to add the Neato LIDAR and the RealSense depth camera.

JH: Why did you choose to use the Jetson for the CAR? What advantage does using the Jetson platform give the CAR?

Daniel: I originally went with the Jetson TX1 because I wanted to do real time lane recognition and centering using OpenCV. Since the board was advertised as the world’s most advanced system for embedded visual computing, I figured it would be a perfect fit.

My original plan was to offload image processing to the CUDA cores and onboard HW decoders to handle high res frames in real time. But the main benefit was that everything could be done locally without relying on a wireless connection to a server.

There were also online resources and active communities, like JetsonHacks, centered around the board, which were helpful.

Currently I am using TensorFlow on the Jetson TX1 to run the End to End Learning for Self-Driving Cars CNN model that NVIDIA released a while back. The Jetson allowed me to use the GPU version of Tensorflow to run the trained model roughly in real time.

JH: What were the major obstacles you encountered when building your CAR?

Daniel: The biggest obstacle I faced was compiling OpenCV 3.1 with CUDA support. I spent a lot of Summer 2016 in dependency hell, involving over 50 attempts to get OpenCV compiled right. You can see that, around that time, there were multiple posts on the official Jetson TX1 forums regarding how to install it.

It took some time to diagnose why my data recording node was saving images out of order. Eventually I realized that the Jetson couldn’t handle saving 65 FPS (30 RGB, 30 depth, 5 LIDAR) with the software encoding of cv.imwrite. So I tried reducing this number to (10, 10, 5) respectively and it worked; I suspect it was a threading issue.

In the paper, NVIDIA implemented the CNN using Torch and their Drive PX system. Reproducing the same model in TensorFlow resulted in a trained model that was over 1 GB in size. This was problematic since the supporting ROS network and OS were already using 2.5 of the 4 GB of RAM. When we tried loading the model into RAM the Jetson threw OOM errors. We tried different solutions like black-and-white images and reducing the resolution of the images further. These helped bring the size of the model down, but the main issue was the 1164 neurons on the first fully connected layer. By cutting it down to 512 neurons we were able to reduce the model to 150 MB without any noticeable loss of performance on our validation set. There are other improvements that could reduce the size of the model, like using half-precision floating point, which the Maxwell based CUDA cores could take advantage of, or using TensorFlow quantize functions to reduce the precision to 8-bit.

I was originally commanding the Traxxas Electronic Speed Control (ESC) using a 5 volt Arduino, but was getting strange behavior. It turned out the ESC uses 3.3V logic, so switching to the 3.3V Teensy solved the issue.

IMUs are very noisy, and calibrating the magnetometer of the Razor 9DOF IMU is near impossible when it is mounted on my car.

I had to upgrade the shocks and springs of the Traxxas Slash to support the weight of all the extra components.

The Xbox controller transceiver module caused interference with the WiFi I was using to remotely access the car; both apparently use the 2.4 GHz band. One could use the Jetson’s 5 GHz band to get around this, but my current router doesn’t support it.

Another obstacle was the lack of storage space. In July I found a guide on how to put the filesystem on the SSD and point the Jetson at the SSD when it boots up. This also allowed compiling TensorFlow from source, which requires at least 12 GB of swap space. But now you can just use a .whl file to install TensorFlow. (Install TensorFlow on NVIDIA Jetson TX1 Development Kit)

I started to realize pretty early on that development would be difficult on a bleeding edge device like the Jetson TX1. But in its current state, the support and community for the Jetson TX1 have grown to the point where most of the problems and their solutions can be found, if you search for them.

JH: It’d be great to have a description about the idea behind doing deep learning/training for the CAR. I know that it’s a big subject over at NVIDIA.

Daniel: I eventually stumbled upon the video of NVIDIA’s DAVE 2 system driving a Lincoln near the end of September. Looking at the paper that accompanied the video, I saw they were using the NVIDIA Drive PX to run their CNN, so I googled it. It seemed the Drive PX and the Jetson were similar in hardware, which led me to believe that it was possible to implement their net on the Jetson.

An end to end system that could handle input to actuation went completely against traditional approaches to autonomous driving. There were no explicit equations, variables or software modules that were designed for a specific task. It seemed like a new paradigm to use a purely data driven approach that could mimic the behavior of a human driver.

The training data was generated by manually driving the car around and saving every third image to disk as a jpg with the corresponding steering value in the filename. Each steering angle was normalized by mapping to a range of values where 0.0 represents full-left, and 1.0 full-right.

Each data-collection run lasted about 35 minutes producing over 30,000 labeled images. These were then offloaded to a computer with a GTX 1080 and sufficient RAM for training. The resulting trained model was copied to the Jetson and fed realtime images from the same camera; with the steering angles output by the model used to direct the car. Since the model only produced steering angles, a simple cruise control feature was added to set the throttle so the controller didn’t have to be used to accelerate.

In the paper they augmented the images so the car would learn to recover from mistakes. Our take on this was to drive the CAR normally for 80% of the time and for the last 20% to drive it somewhat chaotically. Chaotic driving was letting the car drift periodically on the straightaways and correcting when it got close to a wall. Also, additional human drivers were used so that the car had a variety of data. We noticed driving patterns emerge when the car ran the model that could be linked back to a specific driver. As in the paper, we tried to increase the relative number of turning images for training. This was done by flooring the throttle of the CAR when it was going straight and taking the turns very slowly to generate more images.

JH: What are your plans going forward with the CAR?

Daniel: Currently my team and I are creating hardware instructions on how to physically recreate the CAR, which go with the software already posted on Github. The Jetson and most of the sensors are not being utilized to the fullest, so we plan to release the project as an Open Source/Hardware platform so that others can modify or branch the project in a different direction like what I did with the MIT RACECAR.

As a senior design project, we are also working on a larger version of the Cherry Autonomous Racecar. Think Barbie Jeep but with a few thousand dollars worth of sensors thanks to Chrysler.

If time permits this semester I plan to try a 3D convolutional neural network to work with the point cloud from the RealSense. The net would try to detect other 1/10th scale cars and maybe some other smaller objects like a toy dog. I might also try a 3D CNN through time (FlowNet) and maybe an RNN approach.

I might also try running the CAR in a walking/bike trail setting and see if it can train on the first few miles and run on the last few miles assuming I can find a trail that doesn’t change too much scenery wise.

On a more personal note, I wouldn’t mind continuing development of the CAR project or a similar project in the capacity of a graduate student or researcher.

NCAT Cherry Autonomous Racecar – Trial Runs

Github Repositories

Tensorflow

https://github.com/DJTobias/Cherry-Autonomous-Racecar/tree/master/Tensorflow
https://github.com/DJTobias/Cherry-Autonomous-Racecar/blob/master/car/scripts/runModel.py

CAR package

https://github.com/DJTobias/Cherry-Autonomous-Racecar/

Conclusion

I want to thank Daniel for sharing his work with the JetsonHacks community. Hopefully this will provide some inspiration for other Makers out there. If you have a Jetson based build you would like to share, send in an email!

The post Daniel Tobias’ CAR – Cherry Autonomous Racecar appeared first on JetsonHacks.

NVIDIA Jetson TX2 Development Kit


Today NVIDIA began shipping a new product, the Jetson TX2 Development Kit. Looky here:

Jetson TX2 Overview

The Jetson TX2 is a new iteration of the Jetson Development Kit which doubles the computing power and power efficiency of the earlier Jetson TX1.

The Jetson TX1 Dev Kit introduced a new module format, where a standardized Tegra Module is plugged into a carrier board. While the Jetson TX2 uses the same carrier board as the Jetson TX1, the actual Tegra TX2 Module itself is all new.

Hardware

The Jetson TX2 features an NVIDIA Pascal GPU with 256 CUDA capable cores. The CPU complex consists of two ARM v8 64-bit CPU clusters which are connected by a high-performance coherent interconnect fabric. The Denver 2 (dual-core) CPU cluster is optimized for higher single-thread performance; the second CPU cluster is a quad-core ARM Cortex-A57, which is better suited for multi-threaded applications.

The memory subsystem incorporates a 128-bit memory controller, which provides high bandwidth LPDDR4 support. 8 GB of LPDDR4 main memory and 32 GB of eMMC flash memory are integrated on the Module. Going to a 128-bit design from the TX1’s 64-bit design is a major performance enhancement.

The Module also includes hardware video encoders and decoders which support 4K ultra-high-definition video at 60 fps in several different formats. This is slightly different from the hybrid Jetson TX1 module, which used both dedicated hardware and software running on the Tegra SoC for those tasks. Also included is an Audio Processing Engine with full hardware support for multi-channel audio.

The Jetson TX2 supports Wi-Fi and Bluetooth wireless connectivity. Wi-fi is much improved over the earlier Jetson TX1. Gigabit Ethernet BASE-T is included. Here’s a comparison between the TX1 and the TX2.

Jetson TX2 vs Jetson TX1

The carrier board, which is common between both the Jetson TX2 and the Jetson TX1, has the following I/O connectors:

  • USB 3.0 Type A
  • USB 2.0 Micro AB (supports recovery and host mode)
  • HDMI
  • M.2 Key E
  • PCI-E x4
  • Gigabit Ethernet
  • Full size SD card reader
  • SATA data+power
  • Display expansion header
  • Camera expansion header

There are two expansion headers, a 40 pin, 2.54mm spaced header with signals laid out similarly to the Raspberry Pi, and a 30 pin, 2.54mm spaced header for extra GPIO.

The Jetson also includes a 5 MP camera module attached to the camera expansion header, and a display expansion header for adding extra display panels.

The Jetson TX2 has added a CAN bus controller to the module. CAN is a network format that is frequently used in automobiles and other vehicles. The CAN bus signals are available directly on the GPIO Expansion Header.

Sippy or Speedy

This new generation brings a configurable amount of performance increase depending on power consumption requirements. NVIDIA has engineered two modes. Max-Q is the name of the energy efficiency mode which clocks the Parker SoC for efficiency over performance and draws about 7.5W, right before the bend in the power/performance curve. The result of this mode is that the TX2 has similar performance to a TX1 in max performance mode, while drawing about half the power!

In Max-P mode, the TX2 just flat out goes for it in the power budget of 15W. This provides about twice the performance of the Jetson TX1 at its maximum clock rate.

Jetson TX2 Dual Operating Modes

Software

There are several changes to the Jetson TX2 software stack. The Jetson TX2 runs a Developer Preview of an Ubuntu 16.04 variant named L4T 27.1. The Linux Kernel is 4.4, a newer version than the earlier Jetson TX1 version 3.10. There have been changes to the boot flow, with additional firmware managers added to the mix. The Jetson TX2 comes with a long list of software libraries, and a good selection of samples with source code.

The new JetPack 3.0 installer is available to flash and copy system software to the Jetson TX2.

Initial Impressions

NVIDIA claims that the Jetson TX2 is twice as fast as the Jetson TX1. After booting the machine, this surely seems the case. The entire experience feels very much like a desktop/laptop level machine. Doubling the memory (and the memory bus speed) surely helps with that feeling. Previous Jetsons experience quite a bit of memory pressure when running memory intensive, desktop applications like web browsers. The TX2 doesn’t even notice.

Running a handful of compiles and tests on applications like Caffe proved that the Jetson TX2 is indeed quite a bit faster than the earlier Jetson TX1 (see the video for one of the tests).

One of the fun samples that comes with the Jetson TX2 is an object recognition example which is demonstrated in the video. The deep learning sample uses Caffe along with ImageNet and uses the onboard camera to grab imagery.

Note that we haven’t performed any performance tuning for the demos; this is how it runs fresh out of the box!

If you want some hardcore numbers, go over to Phoronix and check out the NVIDIA Jetson TX2 Linux Benchmarks.

Conclusion

Stay tuned as we begin working with the TX2 to better understand how to take advantage of the extra performance. Find out more on the NVIDIA Developers site.

Pictures, Natch!

Jetson TX2
Jetson TX2 Module – Courtesy of NVIDIA
Jetson TX2 Camera
NVIDIA Jetson TX2 Development Kit – Courtesy of NVIDIA

ARM Board Comparison – NVIDIA Jetson TX2


For some “weekend benchmarking fun” Michael Larabel over at Phoronix did an ARM board comparison which included the new Jetson TX2. Here’s the link: Benchmarks Of Many ARM Boards From The Raspberry Pi To NVIDIA Jetson TX2

ARM Board Comparison - Phoronix.net

The comparison looks at the CPU performance ranging from cheap ~$10 ARM SBCs to the Raspberry Pi to the Jetson TX1 and Jetson TX2.

Not surprisingly, the Jetson TX2 has a major performance advantage over the (much) less expensive competitors, but the interesting thing to note is the performance gains that have happened in this market over the last 3 years. Another thing to note is that none of the tests use the GPU, so this is strictly testing the CPU side.

There should be some upcoming tests against more capable x86 architecture machines, so it will be interesting to see the results there. Worth the read.

The post ARM Board Comparison – NVIDIA Jetson TX2 appeared first on JetsonHacks.

JetPack 3.0 – NVIDIA Jetson TX2 Development Kit


JetPack 3.0 is a tool that installs the software tools and operating system for a Jetson Development Kit. In the following video, JetPack installs on a Jetson TX2 Development Kit. Looky here:

JetPack Information

JetPack 3.0 may be used to install the development tools on a Jetson Development Kit, either a Jetson TK1, TX1 or TX2. You can read more information on the JetPack web page. There’s a list of all of the System Requirements, as well as the different tools that can be installed.

Note

In addition to the Jetson itself, you will need another desktop or laptop computer with an Intel or AMD x86 processor. These types of machines are commonly called a PC, for Personal Computer. This computer is referred to as the host for the flashing process. JetPack is an x86 binary and will not run on an ARM based machine. In the video, an older Dell Inspiron 3000 Series i3847-3850BK Desktop (3.5 GHz Intel Core i3-4150 Processor, 8GB DDR3, 1TB HDD with Ubuntu installed) is being used as the host.

Installation

It’s that time again! When NVIDIA introduces a new Jetson model, they usually come out with a new revision of JetPack to support it. We are now on revision 3.

For the most part, installation is pretty easy. From a 64-bit Ubuntu 14.04 PC host computer, you simply download the JetPack software from the NVIDIA web link above (you’ll have to sign in with your developer account to download JetPack) and follow the instructions in the setup guide.
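
As a rough sketch, the downloaded installer is a self-contained run file which just needs to be made executable and launched from the host (the exact file name depends on the JetPack version you download):

$ chmod +x JetPack-L4T-3.0-linux-x64.run
$ ./JetPack-L4T-3.0-linux-x64.run

The installer then walks you through selecting the target Jetson and the components to install.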

The set of tools that you can install is flexible. You have the option to install a cross compiler on the host for building your Jetson programs on your PC. Using the cross compiler you can build CUDA and GameWorks samples, then copy the sample binaries to the Jetson.

For the demo, I installed the cross compiler and built the samples. I thought they might be fun to play with at some point. You can see one of the deep learning examples in the video.

Installation from the demo host computer to the Jetson took about an hour and fifteen minutes altogether, including all the downloads over a 30 Mbps Internet link, flashing the Jetson, cross compiling the samples and then loading them onto the Jetson.

The one tricky bit in all of this is setting the Jetson into recovery mode. Follow the on-screen instructions to set the Jetson into recovery mode, open a Terminal, and then type:

$ lsusb

In the output you should see the Jetson listed as NVidia. If you don’t see the Jetson using lsusb, then the device will not be flashed. Some people who have tried using virtual machines with JetPack have had to use some tricks to allow USB to see the device. Note: Some of the virtual machines just won’t work with JetPack.
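
A quick way to filter the lsusb output and confirm that the host actually sees the Jetson in recovery mode:

$ lsusb | grep -i nvidia

If nothing is printed, re-check the recovery mode button sequence and the USB cable before starting the flash.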

Note: On the Jetson TK1, the procedure to enter recovery mode is just slightly different. Refer to the installation manual for details.

Tools Available

Currently the Jetson TX2 uses L4T 27.1, the Jetson TX1 uses L4T 24.2.1, and the Jetson TK1 uses L4T 21.5. JetPack flashes the appropriate L4T to the Jetson. Here are some of the JetPack release highlights for the Jetson TX2:

  • Linux for Tegra r27.1 (Developers Preview)
  • TensorRT 1.0
  • cuDNN 5.1
  • VisionWorks 1.6
  • CUDA 8.0
  • Multimedia API

Developer Tools

  • Tegra Graphics Debugger
  • Tegra System Profiler
  • PerfKit

Each Jetson has its own L4T version:

  • Jetson TX2 – L4T 27.1 (NEW!): 64-bit Ubuntu 16.04, Kernel 4.4, Developer Preview Release
  • Jetson TX1 – L4T 24.2.1 (unchanged): 64-bit Ubuntu 16.04, Kernel 3.10.96, Production Release
  • Jetson TK1 – L4T 21.5 (unchanged): 32-bit Ubuntu 14.04, Kernel 3.10.40, Production Release

Do I have to have an Ubuntu PC?

The short answer is yes. You may be able to use a VM, but it is not officially supported. Here’s what NVIDIA wrote in the Jetson Forum:

The flashing must be performed from within 64-bit Linux on an x86-based machine. Running an Ubuntu 14.04 x86_64 image is highly-recommended for the flashing procedure. If you don’t already have a Linux desktop, and are trying to avoid setting up dual-boot, you can first try running Ubuntu from within a virtual machine. Although convenient, flashing from VM is technically unsupported — warning in advance that while flashing from within VM, you may encounter issues such as the flashing not completing or freezing during transfer. Chances will be improved if you remove any USB hubs or long cables in between your Jetson and the host machine.

The next logical step would be to boot your desktop/laptop machine off Ubuntu LiveCD or USB stick (using unetbootin tool or similar).

Finally, if you have an extra HDD partition, you can install Ubuntu as dual-boot alongside Windows. Flashing natively from within Ubuntu is the supported and recommended method for flashing successfully. It may be wise to just start in on dual-boot from the get-go, otherwise you may end up wasting more time trying to get the other (potentially more convenient, but unsupported) methods to work.

Note also that Ubuntu 16.04 is not officially supported, but people have been reporting success using that OS. If you encounter issues, please ask questions on the Jetson & Embedded Systems development forums.

Conclusion

The first time through, setting up the system and flashing the Jetson can take a little more than an hour, depending on your download speeds and the speed of your PC. In the video, a simple 30 Mbps cable modem connection was used for downloading. Downloading all of the components only happens the first time you do an installation; subsequent installations check for updates and, if none are available, simply flash the Jetson, saving a lot of time.

The post JetPack 3.0 – NVIDIA Jetson TX2 Development Kit appeared first on JetsonHacks.

Caffe Deep Learning Framework – NVIDIA Jetson TX2


Back in September, we installed the Caffe Deep Learning Framework on a Jetson TX1 Development Kit. With the advent of the Jetson TX2, now is the time to install Caffe and compare the performance difference between the two. Looky here:

Background

As you recall, Caffe is a deep learning framework developed with cleanliness, readability, and speed in mind. It was created by Yangqing Jia during his PhD at UC Berkeley, and is in active development by the Berkeley Vision and Learning Center (BVLC) and by community contributors.

Over the last couple of years, a great deal of progress has been made in speeding up the performance of the supporting underlying software stack. In particular the cuDNN library has been tightly integrated with Caffe, giving a nice bump in performance.

Caffe Installation

A script is available in the JetsonHacks Github repository which will install the dependencies for Caffe, download the source files, configure the build system, compile Caffe, and then run a suite of tests. Passing the tests indicates that Caffe is installed correctly.

This installation demonstration is for a NVIDIA Jetson TX2 running L4T 27.1, a 64-bit Ubuntu 16.04 variant. The installation of L4T 27.1 was done using JetPack 3.0, and includes installation of OpenCV4Tegra, CUDA 8.0, and cuDNN 5.1.

Before starting the installation, you may want to set the CPU and GPU clocks to maximum by running the script:

$ sudo ./jetson_clocks.sh

The script is in the home directory.

In order to install Caffe:

$ git clone https://github.com/jetsonhacks/installCaffeJTX2.git
$ cd installCaffeJTX2
$ ./installCaffe.sh

Installation should not require intervention; in the video, installation of dependencies and compilation took about 14 minutes. Running the unit tests takes about 19 minutes. While not strictly necessary, running the unit tests makes sure that the installation is correct.

Test Results

At the end of the video, there are a couple of timed tests which can be compared with the Jetson TX1. The following table adds some more information:

Jetson TK1 vs. Jetson TX1 vs. Jetson TX2 Caffe GPU Example Comparison
10 iterations, times in milliseconds
Machine Average FWD Average BACK Average FWD-BACK
Jetson TK1 (32-bit OS) 234 243 478
Jetson TX1 (64-bit OS) 80 119 200
Jetson TX2 (Mode Max-Q) 78 97 175
Jetson TX2 (Mode Max-P) 65 85 149
Jetson TX2 (Mode Max-N) 56 75 132

The tests are running 50 iterations of the recognition pipeline, and each one is analyzing 10 different crops of the input image, so look at the ‘Average Forward pass’ time and divide by 10 to get the timing per recognition result. For the Max-N version of the Jetson TX2, that means that an image recognition takes about 5.6 ms.
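
If you want to reproduce a similar timing run yourself, Caffe includes a built-in benchmarking tool. Here is a hedged example; the path assumes Caffe was cloned into the home directory by the install script, and the AlexNet model is just a common choice rather than necessarily the exact network timed in the video:

$ cd ~/caffe
$ ./build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt -gpu 0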

The Jetson TX2 introduces the concept of performance modes. The Jetson TX1 has 4 ARM Cortex-A57 CPU cores. In comparison, there are 6 CPU cores in the Jetson TX2’s Parker SoC: four are ARM Cortex-A57, the other two are NVIDIA Denver 2. Depending on performance and power requirements, the cores can be taken online or offline, and the frequencies of their clocks set independently. There are five predefined modes available through the use of the nvpmodel CLI tool.

  • sudo nvpmodel -m 1 (Max-Q)
  • sudo nvpmodel -m 2 (Max-P)
  • sudo nvpmodel -m 0 (Max-N)

Max-Q uses only the 4 ARM A57 cores at a minimal clock frequency. Note that from the table, this gives performance equivalent to the Jetson TX1. Max-Q sets the power profile to be 7.5W, so this represents Jetson TX1 performance while only using half the amount of power of a TX1 at full speed!

Max-P also uses only the 4 ARM A57 cores, but at a faster clock frequency. From the table, we can see that the Average Forward Pass drops from the Max-Q value of 78 to the Max-P value of 65. My understanding is that Max-P limits power usage to 15W.

Finally, we can see that in Max-N mode the Jetson TX2 performs best of all. (Note: This wasn’t shown in the video, it’s a special bonus for our readers here!) In addition to the 4 ARM A57 cores the Denver 2 cores come on line, and the clocks on the CPU and the GPU are put to their maximum values. To put it in perspective, the Jetson TX1 at max clock runs the test in about ~10000 ms, the Jetson TX2 at Max-N runs the same test in ~6600 ms. Quite a bit of giddy-up.

Conclusion

Deep learning is in its infancy and as people explore its potential, the Jetson TX2 seems well positioned to take the lessons learned and deploy them in the embedded computing ecosystem. There are several different deep learning platforms being developed, the improvement in Caffe on the Jetson Dev Kits over the last couple of years is way impressive.

Notes

The installation in this video was done directly after flashing L4T 27.1 on to the Jetson TX2 with CUDA 8.0, cuDNN r5.1 and OpenCV4Tegra.

The latest Caffe commit used in the video is: 317d162acbe420c4b2d1faa77b5c18a3841c444c

The post Caffe Deep Learning Framework – NVIDIA Jetson TX2 appeared first on JetsonHacks.


Serial Console – NVIDIA Jetson TX2


A Serial Console is a useful tool for embedded development, remote access, and those times when the development kit has issues that you need to observe. Here’s a simple approach for adding a serial console. Looky here:

Serial Console Background

The story of serial data transfer over wires goes back almost a hundred years. I’ve heard stories that Ferdinand Magellan first discovered a serial cable on his journeys, but lost it to a tangle in the battle of Mactan in 1521. Apparently it was later rediscovered in America where teletypewriters used serial communication technology over telegraph wires, the first patents around stop/start method of synchronization over wires being granted around 1916.

Serial communication in the computer industry is ubiquitous; in this case we are going to connect an Ubuntu PC to the Jetson TX2 Development Kit through UART 1 on the TX2 J21 GPIO header. UART 1 is the serial console on the Jetson TX2, which allows direct access to the serial and debug console. Quite a handy thing to have when the going gets hardcore.

Note: Because the Jetson TX1 and the Jetson TX2 use the same carrier board, the procedure is the same for both devices. There is a nearly identical version of this post for the TX1, basically because we could not afford any new jokes for this article.

Installation

Because the Jetson communicates over a basic serial cable, almost any computer with serial terminal software can communicate with the Jetson. There are a wide range and variety of software terminal emulators out there, for this particular case the program Minicom was chosen. Other platforms and software programs can be used including Windows and Macintosh boxen.

One of the nice things about the Jetson TX2 is that it uses 2.54mm headers, which make interfacing easy with the Raspberry Pi and Arduino ecosystems. In this video, we use an Adafruit USB to TTL Serial Cable – Debug / Console Cable for Raspberry Pi. It’s also available from Amazon.

There are a wide variety of offerings for these types of cable. The products fall in two camps. The first camp uses FTDI chips for TTL to USB conversion, the second camp uses PL2303HX chips. The Adafruit cable is in the latter camp. One thing to keep in mind is that a driver for the appropriate chip may be required for the cable to work correctly with your particular operating system. The driver for the PL2303HX was already installed on the machine being used in the demonstration.

Wiring

Here are the signals for the J21 header: Jetson TX2 J21 Header Pinout Note: There is a small white triangle pointing to Pin 1 of the J21 Header on the Jetson TX2 carrier board.

The wiring is straightforward. Make sure that the Jetson is off and wire:

Jetson TX2 J21 Pin 8 (UART 1 TXD) → Cable RXD (White Wire)
Jetson TX2 J21 Pin 10 (UART 1 RXD) → Cable TXD (Green Wire)
Jetson TX2 J21 Pin 9 (GND) → Cable GND (Black Wire)

Then plug the USB connector into the host machine.

Here’s what it should look like:

Attached serial console cable to a Jetson TX2 Development Kit

Software

Once the Jetson is wired and connected, check to make sure that you can see it.

$ lsusb

This should list the device; the name depends on the chip being used by the USB-TTL cable. In the video, the device was listed as a PL2303 Serial Port.
You will then need to find the USB port to which the device is mapped.

$ ls /dev/ttyUSB*

This will list out the USB ports. On the machine in the video, there is only one device. Other machines may have more; you’ll have to figure out which is which. In this case, remember that /dev/ttyUSB0 is the device to be entered into the terminal emulator later.
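
If the host has more than one USB-serial adapter attached, checking the kernel log right after plugging in the cable usually makes it obvious which ttyUSB number was just assigned:

$ dmesg | grep ttyUSB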

You are then ready to install Minicom:

$ sudo apt-get install minicom

To start Minicom:

$ sudo minicom

The ‘sudo’ is used because of the serial port permissions. You’re then ready to configure the Settings to communicate with the Jetson TX2.

Settings

An important part of serial communication is the settings that are used to communicate between the devices. Rather than go through a lengthy discussion of each setting and its meaning, let’s distill it into the settings themselves.

First set the device, in the video the device was ‘/dev/ttyUSB0‘.

Connection speed is 115200, with 8 bits, no parity, and 1 stop bit (115200 8N1). For these three wire cables, the correct setting is software control, no hardware control. If you choose a 5 wire setup with RTS and CTS lines, then select hardware control, and no software control.

In Minicom, Ctrl A Z brings up the main menu. Select the ‘cOnfigure Minicom’ menu item, enter the settings, and make sure that you save the configuration as described in the video. After that task is complete, exit Minicom and restart to have the settings take effect.

$ sudo minicom

You may then start the Jetson, at which point you will see the kernel log starting to scroll on the Minicom window on the host.
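
As a convenience, Minicom can also be pointed at the device and speed directly from the command line, which is handy for a quick session without saving a configuration (the device name here assumes the /dev/ttyUSB0 found above):

$ sudo minicom -D /dev/ttyUSB0 -b 115200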

There are a wide variety of ways to interact with the Jetson through the serial console, one of the more useful tips is to interrupt the startup process with a keystroke to be able to interact with Uboot.

Warning

I did notice that on this installation if the serial cable is hooked up to the Jetson but is not plugged into the PC, then the display connected to the Jetson remains dark. I could not tell if the machine booted or not. Connecting the cable to the PC, or removing the cable from the Jetson solved that issue.

More Information

The carrier board of the Jetson TX1 and Jetson TX2 are the same. You can take advantage of a good tutorial on the Jetson TX1 wiki labeled Serial Console Wiring. This is a useful tutorial if you plan on building your own cable attached to a header, which is useful for dedicated development.

Conclusion

For the most part, there are two sets of developers who use the serial console. The first set is the casual user, people who only need access through the serial port occasionally. Hopefully this article helps: connect a couple of wires and be done with it. The more hardcore developers will probably build their own cable with a connector for a reliable connection.

The post Serial Console – NVIDIA Jetson TX2 appeared first on JetsonHacks.

NVPModel – NVIDIA Jetson TX2 Development Kit


The introduction of the Jetson TX2 Development Kit brings with it the new nvpmodel command line tool.

Background

Applications for the Jetson Tegra systems cover a wide range of performance and power requirements. As the Jetson family has become more sophisticated over the years, power and performance management is becoming an increasingly important issue.

Fortunately, NVIDIA is providing a new command line tool which takes a lot of the guesswork out of configuring the CPU and GPU settings to get the best performance and energy usage under different scenarios.

There are natural performance/energy points which provide the best performance for the minimal amount of energy. NVIDIA has done the heavy lifting and worked out which core and clock frequency combinations provide the best performance for a given energy budget.

Remember that the Jetson TX2 consists of a GPU along with a CPU cluster. The CPU cluster consists of a dual-core Denver 2 processor and a quad-core ARM Cortex-A57, connected by a high-performance coherent interconnect fabric. With 6 CPU cores and a GPU, you can understand how the average developer benefits by not having to run all the performance/energy tests themselves.

On the Jetson Tegra, CPUs may be online or offline (except CPU0, which is always on for obvious reasons). CPUs have minimum frequencies and maximum frequencies.

Usage

nvpmodel introduces five different “modes” on the Jetson TX2. The following table breaks down the modes, which CPU cores are used, and the maximum frequencies of the CPU and GPU in each mode.

nvpmodel mode definitions:

  • Mode 0 – Max-N: 2 Denver 2 cores @ 2.0 GHz, 4 ARM A57 cores @ 2.0 GHz, GPU @ 1.30 GHz
  • Mode 1 – Max-Q: 0 Denver 2 cores, 4 ARM A57 cores @ 1.2 GHz, GPU @ 0.85 GHz
  • Mode 2 – Max-P Core-All: 2 Denver 2 cores @ 1.4 GHz, 4 ARM A57 cores @ 1.4 GHz, GPU @ 1.12 GHz
  • Mode 3 – Max-P ARM: 0 Denver 2 cores, 4 ARM A57 cores @ 2.0 GHz, GPU @ 1.12 GHz
  • Mode 4 – Max-P Denver: 2 Denver 2 cores @ 2.0 GHz, 0 ARM A57 cores, GPU @ 1.12 GHz

Max-Q mode provides equivalent performance to a Jetson TX1 at full clock modes, while Max-N provides almost twice the performance. This is due to a variety of factors, not just clock speeds. For example, the Jetson TX2 has a 128-bit wide memory bus versus the 64-bit wide TX1.

To call nvpmodel:

$ sudo nvpmodel -m [mode]

where mode is the number of the mode that you want to use. For example:

$ sudo nvpmodel -m 1

places the Jetson into Max-Q mode.

You can query which mode is currently being used:

$ sudo nvpmodel -q --verbose

The file /etc/nvpmodel.conf holds the different mode definitions. Developers can add their own entries to define modes suitable to their application.
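
For example, a common pattern before benchmarking is to select Max-N and then run the jetson_clocks.sh script (found in the home directory after flashing) to pin the clocks at their maximum values; note that jetson_clocks.sh is a separate tool from nvpmodel:

$ sudo nvpmodel -m 0
$ sudo ~/jetson_clocks.sh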

Conclusion

Using nvpmodel gives developers a nice tool to easily set up different energy usage and performance scenarios. Recommended.

The post NVPModel – NVIDIA Jetson TX2 Development Kit appeared first on JetsonHacks.

Build Kernel and Modules – NVIDIA Jetson TX2


In this article, we cover building the kernel onboard the NVIDIA Jetson TX2. Looky here:

Background and Motivation

Note: This article is for intermediate users, and the methods within are somewhat experimental. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

With the advent of the new Jetson TX2 running L4T 27.1 with the 4.4 kernel, NVIDIA recommends using a host PC when building a system from source. See the Linux for Tegra R27.1 web page where you can get the required GCC 4.8.5 Tool Chain for 64-bit BSP.

The boot load sequence is more sophisticated on the Jetson TX2 in comparison to the TX1. In addition to the Uboot boot loader, there are additional loaders for hardware support. The previously mentioned tool chain is useful in building those features.

If you are building systems which require generating the entirety of the Jetson TX2 system, those are good options. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or three as modules to support some extra hardware with the TX2. What to do, what to do …

Hack of course! With a little bit of coffee and swearing I was able to compile the kernel with modules on a Jetson TX2 itself.

Installation

The script files to build the kernel on the Jetson TX2 are available on the JetsonHacks Github account in the buildJetsonTX2Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX2Kernel.git
$ cd buildJetsonTX2Kernel

There are three main scripts. The first script, getKernelSources.sh, gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file. In the video, the local version of the kernel is set. The stock kernel uses -tegra as its local version identifier. Make sure to save the configuration file when done editing. Note that if you want to just compile a module or two for use with a stock kernel, you should set the local version identifier to match.
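
A quick way to check the local version string of the running kernel, so that any modules you build will match (the stock L4T 27.1 kernel reports a 4.4 series version ending in -tegra):

$ uname -r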

The second script, makeKernel.sh, fixes up the makefiles so that the source can be compiled on the Jetson, and then builds the kernel and modules specified.

$ ./makeKernel.sh

The modules are then installed in /lib/modules/

The third script, copyImage.sh, copies over the newly built Image and zImage files into the /boot directory.

$ ./copyImage.sh

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src

After building the kernel, you may want to save the sources off-board to save some space (they take up about 3 GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.
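
A minimal sketch of saving things off-board, assuming an external drive mounted at /media/ssd (adjust the path for your setup):

$ sudo tar -czf /media/ssd/kernelSources.tar.gz -C /usr/src kernel
$ cp /boot/Image /boot/zImage /media/ssd/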

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. Note that it is new, and not thoroughly tested at this point. Use it at your own risk.

Note

The video above was made directly after flashing the Jetson TX2 with L4T 27.1 using JetPack 3.0.

The post Build Kernel and Modules – NVIDIA Jetson TX2 appeared first on JetsonHacks.

Intel RealSense Camera Installation – NVIDIA Jetson TX2


Intel RealSense cameras can use an open source library called librealsense as a driver for the Jetson TX2 development kit. Looky here:

Background

Note: This article is intended for intermediate users who are comfortable with Linux kernel development, and can read and modify simple shell scripts if needed.

In earlier articles we talked about the Intel RealSense R200 Camera, which is a relatively inexpensive RGBD device in a compact package. The camera uses USB 3.0 to communicate with a computer.

Intel has made available an open source library, librealsense on Github. librealsense is a cross platform library which allows developers to interface with the RealSense family of cameras, including the R200. Support is provided for Windows, Macintosh, and Linux.

There are two major parts to getting the R200 camera to work with the Jetson. First, operating system level files must be modified to recognize the camera video formats. When doing development on Linux based machines you will frequently hear the terms “kernel” and “modules”. The kernel is the code that is the base of the operating system, the interface between hardware and the application code.

A kernel module is code that can be accessed from the kernel on demand, without having to modify the kernel. These modules provide ancillary support for different types of devices and subsystems.

A module is compiled code which is stored as a file separately from the kernel, typically with a .ko extension. The advantage of having a module is that it can be easily changed without having to modify the entire kernel. We will be building a module called uvcvideo to help interface with the RealSense camera. Normally uvcvideo is built-in to the kernel, we will designate it as a module as part of our modification. We will modify uvcvideo to recognize the RealSense camera data formats.

The second part of getting the R200 to work with the Jetson TX2 is to build and install librealsense.

Kernel and Module Building

Note: In the video above, the installation was performed on a Jetson TX2 newly flashed with L4T 27.1 using JetPack 3.0. The kernel sources were downloaded and built before shooting the video.

In the previous article, we built a new kernel for the Jetson TX2. Please refer to that article for the details. Once completed, you’re ready to install librealsense.

Install librealsense

A convenience script has been created to help with this task in the installLibrealsenseTX2 repository on the JetsonHacks Github account.

$ cd $HOME
$ git clone https://github.com/jetsonhacks/installLibrealsenseTX2.git
$ cd installLibrealsenseTX2
$ ./installLibrealsense.sh

This will build the librealsense library and install it on the system. This will also setup udev rules for the RealSense device so that the permissions will be set correctly and the camera can be accessed from user space.

USB Video Class Module

Note: This step assumes that the kernel sources are located in /usr/src/kernel and that the kernel is to be installed on the board. If this is not your intent, modify the script accordingly. applyUVCPatch.sh has the command to patch the UVC driver with the RealSense camera formats.

The third major step is to build the USB Video Class (UVC) driver as a kernel module. This can be done using the script:

$ ./buildPatchedKernel.sh

The buildPatchedKernel script will modify the kernel .config file to indicate that the UVC driver should be built as a module. Next, the script patches the UVC driver to recognize the RealSense camera formats. Finally the script builds the kernel, builds the kernel modules, installs the modules and then copies the kernel image to the /boot directory.

Note: The kernel and modules should have already been compiled once before performing this step.

One more minor point of bookkeeping. In order to save power, the Jetson TX2 will auto-suspend USB ports when not in use for devices like web cams. This confuses the RealSense camera. In order to turn auto-suspend off, run the following script:

$ ./setupTX1.sh

Once finished, reboot the machine for the new kernel and modules to be loaded.
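
After the reboot, a quick sanity check that the patched UVC driver is now present as a loadable module:

$ lsmod | grep uvcvideo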

Examples are located in the directory /librealsense/build/examples

Conclusion

So there you have it. This has been a little bit more involved than some of our other projects here, but if you are interested in this kind of device, well worth it.

Notes

There are several notes on this project:

  • In the video above, the installation was done on a Jetson TX2 running L4T 27.1 immediately after being flashed by JetPack 3.0
  • QtCreator and Qt 5 are installed as dependencies in the librealsense part of the install. There are QtCreator project files located in the librealsense.qt directory. The project files build the library and example files. If you do not use QtCreator, consider modifying the installer script to take QtCreator out.
  • The librealsense examples are located in librealsense/build/examples after everything has been built.
  • These scripts install librealsense version v1.12.1 (last commit 7332ecadc057552c178addd577d24a2756f8789a)
  • The RealSense R200 is the only camera tested at this time.
  • Issues have been reported on the Jetson TX1 if the camera is plugged directly into the USB 3.0 port. The workaround is to use a powered hub. It’s unknown at this time if the TX2 suffers from the same issue.

The post Intel RealSense Camera Installation – NVIDIA Jetson TX2 appeared first on JetsonHacks.

Robot Operating System (ROS) on NVIDIA Jetson TX2


Robot Operating System (ROS) was originally developed at Stanford University as a platform to integrate methods drawn from all areas of artificial intelligence, including machine learning, vision, navigation, planning, reasoning, and speech/natural language processing. You can install it on the NVIDIA Jetson TX2! Looky here:

Background

From 2008 until 2013, development on ROS was performed primarily at the robotics research company Willow Garage, which open sourced the code. During that time, researchers at over 20 different institutions collaborated with Willow Garage and contributed to the code base. In 2013, ROS stewardship transitioned to the Open Source Robotics Foundation.

From the ROS website:

The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.

Why? Because creating truly robust, general-purpose robot software is hard. From the robot’s perspective, problems that seem trivial to humans often vary wildly between instances of tasks and environments. Dealing with these variations is so hard that no single individual, laboratory, or institution can hope to do it on their own.

Core Components

At the lowest level, ROS offers a message passing interface that provides inter-process communication. Like most message passing systems, ROS has a publish/subscribe mechanism along with request/response procedure calls. An important thing to remember about ROS, and one of the reasons that it is so powerful, is that you can run the system on a heterogeneous group of computers. This allows you to distribute tasks across different systems easily.

For example, you may want to have the Jetson running as the main node, and controlling other processors as control subsystems. A concrete example is to have the Jetson doing a high level task like path planning, and instructing micro controllers to perform lower level tasks like controlling motors to drive the robot to a goal.

At a higher level, ROS provides facilities and tools for a Robot Description Language, diagnostics, pose estimation, localization, navigation and visualization.

You can read more about the Core Components here.

Installation

The installROSTX2 repository on the JetsonHacks Github account contains scripts which install ROS on the TX2.

The main script, installROS.sh, is a straightforward implementation of the install instructions taken from the ROS Wiki.

You can grab the repository and run the script:

$ git clone https://github.com/jetsonhacks/installROSTX2.git
$ cd installROSTX2
$ ./installROS.sh

The script installs ros-base, rosdep and rosinstall. You can modify the script to install ros-desktop or ros-desktop-full if desired. ROS has a huge number of packages (over 1700) to choose from; this script provides an outline for installation.
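For example, to switch from the bare-bones install to the full desktop install, the package name in the script's apt-get line can be changed. This assumes the ROS Kinetic release, which matches the Ubuntu 16.04 base of L4T 27.1:

$ sudo apt-get install ros-kinetic-desktop-full -y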

There is a convenience script to set up a Catkin workspace, which is the standard build and development environment for ROS packages. The script is called setupCatkinWorkspace.sh. An optional parameter after the script name sets the name of the workspace; the default name is catkin_workspace. The workspace will be created in the home directory. For example:

$ ./setupCatkinWorkspace.sh jetsonbot

will create a Catkin Workspace directory named jetsonbot in the home directory.
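After the workspace has been created, it is built and sourced in the usual Catkin manner. A minimal example, using the jetsonbot workspace from above:

$ cd ~/jetsonbot
$ catkin_make
$ source devel/setup.bash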

Notes

  • In the video, the Jetson TX2 was flashed with L4T 27.1 using JetPack 3.0. L4T 27.1 is derived from Ubuntu 16.04.
  • A custom kernel was compiled for the TX2. See this article. Note that this is an optional step, installing ROS will work on a stock kernel.

The post Robot Operating System (ROS) on NVIDIA Jetson TX2 appeared first on JetsonHacks.

Intel RealSense Package for ROS on NVIDIA Jetson TX2

Intel provides an open source ROS package for their RealSense Cameras. Let’s install the package on the Jetson TX2. Looky here:

Background

Intel is investing heavily in computer vision hardware, one of the areas being 3D vision. There have been several generations of RealSense devices; in the video we demonstrate a RealSense R200. An R400 has recently been announced and should be available soon.

The small size and light weight of the R200 make it a camera worth considering for 3D vision and robotic applications.

Installation

There are two prerequisites for installing the realsense_camera package on the Jetson TX2. The first is to install the camera driver library, called librealsense, on the Jetson TX2. We covered this installation in an earlier article.

The second prerequisite of course is to install Robot Operating System (ROS). A short article on how to install ROS on the Jetson TX2 is also available.

Install RealSense Package for ROS

There are convenience scripts to install the RealSense ROS package on the Github JetsonHacks account. After the prerequisites mentioned above have been installed:

$ git clone https://github.com/jetsonhacks/installRealSenseROSTX2
$ cd installRealSenseROSTX2
$ ./installRealSenseROSTX2 <catkin workspace name>

Where <catkin workspace name> is the name of the catkin workspace in which to place the RealSense ROS package. In the video, the workspace is named jetsonbot.

You can then launch a R200 node:

$ cd <catkin workspace name>
$ source devel/setup.bash
$ roscd realsense_camera
$ roslaunch realsense_camera r200_nodelet_rgbd.launch
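Once the nodelet is running, you can sanity check the output from a second Terminal. The exact topic names depend on the launch file used, but they typically live under the /camera namespace:

$ rostopic list | grep camera
$ rostopic hz /camera/depth/points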

The RealSense ROS package contains configuration files for launching Rviz. If you have Rviz installed on the Jetson TX2:

$ cd <catkin workspace name>
$ source devel/setup.bash
$ roscd realsense_camera
$ rviz -d rviz/realsense_rgbd_pointcloud.rviz

Notes

There is a file called Notes.txt in the installRealSenseROSTX2 directory which has some short notes for installing Rviz and rqt-reconfigure to help visualize the output from the RealSense camera and adjust camera parameters from the Jetson TX2.

The installation above shows a Jetson TX2 running L4T 27.1 which was installed using JetPack 3.0. The scripts install Intel RealSense ROS package version 1.8.0.

The JetsonHacks script installs ros-kinetic-librealsense. In effect, this means that there are two installations of librealsense on the Jetson. The reason that librealsense needs to be built on the Jetson instead of installed from the ROS repository is that a kernel module named uvcvideo must be modified to recognize the RealSense camera formats. The JetsonHacks librealsense install covers how to build the module. You can remove the original installation if desired:

$ cd librealsense
$ sudo make uninstall

Also, the ros-kinetic-librealsense package installs linux-headers in the /usr/src directory. These headers DO NOT match the Jetson TX2, so you should consider deleting them. Same with the uvcvideo realsense directory.

$ cd /usr/src
$ sudo rm -r linux-headers-4.4.0-70
$ sudo rm -r linux-headers-4.4.0-70-generic
$ sudo rm -r uvcvideo-1.1.1-3-realsense

The ROS repository also holds a ros-kinetic-realsense-camera package. The package is version 1.7.2 (as of March, 2017). There is an issue with that particular version: the auto-exposure (lr_auto_exposure) does not work correctly. This makes the camera considerably less effective in varied lighting conditions. Therefore the script builds the package from source (version 1.8.0), where the issue has been addressed.

Unlike most of the articles and videos on JetsonHacks, installation of the Intel RealSense ROS package requires some prerequisites to be installed first. While previous articles cover the steps involved, be aware that this is a little more complicated than most of the software installation articles on this site.

As always, the scripts made available from JetsonHacks only provide a guide for how to complete the described task, you may have to modify them to suit your needs.

The post Intel RealSense Package for ROS on NVIDIA Jetson TX2 appeared first on JetsonHacks.

Install Samsung SSD on NVIDIA Jetson TX2

Installing a Solid State Disk (SSD) on a Jetson TX2 is good, clean fun. Looky here:

Background

Serial-ATA drives are used in many desktop and laptop computers. While this article describes installing a Solid State Disk (SSD), this information can be used to install other types of SATA drives. SATA is probably the fastest external storage interface on the Jetson TX2; SATA drives can be more than twice as fast as USB drives. Also, SATA drives are relatively inexpensive for the amount of storage they hold.

You can simply use the SATA drive as supplemental storage, or choose to use the drive as the root directory of the operating system. This basically means that the system runs from the SATA drive instead of the internal flash (eMMC) memory.

This method is a mostly GUI solution; there are more sophisticated command line equivalents that others may use. Just be forewarned that if you ask for help, others may speak in 'CLI' language.

Materials

You’ll need a SSD drive of course. Here’s links to items shown in the video:

Installation

Note: The installation on the video was done on a Jetson TX2 running L4T 27.1, after flashing with JetPack 3.0.

Because the installation demonstration is using mostly GUI tools, please refer to the video for the walk through. Here are the basic steps:

Make sure that the Jetson is powered down, and disconnect the power. Attach the SATA SSD to the Jetson using a SATA extension cable. Some SSD drives will fit on the Jetson TX2 SATA connector directly. However, this can be rather precarious, as the SSD can act as a big lever which, when bumped, may break the connector off the TX2 carrier board. Adding a cable minimizes this risk.

With the SATA drive installed, connect the power and start up the machine.

Format the SATA drive by adding at least one partition with an ext4 format. There are a couple of ways of doing this; an easy way is to use the Disks application, which provides a GUI for formatting disks. In the L4T 27.1 release there seems to be an issue with creating a partition table on the disk. Remember how this was good clean fun? Not so much, as a few blue words were sprinkled in before using a Terminal to cast the magic incantation:

$ sudo parted /dev/sda mklabel gpt

where /dev/sda is the location of the SSD. This creates a GPT partition table on the SSD. Afterwards, create a partition and format it as ext4; one way to do that from the Terminal is sketched below.
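If you would rather finish the job from the Terminal as well, the partition and file system can be created with something like the following. This assumes the SSD is /dev/sda and the new partition shows up as /dev/sda1; double check with lsblk before formatting:

$ sudo parted /dev/sda mkpart primary ext4 0% 100%
$ sudo mkfs.ext4 /dev/sda1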

Next, mount the SATA drive. Double clicking the SATA drive icon in the sidebar will mount the SATA drive and open a file browser. Now enjoy the sea of GB goodness.
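If you want the drive mounted automatically at boot as supplemental storage, one approach is an /etc/fstab entry. This is only a sketch; the mount point and UUID below are placeholders that need to be filled in for your own drive:

$ sudo mkdir /media/ssd
$ sudo blkid /dev/sda1
# Add a line like this to /etc/fstab, using the UUID reported by blkid:
UUID=<your-uuid>  /media/ssd  ext4  defaults  0  2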

Conclusion

In the video we talked a little about different technologies used in SSDs. Here’s an article, Flash Memory and You, where we cover that ground and then some.

The post Install Samsung SSD on NVIDIA Jetson TX2 appeared first on JetsonHacks.


Flash Memory and You

In a previous article about SSDs we had a chat in the video about flash memory types.

Ivan R commented on the JetsonHacks YouTube channel:

Jim, I’ve noticed a lot of development boards now a days are coming with on board eMMC memory for storage as compared to older style, say Raspberry Pi’s with an SD card. I only have older Raspberry Pi’s at the moment unfortunately but I’d be curious of your opinion on a few things. In your experience how does this type of storage on these type of devices fair? How is it compared to SD card and SSD card writes/reads? Is there a life expectancy on that eMMC type?

Now seems like a good time to untangle some of the questions that have arisen. Before we start, this is not an exhaustive survey of flash memory, just enough knowledge to be dangerous.

Different types of Flash

You can think of flash memory as being made up of cells. One of the things most people hear about is the Program/Erase (P/E) cycle, and that each cell has an expected number of times that it can be changed before it fails.

The first generation flash devices are what is called Single Level Cell (SLC). SLC devices hold 1-bit in each cell.

The second generation are Multi Level Cell (MLC). MLC devices hold 2-bits per cell. MLC generally provides higher storage density and lower cost, with the tradeoff being slower write performance, a narrower operating temperature range, and 10-20 times fewer endurance (P/E) cycles.

The third generation is naturally Triple Level Cell (TLC) flash. You can effectively store 3 bits per cell. The tradeoff in comparison to MLC is similar to MLC vs. SLC, with about 50% fewer endurance cycles.

Samsung and Intel introduced the idea of stacking the components for higher density; this is independent of the cell type. For example, Samsung calls their technology V-NAND, which provides higher storage density while still using one of the underlying flash cell types described above.

As you might guess, each of these technologies has a market to fill. Enterprise servers and embedded devices that must operate reliably in a broad range of operating temperatures tend to be SLC. Because MLC and TLC devices cost less and allow for higher storage density, they make it possible to create affordable mobile devices with large amounts of data storage.

Different Ways to Hook It Up

Another consideration is the way that the flash memory is actually connected to the CPU/memory controller/SoC. For example, on the Jetson the eMMC flash memory is connected over a 200 MB/s, 8-bit bus. As I recall, the eMMC flash and the SD card reader come in through the same bus.

On the other hand, the SATA port is a 6 Gb/s bus, so it is effectively much faster than the bus that connects the eMMC. When you run the system off of a SATA drive, it subjectively feels much faster. The USB bus is somewhere in the middle.
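If you want to see the difference for yourself, a quick (admittedly crude) read test with hdparm makes the point. This assumes hdparm is installed, the eMMC appears as /dev/mmcblk0, and a SATA drive is attached as /dev/sda:

$ sudo apt-get install hdparm
$ sudo hdparm -t /dev/mmcblk0
$ sudo hdparm -t /dev/sda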

Out in the Wilds

That’s all great as an explanation and everything, but what does it mean?

First, remember that these are development machines by nature. Storage is one of the expendables. By the way, have you ever gone back and looked at one of your disks from 5 years ago? How much storage does it have?

To answer the original Ivan R question about internal eMMC vs. SD Card, you can think about it a couple of ways. First, the Raspberry Pi really isn’t a development kit, it’s a consumer product. One of the ways they cut cost is by not putting any flash on board. Here’s the thing, the actual SD cards themselves? Depends on which one you use. Some are industrial strength, SLC based. A nice 32 GB card costs about $350 give or take. Seems unlikely someone would use it on a $35 board, not when you can get the same thing for less than $15 in MLC or TLC.

Because the SD card is mass market, you might not know the quality of the chips being used. Realistically there are only a few flash chip manufacturers out there, but you never know the relationship between the chip manufacturer and the SD card builder. Sure, when you buy a Samsung SD card you know what you’re getting. When you buy brand X? Not so much.

Another issue is the mechanical connector. Connections are always a point of failure. So what the development board manufacturers do is mitigate all of the risk by placing eMMC flash onboard. In the case of the Jetson, this gives NVIDIA the flexibility of being able to spec which actual memory chips and cell type are being used. For industrial applications, the module can contain high rated SLC, for consumer application lower rated SLC or high rated MLC. The developer gets the comfort of knowing that there is at least one configuration that they can depend on.

On a development board, you’re much more likely to fry other components before you ever reach the life of the eMMC. That’s the nature of the hardware development boards, things like “I forgot I wasn’t supposed to short all the pins to ground all at once”. The magic smoke tends to get out of the support chips before they come out of the module.

As far as the SSD side of life goes, things are good there. The more storage that you have to spread the write cycles over, the longer the drive lasts. In practice this means that you always want to keep more than 10% of the storage free so that all of the write activity doesn't just sit in one place. You can also imagine the advantage of a larger disk; larger disks have longer life expectancies because they have more bits to write on. Get the most you can afford.

For me personally, right now for less than $100 I can pick up 250GB of flash that’ll last 5 years. Developers keep their code in repositories backed up somewhere (such as Github or on a network), so that’s not an issue. You can imagine the case where you collect video 24/7 and store it to disk, upload it to a server, then overwrite it. Even in that case, the drive will last for 18 months. For an extra $50, you can double that life expectancy. And you can double the amount of storage on larger drives for a reasonable amount. If you’re looking for a 2TB drive, you’re still well under a grand. But like we talked about, for a professional developer these are expendables and part of doing business.
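As a back-of-envelope sanity check (the endurance rating here is an assumption; check the spec sheet of your particular drive): a 250 GB class drive rated for roughly 100 TB written, being fed about 150 GB of video per day, lasts on the order of:

$ echo "100 * 1024 / 150" | bc
682

That is about 682 days of continuous writing, which is in the same ballpark as the figures above.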

There are applications where you’ll need expensive SLC SSDs, but more than likely you know more about the subject than I do.

The post Flash Memory and You appeared first on JetsonHacks.

TensorFlow on NVIDIA Jetson TX2 Development Kit

In this article, we will work through installing TensorFlow v1.0.1 on the Jetson TX2. Looky here:

Background

TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT, an inference runtime that can run models trained in frameworks such as TensorFlow. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems train models which are then transferred over and “run” on the Jetson.

However, some people would like to use the entire TensorFlow system on a Jetson. This has been difficult for a few reasons. The first reason is that TensorFlow binaries aren’t generally available for ARM based processors like the Tegra TX2. The second reason is that actually compiling TensorFlow takes a larger amount of system resources than is normally available on the Jetson TX2. The third reason is that TensorFlow itself is rapidly changing (it’s only a year old), and the experience has been a little like building on quicksand.

In this article, we’ll go over the steps to build TensorFlow v1.0.1 on the Jetson TX2. This will take about three and a half hours to build.

Note: Please read through this article before starting installation. This is not a simple installation, you may want to tailor it to your needs.

Preparation

This article assumes that JetPack 3.0 is used to flash the Jetson TX2. At a minimum, install:

  • L4T 27.1, an Ubuntu 16.04 64-bit variant (aarch64)
  • CUDA 8.0
  • cuDNN 5.1.10

TensorFlow will use CUDA and cuDNN in this build.

In order to get TensorFlow to compile on the Jetson TX2, a swap file is needed for virtual memory. Also, a good amount of disk space (> 6 GB) is needed to actually build the program. If you’re unfamiliar with how to set the Jetson TX2 up like that, the procedure is similar to that described in the article: Jetson TX1 Swap File and Development Preparation.
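If you just want the short version, creating and enabling a swap file looks roughly like this. The 8 GB size and the /mnt location are arbitrary choices here; the article linked above has its own script and recommendations:

$ sudo fallocate -l 8G /mnt/swapfile
$ sudo chmod 600 /mnt/swapfile
$ sudo mkswap /mnt/swapfile
$ sudo swapon /mnt/swapfile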

There is a repository on the JetsonHacks account on Github named installTensorFlowTX2. Clone the repository and switch over to that directory.

$ git clone https://github.com/jetsonhacks/installTensorFlowTX2
$ cd installTensorFlowTX2

Prerequisites

There is a convenience script which will install the required prerequisites such as Java and Bazel. The script also patches the source files appropriately for ARM 64.

$ ./installPrerequisites.sh

From the video, installation of the prerequisites takes a little over 30 minutes, but this will depend on your internet connection speed.

Building TensorFlow

First, clone the TensorFlow repository and patch for Arm 64 operation:

$ ./cloneTensorFlow.sh

Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure.sh file. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script. The library locations are determined by the JetPack installation.

$ ./setTensorFlowEV.sh

We’re now ready to build TensorFlow:

$ ./buildTensorFlow.sh

This will take a couple of hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:

$ ./packageTensorFlow.sh

The wheel file will be in the $HOME directory, tensorflow-1.0.1-cp27-cp27mu-linux_aarch64.whl

Installation

Pip can be used to install the wheel file:

$ pip install $HOME/tensorflow-1.0.1-cp27-cp27mu-linux_aarch64.whl

Validation

You can go through the validation procedure on the TensorFlow installation page. A quick smoke test from the Terminal is sketched below.
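A minimal check, assuming the wheel built above was installed for Python 2.7, is simply to import the module and print the version:

$ python -c "import tensorflow as tf; print(tf.__version__)"
1.0.1

If the import succeeds and the version matches the wheel you installed, the build is in good shape.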

The post TensorFlow on NVIDIA Jetson TX2 Development Kit appeared first on JetsonHacks.

Jetson RACECAR Restart!

The green flag is back out, and the Jetson RACECAR project is starting again! The Jetson RACECAR is a ROS-based autonomous vehicle built on a 1/10 scale R/C car platform. The Jetson RACECAR is based on the MIT RACECAR project, an open source hardware and software project.

Here’s a quick look at some of the changes that we’re making to the previous work. Looky here:

Project Intent

Autonomous vehicles are a very interesting area to study. Current research vehicles tend to be rather expensive. The idea here is to build a scale model with many of the same features and sensor types so that any particular problem area can be broken down into the component parts. The idea here is play. We want to be able to play with the parts that interest us.

Component Selection

With that idea in mind, we should be able to look into any given component as deeply as desired, while maintaining full control of the vehicle. As an example, the stock Electronic Speed Controller (ESC) on the TRAXXAS performs admirably, but is difficult to control at slow speeds. After all, its intended use is off road racing; there's not much need for going at a snail's pace!

However, under robotic control we should be able to control the vehicle at any speed we choose. For that reason, the VESC controller is the choice, because it can be controlled at slow speeds and is open source. In theory, that means if you're interested in control theory, you can look 'at the bottom of the hardware stack'.

The selection of sensors is also a major choice point. For example, the MIT RACECAR uses a Hokuyo UST-10LX laser range finder. MIT races in the tunnels underneath the campus, so the LIDAR is a great mapping aid. On the other hand, it’s not very well suited towards outdoor racing. Cameras seem to be the choice for the great outdoors. The Hokuyo is a little on the pricey side, a major strike against it for some folks.

Things change

When working with consumer products, there are frequent changes to products. Sometimes the products are discontinued, are changed significantly, or are difficult to acquire. If they are difficult to get a hold of because of popularity, that’s a good thing. If it’s because of lack of demand, that can be bad.

In this particular project, there have been several parts that have changed or been replaced. The TRAXXAS car itself is difficult to acquire. The SparkFun IMU has been superseded. The original battery to drive the electronics has been superseded. The VESC (an open-source, brushless motor controller) which originally had quite a long lead time, is now more easily available. The VESC also is now offered in a version with considerably friendlier device package.

It’s Go Time

In the video, we go over some of the hardware selections. Some platforms made from laser cut 1/4″ ABS mount on the TRAXXAS car chassis to support the Jetson Dev Kit, sensors and electronics.

We’ve chosen to follow the MIT RACECAR electronics selection. This includes the VESC (for the reason listed above), an Amazon USB 3.0 hub, and a Jetson TX1 Development Kit.

This means that for actual control of the vehicle, we’ll be able to use the MIT RACECAR ROS software without change. We’ll worry about sensor selection further on down the line.

As noted in the video, this is the second prototype. There is a third prototype which uses updated parts, and some of the lessons learned from this particular build.

As a note, this doesn’t mean there aren’t other equally interesting component choices. You can read the Daniel Tobias’ Cherry Autonomous Racecar article to see an alternate and absolutely amazing implementation.

Looking forward to getting this going again!

The post Jetson RACECAR Restart! appeared first on JetsonHacks.

GPIO and SPI – NVIDIA Jetson TX1

In a post on the NVIDIA Jetson TX1 forum, Wilkins White (Atrer) from Nova Dynamics (ww@novadynamics.com) wrote up a quite wonderful explanation of how to enable SPI on the Jetson TX1. The SPI interface is used in the discussion to interface with a MCP2515 CAN Bus Module (CAN Bus is a vehicle bus standard. Bus as in computer bus in a vehicle, not the big yellow thing).

In the discussion, Wilkins explains not only how to enable SPI, but also how to decipher and process the GPIO mappings from the pinmux and the kernel source code to set up the board DTSI file. All good fun. We’re reprinting it here because we think it is good information to share throughout the community. The process is similar on all of the Jetson family. Here’s the post:

Forum Post

Looks like NVIDIA has integrated CAN into the TX2, but the rest of us have to do it the old fashioned way. Here’s a quick guide on how to get an MCP2515 working with the TX1 development board.

Special Note: The TX1’s SPI logic level is 1.8V, the MCP2515 runs on 3.3V and expects a logic level of 0.9*VDD (~2.97V) to latch. Thus you’ll need a level shifter between the TX1 and the MCP2515 to bridge the gap if you’re using the Display Expansion connector (SPI0 and SPI2). It is possible that J21 SPI1 can be set to 3.3V by setting the jumper on J24, but I haven’t tested that.

Here are the attachment files:
tegra210-daxc03.dtsi
board-t210ref

Kernel Configuration

In order for the MCP2515 to work the kernel needs to have both SPI and the MCP251x drivers enabled. First step is becoming familiar with the process for building the kernel. Ridgerun has an excellent guide on the subject here: https://developer.ridgerun.com/wiki/index.php?title=Compiling_Tegra_X1_source_code

Run through that guide until you get to Build Kernel step 5:

make tegra21_defconfig
make menuconfig

The configurations that need to be set are the following:

  • CONFIG_SPI_SPIDEV=y
  • CONFIG_CAN_DEV=y
  • CONFIG_CAN_MCP251X=y

For menuconfig that means this:
Device Drivers →
  <*> SPI support →
    <*> User mode SPI device driver support

<*> Networking support →
  <*> CAN bus subsystem support →
  CAN Device Drivers →
    <*> Microchip MCP251x SPI CAN controllers

Once the kernel is configured we need to update the dtsi and edit the TX1 board file so it knows where to find our MCP2515.

Device Tree

Next step, which may not be necessary for using the MCP2515, but is useful for debugging, is to enable spidev in the device tree. This allows you to manipulate the SPI port using tools such as the SPIDEV kernel module or the py-spidev python package (https://github.com/doceme/py-spidev).
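Once the device tree change is in place and the board has been flashed, a quick way to confirm that spidev is exposed is to look for the device nodes. The exact numbering depends on which SPI controllers were enabled:

$ ls /dev/spidev*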

The TX1 loads a device tree blob (dtb) on boot which contains configuration information such as clock speeds, register settings, and pin defaults. To find out what file your board uses you can check dmesg.

$ dmesg | grep .dts
[ 0.000000] DTS File Name: arch/arm64/boot/dts/tegra210-jetson-tx1-p2597-2180-a01-devkit.dts

Now navigate your kernel_source folder. And take a look at that dts file.

$ cd $DEVDIR/64_TX1/Linux_for_Tegra_64_tx1/sources/kernel_source/
$ vim arch/arm64/boot/dts/tegra210-jetson-tx1-p2597-2180-a01-devkit.dts

After the copyright information you’ll see an include line. Insert your own include with the SPI configuration below that one. If you’re using my dtsi it would look like this:

/*
* arch/arm64/boot/dts/tegra210-jetson-tx1-p2597-2180-a01-devkit.dts
*
* Copyright (c) 2014-2015, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
*/

#include "tegra210-jetson-cv-base-p2597-2180-a00.dts"
#include "tegra210-daxc02.dtsi"

Don’t forget to actually put that file (or a link to the file) in the folder so it can be included.

If you want to modify the device tree in place (on the TX1) you can find instructions at the following link: http://elinux.org/Jetson/TX1_SPI

GPIO and SFIO

Selecting which GPIO to use (and figuring out which ones are available) can be tricky. In this section I’ll talk about GPIO and SFIO, pin mappings, and walk through enabling access to the SPI port on the J21 header.

The majority of pins on the TX1, and most embedded devices for that matter, can be configured to either source or sink current. They are aptly called “General Purpose Input Output” or GPIO. Sometimes these pins are connected to specialized internal circuitry for purposes such as analog to digital conversion, generating waveforms, or handling protocols such as UART, SPI, or I2C. When a pin has circuitry like that it is called a “Specific Function Input Output” or SFIO. The device tree is responsible for telling the TX1 how every single one of those pins is to be configured. Either as an output (source current), an input (sink current), or SFIO.

To start figuring out the TX1 GPIO system you’ll want to jump over to the download center and grab “Jetson TX1 Module Pinmux” (http://developer.nvidia.com/embedded/dlc/jetson-tx1-module-pinmux). This spreadsheet was created as a customer reference showing how the TX1 pins are configured. It even has scripts to allow you to generate a new gpio defaults dtsi, but I prefer to do everything in my own dtsi file. Mux is shorthand for “multiplexor” which is the device that handles selecting which internal circuitry a pin is connected to.

Column A shows the pin names and columns G-K show the GPIO and SFIO mappings. If we scroll down to the SPI section we can see that the SPI1 bus, which is connected to the J21 header, can be multiplexed to either GPIO PC1-4, or SPI. We can also see on column AN that GPIO is selected by default. Thus to use the port as SPI we need to configure those pins to be SFIO instead of GPIO.

“GPIO3_PC.00” probably doesn’t mean much to you. Luckily there is a mapping buried in the kernel source.

$ cd $DEVDIR/64_TX1/Linux_for_Tegra_64_tx1/sources/kernel_source/
$ less arch/arm/mach-tegra/gpio-names.h

With the gpio-names file we can cross reference the GPIO names with the actual pin numbers.

#define TEGRA_GPIO_PC0 16
#define TEGRA_GPIO_PC1 17
#define TEGRA_GPIO_PC2 18
#define TEGRA_GPIO_PC3 19
#define TEGRA_GPIO_PC4 20

Once we have that information, switching the pins to sfio is as easy as calling “gpio-to-sfio” in gpio defaults like in my example dtsi.

gpio@6000d000 {
	/* Needed for J21 Header SPI1 */
	gpio_default: default {
		gpio-input = <170 174 185>;      // Set PV2, PV6, and PX1 to input
		gpio-to-sfio = <16 17 18 19 20>; // J21 Header SPI1
	};
};

This sets three pins, 170, 174, and 185 to inputs, and converts 5 pins, 16-20, to SFIO.

Choosing an interrupt GPIO

Now that we have these two documents we can choose pins for our MCP2515 interrupts. Let's say we want to stick with the J21 header. If we take a look at the header pinout (http://www.jetsonhacks.com/nvidia-jetson-tx1-j21-header-pinout/) we can see that several pins are helpfully already labeled as GPIO. Most of these have some other label such as GPIO9_MOTION_INT, but these are just suggestions. A GPIO is a GPIO; you can use it for whatever you would like.

Say we wanted to use J21 pin 31 (GPIO9_MOTION_INT) as our interrupt pin. Pull up the Pinmux spreadsheet and search for GPIO9. You should find something like “GPIO9/MOTION_INT” in column A. Scrolling over to column G shows that it is mapped to “GPIO3_PX.02”

Now check /arch/arm/mach-tegra/gpio-names.h for the pin number.

$ cd $DEVDIR/64_TX1/Linux_for_Tegra_64_tx1/sources/kernel_source/
$ cat arch/arm/mach-tegra/gpio-names.h | grep PX2

Using that number (in this case 186) you can set that gpio to an input in the dtsi and use it for your interrupt in the board-file, below. Note that gpio-names.h is compiled into the kernel. So for the board file you can use the define in place of the pin number. Ex: TEGRA_GPIO_PX2

Board File

The version of the MCP251x driver included in the kernel does not use the device tree for its configuration. Thus the TX1’s board file needs to be modified to add the driver’s private data structs.

This board file compiled into the TX1 kernel is located here:

kernel_source/arch/arm64/mach-tegra/board-t210ref.c

First define the MCP2515 private data structures and init function. You may need to make the following changes to my definitions:

  • Change the irq to whichever GPIO pin you are using
  • Switch the oscillator_frequency to match the crystal you have connected to the mcp2515
  • Change the bus_num to the SPI port you are using, see my dtsi file for mapping

#ifdef CONFIG_CAN_MCP251X
#include <linux/can/platform/mcp251x.h>
#define CAN_GPIO_IRQ_MCP251x_SPI TEGRA_GPIO_PV6

static struct mcp251x_platform_data mcp251x_info = {
	.oscillator_frequency = 16 * 1000 * 1000, /* Oscillator connected to the MCP2515 crystal */
	.board_specific_setup = NULL,             /* We don't have a board specific setup */
	.power_enable         = NULL,             /* We don't want any power enable function */
	.transceiver_enable   = NULL,             /* We don't want any transceiver enable function */
};

struct spi_board_info mcp251x_spi_board[1] = {
	{
		.modalias      = "mcp2515",        /* (or mcp2510) used chip controller */
		.platform_data = &mcp251x_info,    /* reference to the mcp251x_platform_data mcp251x_info */
		.max_speed_hz  = 2 * 1000 * 1000,  /* max speed of the used chip */
		.chip_select   = 0,                /* the spi cs usage */
		.bus_num       = 1,                // SPI0
		.mode          = SPI_MODE_0,
	},
};

static int __init mcp251x_init(void)
{
	mcp251x_spi_board[0].irq = gpio_to_irq(CAN_GPIO_IRQ_MCP251x_SPI); // #define CAN_GPIO_IRQ_MCP251x_SPI TEGRA_GPIO_PK2
	spi_register_board_info(mcp251x_spi_board, ARRAY_SIZE(mcp251x_spi_board));
	pr_info("mcp251x_init\n");
	return 0;
}

#endif

Next, add the init function to tegra_t210ref_late_init():

static void __init tegra_t210ref_late_init(void)
{
	struct board_info board_info;
	tegra_get_board_info(&board_info);
	pr_info("board_info: id:sku:fab:major:minor = 0x%04x:0x%04x:0x%02x:0x%02x:0x%02x\n",
		board_info.board_id, board_info.sku,
		board_info.fab, board_info.major_revision,
		board_info.minor_revision);

	t210ref_usb_init();
	tegra_io_dpd_init();
#ifdef CONFIG_PM_SLEEP
	/* FIXME: Assumed all t210ref platforms have sdhci DT support */
	t210ref_suspend_init();
#endif
	tegra21_emc_init();
	isomgr_init();

#ifdef CONFIG_CAN_MCP251X
	mcp251x_init();
#endif

	/* put PEX pads into DPD mode to save additional power */
	t210ref_camera_init();
}

Once you have those changes made go ahead and finish the Ridgerun guide (step 6+) and finish compiling and flashing the kernel.

Bringing up the Interface

The MCP2515 should appear as a can0 interface under ifconfig -a

To bring it up, use the following commands (change the bitrate to whatever your bus uses):

$ sudo ip link set can0 type can bitrate 500000
$ sudo ifconfig can0 up
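As a quick sanity check of the interface (assuming there is another node or a loopback on the bus), the can-utils package provides simple send and dump tools:

$ sudo apt-get install can-utils
$ candump can0 &
$ cansend can0 123#DEADBEEF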

Conclusion

I wish I had this when trying to figure out the mappings for the TX1 when I received a prototype unit a few years ago!

The post GPIO and SPI – NVIDIA Jetson TX1 appeared first on JetsonHacks.

Build OpenCV on the NVIDIA Jetson TX2

As a developer, sometimes you need to build OpenCV from source to get the configuration desired. There is a script on the JetsonHacks Github account to help in the process. Looky here:

Background

JetPack can install a CPU and GPU accelerated version of the OpenCV libraries, called OpenCV4Tegra, on the Jetson. OpenCV4Tegra is version 2.4.13 as of this writing. This is great for many applications, especially when you are writing your own apps. However, some libraries require different modules and such that require upstream OpenCV versions.

Installation

The community has gathered the recipe(s) for building OpenCV for versions later than OpenCV 3.0. There is a repository on the JetsonHacks Github account which contains a build script to help in the process.

To download the source and build OpenCV:

$ git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
$ cd buildOpenCVTX2
$ ./buildOpenCV.sh

Once finished building, you are ready to install.

As explained in the video, navigate to the build directory to install the newly built libraries:

$ cd ~/opencv/build
$ sudo make install

Once you have generated the build files, you can use the ccmake tool to examine the different options and modules available.

Remember to set up your OpenCV library paths correctly; a minimal check is sketched below.
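If the libraries were installed to the default /usr/local prefix, refreshing the linker cache and importing the module from Python is usually enough to confirm the installation; the version printed should match the sources you built:

$ sudo ldconfig
$ python -c "import cv2; print(cv2.__version__)"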

Notes

  • This is meant to be a template for building your own custom version of OpenCV, pick and choose your own modules and options
  • Most people do NOT have both OpenCV4Tegra and the source built OpenCV on their system. Some people have noted success using both, however; check the forums.
  • Sometimes the make tool does not build everything. Experience dictates to go back to the build directory and run make again, just to be sure
  • Different modules and settings may require different dependencies; make sure to look for error messages when building.
  • After building, you should run the tests. The build script includes the testing options. All tests may not pass.
  • The build script adds support for Python 2.7
  • The compiler assumes that the Jetson TX2 aarch64 (ARMv8) architecture is NEON enabled, therefore you do not have to enable the NEON flag for the build

The information for this script was gathered from several places:

The post Build OpenCV on the NVIDIA Jetson TX2 appeared first on JetsonHacks.
