With the deprecation of OpenCV4Tegra in L4T 28.1, developers may want to build OpenCV from source for their applications. There is a script on the JetsonHacks Github account to help in the process. Looky here:
Background
For some applications it makes sense to build OpenCV from source. OpenCV is a rich environment with many options. For example, the OpenCV libraries in the Ubuntu repositories currently do not include GPU acceleration for aarch64 machines. For the Jetson, with its built in GPU, it makes sense to build GPU support into the OpenCV library.
An earlier NVIDIA supplied version of OpenCV4Tegra provided a variety of optimizations for the OpenCV library. Over the course of the last couple of years these optimizations have migrated upstream. That means that the optimizations are now available in the public OpenCV library.
Installation
The community has gathered the recipes for building versions of OpenCV later than 3.0. There is a repository on the JetsonHacks Github account which contains a build script to help in the process.
To download the source and build OpenCV:
$ git clone https://github.com/jetsonhacks/buildOpenCVTX1.git
$ cd buildOpenCVTX1
$ ./buildOpenCV.sh
Once finished building, you are ready to install.
Navigate to the build directory to install the newly built libraries:
$ cd ~/opencv/build
Note: As discussed in the video, due to the nature of the multi-processor make process, some files may not be compiled. It’s a good idea to run make again after switching to the build directory:
$ make
This takes an extra few minutes, but will round up any stragglers and save headaches down the road.
Now install:
$ sudo make install
Once you have generated the build files, you can use the ccmake tool to examine the different options and modules available.
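As a sketch (assuming the default out-of-source build directory the script uses), the options can be browsed interactively with ccmake, CMake's curses interface:

```
cd ~/opencv/build
ccmake ..
# Press 'c' to configure, 't' to toggle advanced options,
# 'g' to generate the build files, 'q' to quit.
```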
Remember to set up your OpenCV library paths correctly after installation.
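A minimal sketch of what that setup can look like, assuming the default /usr/local install prefix (adjust the paths for your Python version and install location):

```
# Make the newly installed libraries visible to the dynamic linker
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/opencv.conf
sudo ldconfig

# For Python scripts, make sure the cv2 bindings are on the module path
export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
```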
Notes
This is meant to be a template for building your own custom version of OpenCV; pick and choose your own modules and options.
The Jetson TX1 in the video is running L4T 28.1; OpenCV 3.3 is the version being built.
In the script, GStreamer support has been enabled. However, there are issues using the Jetson TX1 onboard camera with it. The issue is discussed here: NVIDIA Jetson TX1 Forum – L4T 28.1 Onboard Camera Error. As of this writing (9/5/2017), I have been unable to get this working.
Most people do NOT have both OpenCV4Tegra and the source-built OpenCV on their system. Some people have noted success using both, however; check the forums.
Sometimes the make tool does not build everything. Experience dictates going back to the build directory and running make again, just to be sure.
Different modules and settings may require different dependencies; make sure to look for error messages when building.
After building, you should run the tests. The build script includes the testing options. Not all tests may pass.
The build script adds support for Python 2.7.
The compiler assumes that the Jetson TX1 aarch64 (ARMv8) architecture is NEON enabled, so you do not have to enable the NEON flag for the build.
The information for this script was gathered from several places.
At a recent MIT/Lincoln Labs 2017 Beaver Works Summer Institute Seminar, Ariel Anders gave a lecture on route planning with regards to mobile robot navigation. Looky here:
Background
There are many parts to mobile robot navigation. As this information is used directly in programming the MIT RACECAR, an NVIDIA Jetson-based robot, we will cover that information here.
Mobile robot navigation can be broken into these main categories:
Perception
Localization
Task and Route Planning
Motion Planning and Execution
This lecture covers the route planning aspect of the task.
There are many more lectures available in this summer series, with a wide range of subject matter. We will be providing pointers to the lectures that directly address the RACECAR, but it’s worth going through the playlist to find other topics which may interest you.
Note that these lectures are given to high school senior students.
Note: Some people find it helpful to set the playback speed for these types of videos to 1.25X on YouTube; the setting is available in the settings menu. This saves a little time while watching, but the fidelity is still good enough to understand the lecture. You can always put it back to normal speed for the tricky bits.
In this article, we build a simple demonstration of a Canny Edge Detector using OpenCV, Python, and the onboard camera of the NVIDIA Jetson TX2 Development Kit. Looky here:
Background
Back in 1986, John F. Canny developed the Canny Edge Detector. It is one of the image processing milestones which is still in use today.
You can read some more about the Canny Edge Detector and the technical details here: OpenCV.org Canny Edge Detector and here: Wikipedia – Canny edge detector
In this article we will use a simple Python script which uses the OpenCV library implementation of the Canny Edge Detector to read frames from the onboard camera and run them through the filter. Earlier we went over how to build the OpenCV library for the Jetson. There is a repository on the JetsonHacks Github account which contains a build script to help in the process. You will need to enable GStreamer support. As of this writing, the current script (OpenCV 3.3) enables GStreamer support, while earlier versions did not. GStreamer must be enabled to support the onboard camera.
The Codez
The file cannyDetection.py is available in the Examples folder of the buildOpenCVTX2 repository on the JetsonHacks Github account. The script is also available as a Github Gist. The Gist is seen in its entirety further down below.
GStreamer Camera Pipeline
The first task is to open the onboard camera. Camera access is through a GStreamer pipeline:
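The exact pipeline string lives in the script; it is roughly of this shape (the element names and capabilities shown here are an approximation for the TX2 onboard camera, so consult the repository for the exact string):

```
nvcamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, format=I420, framerate=30/1 !
  nvvidconv ! video/x-raw, format=BGRx ! videoconvert !
  video/x-raw, format=BGR ! appsink
```

The pipeline string is passed to cv2.VideoCapture(), which then hands frames to OpenCV in BGR format.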
The main part of the filter processing reads a frame from the camera, converts it to gray scale, runs a Gaussian blur on the gray scale image, and then runs the Canny Edge Detector on that result:
ret_val, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert to grayscale
blur = cv2.GaussianBlur(gray, (7, 7), 1.5)       # smooth to suppress noise
edges = cv2.Canny(blur, 0, edgeThreshold)        # detect edges
Not surprisingly, it is more concise to just write the code. In the code, both the GaussianBlur and Canny functions have parameters to fine-tune the results.
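For intuition, the core gradient-threshold idea behind edge detection can be sketched in pure Python on a tiny synthetic image. This is a simplification: cv2.Canny additionally performs non-maximum suppression and hysteresis thresholding on top of the gradient step.

```python
# Illustrative only: a toy gradient-threshold edge detector on a synthetic
# 8x8 image whose right half is bright. The real script uses cv2.Canny.

def gradient_edges(img, threshold):
    """Return a binary edge map: 1 where the local intensity
    gradient magnitude exceeds threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]   # horizontal difference
            gy = img[min(y + 1, h - 1)][x] - img[y][x]   # vertical difference
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# Synthetic image: left half dark (0), right half bright (255).
img = [[0] * 4 + [255] * 4 for _ in range(8)]
edges = gradient_edges(img, 100)

# The only edge column is the dark/bright boundary at x == 3.
print([row.index(1) for row in edges])  # → [3, 3, 3, 3, 3, 3, 3, 3]
```

Raising the threshold suppresses weaker edges, which is exactly what the edgeThreshold parameter controls in the script.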
At this point, we could simply display it on a window on the screen:
cv2.imshow('Canny Edge Detector', edges)
In the script, we add a little user interface sugar which allows us to optionally display each step. The only interesting part is that the image for each step is composited into one larger frame. This requires that the images be converted to the same color space before compositing. In the video, the Jetson TX2 is set to run at maximum performance (See note below). Here’s the full script:
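The full script itself lives in the repository and Gist noted above. Separately, the color-space conversion needed before compositing can be illustrated with a toy sketch; the helper names here are made up for illustration (the real script uses cv2.cvtColor and numpy arrays):

```python
# Illustrative sketch: a single-channel (gray) image must be expanded to
# three channels before it can sit next to a color image in one frame.

def gray_to_bgr(gray):
    # Replicate each gray value into B, G, R channels
    return [[[v, v, v] for v in row] for row in gray]

def hstack(a, b):
    # Place two images side by side, row by row
    return [ra + rb for ra, rb in zip(a, b)]

color = [[[255, 0, 0], [0, 255, 0]]]   # 1x2 color image
gray = [[128, 64]]                      # 1x2 gray image

frame = hstack(color, gray_to_bgr(gray))
print(frame)  # → [[[255, 0, 0], [0, 255, 0], [128, 128, 128], [64, 64, 64]]]
```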
Conclusion
Many people use OpenCV for everyday vision processing tasks, and the Canny Edge Detector is a valuable tool. The purpose of this article is to show how to access the onboard camera using GStreamer in a Python script with OpenCV. More importantly, I played guitar in the video.
Notes
In the video, the Jetson TX2 is running L4T 28.1, OpenCV 3.3 with GStreamer support enabled
In the video, the Jetson TX2 is running ‘$ sudo nvpmodel -m 0’
In this article, we will build and install TensorFlow v1.3.0 on the Jetson TX2 running L4T 28.1 from source. Looky here:
Background
TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX2 ships with TensorRT. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models which are then transferred over and “run” on the Jetson.
Note: We built TensorFlow back in April, 2017 for the Jetson TX2 running L4T 27.1 (JetPack 3.0). This article is the update to build TensorFlow for L4T 28.1 (JetPack 3.1).
Some people would like to use the entire TensorFlow system on a Jetson. In this article, we’ll go over the steps to build TensorFlow v1.3.0 on the Jetson TX2 from source. This should take about two hours to build.
Note: Please read through this article before starting installation. This is not a simple installation, you may want to tailor it to your needs.
Preparation
This article assumes that JetPack 3.1 is used to flash the Jetson TX2. At a minimum, install:
L4T 28.1, an Ubuntu 16.04 64-bit variant (aarch64)
CUDA 8.0
cuDNN 6.0.1
TensorFlow will use CUDA and cuDNN in this build.
It may be helpful to enable all of the CPU cores for the build:
$ sudo nvpmodel -m 0
There is a repository on the JetsonHacks account on Github named installTensorFlowTX2. Clone the repository and switch over to that directory.
There is a convenience script which will install the required prerequisites such as Java and Bazel. The script also patches the source files appropriately for ARM 64.
Before installing TensorFlow, a swap file should be created (minimum of 8GB recommended). The Jetson TX2 does not have enough physical memory to compile TensorFlow. The swap file may be located on the internal eMMC, and may be removed after the build. Note that placing the swap file on a SATA drive if available will be faster.
If you do not already have one available on the Jetson, there is a convenience script for building a swap file. For example, to build an 8GB swapfile on the eMMC in the home directory:
$ ./createSwapfile.sh -d ~/ -s 8
After TensorFlow has finished building, the swap file is no longer needed and may be removed (See below).
Scripts in this repository will build TensorFlow with Python 2.7 support, and/or Python 3.5 support.
For Python 2.7
$ ./installPrerequisites.sh
From the video, installation of the prerequisites takes a little over 30 minutes, but will depend on your internet connection speed.
First, clone the TensorFlow repository and patch for Arm 64 operation:
$ ./cloneTensorFlow.sh
Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure.sh file. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script. The library locations are determined by the JetPack installation.
$ ./setTensorFlowEV.sh
Continue to Building and Installation
For Python 3.5
$ ./installPrerequisitesPy3.sh
From the video, installation of the prerequisites takes a little over 30 minutes, but will depend on your internet connection speed.
First, clone the TensorFlow repository and patch for Arm 64 operation:
$ ./cloneTensorFlow.sh
Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure.sh file. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script. The library locations are determined by the JetPack installation.
$ ./setTensorFlowEVPy3.sh
Build TensorFlow and Install
We’re now ready to build TensorFlow:
$ ./buildTensorFlow.sh
This should take less than two hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:
$ ./packageTensorFlow.sh
The wheel file will be placed in the $HOME directory.
Validate your TensorFlow installation by doing the following:
Start a Terminal.
Change directory (cd) to any directory on your system other than the tensorflow subdirectory from which you invoked the configure command.
Invoke python or python3 accordingly; for Python 3.x, for example:
$ python3
Enter the following short program inside the python interactive shell:
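The short program is most likely the standard TensorFlow 1.x validation snippet from the TensorFlow documentation; it requires the freshly installed TensorFlow, so run it on the Jetson itself:

```python
# Standard TensorFlow 1.x smoke test (from the TensorFlow docs)
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
```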
If the Python program outputs the following, then the installation is successful and you can begin writing TensorFlow programs.
Hello, TensorFlow!
Conclusion
So there you have it. Building TensorFlow is quite a demanding task, but hopefully some of these scripts may make the job a little bit simpler.
Notes
The install in the video was performed directly after flashing the Jetson TX2 with JetPack 3.1
The install is lengthy; however, it should take much less than 4 hours once all the files are downloaded. If it takes that long, something is wrong.
TensorFlow 1.3.0 is installed
Github recently upgraded their operating system and regenerated checksums for some of their archives. The TensorFlow project relies on those older checksums in some of their code, which can lead to dependency files not being downloaded. Here we use a patch, applied after TensorFlow is git cloned, which updates the file workspace.bzl to ignore the old checksums, but there may be other instances of this issue as time goes on. Beware.
Removing the Swap File
If you created a swap file for this installation, you may wish to remove it after building TensorFlow. There are several ways to do this. First, if you did not AUTOMOUNT the swapfile, you may reboot the machine and then delete the file either through the Terminal or through the GUI.
If you wish to delete the swapfile without rebooting the machine, you should turn the swap off and then remove the swap file. For example, for the swapfile located in the home directory:
$ sudo swapoff ~/swapfile
$ rm ~/swapfile
To turn off all active swap files at once instead, use: $ sudo swapoff -a
If you used the AUTOMOUNT option, you will probably also need to edit the file /etc/fstab.
In this article, we will build TensorFlow v1.3.0 on the Jetson TX1 running L4T 28.1 from source, and then install it. Looky here:
Background
TensorFlow is one of the major deep learning systems. Created at Google, it is an open-source software library for machine intelligence. The Jetson TX1 ships with TensorRT. TensorRT is what is called an “Inference Engine”, the idea being that large machine learning systems can train models which are then transferred over and “run” on the Jetson.
Note: We built TensorFlow r0.11 back in December, 2016 for the Jetson TX1 running L4T 24.2.1. This article is the update to build TensorFlow for L4T 28.1 (JetPack 3.1). In order to build a version greater than v0.12 of TensorFlow, cuDNN version 6.0 is needed. Because that earlier version of L4T was running cuDNN 5.X, upgrading to the new L4T 28.1 now allows us to run versions r1.0 and greater of TensorFlow on the Jetson TX1.
Some people would like to use the entire TensorFlow system on a Jetson. In this article, we'll go over the steps to build TensorFlow v1.3.0 on the Jetson TX1 from source. This should take about three hours to build and install. The procedure is very similar to the Jetson TX2 installation, with the exception that a modified kernel is needed to enable swap memory.
Note: Please read through this article before starting installation. This is not a simple installation, you may want to tailor it to your needs.
Preparation
This article assumes that JetPack 3.1 is used to flash the Jetson TX1. At a minimum, install:
L4T 28.1, an Ubuntu 16.04 64-bit variant (aarch64)
CUDA 8.0
cuDNN 6.0.1
TensorFlow will use CUDA and cuDNN in this build.
It may be helpful to set the CPU clocks to their maximum settings:
$ sudo ./jetson_clocks.sh
There is a repository on the JetsonHacks account on Github named installTensorFlowTX1. Clone the repository and switch over to that directory.
Before installing TensorFlow, a swap file should be created (minimum of 8GB recommended). The Jetson TX1 does not have enough physical memory to compile TensorFlow. The swap file may be removed after the build. Note that placing the swap file on a SATA drive, if available, will be faster.
Unfortunately, the L4T 28.1 release does not have swap files enabled in the kernel. You will need to build a custom kernel with swap file support enabled. You may do this in the NVIDIA approved manner of building everything on the host, or cheat and build it on the Jetson itself. Here's an article on how to Build Kernel and ttyACM Module on the Jetson TX1.
You will need to enable the “Support for paging of anonymous memory (swap)” option. The symbol is CONFIG_SWAP. Here's what it looks like:
Once you have the new kernel installed and swap memory enabled, there is a convenience script for building a swap file.
For example, to build an 8GB swapfile:
$ ./createSwapfile.sh -d [filename] -s 8
where [filename] is a directory path to an external device. The Jetson TX1 probably does not have enough free room on the eMMC to place the swap file there. If you have an SSD attached to the Jetson, that's probably the fastest. USB is next, and then an SD card.
After TensorFlow has finished building, the swap file is no longer needed and may be removed (See below).
Prerequisites
There is a convenience script which will install the required prerequisites such as Java and Bazel. The script also patches the source files appropriately for ARM 64.
Scripts in this repository will build TensorFlow with Python 2.7 support, and/or Python 3.5 support.
For Python 2.7
$ ./installPrerequisites.sh
From the video, installation of the prerequisites takes a little over 30 minutes, but will depend on your internet connection speed.
First, clone the TensorFlow repository and patch for Arm 64 operation:
$ ./cloneTensorFlow.sh
Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure.sh file. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script. The library locations are determined by the JetPack installation.
$ ./setTensorFlowEV.sh
Continue to Building and Installation
For Python 3.5
$ ./installPrerequisitesPy3.sh
From the video, installation of the prerequisites takes a little over 30 minutes, but will depend on your internet connection speed.
First, clone the TensorFlow repository and patch for Arm 64 operation:
$ ./cloneTensorFlow.sh
Then set up the TensorFlow environment variables. This is a semi-automated way to run the TensorFlow configure.sh file. You should look through this script and change it according to your needs. Note that most of the library locations are configured in this script. The library locations are determined by the JetPack installation.
$ ./setTensorFlowEVPy3.sh
Build TensorFlow and Install
We’re now ready to build TensorFlow:
$ ./buildTensorFlow.sh
This should take less than two hours. After TensorFlow is finished building, we package it into a ‘wheel’ file:
$ ./packageTensorFlow.sh
The wheel file will be placed in the $HOME directory.
Validate your TensorFlow installation by doing the following:
Start a Terminal.
Change directory (cd) to any directory on your system other than the tensorflow subdirectory from which you invoked the configure command.
Invoke python or python3 accordingly; for Python 3.x, for example:
$ python3
Enter the following short program inside the python interactive shell:
If the Python program outputs the following, then the installation is successful and you can begin writing TensorFlow programs.
Hello, TensorFlow!
Conclusion
So there you have it. Building TensorFlow is quite a demanding task, but hopefully some of these scripts may make the job a little bit simpler.
Notes
The install in the video was performed directly after flashing the Jetson TX1 with JetPack 3.1
The install is lengthy; however, it should take much less than 4 hours once all the files are downloaded. If it takes that long, something is wrong.
TensorFlow 1.3.0 is installed
Github recently upgraded their operating system and regenerated checksums for some of their archives. The TensorFlow project relies on those older checksums in some of their code, which can lead to dependency files not being downloaded. Here we use a patch, applied after TensorFlow is git cloned, which updates the file workspace.bzl to ignore the old checksums, but there may be other instances of this issue as time goes on. Beware.
Removing the Swap File
If you created a swap file for this installation, you may wish to remove it after building TensorFlow. There are several ways to do this. First, if you did not AUTOMOUNT the swapfile, you may reboot the machine and then delete the file either through the Terminal or through the GUI.
If you wish to delete the swapfile without rebooting the machine, you should turn the swap off and then remove the swap file. For example, for a swapfile located in the home directory:
$ sudo swapoff ~/swapfile
$ rm ~/swapfile
At a recent MIT/Lincoln Labs 2017 Beaver Works Summer Institute Seminar, Sertac Karaman gave a lecture on low-level robot vision. Looky here:
Background
With the advent of the digital camera, vision sensors have become ubiquitous. Combining vision sensors with GPU computing elements enables a breakthrough in camera based perception for a wide range of applications. In this lecture, we find out how this can be applied to mobile robots. As this information forms a basis for programming the MIT RACECAR, an NVIDIA Jetson-based robot, we cover that information here.
In the lecture, Dr. Karaman gives a short history of computer vision and then expounds on how modern low-level robot vision systems work. Topics include:
Camera as sensor
Color representation
Object detection
Camera calibration
Much of this lecture is drawn from material covered in actual MIT classes.
There are many more lectures available in this summer series, with a wide range of subject matter. We will be providing pointers to the lectures that directly address the RACECAR, but it’s worth going through the playlist to find other topics which may interest you.
Note that these lectures are given to high school senior students.
Note: Some people find it helpful to set the playback speed for these types of videos to 1.25X on YouTube; the setting is available in the settings menu. This saves a little time while watching, but the fidelity is still good enough to understand the lecture. You can always put it back to normal speed for the tricky bits.
In the last few articles, we've been building TensorFlow packages which support Python. The packages are now in a Github repository, so we can install TensorFlow without having to build it from source. Install files are available for both the Jetson TX1 and Jetson TX2. Looky here:
Background
In the earlier articles, we went over the procedure for building TensorFlow for the Jetson TX1 and the Jetson TX2. The procedure takes a few hours on each platform. Worse, for the Jetson TX1 a new kernel is needed so that swap memory can be enabled.
Fortunately, we were able to create a new Github repository called installTensorFlowJetsonTX which contains all of the Python install .whl (wheel) files that were created during the process (and then some).
The repository contains two sets of wheel files. There is a set of files for the Jetson TX1 in the TX1 directory. The other set of wheel files is for the Jetson TX2, in the TX2 directory. Each of the directories contains two wheel files. One of the wheel files is for Python 2.7, the other is for Python 3.5. For example, the file tensorflow-1.3.0-cp35-cp35m-linux_aarch64.whl in the TX1 folder is the TensorFlow 1.3 wheel file for Python 3.5. The term aarch64 indicates that the file is for the Jetson TX1 ARM64 architecture.
Install TensorFlow
Install the wheel files with the Python pip tool. The first step is to install the appropriate version of python-pip. Next, download the appropriate wheel file from the repository. Then use the pip tool with the corresponding wheel file to finish the installation.
Downloading the wheel file can be done in a couple of ways. The first way is to clone the repository. This has the side effect of downloading all of the wheel files (currently about 200MB). Perhaps a better way, as shown in the video, is to navigate to the wheel file you wish to download in the web browser on the Github site, and then click the Download button. The wheel file will be placed in your Downloads folder.
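As a sketch, assuming the Python 3.5 wheel for the TX1 named above has been downloaded (substitute the file for your platform and Python version):

```
# Install pip for Python 3, then install the downloaded wheel
sudo apt-get install -y python3-pip
cd ~/Downloads
sudo pip3 install tensorflow-1.3.0-cp35-cp35m-linux_aarch64.whl
```

For Python 2.7, install python-pip instead and use pip with the cp27 wheel file.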
Conclusion
This is a relatively easy way to install TensorFlow. TensorFlow is a very rich environment; the downside of this method is that you will not be able to specify the build options. Note that you will also have to install the TensorFlow models and support materials. The upside is that you don't have to spend the build time just to get TensorFlow up and running.
Notes
There are wheel files for Python 2.7 and Python 3.5. The Jetson environment for both the TX1 and TX2 is L4T 28.1, CUDA 8.0, and cuDNN 6.0.1.
At a recent MIT/Lincoln Labs 2017 Beaver Works Summer Institute Seminar, Ariel Anders gave a lecture on visual servoing control systems with regards to mobile robot navigation. Looky here:
Background
There are many parts to mobile robot navigation. As this information is used directly in programming the MIT RACECAR, an NVIDIA Jetson-based robot, we will cover that information here.
Mobile robot navigation can be broken into these main categories:
Perception
Localization
Task and Route Planning
Motion Planning and Execution
This lecture covers the motion planning and execution aspect of the task. The planning and control system is based on camera sensor input.
There are many more lectures available in this summer series, with a wide range of subject matter. We will be providing pointers to the lectures that directly address the RACECAR, but it’s worth going through the playlist to find other topics which may interest you.
Note that these lectures are given to high school senior students.
Note: Some people find it helpful to set the playback speed for these types of videos to 1.25X on YouTube; the setting is available in the settings menu. This saves a little time while watching, but the fidelity is still good enough to understand the lecture. You can always put it back to normal speed for the tricky bits.
In this quick tip, we install a package so that we can read SD Cards that are formatted for cameras and Windows machines. Looky here:
When you insert an SD Card into a Jetson Development Kit, a dialog appears saying that the file directory type is not known and that the card cannot be read. There is a hint that the card format, exFAT, is not understood. Fortunately, it's easy to add a file system in user space so that the SD card may be used. Install the exFAT package:
$ sudo apt-get install exfat-fuse exfat-utils
Once the package is installed, the SD Card should be accessible.
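If the card does not mount automatically after the package is installed, it can be mounted by hand; the device name below is an example, so check yours with lsblk first:

```
# Identify the card's partition (e.g. /dev/mmcblk1p1), then:
sudo mkdir -p /media/sdcard
sudo mount -t exfat /dev/mmcblk1p1 /media/sdcard
```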
This technique currently works with the Jetson TX2, Jetson TX1, and Jetson TK1 running Ubuntu 14.04 and Ubuntu 16.04 (L4T version up to and including L4T 28.1).
Okay, it’s on. As of this writing, you have 118 days to amaze your friends, and confuse your enemies.
The prize pool? Currently $42,800 USD. Big ideas, big money.
Here’s the challenge and money quote:
Calling all great developers, engineers, scientists, startups, and students! NVIDIA is challenging you to show us how you can transform robotics, industrial IoT, healthcare, security, or any other industry with a powerful AI solution built on the NVIDIA® Jetson™ platform.
You’ll not only get the chance to win amazing prizes, but also a trip to present your project to CEOs, executives, and industry peers at the world’s biggest event for GPU innovation and Artificial Intelligence (AI)—the GPU Technology Conference (GTC).
Prizes include:
Up to $10,000 in cash
Top-of-the-line NVIDIA TITAN Xp graphics card
NVIDIA Jetson TX2 Developer Kit
Deep Learning Institute training
And so much more, I’ll stop now so I don’t get tired typing. Not enough for you? There’s more!
Ten finalists will receive an expense paid trip to GTC in Silicon Valley, California for the opportunity to present their NVIDIA Jetson-enabled creation.
Go there to get the skinny on resources, rules, FAQ and all the stuff needed to conquer and dominate! Fame, fortune, and everything that goes with it! You want to be the one in the picture? Get to work!
One question for you. Why are you still reading this? You should be off building right now!
A USB ttyACM device cannot be accessed on a stock L4T 28.1 installation. In this article, we cover installation of a prebuilt cdc-acm module on the NVIDIA Jetson Dev Kits running L4T 28.1 to access ttyACM devices. This works with both the Jetson TX1 and Jetson TX2. Looky here:
Background
Some USB devices report as USB, others report as ACM. Here’s an article explaining the good bits about what that means. Many devices, such as some Arduinos, report as ACM devices.
We have previously gone over how to download the kernel sources and build the kernel and modules, even how to build a module specifically to “talk ACM”. However, that seems like a lot of work to do something both simple and frequently needed. Therefore, there's a JetsonHacks repository on Github named installACMModule which contains a pre-built module along with a script to install the module.
Installation
Installation is straightforward:
$ git clone https://github.com/jetsonhacks/installACMModule
$ cd installACMModule
There is one script:
$ ./installCDCACMModule.sh
The script compares the module version with the currently running kernel version. If the versions match, then the cdc-acm module is installed. If the versions do not match, then you will be asked if this is something you really want to do. In this case, you should make sure that the kernel on your machine is a version of 4.4.38:
$ uname -r
If you have a custom version of 4.4.38 running, it should be fine. If you have a different version, then you probably should not proceed. If you decide to go ahead, you may have to do a force install, which you can do with something similar to modprobe -f cdc-acm, though this hasn't been tested. There are many online resources which talk about this type of situation, so we won't cover them here. Here's an extra bit of information: a module contains something called a ‘version magic’ string which tells which version of a particular kernel built the module. If the kernel version of your machine and the version magic do not match, then the module system pouts a little bit. You can override it, but just remember that it is there for a reason.
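The version comparison the script performs can be sketched like this; the function name and output here are illustrative, not the script's actual code. In practice the second value comes from the module's vermagic string (viewable with modinfo) and the first from uname -r:

```shell
# Compare the running kernel version against the kernel version a
# module was built for (its 'vermagic' string).
check_kernel_match() {
  if [ "$1" = "$2" ]; then
    echo "match"
  else
    echo "mismatch"
  fi
}

check_kernel_match "4.4.38-tegra" "4.4.38-tegra"   # prints "match"
check_kernel_match "4.4.38+" "4.4.38-tegra"        # prints "mismatch"
```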
Conclusion
Whether you’re just starting out on the Jetson, or you just need a quick fix for not being able to access your ttyACM USB device, this should be a useful tool for your bag of tricks.
Notes
This has been tested both on NVIDIA Jetson TX1 and TX2 Development Kits, directly after flashing with L4T 28.1 (JetPack 3.1).
This is the first step of building a RACECAR/J. Here we prepare the chassis by removing parts we do not use, and upgrading the springs and bumper. Looky here:
Background
RACECAR/J is derived from the open source MIT RACECAR, an “open-source powerful platform for robotics research and education”. The first version of RACECAR/J is based on MIT RACECAR build 2.5, the current build as of November, 2017.
The RACECAR/J Chassis is based on the TRAXXAS Slash 4×4 Platinum Truck, an upgraded version of the normal Slash 4×4 which adds aluminum bits and pieces such as C-hubs, steering blocks, rear hub carriers and axle nuts. An anti-roll bar is also added, which helps better handle the weight that we will add with the autonomy sled.
Tools
The tools provided with the Traxxas Slash can be used to complete the procedures we are discussing. However, tiny allen wrenches may not be your cup of tea. I have found the following tools very useful, especially if you’re building a lot of robotic projects:
iFixit Toolkit – A very useful toolkit for almost anything consumer electronic
As shown in the accompanying video, there are several steps in preparing the TRAXXAS Slash. Preparation takes 30-45 minutes. Most of the preparation involves removing parts of the RC Car which are not used. Here are the major steps:
Remove the 4 body clips which hold the clear plastic body on the car.
Remove the plastic body.
Remove the body mounting brackets. There is one in the front, and one in the rear. Each mounting bracket is held in place by two screws.
Remove the receiver case. 4 screws hold the cover down, 2 more screws accessible from inside the box hold it to the chassis.
Remove the stock Electronic Speed Controller (ESC), which is held in place by two screws.
Remove the stock front bumper.
Upgrade the front and rear springs.
Install a new front bumper.
Remove the antenna holder.
The video gives detailed instructions on the modifications.
Remove the Body and Body Mounts
The first step is to remove the Traxxas Slash 4×4 Platinum from the package and place it on a work space.
Remove the body retaining clips, remove the body, and then remove the two body mounts. There are two screws holding each body mount in place.
Here are some pictures:
Remove Receiver Box
The Traxxas Slash does not have a receiver, but has a box for one attached to the chassis. In this step, remove the receiver box.
Remove the four screws holding the top of the receiver box in place. Remove the top cover, and thread the ESC and Steering Servo wires from the Receiver box. Next, remove the two screws holding the lower receiver box to the chassis, and remove the lower receiver box.
Here are some pictures:
Remove the Stock ESC
The next step is to remove the stock Traxxas ESC. Detach the wires going to the car motor, and then remove the two screws that are holding the ESC in place. Then remove the ESC.
Here are some pictures:
Now remove the antenna holder. A pair of pliers may be useful for this operation.
Remove the Front Bumper
There are five screws holding the front bumper to the chassis. Remove the two screws holding the top of the bumper to the chassis. Next, remove the three screws holding the bottom of the bumper to the chassis. Then remove the bumper.
Replacing the Springs
Because of the weight being added to the RACECAR, new springs are added to the Traxxas chassis. The springs are available on the RACECAR/J web store. The entire RACECAR/J can be bought as a kit (minus cameras and lidar); take a look around. Note: RACECAR/J kits and parts are currently only shipping to the United States. New regions soon!
Replacing the springs is difficult to describe; watch the video for a walk-through. Remove the screw holding the lower shock assembly to the suspension arm. Afterwards, you can flip the shock assembly up for easier access. For the front shocks, access is easier if the front bumper is removed. For the rear shocks, access to the attachment screw is easier if the car is placed upside down.
Once the shock assembly is free, compress the spring. Remove the spring retainer, then remove the old spring. Install the new spring. For the front shock, the short 2″ springs are used. For the rear springs, the longer 2.75″ springs are used. Compress the new spring after placing it on the collar, and reinstall the retaining clip. Reattach the shock assembly to the suspension arm.
Here are a couple of pictures, but the video may be more useful here:
Bumper Installation
The front bumper is replaced with a foam bumper, for better impact resistance. This is particularly useful for indoor use.
The bumper included in the RACECAR/J kit is a JConcepts Scalpel Bumper kit. For installation, see the directions included with that kit. People also use the Traxxas foam bumper, which uses Traxxas part numbers 7436, 7415x, 7437. Note that these are Amazon links, you can also order through Traxxas.com. There have been reports of clearance issues between the new springs and the Traxxas foam bumper parts; you may have to modify them to get a proper fit. I have not worked with the Traxxas foam bumper parts, so I have no experience to share.
Conclusion
Preparation of the chassis is straightforward. Basically remove the bits and pieces that are not needed, upgrade the springs, and add a new front bumper.
There are many ways to modify this build to suit any given application. In this build, we remove the stock ESC so that it can be replaced with a VESC. The VESC is an open source brushless DC motor controller. This provides better control at slow speeds than the stock ESC, as well as the ability to monitor motor speed. The motor speed can be used to calculate crude odometry, since there are no encoders built into the car drivetrain.
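As a rough sketch of how the motor's electrical rpm (erpm) could be turned into a speed estimate: the pole-pair count, gear ratio, and wheel diameter below are made-up illustrative numbers, not the actual RACECAR/J drivetrain values.

```shell
# Convert erpm reported by the motor controller into an approximate ground speed.
# All constants here are hypothetical; measure your own drivetrain.
echo "5000 2 9.0 0.11" | awk '{
  erpm = $1; pole_pairs = $2; gear_ratio = $3; wheel_dia_m = $4
  wheel_rpm = erpm / pole_pairs / gear_ratio
  speed = wheel_rpm / 60 * 3.1416 * wheel_dia_m   # meters per second
  printf "approx speed: %.2f m/s\n", speed
}'
```

Integrating this speed over time gives the crude odometry mentioned above; without wheel encoders, wheel slip makes it an estimate only.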
There are two mounting points for cameras, and a place for a lidar. In the next article, we will cover installing the platform decks and basic electronics that control the chassis. Stay tuned!
Thank you for reading and participating in the JetsonHacks community. Wishing everyone an absolutely great 2018!
At this time of year, it is fun to recap some statistics about the website. A lot of people are curious how many other people are using JetsonHacks.
JetsonHacks Website
When JetsonHacks was first started in 2014, I was curious how social media and network effects are related. As a software person, I know the theory behind the network effect. However, observing it in a social media context is a very interesting exercise. The chart below gives you an idea how much the website has grown over the last 3+ years:
The Jetson TX1 Development Kit was introduced in October 2015, and the Jetson TX2 Development Kit was introduced in March 2017. As you can see, the number of page views just about doubled from 2016 to 2017. 2016 brought in more than 290,000 views from 92,000 visitors. 2017 brought in around 570,000 views from 165,000 visitors. The site appears to still be growing a little; the average daily page views increased from 800 in 2016 to 1,575 in 2017. We also passed the 1 million all-time view mark this year, currently sitting at 1,001,890.
An interesting point about the network effect is that the page views for 2017 are greater than the previous 2.5 years combined!
JetsonHacks YouTube Channel
For 2016, there were 223K views with a watch time of 515K minutes. There were 1,043 Likes, 40 Dislikes, 321 Shares and 1,228 Subscribers added. For 2017, there were 423K views with a watch time of 1.15 million minutes. There were 2,587 Likes, 93 Dislikes, 1,337 Shares and 2,709 Subscribers added. As with the website, the total views for 2017 are greater than the previous 2.5 years combined!
Lifetime views are a little over 800K, with a watch time of a little over 2 million minutes. We are currently at 4,906 subscribers, closing in on the 5K mark. There are currently around 200 videos on the channel.
Similar to the website, the introduction videos for the Jetson TX2 and support software garnered the largest number of views, around 13% of the total views for the year. There is still great interest in additional sensors for the Jetsons with lidars and cameras being of high interest.
With the growth of the JetsonHacks website and YouTube channel, we’re of course picking up much more spam and people not behaving themselves. On the YouTube side there was a particularly nasty incident, which ended up with YouTube detecting it and banning the user. However for the most part it’s all good.
Just a quick note. “Likes” in the YouTube world, along with how long the video is watched, helps to recommend the video to other viewers. Subscriptions and comments do much the same thing. If you like the video, give it a thumbs up. On the other hand, if you disliked the video and give it a thumbs down, it would be useful to know why you didn’t like it in the comments. If you dislike the video I won’t hate you forever, just for what’s left of my natural life.
Note that with the higher traffic, it has become much more difficult to answer all the questions. If you ask questions that are not about the article or video where they are posted, you may not get a response.
JetsonHacks Github Repository
In the JetsonHacks Github Repository, there are now 80 repositories, up from 71 in 2016. People have been using the repositories on a regular basis; I hope everyone is finding them useful. Make sure to give them a star if you find them useful, it helps decide future projects. Also, please generate pull requests for improvements.
Special Shoutouts
I want to take the time to thank the folks from the NVIDIA Jetson team who have been gracious and generous in sharing their time and knowledge with me over the last few years. This is a long list of people (I remember our first meeting where we all fit in a conference room), but a special shoutout to Lynette, Lan, Phil, Murali, Chitoku, Amit, Jennifer, Chidi, Robert, Jesse and Eric. Thanks to everyone else on the Jetson team too, I appreciate it! Thank you, kindly.
On to 2018
Certainly if you have anything you’re working on and would like JetsonHacks to know about, send an email.
Again, thank you for all of your support. I hope your 2018 goes really swell. Oh, and it’s good to see you survived 2017, others weren’t so lucky.
RACECAR/J Platform Preparation is the second step of building RACECAR/J. In an earlier article we prepared the chassis, here we will be working on preparing the platform decks and adding some electronics. Looky here:
Background
One way to think about RACECAR/J is that it consists of two parts. The first part is the chassis, a 1/10 scale remote control car with modifications to support the extra weight of the electronics. The second part is the electronics. The electronics consist of the computers and sensors which enable the car to act autonomously.
The electronics attach to platform decks, which in the case of RACECAR/J are made from precision cut 3/16″ Delrin sheets.
The full RACECAR/J kit includes the mechanical hardware, platform decks, electronic speed controller and USB hub shown in the video. Full kits are available in the RACECAR/J shop. The video shows the “Big Mouth” kit in a MIT RACECAR configuration. The MIT RACECAR configuration mounts a Hokuyo UST-10LX Lidar at the front of the RACECAR, along with a stereo camera.
The Platform Decks and Mechanical hardware are available separately in the RACECAR/J shop.
Some paper towels and isopropyl alcohol are useful for cleaning the platform decks. Soapy water is a good alternative. 3M Dual Lock Reclosable Fastener is used to attach the USB Hub to the platform deck. Industrial Velcro is a good alternative.
Platform Deck Preparation
If you order the Platform Decks from the RACECAR/J shop, the decks are delivered as a single piece of Delrin protected by an overlay sheet. As shown in the video, remove the decks from the sheet and remove the overlay sheet.
After removing the overlay sheet, clean the platform decks. The laser cutting process leaves a residue which tends to create a mess. In the video, isopropyl alcohol is the cleaning agent; soap and water can be used as a substitute. Use a lint-free towel if possible; you can wipe it down afterwards with a microfiber cloth if need be.
You may notice that the Platform Deck has some surface scratches. This is normal. Delrin is an industrial plastic, and arrives from the manufacturer with some surface imperfections.
Standoffs
As an optional step, you can use a 4-40 tap to thread the holes for the standoffs. If you have a large number of RACECAR/Js to build, I have found this to be a real time saver.
The holes drilled in the platform deck are sized such that the standoffs are self-tapping. You can simply use a 3/16″ driver to screw the 1/4″ standoffs into the platform deck. If you tap the holes beforehand, assembly is easier. Here is a major WARNING: Do not over tighten the standoffs!! Aluminum is a soft metal, and your superhuman strength may shear the standoff. Removing the remains of a sheared standoff is very unfun. Don't ask me how I know that.
Eight 1/4″ 4-40 standoffs go on the top of the bottom platform deck. Here's how it should look after installation:
Next, install the five 2″ standoffs with 7/16″ 4-40 machine screws as shown below:
Do a test fit of the upper platform to make sure everything lines up correctly.
Electronic Speed Controller
RACECAR/J uses an open source electronic speed controller (ESC). Previously this was called a VESC, but because the name is now trademarked, each manufacturer uses a different name for their particular version of the hardware. The ESC takes two forms, and there are mounting hole patterns on the lower platform deck for either. For a traditional VESC derivative, 1/4″ standoffs are used to mount the ESC. The video covers installation of the non-traditional Enertion Boards FOCBOX. The FOCBOX is a more compact and better packaged unit for this application.
Here are the mounting points:
The ESC should be placed on the bottom of the platform. There are 4 through holes to mount the FOCBOX, which is held in place by M3x8 mm machine screws.
Once the FOCBOX is in place, add the extra long header to connect with the steering servo later.
Turn the Platform Deck over. Four pieces of Dual Lock tape, each about 2″ long, attach the USB Hub to the underside of the platform. First, attach two pieces of Dual Lock to the underside of the USB Hub. Second, lightly attach the mating Dual Lock tape to each. Remove the backing tape, and then place the hub on the platform. Then remove the hub, and make sure that the Dual Lock firmly adheres to the platform deck.
When finished, run a USB cable (Micro-B to USB A) cable from the FOCBOX underneath the USB Hub mount. The Dual Lock acts as a raceway to run the cable. Attach the USB Hub to the Dual Lock tape on the platform, and then plug the USB cable in to the hub.
Lidar Plate Mount
All that remains is to mount the lidar plate. If you are using the Hokuyo, you will want to use the 1/4″ aluminum plate. The plate is included in the full RACECAR/J BigMouth kit, and available separately in the RACECAR/J shop. This plate will help dissipate the heat which the lidar generates. This step is of course a place holder until the actual lidar itself is installed.
The lidar plate mounts with four 1″ standoffs using 7/16″ 4-40 machine screws.
Let’s say that you are tired of fiddling around with scale cars and the idea of being confined where your car drives. Break out of that rut with TheRoboticsClub.org during a long weekend in Brisbane, Australia!
In Part One and Part Two of the RACECAR/J Series, we built the chassis and platform decks. Now it’s time to put them together. Looky here:
Introduction
Building even a simple robot like RACECAR/J usually means several assembly steps. Now that we’ve built the chassis and the platform decks, it’s time to hook up the wiring, install the base electronics, and attach the two together.
If you know your final build configuration, you may want to modify this step to include some of the sensors and electronics that you have chosen. You should consider this step a test fit of the robot parts.
For consumables, we use some electrical tape, 4″ and 8″ zip ties, and some 3M Dual Lock tape.
For this build, the wire routing is meant mostly to keep any wires from contact with the drive train during testing. Once the installation of the rest of the sensors and electronics is complete, that is a good time to go over the final wire routing and attachment. In other words, consider this a throwaway attempt.
Installation
First, prepare the lower Platform Deck by installing the USB 3.0 Cable and power cable to the Amazon Basics USB Hub.
There are three motor wires and the steering servo cable which must be connected to the electronic speed controller.
Here's what it looks like after the motor cables and the steering servo cable are attached. Notice that a zip tie helps keep the motor wires in place:
For this particular electronic speed controller, we install a XT-60 to Traxxas Male converter cable and the battery for the chassis.
Place the Platform Decks on the body mounting points. Install the IMU using four 1/4″ 4-40 machine screws:
Then attach the Jetson Development Kit using four 1/4″ 4-40 machine screws:
The next step is to connect the USB cable for the IMU, connect the USB Hub to the Jetson, and connect the battery power cable to the Jetson:
Then attach the Lower Platform Deck to the chassis using four M3x10mm machine screws:
Usually I wait to attach the Platform Deck until after initial testing in case I need to access the wiring.
Finally, attach the Top Platform Deck to the 2″ standoffs using five 7/16″ 4-40 machine screws. There are a couple of strips of 3M Dual Lock added to hold a battery:
This is the initial configuration of the finished assembly.
Conclusion
The base of the robot is now assembled, and we’re ready to start loading software on to the Jetson to control it. Stay tuned for the next article where we install the ROS software and RACECAR packages!
The course MIT 6.S094: Deep Learning for Self-Driving Cars is currently in session. Course instructor Dr. Lex Fridman states that “Our goal is to release 1 lecture every other day until all 20 lectures and guest talks are out. It’s important to me to make this course free and open to everyone.”
Deep Learning for Self-Driving Cars
Here’s the course blurb:
This class is an introduction to the practice of deep learning through the applied theme of building a self-driving car. It is open to beginners and is designed for those who are new to machine learning, but it can also benefit advanced researchers in the field looking for a practical overview of deep learning methods and their application.
The best part is that the slides and lecture videos are online, usually available a few days after the lecture is given. Here’s the playlist:
If you’re interested in deep learning and self-driving cars, go to the website and check it out! It’s worth grabbing the slides as you watch the lectures.
This class takes an engineering approach to exploring possible research paths toward building human-level intelligence. The lectures will introduce our current understanding of computational intelligence and ways in which strong AI could possibly be achieved, with insights from deep learning, reinforcement learning, computational neuroscience, robotics, cognitive modeling, psychology, and more. Additional topics will include AI safety and ethics. Projects will seek to build intuition about the limitations of state-of-the-art machine learning approaches and how those limitations may be overcome. The course will include several guest talks. Listeners are welcome.
The course materials aren’t online just yet, I will update this post when available. However, you should go over and check out the website for good things to come!
The VESC and SEN-14001 are part of the RACECAR/J Base hardware configuration, the ZED and UST-10LX are additional.
The base RACECAR/J uses some new code for the Sparkfun IMU. Note that this is transition code in the RACECAR/J racecar Repository (the RacecarJTransitory branch). A different code base for the IMU is now under submission to the OSRF to become an official package. As such, we’ll publish updates as new code becomes available.
Installation Requirements
The software install is for the Jetson on the RACECAR/J. In the video a Jetson TX2 is used, connected to a monitor, keyboard, mouse and ethernet connection. The only difference in the code bases is the version of the ZED Camera driver. The version for the Jetson TX2 is in the folder ‘JetsonTX2’ and the version for the Jetson TX1 is in the folder ‘JetsonTX1’.
Note that version 2.2.1 of the ZED Camera driver installs to match the CUDA 8.0 package.
The current software stack runs on the Jetson TX1 and Jetson TX2 running L4T 28.1. Use JetPack 3.1 to install L4T 28.1, and at a minimum:
CUDA 8.0
cuDNN 6.0
OpenCV4Tegra 2.4.13
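After flashing with JetPack, the installed versions can be spot-checked from the command line. These are generic checks, not part of the install scripts, and the package names are assumptions based on a typical JetPack install:

```shell
# Check the CUDA toolkit version (expect "release 8.0" on L4T 28.1).
# nvcc lives in /usr/local/cuda/bin, which may not be on your PATH by default.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep release
else
  echo "nvcc not found - add /usr/local/cuda/bin to PATH"
fi
# Look for the cuDNN and OpenCV4Tegra packages installed by JetPack.
dpkg -l 2>/dev/null | grep -iE "libcudnn|libopencv4tegra" || echo "packages not found"
```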
Note that the next version of L4T is due to be released within a few weeks of this writing, at which time we'll start to update the code.
To begin, clone the installation repository and switch to the repository directory:
$ git clone https://github.com/RacecarJ/installRACECARJ.git
$ cd installRACECARJ
Next, install the appropriate ZED camera driver for the Jetson in use. If possible, plug the ZED camera into the RACECAR/J USB hub (in one of the FAST ports, which are the USB 3.0 ports). If the ZED is present during installation, the ZED driver installer will download the camera's calibration file. To install the Jetson TX2 driver, for example:
$ cd JetsonTX2
$ ./installZED-SDK-TX2.sh
Then, return to the installRACECARJ directory:
$ cd ..
We’re now ready to install the Robot Operating System (ROS) software and the rest of the RACECAR/J software stack. The installation script does the following:
L4T 28.1 does not have a cdc-acm driver, so the script installs a pre-built cdc-acm driver. The driver expects a stock kernel (4.4.38-tegra).
Because the electronic speed controller and the IMU both report as ttyACM, a udev rule is installed which names them vesc and imu respectively.
ROS is configured and ros-base is installed.
One of the dependencies is missing in the package specifications, so ros-kinetic-opencv3 is installed.
The MIT RACECAR packages are installed, which includes the ZED v2.2.x ROS wrapper.
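A udev rule of that kind looks roughly like the sketch below. The file name and the USB vendor/product IDs here are assumptions for illustration (0483:5740 is the common STM32 virtual COM port ID); check the install script for the actual rule it writes:

```
# /etc/udev/rules.d/99-racecar.rules (hypothetical file name and IDs)
# Give the VESC a stable /dev/vesc name based on its USB vendor/product ID
SUBSYSTEM=="tty", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="5740", SYMLINK+="vesc"
# Give the IMU a stable /dev/imu name
SUBSYSTEM=="tty", ATTRS{idVendor}=="1b4f", ATTRS{idProduct}=="9d0f", SYMLINK+="imu"
```

With a rule like this, the ROS launch files can refer to /dev/vesc and /dev/imu regardless of which ttyACM number each device enumerates as.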
To start the installation:
$ ./installMITRACECAR.sh
The directory '~/racecar-ws' is the default workspace directory name; it can be specified on the command line after the script name. Because there is such a large number of messages during installation, you may want to log everything to a file:
$ ./installMITRACECAR.sh |& tee softinstall.log
The log will be placed in the file ‘softinstall.log’ for review. This is useful in case there are installation issues.
ROS Environment Variables
Worth noting is that the scripts also set up two environment variables in the .bashrc, namely ROS_MASTER_URI and ROS_IP. These are placeholders; you should replace them with values appropriate to your network layout. Also, while the normal ROS setup.bash is sourced, you may want to source the devel/setup.bash of your workspace.
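A minimal sketch of what those .bashrc entries might look like, assuming the Jetson is both the ROS master and the client; the IP address is a placeholder for your own network:

```shell
# Hypothetical address - substitute the Jetson's actual IP on your network
export ROS_MASTER_URI=http://192.168.1.42:11311
export ROS_IP=192.168.1.42
# Source the workspace overlay (if present) instead of only the system setup.bash
if [ -f "$HOME/racecar-ws/devel/setup.bash" ]; then
  source "$HOME/racecar-ws/devel/setup.bash"
fi
```

If you later run ROS nodes on a base station as well, ROS_MASTER_URI on that machine should point at the Jetson's address while ROS_IP is the base station's own address.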
After installation, you should be able to run the teleoperation launch file if the VESC is programmed.
Conclusion
The installation of the entire ROS software stack for RACECAR/J can be a complicated affair. However, these installation scripts should make things fairly simple.
Have you experienced plugging in a USB device, and the Jetson not recognizing it? Sometimes all you need is the right USB to Serial Convert Kernel Module! Looky here:
Background
As we discovered in a previous article, Install ttyACM Module, sometimes an NVIDIA Jetson TX1 or Jetson TX2 running L4T 28.1 needs some help before it can properly talk to a USB device.
You might notice that a device is visible where you expect it, such as ttyACM or ttyUSB, but you cannot communicate with it. Some of your devices work properly, but others do not. Typically the cause is that the correct driver is not available for the device. For example, many Arduino clones use a QinHeng Electronics CH-340/341 USB-Serial Adapter. Another example is a Slamtec RPLIDAR, which requires Cygnal Integrated Products CP210x UART Bridge support. Both of these chipsets do basically the same thing: they convert USB signals to the serial signals that the device understands.
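A quick way to spot which chipset a device uses is to filter lsusb output for the two vendor IDs, 1a86 (QinHeng CH340/341) and 10c4 (Silicon Labs/Cygnal CP210x). The sample line below stands in for real lsusb output so the pattern can be seen in isolation:

```shell
# Match the two common USB-serial bridge chips by their USB vendor IDs.
pattern="1a86|10c4"
# On the Jetson you would run:  lsusb | grep -iE "$pattern"
# Sample lsusb line for a CH340-based Arduino clone:
echo "Bus 001 Device 004: ID 1a86:7523 QinHeng Electronics HL-340" | grep -iE "$pattern"
```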
The L4T 28.1 kernel has support for the FTDI USB to Serial converter, so that’s why some devices work. One question that gets asked a lot is “Why aren’t all of these drivers built into the stock kernel?”
The answer, depending on your viewpoint, is “They aren’t.” The Jetson is an embedded development kit, meant for integrating in to products. The basic idea is you have a minimal configuration, and then add what you need for your application. Extra drivers and modules take up valuable memory in such devices, so better to add them judiciously.
The other side of that coin is that many Jetson users come from a Linux desktop environment where everything but the kitchen sink is enabled out of the box. Therefore it's a little confusing that something simple like this is missing. After you figure out what the issue is, you need to compile the kernel modules. Let's just say that the first time through, building kernel modules on the Jetson is a little challenging. And it is frustrating for folks who just want to do something simple, like connect an Arduino to the Jetson.
What to do, what to do …
Installation
Well, you’re in luck! Two of the more common USB to Serial kernel modules, CH341 and CP210x, are now in a JetsonHacks Github repository, along with installer scripts.
Plug your USB device in, and figure out which driver you need using:
$ lsusb
If you see the device reporting as a CH340/CH341 device or CP210x device, you’re in luck!
First, clone the repository, and switch to the repository directory:
$ git clone https://github.com/jetsonhacks/installACMModule
$ cd installACMModule
For a CH341 device:
$ ./installCH341.sh
For a CP210x device:
$ ./installCP210x.sh
It takes a lot less time to install it than to explain it!
You will probably need to replug (unplug the device, then plug it back in) for the proper kernel module to load. You can examine the loaded modules using:
$ lsmod
Going forward!
In the next release of L4T, L4T 28.2, both the cdc-acm driver and the CP210x driver are part of the stock kernel. That should make life simpler.
Notes
The modules use 4.4.38-tegra as their kernel magic ("vermagic") string. This means that they are for use with the stock L4T 28.1 kernel. If you are using a different kernel, you will have issues.
Before we can run RACECAR/J, we need to program the electronic speed controller, called a VESC, for the motor and steering. Looky here:
Background
For RACECAR/J, we replace the stock TRAXXAS ESC with Vedder Electronic Speed Controller (VESC) version 4.12 compatible hardware. The major reason for the change is to gain full control of the robot at low speeds. The stock ESC puts the minimum vehicle speed at around 6 mph. Another reason is that the VESC is open source, which allows the curious to explore the motor controller implementation.
Note: VESC is now a registered trademark. There are several manufacturers who build VESC compatible hardware, but expect different names since the registration. There is a new version of VESC hardware (6.4), but for the purposes of this article, we will use the term VESC to indicate version 4.12 compatible hardware.
Architecturally, the VESC has a STM32 ARM Cortex processor. The STM32 runs ChibiOS, a real-time operating system. The default firmware flashed on the VESC-X is ‘Servo-in’, which allows a remote controller to set the motor speed. For the RACECAR/J application, the VESC servo port needs to be programmed as ‘Servo-out’, which allows commands to be sent to the robot steering servo.
Fortunately there is a compiled binary of the version of the VESC firmware that includes the Servo-out setting. We can flash the STM32 directly using a program called ‘bldc-tool’. BLDC is an acronym for BrushLess DC motor.
Once the bldc-tool loads the servo-out firmware on to the VESC, we then load a configuration file which matches the VESC configuration to control a TRAXXAS Velineon 3500 motor.
The full RACECAR/J Kit contains an Enertion Boards FOCBOX, VESC 4.12 compatible hardware. The FOCBOX is programmed before shipping from RACECAR/J. This article is useful if at some point you need to reprogram the FOCBOX.
L4T 28.1 does not have a driver for ACM USB devices, such as the VESC. This article covers how to install a cdc-acm module for the Jetson if you have not already installed one.
This article covers installing and running the BLDC Tool on a Jetson TX Dev Kit running L4T 28.1. To build on a Linux x86 machine (like the one used to flash the Jetson), use the installBLDCToolHost.sh script instead.
$ git clone https://github.com/racecarj/installBLDCTool
$ cd installBLDCTool
$ ./installBLDCToolJetson.sh
This will install the prerequisites for the build, build the bldc-tool from source code, and download the VESC firmware and RACECAR/J motor configuration files. The VESC firmware and RACECAR motor configuration files are downloaded from the Github RACECAR/J vesc-firmware repository.
The BLDC Tool application is in the ~/bldc-tool directory.
A USB cable communicates motor speed and steering angle information between the Jetson and the VESC. The TRAXXAS steering servo is wired to the VESC servo cable. The USB cable is also used to flash the firmware on the VESC.
The FOCBOX connects via USB 2.0 to the Jetson. The connector on the FOCBOX is micro-USB. The programming USB cable supplied with the Jetson may be used:
Other VESC 4.12 hardware connects via USB 2.0 using a mini-USB connector. For example:
Before starting the bldc-tool, connect the VESC to the vehicle battery:
Programming the VESC
$ cd ~/bldc-tool
$ ./BLDC_Tool
This will bring up the GUI to interact with the VESC. Before flashing the firmware, hit the ‘Connect’ button to communicate with the VESC. The current firmware revision will display in the lower right hand corner upon connection.
Use the 'Firmware' tab to go to the firmware flashing area. Select the firmware file, and upload it to the VESC. After the firmware is finished uploading, the VESC will disconnect and reboot. Important Note: You must select the correct version of firmware to match the VESC that you are using, otherwise damage and other bad things can happen. In the video, we flashed the firmware 'VESC_servout.bin' for version 4.12 of the hardware.
The firmware for hardware version 4.12 is in the file ~/vesc-firmware/firmware/VESC_servout.bin
After the firmware is done uploading, go to the ‘Motor Controller’ tab. ‘Connect’ to the VESC. The configuration is loaded from the ‘Load XML’ button. Once the configuration is loaded, write the configuration to the VESC.
The configuration file is located in ~/vesc-firmware/VESC-Configuration. The configuration file shown in the video is ‘FOCBOX_hw_30k_erpm.xml’ which is for the FOCBOX. If you are using a regular VESC, you will probably want to use the configuration file ‘VESC_30k_erpm.xml’.
Note: You should use the updated bldc configurations, which lower the min and max erpm values to avoid damaging the VESC when connected to the TRAXXAS motor.
Once the VESC has been flashed and configured, you are ready to start using the robot! All that is left is to connect a battery to the Jetson and USB hub.
Notes
In the video, a Jetson TX2 running L4T 28.1 is shown.
The next generation VESC website VESC Project supports both older hardware and the new 6.4 version of the VESC.
VESC is a registered trademark of Benjamin Vedder.
Conclusion
At this point, we have a working robot platform. In the next few articles we will be covering attaching the Jetson to a battery, and adding different sensors. Stay tuned!