
Build a Custom Kernel for the NVIDIA Jetson TK1


In this article, we’ll cover how to build a custom kernel for the NVIDIA Jetson TK1. Looky here:

Background and Motivation

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. It is also helpful to be able to read shell scripts to understand the steps described.

As an embedded development kit, the Jetson TK1 ships with a bare-bones approach to device support. For the most part, the L4T kernel supports a minimal set of device drivers. This is on purpose, as this is a development platform for an embedded device. The idea is that the developer adds only the device drivers and services needed for the product being developed.

Here’s the rub: the Jetson TK1 is powerful enough to be a desktop computer. Desktop computers usually support a wide range of devices “out of the box”. Desktop computers also support “plug and play” peripheral devices. Because the Jetson spans both of these computing paradigms, new users can have different expectations for device support out of the box.

If you’re a desktop user relatively new to the Jetson TK1, a great alternative is to install the Grinch Kernel. The Grinch kernel replaces the stock kernel to support a wide variety of different devices and services.

On the other hand, if you’re building a specialized application then you may want to take a more minimal approach. This is the case discussed in this article. We are building a kernel with only the device drivers needed for one application. In this case we are starting to build a kernel for a robotic race car.

Installation

Note: The screen cast was recorded directly from a Jetson TK1. All commands being executed are running on the TK1.

Building a kernel for the Jetson TK1 is straightforward. It is good practice to start with a fresh flash of a stock kernel when building a new kernel from scratch. In the video above, the Jetson TK1 was flashed with L4T 21.4 using JetPack 2.2.

Get the Kernel Source

Building the kernel consists of a few steps. First, gather the source code for the kernel. The sources for L4T 21.4 are available from the NVIDIA embedded developers website. The sources are delivered in compressed form, so the next step is to untar them into the /usr/src/ directory. Once the sources are expanded, the next step is to derive what is called the .config file. The .config file describes which parts of the kernel source code should be included when building, and which modules and drivers to include. The .config file also specifies whether each module is built into the kernel itself or as an external file. The advantage of a module being ‘external’ is that it can easily be changed or upgraded without having to recompile the entire kernel.
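
Done by hand, those steps look roughly like the following sketch (the archive name kernel_src.tbz2 is an assumption; use whatever file name the NVIDIA download actually provides):

# extract the kernel sources into /usr/src (assumed archive name and location)
$ cd /usr/src
$ sudo tar -xjf ~/Downloads/kernel_src.tbz2
$ cd kernel
# derive the .config from the running kernel's configuration
$ sudo bash -c 'zcat /proc/config.gz > .config'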

The steps above are in a script on the JetsonHacks Github account in a repository named buildJetsonTK1Kernel. You can get the repository:

$ git clone https://github.com/jetsonhacks/buildJetsonTK1Kernel.git
$ cd buildJetsonTK1Kernel

You can execute the script to get the sources and open an editor on the configuration file:

$ ./installKernelSources.sh

Edit the Kernel Configuration

Next, edit the configuration file. The number of choices in the configuration file is overwhelming, so it helps to have a good idea where the desired option resides. The configuration editor has a find function which, while rather limited, can be helpful. In the above video, we enable the FTDI driver and set it to be built as an external module. Then the UVC driver is set to be built as an external module and patched to support an Intel RealSense camera.
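
For reference, a driver configured as an external module shows up in the saved .config with an ‘=m’ value. The two drivers mentioned above correspond to symbols along these lines (symbol names can vary between kernel versions, so confirm them with the editor’s find function):

# FTDI USB-to-serial driver built as an external module
CONFIG_USB_SERIAL_FTDI_SIO=m
# UVC (USB Video Class) camera driver built as an external module
CONFIG_USB_VIDEO_CLASS=m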

Local Version

There is a local version number which identifies the kernel build. On a stock kernel, you can see this by executing:

$ uname -r

The local version is the designation appended to the kernel version; for example, the stock kernel is 3.10.40-gdacac96, where ‘-gdacac96’ is the local version. Modules use the kernel version to determine compatibility. One issue people commonly hit the first time they build a module is that the module will not load because the version it was built against does not match the running kernel. That usually turns out to be because the local version was not set to match the kernel being used.
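
In the .config file this is the CONFIG_LOCALVERSION entry; to match the stock kernel above it would look like this (check the suffix on your own system with uname -r first):

# local version string appended to the kernel release
CONFIG_LOCALVERSION="-gdacac96"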

When done configuring, make sure to save the file!

Prepare and Make

Once the configuration is set, then it is time to build the kernel. There is a convenience script for this purpose:

$ ./buildKernel.sh

The process to build the kernel is surprisingly easy. First switch over to the kernel directory, prepare, and then make:

$ cd /usr/src/kernel
$ make prepare
$ make modules_prepare
$ make -j4
$ make modules
$ make modules_install

The modules_install command copies any modules that were built into the appropriate subdirectory of /lib/modules.
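
As a quick sanity check (assuming the local version matches the running kernel, and that the FTDI driver module is named ftdi_sio, as it is in mainline kernels), you can confirm the module landed where modprobe expects it, and load it once the new kernel is installed and booted:

# look for the freshly installed module under the current kernel release
$ find /lib/modules/$(uname -r) -name "ftdi_sio.ko"
# load the module (after rebooting into the new kernel)
$ sudo modprobe ftdi_sio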

Copy Boot Image

There are a few options at this point. You can save the kernel to the PC host and have a kernel that you can flash to a Jetson TK1. In our case, we copy the zImage file over to the /boot directory, which effectively makes it the new kernel. I do suggest that you save the .config file that you built so that if things go south you don’t have to start entirely from scratch. Of course, if things don’t work after copying the zImage file and rebooting, you can always flash from the host again.
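
A simple way to keep that copy (the destination name here is just a suggestion):

# stash the configuration outside the kernel tree
$ cp /usr/src/kernel/.config ~/kernel-config.backup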

The idea that we’re working on here is to build up a kernel for a specific project. Once we’re happy with everything, we can clone the entire image and save it to a host machine.

We copy the zImage to the boot directory:

$ ./copyzImage.sh

which basically executes:

$ cd /usr/src/kernel
$ sudo cp arch/arm/boot/zImage /boot/zImage
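
Note that the second command overwrites the stock image. If you want an extra safety net, one option is to back it up first (the .stock suffix is just a naming choice); as long as the new kernel still boots, this gives you a quick way back without reflashing:

# keep a copy of the stock kernel image before replacing it
$ sudo cp /boot/zImage /boot/zImage.stock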

After copying the image, reboot the Jetson TK1 and the changes will take effect.

Conclusion

This certainly is not an exhaustive explanation of building kernels. This particular subject runs deep, as they say. Different environments can be much more challenging, such as cross compiling kernels. There can also be other circumstances, such as architecture differences, as on the Jetson TX1. With L4T 23.X on the TX1, the underlying machine architecture is 64-bit, but the user space is 32-bit. This requires a lot of gyrations to get working, since the kernel cannot be compiled on the Jetson TX1 itself.

With L4T 24.X, the kernel can be compiled natively on the Jetson TX1 itself, making life much more tolerable.

For our purposes, having a way to build a custom kernel or add some modules here and there is a good tool to have in the tool belt.

Note

After the first script gathers the source for the kernel, it generates the .config file:

$ zcat /proc/config.gz > .config

The period/dot at the start of the .config file name marks it as a hidden file. In other words, it won’t show up in a normal file browse or file listing.
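
To see it in a directory listing, tell ls to show hidden files:

# -a lists entries whose names begin with a dot
$ ls -a /usr/src/kernel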

The command gives the default kernel configuration file. The file /proc/config.gz is generated from a kernel option which is by default turned on in the L4T kernels. If you are running a differently configured or modified kernel, you may want to generate the .config file in a different manner. One way to do this is to use:

$ make oldconfig

which will try to build a new .config file from the existing settings. However, there’s more magic to this than one prescription can cure. You will probably have to do some research to get this to work properly.

If for some reason you don’t have the default .config available, then you can generate it on the TK1 from the /usr/src/kernel directory:

$ make tegra12_defconfig

The Jetson TK1 is in the tegra12x series, or tegra124.

If you are on a Jetson TX1:

$ make tegra21_defconfig

The Jetson TX1 is in the tegra21x series, or tegra210.

Remember that you still need to set CONFIG_LOCALVERSION (the suffix to “uname -r”, e.g., “-gdacac96”), as it is not stored in /proc/config.gz.

Thanks to linuxdev in the Jetson forums for the last few tidbits.


