
Jetson TK1 Interface Boards Update


EasyTK1_IO

Mauro Soligo recently announced that EasyTK1_IO is now available on Katodo.com in kit form. Several roboticists in the Jetson community are using the EasyTK1_IO to interface their Jetsons with their robots, including Walter Lucetti over at Myzharbot and Raffaello Bonghi of Officine Robotiche, who is working on his RUDE robot.

EasyTK1_IO

More information about the EasyTK1_IO is available on Plutarcorobot.com.

Jetduino

You might remember from our last article Jetduino Jetson TK1 Interface Board Update that Dr. David Cofer over at NeuroRoboticTech posted results on the first Jetduino prototype.

Jetduino V2
Jetduino Mount on Jetson TK1

Since then, there have been two updates posted on the NeuroRobotic Technologies blog. The first, Jetduino V0.2 test results, goes over some of the issues encountered in the second prototype, along with some tips and tricks for hand assembling the boards. Well worth the read.

The second article, Jetduino pricing and board options, is another interesting read which discusses a couple of strategies for bringing the Jetduino boards to market. I found the pricing and feature trade-off discussion fascinating, as it gives a feel for what it takes to bring a low-volume hardware run to market. Both from a geek view and from a small business point of view, take a few minutes and read the articles; it will be worth your time.

The post Jetson TK1 Interface Boards Update appeared first on JetsonHacks.


DJI Matrice Part 5 – Gimbal, Camera and GPS Attachment


The DJI Matrice 100 is a quadcopter for developers. The drone allows developers to customize the flight platform using the DJI SDK. Today we attach the DJI Zenmuse X3 Gimbal, Camera and GPS to the airframe. Looky here:

Background

Gimbal and Camera

The Zenmuse X3 camera has a 1/2.3" CMOS sensor with 12.4 megapixel resolution; it can take 12 MP stills and 4K video and record them to an onboard micro SD card. The Zenmuse X3 is mounted on a 3-axis gimbal, which helps the camera produce smooth and stable footage under many different flight conditions.

The camera has a 20mm focal length lens with a 94 degree FOV. The gimbal and camera are attached to the Matrice using a Gimbal Installation Kit, which includes a mounting plate and the proper cables to connect the X3 to the airframe and flight controller. Note: The Manifold has supplemental cables used to attach the X3 to the Manifold, as shown in the above video.

For mounting the gimbal, this video provided useful instruction:

GPS System

In a previous video, one of the issues encountered was building the GPS system mast. Andrew Baker suggested using a two-part epoxy to attach the mast to its associated parts. Thanks for the tip! The mount and antenna base were attached to the carbon fiber mast using the two-part epoxy. Before applying the epoxy, each end of the mast was roughed up with sandpaper in the hope of creating a better joint. After assembly, the mast was set aside for 24 hours for the epoxy to cure. The mast assembly was mounted on the Matrice as seen in the first video on this page, and then the rest of the GPS system was installed and wired.

The GPS system was mounted using the instructions in this video:

In general, the videos produced by DJI have been very helpful in the assembly process. There are some differences in the Matrice model used in the video versus the one that is being assembled here (which is expected as production modifications happen), but for the most part putting the aircraft together has been straightforward.

Conclusion

This completes assembly of the aircraft. At this point it’s time to start work on the software installation and development. The embedded computer, the Tegra K1 based DJI Manifold, has been installed on the aircraft, so we’re ready to start development! Looky here:

The post DJI Matrice Part 5 – Gimbal, Camera and GPS Attachment appeared first on JetsonHacks.

DJI Matrice Part 6 – Matrice Simulator, Software and Manifold Example


The DJI Matrice 100 is a quadcopter for developers. The drone allows developers to customize the flight platform using the DJI SDK. Today we go over some of the DJI software, and build a provided Manifold example. Looky here:

Background

DJI provides several tools for working with the Matrice 100. There are the DJI Assistant 2, Guidance Assistant Software, the Flight Simulator, and the Manifold development kit. The DJI Assistant 2, Guidance Assistant and Flight Simulator for the Matrice run on Windows. In the above video, the demonstration was performed on a Windows 8.1 installation.

One of the things that DJI also provides is called the DJI WIN Driver Installer, which is basically a USB driver that allows the Matrice to talk to the PC over USB. There is an installation issue when installing this package on Windows 8.1. Back in the time frame when Windows 8 was being released, there was a major push by Microsoft to make Windows more secure. There were several different changes to the OS which provided a more “secure” environment. One of these changes was requiring drivers, such as the DJI WIN Driver, to be signed and 64 bit. Unfortunately the WIN Driver does not meet these criteria.

There is a workaround for installing the driver, which involves some Windows magic that I will not cover here. Basically you reconfigure the PC during the booting procedure to tell it that it’s all right to install drivers that don’t meet the more stringent current standard.

This is all fine and well, but unfortunately there’s no way to know this when you naively install the driver for the first time on a Windows 8+ box. The installation simply says ‘Installation Failed’, with no further information given. Let’s just say there’s a lot of guessing involved as to what to do to rectify the issue. I did not see any good information in the DJI forums about the issue, but I may have just overlooked it.

But here’s the thing: Windows is much more secure now with the changes. At the same time, the actual usability of the operating system has slipped well below what a reasonable person should be able to expect. Not a knock, MS has a lot of people working on such issues, but at the end of the day the PC has to be able to help people accomplish things other than making sure that the PC is ‘secure’.

Software Installation and Updates

In software systems as complex as the Matrice, there are usually firmware updates required for the flight controller and other onboard systems. These updates are done through the DJI Assistant 2 software. The Intelligent Batteries which power the Matrice are also updated through the DJI Assistant software. The Guidance system firmware is updated through the Guidance Assistant.

The Manifold, which is NVIDIA Tegra based, is maintained in a different manner as the Manifold is a computer in its own right. Note that upon receiving the Manifold, a system image backup was made and stored away for safe keeping.

Flight Simulator

For the developer, simulators are very important tools when dealing with robots. DJI has a flight simulator which allows the PC to connect to the Matrice flight controller with a USB cable. When in simulation mode, the flight simulator mimics the actions that are being run through the physical flight controller on the quadcopter. This feature will prove very useful when developing code, as it allows the code to be run through the simulator to predict how the quadcopter will react.

Manifold Demo Example

Also in the video above, a small demo provided with the Manifold is compiled and executed. The program is simple: it takes video from the Zenmuse X3 camera and displays it on a monitor attached to the Manifold's HDMI port.

DJI Matrice Software Simulator

The post DJI Matrice Part 6 – Matrice Simulator, Software and Manifold Example appeared first on JetsonHacks.

DJI Matrice Part 7 – First Flight


The DJI Matrice 100 is a quadcopter for developers. The drone allows developers to customize the flight platform using the DJI SDK. Time to take the first flight to begin testing! Looky here:

Background

In preparation for flying the Matrice 100, I spent about 20 hours learning how to fly drones by practicing with “toy” quadcopters. I bought a couple; the one I spent the most time working with was the SYMA X5C Quadcopter (remember to get extra batteries, I used these). For the most part, learning to fly the quadcopters was a typical experience. At first, nothing makes much sense and there are the inevitable crashes and such. After a few hours, it kinda sorta makes sense, and then after about 15 hours or so you can impress the dogs at the park with your mad skillz.

I will note here that there appears to be a mode on the SYMA designed to fly the quad into a tree. If you happen to have the propeller guards on, there is a good chance that it will get stuck in the tree. Don’t ask me how I know that. Stinkin’ tree monsters. If you fly outside, don’t put the prop guards on. When the SYMA is in ‘Mode 2’ on the controller, I never had any problems as long as I stayed within range of the transmitter. Once the SYMA gets out of range of the transmitter, it falls out of the sky in most cases.

Flying

In comparison to flying the toy quadcopters, flying the Matrice is quite a different experience. First, the toys are affected by any type of wind, making them much more difficult to control. The Matrice, being much heavier, doesn’t suffer from that issue as badly. The day that we shot the video, it was too windy to fly the toy, whereas the Matrice didn’t seem to mind all that much.

Second, the Matrice is much more serious. I have to admit it’s a little intimidating, but that helps in maintaining a healthy respect for it. With the Matrice, there is a preflight checklist, which includes things like calibrating the compass, making sure that the battery is well seated in its compartment, and so on. As you saw in the video, even though checking that the propellers are securely fastened is on the checklist, it is really important to be absolutely sure as you prepare to fly. On the toy, of course, you basically turn it on and off you go.

Another big difference between the Matrice and the toy quadcopter is the way the aircraft hovers. On the Matrice, setting the throttle causes the aircraft to hover at a constant altitude. The Matrice uses onboard sensors to maintain that altitude, so the pilot doesn’t have to be quite as attentive to throttle application. On the toy, this does not happen automatically. Hover is maintained by judiciously juggling the sticks to control the throttle, yaw, pitch, and roll. So in that sense, the Matrice is easier to fly, along with all of its other “big boy” features like Return To Home (RTH), which allows the Matrice to return and land at a preset home location autonomously.

Of course, there’s also a post-flight check on the Matrice, which the toy doesn’t need, where all of the different accessories are dismounted and packed away. In addition, when piloting an aircraft like the Matrice, a flight log needs to be kept and filled out.

Also, landing the aircraft as seen in the video is a little different. On one of the toy quadcopters, you can just set the throttle to off and you’re done. On the Matrice, you saw in the video that I tried to ‘disarm’ it right after it landed, and in that act almost did one of the dreaded flips. Pilot error. So the after-landing sequence is different.

There’s not too much to say about the actual flying itself. For my first flights on the Matrice, I will not work much with the onboard camera as I try to become acclimated to piloting the aircraft. Some of my more serious friends who fly these types of devices for Hollywood films actually fly them in tandem. One person is the pilot of the aircraft, and the second person acts as a cameraman using a second remote control. The pilot is responsible for getting the aircraft to the desired place; the cameraman is responsible for getting the shot.

I will say that one thing that surprised me was the quality of the video footage shot by the Matrice Zenmuse X3 camera. It was a pretty windy day and the platform moved around quite a bit while it was up in the air, yet the gimbal system and shock absorbers made the shots look very smooth. The crane-type shots were surprisingly good.

In any case, it’s a good feeling to know that the thing can actually fly. This makes it easier to imagine it autonomously flying some day.

Special Thanks

Special thanks to Matthew Graham for filming the Matrice while I piloted the vehicle.

Matrice First Flight

The post DJI Matrice Part 7 – First Flight appeared first on JetsonHacks.

See you at GTC 2016


I’ll be in San Jose next week at GTC 2016. Hope to see you there!

I will be bringing the Matrice Tegra Drone and the Jetson Racecar that we’ve been working on over the last few months.

The hall will start filling up soon for a bunch of good lectures, seminars and exhibits:

GTC 2015 in San Jose, CA

The post See you at GTC 2016 appeared first on JetsonHacks.

GTC 2016 – Flying Robot Demo


GTC 2016 in San Jose last week was a great technical conference. I will be writing a few articles about some of the interesting things that I saw there. I want to thank everyone who stopped by and said hello while I was in the NVIDIA booth, it was great getting a chance to chat with some of the readers and viewers.

Most of the technical talks from the conference are online (or soon will be), which provides a wonderful opportunity to see what leaders in several fields are using GPUs for.

In this article, I will cover a demonstration that I showed. Looky here:

Background

If you’ve been following JetsonHacks for the past few weeks, you noticed that there were several articles about the DJI Matrice, which is a drone for developers. By attaching a product called Manifold, a Tegra K1 computer similar to the Jetson TK1, the developer can gather information from the drone sensors, such as the gimbaled camera, and use that information to control the drone through an onboard flight controller. A simple UART connects the Manifold to the flight controller, while the video feed from the camera comes through a special USB port which also allows pass through to the flight controller. This arrangement allows the flight controller to broadcast the video back to a ground station. In this manner, the drone can act autonomously.

Demonstration

The actual demonstration is a simplistic interpretation of codes that the drone captures through the camera. The codes are known as AprilTags, or more formally as fiducial markers in computer science lingo. Each one of the tags represents a different code. For example, in the demonstration, when AprilTag code 12 was shown to the drone, the drone would take off; code 20 would move the drone a little to the right. In initial testing in the outdoor Drone Zone cage, code 21 apparently tells the drone to fly into the protective netting. Looky here:

In some sense this is to be expected. The drone was in an open field for the trial runs, so it’s not surprising that it moved a little too far in a more enclosed space. Here’s the initial test for reference:

Development

Being a developer drone, the Matrice can be configured in many different ways. There are many mounting options for different components, and in some sense putting the Matrice together is like completing a jigsaw puzzle. The project started about three weeks before GTC.

About 14 days went by assembling the aircraft and waiting for the different bits and pieces to arrive (as well as making way too many trips to Fry’s Electronics to pick up various miscellaneous bits for the project).

DJI has several different software SDKs for the Matrice, including an Onboard SDK for the Manifold (there is also a version which supports ROS, Robot Operating System). There are also versions for talking to the drone through the R/C controller with different mobile devices, both iOS and Android. For the demonstration, the Onboard SDK was used.

After the Matrice was assembled and once the Manifold computer and camera were in order, work on the demonstration software started. The demo software was put together in about 5 days, with another couple of days spent debugging and testing.

The actual program itself is rather simple. Video is taken from the camera, which comes in as NV12, and is converted into a form which OpenCV can process. There is an open source AprilTags recognizer written in C++ on Github which is a port from the original Java program written by Professor Edwin Olson of the University of Michigan. Connecting the library code together and getting it to work was straightforward, probably the easiest part of the project.

The rest of the software development, not so much. As you might have guessed, getting the video from the camera and then changing the format was the first hurdle. Fortunately some digging found some example code to help with the process.
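For reference, here is a minimal sketch of that conversion step using OpenCV. The callback name and buffer handling are illustrative assumptions rather than the actual demo code; the cv::cvtColor conversion from NV12 is the standard OpenCV call.

#include <opencv2/opencv.hpp>

// Illustrative sketch: 'data' points to one NV12 frame from the camera.
// NV12 is a full-resolution Y plane followed by interleaved half-resolution
// U/V samples, so OpenCV treats it as a single-channel Mat with height * 3/2 rows.
void onFrame(const unsigned char* data, int width, int height) {
    cv::Mat nv12(height + height / 2, width, CV_8UC1, const_cast<unsigned char*>(data));

    cv::Mat bgr;
    cv::cvtColor(nv12, bgr, cv::COLOR_YUV2BGR_NV12);  // now something OpenCV can work with

    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);      // the tag detector wants grayscale

    // ... hand 'gray' to the AprilTags detector here ...
}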

The next hurdle was much more complex. To actually use the APIs for the flight controller, the developer needs to know something about how the quadcopter actually flies: things such as controlling the pitch, the roll and the yaw of the aircraft, controlling the throttle, little things like that. I had never flown a quadcopter before the start of the project, so that part was a little challenging. However, I understood from 3D graphics what the concepts were all about. Still, there are some nuances that were lost on me, such as when the Manifold needed to take flight control when executing commands. Some of the commands needed the Manifold to have control, others did not. I’m sure a developer more experienced in the domain would have found it straightforward.

There is also a concept of Modes which I found confusing. The R/C controller has a three-way switch on the left front labeled ‘P’, ‘A’, and ‘F’. For the Manifold to take control, the switch needs to be in the ‘F’ position. However, the actual Onboard SDK has the same modes that it controls, which when thinking about it made my head hurt. Also, with DJI being a Chinese company, the documentation has some strange terms for which the meanings are not immediately obvious. There are some examples included with the SDK, but the examples seem oriented towards people who already know what they are doing. I knew what I wanted the drone to do, I just didn’t have the mental map of how to tell the flight controller to do it.

Once the program was debugged for the most part and tested in the DJI Simulator (a very nice way to see what is going to happen in the field) there were a few more obstacles to overcome.

A quick note on the DJI Simulator. For the Matrice, the simulator only runs on a Windows box as of the time of this writing. I happen to have a Windows 8.1 box. When I tried to install the DJI Driver to talk to the Matrice over USB, installation always failed, with the helpful error message “Installation Failed”. No reason, no error code, just failed.

Having failed a lot in life, I found this easy to accept. But I also knew that if I persevered long enough, that square peg was going through that round hole come hell or high water. It turns out that the DJI driver was written as an ‘untrusted’ 32 bit driver. Starting in Windows 8, the kingdom banned any such behavior. Only trusted 64 bit drivers can be installed on the Windows boxen. As in most things computers, special incantations can circumvent the rules of the kingdom, so the rogue DJI Driver was installed and the DJI Simulator was able to come to life. You might guess that finding this information out and implementing the solution took about as long as reading this paragraph. Your guess is terribly wrong.

The Manifold talks to the Matrice through both USB and UARTs which require device permissions. This means that the program must have permissions set when the Manifold boots. Normally the command to execute the app would be placed in /etc/rc.local, but there was a rub.

The application is built in Qt using OpenGL! Normally that’s not really a robot thing; one would run a much simpler user interface through a console. However, in this case a graphical user interface was used to expedite development and testing. It was easier to have a GUI to manipulate and try to replicate the actions in the application.

So what’s the problem with having it run as a GUI? First, in /etc/rc.local one is not assured that X11 (the base for the graphics system) has started. If X11 hasn’t started, the GUI-based program can’t run. A quick solution for this was to place the app in the ‘Startup Applications’ area.

The second issue is not as obvious. Once you unplug the HDMI monitor from a Manifold (or other X11 systems such as the Jetson), X11 sees at boot time that there is no monitor, and thus no need for graphics, so X11 does not load. That’s all fine and well most of the time, but in this case I need graphics to run my app!

The workaround in my case was to buy an HDMI display emulator:

Headless Display Emulator @ Amazon

Not surprisingly, I had forgotten that the actual connector on the Manifold is mini-HDMI, so it was back in the car to Fry’s … They didn’t have quite what I was looking for, so I ordered this collection of goodness from Amazon:

The reason I’m telling you this is that you may find yourself in such a situation one day …

Note: There is a way to set the X11 config file to do this in software on the Tegra boards; I’ll write up a separate article on that tidbit. Still, this can be a convenient solution.

Anyway, after those niggles were behind us, it was time to go to the park for the first test. Things mostly worked. However, I will reveal that there was one nasty bug that took me a while to hunt down.

In my haste to build some tables, I neglected to put range checking in the lookup portion of the code. I know better, you know better, and yet still … I must say I admire that C++ does exactly what you tell it to do, even if it is a jump to a nonexistent memory location that causes the application to crash hard. Even better, when the app crashes and the app is connected to a robot, good times! Combine this with the fact that the app needs sudo permissions to actually run, making it very difficult to run under the traditional debugger/Qt IDE …
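In hindsight, the fix is a one-line guard. Here is a minimal sketch of the kind of range check that was missing; the table name, its size, and the action values are made up for illustration and are not the actual demo code.

enum Action { NONE, TAKE_OFF, LAND, MOVE_LEFT, MOVE_RIGHT };

// Hypothetical table mapping AprilTag IDs to drone actions; most entries stay NONE.
static Action kActionForTag[64] = { /* filled in elsewhere */ };

// The missing guard: an unrecognized or out-of-range tag ID must not be used
// to index the table, otherwise the lookup walks right off the end of the array.
Action actionForTag(int tagId) {
    const int tableSize = sizeof(kActionForTag) / sizeof(kActionForTag[0]);
    if (tagId < 0 || tagId >= tableSize)
        return NONE;
    return kActionForTag[tagId];
}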

I will just say that I may have said a dirty word. Or maybe a couple. For like four hours straight, trying to find the issue. But at the end of the development period it mostly worked, as the video above showed. At the show, we spent most of the time in the NVIDIA booth hooked up to the simulator:

Flying Robot in NVIDIA booth

The lights at the show made everything look nice, and it was great talking to the readers that stopped by and said hello!

The post GTC 2016 – Flying Robot Demo appeared first on JetsonHacks.

GTC 2016 – Jetson Community Robots


At GTC 2016 in early April of 2016, the Jetson community showed off several different robots that people have been working on over the last year or so. All the way from Italy, Walter Lucetti and Raffaello Bonghi brought Myzharbot and DUDE, two tracked robots. Looky here:

The robots each have a Jetson TX1 as a controller, and utilize a Stereolabs ZED camera as a 3D imaging sensor. Myzharbot has additional ultrasonic sensors mounted at the front to help detect objects which are very close to the robot.

As I was busy in the NVIDIA booth this year, I didn’t get as much video footage of all the different devices as I would have liked. However, I did get a chance to talk with some folks about their projects.

An interesting robot on display was TURBO2, an outdoor autonomous rover created by Dustin Franklin.

TURBO2 with cover removed

Here’s a picture of Dustin at SXSW earlier in the year with TURBO2:


One of the interesting things about TURBO2 is that it utilizes deep reinforcement learning to map its environment and plan navigation. Best of all, the source code for the robot is available on Github, as is the Torch-based rovernet code. Certainly worth checking out:

TURBO2 Code and BOM
Rovernet Code

The TURBO2 incorporates a Stereolabs ZED camera. In addition, there is a RP-Lidar unit mounted on the top of the robot to better help map the surrounding area, which in turn leads to better data for SLAM algorithms. The idea is that you place the robot in an area, and it basically learns about the environment and how to navigate around. A very cool idea.

The post GTC 2016 – Jetson Community Robots appeared first on JetsonHacks.

GTC 2016 – Drones


At GTC 2016 in early April of 2016, there were several different rotorcraft being shown that incorporate a Jetson Development Kit or NVIDIA Tegra SoC. The following are in no particular order.

IFM Technologies

One of my favorites was a prototype from startup IFM Technologies. Looky here:

The prototype incorporates a Jetson TK1 Dev Kit, which allows the IFM to fly autonomously using vision processing. This enables high levels of autonomy in indoor environments where GPS is unavailable.

Aerialtronics

On the other end of the drone spectrum, Aerialtronics brought an industrial outdoor drone called the Altura Zenith. The Altura Zenith can reach speeds of 55 miles per hour and can carry a 5 kilogram payload. The Altura uses an onboard Jetson to interface with a dual-band infrared fusion pod for use in search and rescue missions.

Aerialtronics Altura Zenith

AerialGuard

AerialGuard is a startup which enables unmanned vehicles to navigate in places with no prior knowledge of the environment.

AerialGuard

The drone utilizes a Jetson to process the sensor data from a Stereolabs ZED camera.

Stereolabs

Speaking of Stereolabs, they were showing a Jetson TX1 mounted on a DJI Matrice drone connected to a Stereolabs ZED camera. This combination was creating a 3D computer model of objects in the environment while the drone was being flown.

Stereolabs

Stereolabs figured prominently in many of the different robotics projects at GTC, in part because of the quality of their product, and in part because of the optimized code written specifically for Jetson and the NVIDIA Tegra processors.

Percepto

Percepto was showing their PerceptoCore module which enables onboard computer vision abilities for consumer and commercial drones. PerceptoCore is built around a Tegra K1 SoC.

PerceptoCore

Prox Dynamics

Prox Dynamics discussed the challenges of integrating a Tegra K1 chip into their 18 gram nano-drone. A very interesting talk was delivered at the conference, which included one of the Black Hornets being flown about the room during the talk. Extremely impressive!

Prox Dynamics BlackHornet

Conclusion

This year appeared to be the start of NVIDIA’s entry into the drone market, as the Tegra processors are beginning to show up in different vendors’ products. The popularity of flying robots should increase significantly as their ‘electronic brains’ grow in capacity and capability.

Photos courtesy of Apollo Timbers of Second Robotics LLC.

The post GTC 2016 – Drones appeared first on JetsonHacks.


Jetson – Allow Graphics without HDMI


On a recent project with a rushed development schedule, there was a little issue. The workaround of booting the Jetson into a graphics desktop without a monitor attached is worth sharing.

Background

As is the case with a lot of projects that involve visual processing, the example framework for this project was written using a Graphical User Interface, or GUI. In the rush of development, this framework was used as the structure for building a demonstration application.

In this particular demo (Flying Robot Demo), an image stream is acquired from a camera sensor and each image is examined for the presence of a fiducial marker. The fiducial marker in this case is an AprilTag, which looks like a big QR code. If a fiducial marker is recognized, then commands are sent out to the robot to control its position corresponding to the marker’s meaning. For example, a marker recognized as code 20 might tell the robot to move left 2 meters. A very simplistic concept.

Having a graphics program, in this case one written using the Qt framework, is useful for debugging. The camera stream is displayed on the monitor, and any recognized markers are highlighted by drawing a colored rectangle over the marker on the screen.
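Here is a rough sketch of that overlay step. The corner list is an assumption about what the tag detector hands back; the OpenCV drawing calls are standard.

#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative only: 'corners' would be the four image-space corners of a
// recognized marker as reported by the tag detector.
void highlightMarker(cv::Mat& frame, const std::vector<cv::Point>& corners) {
    if (corners.empty())
        return;
    // Draw a green rectangle over the detected marker so the developer can see
    // what the program recognized; the robot itself never looks at this overlay.
    cv::rectangle(frame, cv::boundingRect(corners), cv::Scalar(0, 255, 0), 3);
}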

Now the computer doesn’t really care about this visual representation; it’s there purely to help the person writing the program. But here’s the thing: as a developer working with a tight deadline you need all the help you can get. The easiest thing to do in this case was to use the example framework and build on that.

Once the program was ready to be tested on the robot standalone, the demo program was marked as a startup application using the Startup Applications app through the desktop. In this case the permissions were fiddled with so that the program could talk to the USB camera. Everything was tested once more, to make sure everything worked on startup as expected.

Now this being a flying robot and all, you probably have guessed that the robot does not carry an attached HDMI monitor on board. So when the program was to the point that it was working on the bench and ready to be tested standalone, the monitor and external peripherals were detached and everything booted up. What happened next? Nothing. The robot sat there like a big fat pig and ignored any attempt at recognizing markers. Of course the natural inclination was to hook everything back up to make sure everything works. It does. Disconnect, startup. Nothing.

What Happened?

What took a little while to realize was that if there is no monitor attached to the Jetson, then the X11 graphics manager does not load, which in turn means that the desktop manager does not load. If the desktop manager does not load, the startup applications do not load. Then what happens? In this case, nothing. It just sits there silently like a big fat pig dog.

For most robot applications, programs are written as console applications. This allows for easy communication with the robot through SSH from another computer. Unfortunately in this case, the framework for controlling the robot is commingled with the graphics user interface. Given the short amount of development time available, it was unlikely that the program could be rewritten as a console app to meet the deadline.

Solution

After Googling like a chicken with its head cut off, the solution that was implemented for the demo was based on attaching an HDMI monitor emulator to the Jetson. Here’s the HDMI display emulator used:


Headless Display Emulator @ Amazon

This emulator appears to the Jetson to be an HDMI monitor. X11 and the desktop happily load, none the wiser.

As it turns out after going to GTC 2016 and talking to knowledgeable NVIDIA people, there is a software workaround.

Allow X without HDMI at Boot

Edit /etc/X11/xorg.conf and add the following option to the “Device” section:

Section "Device"
...
...
Option "AllowEmptyInitialConfiguration" "true"
EndSection

This allows X11 to start up without a monitor attached.

Conclusion

So there you have it. If you wander down the wrong path during development, and find yourself in a similar situation of hopeless despair, you too can tell the developer gods “Not Today!” and snatch victory from the jaws of defeat.

The post Jetson – Allow Graphics without HDMI appeared first on JetsonHacks.

Coming back from hiatus …


Just a programming note: JetsonHacks is coming back from spring hiatus within the next few days.

Over the last few weeks, next season’s projects have been shaping up nicely. There has been a very interesting development in RGBD sensing (3D vision) that we’ll show how to bring to the Jetson platform. Also, we’ll be working on a couple of projects.

We’ll continue work on the Jetson RACECAR project, which should be lots of fun. There are a couple of versions of this concept out there now: the MIT version that we’ve been discussing, and one from a group that we haven’t discussed here yet, the University of Pennsylvania. The University of Pennsylvania is using a similar teaching model to that of MIT, with the car hardware and software being open source, and the lectures and course material being freely available on the web. We’ll look at the differences between the two designs, and apply what we learn to building one of our own.

Let’s wait a while to reveal the second project, need to build up a little suspense. At this point in time, let’s stick with “It should be interesting. It will amaze your friends, and confuse your enemies”.

One change that is being made here on the website is the addition of an editorial category. The editorial category will not be limited to Jetson only information, but rather observations about systems and software. I’ll also share some material that I have found inspiring over the years. I’d like to think entering into this that the editorials won’t strictly be rants 😉

Stay tuned!

Jim
JetsonHacks

The post Coming back from hiatus … appeared first on JetsonHacks.

Viraj Padte’s JetsonBot – Research Platform


Viraj Padte from Oklahoma has been working on building a JetsonBot based on the articles posted on JetsonHacks.

Background

Viraj has a bachelor’s degree in Electronics and Telecommunications Engineering. Viraj has an interesting motivation for building a JetsonBot. Viraj states, “I choose to make the JetsonBot in order to learn to utilize the NVIDIA Tegra Kepler architecture for performing fast image computation for path planning and decision-making in mobile robots.”

Money quote:

The major obstacle for me is learning ROS. I was completely a newbie to ROS and took a lot of time in figuring installation and configuration. The videos made by Jetsonhacks proved to be extremely useful in the learning process.

Images

Here is Viraj Padte’s JetsonBot:

Viraj Padte’s JetsonBot

Testing the Jetson with an attached battery pack:

Viraj Padte’s JetsonBot – Battery Test

This one warms the heart, a happy fire extinguisher and batteries being charged. This is better than ten cute kitten pictures:

Viraj Padte’s JetsonBot – Charging the Battery

Going Forward

Viraj plans to extend the capabilities of the JetsonBot by adding a Velodyne LIDAR sensor for improved path planning and autonomous navigation tasks. That capability will certainly take the JetsonBot to the next level!

Conclusion

I want to thank Viraj for sharing his work with the JetsonHacks community, hopefully this will provide some inspiration for other Makers out there. If you have a JetsonBot or other Jetson based project build you would like to share, send in an email!

The post Viraj Padte’s JetsonBot – Research Platform appeared first on JetsonHacks.

Jetson Based Autonomous Race Car – University of Pennsylvania


mLab at the University of Pennsylvania has created a website about building an Autonomous Race Car (ARC). The compute platform of the race car is a Jetson TK1. Looky here:

Background

mLab (Real-Time & Embedded Systems Lab) promotes competition with the autonomous 1/10th scale F1 race cars through the website: http://f1tenth.org

The competition involves designing, building, and testing the cars, facilitated by lectures and reading material available as an online teaching kit. The online lectures and tutorials are in-depth and provide a comprehensive overview of both building the car and the theory behind the control systems used to control it. Lectures and tutorials are also provided as background for building Simultaneous Localization and Mapping (SLAM) capabilities.

The lectures and course material are provided here: http://f1tenth.org/lectures.html
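To give a flavor of the control-systems material, here is a minimal PID step written in C++. The gains and the error source (say, distance to a wall from the onboard LIDAR) are placeholders for illustration, not values taken from the course.

// Minimal PID controller; call step() at a fixed rate with the current error.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0;
    double prevError = 0.0;

    double step(double error, double dt) {
        integral += error * dt;
        const double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Example: steer to hold a desired distance from the wall.
// Pid pid{0.8, 0.0, 0.1};
// double steering = pid.step(desiredDistance - measuredDistance, 0.02);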

The computation system consists of a Jetson TK1 which runs Robot Operating System (ROS). The vehicle chassis is a Traxxas 74076 Rally R/C car. The complete bill of materials for the car is available on the f1tenth.org website, along with build instructions.

Readers of this website will recognize the above components as the basis for both the MIT RACECAR and the Jetson RACECAR that we’ve been building. In fact, the sensors on the UPenn car are basically the same ones used on the MIT RACECAR.

Autonomous Race Car – UPenn

Conclusion

It’s great to see university level explanations of PID controllers and SLAM algorithms for this application online and freely available. That alone is worth the price of admission, even if you don’t build a car for yourself. Having the instructions for actually building a vehicle for your own use is icing on the cake!

As we start the next part of the Jetson RACECAR build, we’ll examine the differences (and the similarities) of the UPenn and MIT cars, and see what takeaways we can “borrow” for the Jetson RACECAR. We might have a trick up our sleeves that we can use; we’ll see how that goes.

The post Jetson Based Autonomous Race Car – University of Pennsylvania appeared first on JetsonHacks.

Intel Realsense Camera


The Intel RealSense Camera is an RGBD (Red, Green, Blue, Depth) camera which fits a large amount of imaging technology into a small package. Looky here:

Background

Over the last couple of years, we’ve had several articles about RGBD cameras. RGBD cameras provide a color image stream (the RGB part) and a depth stream (the D part) which can be used for a variety of imaging and robotic applications. In the video you saw how the mechanical packaging of these types of devices has changed over the years.

A couple of months ago Intel announced the Intel® RealSense™ Robotic Development Kit, which includes a RealSense R200 camera along with an Intel Atom x5 processor board. I thought that was interesting enough to order the kit from Intel. After all, it had the word “robot” in the name!

At the same time, I am intrigued with the form factor of the R200 camera. This uses the same technology as the ‘Project Tango’ tablet (now called ‘Tango’), though without the fisheye camera. The R200 is available separately, and for a price of $99 USD it is certainly worth investigating. Will it run on the Jetson?

Please, you’re on this website. You know it will!

Technology

From the librealsense documentation:

R200 Component Layout

The R200 is an active stereo camera with a 70mm baseline. Indoors, the R200 uses a class-1 laser device to project additional texture into a scene for better stereo performance. The R200 works in disparity space and has a maximum search range of 63 pixels horizontally, the result of which is a 72cm minimum depth distance at the nominal 628×468 resolution. At 320×240, the minimum depth distance reduces to 32cm. The laser texture from multiple R200 devices produces constructive interference, resulting in the feature that R200s can be colocated in the same environment. The dual IR cameras are global shutter, while 1080p RGB imager is rolling shutter. An internal clock triggers all 3 image sensors as a group and this library provides matched frame sets.

Outdoors, the laser has no effect over ambient infrared from the sun. Furthermore, at default settings, IR sensors can become oversaturated in a fully sunlit environment so gain/exposure/fps tuning might be required. The recommended indoor depth range is around 3.5m.

In a little less techy terms, the camera appears pretty capable. The camera works outdoors (a big drawback of the indoors-only Kinect/infrared types of devices), and multiple R200s can be used in the same physical indoor space without having to worry about infrared pattern interference. All of the depth processing and image registration is done in hardware on the camera, so there isn’t computational drag on the host computer. That is one of the drawbacks of the Stereolabs ZED camera, where the host processor builds the depth maps from the gathered camera images. That takes a large number of compute cycles on a small processor like a Jetson TK1. The biggest thing of all? The R200 is small. For the amount of horsepower, the $100 price is a bargain in the current marketplace.
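As a rough sanity check on those numbers, the usual stereo relation is depth = focal length (in pixels) x baseline / disparity. The focal length used below is inferred by working backwards from the quoted 72 cm figure; it is an assumption, not a published R200 spec.

#include <cstdio>

int main() {
    const double baselineMeters = 0.070;  // 70mm baseline from the documentation
    const double maxDisparity   = 63.0;   // maximum disparity search range in pixels
    const double fx628          = 648.0;  // assumed focal length in pixels at 628x468

    // depth = fx * baseline / disparity; the largest disparity gives the closest usable depth
    std::printf("Min depth at 628x468: %.2f m\n", fx628 * baselineMeters / maxDisparity);  // ~0.72 m

    // Halving the resolution roughly halves the focal length in pixels, which is
    // why the quoted minimum depth drops toward 32 cm at 320x240 (this rough
    // estimate lands in the same ballpark).
    const double fx320 = fx628 * 320.0 / 628.0;
    std::printf("Min depth at 320x240: %.2f m\n", fx320 * baselineMeters / maxDisparity);
    return 0;
}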

Software

Intel has an open source library, librealsense, on Github. librealsense is a cross-platform library which allows developers to interface with the RealSense family of cameras, including the R200. Support is provided for Windows, Macintosh, and Linux.

The next couple of articles on JetsonHacks will cover how to install librealsense on the Jetson TK1. This includes building a kernel with support for the cameras, along with installing the librealsense library. At this point, let’s say that this is a project for non-noobs.

Intel also has a ROS interface for the R200. There will be an upcoming article about installing the ROS interface too!

Conclusion

The Intel RealSense R200 is an interesting entry into the 3D imaging field. If sophisticated imaging robotic applications are to enter the personal developer market (as opposed to the corporate developer market), this device will be one of the key enablers.

The post Intel Realsense Camera appeared first on JetsonHacks.

Intel RealSense Camera Installation – librealsense – NVIDIA Jetson TK1


Installation of the Intel RealSense Cameras on the Jetson TK1 is made possible by the use of the open source library librealsense. Looky here:

Background

Note: This article is intended for intermediate users who are comfortable (or want to be) with Linux kernel development, and can read and modify simple shell scripts if needed.

In a previous post, Intel RealSense Camera, we discussed adding the RealSense R200 camera to the Jetson TK1.

Intel has made available an open source library, librealsense on Github. librealsense is a cross platform library which allows developers to interface with the RealSense family of cameras, including the R200. Support is provided for Windows, Macintosh, and Linux.

There are two parts to getting the R200 camera to work with the Jetson. First, operating system level files must be modified to recognize the camera video formats. When doing development on Linux based machines you will frequently hear the terms “kernel” and “modules”. The kernel is the code that is the base of the operating system, the interface between hardware and the application code.

A kernel module is code that can be loaded into the kernel image at will, without having to modify the kernel. These modules provide ancillary support for different types of devices and subsystems. The code for these modules is either in the kernel itself, in which case it is called ‘built-in’, or designated to be built as a module. When built as a module, the compiled code is stored separately from the kernel, typically with a .ko extension. The advantage of having a module is that it can be easily changed without having to rebuild the entire kernel. We will be building a module called uvcvideo to help interface with the RealSense camera.

The second part of getting the R200 to work is to build and install librealsense.

Kernel and Module Building

Note: In the video above, the installation was performed on a newly flashed L4T 21.4 TK1 using JetPack 2.2.

In order to build the kernel and the modules, download the kernel sources. The sources should be placed into /usr/src.

A convenience script has been created to help with this task in the installLibrealsense repository on the JetsonHacks Github account.

$ git clone https://github.com/jetsonhacks/installLibrealsense.git
$ cd installLibrealsense/UVCKernelPatches
$ ./getKernelSources.sh

The above commands get the repository and run a script which downloads the kernel sources. Note: The convenience files are simple shell scripts which automate the tasks described above. They are provided as a guideline on how to accomplish the task, but due to various issues (such as network or directory layout) you may need to modify them to suit your needs.

Once the kernel sources have been downloaded and decompressed into the /usr/src directory, a configuration editor opens. In the configuration editor, set the local version of the kernel to that of the current configuration. The current local version number is available through the command:

$ uname -r

which displays:

3.10.40-gdacac96

The local version number consists of the characters following the 40 in this case, i.e. -gdacac96. Remember the ‘-’ sign; it is important! This identifier is used to ensure that the module matches the build of the kernel and should match exactly. Place the local version number in the field:

General Setup -> Local version - append to kernel release:

Next, we will modify the USB Video Class (UVC) module to understand RealSense video formats.

The option to compile UVC as a module is located in:

Device Drivers -> Multimedia Support -> Media USB Adapters -> USB Video Class (UVC)

Once you find the entry, right-click on the entry until you see a small circle. The circle indicates that the option will be compiled as a module. Save the configuration file.

Patching the UVC module

A patch file is provided to apply on the module source and a shell script is provided to apply the patch. Again, these are convenience files, you may have to modify them for your particular situation.

$ ./applyUVCPatch.sh

Next, compile the kernel and module files:

$ ./buildKernel.sh

This takes several minutes as the kernel and modules are built and the modules installed. Once the build is complete, you have a couple of options. The first option is to make a backup of the new kernel and modules to place them on a host system to flash a Jetson system with the new kernel. We will not be covering that here, but for reference:

Build own kernel for Jetson TK1 – NVIDIA Developer Forum

The second option is to copy the kernel over to the boot directory. A convenience script is provided:

$ ./copyzImages.sh

In addition to copying the new kernel into the boot directory, the newly built module, uvcvideo, is added to the file /etc/modules to indicate that the module should be loaded at boot time.

The RealSense cameras require USB 3.0. The USB port is set for USB 2.0 from the factory. Also, the stock kernel uses what is called ‘autosuspend’ to minimize power usage for USB peripherals. This is incompatible with most USB video devices. If you have not changed these settings on your TK1, a convenience script has been provided:

$ ./setupTK1.sh

Now reboot the system.

Building and Installing librealsense

Once the machine has finished rebooting, open a Terminal:

$ cd installLibrealsense
$ ./installLibrealsense.sh

This will build librealsense and install it on the system. It will also set up udev rules for the RealSense device so that the permissions are set correctly and the camera can be accessed from user space. Once installation is complete, you will be able to play with the examples. For example:

$ cd bin
$ ./cpp-config-ui

The example allows you to set the camera parameters. Hit the ‘Start Capture’ button to start the camera.

Qt Creator

There are Qt Creator files in librealsense which may be used to build the examples and librealsense itself. A convenience script, ./installQtCreator.sh, is provided in the installLibrealsense directory to install Qt Creator.

Conclusion

So there you have it. This has been a little bit more involved than some of our other projects here, but if you are interested in this kind of device, it is well worth it.

The post Intel RealSense Camera Installation – librealsense – NVIDIA Jetson TK1 appeared first on JetsonHacks.

RealSense Camera ROS Install on Jetson TK1


In this article, we cover ROS installation of the Intel RealSense Camera on the NVIDIA Jetson TK1. Looky here:

Background

Note: This article is intended for intermediate to advanced users who are familiar with ROS.

One of the intended uses of Intel RealSense Cameras is robotics. The premiere operating system for robots is Robot Operating System (ROS). A great platform for running ROS and RealSense Cameras? The Jetson TK1! Let’s get the shotgun out and have a wedding!

Installation

In order to get started, there are three prerequisites required. First, librealsense needs to be installed on the Jetson TK1. Here is an article on how to do that, librealsense installation on Jetson TK1.

Second, we need to have ROS installed on the Jetson. If you do not already have ROS installed, here is an excellent article on installing ROS on the Jetson TK1. In a nutshell:

To install ROS on the Jetson:

$ git clone https://github.com/jetsonhacks/installROS.git
$ cd installROS
$ ./installROS.sh
$ cd ..

This will install ROS Indigo ros-base, rosdep, and rosinstall.

Next, we download the realsense_camera package installer:

$ git clone https://github.com/jetsonhacks/installRealSenseCameraROS.git
$ cd installRealSenseCameraROS

The third prerequisite we need is a Catkin Workspace for our base of operations. There is a convenience script to create a new Catkin Workspace.

$ ./setupCatkinWorkspace [workspace name]

In the video above, jetsonros is the workspace name. This script creates an initialized Catkin Workspace in the ~/ directory.

With the prerequisites installed, we’re ready to install the realsense_camera package:

$ ./installRealSense.sh [workspace name]

where [workspace name] is the name of the Catkin Workspace where you want the realsense_camera package installed. In the video, the workspace name used is jetsonros.

If you do not have a swap file enabled on your Jetson, there may be issues compiling the package because the TK1 does not have enough memory to compile this in one pass. The installation script has been changed since the video was filmed to compile using only one core to relieve memory pressure, i.e.

$ catkin_make -j1

If this doesn’t fix the problem, refer to the video for a workaround.

Note: As of this writing, the ROS package in the debian repository cv_bridge is hard linked against an OpenCV package which is not installed on the Jetson (2.4.8). There are several ways to get around this, discussed on the ROS Answers forum. For this installation, installing cv_bridge from source is chosen.

At this point, you are ready to launch the node.

Launch RealSense Camera Node

There are several launch files included in the realsense_camera package. These are covered in the README.md file in the realsense_camera directory. In order to launch the camera on the Jetson:

$ roslaunch realsense_camera realsense_r200_nodelet_standalone_preset.launch

Visualization Workstation

On your visualization workstation, you can view the camera configuration:

$ rosrun rqt_reconfigure rqt_reconfigure

If you intend to view a point cloud, you must setup a frame of reference, i.e.

$ rosrun tf static_transform_publisher 0.0 0.0 0.0 0.0 0.0 0.0 map camera_depth_optical_frame 100

You can also open RVIZ and load the provided RVIZ configuration file: realsenseRvizConfiguration1.rviz.

$ roscd realsense_camera
$ rosrun rviz rviz -d rviz/realsenseRvizConfiguration1.rviz

Please read the README.md file for more information.

Conclusion

RealSense camera support under ROS is still relatively new. Some interesting features of the camera are not yet supported, such as hardware registration of the color and depth map in the package. However things are shaping up quite nicely for this new entry in the RGBD camera space.

The post RealSense Camera ROS Install on Jetson TK1 appeared first on JetsonHacks.


Jetson RACECAR Part 7 – Razor IMU Mounting


In the seventh part of our Jetson RACECAR build, we add a Sparkfun Razor 9DOF IMU to the lower platform. Looky here:

Background

The Sparkfun 9 Degrees of Freedom Razor IMU is a frequent choice for many robotic projects. From the Sparkfun website:

Description: The 9DOF Razor IMU incorporates three sensors – an ITG-3200 (MEMS triple-axis gyro), ADXL345 (triple-axis accelerometer), and HMC5883L (triple-axis magnetometer) – to give you nine degrees of inertial measurement. The outputs of all sensors are processed by an on-board ATmega328 and output over a serial interface. This enables the 9DOF Razor to be used as a very powerful control mechanism for UAVs, autonomous vehicles and image stabilization systems.

The product is a collaboration with Jordi Munoz of 3d Robotics, which uses these components as part of the Pixhawk open source autopilot system. The Pixhawk is in use by UAVs, rovers, aircraft, boats and other robotic vehicles. The board connects to a SparkFun FTDI Basic Breakout – 3.3V with a right angle 0.1″ header. The FTDI Basic Breakout converts the serial output of the board to a miniUSB connector.

The board has a built-in ATmega328. The ATmega328 is programmable through an Arduino software interface to provide Attitude and Heading Reference System (AHRS) firmware that works with a ROS package.

razor_imu_9dof (available at: http://wiki.ros.org/razor_imu_9dof) is a comprehensive ROS package which provides a ROS driver for the Razor IMU, as well as the Arduino firmware that runs on the Razor board to generate the AHRS information. The ROS package also provides a diagnostic GUI for the IMU board.

IMU Selection

We’ve covered several IMUs in the past, so why select this particular one? As it turns out, both the MIT RACECAR and the University of Pennsylvania F1/10 race car use this particular IMU. Add to this that there is great support in ROS for this particular device, which means that we should be able to integrate it pretty easily with the Jetson RACECAR.

Installation

On the Jetson RACECAR prototype, two holes were drilled in the lower platform to mount standoffs on which the IMU is mounted. In the picture below, a USB 2.0 hub is shown connected to the Jetson and the Razor IMU. Note that this is just some test fitting; we’ll have to think about how we are going to wire the car in a future episode. My current thought is to run control systems on USB 2.0, and a sensor on USB 3.0. Both the MIT and UPenn cars use a powered USB 3.0 hub, which seems like a reasonable approach if you are using as many sensors and devices as they have onboard.

Razor IMU on Jetson RACECAR

The post Jetson RACECAR Part 7 – Razor IMU Mounting appeared first on JetsonHacks.

Build a Custom Kernel for the NVIDIA Jetson TK1


In this article, we’ll cover how to build a custom kernel for the NVIDIA Jetson TK1. Looky here:

Background and Motivation

Note: This article is for intermediate users. You should be familiar with the purpose of the kernel. It is also helpful to be able to read shell scripts to understand the steps described.

As an embedded development kit, the Jetson TK1 ships with a bare-bones approach to device support. For the most part, the L4T kernel supports a minimal set of device drivers. This is on purpose, as this is a development platform for an embedded device. The idea is that the developer adds only the device drivers and services that are needed for the product being developed.

Here’s the rub: the Jetson TK1 is powerful enough to be a desktop computer. Desktop computers usually support a wide range of devices “out of the box”. Desktop computers also support “plug and play” peripheral devices. Because the Jetson spans both of these computing paradigms, new users can have different expectations for device support out of the box.

If you’re a desktop user relatively new to the Jetson TK1, a great alternative is to install the Grinch Kernel. The Grinch kernel replaces the stock kernel to support a wide variety of different devices and services.

On the other hand, if you’re building a specialized application then you may want to take a more minimal approach. This is the case discussed in this article. We are building a kernel with only the device drivers needed for one application. In this case we are starting to build a kernel for a robotic race car.

Installation

Note: The screen cast was recorded directly from a Jetson TK1. All commands being executed are running on the TK1.

Building a kernel for the Jetson TK1 is straightforward. It is good practice to start with a fresh flash of a stock kernel when building a new kernel from scratch. In the video above, the Jetson TK1 was flashed with L4T 21.4 using JetPack 2.2.

Get the Kernel Source

Building the kernel consists of a few steps. First, gather the source code for the kernel. The sources for L4T 21.4 are available from the NVIDIA embedded developers website. The sources are delivered in compressed form, so the next step is to untar them into the /usr/src/ directory. Once the sources are expanded, derive what is called the .config file. The .config file describes which parts of the kernel source code should be included when building, and which modules and drivers to include. The .config file also specifies where the modules should be placed, either internal to the kernel or in an external file. The advantage of having a module be ‘external’ is that it can easily be changed or upgraded without having to recompile the entire kernel.

The steps above are in a script on the JetsonHacks Github account in a repository named buildJetsonTK1Kernel. You can get the repository:

$ git clone https://github.com/jetsonhacks/buildJetsonTK1Kernel.git
$ cd buildJetsonTK1Kernel

You can execute the script to get the sources and open an editor on the configuration file:

$ ./installKernelSources.sh

Edit the Kernel Configuration

Next, edit the configuration file. The number of choices in the configuration file can be overwhelming, so it helps to have a good idea of where the desired option resides. The configuration editor has a find function, which is rather limited but can be helpful. In the above video, we enable the FTDI driver and set it to be built as an external module. Then the UVC driver is set to be built as an external module and patched to support an Intel RealSense camera.
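If you want to poke around the configuration yourself, the editor is launched from the kernel source directory. A sketch, assuming the graphical editor’s Qt dependencies are installed (menuconfig is a terminal-based alternative):

$ cd /usr/src/kernel
$ sudo make xconfig
# The options used in the video correspond to the kernel symbols
# CONFIG_USB_SERIAL_FTDI_SIO (FTDI driver) and CONFIG_USB_VIDEO_CLASS (UVC),
# both set to 'm' so that they build as external modules.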

Local Version

There is a local version number which identifies the kernel build. On a stock kernel, you can see this by executing:

$ uname -r

The local version is the designation after the kernel version; for example, with the stock kernel 3.10.40-gdacac96, ‘-gdacac96’ is the local version. Modules use the kernel version to determine compatibility. A common issue the first time people build a module is that the module will not load because the kernel version of the module does not match that of the kernel. This usually turns out to be because the local version was not set to match the kernel being used.
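A quick way to check the local version in your configuration is to grep for it. The value shown below is just an example; it should match the running kernel reported by uname -r, or whatever suffix you have chosen for your custom build:

$ cd /usr/src/kernel
$ grep CONFIG_LOCALVERSION= .config
CONFIG_LOCALVERSION="-gdacac96"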

When done configuring, make sure to save the file!

Prepare and Make

Once the configuration is set, then it is time to build the kernel. There is a convenience script for this purpose:

$ ./buildKernel.sh

The process to build the kernel is surprisingly easy. First switch over to the kernel directory, then prepare and make:

$ cd /usr/src/kernel
$ make prepare
$ make modules_prepare
$ make -j4
$ make modules
$ make modules_install

The modules_install step copies any modules that were built to the appropriate place under /lib/modules.
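As a quick sanity check, you can confirm that the external modules landed where they should. This assumes the local version was set to match the running kernel and that the FTDI driver was built as a module, as in the video:

$ ls /lib/modules/$(uname -r)/kernel/drivers/usb/serial/
# ftdi_sio.ko should appear in the listing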

Copy Boot Image

There are a few options at this point. You can save the kernel to the PC host and have a kernel that you can flash to a Jetson TK1. In our case, we copy the zImage file over to the /boot directory, which effectively makes it the new kernel. I do suggest that you save the .config file that you built so that if things go south you don’t have to start entirely from scratch. Of course, if things don’t work after copying the zImage file and rebooting, you can always flash from the host again.

The idea that we’re working on here is to build up a kernel for a specific project. Once we’re happy with everything, we can clone the entire image and save it to a host machine.

We copy the zImage to the boot directory:

$ ./copyzImage.sh

which basically executes:

$ cd /usr/src/kernel
$ sudo cp arch/arm/boot/zImage /boot/zImage

After copying the image, reboot the Jetson TK1 and the changes will take effect.
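Once the Jetson comes back up, a quick check confirms that the new kernel is running and that the new modules load. The version string shown is only an example; it should reflect whatever local version you configured:

$ uname -r
3.10.40-gdacac96
$ sudo modprobe ftdi_sio
$ lsmod | grep ftdi_sio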

Conclusion

This certainly is not an exhaustive explanation of building kernels; the subject runs deep, as they say. Different environments can be much more challenging, such as cross compiling kernels. There can also be architecture differences, as on the Jetson TX1. With TX1 L4T version 23.X, the underlying machine architecture is 64-bit but the user space is 32-bit, which requires a lot of gyrations to get working since the kernel cannot be compiled on the Jetson TX1 itself.

The Jetson TX1 L4T 24.X kernel, on the other hand, can be compiled natively on the Jetson TX1 itself, making life much more tolerable.

For our purposes, having a way to build a custom kernel or add a few modules here and there is a good tool to have in the tool belt.

Note

After the first script gathers the source for the kernel, it generates the .config file:

$ zcat /proc/config.gz > .config

The period/dot in the .config file name indicates that it is a hidden file. In other words, it won’t show up in a normal file browse or file listing (ls -a will show it).

The command gives the default kernel configuration file. The file /proc/config.gz is generated from a kernel option which is by default turned on in the L4T kernels. If you are running a differently configured or modified kernel, you may want to generate the .config file in a different manner. One way to do this is to use:

$ make oldconfig

which will try to build a new .config file from the existing settings. However, there’s more magic to this than one prescription can cure. You will probably have to do some research to get this to work properly.

If for some reason you don’t have the default .config available, then you can generate it on the TK1 from the /usr/src/kernel directory:

$ make tegra12_defconfig

The Jetson TK1 is in the tegra12x series, or tegra124.

If you are on a Jetson TX1:

$ make tegra21_defconfig

The Jetson TX1 is in the tegra21x series, or tegra210.

Remember that you still need to set CONFIG_LOCALVERSION (the suffix to “uname -r”, e.g. “-gdacac96”), as it is not stored in /proc/config.gz.

Thanks to linuxdev in the Jetson forums for the last few tidbits.

The post Build a Custom Kernel for the NVIDIA Jetson TK1 appeared first on JetsonHacks.

Jetson RACECAR Part 8 – Custom Kernel and ROS Install


In the eighth part of the Jetson RACECAR build, we start on the RACECAR software component. We build a custom kernel for the Jetson and then we install ROS. Looky here:

Background

During the Jetson RACECAR build, you may have noticed that we are adding various devices which need to be interfaced with the Jetson, such as the Razor IMU. While the Razor IMU hooks into the Jetson with a USB cable, it also needs an FTDI driver, and that driver is not present in the stock L4T kernel. What to do?

One alternative is to install the Grinch Kernel. That’s certainly a good answer to the problem. In our case, though, there are other devices for which we need to add kernel support. One of the avenues we’re going to explore is using an Intel RealSense camera, so we’ll need to modify the UVC driver.

One option is to modify the Grinch Kernel to add RealSense camera support. The procedure is the same as building a custom kernel; just remember to use the Grinch Kernel sources instead of the stock L4T sources.

In this case, we build a custom kernel. We’ll add the needed bits and pieces as we go along. This probably won’t be the only kernel build that we do on this project, but just the start of building what we need.

Installation

Note: The screen cast was recorded directly from a Jetson TK1. All commands being executed are running on the TK1.

There are two repositories on the JetsonHacks account on Github to help you build a custom kernel and install ROS.

$ git clone https://github.com/jetsonhacks/buildJetsonTK1Kernel.git
$ git clone https://github.com/jetsonhacks/installROS.git

Building a custom kernel is straightforward. There’s even an article about it already! The article covers building the custom kernel described in the above video. Normally I would duplicate the instructions here, but I’m way too lazy to do that. It’s the thought that counts.

Once the custom kernel has been configured and installed, it’s time to install the base ROS package. There is already an article to do that too!

You’re thinking at this point, “Is the installation in this article just pointers to already existing articles?” Yup.

This provides the base that we will be building on for the Jetson RACECAR.

Conclusion

Taking the first step for installing the Jetson RACECAR gets us to the point that we’re ready to start building all the separate control packages that will control the Jetson RACECAR. Having this base makes things much simpler going forward.

The post Jetson RACECAR Part 8 – Custom Kernel and ROS Install appeared first on JetsonHacks.

Jetson RACECAR Part 9 – Razor IMU ROS Install


In the ninth part of the Jetson RACECAR build, we install the Sparkfun Razor IMU ROS package. Looky here:

Background

After installing the Sparkfun Razor IMU on the Jetson RACECAR, we’re ready to prepare the driver software packages which enable the IMU to communicate with ROS.

The ROS wiki page for the Razor IMU 9DOF provides excellent instructions for installing the razor_imu_9dof package. There are also instructions on how to calibrate the IMU which we will visit at a later time.

The Sparkfun Razor IMU 10736 that we are using in this project has a built-in microcontroller, which is programmed using the Arduino software. For this ROS application, an AHRS Arduino sketch will be uploaded to the IMU’s microcontroller. Here is a great read about how the sketch actually works.

Installation

The Razor IMU interfaces with the Jetson using an FTDI serial-to-USB converter. This article assumes that the kernel supports FTDI, and that ROS is installed on the Jetson. Here is an article which covers how to build the kernel with FTDI support and install ROS.
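Before going further, it’s worth confirming that the Jetson actually sees the Razor’s FTDI converter once it is plugged in. A quick check; the device node may differ on your system:

$ dmesg | grep -i ftdi
$ ls /dev/ttyUSB*
/dev/ttyUSB0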

The installRazorIMUROS repository on the JetsonHacks Github account contains helper scripts for the install process. First clone the repository and switch to the repository directory:

$ git clone https://github.com/jetsonhacks/installRazorIMUROS.git
$ cd installRazorIMUROS

If you have not already built a Catkin Workspace for the project, you may run the script:

$ ./setupCatkinWorkspace.sh jetsonbot

which will set up a Catkin Workspace named ‘jetsonbot’ in the ~/ directory. You may name the workspace whatever you like; if you leave the name off the command line, a default ‘catkin_ws’ directory will be created.
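For reference, the workspace the script creates is a standard Catkin layout. Doing it by hand looks roughly like this; a sketch, assuming ROS Indigo and the ‘jetsonbot’ name:

$ source /opt/ros/indigo/setup.bash
$ mkdir -p ~/jetsonbot/src
$ cd ~/jetsonbot/src
$ catkin_init_workspace
$ cd ~/jetsonbot
$ catkin_make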

The setupCatkinWorkspace script modifies the .bashrc file in ~/ to include some extra information about the ROS environment. You will want to review that information to make sure it matches your installation.
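The entries added to ~/.bashrc look something like the following. The exact contents depend on the script version, and the IP address is a placeholder you should adjust for your own network:

source /opt/ros/indigo/setup.bash
export ROS_MASTER_URI=http://localhost:11311
# replace with the Jetson's actual IP address
export ROS_IP=192.168.1.100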

Install the Razor IMU ROS Package

Next, we’re ready to install the razor_imu_9dof ROS package. There is a convenience script:

$ ./installRazor.sh

Let’s take a look at it:

#!/bin/sh
# Install the Razor IMU ROS package
sudo apt-get install ros-indigo-razor-imu-9dof -y
# For visualization, you'll need these Python packages
sudo apt-get install python-visual python-wxgtk2.8 -y
# To put firmware on the Razor, you'll need the Arduino software
sudo apt-get install arduino arduino-core -y

The script first installs the ROS Indigo package razor_imu_9dof.

Next, the script installs the Python visualization packages. In the video, the demonstration uses the visualizer to show the orientation and heading of the IMU in a graphics environment. The visualizer is optional.

Finally the script installs the Arduino software, version 1.0. The Arduino software uploads compiled sketches to the Razor IMU.

Note: In the video I used the term ‘flash’ interchangeably with upload the sketch.

Uploading the Arduino Sketch

After installation, prepare to upload the AHRS sketch to the Razor IMU.
Switch to the Catkin Workspace, i.e.

$ cd ~/jetsonbot
# source the devel
$ source devel/setup.bash

Note: During development you may want to place the ‘source devel/setup.bash’ into your .bashrc file so you do not have to source it every time you open a new Terminal.
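For example, adding the line to .bashrc can be done with (assuming the ‘jetsonbot’ workspace name):

$ echo "source ~/jetsonbot/devel/setup.bash" >> ~/.bashrc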

Next, copy the script to the Arduino sketchbook directory:

$ roscd razor_imu_9dof
$ cp -r src/Razor_AHRS ~/sketchbook/Razor_AHRS   # the Jetson TK1 uses the Arduino 1.0 software

You can then use a file browser to navigate to the ~/sketchbook directory and open the Razor_AHRS folder. Double click on the Razor_AHRS.ino file, which will bring up the Arduino software.

In the sketch editor, find ‘Hardware Options’. Uncomment the line for the device being used, in this case the 10736, and save the sketch.

Now is the time to setup the Arduino software to talk with the IMU:

  • Go to “Tools” → “Board” and select “Arduino Pro or Pro Mini (3.3V, 8MHz) w/ATmega328”. Note: in Arduino 1.5+, the board menu doesn’t allow selecting the voltage/frequency; go to the Processor menu after selecting “Arduino Pro or Pro Mini” and select “ATmega328 (3.3V, 8MHz)”.
  • Go to “Tools” → “Serial Port” and select the port used with the Razor.
  • Go to “File” and hit “Upload”. After a short while at the bottom of the Arduino code window it should say “Done uploading”.

Once the sketch is done uploading, it’s ready to talk with ROS!

Demo

To run the demos, you should make a copy of the configuration data. In a Terminal, go to the jetsonbot directory and source the devel. Then:

$ roscd razor_imu_9dof/config
$ sudo cp razor.yaml my_razor.yaml
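If the IMU did not enumerate as the default device node, check the port setting in the copy. This assumes the configuration file contains a port entry, as it does in current versions of the package:

$ sudo nano my_razor.yaml
# make sure the 'port:' entry matches the Razor's device node, e.g. /dev/ttyUSB0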

You can then go back to the jetsonbot directory and start roscore:

$ roscore

To publish the IMU data, open another Terminal, go to the jetsonbot directory, and source the devel. Then:

$ roslaunch razor_imu_9dof razor-pub.launch

You can then use the commands ‘rostopic list’ and ‘rostopic echo /imu’ to see the data being published.
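For example, in another sourced Terminal:

$ rostopic list
$ rostopic echo /imu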

In order to see the visualization, stop the razor-pub.launch process (Ctrl-C does nicely here), then:

$ roslaunch razor_imu_9dof razor-pub-and-display.launch

After a few moments, the IMU will begin publishing data and you will be able to change the orientation of the IMU sensor and see the results on a monitor.

Conclusion

Like most ROS packages, the Razor IMU ROS package is fairly easy to install. The one thing that’s a little different in this install is that the AHRS firmware needs to be uploaded to the IMU before running the device. The ROS wiki instructions are very good; you should read them for further information.

The post Jetson RACECAR Part 9 – Razor IMU ROS Install appeared first on JetsonHacks.

Jetson RACECAR 10 – Motor Control and Battery Discussion


In the tenth part of our Jetson RACECAR build we discuss how to control the drive motor and steering servo. Then we discuss battery selection to power the electronic components we are adding to the car. Looky here:

Background

As discussed in Part 3 – ESC Motor Controller, the TRAXXAS Rally steering servo and the drive motor Electronic Speed Controller (ESC) are controlled by PWM signals sent from an on board radio receiver. PWM stands for Pulse-Width Modulation. In the article, we explore sending PWM signals through a dedicated hardware PWM driver. We are then able to determine appropriate values to send to control the car from a Jetson.

Discussion

As you recall, the Jetson RACECAR is a derivative of the MIT RACECAR. There is another MIT derivative race car from the University of Pennsylvania. Both the MIT and UPenn cars are open source autonomous robot platforms.

The MIT and UPenn designs use two different approaches for controlling the cars. The UPenn approach is to use a micro controller to generate PWM pulses which control the stock TRAXXAS ESC and the steering servo. On the other hand, the MIT approach is to replace the stock TRAXXAS ESC with a Vedder Electronic Speed Controller (VESC).

UPenn Approach

The UPenn car incorporates a Teensy 3.2, a USB-based microcontroller development system. While not an official Arduino product, the Teensy can be programmed using the Arduino software. In the UPenn design, the Teensy generates the PWM signals for the ESC and steering servo.

A nice feature of using the Arduino software on the Teensy is that you can use the Arduino ROS library. In that way, the Teensy can act as an independent ROS node that is accessed through the Arduino rosserial_client.

The Teensy costs around $20 from places such as Adafruit or Sparkfun. You can also get it from Amazon with headers attached.

So it’s inexpensive and easy to integrate; what are the drawbacks? That depends on your perspective. As discussed in Part 3, because the stock TRAXXAS ESC is being used, the minimum speed the car travels from a constant minimum pulse is around 6 or 7 mph. One view of this is that it is a race car after all; we can’t really fault it for wanting to go fast. We can always jog after it.

An opposing view is that since it is robotic, it should be able to control itself at all speeds. I also believe that it is against one of the major ethical rules of robotics, which is one should never have to run after a robot. Running should be reserved for running from robots in terror.

Another “drawback” is that the stock setup does not provide odometry. The MIT approach, which we will discuss shortly, does provide odometry from the custom ESC. This may or may not be of concern depending on how the car is being used. Note, however, that TRAXXAS offers a telemetry package for the Rally car which adds a hall effect sensor for sensing drivetrain revolutions. The sensor sends out PWM signals; it may be possible to read the signal with the Teensy and derive odometry from it.

MIT Approach

In the first version of the MIT RACECAR, the stock ESC and steering servo were driven with PWM signals generated from a Jetson TK1. As we have discussed in previous articles, this is not ideal because the Jetson is not using either a real-time operating system or dedicated hardware to generate the PWM pulses.

In the second version of the RACECAR, a VESC replaces the stock ESC. This provides a couple of advantages for a teaching vehicle. First, it’s easy to experiment with the algorithms in the firmware due to the open source nature of the hardware and software. For example, when teaching a class on feedback control the students can implement different algorithms in the ESC firmware. In the case of the RACECAR, this also means that low speed motor control of the car is under programmer control.

The VESC has a servo control port which controls the car steering servo.

The second advantage is that the VESC monitors the speed of the motor under control. If you know how fast the motor is turning, you can calculate how much the wheels turn, and from that the distance the car has travelled; in other words, you have odometry. It’s not super accurate, as it does not take into account factors such as drivetrain slippage, but it can be pretty close.

The drawbacks? The VESC itself is a relatively low-volume part, which means availability can vary. Depending on where you buy it, delivery times can be short or on the order of 4-6 weeks. Most of the manufacturers appear to take orders in batches; they wait until they have enough orders to run a batch. Because the VESC is open source, you can build your own if you like.

You will probably have to modify the VESC for the RACECAR application, at least changing the wiring to match the gauges being used on the RACECAR. The main VESC application is to control motorized skateboards which draw a lot more current than the TRAXXAS car. Overall, the VESC is harder to integrate into the RACECAR design than the Teensy approach.

The other drawback of the VESC is that it is relatively expensive compared to the Teensy. The price of the VESC seems to vary widely depending on where you buy it; in general it ranges from around $125 USD to $225 USD.

Battery

Both the MIT and UPenn cars use the Energizer XP18000AB Universal Power Adapter with External Battery. In earlier episodes we talked about using a regular LiPo battery. However, the Energizer solves a couple of issues. First, it provides three different voltages: 5V, 12V and 19V. That makes it easy to drive the Jetson and a good selection of sensors, transmitters, and other peripherals. Second, the battery has built-in temperature and short-circuit protection when charging, which makes it somewhat safer to charge than a traditional LiPo.

The Energizer is more expensive than the traditional LiPo solution we have discussed, but is much more flexible.

There are a lot of ways to view this subject. In this particular design, a battery is placed in the car chassis to drive the motor and steering servo. In the stock configuration, the battery also powers the receiver. A secondary battery, in this case the Energizer, is used to power the additional electrical components which turn the car into a robot.

In general you need power isolation between the motors and the electronics; having two batteries is an easy way to accomplish this. However, recognize that it is a tradeoff.

The Energizer is a big 18 Amp hour battery, so placement in the car and adjusting the center of gravity may be of concern to the real racers out there.

The takeaway is that for a development mule, this seems like a great solution. You can add and subtract components without having to worry too much about power drain or voltage requirements.

Next Steps

The original Jetson RACECAR hardware PWM driver approach is viable. In order to make it work, some code needs to be written to integrate it into the ROS environment. While not a big project, it does not make sense to undertake it when the UPenn Teensy solution is conceptually equivalent, more flexible and already finished.

So off comes the PCA9685, and on goes the Teensy 3.2. We’ll program the microcontroller and test out the motors and steering from ROS.

The post Jetson RACECAR 10 – Motor Control and Battery Discussion appeared first on JetsonHacks.
