Channel: JetsonHacks

MS Kinect V2 on NVIDIA Jetson TX1


With a USB firmware patch and an updated libfreenect2, the Microsoft Kinect V2 now runs on the Jetson TX1 Development Kit. Looky here:

Background

For a stretch there it was not possible to run the open source Kinect V2 driver libfreenect2 on the Jetson TX1 because of an issue with the USB firmware. Fortunately NVIDIA has issued a firmware patch (see the Jetson TX1 Forum thread, USB 3 Transfer Failures) which fixes the issue. As you might recall, Microsoft now offers the Kinect V2 as a two part kit: an Xbox One Kinect Sensor Bar along with a Kinect Adapter for Windows. You will need both a Kinect Xbox One sensor and the adapter for use with the Jetson, or the discontinued Kinect for Windows. The Kinect Adapter for Windows converts the output from the Kinect to USB 3.0. The advantage of this setup is that you can use the Kinect sensor from your Xbox One, or at least have an excuse to get an Xbox One + Kinect for “research” purposes.

Installation

The installLibfreenect2 repository on the JetsonHacks Github account contains convenience scripts for installing libfreenect2 and the USB firmware patch. First, get the repository:

$ git clone https://github.com/jetsonhacks/installLibfreenect2

Second, install libfreenect2 and compile the library and examples:

$ cd installLibfreenect2
$ ./installLibfreenect2

Third, you will need to patch the USB firmware:

$ ./firmwarePatch.sh

After installing the USB firmware patch, it is necessary to reboot the machine in order for the firmware changes to take effect.

When the machine reboots, you can run the example:

$ cd ~/libfreenect2/build/bin
$ ./Protonect

Some Notes

The installation of libfreenect2 in the video is on L4T 24.1, flashed by JetPack 2.2. CUDA is required. Both 32 bit and 64 bit versions of L4T are shown in the video; installation of libfreenect2 and the firmware patch is the same in both cases.

The Mesa libs installed by libfreenect2 overwrite libGL.so, which causes issues. The fix as of this writing (July, 2016) is to link libGL.so to the Tegra version.

The repository contains a script jetson_max_l4t.sh which sets the CPU and GPU to maximum clock values. This will increase the Jetson TX1 performance at the cost of power consumption.

L4T 23.X Notes

The JPEG decompressor under L4T 23.X produces RGBA format, whereas the Protonect viewer consumes BGRA format. The repository contains a patch which adds a simplistic algorithm to rearrange the bytes appropriately. If you intend to use this script with L4T 23.X, you will need to uncomment the line:

# patch -p 1 -i $PATCHDIR/bgra.patch

Also, if you plan to use this library in production L4T 23.X code, consider writing specialized code to do the RGBA→BGRA conversion more efficiently.

For L4T 24.1, there is no patch applied as the JPEG decompressor produces BGRA format natively.
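The channel rearrangement itself is simple to sketch. Here is a hypothetical NumPy illustration of the RGBA→BGRA swap, not the actual patch in the repository (which works inside libfreenect2's C++ pipeline); frame dimensions are the Kinect V2's 1080p color resolution:

```python
import numpy as np

# Hypothetical Kinect V2 color frame (1080p, 4 channels per pixel)
# in the RGBA order produced by the L4T 23.X JPEG decompressor.
rgba = np.zeros((1080, 1920, 4), dtype=np.uint8)
rgba[..., 0] = 200  # R
rgba[..., 2] = 50   # B
rgba[..., 3] = 255  # A

# Reorder the channels to BGRA in one vectorized pass. A production
# version would avoid the per-frame copy, e.g. by swapping during decode.
bgra = rgba[..., [2, 1, 0, 3]]
```

A per-pixel loop, as a simplistic patch might do it in C, performs the same swap; the indexing trick above just expresses it in one line.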

The post MS Kinect V2 on NVIDIA Jetson TX1 appeared first on JetsonHacks.


L4T and JetPack on NVIDIA Jetson TX1


I have been reading the Jetson Forum lately and there seems to be a general confusion as to the relationship between JetPack and Linux for Tegra (L4T) version 24.1. A reader wrote in and asked a related question which I actually knew the answer to! This is a rare enough occurrence that I wrote this article as a response, and hopefully other people might find it useful.

Background

When the Jetson TK1 was first shipped way back in ’14, the system software was at revision L4T 19.2. L4T 19.2 uses a boot loader called fastboot to load the operating system when the computer starts. In order to upgrade to a different version, the user has to “flash” a new system onto the Jetson. This involves hooking up the Jetson to a PC host via a USB cable, placing the Jetson into recovery mode, and then invoking the flash application from a Terminal command line on the host. Here’s the “Quick Start Guide” should you suddenly feel the need to go temporarily blind. The quick outline is that you:

  • Untar the files and assemble the rootfs
  • Flash the rootfs onto the system’s internal eMMC
  • Install Optional Packages such as OpenCV4Tegra and CUDA

As you might guess, newcomers to embedded development probably don’t understand what the first two entries in the list mean. This is especially true if the developer’s background is in a Windows or Macintosh environment.

Also, it’s pretty easy to “fat finger” in a typo or two in command lines like:

sudo ./flash.sh -S 8GiB jetson-tk1 mmcblk0p1

so you can imagine some of the support questions that came through.

Time passes. NVIDIA decided to build a graphical user interface (GUI) front end to the process, which is a host application called JetPack. JetPack basically helps download the kernel image, flash the device, and add the optional packages without having to issue commands in the Terminal. Remember that JetPack is a wrapper around the installation process itself, basically calling the aforementioned command line instructions in an automated manner. When JetPack was first released, the L4T version for the TK1 went to the 21.X series. One of the major changes for 21.X is that the boot loader changed from fastboot to U-Boot. U-Boot is a more flexible boot loader for development purposes.

When the Jetson TX1 was introduced in ’15, JetPack was expanded to include support for the new board. You can flash either the TK1 or the TX1 using JetPack. JetPack will determine which version of each package to download and send to the Jetson.

Why doesn’t XYZ work?

The Tegra TX1 is a 64 bit ARM processor. When first arriving on the scene, the TX1 ran the 23.X versions of L4T. The inner workings of the operating system (called the kernel) were running in 64 bit, and the “user space” was running in 32 bit. The 32 bit user space provided compatibility with other 32 bit ARM applications, such as those running on the Jetson TK1. This ‘legacy’ app strategy is pretty standard stuff when changing architectures.

Anyway, as you might guess the villagers were up in arms. Where’s true 64 bit? Be careful what you ask for …

With the new release of 24.1, there are two flavors of L4T available for the Jetson TX1. The first flavor is 32 bit, which is the previously described 64 bit kernel, 32 bit user space. The second flavor is 64 bit, which is 64 bit kernel and 64 bit user space.
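A quick way to see which flavor a system is running is to compare the kernel architecture against the word size of a userspace binary. A small illustrative Python check (the specific strings reported will vary by platform; on the 64 bit kernel / 32 bit user space flavor the two values disagree):

```python
import platform

# The kernel's machine architecture, as reported by uname
# (e.g. 'aarch64' on a 64 bit TX1 kernel).
kernel_arch = platform.machine()

# The word size of the running (userspace) Python interpreter;
# '32bit' on the mixed flavor even when the kernel is 64 bit.
user_bits = platform.architecture()[0]

print(kernel_arch, user_bits)
```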

The immediate inclination is to load 64-bit on the Jetson TX1 and bask in all the goodness. You use JetPack to setup the Jetson TX1, and start it up …

Let’s open up a browser. Hmmm, no browser in the Dock, like on the 32 bit version. Ok, load up Firefox, launch it and … Segfault. If you’re new to this, replace the term Segfault with “no worky”. Try to load ROS from the repositories … No ROS in the repositories for this architecture. And so on …

Welcome to the world of bleeding edge. At this moment (July 2016) there are not a large number of 64 bit ARM processors out in the wild. The way to get what you need in terms of libraries and applications is to build it all from source. The ecosystem in the Linux world hasn’t had a chance to build up the support needed to build all the various bits and pieces and put them in packages and repositories yet. Over time, this will all happen. But not today.

Welcome to the land of milk and honey. You didn’t know that the milk was still in the cow and the honey was still in the beehive? You might get stung a little getting the honey out. Why is it called “bleeding edge”? Because you bleed.

Conclusion

It’s great to have 64-bit available on the Jetson TX1. It will also take some time to get the Linux ecosystem to support it for most users. So here’s the practical recommendation.

Money Quote:

If you’re planning on getting your particular project to work on 64 bit (such as doing an ARM64 port of Mozilla or ROS), you have to work on the 64 bit version. Most other people will want to stay on 32 bit to develop, with a plan of going to 64 bit when things get a little more mature in the Linux ecosystem.

PS: Let me add that this is not about NVIDIA support; this is about Linux ecosystem support. The Jetson TX1 is a 64 bit ARM platform, and the Linux ecosystem just hasn’t had enough time to build up around that particular architecture. It’s basically a chicken and egg problem.

The post L4T and JetPack on NVIDIA Jetson TX1 appeared first on JetsonHacks.

Jetson RACECAR 11 – Arduino ROS Node for Car Control


In the eleventh part of our Jetson RACECAR build we construct a breadboard to interface the car with the Jetson using an Arduino ROS Node. Then we install the software, and test using a ROS teleoperation node to control the car with a game controller to boot. Looky here:

Background

In our earlier Motor Control and Battery Discussion we had planned on using a Teensy micro controller to interface the Jetson to the steering servo and ESC on the car. After working with the Teensy for a while, I realized that in order to upload a sketch to the Teensy (a sketch is a program that is uploaded and then runs on an Arduino type of processor) an updated version of the Arduino software is needed.

The updated Arduino version is not readily available for the Jetson. An alternative is to run the Arduino software from a PC, such as the host used to flash the Jetson. At the end of the day, it is easier just to replace the Teensy with a regular Arduino and use the Arduino software natively on the Jetson. That means that during development one doesn’t have to bounce back and forth between different development machines.

In the video, an Arduino Nano is used. The Nano was selected because the parts were already on hand; if buying new, an Arduino Micro or clone probably would be the choice.

Note: These are relatively inexpensive, and may use substitute parts. For example, the Arduino Nano uses a CH-340 serial to USB chip instead of a FTDI chip. See the note at the end of the article.

Arduino ROS Node

There are ROS bindings available for the Arduino. Money quote from the ROS wiki:

The Arduino and Arduino IDE are great tools for quickly and easily programming hardware. Using the rosserial_arduino package, you can use ROS directly with the Arduino IDE. rosserial provides a ROS communication protocol that works over your Arduino’s UART. It allows your Arduino to be a full fledged ROS node which can directly publish and subscribe to ROS messages, publish TF transforms, and get the ROS system time.

In our case, we will use the Arduino ROS node to send PWM pulses to the car’s steering servo and ESC using the Arduino Servo library.
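The mapping the node performs is from a normalized command to an RC PWM pulse width. Here is an illustrative Python sketch of that mapping (the sketch itself runs on the Arduino in C++; the 1500 µs neutral is the RC convention, and the 400 µs travel is an assumed safe range, not a value measured on the TRAXXAS car):

```python
def steering_to_pulse_us(cmd, neutral=1500, travel=400):
    """Map a normalized command in [-1.0, 1.0] to an RC PWM pulse
    width in microseconds. 1500 us is the conventional RC neutral;
    the travel is clamped so an out-of-range command cannot push
    the servo or ESC past the assumed safe endpoints."""
    cmd = max(-1.0, min(1.0, cmd))
    return int(round(neutral + cmd * travel))
```

For example, a command of 0.0 yields the 1500 µs neutral pulse, while ±1.0 yields the assumed endpoints of 1900 µs and 1100 µs. On the Arduino side, the equivalent value would be handed to the Servo library’s writeMicroseconds().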

Construction

Here’s a wiring diagram:

Arduino Board Wiring Diagram

This is a straightforward build. The parts from the video:

along with a soldering station: Hakko FX888D-23BY Digital Soldering Station

One note: notice that the steering servo receives 6V power from the ESC header, which gets power from the car battery. The Arduino receives 5V power over USB and sends signals to both the steering servo and ESC. To avoid nasties, an isolation circuit can be added.

Make sure that the car battery is disconnected from the ESC before wiring the Arduino breadboard to the car. This Nano uses a mini USB connector.

Here’s what it looks like after assembly and connected to the car, with the USB from the Nano connected to the Jetson:

Arduino ROS Node breadboard – Interface Jetson with TRAXXAS car

Software Installation

The prerequisites for software installation are ROS and the appropriate serial to USB driver for the Arduino. FTDI is the usual choice for standard Arduinos; here’s an article on building a kernel and installing ROS on the Jetson TK1.

Once the prerequisites are met, you can go to the installJetsonCar repository on the JetsonHacks Github account. Install the repository on the Jetson, and then run the installation script:

$ git clone https://github.com/jetsonhacks/installJetsonCar.git
$ cd installJetsonCar
$ ./installJetsonCar.sh

Next, go to the Arduino sketchbook and open the jetsoncar sketch. This should be in ~/sketchbook/jetsoncar; the name of the sketch file is jetsoncar.ino. Open jetsoncar.ino, set up the Arduino software for flashing by selecting the correct Arduino model and port, and then upload the sketch to the Arduino.

Operation

In the video, a Nyko game pad controller was used, similar to this one. This controller is from the JetsonBot project. The game pad needs to be paired over Bluetooth with the Jetson. The Jetson in the video has an Intel 7260 Wireless/BT Network Interface Card (NIC). Here’s an article on installing the NIC. You’ll also need antennas.

Then open a separate Terminal window for each command:

$ roscore
$ rosrun rosserial_python serial_node.py /dev/ttyUSB0
$ roslaunch jetsoncar_teleop nyko_teleop.launch

You should be able to examine rostopics being fired by the controller. In order to send cmd_vel topics, the ‘Y’ button on the game controller must be pressed while controlling the left joystick, right trigger, or the right button above the right trigger. The ‘Y’ button is commonly called a deadman switch in this use case.
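The deadman-switch behavior is simple to state: drive commands pass through only while the button is held; otherwise the car is commanded to neutral. A minimal Python sketch of the idea (the actual gating lives in the teleop launch configuration and node; the function and argument names here are illustrative):

```python
def gate_drive_command(deadman_held, throttle, steering):
    """Forward throttle/steering only while the deadman button is
    held; otherwise return neutral values so the car stops rather
    than continuing on its last command."""
    if not deadman_held:
        return 0.0, 0.0
    return throttle, steering
```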

Note: Usual safety warnings apply; make sure that the car is in a stable position and the tires are free from contact with their surroundings.

Once you are satisfied everything is working correctly, connect the car battery, and hit the power button on the ESC. The controller should then control the car actions.

Conclusion

As with most of these articles, this is not a step by step guide on how to build the project. It requires a bit of ingenuity on the reader’s part to gather the needed parts on their own; these articles are more about providing a framework for building a project like this. Some of the software will need to take that into account, but there should be enough here to give a good starting point for your own build.

Note on Arduino Nano

As noted earlier, an Arduino Nano is used in this project. This will probably be revisited with a new model. In order to get this particular Nano to work, a kernel module is built to support the CH-340 serial to USB chip on the Nano. Most Arduinos use FTDI chips, but these are somewhat more expensive than the CH-340, so some Chinese clones substitute the part. The option in the Linux kernel is:

USB Winchiphead CH341 Single Port Serial Driver

One issue that presented itself is that if the rosserial node was used more than once, the Nano would hang. On this particular chip, this appears to be related to the pyserial library. On the Jetson, pyserial is version 2.6, which exhibits the issue. Rolling back to version 2.5 seems to fix the problem for this particular board.

To install pyserial 2.5:

$ sudo apt-get install python-pip -y
$ pip install --user 'pyserial==2.5'

For more information see:
https://github.com/ros-drivers/rosserial/issues/219

The post Jetson RACECAR 11 – Arduino ROS Node for Car Control appeared first on JetsonHacks.

Anthony Virtuoso’s Jetson TX1 Rover Project


Anthony Virtuoso from New York, New York just finished building a hardware stack for a rover robot based on the Jetson TX1. Now he’s getting ready to write the software that allows the robot to run autonomously. Best of all, Anthony has graciously agreed to share the robot build with us here on JetsonHacks!

Background

By day Anthony Virtuoso is a Senior Software Development Engineer at Amazon in NYC, where he is working on building a massive data analytics platform for several thousand users. The platform is built on Amazon Web Services and the associated ecosystem. Big metal software. Anthony feels that the rapidly maturing Machine Learning and Deep Learning fields can deliver innovative features for his group’s customers.

Anthony is well qualified to make such assessments as he holds a Master’s Degree in Computer Science (Machine Learning) from Columbia University. Currently Machine Learning is all about utilizing the GPU. Money quote from Anthony:

I personally learn by doing, so I needed a project where I could use this technology to solve a real world problem. I needed a way to see, first hand, what Nvidia’s CUDA or OpenCV could really do when pitted against a top-of-the-line CPU in an intensive task. So, I did what any bored engineer would do: I fabricated a complex problem to answer a simple question: “How difficult is it to use a GPU to speed up a largely compute-bound operation?”

But why build a robot?

I’m a software engineer by trade but I’ve never really been able to get the opportunity to work on/with hardware that enables my software to interact with the physical world in a meaningful way. For that reason and because the Jetson seemed like such an amazing platform I set out to build an autonomous rover… but to do so a bit differently. I had read up on ROS and their navigation stack but before handing control over to these seasoned frameworks I wanted to understand how far a naive implementation could go… basically “Why is SLAM and navigation such a hard problem to solve?”

Hardware Build

Here’s what the completed hardware looks like. The project is in the ros_hercules repository on Github.

External View of the Hercules Rover
Internal View of the Hercules Rover

The robot uses a couple of the usual suspects for sensors, a Stereolabs ZED stereo camera and a RP-LIDAR unit which is a cost effective 2D LIDAR for robotic applications.

Software

With the hardware base well underway, Anthony is starting to turn his attention towards the more interesting part of the project, which is the robot software. Included in the ros_hercules README.md are several great tips and tricks for interfacing with the rover sensor hardware and micro controllers.

It promises to be very interesting (and fun!) to watch an experienced machine learning expert apply and explore their craft here.

Conclusion

Stay tuned for articles by Anthony Virtuoso here on JetsonHacks about the Rover Project.

The post Anthony Virtuoso’s Jetson TX1 Rover Project appeared first on JetsonHacks.

Thoughts on Programming Languages and Environments – Jetson Dev Kits


Over the last couple of years, I have been thinking and reflecting on programming languages and environments. Programming as a skill (or art depending on how you view such things) has gone through many changes over the last 10 years. Probably the most profound change is in the complexity of the hardware that needs to be controlled, and the scale of interaction that takes place.

Background

The two most obvious areas of change are smartphones/tablets and web services at scale. A personal computer from 2006 typically had < 1GB of main memory, 1024×768 graphics, and a < 500 GB hard drive. Multiple processor core designs (i.e. the Intel Core Duo) were just being introduced into the market in that time frame. For programming, use of C# on the Windows PC with the Visual Studio IDE was starting to become prevalent (though people were using a wide variety of languages), and on the Macintosh, Objective C in the Xcode IDE tended to be the weapon of choice. Another popular choice for both platforms was Java, though consumer facing applications were rarely delivered in that form.

Today’s smartphones are much more capable, and in addition have a large number of attached peripherals such as phone and Bluetooth radios, cameras, an IMU, and touch screens. Smartphones are mostly programmed using Objective C or Java.

The Web backbone was much more ad hoc than it is today; Facebook had around 5M users (it’s marginally larger now), YouTube had 50M users when Google bought them in 2006, Twitter was just founded in 2006, and Netflix started its streaming service towards the end of that year. Larger companies mostly had unique infrastructures, though colocation and managed network spaces were starting to become popular. In fact 2006 was the year Amazon introduced Amazon Web Services, which turned much of the Internet into what it is today. Programming on the web back then was much more primitive; there was a lot of hand coded Javascript, CGI, PHP, and CSS. The major technology developments of the day were battle hardening the open source Apache web server and building web services to run at scale. A lot of Apache is written in C or C++, and the inevitable migration to the ‘free’ Linux operating system to run Apache made sure that much of the web was run on commodity PCs.

Today of course, Amazon Web Services (AWS) rule the lands. There are several large outposts, such as the Google land and the Facebook lands, but for the most part AWS is the go to for everyone from startups to companies such as Netflix. AWS takes care of most of the basic architectural plumbing that transparently scales to millions of users on demand.

Back in 2006, most embedded systems were written in assembler or cross compiled in C or some low-level domain specific language. Some of the larger embedded systems were built with Java. Note that the Tegra is much different from what historically has been referred to as an embedded system, much more like the processor in a smartphone than something like an embedded controller.

We’ll call this The Crossover, where embedded systems need to be thought of more as a full computer system than a low level device used to control simple hardware. Conceptually this is something similar to what happened when computer data base programmers realized that there was enough main memory on computers to hold entire data bases in memory. This required a major paradigm shift in thinking, as up to that point most of data base management was about dealing with keeping the data base synchronized with disk. Note: Companies such as MapD are now keeping databases in GPU memory and working on everything in parallel, which will require another shift.

Another tidbit, Robot Operating System was originally developed in 2007 at the Stanford Artificial Intelligence Laboratory in C++, Python and LISP.

Cluster Fucks are Scalable

If you went to an engineering school, one of the first things you learn is that “Cluster Fucks are Scalable”. In polite company we refer to them as Charlie Foxtrot, here we’ll refer to them as CFs. The first day of Engineering 101, you are presented with this textbook case: Northeast Blackout of 1965. The basic story is that a technician incorrectly set a protective relay too low on a power transmission line. When a small surge of power caused the relay to trip, power was diverted to other lines. The added power caused properly set relays downstream to trip and reroute the incoming power. The ripple effect left over 30 million people over 80,000 square miles without power for up to 13 hours. The professor suggests, “Don’t let this be you”.

A major lesson here is that a system may “Operate as designed, but not as intended”. Also note that the system was not sabotaged or hacked, which are of greater concern today.

Most people aren’t very good with parables or metaphors, and think “I am not a power engineer. This won’t happen to me!” On top of that, computer scientists/programmers tend to be intelligent (“Too clever by half”) and believe that they will always engineer perfect systems and plan for all possibilities/edge cases. I’ll let you guess what happens next.

There are many famous CFs in the computer programming world. There are several root causes, the first of which starts with someone coming up with a really good idea and implementing it. What happens next is that the implementation is not what we’ll call “engineered for success”, usually with mitigating circumstances. A classic example is the “Twitter Fail Whale”, an image of a whale being lifted by birds shown to users in the event of a Twitter service outage back in the late 00s. That the image was famous for this exposure tells the story.

To be fair, at that time it was very difficult (and expensive) to build an exponentially growing system serving millions of users. Basically they ended up nuking the whole thing and bringing in an engineering group that knew how to build that type of system at scale. The pool of engineers that knew how to build at that scale was very small back then. There’s also the realization here of network effect, systems that grow exponentially in a very short amount of time. After all, it is getting to the point where most computers in the world are connected together! Cascading failures seem very real in the computer world all of the sudden.

There are many other causes of CFs, of course. For the purposes of this discussion, we’ll discuss “death by a thousand paper cuts”. You’ve probably experienced these types of projects yourself, where the underlying implementation has what feels like an unlimited number of issues that need to be fixed. This may be because the system is old and crufty, or more likely that the original engineering was lacking or doesn’t accurately reflect the underlying model. You may also find that the project has a lot of people who say “To fix it, can’t we just … ?”. That’s usually a tell that something is really wrong. There can also be another cause: the technology you’re building on invites disaster.

That brings us to Programming Languages.

Off to Part II

The post Thoughts on Programming Languages and Environments – Jetson Dev Kits appeared first on JetsonHacks.

Thoughts on Programming Languages and Environments Part II – Jetson Dev Kits


In our previous discussion about programming, we discussed what the target development environments were like 10 years ago. Desktops were generally programmed using C# on Windows PCs, and the Macintosh used Objective C. Of course in such a large population there is a wide variety of languages being used, but for the most part that’s how apps were written.

On the web, it was rather a mish-mash of different technologies. It should also be noted here that three operating systems were commonly used. Windows on the PC (and some servers), OSX on the Mac (a Mach kernel, Unix underneath the GUI), and Windows or Unix/Linux in the network server area.

More Background

Many people treat operating systems and computer languages as religions. Here’s an interesting question to ask. “What do you think programming will be in 50 years?”. Where will the current programming paradigms fit in, if at all? We’ve all seen the science fiction movies with life-like robots, and pervasive virtual/augmented reality. People are today just beginning to understand machine/deep learning (even though it has been around for 50 years). What will programming even mean?

If your answer is that you’ll be mostly working from the command line on your Terminal, read no further.

If you are still reading, think about this: “How will computer programming become simpler, and less error prone?”

Let me be clear, if you would have asked me 35 years ago what programming would be like today and shown me the current state of affairs, I would be a broken old man now. Oh wait …

It certainly feels like we’ve gone in a circle. Ok, this is about programming, so we’ve been going in a loop. Remember that Unix was first released in 1971, and the C programming language came out in 1972. Unix was rewritten from assembler to C, with one of the benefits being portability. Also remember that Unix was a research oriented operating system, meant to help better understand operating systems. A “Unix Philosophy” developed over time.

Time passes, a GUI is added (though shunned by many, especially early on), Linux comes along. On the desktop, Unix struggles against Windows and Mac, but is reinvigorated in the late 90s by the movie industry and the Internet Web server market.

The movie industry uses Unix in two ways. First, the company SGI creates and pioneers the use of OpenGL for 3D graphics and sells a version of Unix on their boxen. Second, the boxen are networked together for rendering graphics scenes such as the dinosaurs in the movie ‘Jurassic Park’.

At the same time Sun Microsystems, who was selling BSD Unix based boxen, was growing by selling into what would later become known as the 2000 Internet bubble. The Internet had become the new wild west with an insatiable thirst for networked machines to run their web services. Sun Microsystems also invented the programming language Java. Java was initially promoted as a platform for client-side applets running inside web browsers.

Here’s the loop we talked about. In the mid 1990s, a company called Netscape built the first commercially successful web browser. Netscape also introduced Javascript, a programming language which runs within the browser environment. Javascript and Java have some outward similarities, but are fundamentally quite different. By including Javascript inside the browser, it would become perhaps the most ubiquitous programming language in the world. The Netscape web browser (now under Mozilla) provided the desktop experience, both on Windows and Mac. The result is that for the most part the Unix boxen were sent to network closets serving web pages to a starving world.

The tools built to serve the web pages in those early days were not very sophisticated, a collection of scripts or simple script building tools. Pages being served were mostly text, as most users were using what was called a “dial-up” connection. A box called a modem was plugged into a telephone line, and then the modem was plugged into the desktop computer. The speed of the data transmission was usually around 56 kbps, or 0.056 Megabits per second. Today a slowish cable modem provides 30 Megabits per second.

So after 20 years of progress on the desktop, the PC became a text serving time-sharing terminal all over again. It took almost another 10 years before a web browser could reliably deliver anything like a desktop experience on the given infrastructure. It really wasn’t until YouTube in 2005 that much video was being streamed, mostly because of the bandwidth requirements. With the advent of YouTube, the die had been cast and infrastructure providers realized the dire need for more bandwidth and faster speeds.

Similarly, the wireless providers weren’t quite ready for the advent of the iPhone in 2007. AT&T was first up, and watched the iPhone bring their network to its knees. The iPhone brought with it a whole new paradigm of computing, morphing the desktop into an even more “personal computing experience” and a different idea as to what network connectivity means. For the purposes of our discussion, note that the iPhone is programmed using Objective C. Several competitors such as Samsung use the Java programming language for their Android phones.

One more piece of background. Objective C, Java and Javascript all claim to be what is called an ‘Object Oriented’ language. These languages were heavily influenced by Smalltalk, a programming language and environment developed at Xerox PARC during the 1970s. This is all stuff of legend now, when Steve Jobs saw Smalltalk running at PARC he ran with it and turned it into the Macintosh.

We’ve introduced most of the characters now. The programming languages:

  • C
  • Java
  • JavaScript
  • Objective C
  • Smalltalk

form the next part of the discussion.

On to Part III!

The post Thoughts on Programming Languages and Environments Part II – Jetson Dev Kits appeared first on JetsonHacks.

Thoughts on Programming Languages and Environments Part III – Jetson Dev Kits


In Part II of our discussion about programming, we talked about when some of the more popular programming languages came into existence, and how people used them. In this part of the discussion, we’ll talk a little about the environment the language C sprang from.

This series of articles will swing back to embedded programming on the Jetson! I think it is useful to understand the parallel to the way other systems have evolved over time. Unfortunately I just don’t have enough time to make the articles shorter.

Even More Background

You may have noticed the last article’s lack of mention of the Windows operating system underpinnings. Up front: the original Windows 1.0 was built using C and assembler. The preceding IBM PC operating system was the command line MSDOS, written strictly in assembler. Because of the very large user base of the preceding MSDOS, Windows has always had a much harder time of things. Basically everything has to be backwards compatible, without the luxury of known hardware configurations. Everyone is familiar with the ubiquitous ‘Blue Screen of Death‘, which was the ‘Twitter Fail Whale’ of its generation(s).

OK, the major desktop and network machines use C or variations thereof for low-level work. That is a compelling reason for using C to program embedded processors, so we’re done. Not so fast …

Here’s the original paper, “The UNIX Time-Sharing System” by Dennis M. Ritchie and Ken Thompson of Bell Laboratories, from when Unix was introduced to the world. The interesting part is the machine that Unix was developed on, the DEC PDP-11.

The PDP-11 was an innovative machine. The particular model in the paper, the PDP-11/40, cost about $40K USD at the time (1974). The PDP-11 was a 16 bit processor with 144KB of memory and a 40 MB hard drive. In case you don’t recognize the term 144KB, it means 144 kilobytes, kilo here meaning 1024. So 144*1024 bytes, or approximately a tenth of a megabyte. Of that physical 144KB, the Unix operating system took up about 42KB, leaving a generous 100KB or so for application and user programming.

If you are a ‘modern’ day programmer, it is hard to wrap your head around those numbers. An entire operating system in 42KB? Today a $5 Raspberry Pi Zero has a 32 bit processor and 512MB of onboard RAM. As you might imagine, this meant that memory was an extremely precious resource, and because the processors were relatively slow, execution speed was a major concern. This is also your first clue as to the mindset of programmers of the day, and how some of those programming ideas have persisted over the last 40 odd years.

In those days most computer printers used continuous feed paper with perforations at page breaks. This made it possible to print out what were called ‘computer listings’ or ‘program listings’ on one long sheet of paper. It was common to see people spread their program listings on their desks or on the floor down a hall and read/markup/write their programs. They would frequently act as if they were ‘computers’ themselves, going through the execution of critical pieces of code and jumping from page to page imagining how the program executes. This is reminiscent of the way that Napoleon used maps laid out on the floor to visualize battles before going to war, getting a feel from the ‘virtual reality’ technology of the day.

Remember that the Unix OS executable was only 42KB, which means that the source code for the entire system was probably in the 100K line range spread across 4400 files. My guess would be < 10K lines of code for the kernel. One could easily print the entire kernel in a listing and learn it. A much different environment from today, where the Linux kernel is around 15 million lines of code. That’s not quite an apples-to-apples comparison, as driver support in modern Linux is around 8 million LOC and another 2 million LOC is architecture support. The core Linux kernel itself is probably around 200K LOC. You would need quite a few trees and an awfully long hallway to print it all out.

At the time of the Unix paper, there were 75 users of Unix. There are slightly more now. The main point here is that the developers of Unix had a different level of familiarity with the system than people can have now. At the time, they were dead serious when they referred people to the source code as the real documentation on how the system works. The system at the time was small and simple enough that one person could understand it by reading the source code. Of course, such advice travels through the ages and eventually becomes part of the culture. People to this day will tell others to look “through the source code” to understand the OS.

Eventually, about 20 years in, enough people had developed a difficult relationship with Unix that they responded with a friendly book, “The UNIX-HATERS Handbook“. Money quote:

Our grievance is not just against Unix itself, but against the cult of Unix zealots who defend and nurture it. They take the heat, disease, and pestilence as givens, and, as ancient shamans did, display their wounds, some self-inflicted, as proof of their power and wizardry. We aim, through bluntness and humor, to show them that they pray to a tin god, and that science, not religion, is the path to useful and friendly technology.

Some of the criticisms from the book have been addressed in the following two decades. The book was written before the Internet became popular, so it was possible to have an actual polite discourse about the subject. But at the end of the day, most users don’t really get a say in the matter.

For example, I have no idea how this web page is being served, by what kind of machine, or what kind of software. All I know is that I upload this amazing content to a service, and the service delivers it on demand. I can control parts of the interaction obviously (such as how I produce content), but I don’t really control the OS in the data center, or the set-top box, the phone, or machine that you’re viewing it on. Back to programming …

One way to look at C is as a portable assembler. The language itself is pretty simple, and it is easy to imagine the mappings of structs and such directly to hardware. It is also very assembler like in that there is no safety net for things like memory allocation/deallocation, range checking structure/array fetch/stores, or illegal memory access due to things like invalid pointer arithmetic.

It’s also easy to be seduced by the romantic idea of a handful of really smart people writing a bug free, beautiful and efficient operating system in C which runs in 42KB. You can even imagine that scaling to some degree and still have warm fuzzies. Then you start reading stats from the Linux foundation [2015]:

Regular 2-3 month releases deliver stable updates to Linux users, each with significant new features, added device support, and improved performance. The rate of change in the kernel is high and increasing, with over 10,000 patches going into each recent kernel release. Each of these releases contains the work of over 1,400 developers representing over 200 corporations.

Since 2005, some 11,800 individual developers from nearly 1,200 different companies have contributed to the kernel. The Linux kernel, thus, has become a common resource developed on a massive scale by companies which are fierce competitors in other areas.

and:

“The rate of Linux development is unmatched,” the foundation said in an announcement accompanying the report. “In fact, Linux kernel 3.15 was the busiest development cycle in the kernel’s history. This rate of change continues to increase, as does the number of developers and companies involved in the process. The average number of changes accepted into the kernel per hour is 7.71, which translates to 185 changes every day and nearly 1,300 per week. The average days of development per release decreased from 70 days to 66 days.”

They’re very proud. This also leads to having a “push forward” culture which requires that whenever a bug or issue is encountered, the first question asked is “Do you have the latest updates?” What does the term “latest” mean in a context where there are 8 changes made every hour? After you “update”, were you sure that other issues weren’t being introduced in other unrelated areas?

Remember in Part 1 of this series where I stated that sometimes Cluster Fucks are created when the technology you’re building on invites disaster?

Take a quick glance at Open SSL Vulnerabilities and search for terms like ‘overflow’, ‘underflow’ and ‘heap corruption’. SSL is kinda important for everyone; it would be really swell to be able to rely on it without worrying about hackers attacking you. The programming language and environment have not served its developers well.

In part, the ‘deficiencies’ of C are magnified because there is a new programming paradigm, in which thousands of people contribute to software projects. The whole idea of thousands of contributors with disparate programming backgrounds is a new phenomenon, which suggests that programming languages for such use need a bit more safety than something like C provides.

On to Part IV !

The post Thoughts on Programming Languages and Environments Part III – Jetson Dev Kits appeared first on JetsonHacks.

Thoughts on Programming Languages and Environments Part IV – Jetson Dev Kits


In Part III of this series, we discussed how the Unix kernel came to life using the C programming language, and a little bit of the genesis of the kernel. We also noted that there are some issues using C as a general purpose programming language in modern-day use.

Surprisingly, with the exception of Windows, the popular modern-day operating systems all use Unix or Unix derived kernels. The difference between the operating systems tend to be how the rest of the OS is implemented. Android uses a Linux kernel with higher level functions developed in Java. OSX and iOS use a Mach kernel with Objective C on top. Linux typically implements the higher level functions in C itself. Each OS has some assembler of course, and there is a little C++ sprinkled here and there.

Changes!

As we noted earlier in the series, the last ten years have brought major changes in hardware. Multi-core CPUs are commodities, GPUs are standard, and networking is taken for granted. The types of peripherals supported have also expanded greatly, especially in the mobile arena. Radios, IMUs, and multi-touch screens are now prominent.

Another major change is the amount of information that programmers have available. This information comes from places like Stackoverflow. Another valuable source of information about coding comes from the open source repositories of Github. Github holds entire treasure troves of working code which solve a wide variety of computing issues. These resources are contributed and available globally. Life be good!

Well, mostly good. As it turns out, there are some issues. While hardware is much more capable, software has lagged behind. The array of new hardware needs to be supported, and as we’ve discussed, the current tools have their genesis 15 years ago for C#, at least 20 years ago in the case of Java and Objective C, and 40 for C itself. Some of the “newer” hardware capabilities, like multiple CPU cores, are just hard to program and control at this point.

While there is a lot of great code on Github, and some wonderful answers on Stackoverflow, we also know that there are equally poor examples on both. How do you tell the difference? I’m sure you’ve run across Github repositories that provide a good 80% answer, but would take a major rewrite to turn into code that is suitable for your project. People have picked up the habit of copy/pasting Stackoverflow answers into their code. How do you know if code has been written by a seasoned developer, a researcher, or someone just starting out? Is the project a one-off demo? Is it a research project? Is it for production code use?

The Commercial Approach

As it turns out, many companies face much the same issues. High tech companies like Apple and Google have programmers with a wide range of experience. Some engineers have just been recruited out of college, others have been programming for decades. One of the major questions is, “How do you bring in new people and make them productive?” This is also at unprecedented scale, where there are billions of users of a company’s products.

Companies have distinct advantages over the open source community. They get to pick who works there, have money to invest, infrastructure and so on. They also have a vested interest in helping their developers build reliable software, because any issue can have a great financial impact. Let’s say you’re a backend programmer at Google. Today they do about 40K search queries a second. Each Google query uses 1,000 computers to retrieve an answer in 0.2 seconds. Then there’s the AdSense server area, which is generating actual revenue. What happens if a problem arises? How long will it take to detect it, find it and then fix it? Time is money at an unprecedented rate.

There are several paths companies can take to help level the playing field. There are some common defense mechanisms, mostly having to do with program memory management. Here are two of the new developments from Google and Apple, who are both building new open source programming languages.

Google & Go

One of the areas that Google has been working on is what I’ll call the “working man” approach to programming. It turns out that in 2006 Google hired one of the creators of Unix, Ken Thompson. Another Bell Labs alumnus, Rob Pike, along with Robert Griesemer, started sketching the Go programming language. From “Go at Google: Language Design in the Service of Software Engineering“:

The Go programming language was conceived in late 2007 as an answer to some of the problems we were seeing developing software infrastructure at Google. The computing landscape today is almost unrelated to the environment in which the languages being used, mostly C++, Java, and Python, had been created. The problems introduced by multicore processors, networked systems, massive computation clusters, and the web programming model were being worked around rather than addressed head-on. Moreover, the scale has changed: today’s server programs comprise tens of millions of lines of code, are worked on by hundreds or even thousands of programmers, and are updated literally every day. To make matters worse, build times, even on large compilation clusters, have stretched to many minutes, even hours.

Go was designed and developed to make working in this environment more productive. Besides its better-known aspects such as built-in concurrency and garbage collection, Go’s design considerations include rigorous dependency management, the adaptability of software architecture as systems grow, and robustness across the boundaries between components.

Apple & Swift

Apple comes at things from a different perspective. Since the introduction of the Mac, Apple has been thought of as a graphics front end for computers. For many years, Objective C has been their ace in the hole. Objective C integrates C with Object Oriented Programming in the Smalltalk tradition. Objective C offers automatic memory management, and a lot of other things that make programming nice. Objective C has been around since the early 1980s. While it has served its master well, Apple is rolling out a new programming language called Swift to replace Objective C. Swift takes the lessons learned over the last few decades and rolls them into a new programming environment.

From Swift.org:

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns.

The goal of the Swift project is to create the best available language for uses ranging from systems programming, to mobile and desktop apps, scaling up to cloud services. Most importantly, Swift is designed to make writing and maintaining correct programs easier for the developer. To achieve this goal, we believe that the most obvious way to write Swift code must also be:

Safe. The most obvious way to write code should also behave in a safe manner. Undefined behavior is the enemy of safety, and developer mistakes should be caught before software is in production. Opting for safety sometimes means Swift will feel strict, but we believe that clarity saves time in the long run.

Fast. Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). As such, Swift must be comparable to those languages in performance for most tasks. Performance must also be predictable and consistent, not just fast in short bursts that require clean-up later. There are lots of languages with novel features — being fast is rare.

Expressive. Swift benefits from decades of advancement in computer science to offer syntax that is a joy to use, with modern features developers expect. But Swift is never done. We will monitor language advancements and embrace what works, continually evolving to make Swift even better.

As is typical with most things Apple, everything seems very happy and fluffy 😉

The Others

Of course there are also the tried and true approaches. Most companies use Python and Java to help with their programming issues. Some use C++, with mixed results. C++ does not tend to lend itself to leveraging good results across a large programming population.

Conclusion

This series of articles has been some of the thought process and background for how to go about starting to program embedded systems on the Jetson Dev Kits.

Takeaways

First, you need to be able to efficiently interface with C routines. There’s too much existing infrastructure out there to ignore, including interfacing with the operating system kernel.

Second, you need a memory safe language (automatic garbage collection, range checking). While you may be able to produce flawless code with memory cohesion, more than likely others can not. There will be times when you will need to use third-party libraries, and you need to reduce the risk of memory corruption as much as possible. You don’t want these guys patching your code for you.

Third, multi-core execution and concurrency is a big deal. It’s also hard to get right. Make sure your programming language helps you with this.

Fourth, make sure that the programming language has enough critical mass behind it so that you can leverage other people’s knowledge. This is true for how to actually use the language or programming environment itself, as well as the availability of libraries and such. You can be using a kick ass language, but if you have to figure out everything yourself and write all the libraries that’s a problem.

Next Steps

Now we’re off to start working on the Jetson. To be clear, this is embedded programming in the large, not for things like base Arduinos and such. We’re talking robots and vision systems and the like. The inclusion of Google’s Go programming language and Apple’s Swift was intentional. Remember that the iPhone runs ARM code just like the Jetson does, which makes Swift a natural candidate for use. Android and Java could also work on the Jetson in a similar role.

From a different perspective, Go seems like a great candidate for lower level programming for something like robots. As we’ve talked about, memory management and concurrency are difficult, and in distributed systems even more so.

The post Thoughts on Programming Languages and Environments Part IV – Jetson Dev Kits appeared first on JetsonHacks.


Go (Golang) – NVIDIA Jetson Dev Kits


The Go programming language (sometimes called Golang) is a modern programming language which aims to “make it easy to build simple, reliable and efficient software”.

Background

As we discussed in the four part series “Thoughts on Programming Languages and Environments – Jetson Dev Kits” (part 1 is here), there is an impedance mismatch between the programming languages of the past and current hardware. In no particular order, here are some major concerns when writing new software today:

  • Memory Safety
  • Concurrency
  • Network/Web Integration
  • Integration with Baseline Software

For the purposes of this discussion, we will talk about these in the context of embedded systems. The Jetson Dev Kits in particular.

Memory Safety

Memory Safety is probably the cause of most computer application issues. These issues deal with memory allocation and memory access. The “three major” programming languages on the most popular platforms, C#, Java, and Objective-C all provide good support in this area. As do dynamic languages, such as Python and Ruby. These languages use built-in tools like automatic garbage collection and run-time range checking for memory access.

On Linux based machines (and the underlying kernels of Windows, MacOS/iOS, and Android), the C programming language does not lend much support for this particular issue. There is a much broader ecosystem of languages in use at the application programming level on Linux than on most other platforms. While there are “memory safe” languages that run on Linux, there are also languages that do not have that support built into the language, foremost of which is C/C++.

Concurrency

Concurrency allows programs to be thought of as a collection of components, which can possibly be run in parallel. The seminal paper on concurrency, Communicating Sequential Processes, was written in 1978 by Tony Hoare.

The paper was written before multi-processor machines were widely available. The paper is a little math-y, but if you are a computer programmer/scientist the paper is considered a must read. If you haven’t read it in awhile, hunt it up and read it again.

With the advent of inexpensive multi-core/multi-processor computers, huge performance gains make a whole new range of applications possible. It also helps to give perspective about computation on the GPU.

Currently most mainstream implementations of CPU concurrency are rather ad-hoc and not supported at a language level. Concurrency is notoriously difficult to get right in the general case; just mention the word ‘deadlock’ to a programmer and see the pain and fear in their eyes. Most platforms provide concurrency at an OS level, with programming languages calling platform libraries or lightly wrapped versions thereof.

Network/Web Integration

As everyone knows, everything needs integrated network/web access. Just a few short years ago, the cost to talk to a network from an embedded device was prohibitive. Now it is so inexpensive that every device is expected to belong to the Internet of Things (IoT).

Integration with Baseline Software

It is difficult to build entire programming ecosystems from the ground up. Just the need to interface with the operating system means that a new programming language must be able to interface with ‘C’ libraries (C++ would be nice also).

Go

Go (golang.org) is designed to address the above issues head on. The creators of the Go language work at Google. Google has a large group of programmers with various degrees of experience working on world-scale systems. To facilitate programming in this type of environment, the authors realized that the programming language must have the above attributes built in, it must be easy to scale, and people should easily be able to ascertain the intent of the program itself. In other words, programs should be “easy to read”.

Think of this as a “working man’s” programming language. While Go may not have all the bells and whistles of some of the other more flexible languages, it does “the right thing” in its intended domain, which is mostly writing servers.

Answers!

For Memory Safety, Go implements an automatic garbage collector and does range checked storage.
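A minimal sketch of what run-time range checking buys you in practice; the slice contents and the `safeGet` helper are invented for illustration, not taken from any demo code:

```go
package main

import "fmt"

// safeGet demonstrates Go's range checking: an out-of-range index
// panics instead of silently reading arbitrary memory, and the panic
// can be intercepted with recover so the program keeps running.
func safeGet(xs []int, i int) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	return xs[i], nil
}

func main() {
	xs := []int{1, 2, 3}
	if _, err := safeGet(xs, 5); err != nil {
		fmt.Println("caught:", err) // index out of range, but no corruption
	}
}
```

In C the equivalent access would quietly read whatever bytes happen to follow the array; here the mistake surfaces immediately and locally.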

For Concurrency, Go implements channels with some of the grammar introduced by Hoare. Having concurrency built into the language itself ensures consistent use by all participants. Concurrency also enables programs to execute in parallel as the environment allows. As an added bonus, there is even a deadlock detector!
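Here is a minimal sketch of a CSP-style pipeline built from goroutines and channels; the stage name and values are my own choices for illustration:

```go
package main

import "fmt"

// square reads ints from in, writes their squares to out, then closes
// out -- one stage of a channel-connected pipeline in Hoare's style.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go square(in, out) // the stage runs concurrently with main

	go func() { // a producer feeding the pipeline
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()

	for sq := range out { // main acts as the consumer
		fmt.Println(sq)
	}
}
```

If the producer forgot to close `in`, the runtime's deadlock detector would report "all goroutines are asleep" rather than hanging silently.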

Not surprisingly, since Go is used to build servers, Network/Web support is also built into the language. It is trivial to write a simple web server which takes into account all the little nooks and crannies that one has to think about in such pursuits. Why is this important in an embedded device? IoT!! With everything on the web now, web programming needs to be a standardized part of languages.

While Go is built for the server world, it also has very good Integration with Baseline Software. There are tools which help build wrappers around C library calls and such.

Go is industrial programming.

Conclusion

This has been a little bit of the ‘why’ of Go, let’s look at using Go on the Jetson in Part II.

The post Go (Golang) – NVIDIA Jetson Dev Kits appeared first on JetsonHacks.

Go (Golang) Part II – NVIDIA Jetson Dev Kits


In Part I of our Go (Golang) discussion, we went over why Go might be a good programming language for embedded systems. It is certainly worth your time to go to their website and explore the information there. There’s an interactive workspace for trying the language on the website.

In this article, we will load Go on to a Jetson and look at a simple software application. The software app displays some numbers on a 7 Segment LED Display connected to the Jetson over I2C. In addition, the app implements a web server which sends Server Side Events (SSE) to any attached web browser so that the web browser mirrors the LED display. Looky here:

Hardware

Note: This demo has been tested on the Jetson TK1 L4T 21.5 and Jetson TX1 L4T 24.1 (32 bit and 64 bit versions). A Jetson TX1 with 64 bit L4T 24.1 is shown in the demo.

First, before powering up the Jetson, let’s wire up the LED Segment Display.

For this project, an Adafruit 0.56″ 4-digit 7-segment Display W/i2c Backpack – Green is wired to a Jetson. The Display is assembled per the Adafruit instructions.

On a Jetson TK1, here’s a wiring combination for I2C Bus 1:

GND J3A1-14 -> LED Backpack (GND)
VCC J3A1-1 -> LED Backpack (VCC – 5V)
SCL J3A1-18 -> LED Backpack (SCL)
SDA J3A1-20 -> LED Backpack (SDA)

For the TK1, here’s another article for using the LED Segment Display.

On a Jetson TX1, here’s a wiring combination for I2C Bus 0 (as shown in the video)

GND J21-6 -> LED Backpack (GND)
VCC J21-2 -> LED Backpack (VCC – 5V)
SDA J21-3 -> LED Backpack (SDA)
SCL J21-5 -> LED Backpack (SCL)

Note that the TX1 also has a I2C Bus 1 interface. See the J21 Pinout Diagram.

Software Installation

Once the board is wired up, turn the Jetson on.
Install the libi2c-dev library. In order to be able to inspect the LED Display, you may find it useful to also install the i2c tools:

$ sudo apt-get install libi2c-dev i2c-tools

After installation, in a Terminal execute (0 is the I2C bus in this case):

$ sudo i2cdetect -y -r 0

ubuntu@tegra-ubuntu:~$ sudo i2cdetect -y -r 0
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- --

You should see an entry of 0x70, which is the default address of the LED Segment Display. If you have soldered the address pins on the Display you should see the appropriate address.

Installing Golang

With the I2C library installed and the display attached, we’re now ready to install Go. The directory ‘gowork’ will be the working directory. You may name it to your liking.

$ sudo apt-get install golang-1.6
$ mkdir -p gowork/src

Next edit bashrc to set the environment variable GOPATH, and have the PATH point to the Go binary.

$ gedit ~/.bashrc

Add the lines:

export GOPATH=$HOME/gowork
export PATH=$PATH:/usr/lib/go-1.6/bin

Now either source bashrc or close the current Terminal and open a new one.
Next, we’re ready to install the goi2c example from the JetsonHacks account on Github. Go understands how to retrieve Github repositories:

$ go get github.com/jetsonhacks/goi2c

Configuration Note

You will have to set the I2C bus for the demo, depending on how you wired the LED Display to the Jetson. The line that needs to be changed is located in src/github.com/jetsonhacks/i2cExampleServer/main.go

backpack,err := ledBackpack7Segment.NewLedBackpack7Segment(1, 0x70)

In the repository, the bus is set to 1 (the first parameter). You may need to change it to 0.

Now let’s compile the example:

$ cd gowork/src
$ go install github.com/jetsonhacks/goi2c/i2cExampleServer

At this point, the example, ‘i2cExampleServer’, is in the gowork/bin directory.
For this example, the server requires the folder ‘templates’ to be in the same directory as the demo executable. Copy the templates directory to gowork/bin or create a new folder and copy the demo executable and the templates folder into it.

Before running the server, you may want to open a browser to monitor the display. Open a HTML 5 browser and point it to:

http://localhost:8000/test

To run the demo, go to the folder in which you have placed the executable, and run the demo with sudo permissions. E.g.

$ cd ~/gowork/bin
$ sudo ./i2cExampleServer

Once the demo starts, the LED Segment Display will run a demo cycle. The web server sends out web pages to reflect the current LED Display. Type ^C to end the demo.

Summary

This Go sample app is not idiomatic Go programming. Instead it is a sketch of some of the functionality that may be useful in an embedded environment. It was also my first attempt at Go programming.

The Good

There’s not much to comment about on memory safety, as it’s pretty much invisible to the programmer.

Concurrency is a difficult thing, but Go has a good mechanism for dealing with it. Play with the samples and write some of your own code to get a feel for what it can do for you.

Server side events being pushed out from a server are pretty typical of something that you may want to do from an embedded device. Think of it as a stock ticker pushing data from the device. As expected, the web server part is pretty straightforward. Once you start thinking ‘the Go way’, it’s a simple task to write the server code.

Actually interfacing against a low level device, like the I2C Segment Display, feels to be about the same amount of work as a C program. If no memory allocation happens in the device driver, it’s probably just as easy to wrap an existing C library for use.

The thing compiles fast, I’ll certainly give it that.

The Bad

Drawbacks? Sure. On Linux there’s really not an agreed upon IDE for Go with a good editor, project management and integrated debugger built-in. This seems to stem from the roots of the language designers who aren’t real believers in such things. This leads to a whole lot of what I call ‘Programming by guessing’.

When a professional programmer starts out in the world, the first thing they realize is that they don’t get to work on their own code. Most are thrown in the deep end of very large programs that have an issue which needs fixing. It’s one thing to work on your own code and fix issues, it’s another thing to be thrown out into the sea on your own. Debuggers and project organizers act as life preservers. It’s rather harsh not to have those available in a modern way, especially when you’re first starting out.

Conclusion

All in all, fairly positive. Certainly for having a server of any type on the device, Go should be a definite candidate. If there’s some particular task which needs concurrency at the CPU level, Go should be on the checklist. Certainly worth kicking the tires.

The post Go (Golang) Part II – NVIDIA Jetson Dev Kits appeared first on JetsonHacks.

What is Programming Again? – Thoughts on Programming


Exactly what is programming again? We’ve been discussing some programming languages, and some assumptions about what programming should be like today. Back in 1973, Bret Victor gave a seminal talk on “The Future of Programming”. Well, actually he gave the talk in 2013 as if he were giving it in 1973 at the DBX Conference. Confused? This is a great talk that is about 30 minutes long. Looky here:

The overhead projector was much more fun than a PowerPoint deck.

Discussion

Usually there isn’t homework in these articles, but I think it’s useful to take a look at different perspectives on any given subject so that you can better understand some of the ramifications of the bias and decision-making based on a given set of assumptions.

If you’re of a certain age and background, you have been exposed to the ideas in the above talk. You probably have implemented your own compilers and interpreters, built your own domain specific languages (DSLs), and other computer science-y things.

If you are a bit younger, you should have been given this information as background as part of your computer science education. However, Bret makes an absolutely brilliant and fundamental point: a couple of generations now view programming as dogma. People know how to program; here’s how you do it, and here’s the language that you use. The program architecture stacks are given: the OS, the graphics library, the desktop/main screen, APIs, and so on.

Run time environments

Parts of this are true. If you are programming Windows boxen, you have a very well-defined set of tools it is in your best interest to use if you want your program to work and for you to keep your job. Same with Macintosh and iOS. Android, check. If you are programming web boxen in the cloud, then you get a much broader selection of tools. You might get to use one or two programming languages on the server, and then deliver some type of magical javascript/css mish-mash to the browser.

Lock-In

It is in the interest of each of the computing ecosystems to have everything work together. So each platform has the same calling conventions, and everyone worries about things like the size of any particular number being sent to a particular API, in case the API doesn’t understand such things. In other words, most of the arguments about programming are literally bookkeeping: the size of this particular number, where this particular memory gets allocated, where and how things should be stored. People even come up with commandments such as DRY (Don’t Repeat Yourself!) because repetition makes the bookkeeping harder. Until fairly recently, things have worked with this one model, from this one perspective.

All this works. Until it doesn’t.

Let’s look through a different perspective, one that comes from biological systems. People disagree as to whether humans are more sophisticated computational devices than computers, but one thing is clear: humans calculate and learn differently than current computers do.

Humans also don’t seem to obey the ‘commandments’ of computing. For example, each cell in a human (except for erythrocytes, you’ll point out) contains DNA composed of about 3 billion base pairs. DNA is a map of just about everything physical in a human, and it’s replicated tens of trillions of times in a 150 lb person. At a hardware level, biologics repeat themselves. People have some protocols built in, but are flexible in negotiating different types of communication.

For example, two people who speak different languages can still communicate though not as efficiently as if they are both fluent in the same language. An important point is that given some time, the two people will be able to communicate efficiently. Computers find that difficult.

Leave this Place

Surprisingly, if you give a human a set of protocols for a complex task, unlike a computer they can rarely execute the protocols correctly on the first try. In fact, it may take them several thousand hours to become extremely proficient on some tasks such as playing a musical instrument or playing a sport. But one of the main attributes that humans have is the ability to learn new things, and improve on what they were originally taught.

If you train an artist like a computer, you might first try giving them a coloring book to paint with a very specific color between the lines until they are proficient at painting. Then teach them to draw a straight line by giving the mathematical equation for a line (don’t forget negative slopes!), then a curve, and so on. You might share this actual idea with an artist, and watch their reaction. You are sure to be amused.

Throughout history people have been training to be artists, and each generation tends to bring with it different ideas and evolution on what it means to actually be an artist. Pointedly, we don’t do that with computers in the traditional sense. The question is, “Can we?”

To the Future!

Think about what was discussed in the video, and think about what’s going on now. A whole lot of people work squarely, and only, in the dogma world. People are just now getting around to thinking along the lines of the ideas in the video.

A biology-inspired metaphor, called neural networks, is similar to the ideas discussed in the video. The first randomly wired neural network machine (SNARC) was built by Marvin Minsky in … 1951! The idea itself isn’t new, but the discovery of backpropagation in 1975 by Werbos and the application of GPUs to the problem in 2012 by Geoff Hinton’s team now make teaching machines how to “think” possible.

You can see what a great conflict is about to happen! On one hand you have the standard guard, who think of the computer as a conscript who needs to be told exactly when and what to do, no deviation. This number is an unsigned 32 bit integer soldier! On the other hand, you have a group of people who believe you just show the computer a dataset and a goal, adjust some dials, sliders, and buttons, and have the computer figure out what it needs to know. Drive a car? Sure, give it enough sensor footage and it will figure it out. Play the game Go? Show it all the games of Go ever played, and AlphaGo can beat almost any human.

Of course you will ask, “How do you know that the computer learned everything it needed to know, for example, to drive a car safely?” The standard guard will go on and on about the edge cases and rigorous unit tests for each and every situation that they lovingly hand coded. The machine learning guys will have a sheepish smile and say “It works, doesn’t it? Oh, and it does it better than people do now.” This will play out great in the courtrooms.

Conclusion

In earlier articles we talked about system level programming languages which tend to be rather standard guard, low-level, do exactly what I say types of things. Those languages are certainly valuable, but should be thought of as the building blocks for building more intelligent and interesting systems.

For machines like the Jetson Dev Kit, there are software tools built-in for leveraging machine learning. Train models on a large system, deploy them on the Jetson. At the same time, it is important to be able to reliably communicate with peripherals and other devices without having to worry about memory safety or storage minutiae. Use the proper tool for the proper job.

The take away is that a lot of people treat computers and languages like religions. Believe this, or believe that, the other side is completely wrong. Burn heretics and all that. Hopefully you take another view with perspective. Computer science is in its infancy. Actively seek out new ideas and explore them.

Here’s a handy trick. Take the calendar year and add a leading zero, i.e. 2016 becomes 02016. Programming started around 01950. If you think everything is known about computing in this small slice of the 10,000 year calendar, you need to think again.
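The trick above is easy to script; here is a one-line illustration (shell, just for fun) that zero-pads a year to five digits:

```shell
# zero-pad the calendar year to five digits, per the handy trick above
printf "%05d\n" 2016   # prints 02016
printf "%05d\n" 1950   # prints 01950
```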

As Alan Kay says “The computer revolution hasn’t started yet!”

The post What is Programming Again? – Thoughts on Programming appeared first on JetsonHacks.

ROS Rviz – NVIDIA Jetson TK1


RViz is a 3D visualizer for the Robot Operating System (ROS) framework. In a lot of cases, RViz is run on a visualization workstation to monitor a robot.

However, sometimes it is useful to have RViz running on the robot itself. On the Jetson TK1 in particular this presents an issue, because running RViz causes a segmentation fault. Here’s the workaround:

The basic story is that one of the dependencies (pcre) is unhappy and needs one of the environmental variables unset. This seems to make everything happy and fluffy again.
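The video shows the exact variable being unset; as a sketch only, the workaround commonly reported for this pcre/glib segfault is to unset GTK_IM_MODULE before launching RViz (an assumption here, so verify against the video if your setup differs):

```shell
# commonly reported workaround for the RViz segfault on the Jetson TK1:
# unset the GTK input-method variable that trips up the pcre dependency,
# then launch RViz in the same shell
unset GTK_IM_MODULE
rosrun rviz rviz
```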

RViz
RViz

The post ROS Rviz – NVIDIA Jetson TK1 appeared first on JetsonHacks.

JetPack 2.3 Development Tools Released – NVIDIA Jetson Development Kits


JetPack 2.3 for the NVIDIA Jetson Development Kits is now out in the wild! JetPack 2.3 is available here.

JetPack

Background

JetPack is an on-demand package which installs all of the software tools required to develop on Jetson Development Kits. The JetPack installer includes host and target development tools, APIs and packages to enable developers full access to the Jetson Embedded Platform. JetPack runs on an Ubuntu 14.04 PC based host machine.

Major Features

The JetPack 2.3 release supports the Jetson TX1 Developer Kit running the 64-bit Linux for Tegra (L4T) 24.2 operating system (which is derived from Ubuntu 16.04). JetPack 2.3 also supports the Jetson TK1 Developer Kit, running 32-bit L4T 21.5. The TX1 in particular gains many upgrades and features. In addition to the move to Ubuntu 16.04, existing packages have been significantly upgraded, and a GPU inference engine, called TensorRT, is now available.

TensorRT

TensorRT is a deep learning inference engine which can double the performance of applications such as image processing, object detection, and segmentation on trained neural networks which support the prototxt model format.

cuDNN 5.1

cuDNN provides a CUDA accelerated library for deep learning that includes standard routines for convolutions, activation functions and tensor transformations. cuDNN includes support for LSTM (long short-term memory), and other types of recurrent neural networks.

CUDA 8.0

The NVCC CUDA compiler now achieves a 2x faster compilation time. Support for cuBLAS and nvGRAPH is now available.

Multimedia API

Access to the low-level Camera API and the V4L2 API has been improved in L4T 24.2.

Conclusion

Again, you’re sitting here reading this when you could be installing new goodness? Go get it!

The post JetPack 2.3 Development Tools Released – NVIDIA Jetson Development Kits appeared first on JetsonHacks.

Caffe Deep Learning Framework – 64-bit NVIDIA Jetson TX1


Back in February, we installed Caffe on the TX1. At the time, the TX1 was running a 32-bit version of L4T 23.1. With the advent of the 64-bit L4T 24.2, this seems like a good time to do a performance comparison of the two. The TX1 can now do an image recognition in about 8 ms! For the install and test, Looky Here:

Background

As you recall, Caffe is a deep learning framework developed with cleanliness, readability, and speed in mind. It was created by Yangqing Jia during his PhD at UC Berkeley, and is in active development by the Berkeley Vision and Learning Center (BVLC) and by community contributors.

The L4T 23.1 Operating System release was a 64-bit kernel supporting a 32-bit user space. For the L4T 24.2 release, both the kernel and the user space are 64-bit.

Caffe Installation

A script is available in the JetsonHacks Github repository which installs the dependencies for Caffe, downloads the source files, configures the build system, compiles Caffe, and then runs a suite of tests. Passing the tests indicates that Caffe is installed correctly.

This installation demonstration is for a NVIDIA Jetson TX1 running L4T 24.2, an Ubuntu 16.04 variant. The installation of L4T 24.2 was done using JetPack 2.3, and includes installation of OpenCV4Tegra, CUDA 8.0, and cuDNN 5.1.

Before starting the installation, you may want to set the CPU and GPU clocks to maximum by running the script:

$ sudo ./jetson_clocks.sh

The script is in the home directory, and is also included in the installCaffeJTX1 repository for convenience.

In order to install Caffe:

$ git clone https://github.com/jetsonhacks/installCaffeJTX1.git
$ cd installCaffeJTX1
$ ./installCaffe.sh

Installation should not require intervention; in the video, installing the dependencies and compiling took about 10 minutes. Running the unit tests takes about 45 minutes. While not strictly necessary, running the unit tests makes sure that the installation is correct.

Test Results

At the end of the video, there are a couple of timed tests which can be compared with the Jetson TK1, and the previous installation:

Jetson TK1 vs. Jetson TX1 Caffe GPU Example Comparison
10 iterations, times in milliseconds

Machine                              Average FWD   Average BACK   Average FWD-BACK
Jetson TK1 (32-bit OS)                       234            243                478
Jetson TX1 (32-bit OS)                       179            144                324
Jetson TX1 with cuDNN (32-bit OS)            103            117                224
Jetson TX1 (64-bit OS)                       110            122                233
Jetson TX1 with cuDNN (64-bit OS)             80            119                200

There is definitely a performance improvement between the 32-bit and 64-bit releases. A couple of factors contribute: one is the change from a 32-bit to a 64-bit operating system; another is the improvement in the deep learning libraries, CUDA and cuDNN, between the releases. Considering that the tests are running on exactly the same hardware, the performance boost is impressive. Using cuDNN provides a huge gain in the forward pass tests.

The tests are running 50 iterations of the recognition pipeline, and each one is analyzing 10 different crops of the input image, so look at the ‘Average Forward pass’ time and divide by 10 to get the timing per recognition result. For the 64-bit version, that means that an image recognition takes about 8 ms.
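That arithmetic, using the 64-bit cuDNN numbers from the table above, is simply:

```shell
# per-recognition time = 'Average Forward pass' time / number of crops analyzed
avg_fwd_ms=80   # 64-bit OS with cuDNN, from the table above
crops=10
echo $((avg_fwd_ms / crops))   # prints 8 (ms per image recognition)
```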

NVCaffe

It is worth mentioning that NVCaffe is a special branch of Caffe used on the TX1 which includes support for FP16. The above tests use FP32. In many cases, FP32 and FP16 give very similar results; FP16 is faster. For example, in the above tests, the Average Forward Pass test finishes in about 60ms, a result of 6 ms per image recognition!

Conclusion

Deep learning is in its infancy and as people explore its potential, the Jetson TX1 seems well positioned to take the lessons learned and deploy them in the embedded computing ecosystem. There are several different deep learning platforms being developed, the improvement in Caffe on the Jetson Dev Kits over the last couple of years is quite impressive.

Notes

The installation in this video was done directly after flashing L4T 24.2 on to the Jetson TX1 with CUDA 8.0, cuDNN r5.1 and OpenCV4Tegra. Git was then installed:

$ sudo apt-get install git

The latest Caffe commit used in the video is: 80f44100e19fd371ff55beb3ec2ad5919fb6ac43

The post Caffe Deep Learning Framework – 64-bit NVIDIA Jetson TX1 appeared first on JetsonHacks.

Build TX1 Kernel and Modules – NVIDIA Jetson TX1


In this article, we cover building a kernel onboard the NVIDIA Jetson TX1. Looky here:

Background and Motivation

Note: This article is for intermediate users, and the methods within are somewhat experimental. You should be familiar with the purpose of the kernel. You should be able to read shell scripts to understand the steps described.

When the Jetson TX1 first shipped, the operating system, L4T 23.1, was a hybrid affair: a 64-bit kernel with a 32-bit user space. The only practical route to rebuild the kernel was to use a host computer, because two different development toolchains are needed.

With the introduction of the L4T 24.X releases (currently L4T 24.2), both the kernel and the user space are now 64-bit. NVIDIA gives detailed instructions on how to build the system using a host computer. There are other good sets of instructions around, including “Compiling Tegra X1 source code” over at RidgeRun.

If you are building systems which require generating the entirety of Jetson TX1 system, those are good options. For a person like me, it’s a little overkill. Most of the time I just want to compile an extra driver or two as modules to support some extra hardware with the TX1. What to do, what to do …

As it turns out the 24.X kernels are mostly 64-bit, but there’s a sprinkle of 32-bit object files here and there. So here’s an idea: why not just grab the compiled 32-bit sprinkles from a host build, and put them into the kernel build process? That way the kernel and modules can be built on the Jetson TX1 itself without the need for a host development environment.

Now normally I am against developing on host machines when I can develop on a target machine. The TX1 is certainly competent enough for development. But in this case, it seemed worth it to build the needed 32-bit object files on a development host. Once the 32-bit files are built, it is a pretty straightforward task to build the rest of the kernel (with the associated modules) entirely on the TX1 itself.

Installation

The script files to build the kernel on the Jetson TX1 are available on the JetsonHacks Github account in the buildJetsonTX1Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX1Kernel.git
$ cd buildJetsonTX1Kernel

There are three main scripts. The first script, getKernelSources.sh, gets the kernel sources from the NVIDIA developer website, then unpacks them into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file. In the video, the local version of the kernel is set. The stock kernel uses -tegra as its local version identifier. Make sure to save the configuration file when done editing.
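If you prefer the command line to the configuration editor, the same local version change can be made with sed; a sketch, assuming the sources were unpacked to /usr/src/kernel as above, and using “-jetsonbuild” purely as a hypothetical identifier:

```shell
# set CONFIG_LOCALVERSION in the kernel config without opening an editor
# '-jetsonbuild' is just an example local version; pick your own
cd /usr/src/kernel
sudo sed -i 's/^CONFIG_LOCALVERSION=.*/CONFIG_LOCALVERSION="-jetsonbuild"/' .config
grep '^CONFIG_LOCALVERSION=' .config   # verify the change took
```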

The second script, patchAndBuildKernel.sh, patches one of the source files to more easily compile the kernel.

$ ./patchAndBuildKernel.sh

Then the script copies over the 32-bit object files, and proceeds to build the kernel and modules using make. The modules are then installed in /lib/modules/3.10.96[local version name].

The third script, copyImage.sh, copies over the newly built Image and zImage files into the /boot directory.

$ ./copyImage.sh

Once the images have been copied over to the /boot directory, the machine must be restarted for the new kernel to take effect.

Spaces!

The kernel and module sources, along with the compressed versions of the source, are located in /usr/src

After building the kernel, you may want to save the sources off-board to free some space (they take up about 3GB). You can also save the boot images and modules for later use, and to flash other Jetsons from the PC host.
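One way to save the sources off-board is a simple tarball; a sketch, where the paths follow the build scripts and the archive name is a hypothetical choice:

```shell
# archive the kernel sources before freeing the ~3GB they occupy
cd /usr/src
sudo tar czf ~/kernel-src-3.10.96.tar.gz kernel
tar tzf ~/kernel-src-3.10.96.tar.gz | head   # sanity-check the archive contents
# only after verifying the archive (and copying it off the Jetson):
# sudo rm -rf /usr/src/kernel
```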

Conclusion

For a lot of use cases, it makes sense to be able to compile the kernel and add modules from the device itself. This particular technique is the first attempt at that. Note that it is new, and not thoroughly tested at this point. Use it at your own risk.

Note

The video above was made directly after flashing the Jetson TX1 with L4T 24.2 using JetPack 2.3.

The post Build TX1 Kernel and Modules – NVIDIA Jetson TX1 appeared first on JetsonHacks.


Sony PlayStation Eye – NVIDIA Jetson TX1


In our last article, we built a kernel for the Jetson TX1. In this article, we go over an example of how to build a simple module for the ubiquitous Sony PlayStation Eye for the Jetson TX1. Looky here:

Preface

Note: If all you are looking for is a prebuilt PlayStation Eye Camera driver module for the standard Jetson TX1 dev kit, check out a version on Github available here. The Github version is for the L4T 24.2 release, kernel 3.10.96-tegra.

To load the module:

$ git clone https://github.com/jetsonhacks/installPlayStationEyeTX1.git
$ cd installPlayStationEyeTX1
$ ./setupPS3Eye.sh

If you want to learn how to build modules, continue on.

PS3 Eye

Background

Way back in January 2015, we covered how to build a PlayStation Eye driver module for the Jetson TK1. The PS3 Eye’s inexpensive price, relatively good performance, and the ability to mod the device to detect infrared make the camera a favorite among DIYers. The procedure for building the driver module on the Jetson TX1 is very much the same as it was on the TK1.

The instructions assume that the kernel sources have been installed as described in the previous article.

$ cd /usr/src/kernel
$ sudo make xconfig

This will bring up the kernel configuration editor.

Here’s the path to enable building the camera device driver module:

Device Drivers -> Multimedia Support -> Media USB Adapters -> GSPCA based Webcams. Scroll down to OV534 OV772x USB Camera Driver

Make sure to save the kernel configuration. Next, prepare the modules, and then build them:

$ sudo make modules_prepare
$ sudo make modules SUBDIRS=drivers/media/usb/gspca

Once the module is built, it’s time to copy it over to the appropriate /lib/modules directory:

$ sudo cp /usr/src/kernel/drivers/media/usb/gspca/gspca_ov534.ko /lib/modules/$(uname -r)/kernel/drivers/media/usb/gspca/

You can then insert the module:

$ sudo depmod -a
$ cd /lib/modules/$(uname -r)/kernel/drivers/media/usb/gspca/
$ sudo insmod gspca_ov534.ko

At this point, you should be good to go!

Note: If inserting the module gives you an error, it may be because gspca_main is not inserted. gspca_main should be listed when you run:

$ lsmod

If gspca_main is not listed:

$ sudo insmod gspca_main.ko
$ sudo insmod gspca_ov534.ko

Running lsmod again should show both gspca_main and gspca_ov534 loaded.
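A quick way to confirm everything is in place (the `|| echo` fallbacks are just so the commands report something either way):

```shell
# confirm both gspca modules are resident and a video node showed up
lsmod | grep gspca || echo "gspca modules not loaded"
ls /dev/video* 2>/dev/null || echo "no video devices found"
```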

Some shortcuts

Instead of copying the driver, you can run:

$ sudo make modules_install

Note that this will install all of the modules, and might cause issues.

Also, rebooting the system once the driver has been copied to the correct location should result in the driver being loaded, and mean that you don’t have to run the insmod command.

Conclusion

Hopefully by this point, you have an idea of how to build the kernel and auxiliary modules on the Jetson TX1. This is not an all-inclusive description, just a sample to get you started.

The post Sony PlayStation Eye – NVIDIA Jetson TX1 appeared first on JetsonHacks.

Intel RealSense Camera Installation – NVIDIA Jetson TX1


Intel RealSense cameras can use an open source library called librealsense as a driver for the Jetson TX1 development kit. Looky here:

Background

Note: This article is intended for intermediate users who are comfortable with Linux kernel development, and can read and modify simple shell scripts if needed.

In earlier articles we talked about the Intel RealSense R200 Camera, which is a relatively inexpensive RGBD device in a compact package. The camera uses USB 3.0 to communicate with a computer.

Intel has made available an open source library, librealsense on Github. librealsense is a cross platform library which allows developers to interface with the RealSense family of cameras, including the R200. Support is provided for Windows, Macintosh, and Linux.

There are two major parts to getting the R200 camera to work with the Jetson. First, operating system level files must be modified to recognize the camera video formats. When doing development on Linux based machines you will frequently hear the terms “kernel” and “modules”. The kernel is the code that is the base of the operating system, the interface between hardware and the application code.

A kernel module is code that can be accessed from the kernel on demand, without having to modify the kernel. These modules provide ancillary support for different types of devices and subsystems.

A module is compiled code which is stored as a file separately from the kernel, typically with a .ko extension. The advantage of a module is that it can be easily changed without having to modify the entire kernel. We will be building a module called uvcvideo to help interface with the RealSense camera. Normally uvcvideo is built into the kernel; as part of our modification we will designate it as a module, and patch it to recognize the RealSense camera data formats.
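Whether a driver is built into the kernel or built as a loadable module is visible in the kernel configuration; a sketch of the check, assuming the kernel sources live in /usr/src/kernel as in the steps below:

```shell
# CONFIG_USB_VIDEO_CLASS=y  -> uvcvideo is built into the kernel
# CONFIG_USB_VIDEO_CLASS=m  -> uvcvideo is built as a loadable module
grep '^CONFIG_USB_VIDEO_CLASS=' /usr/src/kernel/.config
```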

The second part of getting the R200 to work with the Jetson TX1 is to build and install librealsense.

Kernel and Module Building

Note: In the video above, the installation was performed on a newly flashed L4T 24.2.1 TX1 using JetPack 2.3

We have covered building the kernel for the Jetson TX1 in a previous article. Here are the major steps involved:

The script files to build the kernel on the Jetson TX1 are available on the JetsonHacks Github account in the buildJetsonTX1Kernel repository.

$ git clone https://github.com/jetsonhacks/buildJetsonTX1Kernel.git
$ cd buildJetsonTX1Kernel

The script getKernelSources.sh gets the kernel sources from the NVIDIA developer website, then unpacks the sources into /usr/src/kernel.

$ ./getKernelSources.sh

After the sources are installed, the script opens an editor on the kernel configuration file. In the video, the local version of the kernel is set. The stock kernel uses -tegra as its local version identifier. Make sure to save the configuration file when done editing.

Next, patchAndBuildKernel.sh patches one of the source files to more easily compile the kernel:

$ ./patchAndBuildKernel.sh

and proceeds to build the kernel and modules using make. The modules are then installed in /lib/modules/3.10.96[local version name]

Install librealsense

A convenience script has been created to help with this task in the installLibrealsense repository on the JetsonHacks Github account.

$ cd $HOME
$ git clone https://github.com/jetsonhacks/installLibrealsense.git
$ cd installLibrealsense
$ ./installLibrealsense.sh

This will build the librealsense library and install it on the system. It will also set up udev rules for the RealSense device, so that permissions are set correctly and the camera can be accessed from user space.
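The script installs the rules for you; purely as an illustration of what a udev rule of this kind looks like (the file name and group here are assumptions, the actual rules file installed by the script is authoritative):

```shell
# illustrative udev rule granting user-space access to an Intel (vendor ID 8086) USB device
# hypothetical file: /etc/udev/rules.d/99-realsense-libusb.rules
#   SUBSYSTEM=="usb", ATTRS{idVendor}=="8086", MODE:="0666", GROUP:="plugdev"
# after any manual edit, reload the rules:
#   sudo udevadm control --reload-rules && sudo udevadm trigger
```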

USB Video Class Module

Note: This step assumes that the kernel is located in /usr/src/kernel and that the kernel is to be installed on the board. If this is not your intent, modify the script accordingly. applyUVCPatch.sh has the command to patch the UVC driver with the RealSense camera formats.

The third major step is to build the USB Video Class (UVC) driver as a kernel module. This can be done using the script:

$ ./buildPatchedKernel.sh

The buildPatchedKernel script will modify the kernel .config file to indicate that the UVC driver should be built as a module. Next, the script patches the UVC driver to recognize the RealSense camera formats. Finally, the script builds the kernel, builds the kernel modules, installs the modules, and then copies the kernel image to the /boot directory.

Note: The kernel and modules should have already been compiled once before performing this step. Building the kernel from scratch as described in the ‘Kernel and Module Building’ section above pulls a few shenanigans to get things to build properly the first time through.

One more minor point of bookkeeping. In order to save power, the Jetson TX1 will auto-suspend USB ports when not in use for devices like web cams. This confuses the RealSense camera. In order to turn auto-suspend off, run the following script:

$ ./setupTX1.sh
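What such a script typically does (a sketch of the likely mechanism; the script in the repository is authoritative) is write -1 to the usbcore autosuspend parameter:

```shell
# disable USB autosuspend so the RealSense camera is never powered down
# (sketch of what setupTX1.sh likely does; check the script itself for specifics)
sudo bash -c 'echo -1 > /sys/module/usbcore/parameters/autosuspend'
cat /sys/module/usbcore/parameters/autosuspend   # expect -1
```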

Once finished, reboot the machine for the new kernel and modules to be loaded.

Conclusion

So there you have it. This has been a little bit more involved than some of our other projects here, but if you are interested in this kind of device, well worth it.

Notes

There are several notes on this project:

  • In the video above, the installation was done on a Jetson TX1 running L4T 24.2.1 immediately after being flashed by JetPack 2.3
  • These scripts only support the 64-bit L4T series, 24.X
  • One difference between L4T 24.2 and L4T 24.2.1 is that a soft link issue with Mesa drivers has been resolved. If you are using L4T 24.2, you may have to:

    $ cd /usr/lib/aarch64-linux-gnu
    $ sudo rm libGL.so
    $ sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libGL.so libGL.so

  • QtCreator and Qt 5 are installed as dependencies in the librealsense part of the install. There are QtCreator project files located in the librealsense.qt directory. The project files build the library and example files. If you do not use QtCreator, consider modifying the installer script to take QtCreator out.
  • The librealsense examples are located in librealsense/bin after everything has been built.
  • These scripts install librealsense version v1.11.0 (last commit 74ff66da50210e6b9edc3157411bad95c209740f)
  • The RealSense R200 is the only camera tested at this time.

The post Intel RealSense Camera Installation – NVIDIA Jetson TX1 appeared first on JetsonHacks.

Thoughts on Programming Languages and Environments Part IV – Jetson Dev Kits


In Part III of this series, we discussed how the Unix kernel came to life using the C programming language, and a little bit of the genesis of the kernel. We also noted that there are some issues using C as a general purpose programming language in modern-day use.

Surprisingly, with the exception of Windows, the popular modern-day operating systems all use Unix or Unix-derived kernels. The differences between the operating systems tend to be in how the rest of the OS is implemented. Android uses a Linux kernel with higher-level functions developed in Java. OS X and iOS use a Mach kernel with Objective C on top. Linux typically implements the higher-level functions in C itself. Each OS has some assembler of course, and there is a little C++ sprinkled here and there.

Changes!

As we noted earlier in the series, the last ten years have brought major changes in hardware. Multi-core CPUs are commodity, GPUs are standard, and networking is taken for granted. The range of supported peripherals has also expanded greatly, especially in the mobile arena. Radios, IMUs, and multi-touch screens are now prominent.

Another major change is the amount of information that programmers have available. This information comes from places like Stackoverflow. Another valuable source of information about coding comes from the open source repositories of Github. Github holds entire treasure troves of working code which solve a wide variety of computing issues. These resources are contributed and available globally. Life be good!

Well, mostly good. As it turns out, there are some issues. While hardware is much more capable, software has lagged behind. The array of new hardware needs to be supported, and as we’ve discussed, the current set of tools had its genesis 15 years ago in the case of C#, at least 20 years ago for Java and Objective C, and 40 for C itself. Some of the “newer” hardware capabilities, like multiple CPU cores, are still just hard to program and control at this point.

While there is a lot of great code on Github, and some wonderful answers on Stackoverflow, we also know that there are equally poor examples on both. How do you tell the difference? I’m sure you’ve run across Github repositories that provide a good 80% answer, but would take a major rewrite to turn into code that is suitable for your project. People have picked up the habit of copy/pasting Stackoverflow answers into their code. How do you know if code has been written by a seasoned developer, a researcher, or someone just starting out? Is the project a one-off demo? Is it a research project? Is it for production code use?

The Commercial Approach

As it turns out, many companies face much the same issues. High tech companies like Apple and Google have programmers with a wide range of experience. Some engineers have just been recruited out of college, others have been programming for decades. One of the major questions is, “How do you bring in new people and make them productive?” This is also at unprecedented scale, where there are billions of users of a company’s products.

Companies have distinct advantages over the open source community. They get to pick who works there, have money to invest, infrastructure and so on. They also have a vested interest in helping their developers build reliable software, because any issue can have a great financial impact. Let’s say you’re a backend programmer at Google. Today they do about 40K search queries a second. Each Google query uses 1,000 computers to retrieve an answer in 0.2 seconds. Then there’s the AdSense server area, which is generating actual revenue. What happens if a problem arises? How long will it take to detect it, find it and then fix it? Time is money at an unprecedented rate.

There are several paths companies can take to help level the playing field. There are some common defense mechanisms, mostly having to do with program memory management. Here are two of the new developments from Google and Apple, who are both building new open source programming languages.

Google & Go

One of the areas that Google has been working on is what I’ll call the “working man” approach to programming. It turns out that in 2006 Google hired one of the creators of Unix, Ken Thompson. Another Bell Labs alumnus, Rob Pike, along with Robert Griesemer and Thompson, started sketching the Go programming language. From “Go at Google: Language Design in the Service of Software Engineering“:

The Go programming language was conceived in late 2007 as an answer to some of the problems we were seeing developing software infrastructure at Google. The computing landscape today is almost unrelated to the environment in which the languages being used, mostly C++, Java, and Python, had been created. The problems introduced by multicore processors, networked systems, massive computation clusters, and the web programming model were being worked around rather than addressed head-on. Moreover, the scale has changed: today’s server programs comprise tens of millions of lines of code, are worked on by hundreds or even thousands of programmers, and are updated literally every day. To make matters worse, build times, even on large compilation clusters, have stretched to many minutes, even hours.

Go was designed and developed to make working in this environment more productive. Besides its better-known aspects such as built-in concurrency and garbage collection, Go’s design considerations include rigorous dependency management, the adaptability of software architecture as systems grow, and robustness across the boundaries between components.

Apple & Swift

Apple comes at things from a different perspective. Since the introduction of the Mac, Apple has been thought of as the company that puts a graphical front end on computing. For many years, Objective-C has been their ace in the hole. Objective-C integrates C with object-oriented programming in the Smalltalk tradition, and has been around since the early 1980s. Its memory management has evolved from manual retain/release to Automatic Reference Counting, along with a lot of other things that make programming nice. While it has served its master well, Apple is rolling out a new programming language called Swift to replace Objective-C. Swift takes the lessons learned over the last few decades and rolls them into a new programming environment.

From Swift.org:

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns.

The goal of the Swift project is to create the best available language for uses ranging from systems programming, to mobile and desktop apps, scaling up to cloud services. Most importantly, Swift is designed to make writing and maintaining correct programs easier for the developer. To achieve this goal, we believe that the most obvious way to write Swift code must also be:

Safe. The most obvious way to write code should also behave in a safe manner. Undefined behavior is the enemy of safety, and developer mistakes should be caught before software is in production. Opting for safety sometimes means Swift will feel strict, but we believe that clarity saves time in the long run.

Fast. Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). As such, Swift must be comparable to those languages in performance for most tasks. Performance must also be predictable and consistent, not just fast in short bursts that require clean-up later. There are lots of languages with novel features — being fast is rare.

Expressive. Swift benefits from decades of advancement in computer science to offer syntax that is a joy to use, with modern features developers expect. But Swift is never done. We will monitor language advancements and embrace what works, continually evolving to make Swift even better.

As is typical with most things Apple, everything seems very happy and fluffy 😉

The Others

Of course there are also the tried and true approaches. Most companies use Python and Java to help with their programming issues. Some use C++, with mixed results; C++ does not tend to produce code that can be leveraged across a large population of programmers.

Conclusion

This series of articles has covered some of the thought process and background behind choosing how to program embedded systems on the Jetson Dev Kits.

Takeaways

First, you need to be able to efficiently interface with C routines. There’s too much existing infrastructure out there to ignore, including interfacing with the operating system kernel.

Second, you need a memory safe language (automatic garbage collection, range checking). While you may be able to produce flawless code with manual memory management, more than likely others cannot. There will be times when you will need to use third-party libraries, and you need to reduce the risk of memory corruption as much as possible. You don’t want these guys patching your code for you.

Third, multi-core execution and concurrency is a big deal. It’s also hard to get right. Make sure your programming language helps you with this.

Fourth, make sure that the programming language has enough critical mass behind it so that you can leverage other people’s knowledge. This is true for how to actually use the language or programming environment itself, as well as the availability of libraries and such. You can be using a kick ass language, but if you have to figure out everything yourself and write all the libraries that’s a problem.

Next Steps

Now we’re off to start working on the Jetson. To be clear, this is embedded programming in the large, not for things like base Arduinos and such. We’re talking robots and vision systems and the like. The inclusion of Google’s Go programming language and Apple’s Swift was intentional. Remember that the iPhone runs ARM code, just like the Jetson does, which makes Swift a natural candidate for use. Android and Java could also work on the Jetson in a similar role.

From a different perspective, Go seems like a great candidate for lower level programming for something like robots. As we’ve talked about, memory management and concurrency are difficult, and in distributed systems even more so.

The post Thoughts on Programming Languages and Environments Part IV – Jetson Dev Kits appeared first on JetsonHacks.

Go (Golang) – NVIDIA Jetson Dev Kits


The Go programming language (sometimes called Golang) is a modern programming language which aims to “make it easy to build simple, reliable and efficient software”.

Background

As we discussed in the four part series “Thoughts on Programming Languages and Environments – Jetson Dev Kits” (part 1 is here), there is an impedance mismatch between the programming languages of the past and current hardware. In no particular order, here are some major concerns when writing new software today:

  • Memory Safety
  • Concurrency
  • Network/Web Integration
  • Integration with Baseline Software

For the purposes of this discussion, we will talk about these in the context of embedded systems. The Jetson Dev Kits in particular.

Memory Safety

Memory safety issues are probably the cause of most computer application failures. These issues deal with memory allocation and memory access. The “three major” programming languages on the most popular platforms (C#, Java, and Objective-C) all provide good support in this area, as do dynamic languages such as Python and Ruby. These languages use built-in tools like automatic garbage collection and run-time range checking for memory access.

On Linux based machines (and the underlying kernels of Windows, MacOS/iOS, and Android), the C programming language does not lend much support for this particular issue. There is a much broader ecosystem of languages in use on Linux at the application programming level than on most other platforms. While there are “memory safe” languages that run on Linux, there are also languages that do not have that support built in to the language, foremost among them C/C++.
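As a minimal sketch of what a memory safe language buys you (the helper function and error message here are illustrative, not from any particular library): in Go, an out-of-range access is caught by the runtime and can be handled, rather than silently corrupting memory as in C.

```go
package main

import "fmt"

// readAt returns buf[i], converting Go's runtime bounds check
// into an explicit error instead of crashing the whole program.
func readAt(buf []byte, i int) (b byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("out of range: %v", r)
		}
	}()
	return buf[i], nil
}

func main() {
	buf := []byte{10, 20, 30}
	if b, err := readAt(buf, 1); err == nil {
		fmt.Println("buf[1] =", b)
	}
	// In C this access would read (or write) whatever memory happens
	// to follow the buffer; here the runtime refuses it.
	if _, err := readAt(buf, 99); err != nil {
		fmt.Println("caught:", err)
	}
}
```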

Concurrency

Concurrency allows programs to be thought of as a collection of components, which can possibly be run in parallel. The seminal paper on concurrency, Communicating Sequential Processes, was written in 1978 by Tony Hoare.

The paper was written before multi-processor machines were widely available. The paper is a little math-y, but if you are a computer programmer/scientist the paper is considered a must read. If you haven’t read it in awhile, hunt it up and read it again.

With the advent of inexpensive multi-core/multi-processor computers, huge performance gains make a whole new range of applications possible. It also helps to give perspective about computation on the GPU.

Currently most mainstream implementations of CPU concurrency are rather ad-hoc and not supported at a language level. Concurrency is notoriously difficult to get right in the general case; just mention the word ‘deadlock’ to a programmer and see the pain and fear in their eyes. Most platforms provide concurrency at an OS level, with programming languages calling platform libraries or lightly wrapped versions thereof.

Network/Web Integration

As everyone knows, everything needs integrated network/web access. Just a few short years ago, the cost to talk to a network from an embedded device was prohibitive. Now it is so inexpensive that every device is expected to belong to the Internet of Things (IoT).

Integration with Baseline Software

It is difficult to build entire programming ecosystems from the ground up. Just the need to interface with the operating system means that a new programming language must be able to interface with ‘C’ libraries (C++ would be nice also).

Go

Go (golang.org) is designed to address the above issues head on. The creators of the Go language work at Google. Google has a large group of programmers with various degrees of experience working on world-scale systems. To facilitate programming in this type of environment, the authors realized that the programming language must have the above attributes built in, must be easy to scale, and must let people easily ascertain the intent of the program itself. In other words, programs should be “easy to read”.

Think of this as a “working man’s” programming language. While Go may not have all the bells and whistles of some of the other more flexible languages, it does “the right thing” in its intended domain, which is mostly writing servers.

Answers!

For Memory Safety, Go implements an automatic garbage collector and performs run-time range checking on memory access.

For Concurrency, Go implements channels with some of the grammar introduced by Hoare. Having concurrency built into the language itself ensures consistent use by all participants. Concurrency also enables programs to execute in parallel as the environment allows. As an added bonus, there is even a deadlock detector!
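As a small sketch of that channel grammar (the worker count and the sum-of-squares task are arbitrary choices for illustration): goroutines communicate only through channels, in the CSP style, and closing a channel signals the workers to stop.

```go
package main

import "fmt"

// square reads jobs from in and writes results to out; several
// copies can run concurrently, communicating only through channels.
func square(in <-chan int, out chan<- int) {
	for n := range in { // loop ends when in is closed
		out <- n * n
	}
}

// squares fans the input out to three workers and sums the results.
func squares(nums []int) int {
	in := make(chan int)
	out := make(chan int)
	for i := 0; i < 3; i++ { // three concurrent workers
		go square(in, out)
	}
	go func() {
		for _, n := range nums {
			in <- n
		}
		close(in)
	}()
	sum := 0
	for range nums { // collect exactly one result per input
		sum += <-out
	}
	return sum
}

func main() {
	fmt.Println(squares([]int{1, 2, 3, 4})) // 1+4+9+16 = 30
}
```

If the collection loop asked for more results than were sent, the runtime’s deadlock detector mentioned above would report it rather than hanging silently.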

Not surprisingly, since Go is used to build servers, Network/Web support is also built into the language. It is trivial to write a simple web server which takes into account all the little nooks and crannies that one has to think about in such pursuits. Why is this important in an embedded device? IoT!! With everything on the web now, web programming needs to be a standardized part of languages.
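A minimal sketch of such a server using the standard net/http package (the endpoint path and the sensor payload are invented for illustration; the in-process httptest round-trip just lets the example exercise itself and exit):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// statusHandler is a stand-in for whatever an embedded device
// might want to report over HTTP.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "temperature: 23.5C")
}

// fetchStatus spins up an in-process test server and fetches one
// response. On a real device you would instead register the handler
// with http.HandleFunc and call http.ListenAndServe(":8000", nil).
func fetchStatus() string {
	srv := httptest.NewServer(http.HandlerFunc(statusHandler))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return strings.TrimSpace(string(body))
}

func main() {
	fmt.Println(fetchStatus())
}
```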

While Go is built for the server world, it also has very good Integration with Baseline Software. There are tools which help build wrappers around C library calls and such.

Go is industrial programming.

Conclusion

This has been a little bit of the ‘why’ of Go, let’s look at using Go on the Jetson in Part II.

The post Go (Golang) – NVIDIA Jetson Dev Kits appeared first on JetsonHacks.

Go (Golang) Part II – NVIDIA Jetson Dev Kits


In Part I of our Go (Golang) discussion, we went over why Go might be a good programming language for embedded systems. It is certainly worth your time to go to their website and explore the information there. There’s an interactive workspace on the website for trying the language.

In this article, we will load Go on to a Jetson and look at a simple software application. The software app displays some numbers on a 7 Segment LED Display connected to the Jetson over I2C. In addition, the app implements a web server which sends Server Side Events (SSE) to any attached web browser so that the web browser mirrors the LED display. Looky here:

Hardware

Note: This demo has been tested on the Jetson TK1 with L4T 21.5 and the Jetson TX1 with L4T 24.1 (32 bit and 64 bit versions). A Jetson TX1 with 64 bit L4T 24.1 is shown in the demo.

First, before powering up the Jetson, let’s wire up the LED Segment Display.

For this project, an Adafruit 0.56″ 4-digit 7-segment Display W/i2c Backpack – Green is wired to a Jetson. The Display is assembled per the Adafruit instructions.

On a Jetson TK1, here’s a wiring combination for I2C Bus 1:

GND J3A1-14 -> LED Backpack (GND)
VCC J3A1-1 -> LED Backpack (VCC – 5V)
SCL J3A1-18 -> LED Backpack (SCL)
SDA J3A1-20 -> LED Backpack (SDA)

For the TK1, here’s another article for using the LED Segment Display.

On a Jetson TX1, here’s a wiring combination for I2C Bus 0 (as shown in the video)

GND J21-6 -> LED Backpack (GND)
VCC J21-2 -> LED Backpack (VCC – 5V)
SDA J21-3 -> LED Backpack (SDA)
SCL J21-5 -> LED Backpack (SCL)

Note that the TX1 also has a I2C Bus 1 interface. See the J21 Pinout Diagram.

Software Installation

Once the board is wired up, turn the Jetson on.
Install the libi2c-dev library. In order to be able to inspect the LED Display, you may find it useful to also install the i2c tools:

$ sudo apt-get install libi2c-dev i2c-tools

After installation, in a Terminal execute (0 is the I2C bus in this case):

$ sudo i2cdetect -y -r 0

ubuntu@tegra-ubuntu:~$ sudo i2cdetect -y -r 0
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: 70 -- -- -- -- -- -- --

You should see an entry of 0x70, which is the default address of the LED Segment Display. If you have soldered the address pins on the Display you should see the appropriate address.

Installing Golang

With the I2C library installed and the display attached, we’re now ready to install Go. The directory ‘gowork’ will be the working directory. You may name it to your liking.

$ sudo apt-get install golang-1.6
$ mkdir -p gowork/src

Next, edit ~/.bashrc to set the environment variable GOPATH and have the PATH point to the Go binaries.

$ gedit ~/.bashrc

Add the lines:

export GOPATH=$HOME/gowork
export PATH=$PATH:/usr/lib/go-1.6/bin

Now either source bashrc or close the current Terminal and open a new one.
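As a quick sanity check (running the two export lines directly has the same effect as sourcing ~/.bashrc in a fresh shell):

```shell
# these are the two lines added to ~/.bashrc; running them directly
# applies them to the current shell as well
export GOPATH=$HOME/gowork
export PATH=$PATH:/usr/lib/go-1.6/bin

# confirm the workspace variable took effect
echo "$GOPATH"
```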
Next, we’re ready to install the goi2c example from the JetsonHacks account on Github. Go understands how to retrieve Github repositories:

$ go get github.com/jetsonhacks/goi2c

Configuration Note

You will have to set the I2C bus for the demo, depending on how you wired the LED Display to the Jetson. The line that needs to be changed is located in src/github.com/jetsonhacks/goi2c/i2cExampleServer/main.go

backpack,err := ledBackpack7Segment.NewLedBackpack7Segment(1, 0x70)

In the repository, the bus is set to 1 (the first parameter). You may need to change it to 0.

Now let’s compile the example:

$ cd gowork/src
$ go install github.com/jetsonhacks/goi2c/i2cExampleServer

At this point, the example, ‘i2cExampleServer’, is in the gowork/bin directory.
For this example, the server requires the folder ‘templates’ to be in the same directory as the demo executable. Copy the templates directory to gowork/bin or create a new folder and copy the demo executable and the templates folder into it.

Before running the server, you may want to open a browser to monitor the display. Open an HTML5 browser and point it to:

http://localhost:8000/test

To run the demo, go to the folder in which you have placed the executable, and run the demo with sudo permissions. E.g.

$ cd ~/gowork/bin
$ sudo ./i2cExampleServer

Once the demo starts, the LED Segment Display will run a demo cycle. The web server sends out web pages to reflect the current LED Display. Type ^C to end the demo.

Summary

This Go sample app is not idiomatic Go programming. Instead it is a sketch of some of the functionality that may be useful in an embedded environment. It was also my first attempt at Go programming.

The Good

There’s not much to comment about on memory safety, as it’s pretty much invisible to the programmer.

Concurrency is a difficult thing, but Go has a good mechanism for dealing with it. Play with the samples and write some of your own code to get a feel for what it can do for you.

Server side events being pushed out from a server are pretty typical of something that you may want to do from an embedded device. Think of it as a stock ticker pushing data from the device. As expected, the web server part is pretty straightforward. Once you start thinking ‘the Go way’, it’s a simple task to write the server code.

Actually interfacing with a low level device, like the I2C Segment Display, feels like about the same amount of work as in a C program. If no memory allocation happens in the device driver, it’s probably just as easy to wrap an existing C library for use.

The thing compiles fast, I’ll certainly give it that.

The Bad

Drawbacks? Sure. On Linux there’s really no agreed-upon IDE for Go with a good editor, project management, and an integrated debugger built in. This seems to stem from the roots of the language designers, who aren’t real believers in such things. This leads to a whole lot of what I call ‘programming by guessing’.

When a professional programmer starts out in the world, the first thing they realize is that they don’t get to work on their own code. Most are thrown in the deep end of very large programs that have an issue which needs fixing. It’s one thing to work on your own code and fix issues, it’s another thing to be thrown out into the sea on your own. Debuggers and project organizers act as life preservers. It’s rather harsh not to have those available in a modern way, especially when you’re first starting out.

Conclusion

All in all, fairly positive. Certainly for having a server of any type on the device, Go should be a definite candidate. If there’s some particular task which needs concurrency at the CPU level, Go should be on the checklist. Certainly worth kicking the tires.

The post Go (Golang) Part II – NVIDIA Jetson Dev Kits appeared first on JetsonHacks.
