
JetsonHacks Newsletter – April, 2023


Hi there!

The big Jetson announcement at GTC 2023 this year is the new NVIDIA Jetson Orin Nano Developer Kit. Here’s a full unboxing and review on JetsonHacks, “NVIDIA Jetson Orin Nano Developer Kit – The Perfect Solution for Makers and Developers: A Review”. The units will be shipping a little later this month.

The Orin Nano Devkit slots into the Jetson lineup where the Jetson Xavier NX Developer Kit previously resided. That makes the current Developer Kit lineup the entry-level Jetson Nano, the mid-level Jetson Orin Nano, and the top-of-the-range Jetson AGX Orin.

Late March also marks the availability of all of the announced Jetson Orin modules. There are six altogether:

  • Jetson AGX Orin 64GB and 32GB versions
  • Jetson Orin NX 16GB and 8GB versions
  • Jetson Orin Nano 8GB and 4GB versions

There will be an industrial version of the AGX Orin module shipping later this year.

Hands On

Here’s a little inside baseball. Well before the release of a Jetson, NVIDIA contacts reviewers and briefs them on the new release. NVIDIA, like most computer manufacturers, has pre-production builds. There are rules for these builds, such as that they cannot be sold. Instead, they go out to the press and reviewers for marketing purposes. The pre-production builds tend to be very close to the production models (almost exact, in my experience with NVIDIA). But while the hardware is very close, the software can vary quite a bit from the final release. I try to shy away from doing benchmarks and such until the production release of the software, which usually happens within a week or two of the introduction announcement.

I think the marketing folks tried to position the Orin Nano as the evolution of the original Jetson Nano. 80x faster and all that. But the original Nano is a little long in the tooth, using a Tegra X1 chip. There are two chip generations (TX2, Xavier) between it and the Orins.

After installing the production software on the Jetson Orin Nano Developer Kit, it’s pretty clear that the more natural comparison is to the Xavier NX. The CPUs and memory feel a little faster, resulting in slightly snappier desktop performance. There are more CUDA cores, so if your calculations are compute-bound, that’s a big plus.

At the same time, there’s only 6.3GB of memory available out of 8GB. Some of this memory was lost in the move to Ubuntu 20.04, because additional memory is now reserved for camdbg_carveout and ramoops_carveout. I haven’t looked into what it takes to modify those constants. Unfortunately, the amount of RAM you have may be the determining factor for the size of the models you can run inference on. You may have to plan on going to the Orin NX or AGX Orin for larger models.
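
If you want to see how much of that RAM is actually free for a model on your own unit, a quick look at /proc/meminfo is enough. Here’s a minimal sketch in plain Python (no Jetson-specific APIs; the 4GB threshold is only an illustrative number, not a real requirement):

    # Rough check of how much RAM is actually available for a model,
    # read straight from the kernel's /proc/meminfo on the Jetson.
    def meminfo_kb(field):
        """Return a /proc/meminfo field (e.g. 'MemAvailable') in kilobytes."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    total_gb = meminfo_kb("MemTotal") / (1024 ** 2)
    avail_gb = meminfo_kb("MemAvailable") / (1024 ** 2)
    print(f"Total: {total_gb:.1f} GB, available: {avail_gb:.1f} GB")

    # Illustrative threshold only: leave headroom for the desktop and the
    # CUDA context before trying to load a ~4 GB model.
    if avail_gb < 4.0:
        print("Probably not enough free RAM for this model on an Orin Nano 8GB.")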

As the Orin Nano is the entry level into the Jetson Orin lineup, a few of the other compute complexes have been disabled. This includes, among other things, the Deep Learning Accelerator (DLA) and the hardware video encoders. NVIDIA provides sample code to run video encoding on two of the CPU cores. This seems adequate for some situations, but it is something to consider if you have to encode several video streams at once. Feels a little penny wise, dollar foolish to me.
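
To give a feel for what CPU-side encoding looks like in practice, here’s a sketch. This is not NVIDIA’s sample code, just a generic GStreamer pipeline using the software x264enc element, launched from Python; the output file name is made up:

    # Not NVIDIA's sample code -- a generic sketch of software (CPU) H.264
    # encoding using GStreamer's x264enc element, launched from Python.
    import subprocess

    # Encode ten seconds of a test pattern on the CPU. On Jetson models with
    # NVENC hardware you would normally use the nvv4l2h264enc element instead;
    # the Orin Nano has no encoder hardware to back it.
    pipeline = (
        "videotestsrc num-buffers=300 ! "
        "video/x-raw,width=1280,height=720,framerate=30/1 ! "
        "x264enc speed-preset=ultrafast tune=zerolatency ! "
        "h264parse ! mp4mux ! filesink location=cpu_encoded.mp4"
    )

    subprocess.run(["gst-launch-1.0"] + pipeline.split(), check=True)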

Overall it feels pretty solid, which I can’t say about the pre-release software. I’ve been using it for a week or two and it seems like a capable little development machine.

Nsight

On the software side, the Nsight tools are now available to run natively on the Orin family. The Nsight tools integrate with Visual Studio Code to provide a seamless development experience. They allow low-level performance monitoring and debugging of many of the NVIDIA-specific features of the Jetson, including the CUDA cores and the Deep Learning Accelerator (DLA). Previously, you had to run the Nsight tools remotely from a host machine attached to the target Jetson.
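
As a taste of what that looks like, here’s a minimal sketch of marking up a Python workload with NVTX ranges so the regions show up on the Nsight Systems timeline. It assumes the nvtx package from PyPI is installed (pip install nvtx), and the sleeps are just stand-ins for real work:

    # Mark regions of a workload with NVTX ranges so they appear on the
    # Nsight Systems timeline. Capture natively on the Orin with, for example:
    #   nsys profile --trace=nvtx,osrt -o report python3 this_script.py
    import time
    import nvtx

    @nvtx.annotate("preprocess", color="blue")
    def preprocess():
        time.sleep(0.1)  # stand-in for real work

    @nvtx.annotate("inference", color="green")
    def inference():
        time.sleep(0.3)  # stand-in for real work

    for _ in range(5):
        with nvtx.annotate("frame", color="orange"):
            preprocess()
            inference()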

Isaac Sim

Another big announcement is the availability of the NVIDIA Omniverse platform-as-a-service on the Azure cloud. Isaac Sim, a robotics simulation application, is powered by the Omniverse platform. The big idea here is that you can use synthetic data generation (SDG) to build virtual environments. Then you can simulate what robots sense in those virtual environments, and train AI models using that data.

For example, let’s say you have a warehouse. You model the warehouse and populate it with objects, which can include synthetic people. If you’re developing a ground robot that moves payloads around, you then train a model using the synthetic camera or lidar data that the simulation generates. You can compensate for dynamic elements in the scene, such as people walking, using the field of view of the sensor. This synthetic data can be used to retrain a people segmentation model before optimizing it with the NVIDIA TAO Toolkit. Once trained, you transfer the model to the Jetson on a real robot for inferencing.
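
I won’t try to sketch the Isaac Sim side here, but the Jetson end of that workflow usually boils down to loading the optimized engine and running it per frame. Here is a minimal sketch with the TensorRT Python API, assuming you already have a serialized engine file (the people_seg.engine name is hypothetical):

    # Jetson-side deployment step: load a serialized TensorRT engine (for
    # example, one exported after TAO optimization) and create an execution
    # context for inference. The file name is hypothetical.
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    with open("people_seg.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()

    # From here you would allocate input/output buffers (for example with
    # PyCUDA or cuda-python) and call context.execute_v2(bindings) once per
    # camera frame on the robot.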

Many large operations have complicated machinery and other equipment which require this level of modeling for operation. Being able to train the robots on this data saves a bunch of time and money.

GTC Talks

There are 650+ sessions on the GTC website for 2023. I’ll share a secret with you: I didn’t watch them all. However, I watched more than a couple, mostly those centered around Jetson, and the keynote of course. It’s a little like trying to drink from a firehose. To be honest, I’m not quite sure what to make of it. There’s so much going on in the areas of technology that NVIDIA touches. And it touches a lot! To be sure, if it has anything to do with advanced computing, there’s a session at GTC that covers it.

At the same time, just a few days before GTC, GPT-4 was officially launched. NVIDIA provides the GPUs to OpenAI. There was some nostalgia during the keynote watching a video of the OpenAI founders receiving their first NVIDIA DGX from Jensen Huang. To say that OpenAI has made some noise is a huge understatement.

A lot going on in tech world right now!

A Consistent Software Stack

One of the really interesting features of NVIDIA software is the consistency across product lines. The CUDA code base runs on all of the GPUs. Sure, some models like the Jetson might be a revision behind at times, and there might be architectural differences in the hardware that the developer needs to understand. On the Jetson, for example, this is unified memory versus the separate GPU/CPU memory of a desktop GPU. But for the most part, everything is consistent.
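
As a concrete illustration of that unified memory point, here is a small sketch using Numba’s CUDA support (assuming Numba is installed with CUDA enabled). The same code runs on a Jetson, where the CPU and GPU share one physical DRAM, and on a desktop, where the driver migrates managed pages over PCIe:

    # Unified (managed) memory: the same code runs on a Jetson, where CPU and
    # GPU share one physical DRAM, and on a desktop discrete GPU, where the
    # driver migrates pages over PCIe. Assumes Numba with CUDA support.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(arr, factor):
        i = cuda.grid(1)
        if i < arr.size:
            arr[i] *= factor

    data = cuda.managed_array(1_000_000, dtype=np.float32)
    data[:] = 1.0                       # written by the CPU

    threads = 256
    blocks = (data.size + threads - 1) // threads
    scale[blocks, threads](data, 2.0)   # scaled by the GPU
    cuda.synchronize()

    print(data[:5])                     # read back on the CPU, no explicit copies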

I remember when I first started talking to the NVIDIA folks a bunch of years ago about this topic. That was back in 2015. I remember thinking to myself how difficult that must be, and the amount of discipline it must take between different product lines to make sure that happens. 

As it turns out, this is ingrained into the NVIDIA DNA. What I would think would be difficult is what enables them to keep advancing at this incredible rate. They can leverage the code base everywhere! That means that when new features arrive, everyone gets them. 

Combine this with a consistent hardware architecture, in that all of the products use the same GPU architecture, and it gives NVIDIA a tremendous advantage. Look at the rollouts. Once every 12 to 18 months there’s a new GPU chip architecture. The chips go into just about everything NVIDIA makes, from the high-end data center GPU cards like the H100 to the gaming GPU cards, and eventually to the Jetsons. And everything in between.

Now I would think that there is significant technical debt which builds up over time with this approach. But NVIDIA seems to handle it quite well. I guess if you have a 30 year history of doing exactly this, maybe you might know a thing or two about it.

Along the Same Lines

Another thing that I found fascinating about NVIDIA the company is the way they are handling the great technology contraction in 2023. As you know, many tech companies grew quite a bit over the pandemic years. Money was cheap, demand was high, companies got fat, life be good!

Then the music stopped. All of a sudden, everything that had been going right for them during that brief period came to a grinding halt. In response, many tech companies made the decision to start layoffs. Or, even worse, they were told to actually make real money instead of being in the employee day-spa and stock speculation business.

There are several layoff strategies. Many companies have a “last in, first out” policy. That’s typical of companies which are in more mature industries, or have less “skilled” workers. In tech world, many companies use a much different approach. You can think about it as “keep the best, plus the ones that make us money” strategy. 

That strategy works as follows. During growth spurts and good times, hire freely. Use down times to trim the less productive. Rinse, repeat. This is interesting, as this type of thinking would lead you to believe that you’re always gathering better people as the end result.

Except some tech companies have been so successful over the years that they’ve rarely had to go through any such contractions. Many tech companies have trouble just getting enough people to begin with. Turnover is another large factor, as tech workers hop jobs for ever-increasing paychecks.

Then you have companies like Google who came up with a novel plan. Just pay people extremely well and provide so many perks that it would be difficult for them to leave. Combine that with generous stock option grants on a very bullish stock and life be good. Being a competitive industry, other companies followed suit. 

Then the other shoe fell. We are now in a time that we’re not supposed to call a recession. It’s been interesting watching the tech companies’ responses. Google had to do a big round of layoffs, and it was pretty clear that they didn’t really know how to execute one. A lot of the Google perks? They seem to be leaving the building too.

NVIDIA, on the other hand, appears to have made a commitment to keep their new hires. They are doing this by tightening their belts. Now, to be clear, they aren’t in the same world of hurt that companies like Google appear to be heading towards. At the same time, demand for tech is down considerably. 

Of course, there’s always the possibility that the situation could change and NVIDIA may have to make tough decisions about layoffs in the future. However, for the time being, their priority is to retain their workforce and navigate the current economic climate with their team intact.

NVIDIA sometimes gets a bad rap. A lot of this is from the consumer gaming crowd, or from people who have philosophical differences on how open the software stack should be. It’s never been clear to me what they are comparing it against. Or how they would run a company of their own in the same space differently. Successfully, that is. But it appears to me from the outside that NVIDIA treats their people respectfully. That’s a big plus in my book.

—-

Thank you for taking the time to read this newsletter. As always, you can reply to this email if you want to share some of your thoughts or you have an interesting story or three to share.

Jim
