
JetsonHacks Newsletter – June 2024


This is a sample newsletter. Sign up to get it delivered by email!

Hello there!

First the news. Lots of good stuff happening.

Platform Services

NVIDIA JetPack 6.0’s Jetson Platform Services have just been released! Supporting cloud-native technologies, these services allow efficient deployment and management of microservices. The microservices enhance edge AI applications with advanced video processing, AI inference, and analytics capabilities. These services are essential for building robust AI applications on Jetson devices, streamlining workflows, and improving deployment efficiency.

For more details, visit the NVIDIA Developer Blog.

AI-Based Steering

Here’s a cool project. Arrow Electronics and NVIDIA have collaborated to develop an AI-based steering system for the SAM Car, a racecar designed for disabled drivers. Leveraging NVIDIA’s AGX Orin processors and AI frameworks, the system uses high-resolution, dual-axis cameras and deep learning algorithms to interpret driver inputs, controlling the vehicle’s steering, throttle, and brakes in real time. On a vehicle that can go 213 mph! Arrow Electronics and NVIDIA Collaborate on New AI-Based Steering System for SAM Car

Planet Labs

How are you going to keep Jetsons out of space? Planet Labs is partnering with NVIDIA to enhance the onboard processing capabilities of its upcoming Pelican-2 satellite using NVIDIA’s Jetson AGX Orin platform. This integration aims to provide advanced AI-driven intelligent imaging and rapid data insights. The collaboration will enable near real-time data processing and delivery directly from orbit, significantly improving the satellite’s ability to monitor and analyze Earth phenomena such as forest fires and natural disasters.

For further details, visit the Investing.com article.

Allxon Cloud Serial Console

JetsonHacks will be doing a review soon of the Allxon Out-of-Band Cloud Serial Console. This combination of hardware and software allows you to fully monitor and manage Jetson devices remotely. This should greatly simplify remote admin, and with hardware in the loop it promises to be a robust solution. https://www.allxon.com

Jetson AI Lab

Last but not least, the Jetson AI Lab Research Group is at it again! At the last meeting, Dusty Franklin showed off an impressive Agent Studio demonstration. Agent Studio includes a node-based editor for connecting different sensors and AI components together with AI Agents, which should reduce the amount of code you need to write for very interesting applications. Looky here at the demo from the last meeting: JETSON AI LAB | Research Group Meeting (6/11/2024)

Understanding the Impact of AI on Computing and Programming

For the past year and a half, AI has been a hot topic, with many claiming it will write our programs for us, take our jobs, or even destroy humanity. That’s a lot to unpack! How exactly is AI going to do these tasks? We’ve seen impressive demos, but world domination seems more than a bit far off.

Let’s be clear: the idea of world domination by an AI seems strange. However, the notion of an AI company using the ‘feature’ of recognizing a cat to continuously record your and your family’s video and audio life is straight from Dr. Evil’s playbook. You can understand the interest in Jetson and compute at the edge: keep everything ‘in house’ and private, as it were.

From Deterministic to Stochastic Computing

A fundamental change from deterministic to stochastic computing seems odd. Historically, computing has been deterministic, meaning fully repeatable processes. People expect machines to do exactly what they are told. The machines are designed that way. Stochastic machines, built by the Linear Algebra Mafia, operate differently. Getting the same answer twice is hard. But for tasks like visual processing, this is beneficial. Eventually, we expect computer vision to match human vision and then some.
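The distinction is easy to see in a few lines. Below is a toy sketch (my own illustration, not from the newsletter): a deterministic function returns the same answer every time, while a stochastic one, like an LLM sampling with a nonzero temperature, jitters from run to run.

```python
import random

def deterministic(x):
    # Classic computing: same input, same output, every single time.
    return x * x

def stochastic(x, temperature=1.0):
    # Toy "sampling": the answer varies run to run, like an LLM
    # generating with temperature > 0. Useful for perception tasks,
    # awkward when you expected exact repeatability.
    return x * x + random.gauss(0, temperature)

print(deterministic(3), deterministic(3))  # identical
print(stochastic(3), stochastic(3))        # almost surely different
```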

AI in Programming and Mathematics

What about writing programs or doing math? Jensen Huang of NVIDIA suggests the future of coding is natural language. Curious, I spent a week with ChatGPT to explore this idea.

The Nature of Programming: Past and Present

When personal computers were much, much smaller and slower, programmers knew every detail of the system. Assembly language was common. Drawing lines on a screen with a hand-coded Bresenham’s line algorithm was a given. Space was limited, so there was no room for waste. Operating systems were minimal. If that weren’t enough, most people ended up stepping through their programs in an assembly language level debugger. They KNEW the system!

Today, systems are so large that it’s hard to specialize in just one part, let alone many. Networking and smart devices add to the complexity. Entire industries focus on specific subsections and applications. Websites at scale are incredibly complicated. Huge codebases rely on many components, with no guarantees.

The Result: Limited Knowledge and Increased Complexity

Programmers, now called developers, can’t know everything about what they’re creating. We’ve shifted from knowing everything to knowing just a little. Libraries and components handle heavy lifting. ‘Tools’ like Stack Overflow and GitHub assist with common tasks or problems.

Computing stacks are fragile. Changes in base components like operating systems or programming languages break things. You know this: moving from one Jetson release to the next is painful. It’s a never-ending upgrade cycle. It would be nice if apps kept running, without you having to understand the whole system, when a library you use only a small slice of changes.

Symbols and Abstract Thinking

Using symbols to aid in abstract thinking is one of man’s greatest inventions. Symbols like π, ∑, and ∫ allow clear communication in a universal language. Describing mathematical equations in natural language is painful. Visual interfaces and human interaction descriptions are equally challenging.

Experimenting with ChatGPT

I experimented with ChatGPT, specifying personas like Sally from Marketing, Don the CTO, and James the Lead Developer. I asked them to implement features and build a web application to keep track of a Persona database. 

When ChatGPT first came out, trying to get it to produce code was a nightmare. It’s a bit better now. For this task, it chose a Flask backend and a ReactJS frontend. However, once scripts grow beyond about a page and a half, issues arise. There are paths that lead to unrecoverable states or infinite loops when explaining errors or tasks.

You can explain and scold all you want, but an LLM doesn’t learn from its mistakes like a junior coder. Typically, you can tell a human, “Here’s a mistake, learn from it.” The LLM? Not so much.

The Role of AI in Large Organizations

In large organizations, management gurus teach that the square root of the number of people does 50% of the work, a variation of the Pareto Principle. For 1,000 people, that’s about 32 doing half the work. Does an LLM act as a force multiplier in research and development? In getting work done? Or does it mean management will eliminate jobs, thinking the AI will take up the slack? What does that mean at scale?

Much of developing applications is boilerplate. Developers need to know which headers to import, which libraries to load, and the platform dependencies. Why do we still need to track imports, headers, libraries, and modules today? We have enough computing power to manage everything installed on a dev machine and serve nicer development environments. Even to simply make an OS call or read the time, why do we have to figure out which headers and libraries to import? If LLMs handle this housekeeping by boilerplating apps and acting as super help, developers can focus on adding value. Oh, and having the LLM explain code is very useful for maintainers. Remember that maintaining applications accounts for 80% of the cost of most commercial systems over time. However, it’s not clear that stochastic systems are great for creating and building deterministic ones.
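The housekeeping complaint above is concrete even in a high-level language. A small illustration of my own: three one-liners, each of which requires you to already know which standard module owns which call.

```python
# Even trivial operations need you to know which module owns which call:
import os        # process info lives here
import platform  # machine/architecture info lives here
import time      # the clock lives here

print(time.strftime("%H:%M:%S"))  # wall-clock time -> time module
print(os.getpid())                # process id      -> os module
print(platform.machine())         # CPU arch        -> platform module
```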

That’s not to say LLMs don’t have applications! There are some things LLMs are really good at: translating text, summarizing, organizing, and so on. And by really good at, I mean amazing. Here’s a key point: one LLM is interesting, but multiple LLMs and AI Agents will eventually get you as close to an answer as people can, in a lot less time and for a lot less money. The current going rate for consumer LLMs like ChatGPT Plus is around $20 per month. How much work can you get from a person for $20 nowadays?

It’s easy to imagine a scenario where you’re doing research, writing a paper, translating text or some other task and you ask an LLM to help. The response is usually in the ballpark and makes sense when you read it. Now imagine the second part: you ask another LLM (or two) to check the work and critique it, and then send it back to the original. Like hiring an editor, going back and forth until they are both “satisfied” the work is accurate and polished. What is the value of that work product?
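The writer/editor loop described above can be sketched in a few lines. This is a minimal structural sketch, not a real agent framework: ask_llm is a hypothetical stand-in for a model API call, stubbed here so the loop can run standalone.

```python
def ask_llm(role, prompt):
    """Hypothetical model call; a real version would hit an LLM API."""
    if role == "writer":
        return prompt + " [revised]"
    # The stub "editor" approves once the draft has been revised twice.
    if prompt.count("[revised]") >= 2:
        return "APPROVED"
    return "needs work: tighten the argument"

def refine(task, max_rounds=5):
    """Writer drafts; editor critiques; loop until approval or a round cap."""
    draft = ask_llm("writer", task)
    for round_num in range(max_rounds):
        critique = ask_llm("editor", draft)
        if critique == "APPROVED":
            return draft, round_num + 1
        # Feed the critique back to the writer, like returning edits to an author.
        draft = ask_llm("writer", draft + " | " + critique)
    return draft, max_rounds

final, rounds = refine("Summarize the paper.")
print(rounds, "->", final)
```

The round cap matters: as noted earlier, these loops can wander into unrecoverable states, so a real system needs a stopping rule, not just mutual “satisfaction.”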


Conclusion

AI is changing computing and programming, shifting from deterministic to stochastic computing. While AI can help with code generation, many challenges remain, especially with complex tasks and learning from mistakes. What’s your prediction for the future of AI coding?

----

Thank you for taking the time to read this; I hope all is well your way. As always, reply to this email if you want to share some of your thoughts or you have an interesting story, product or three to share.

Jim

The post JetsonHacks Newsletter – June 2024 appeared first on JetsonHacks.

