
ChatGPT – 4 Useful Insights


There has been a lot of hype around ChatGPT over the last several months. Deservedly so. Here are some thoughts on how to use it. Looky here:

Introduction

When something like ChatGPT explodes onto the scene, there’s a great divide between what people with a technical background see and what everyone else sees. With ChatGPT, that’s quite a big chasm.

At its core, ChatGPT is a Large Language Model (LLM). It is a Generative Pre-trained Transformer (GPT). The OpenAI GPT is trained on a corpus which consists of most of the open Internet, circa 2021. The model is supplemented with Reinforcement Learning from Human Feedback (RLHF). The ‘Chat’ part of ChatGPT refers to an actual application which interacts with the model, a chatbot. A chatbot takes a prompt from the user and, in this case, returns a response generated by the GPT model.
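To make that concrete, here’s a minimal sketch of that prompt-and-response loop using the (pre-1.0) OpenAI Python client. The model name and API key handling are just placeholders for illustration; this isn’t OpenAI’s own ChatGPT application, only the shape of the interaction:

```python
# Minimal chatbot loop: send a prompt, get a generated response back.
# Assumes the pre-1.0 openai Python package and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def chat(prompt):
    # The chatbot wraps the prompt in a message and asks the model to
    # generate a continuation; it does not look anything up in a database.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",          # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(chat("Explain what a transformer is in one sentence."))
```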

When you talk to most people, they believe that you ask ChatGPT a question and it responds with an answer. Typically they believe that there is some type of data store where answers are stored, and the machine is looking up the information.

However, it’s doing something quite different. GPT takes a prompt, breaks it down into tokens, and converts those tokens into vectors of numbers. Based on all of the data it has seen (remember, that’s most of the open Internet), it then tries to predict what the next tokens should be. It does this over and over, using computational statistics, to produce the response.
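Here’s a toy sketch of that loop. The tiny vocabulary and the predict() stub are invented purely for illustration; a real transformer scores tens of thousands of tokens with billions of parameters, but the shape of the loop is the same:

```python
# Toy sketch of what the model is doing: tokenize the prompt, then repeatedly
# predict and append "the next token" until done.
import random

VOCAB = ["Hello", ",", " world", " there", "!", "<end>"]   # toy vocabulary

def tokenize(text):
    # Real systems use a learned byte-pair encoding; this just matches toy tokens.
    return [t for t in VOCAB if t in text]

def predict(tokens):
    # Stand-in for the transformer: a probability for each vocabulary entry.
    # A real model computes these from the entire token sequence seen so far.
    return [0.1, 0.2, 0.3, 0.1, 0.2, 0.1]

def generate(prompt, max_tokens=10):
    tokens = tokenize(prompt)
    for _ in range(max_tokens):
        probs = predict(tokens)
        next_token = random.choices(VOCAB, weights=probs)[0]  # sample, don't look up
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return "".join(tokens)

print(generate("Hello"))
```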

Check the Answer!

Once the model has been initially trained, people ask it questions and ‘grade’ the answers; this is the RLHF step. The trainers then adjust the parameters of the model to take this feedback into account.
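For a feel of what that grading step looks like mechanically, here is a toy sketch of the standard pairwise reward-model loss used in RLHF-style training. The numbers are made up and this is only an illustration, not OpenAI’s actual training code:

```python
# Toy reward-model update from human preference data (the "grading" step).
# Each record holds the reward model's scores for the answer the human
# preferred and the answer they rejected; training nudges the model to
# score the preferred answer higher.
import math

# Hypothetical (chosen_score, rejected_score) pairs from the reward model.
preferences = [(1.2, 0.4), (0.1, 0.9), (2.0, 1.5)]

def pairwise_loss(chosen_score, rejected_score):
    # Bradley-Terry style loss: -log(sigmoid(chosen - rejected)).
    return -math.log(1.0 / (1.0 + math.exp(-(chosen_score - rejected_score))))

total = sum(pairwise_loss(c, r) for c, r in preferences) / len(preferences)
print(f"average preference loss: {total:.3f}")  # training would minimize this
```

But here’s the thing.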

By its nature, a GPT builds an answer that “looks” correct. If you request an answer with references, it will provide what looks like an answer with what appear to be references. It might cite an author prominent in the field, a reasonable title, and an associated journal. And it might even be right! However, its goal is to produce an answer that looks correct, not one that is correct.

Therein lies the rub. People assume that computers are authoritative in the sense that they will always give a correct answer. In contrast, a GPT is non-deterministic. It can give different answers to the same input. Sure, there have always been some algorithms that are non-deterministic, such as Monte Carlo simulations. But for most of the last 70 years, computers have computed an answer from their input deterministically. Same input, same output. Not so with neural networks.
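A toy example makes the point. Even when the model heavily favors the ‘right’ next token, sampling means repeated runs of the same prompt can come back different (the tokens and probabilities below are invented for illustration):

```python
# Same input, different output: the model samples from a probability
# distribution over next tokens, so repeated runs can diverge.
import random

tokens = ["Paris", "London", "Berlin", "Rome"]
probs  = [0.70, 0.15, 0.10, 0.05]     # made-up next-token probabilities

def sample_answer():
    return random.choices(tokens, weights=probs)[0]

for i in range(5):
    print(f"run {i}: The capital of France is {sample_answer()}")
```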

More Human than Human

In that way, a GPT is like a person. You get different answers from the same person all the time. You know that some people lie, or have a different viewpoint that generates answers you might consider wrong. With these early models, you kind of have to assume that you’re talking to a confident liar.

With that said, that doesn’t mean a GPT isn’t useful. Quite the contrary. It handles a lot of tasks well enough to be worth the price of admission. Translation between languages, like English to French. Converting lists to tables, adding rows and columns to tables. Making outlines of tasks; the list goes on. Researchers have identified over 100 emergent behaviors that GPT exhibits which no one expected.
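For example, the list-to-table task is just a plain-language prompt. This sketch reuses the same assumed pre-1.0 OpenAI Python client as above; the list contents and model name are placeholders:

```python
# Example of the "convert a list to a table" use case: the whole task is
# expressed as a plain-language prompt sent to the chat endpoint.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Convert this list to a markdown table with columns Name and Role:\n"
    "- Ada Lovelace, mathematician\n"
    "- Grace Hopper, computer scientist\n"
    "- Linus Torvalds, software engineer"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",                     # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)     # check the table before trusting it!
```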

But the deal is that you have to check its work. You have to know the answer when you see it, and call it on nonsense. People do that all the time with other people, of course. It’s just that we’ve seen the computer be an authoritative voice for so long that it will take a lot of time to work through the new world order.

Does this change everything?

With all the hype and noise around ChatGPT, it’s hard to get a reading on how important this technology is going to be in the near future. You see people using it to generate blog posts and video scripts, for example. There are going to be a lot of short-term arbitrage opportunities for tasks like that. However, in the long run there’s no real value added. Simply rewriting or regurgitating previous Internet articles doesn’t provide any benefit. We’ll see people adjust to that.

At the same time, adding ChatGPT type capabilities to a browser breaks the whole search paradigm. Use the new ChatGPT extensions on the Microsoft Edge browser and see how many times you have to use a search engine after that. The thing that fuels search engines? Ad revenue.

With ad revenue drying up, content creators will be faced with having to monetize their content differently. It’s not clear what will be the winning mechanism, but some type of subscription or paywall mechanism seems likely. We’ve seen this with Substack, Locals and Medium already.

This is certainly technology worth checking out.

The post ChatGPT – 4 Useful Insights appeared first on JetsonHacks.

