Ep 194: 5 ChatGPT Facts You Might Not Know

Resources

Join the discussion: Ask Jordan questions on ChatGPT

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

Connect with Jordan Wilson: LinkedIn Profile


Five Lesser-Known Facts About ChatGPT

In today's episode, we're going over 5 facts you should know about ChatGPT.

Decoding The Language Model: Tokens, Not Words

The hype around OpenAI's generative AI, ChatGPT, is undeniable. What fewer people know is that ChatGPT relies on a process called tokenization to generate responses. It does not understand words the way humans do. Instead, it breaks your input down into tokens, the building blocks of language, and uses the relationships between words and their context to assign those tokens. The quality of your written inputs therefore plays a bigger role than you might think in producing quality outputs.
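To see tokenization for yourself, here is a minimal sketch using tiktoken, OpenAI's open-source tokenizer library; the sample sentence is just an illustration, and the exact token splits will vary by model.

import tiktoken

# Load the tokenizer used by GPT-4-class models
enc = tiktoken.encoding_for_model("gpt-4")

text = "Tokenization splits your prompt into sub-word pieces."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens")             # how many tokens the sentence costs
print([enc.decode([t]) for t in token_ids])   # how the words get split into pieces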

Demystifying the True Memory Capacity of ChatGPT

While GPT-4 Turbo is marketed with a 128k-token context window, ChatGPT itself works with roughly a 32k-token memory. Once that token limit is exceeded, the model silently forgets the earliest parts of the conversation, which can lead to inaccurate outputs known as 'hallucinations'. Understanding and managing that memory is key to getting reliable results from ChatGPT.
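One practical way to keep that limit in mind is to count tokens as the conversation grows. Below is a rough sketch that uses tiktoken to track a chat history's size and trim the oldest turns; the 32,000 figure and the trimming strategy are assumptions for illustration, not documented ChatGPT behavior.

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
TOKEN_LIMIT = 32_000  # assumed working window discussed in the episode

def count_tokens(messages):
    # Approximate count: message content only, ignoring per-message overhead
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_history(messages):
    # Drop the oldest non-system turns until the history fits the window,
    # so the model isn't silently forgetting whatever sits at the top.
    trimmed = list(messages)
    while count_tokens(trimmed) > TOKEN_LIMIT and len(trimmed) > 1:
        trimmed.pop(1)  # index 0 is assumed to hold the system/priming message
    return trimmed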

The Reality of the Browse with Bing Feature

The default browse with Bing feature in ChatGPT does not reliably visit specific websites; it merely runs quick queries based on the user's question. Because it leans heavily on search engine optimization (SEO) to pick results, it can return inaccurate information. Be mindful of these limitations when using the browse with Bing feature.

The Cost of Phasing Out Plugins

There are reports that OpenAI could phase out plugins in favor of GPTs, a decision that might result in more hallucinations and fewer capabilities. The move could cost OpenAI market share and user trust, particularly among non-developers who rely on plugins to automate manual tasks, and could reduce the overall effectiveness of ChatGPT.

The Impending Arrival of GPT-5 and What It Means

While the release date for GPT-5 remains uncertain, there's general consensus in the AI community that users need to understand and use large language models properly to get the best results and reduce the risk of misinformation. One technique often employed is 'priming': feeding the model relevant background information before asking for the output you actually want, which guards against inaccurate results.
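As a rough illustration of priming, here is a minimal sketch using the official openai Python SDK; the model name, the priming text, and the marketing scenario are placeholders, not anything prescribed in the episode.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Priming: establish role, ground rules, and source material before the real ask
priming = (
    "You are helping a marketing team. Rely only on facts provided in this "
    "conversation; if something is not covered, say so instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": priming},
        {"role": "user", "content": "Background: here is our product one-pager: ..."},
        {"role": "user", "content": "Now draft three launch taglines based only on that background."},
    ],
)
print(response.choices[0].message.content)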

Understanding the New Default Mode and Its Limitations

With the new default mode's browse with Bing, ChatGPT can run quick queries based on the user's question, which, if not properly controlled, can lead to inaccurate or problematic outcomes. A common issue is the model returning inaccurate data because of its heavy reliance on SEO.

In conclusion, while ChatGPT offers powerful capabilities, it's crucial to understand its underpinnings, limitations, and potential risks to fully leverage its benefits and avoid misinformation. Emphasizing further learning in generative AI can significantly enhance personal and professional growth, especially in this fast-paced tech world.


Topics Covered in This Episode

1. Overview of ChatGPT's Functioning
2. Common Misconceptions on ChatGPT
3. Structural Changes in ChatGPT
4. Ways to Improve Outputs in ChatGPT


Podcast Transcript

Jordan Wilson [00:00:16]:
There's a lot of things that you have wrong about ChatGPT. Whether you're just looking at the headlines or the marketing or, you know, following a Billy Boy ChatGPT influencer who's 19 and lives with his mom. There's so many things you have wrong about ChatGPT. So today, I'm gonna tell you 5 facts that you might not know about everyone's favorite large language model. Alright. We're gonna talk about that today and a lot more on Everyday AI. What's going on y'all? If you're new here, my name is Jordan, and I'm the host of Everyday AI. This live stream, this podcast, this free daily newsletter, it's for you.

Jordan Wilson [00:01:00]:
It's to help us all learn what's going on in the world of generative AI and how we can all leverage it. Alright? So gotta pay specific attention to that piece today, because I'm gonna tell you how to actually properly leverage ChatGPT to grow your company and to grow your career. So, if you're joining us on the podcast, thank you. As always, make sure to check out your show notes. Make sure to follow, subscribe to the show, leave us a rating. But also, there's contact information in there. We actually have some hidden surprises in the show notes there, so make sure to check those out. You can always even reach out to me.

Jordan Wilson [00:01:38]:
Put my LinkedIn and my email in there as well. And if you're joining us on the live stream, thank you as well. So, you know, Tara joining us from Nashville and Harvey, Mauricio and Brian, everyone, thank you for joining us. But before we get into facts that you might not know about ChatGPT, let me first remind you: sign up for the free daily newsletter at youreverydayai.com. Also, while you're there on our website, whether you know it or not, there's something on there called AI tracks. Go click on that, because this is what I still think is the one single best source of free information on generative AI that there is on the Internet, period.

Jordan Wilson [00:02:18]:
We have now almost 200 episodes interviewing experts, from the companies making this technology to startup entrepreneurs and everything. So you can go listen to, you know, if you're interested in AI and education, we've got episodes on that. If you're interested in AI and entrepreneurship, we've got dozens of episodes on that. So make sure to go check that out after you subscribe to our free daily newsletter. Alright. So before we get into those 5 facts you might not know about ChatGPT, let's first go over the AI news as we do every day. Alright. So let's start it off.

Jordan Wilson [00:02:52]:
Intel stock is dropping as investors fear they're not invested enough into AI. Intel's stock has taken a hit after weak guidance for the current quarter, leading analysts to question the company's ability to compete in the AI and chip market. Analysts believe that it could take time for Intel to regain its position in the AI chip market, and they will need, according to investors, to prove themselves in the AI market and compete with companies like NVIDIA, who are literally just running away in the GPU market. Alright. So next, is ChatGPT's laziness gone with a new update from OpenAI? So OpenAI updated its GPT-4 Turbo model to reduce causes of laziness and improve code generation tasks. The company also launched smaller AI models called embeddings to help applications with retrieval-augmented generation, RAG.

Jordan Wilson [00:03:51]:
Alright. So OpenAI has updated the GPT-4 Turbo model to address laziness that many users have reported, specifically when it comes to generating code. A lot of times, it would kinda just give you a couple responses or say, alright, well, for the rest, you're gonna have to finish it yourself. So the company made this update. They've acknowledged that there's been this problem, but they've also launched these smaller AI models called embeddings to help applications with retrieval-augmented generation, or to be able to bring in outside data while you're working with the GPT model. Speaking of OpenAI, the federal government is investigating big tech relationships around AI investments. So the Federal Trade Commission has announced an inquiry into the AI industry and its major players, including everyone.

Jordan Wilson [00:04:36]:
Amazon, Alphabet, Microsoft, Anthropic, OpenAI, all the big AI companies right now. This market inquiry will be conducted under Section 6(b) of the FTC Act and aims to investigate potential antitrust violations or deceptive practices in the field of artificial intelligence. So FTC chair Lina Khan has emphasized that there is no AI exemption from existing laws, and the agency will closely examine any instances of companies using their power to hinder competition or mislead the public. Interesting. Alright. We got some hot water now with the federal government and artificial intelligence. So if you wanna keep up with those stories and more, please make sure to go to youreverydayai.com. Sign up for the free daily newsletter.

Jordan Wilson [00:05:24]:
We'll be recapping those stories and a lot more. If you haven't listened to or read our newsletter, you can actually listen to it. We have an AI Jordan read it sometimes. But if you haven't read our newsletter, I'll tell you this. It's written by a real human, me, a former journalist. And we break down the fluff, the lies, the marketing, and we actually just give it to you real. Alright. So if you wanna actually learn generative AI, I'd say there's no better place than our newsletter.

Jordan Wilson [00:05:53]:
Alright. Speaking of that, let's actually learn some facts about ChatGPT and talk about 5 facts that you might not know about what I'd say is everyone's favorite large language model. I mean, if you're talking active users, technically, the favorite. But I also wanna know from our livestream audience joining us. Right? Cecilia joining us from Chicago. Jay and Ted and Judy, thank you all. And Mike Forgy and Wuji and Laura, what questions do you all have? Or maybe what's one thing that you have in your mind that you're like, okay, is this a ChatGPT fact, or is this fiction? Go ahead.

Jordan Wilson [00:06:33]:
Get your comments in. Get your questions in. Same thing if you're listening on the podcast. Hit me up. I'll give it to you straight. So before I dive into these 5 facts, I first want to give you a very, very brief background about myself and Everyday AI. Alright. I'd say a lot of what comes into play now at Everyday AI is given my background.

Jordan Wilson [00:06:57]:
Right? I was an award-winning investigative journalist for many years. Right? And one of the things about being a journalist is you never believe what anyone tells you. Anyone. Right? You know, there's the saying: if your mom says she loves you, get it in writing. That's our approach when it comes to generative AI, because we have, technically, more than a 100,000 people trusting us each and every month to separate the truth from the lies, to separate marketing from what is actually happening. Alright? So if you're a vet around here, you probably know that. You know, sometimes I'll spend 5, 6, 12 hours doing research just for a single show. Alright? Because I wanna make sure if you're listening to this, if you're watching this, if you're reading about this, you know these are facts.

Jordan Wilson [00:07:57]:
Alright? I'm not just copying and pasting what, you know, a big company says in their press release. You know, big companies always hit me up trying to get their people on the show. I vet everything. Alright. I want you to know that. So when I'm talking about these facts, you may not believe some of them because they might go against what a big company says. But as always, I bring the receipts. Right? You'll see that if you're watching live, if you're on the podcast. I've got some screenshots.

Jordan Wilson [00:08:28]:
I'll do my best to describe them, but I want you to know that. Alright? I'm not one of those Billy Boys. I mentioned Billy Boy. Right? Those are those, you know, 19-year-old kids that are, you know, unfortunately teaching the rest of the Internet how to use ChatGPT. And they say, oh, you know, do this, do this. No. They have no clue what they're talking about. Right? I have a background in this space.

Jordan Wilson [00:08:50]:
Right? Our team has been using the GPT technology since 2020. So we're not new when it comes to GPT. You know, ChatGPT has been out since November 2022. Our team had already been using the GPT technology for multiple years at that time. Alright. I just wanna get that out there before we dive into these 5 facts. So let's get going here. Let's go over the 5 facts.

Jordan Wilson [00:09:16]:
And, again, thank you, everyone, for joining live. Get your questions in. I'll try to answer them as I go. We'll see how it happens. Alright. So number 1, fact you might not know about ChatGPT is, well, ChatGPT doesn't technically understand words. You might be saying, what, Jordan? I've seen it. You know, I can talk to it.

Jordan Wilson [00:09:39]:
I can chat with it. I put in a query, and it gives me something back. Right? And you've probably seen some great outputs with ChatGPT. So you're like, alright, Jordan, you clearly don't know what you're talking about. ChatGPT clearly understands words. Not really. Right.

Jordan Wilson [00:09:57]:
So ChatGPT thinks and analyzes and responds via tokenization. I actually did an entire, like, 40-minute episode on that process. So if you're really interested, we'll make sure to leave that in the show notes. But, essentially, it's like this. And this is how most large language models work. It's through tokenization. So a token can be, you know, characters. It can be part of a word.

Jordan Wilson [00:10:25]:
It can be an entire word. But, essentially, with large language models, when you type something in, it breaks that down into tokens, right, and it assigns tokens to all of the different words. So, actually, you know, I even went over this: OpenAI has a tokenizer. You can go and play and type in words and see how it thinks, how it breaks it down. Right? It's not like, oh, each word has the same token. You know, the example that I showed is, I used the word 'just' 4 times, and depending on the context of the word, it actually assigned 4 different tokens to the word 'just'. Right. Because the word 'just' can be used in many different ways.

Jordan Wilson [00:11:07]:
So that's how large language models actually work: they use context and all of these different kinds of relationships between words, and the context of your inputs, to assign tokens. Alright? So it doesn't actually technically know what words are. Tokens are essentially building blocks of language. Alright. And like I said, most large language models work this way. Because, again, it's important to know, all ChatGPT is, or Google Bard, or Anthropic's Claude, or Meta's Llama, all they are are the world's most advanced autocomplete. But they autocomplete and create and generate and analyze all via tokens.

Jordan Wilson [00:11:56]:
Right. Which is why, you know, I keep telling people this and people don't understand. One of the best skills that you can have in 2024 and moving forward, as the business world starts to integrate its daily processes around large language models. Right? Microsoft Copilot. You know, you have your enterprise, your Amazon Q, you know, your IBM Watson. We're gonna be using prompting language more and more in the business world. Right? Which is why it's important to understand how to prompt and understand, really, tokenization, because words matter.

Jordan Wilson [00:12:33]:
One of the best skills you can have is written communication. Right? If you are a great communicator, if you are great with words, you are going to get better outputs inside of ChatGPT or any large language model. You don't need better prompts. You don't need to become an expert prompt engineer. No. You need to improve your written, your verbal communication. We do free courses all the time, and someone had a question like, what book would you recommend to become a better prompter, right, to get better outputs in ChatGPT? And I said the book On Writing Well by William Zinsser, you know, a book that's been around for decades. If you wanna become better at prompting, read that book.

Jordan Wilson [00:13:22]:
That helps your words mean more. That's one of the best skills you can know, but that's fact number 1: ChatGPT doesn't understand words. Alright. Number 2, speaking of tokens, right, ChatGPT does not have a 128k token memory. Alright. This one's a little dorky, but I'm gonna try my best to explain it to you. And that's why we talked about kind of tokens in number 1 and number 2.

Jordan Wilson [00:13:52]:
So now that you know that ChatGPT both analyzes and responds, essentially, in tokens, those tokens add up. Okay? And at dev day, so OpenAI's dev day in November, they announced, hey, there's a new model, GPT-4 Turbo. And GPT-4 Turbo has a 128k memory context window, and ChatGPT is powered by GPT-4 Turbo. Right. So you saw all the major publications. You know? Yeah. I have an example here, a Forbes article that came out, and it said, you know, ChatGPT's new features, and it talks about how ChatGPT now supports 128k tokens.

Jordan Wilson [00:14:38]:
No. It doesn't. It doesn't. That's marketing. It's marketing. So right now, it has 32k tokens. So that means when you are talking back and forth to ChatGPT, all of the words you put in are converted to tokens. Everything ChatGPT spits back to you is counted in tokens.

Jordan Wilson [00:15:03]:
And those tokens, once you get past 32,000, it is going to forget what's at the top. So let's just say, as an example, once you get to 35,000 tokens of back-and-forth conversation, ChatGPT has already forgotten what was in the first 3,000. Alright. Let me do an explanation. And, again, this is why you shouldn't blindly listen to the Billy Boys on the Internet or to anyone, even the big companies saying, oh, 128,000 tokens. No. It doesn't.

Jordan Wilson [00:15:36]:
Here's an example. Alright. Here's an example. So I have a token counter. You should always use a token counter. I have a little video on our YouTube channel that shows you how to do that, but there's Internet extensions that count these tokens as you go back and forth. So as an example, I started a chat out by saying, my name is Jordan. I'm from Chicago. I like Carolina Blue.

Jordan Wilson [00:16:05]:
My favorite food is deep dish pizza. Then I ask ChatGPT, right, the very next thing. I asked ChatGPT, what's my name? Where am I from? What color do I like? What's my favorite food? And because I did this right away, ChatGPT says, certainly, Jordan. Here's a recap. Your name is Jordan. You're from Chicago. Your favorite color is Carolina blue.

Jordan Wilson [00:16:28]:
Your favorite food is deep dish pizza. Alright? So what I then do in a brand new chat, I do the exact same thing, and then I just start pasting in a lot of information. Right? I'm actually just pasting in transcripts from our Everyday AI episodes and asking ChatGPT to tell me what this means, just to get through these tokens pretty quickly. Right? Took about 8 minutes. But you'll see now, once I got to 34,000 tokens, I ask the same question. What's my name? Where am I from? What color do I like? What's my favorite food? And at that point, ChatGPT replies, I don't have access to personal information. Right? But if you do the exact same thing when you're at 31,000 tokens, it can still recall. So what does this mean? What does this mean? Well, one of the biggest problems when people are using ChatGPT is hallucinations.

Jordan Wilson [00:17:28]:
Right? Those are untruths, you know, when ChatGPT forgets things or when it just lies and makes stuff up or when it gives you inaccurate information. And there's really 3 main causes of this. And we go over all of these in our free course that we do a couple times a month called Prime Prompt Polish. So if you're listening, if you want access, just type in PPP. Hey, if you're listening, let me know. Did you learn anything in our free Prime Prompt Polish course? It's been taken by over 2,000 business leaders, even from big companies like Microsoft and Meta that are building the technology. But it's one of the most common mistakes that people make, not keeping memory in mind. Right? And then what happens is they just say, you know, Billy Boy says, ChatGPT is not good, because you're not using my prompts.

Jordan Wilson [00:18:21]:
No, Billy. It's because you are not using ChatGPT correctly. You have to play by ChatGPT's rules. Alright? But the correct rules, not just what ChatGPT says. Oh, 128,000? No. 32,000. Keep memory in mind, and you are gonna cut down on hallucinations.

Jordan Wilson [00:18:42]:
Hey. Woozy and Frank both said amazing course. What's up, Fabian? Fabian says this is the best AI program. Alright. So let's keep going. Yeah. Shout out to all these people joining from all over the world. Alright.

Jordan Wilson [00:18:58]:
Fact number 3 you might not know about ChatGPT. Alright? So default browse with Bing does not actually allow you to browse websites. Yes. This is one thing I can't stand. I can't stand it. There is such a misconception around this. Alright? But I know we have a lot of more advanced people, you know, at least here on the live stream, because I recognize your names. You're all very smart people, but let me try to break this down.

Jordan Wilson [00:19:27]:
If you're newer to ChatGPT, maybe on the podcast, you're just getting into it. Alright. So ChatGPT has a knowledge cutoff. It's actually different for different modes. Alright? But with the paid version of ChatGPT, $20 a month, ChatGPT Plus gives you access to GPT-4, plugins mode, and GPTs. Alright. When you're working with those paid models, the knowledge cutoff is April 2023. So, essentially, GPT-4, just like most other large language models, is trained on the entirety of the open Internet, closed Internet, and a lot more.

Jordan Wilson [00:20:05]:
Alright? So it essentially gobbles up all of this information, puts all of this information, trillions of parameters, in its database, and that database is essentially static. It stops at April 2023. Alright? But ChatGPT, to its credit, did a lot to fix this, because now in the new default mode, you have browse with Bing. So what that means is, if you're opening up a new chat in the default mode in ChatGPT, if you ask it a question, sometimes even if you don't ask it to, it will browse with Bing. Right, it'll do a quick query based on what you are asking. Okay? There's no rhyme or reason. Obviously, if you ask ChatGPT to use browse with Bing.

Jordan Wilson [00:20:53]:
It will. But sometimes, even if you don't ask it to browse with Bing, it still will. Right? So if ChatGPT, when it's converting your question into tokens, thinks, oh, this is something that might need a quick browse with Bing, then it will do it. Okay? Which is actually a little problematic, and I'm gonna show you what I mean. But here's what people get wrong about this literally all the time. Even these, quote, unquote, large language model experts that are, you know, speaking in front of large conferences, they don't know what they're doing. Alright? People are more concerned about making money than taking the time to understand things, to test things. Alright? You gotta do that, y'all.

Jordan Wilson [00:21:51]:
Y'all are dropping the ball. Alright. So let me tell you what this means. So browse with Bing can't always visit specific websites. So if you put a URL into browse with Bing, people think that that means, I'm gonna put in this URL, and now my chat within ChatGPT knows and understands this information because I'm telling Bing to go read this URL. That's not how it works. K? It flashes very quickly across the screen. I'm gonna show you an example of that.

Jordan Wilson [00:22:27]:
But all browse with Bing actually does is query the information. Okay? It queries whatever is in the URL. So there's actually some semantics there. So it's kind of reliant on SEO, what's gonna come up and what's not, whether browse with Bing is actually going to read and analyze that website. It depends. And I've done many videos on the YouTube channel looking at how all large language models do this, and ChatGPT with plugins is the only mode that consistently does this. But if you don't fully understand this, if you don't test this for yourself, if you aren't careful, you are going to allow these hallucinations or lies into your outputs if you don't know what you're doing.

Jordan Wilson [00:23:12]:
So listen. I got it. Don't worry. So let's take a look. We're gonna take a look by showing another example in fact 4. Alright. So hallucinations are actually rare if you use ChatGPT correctly. Okay? These 2 facts correlate with each other.

Jordan Wilson [00:23:37]:
And now I'm gonna show you that example of browse with Bing, k, and how it can actually lead to hallucinations. Alright. If you're on the podcast, you might wanna click the link. We always leave the link to watch this, but I'm gonna do my best to walk you through these screenshots. Okay. So I am in the default mode, browse with Bing. And I say, what companies have a $3 trillion market cap, okay, in the default mode? So I didn't ask it to use Bing, but it's smart enough, to its own credit, to browse with Bing. Right? Because it's determined that, okay, this query might need up-to-date information past April 2023.

Jordan Wilson [00:24:31]:
Right? That's, like, 10 months old. So ChatGPT does know today's date, so it does know that my query here, asking about what US companies have a $3 trillion market cap, might need to browse with Bing. So I have a screenshot here, okay, that has the little browser icon, and it says it's searching 'US companies with $3 trillion market cap 2024'. Seems like a great query. Right? So the problem is that flashes on the screen for, like, a second, and then it goes away. And then you can't necessarily see exactly what ChatGPT did behind the scenes or exactly what browse with Bing did behind the scenes. K. Now let me read you the response from the default mode from browse with Bing.

Jordan Wilson [00:25:17]:
Ready? As of January 2024, Apple is the only US company that has achieved a market capitalization of $3 trillion. Alright? So if you don't know any better, if you don't know how the default mode works, if you don't know how ChatGPT and large language models work, you're gonna see this, and there's also some citations there. Right? So you're gonna look at this and think, oh, this is great. Saved me some time. Right? Think of how you might use this in business, in your company, to grow your career, whatever. There's this quick little flash of browse with Bing, and you get what looks like great information that's sourced. So you're like, oh, great.

Jordan Wilson [00:26:03]:
You know? You feel confident. You feel secure. Guess what? That's a hallucination. That is false. K. Browse with Bing. Because of how it works, it doesn't work very well. If you want accuracy, you're not getting it here.

Jordan Wilson [00:26:24]:
This is false. Alright. Now I'm doing the exact same query, but by using plugins mode. Alright. So here's what I'm doing differently. I have 3 plugins enabled. One of them is a plugin called WebReader. So I'm saying, using the WebReader plugin.

Jordan Wilson [00:26:46]:
K. Plugins mode, literally freaking fantastic. I tell people this all the time. It is a literal cheat code. It's not fair that plugins mode exists and that so few people know how to use it. Alright. So I'm saying, using the WebReader plugin, please visit this article. Okay? And then I give it a specific URL to go to.

Jordan Wilson [00:27:08]:
And then I say, then please answer. And I give it the exact same question, verbatim, word for word: what US companies have a $3 trillion market cap? Alright. So then it says, and it says using WebReader, so I know, it says the US companies that have reached a $3 trillion market cap are Microsoft and Apple. Wait. What? That can't be right, because browse with Bing just told me, as of January 2024, Apple is the only US company that has achieved a market cap of $3 trillion.

Jordan Wilson [00:27:53]:
What's up with that, y'all? Who's lying? Browse with Bing is lying. You do not have control over how browse with Bing works. You do not have very great depth of information on what it actually is doing behind the scenes. Right? I actually had to do this prompt multiple times to get this screenshot, because it flashes so quickly, and then you can't go back and look at what happened behind the scenes. So, again, browse with Bing, all it did is it searched 'US companies with $3 trillion market cap'. Okay. ChatGPT plugins actually goes, if you know what plugin to use and how to use them, it goes to an actual website, right, which if you try that in browse with Bing, again, all it does is query the words in the URL.

Jordan Wilson [00:28:51]:
Right? Because OpenAI has gotten into a little bit of some legal issues. Right? They even had to take the browse with Bing feature down for multiple months as it was facing lawsuits, because it was determined, or it was alleged, that browse with Bing was accessing copyrighted materials behind paywalls, which it was. Right? That's one of the reasons why now OpenAI is facing a multibillion-dollar lawsuit from The New York Times, and most all large language models are being sued by publishing companies all over the place, but that's one of the reasons why. So using plugins, you avoid hallucinations. Alright? So, again, if you know what you're doing, if you're using ChatGPT correctly, hallucinations are actually rare. You know, there are some days when I'm in ChatGPT for 8 hours and get 0 hallucinations. Why? Well, I follow our Prime Prompt Polish method. When you do that, you very rarely get hallucinations, but it's also about using the right mode at the right time for the right output.

Jordan Wilson [00:30:10]:
Sometimes I'll use browse with Bing, but I know the pros and the cons of it. But most of the time, I'm using ChatGPT plugins because I have more control. Right. If you increase the quality of your inputs inside of ChatGPT, you will increase the quality of your outputs. Sometimes you have to take away decision-making power from ChatGPT, especially in the default mode, because you can't really see under the hood. So you gotta say, nope, I'm not leaving this up to you, browse with Bing, because you're gonna Google, or you're gonna Bing, right, just like what I showed you. You're gonna Bing 'US companies with $3 trillion market cap 2024', and Bing thinks that's right.

Jordan Wilson [00:30:56]:
And even the response, you can look at the response: 'as of January 2024.' Well, that was true 2 weeks ago in January 2024, but guess what? It's still January 2024, and that is no longer true. Alright. Last one. Here we go. And get your questions in now. I see a couple of them. I'm gonna get to them.

Jordan Wilson [00:31:19]:
Hey. We got some mind-blown emojis and some people saying, wow. Thanks, y'all. So number 5: ChatGPT is too developer focused, which will cost it billions. That's billions with a b. I've had a whole show on this. And let me, yeah.

Jordan Wilson [00:31:46]:
I had to take a deep breath here. So OpenAI is, obviously, I think, one of the most brilliant companies in the history of technology. You can't really make an argument against that. Right? What other company has ever had 500 employees and a valuation of $100 billion? Hardly any. Right? Hardly any. They make an amazing product. It is extremely capable.

Jordan Wilson [00:32:22]:
I think right now, I've used hundreds of generative AI softwares. And it's not that ChatGPT is in 1st place. ChatGPT is in a category of its own. I would not even say, oh, ChatGPT is first, blank is second. No. ChatGPT is literally in its own category. Because if you use it correctly, nothing else on the market, no generative AI product, helps you win back as much time as ChatGPT if you use it correctly. Alright.

Jordan Wilson [00:32:53]:
So let me talk about something you might not know now. I think they're too smart. OpenAI is too smart for its own good. Let me talk about what I mean here. So, there's an article I'm showing here that says OpenAI pissed off developers. There's been some unofficial proclamations that ChatGPT is getting rid of plugins, phasing them out. Right? It's not official official, but, you know, I have a tweet from someone in developer relations here, responding to someone, saying plugins aren't showing right now due to a bug.

Jordan Wilson [00:33:39]:
This was about a month or 2 ago. But he said, long term, we hope that GPTs will be the new home for plugins, and you can reuse an existing plugin as an action in your GPT. Alright. Let me tell you what that means. So since November, OpenAI has stopped allowing new plugins. K. And it has been reported and alluded to that they are phasing plugins out. Alright.

Jordan Wilson [00:34:11]:
In lieu of GPTs. So let me tell you this. GPTs are amazing. Don't get me wrong. Right? So if you don't know what GPTs are, essentially, you can create your own version of ChatGPT. You can essentially train it on how you want it to act. You can configure it in plain English. You don't have to know how to code.

Jordan Wilson [00:34:35]:
But also, if you do know how to code, you can add some advanced functionality. Right? You can put in some, you know, some coding, I'll just say that to keep it easy, in the actions. Right? So they're saying in the future, you can reuse, quote, unquote, an existing plugin as an action. So you're like, alright, Jordan, what's the big deal? You can reuse a plugin as an action in the future. Well, number 1, it's not out yet. But number 2, and this is where I think OpenAI is too developer focused, they are not, I don't think, thinking of the everyday person.

Jordan Wilson [00:35:14]:
Right? So this is, I think, the double-edged sword of OpenAI. Being very developer focused and extremely smart has led to their rapid rise. Right? And they were first. They had first-mover advantage. But I think decisions like this are gonna cut into their market share, because I don't think that this is a decision that, as an example, Microsoft would have made. Because right now, the plugins, and what truly makes plugins valuable for the everyday person, emphasis on the everyday person, i.e., non-developers. Right? The everyday person could go in, even just by taking our free courses, and they could automate probably 60 to 70% of the manual work that they're doing by creating plugin packs.

Jordan Wilson [00:36:10]:
Okay? Here's why that's important. Right now, inside of ChatGPT plugins mode, you can use any 3 plugins at one time. So what that means, as an example, is what you can use in 1 prompt. Right? If you go through and properly train it like we teach y'all in our free Prime Prompt Polish course, you can properly train it first. But then in 1 prompt, you can have, as an example, 1 plugin that can read the Internet, specifically, unlike browse with Bing. Then you can have a plugin that creates spreadsheets for you. K? And then you can have a plugin that takes information from that spreadsheet and creates diagrams.

Jordan Wilson [00:36:52]:
This is a plugin pack that I demonstrate on the show often. So you can, you know, give ChatGPT, in one chat: here's a 100-page PDF. Read through it, summarize it for me, grab all of this specific financial information, tell me what it all means, put it all in a CSV, in a spreadsheet, and then also, from that, create a diagram for me. Right? That's people's jobs. Right. But that is only possible because you can pick and choose any 3 plugins at once. So by phasing out plugins, and emphasis here in this tweet saying you can reuse 'an' existing plugin.

Jordan Wilson [00:37:35]:
Singular. Who knows? Maybe I'm kicking around dirt for nothing. Maybe you'll be able to use all your plugins inside one GPT, but it does not look that way. Alright. So all this does is take away the power of ChatGPT for the everyday person. And I think that's what's going to happen if and when OpenAI does take away plugins: over the course of a couple of months, people are gonna realize, wow, I'm getting more hallucinations. You know, I used to use plugins.

Jordan Wilson [00:38:11]:
Now I'm using browse with Bing. Right? Because the default mechanism to access the web when you create a GPT is browse with Bing, and it's problematic. I just showed you that. So what happens now, when all of these businesses are making these GPTs, right, and they have this false sense of security, or maybe false confidence, because they don't fully understand how browse with Bing works? You're gonna have an increased level of reported hallucinations. People are going to distrust ChatGPT. People are going to say they're just getting more hallucinations, because they're taking away capabilities from the everyday person, I'd say. Obviously, there's things I don't know.

Jordan Wilson [00:39:01]:
Right? I'm sure there's reasons, whether it's financial, legal, etcetera, that maybe is making them take away plugins for a reason that I don't know. But as it stands right now, right, you can still use plugins. And this decision, if they do phase them out in lieu of GPTs, is a move that will eventually, mark my words, y'all, mark my words, I don't say this lightly, cost them billions of dollars. People are gonna jump ship. Right? Microsoft Copilot Pro has released their GPT builder. Well, they've announced it.

Jordan Wilson [00:39:40]:
They're gonna release it. K. I think people are gonna jump over there. Or whenever, you know, Bard catches up, people are gonna jump there, because I think this is gonna lead to a lot of problematic usage inside of ChatGPT. Got it. We made it to the end. I'm gonna get to your questions. If you heard something in this podcast, in this live stream, and you're like, interesting.

Jordan Wilson [00:40:05]:
I didn't know that. I should investigate more. You know what? We do the hard work for you, y'all. This is what we do literally every day. Cut the fluff. Give you straight facts. Alright. So if you wanna know more, just type in PPP if you're, you know, listening live here, like Val just did.

Jordan Wilson [00:40:31]:
Send me an email. If you're listening on the podcast, the email's in there. My LinkedIn's in there. We do these free courses literally all the time. They are free. They are live. Alright? We teach you how to properly do this stuff. But let me see.

Jordan Wilson [00:40:45]:
I'm gonna get through a couple of everyone's questions here, because this is a show. This is a live stream. You know? Alright. So Declan says, when is GPT-5 coming, and what new updates will that bring? Good question, Declan. Sorry if I'm not pronouncing your name correctly. Who knows? Right? If you were to follow Twitter, right, or these Billy Boys out there, they would have told you 2 weeks ago that GPT-5 was out. Right? Sam Altman did an interview, and everyone's like, oh, he's talking about GPT-5. It's out. It's already here.

Jordan Wilson [00:41:22]:
No. It's not. When will it come out? No one knows. Could be tomorrow. Could be next year. All I know is GPT-4 is coming up on, jeez, I don't know, almost 2 years? A year and change now, and it is still by far the most powerful model out there. It is not even close.

Jordan Wilson [00:41:45]:
Look at all the benchmarks. Right? Even Google's new model that they just released does not come close to GPT-4. So I don't know. What new updates will GPT-5 bring? It's a great question. Right. We recapped this in our newsletter a couple weeks ago. I'll try to put it in the comments of this LinkedIn livestream. But an increased emphasis on the ability to reason, right, to think more and analyze more like a human, to better understand our language, but also an obvious increased emphasis on multimodality.

Jordan Wilson [00:42:27]:
So what that means is the ability to better input different modes. So input text, input photos, input video. Right? And then the ability for ChatGPT to output in multimodal. So those are at least what's been reported so far: a better ability to reason and a higher emphasis on multimodality. And, obviously, you know, probably everything else will get better, the memory, the speed, the accuracy, etcetera, but those are kind of the big things. Alright. Woozy. Woozy says, any use cases that you think an earlier version of GPT, not GPT-4, actually does better on than 4, or at least just as good?

Jordan Wilson [00:43:12]:
I'll say no, and I'll say that with a caveat. So I will say, in some instances, even when you're using the GPT-4 model that is connected to the Internet, you might wanna cut Internet access off. Because, like I just showed you here, you don't necessarily have control. You know, you can put in a prompt, not even asking ChatGPT to browse with Bing, and it'll still do it. But I'll say no. You know, at least, Woozy, if you're asking, oh, should I go to, you know, GPT-3.5 sometimes? Probably not. You know, like I said, if you are getting results from GPT-4 where it's bringing in bad information from browse with Bing, try it again, and in your prompt, instruct ChatGPT to not use browse with Bing.

Jordan Wilson [00:43:56]:
But I will say there's probably not an instance when you're going to get better results. Jason, fantastic question here. Jason says, how would you know that you have a hallucination? Yeah. That's great. And this is why it's important. Number 1, there's not enough education out there on large language models, because all the companies are really putting out is marketing, and then you have a bunch of people trying to monetize. Then you have Everyday AI in the middle. Right? We're saying no.

Jordan Wilson [00:44:29]:
No. No. Don't follow this kid on Twitter. Don't listen to what the big companies are telling you. Right? There's the whole debacle about when, you know, Google released their new Gemini model and they showed this video, and, like, no. That's marketing. That's not how it works. Not at all.

Jordan Wilson [00:44:48]:
Right? So there is a huge problem, Jason, because there is no good way to avoid hallucinations unless you actually know the pros and cons of the specific generative AI model, the large language model, that you're using. And most people don't take the time, the effort, and the care to truly understand how to properly use it. Right? They either just listen to the marketing from the big company, or they listen to the 19-year-old in his basement that has no clue what he's doing and is just trying to sell you some crap. But they don't take the time to do it the right way. That's why Everyday AI exists. Alright. Let's see. I think we got to everything here.

Jordan Wilson [00:45:29]:
Okay. Wait. I think we got 1 more here. I think we got 1 more. Alright. I mean, it's Friday. This is an informal episode. We're having fun.

Jordan Wilson [00:45:35]:
We're kicking it. Alright. So, another question here from Declan: are we better off keeping our prompts short and sweet to get better outputs, or longer conversations? Not necessarily. You're best off priming ChatGPT. Right? We teach that whole process. You never just go in.

Jordan Wilson [00:45:53]:
Copy-and-paste prompts literally do not work. That's not how you use a large language model. I don't care who online, what expert, is telling you to use these copy-and-paste prompts. That's laughable. It's not how large language models work. Right? There's a reason why, when you look at any benchmarking. Right? So when these new models come out, I'll try to explain this for our nontechnical friends. But anytime a new model comes out, there's all of these benchmarks.

Jordan Wilson [00:46:19]:
There's all of these different tests. Right. And they get scores. So any single time, any model always performs better when you go to a 5-shot, as an example, versus a 0-shot or a 1-shot. Okay. So what that means is, if you just put in a prompt and ask for an output, you don't give examples, no back and forth, that's a 0-shot prompt.

Jordan Wilson [00:46:41]:
Right? 5-shot, I'll simplify it. Let's just say that means you go back and forth with ChatGPT or any large language model 5 times, or give it 5 examples. Right? I'm oversimplifying it, but that's a 5-shot. So, essentially, which obviously makes sense, every single model, on every single test, always does better on a 5-shot versus a 0-shot or a 1-shot. It always performs better, because that is how large language models work. When you start a new chat, it knows both nothing and everything at the same time. So if you just throw in a huge, long prompt, you're gonna get either hot garbage or lukewarm garbage.
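For readers who want to see what a 'shot' looks like in practice, here is a minimal sketch of a zero-shot request versus a few-shot one using the official openai Python SDK, where prior user and assistant turns act as worked examples; the model name and the ticket-classification task are illustrative assumptions, not something from the episode.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: one prompt, no examples, no back and forth
zero_shot = [
    {"role": "user", "content": "Classify this ticket: 'My invoice total looks wrong.'"},
]

# Few-shot: earlier turns show the model exactly what a good answer looks like
few_shot = [
    {"role": "user", "content": "Classify this ticket: 'I can't log in.'"},
    {"role": "assistant", "content": "Category: account access"},
    {"role": "user", "content": "Classify this ticket: 'Please cancel my plan.'"},
    {"role": "assistant", "content": "Category: cancellation"},
    {"role": "user", "content": "Classify this ticket: 'My invoice total looks wrong.'"},
]

for name, messages in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    reply = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    print(name, "->", reply.choices[0].message.content)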

Jordan Wilson [00:47:24]:
You're always gonna get better results by treating ChatGPT like a human. Alright? We had Abren from OpenAI come on, and that's exactly what we talked about on that show with him. You're always gonna get better results in ChatGPT. Treat it like a brand new employee. You need to have a conversation. You need to go back and forth. You need to go through reinforcement learning. You need to go through knowledge testing.

Jordan Wilson [00:47:46]:
You need to share information in the right way. You need to test it. You need to keep memory in mind. It's not as simple as do I need a short prompt or a long prompt. You need to properly understand how the model works, and you need to go through a process. Alright. I think we got to it all. Yeah.

Jordan Wilson [00:48:02]:
I hope this was fun. Did you learn anything? Right? Hey, I tried to make this one that even if you're here all the time, I wanted you to know at least 1 new thing. So I hope today's episode, where we talked about the 5 facts that you might not know about ChatGPT, I hope you learned something. Alright? So here's what we're gonna do. I try to do this because I know sometimes I'm long-winded, y'all. We're gonna go over them rapid fire.

Jordan Wilson [00:48:29]:
Fact number 1, ChatGPT doesn't technically understand words. It uses tokens. Fact number 2, ChatGPT does not actually have a 128k memory. No matter what someone tells you, it has 32k. Number 3. Fact number 3, the default browse with Bing does not actually allow you to always browse specific websites. It just queries, and sometimes it doesn't tell you.

Jordan Wilson [00:48:53]:
Fact number 4, hallucinations are actually rare if you use ChatGPT correctly, and I showed you that. And fact number 5, it's more of a hot take, but it's gonna end up being a fact, don't worry: ChatGPT is actually too developer focused, which will cost it billions. Alright. That's it, y'all. Thank you so much for tuning in. Make sure to go to youreverydayai.com.

Jordan Wilson [00:49:18]:
Sign up for that free daily newsletter. We're gonna break down today's conversation and more, letting you know how to actually learn generative AI, how to leverage it to grow your company, grow your career. That's it. Thanks for tuning in. Hope to see you back Monday and every day with more Everyday AI. Thanks y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI