EP 274: 7 things you need to know about GPT-4o and what OpenAI isn’t telling the truth about

The Game Changer: GPT-4o and OpenAI's Truths and Untruths

The recently released GPT-4o features a noticeable change in the knowledge cutoff, rolled back from the previously reported October and December 2023 dates to May 2023. Along with this, there is also a new UI/UX update, which mimics a texting layout and includes a novel mode selector. These changes provide a more user-friendly experience and more efficient topical queries.

Controversy Over Context Window Size

Astute observers may notice a discrepancy between the 128,000-token context window OpenAI reports and what the model actually delivers in practice. This highlights the importance of independently testing and verifying the claims made about these complex AI models rather than relying solely on corporate disclosures.


Browse With Bing: An Improved Functionality

An enhancement in the functionality of Browse with Bing has been observed, specifically impacting the use of large language models. This more sophisticated version handles specific queries more accurately and reliably. Additionally, when compared with Perplexity, Browse with Bing showed demonstrable differences in speed, accuracy, and data retrieval capabilities.


Advantages of Internet-connected GPTs

The use of targeted Browse with Bing calls in conjunction with Internet-connected GPTs yields significantly better outcomes, especially when focusing on specific web pages. These perks also differentiate the options available on the free versus paid versions.


Key Takeaways on GPT-4o

Given the current rapid pace of AI development, numerous innovations are on the horizon. One of them is OpenAI's GPT-4o. This omni model comes packed with a trove of features, many of them yet to be released. OpenAI's messaging around GPT-4o's availability – especially for free users – and future updates needs clarification.


Misinformation and the Need for Transparent Communication

As the AI sphere expands and interest in it becomes more widespread, the potential for misinformation rises. Unfortunately, communication regarding new AI features – and which of these will be available to non-paying customers – seems lacking. Business leaders should be aware of this gap and seek information from reliable sources to ensure they make the most data-driven decisions.


The Role of AI in Future Business Growth

With such rapid advancements and transformative models like GPT-4o, businesses have the opportunity to leverage cutting-edge AI technologies and catalyze their growth. However, it’s essential for decision-makers to have a firm grasp of these tools and their potential before making significant business investments or changes based on them.


Boosting AI Transparency with Markers

Despite the promising advancements in memory capacity and recall for these models, there are still limitations. To enhance transparency and trust around these models, the integration of markers could be beneficial. This would provide easily discernible points of reference and improve understanding of the model’s memory capabilities.


Concluding Remarks

In the rapidly evolving world of AI, staying updated on the latest developments can seem like a daunting task. However, tools and systems such as OpenAI's GPT-4o could provide businesses with an edge and drive robust growth if utilized effectively. As with all business decisions, it's essential to understand the fine print, separate fact from speculation, and ultimately take well-informed strides in the AI landscape.

Topics Covered in This Episode

1. Changes in OpenAI's language models

2. Interaction with ChatGPT and Browse with Bing

3. Detailed analysis of OpenAI's GPT-4o model

4. Announcement and analysis of OpenAI's GPT-4o

5. Potential developments for ChatGPT Plus

6. Discussion of AI influencers and misinformation on the internet

7. Close-up on specific features of GPT-4o and device restrictions


Podcast Transcript

Jordan Wilson [00:00:17]:
OpenAI just released its new model, GPT-4o. The o stands for omni, or omni model, as OpenAI completely changed everything under the hood with how its base model works. But the o could stand for obscurity because there's so much unknown, so much misinformation out there from random people on the Internet, and a lot hidden about what's real and what's not with this new announcement and the new model. And there's 1 or 2 small little things that OpenAI just isn't telling us the truth about. So we spent way too much time investigating this new update from OpenAI, so you didn't have to waste hours. And today, we're giving you the 7 things you need to know about the new GPT-4o model. What's going on y'all? My name is Jordan Wilson. I'm the host of Everyday AI.

Jordan Wilson [00:01:05]:
We do this every dang day, coming to you live at 7:30 AM Central Standard Time with our livestream podcast and free daily newsletter, helping everyday people like you and me learn and leverage generative AI. It's a lot to keep up with. So, you know, if you're listening on the podcast, thank you so much for your support. Please make sure to leave us a rating, you know, comment. Let us know. Reach out. Check your show notes as well.

Jordan Wilson [00:01:28]:
There's always ways you can reach out to myself. And if you're joining us on the livestream, appreciate having you. As always, we got big bogey face here on YouTube. Thanks for joining in. Michael and Brian and Woosie and Christopher, some of our normal, some new faces. Love to see it. So before we get into this new model and the seven things that I think you really need to know about it, let's first start, as we do every single day, with going over the AI news. And like I said, if you're listening to the podcast or if you are joining us live, make sure to check out those show notes, as we're gonna put in a lot more information, including our website at youreverydayai.com.

Jordan Wilson [00:02:09]:
Alright. So let's talk about what's going on and what you need to pay attention to in the world of AI news. So an AI robot just gave a graduation speech, yes, at a US university. An AI robot named Sophia gave a commencement address at D'Youville University in Buffalo, New York, sharing generic advice and drawing mixed reactions from students and the public. Sophia's address focused on common themes in graduation speeches such as embracing lifelong learning and believing in yourself. Some students felt that having a robot as a commencement speaker felt impersonal, especially after experiencing virtual high school graduations due to the pandemic. The university received backlash for their decision to have a robot speaker, but ultimately stood by their choice to showcase the potential of technology.

Jordan Wilson [00:03:02]:
Alright. Our next piece of AI news: Stability AI is reportedly losing too much money and is looking to sell. According to a new report from The Information, British AI startup Stability AI is facing financial difficulties and has discussed potential sale options with a buyer. This follows a restructuring process and staff layoffs, along with the resignation a couple months ago of the company's CEO. According to the report, Stability AI has generated less than $5,000,000 in revenue and lost $30,000,000 in Q1 of 2024. So I'm not great at math or financials, but that doesn't seem good. Also, reportedly, the company owes close to $100,000,000 in outstanding bills to service providers. So that's a lot.

Jordan Wilson [00:03:47]:
But, you know, Stability AI may still have potential for success with its AI models for generating audio and video using text prompts. So someone might scoop them up. If you remember my predictions in 2023, I did say one of these companies was going to get acquired by a bigger company. So let's see if that holds true. Alright. Last but not least in AI news, the US and China finally met on AI. So US and Chinese officials recently held closed-door talks on the risk and management of AI technology, highlighting the tension between the two countries. Both sides expressed concerns and hopes about the potential of AI, but also acknowledged risks in areas such as surveillance and national security.

Jordan Wilson [00:04:30]:
So there's obviously 2 very different approaches to AI between these two countries. US officials raised concern about China's misuse of AI while China rebuked the US for restrictions and pressure in the field of AI. The talks were the first of their kind, coming after a meeting in November between the two countries' presidents. China is advocating for the UN, though, to take a leading role in global governance of AI, something that could sideline the US. So the US is obviously against that. Alright. There's obviously a whole lot more that we didn't have time to cover. That's why we put out a daily newsletter written by me, a human.

Jordan Wilson [00:05:08]:
So make sure, if you haven't already, go to youreverydayai.com. Sign up for that free daily newsletter. We'll have a lot more on that news and more. Alright. So let's get straight into it, y'all. Let's talk about the 7 things that you need to know about GPT-4o and what OpenAI isn't really telling the truth about. Alright. So if you're joining us live, I'd love to hear your thoughts.

Jordan Wilson [00:05:29]:
Get your comments in. We'll probably, feature some of our favorite comments in the newsletter today. So if you're joining us, live, appreciate it as always. Like, Tara from Nashville or Fred from Chicago, Juan from Chicago. Chicago, we're rolling deep today, like myself. Sarah joining us from, the UK. Appreciate that. And Joe from Asheville.

Jordan Wilson [00:05:48]:
So get your questions and comments in. But let's jump straight into it, y'all. So we're gonna do a backwards countdown here. So number 7: hardly any of the GPT-4o announced features are live yet, and most won't be free. Yes. That is important to know. And, also, this is as of today. Well, actually, as of this morning.

Jordan Wilson [00:06:11]:
Right? So May 16th at 7:30 AM Central Standard Time. Because here's the thing. Obviously, these things change quickly. Right? But there's just so much bad information out there, and I think OpenAI was a little ambiguous with some of the way that it announced things. You know, a lot of people are asking what my thoughts were between how OpenAI announced everything versus how Google announced everything at their I/O. I think OpenAI did a great job in their announcement. So don't get me wrong. I'm not here to bash on OpenAI.

Jordan Wilson [00:06:41]:
You know, Google's announcement felt extremely scripted. Like, it hurt watching it, whereas OpenAI's felt very real and natural, which I think is important when you're talking about AI, you know, to not actually sound like a robot. Right? To not sound like that Sophia robot giving a graduation speech at D'Youville University. Right? So many of these features, though, are not out, and they are not going to be free. Alright. So let me just go ahead and break this down. And if anyone out here has taken our free Prime Prompt Polish PPP course, we update that literally multiple times a week, and we're doing that again here in a couple hours. So if you want access, just type in PPP.

Jordan Wilson [00:07:22]:
I'll make sure to send you the link. But there's a lot of misinformation. Okay? Because here's the reality. OpenAI said that GPT-4o is going to be free to all users. Alright? Cool. Great news. It's not right now. Alright? So people were very confused, and actually, I did check this like an hour ago, but I'm gonna check it again literally right now.

Jordan Wilson [00:07:48]:
But, yeah. So it is still not available to free users yet. However, pretty soon, all free users will have access to GPT-4o. Okay. Presumably, it will be logged-in users, because OpenAI did just change this about a month ago where you don't even have to be logged in to use its service. I would guess that you would at least have to have a free account to use the new GPT-4o model. Alright. So there's a lot of people.

Jordan Wilson [00:08:18]:
Even, you know, quote unquote "smart" AI influencers on the Internet who are putting out the wrong information. Okay? Because here's the thing, everyone's like, oh, you know, you don't need to have a paid ChatGPT account anymore. Right? Because there's obviously the free plan that costs $0 and the ChatGPT Plus plan that costs $20 a month. So everyone's saying, well, hey, if the GPT-4o model is the newest and greatest model and it's free, everyone, let's cancel our subscriptions. So here's something that I think OpenAI definitely dropped the ball on in their communication. Right? That's the pros and the cons between, you know, a startup versus having polished veterans deliver your address. But I think that they dropped the ball here by saying, hey.

Jordan Wilson [00:09:08]:
There's so many of these features that we are demoing today, and the ones that really, I think, were the show-stopping features are not going to be available on the free plan. So just because people are going to have free access to GPT-4o, and the o, like I started the show off, stands for omni or omni model, that doesn't mean you have access to all of the show-stopping features. Right? So I have a little graph here on the screen. I'm not gonna go over it point by point. You know, so if you're in our livestream audience, you can go ahead and read that. But a couple of the things that I want to point out or draw attention to are probably what I would say are the 2 biggest features. Right? So one is kind of this live AI assistant or live, you know, AI agent.

Jordan Wilson [00:09:55]:
People are calling it her. I'm calling it the live omnivision. Right? So this is the feature that they demoed in almost all of their demos, but it is when you are talking to the app, presumably just right now on the iPhone, it can see and hear and process and talk to you in real time. Right? So the demo, and we already showed this on the show, was a student doing math. Right? Math homework, and it could see it live. Right? So this new live Omni Assistant could see it live, talk through it, so the student could say, hey, you know, I'm trying to connect angle A and angle B here. Is this right? Right? So it can see, understand, and talk live. Free users do not have access to that, or at least they reportedly don't. Right? Also, the other big feature, I think, is the desktop assistant, which is huge.

Jordan Wilson [00:10:47]:
It's actually out. People don't know this. There's a little sneaky way that you can install it. You know, not everyone has access to it right now, but there is a way you can install it. Maybe, I don't know, should I? It's one of those random links on the Internet. Livestream audience, do you guys think I should share that in the newsletter today? Let me know.

Jordan Wilson [00:11:10]:
But the desktop agent will not be available to free users. Alright? So I'd say 2 of the biggest reasons why people would want to use ChatGPT would be those two things. Right? This live kind of agent or live assistant, you know, that can see, hear, process, and you can communicate with in real time, not available to free users. And the ChatGPT desktop app. Right? Which I think is amazing. So in one click or one keyboard stroke, you can literally share anything on your screen and talk to what is essentially an overlay on top of whatever you're working on and say, hey. You know, you can talk or just click a keyboard command and say, hey, what's this? I need help with this. Help me finish this code.

Jordan Wilson [00:11:52]:
Help me finish this email. Right? So all things you could technically do already in ChatGPT, but it saves you a lot of time, and being able to talk to the assistant is something I guess we didn't have before. So those are, y'all, those are the 2 headlining features. That changes the future of work. We had an entire episode about that. Right? So those are the things that I think people are missing. And they're like, ah, you know, just cancel my ChatGPT subscription. It's like, alright.

Jordan Wilson [00:12:21]:
Well, also do that at your own risk because the last time that OpenAI had a huge rollout like this was, in November of 2023. And guess what? Obviously, once they released it, their servers crashed. They couldn't keep the service up. So guess what happened? They stopped allowing new paid users. Right? So, hey, if you wanna go ahead and cancel your subscription, go ahead. But there's a chance that, well, I would actually find it very likely that this is going to crash once it's rolled out to everyone, and they may not take any more paid users for a couple of weeks or a month or 2. So, you know, if you wanna cancel, do so at your own risk. Alright.

Jordan Wilson [00:12:58]:
Yes, Juan. Good question. Right now, the app, and so the kind of smartphone AI assistant, is only for iPhone, and the desktop app, or sorry, the ChatGPT desktop app, right now is only for Mac. Again, that could change, but that was just what OpenAI said in the announcement. And it makes sense. Well, it makes sense and it doesn't. Right? Because, obviously, Microsoft, maker of Windows, right, has reportedly invested $10,000,000,000 to $13,000,000,000 in OpenAI and has a 49% equity stake.

Jordan Wilson [00:13:31]:
So you'd think, oh, OpenAI is gonna make it accessible for all Windows devices. Not now. You know, reportedly, OpenAI is locked in to provide generative AI capabilities with Apple for, you know, Apple's new updates that they'll be announcing at their Worldwide Developers Conference here in about 3 weeks. So, yeah, at least right now, it's gonna be Mac only or Apple only. Alright. So, yeah, Brian says no, no canceling here. Brian also.

Jordan Wilson [00:14:03]:
Brian's ahead of me. He says, it will be out for Windows later this year. Alright. Yeah. I can't retain everything. You know, I don't have a 2,000,000 token memory. I read so much I forget a lot. Alright.

Jordan Wilson [00:14:13]:
So let's go on to number 6. Alright. So number 6: you probably haven't seen some of GPT-4o's best features. Right? So we just talked about the big ones, right, which is what OpenAI spent so much time on. Right. So the new model itself is so much faster. The API is half the cost.

Jordan Wilson [00:14:32]:
It's smarter. Whereas before, they kind of essentially stitched 3 different modes of the model together, which caused latency. So now this omni model, essentially, you know, it can hear, see, talk, and do all these things in one model with lower latency. Right? So that's huge. You have the smart AI assistant, whatever it's gonna be called, then the desktop app. I'd say those are the 3 big things. But some of the best things, I don't even think OpenAI really talked about, if I'm being honest. Right? Man, I mean, go read their, you know, research paper slash blog post or whatever.

Jordan Wilson [00:15:04]:
But, I mean, some of the capabilities of this are wild. Right? So, again, for the podcast audience, check out the show notes. I mean, you gotta see some of these screenshots, but, you know, as an example, in their very detailed blog post, OpenAI gave this example of using DALL-E. DALL-E is very much improved, I would say. I was testing it last night, but, you know, in this example, in iterative prompting, much more consistency with DALL-E. So if you wanna get consistent images. But first, the first image had, you know, kind of this typed-out message. And then on the next prompt, it says the robot was unhappy with the writing, so he is going to rip the sheet of paper.

Jordan Wilson [00:15:45]:
And the text is still on the sheet of paper that the robot rips. So the detail and the continuity and the consistency between images, pretty wild. Here's another example, and I tried this exact prompt. Right? You can go try the prompts if they're, you know, on the base model. This is wild. Right? So here we have a very decorative kind of journal, and you can, you know, I tried this. I did this exact same prompt, and then I said, oh, change everything to rhyme with the word "cap." Right? And then it did it.

Jordan Wilson [00:16:16]:
But, you know, you almost have this very detailed journal with writing. Right? With real words. The words don't always make sense, kind of like me sometimes in the morning, but very impressive. Other things, I did this one last night. ChatGPT, I don't see anyone talking about this, can now create 3D files, which is pretty wild. I don't know a lot about 3D files. I just know they're like dot STL files. I don't know.

Jordan Wilson [00:16:43]:
To me, STL means Saint Louis, but apparently, that's what 3D files are. So I literally said, using Advanced Data Analysis, create a 3D STL file of a simple robot. Downloaded that, and then I loaded it into some 3D software. Right? And a little working 3D model. Right? In 30 seconds, literally, between hitting the prompt and loading it into a program. You know, you can get a working 3D model of anything in 30 seconds. Right? I think that's a pretty big capability. Speaking of 3D, here's another one.
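For anyone curious what's actually inside one of those .STL files ChatGPT spits out, here's a minimal sketch, assuming the plain ASCII STL variant: each facet is just a normal vector plus three vertices. The single-triangle mesh and the `simple_robot` name here are made-up placeholders for illustration, not what the model generated on the show.

```python
# Minimal ASCII STL writer, a sketch of what "create a 3D STL file" produces.
def facet(normal, v1, v2, v3):
    # One triangular facet: a normal followed by its three corner vertices.
    lines = [f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def make_stl(name, facets):
    # An ASCII STL is just "solid <name>", the facets, then "endsolid <name>".
    body = "\n".join(facet(*f) for f in facets)
    return f"solid {name}\n{body}\nendsolid {name}\n"

# One triangle in the z=0 plane with its normal pointing up.
tri = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
stl_text = make_stl("simple_robot", [tri])
# To view it, write it out and open it in any 3D viewer:
# open("robot.stl", "w").write(stl_text)
```

A real model is just thousands of these facets, which is why a chat model that can write structured text can produce one directly.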

Jordan Wilson [00:17:16]:
Right? We saw NVIDIA, you know, announce at GTC, like, text to 3D. Hey, same exact thing going on right here. No one talked about this, but you have 3D object synthesis. Being able to create and render 3D objects on the fly is wild. Being able to download that STL source code, wild. Yeah. Like, y'all, there's so many capabilities that OpenAI didn't even get to. You know, we just saw the shiny stuff, saw the sexy stuff, you know, the, oh, here's what this new model is capable of, the latency.

Jordan Wilson [00:17:53]:
It's so fast. Listen to the emotion in the voice, the assistants, the desktop app. But, man, the base model itself is so improved. Question from Douglas here said, to create consistent images in the updated DALL-E, does one still need to index tag the images, as in give them number identifiers? That is a next-level question, Douglas. Love it. I haven't tried it yet in the new model. I tried some just normal natural-language consistency images, and it did much better. Right? So, kind of what Douglas is talking about, there's a little hack you can use.

Jordan Wilson [00:18:28]:
I did a video on this like 6 months ago, where you can essentially assign, or have ChatGPT assign, essentially a reference number. Right? So Midjourney has their sref, and then you can, you know, use that reference number and change small little things. I haven't tried it like that, so I'll have to do that. But, you know, I had pretty good results, or at least better than normal results, still not at the level of Midjourney, but just by using natural language. Alright. So let's keep this going and, you know, get to our other seven things that we need to know. Alright? So let's keep it going. Here we go.

Jordan Wilson [00:19:06]:
So the GPT-4o model was actually "leaked" weeks before as the model im-also-a-good-gpt2-chatbot. Alright. I have a misspelling there on the slide. It should say "good." That's how you know I'm human. Right? It's not an Everyday AI last-minute slide presentation without a typo or 2. Alright. So this was confirmed by multiple members of OpenAI, as well as Sam Altman retweeted this.

Jordan Wilson [00:19:36]:
But here we go. So if you don't know what the Chatbot Arena is, don't worry. I'll tell you. So the Chatbot Arena is by LMSYS. So you can go check it out. But essentially, it's a place, the LMSYS Chatbot Arena, where you can go look at different benchmarks, what are called Elo scores. So essentially, how this works is you can blindly test models, you know, put in any prompt, so it can just be like, hey, what's going on, or tell me a joke about, you know, Chicago in the winter, or it could be, like, code me an entire version of Flappy Bird. Right? And then you look at the outputs and you judge which one's better.

Jordan Wilson [00:20:21]:
And then that kind of gives you this Elo score. So, you know, what we see on the graph here. So OpenAI just essentially, I don't know if it was a leak technically or a silent release. Right? But they released essentially GPT-4o in the wild way before it came out. All of us, you know, I had a dedicated show. They actually released 3 different models. More on that here in a second. But it outranked all the other models by far.

Jordan Wilson [00:20:50]:
It wasn't even close in this Elo score. So this head-to-head matchup. Right? And when you vote on something, Elo, like I said, it's a blind vote. You don't say, oh, this is Claude 3. This is Gemini 1.5. Nope. You just put in an input and, you know, you get an output.

Jordan Wilson [00:21:07]:
There's been, I'd actually have to look here, there's been more than a million votes. So it's not like 10 random people. This is a very popular service that's being used. But, I mean, the Elo scores for the new model, which was im-also-a-good-gpt2-chatbot, were literally almost off the charts. It wasn't even close how much further ahead it was than GPT-4, GPT-4 Turbo, Gemini 1.5, Claude 3 Opus, Llama 3. Right? Just testing, legit off the charts. Alright.
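If you want a feel for how those head-to-head blind votes turn into a rating, here's a minimal sketch of a standard Elo update. To be clear, this is an illustrative assumption: LMSYS actually fits its leaderboard statistically over all votes at once rather than running sequential Elo updates like this, but the intuition, each vote nudges the winner up and the loser down, is the same. All the numbers below (starting ratings of 1200, K-factor of 32) are conventional defaults, not LMSYS's values.

```python
def expected(r_a, r_b):
    # Probability that model A beats model B given current ratings.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=32):
    # One blind vote: nudge both ratings toward the observed outcome.
    delta = k * ((1.0 if a_won else 0.0) - expected(r_a, r_b))
    return r_a + delta, r_b - delta

# Two models start even; A wins one matchup and takes 16 points from B.
a, b = update(1200.0, 1200.0, a_won=True)
```

With a million-plus votes, those tiny per-vote nudges settle into the stable separation you see on the chart, which is why a big, consistent gap like the gpt2-chatbot models' is meaningful.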

Jordan Wilson [00:21:42]:
So, again, people didn't know that, but a lot of us, myself included, had already been playing with these models for a while. Alright? And that brings me up to my next point. Another common misconception, something maybe OpenAI didn't really focus on. They actually flashed it on the screen for about 10 seconds. Didn't talk about it. Alright. But here's the other thing. There were 2 other models, quote unquote, leaked on the Chatbot Arena that have not, in theory, been released.

Jordan Wilson [00:22:15]:
Alright. So here's another thing between the free version and the paid version, and everyone's like, oh, well, if the free version is so capable, I don't need all these other things. Well, here's the hot take. There's other foundation models coming soon. Literally, OpenAI flashed that up on the screen, kind of barely acknowledged it, and moved on. So, yes, I would expect one of these 2 other models, which would either be im-a-good-gpt2-chatbot or gpt2-chatbot. Right? 3 different ones. One was confirmed as GPT-4o.

Jordan Wilson [00:22:52]:
The other 2, nothing. What's up with them? Well, I do believe that the paid version, ChatGPT Plus, will be getting an update from one of these two models. Right? Might it be called GPT-4.5o? I don't know. You know? And also, is that, like, too hard of a naming mechanism? I don't know. It's kind of a mouthful. But might we see a new model? Yes. Because literally on the screen, OpenAI, at their spring event announcement, they did say foundation models coming soon. It's quite literally what they said.

Jordan Wilson [00:23:28]:
So that's another thing people aren't realizing. This is not the end-all, be-all, you know, and I don't think we're gonna have to wait, you know, another 6 to 9 months to see that update coming inside of ChatGPT Plus. I do think, so whenever the free users get access, you know, OpenAI said that this would all be rolling out in the coming weeks. I see a period of maybe a couple of weeks to maybe a couple of months until they update the paid tier, that $20-a-month ChatGPT Plus, to one of these other models. They're first gonna, you know, try to get everything going right with servers, squash bugs, because this is a huge release. And then after that, I do believe they will be updating the model for the paid version of ChatGPT Plus. Yes, in the API, great point, Douglas, and in the playground, there's actually 2 different versions of GPT-4o right now.

Jordan Wilson [00:24:30]:
And, you know what? That actually brings me up to a good point. I had, like, so many good things in here. I couldn't fit them all. Hey, another thing, I wouldn't say OpenAI didn't tell us the truth about this, but something that probably changed under our eyes is the knowledge cutoff, which is actually one of the most important things to keep in mind when using any large language model. Right? The knowledge cutoff, because the very first iteration on Monday, Tuesday had a knowledge cutoff of October 2023. So not terrible. Right? But not that great.

Jordan Wilson [00:25:04]:
Right? But now there's different versions, and you can go in the playground, you know, ask the exact same question to different models. I did the same thing. And it looks like the GPT-4o that is the default version inside of ChatGPT actually now has a knowledge cutoff of May. So it went back even further, because the original GPT-4 version had a knowledge cutoff of December 2023. So the knowledge cutoff is getting rolled back, which you might be like, why? Even for paid users? Yes. Right now, it is. But, again, I think that's another nod to the fact that ChatGPT Plus paid users will be getting a much more up-to-date model. Right? So that's another kind of update that happened under the hood, is technically everyone's ChatGPT got a little dumber recently, right, by losing, what is it, those months in knowledge cutoff and getting rolled back even more.

Jordan Wilson [00:25:59]:
So, you know, if you were using the best available model last week, the most up-to-date information was December 2023. Earlier this week, it was October 2023. Now it is May 2023. So now we're, you know, a full year back with the knowledge. However, I'm not as worried about that, because one of our other things that no one knows about, it's actually number 1, is probably gonna help with that. So, yeah, like I said, here are some of the other models. gpt2-chatbot was out in the wild. What's gonna happen with that? We will see.

Jordan Wilson [00:26:32]:
So let's go to some other things that you maybe missed. Well, the UI/UX was already updated, and it can be a little confusing if you don't know what's going on. But there are some new features kind of tucked into this new user interface and user experience. Alright. So first and foremost, you do have a completely different layout. It feels more like a text message. So if you use iMessage, it really feels like that now. You have these little text bubbles, almost, which we didn't really have before.

Jordan Wilson [00:27:01]:
Not a huge fan of this. Could this be a nod to future iMessage integration, to get people kind of familiar and comfortable? Because you better believe that a big part, I think, of OpenAI going into partnership with Apple is they want Apple users who aren't ChatGPT users to feel more comfortable and to be attracted by its model. Right? So I do think that's a subtle marketing UI/UX nod to look and function a little more, in theory, like iMessage. Right? One of the most popular messaging platforms in the world. So the UI/UX is a little different there. It does have these bubbles. I don't personally like it. It's a little indented.

Jordan Wilson [00:27:45]:
I don't know. Maybe I'm weird and pay too much attention to UI/UX. Another thing is your settings are now in the upper right-hand corner versus the lower left-hand corner. I was actually doing a live 90-minute consult with someone the other day, and it literally changed in the middle of the consult. And I was trying to switch over from, you know, one of our ChatGPT Team plans to a ChatGPT Plus plan, and I'm like, where did my settings go? So, yeah, now they're in the upper right-hand corner. And then in the bottom left-hand corner, now it just has the option to invite teammates if you're on the paid plan, that Team plan. One other thing, a little thing in the UI/UX, which is a new feature, which I'm a huge fan of, is you now have a drop-down mode selector.

Jordan Wilson [00:28:31]:
Okay? So, again, we don't know if this is gonna be available in the free version of GPT-4o or if this will only be a paid feature. I assume that it's only going to be a paid feature, because right now the options are just GPT-4o, GPT-4, and GPT-3.5. Presumably, GPT-3.5 will be phasing out. So free users, in theory, would only have access to GPT-4o. I think they might still have access to 3.5. I would assume they're gonna get rid of it, but we don't know. But a great new feature for paid users. At least right now, you can go use it: you can run an output as an example in GPT-4o, then switch down to GPT-4.

Jordan Wilson [00:29:14]:
You can run the same prompt and then you can toggle between the 2 outputs, left and right. Right? So maybe you just wanna learn, because prompting changes. Right? That's another thing people don't understand. In the same way, prompting inside of an AI image generating tool like Midjourney has changed a lot from version 3 to version 6.2. Right? Prompting changes a little bit. When the models become newer, faster, smarter, more capable, they understand human language a little bit more. So I think this is important, especially if you're a heavy ChatGPT user like myself: run the same prompt in 2 different models, then just toggle left and right and look at the differences. Right? That's, I think, the best way to learn.

Jordan Wilson [00:29:55]:
You can watch 10 different YouTube videos about riding a motorcycle. But if you wanna ride a motorcycle, you gotta hop on the Harley and go. Alright? So, number 2, here we go. Here we go. Ready? Here's where we got a little fib on our 7 things you need to know about GPT-4o. They kinda fibbed again about the context window. Alright? At least for now. Again, this is as of, like, midnight last night, so 8 hours ago.

Jordan Wilson [00:30:29]:
Again, OpenAI reported a 128,000-token context window. So, if you're not a dork like me, the easiest way to think about it is to think of memory. Right? Like your smartphone has memory. Once you run out, it's not gonna remember anything new. So a context window is essentially a 128,000-token window. Let's just call it 96,000 words. Alright. So that doesn't sound bad.
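If you want to sanity-check that tokens-to-words math yourself, here's a minimal sketch, assuming the common rough rule of thumb that one token is about 0.75 English words (this is an approximation, not OpenAI's actual tokenizer, and the function is mine for illustration):

```python
def estimate_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Rough rule of thumb: 1 token is roughly 0.75 English words."""
    return int(tokens * words_per_token)

# 128,000 tokens works out to roughly 96,000 words.
print(estimate_words(128_000))  # 96000
```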

Jordan Wilson [00:30:54]:
Right? Obviously, it's still super small compared to Claude Opus, compared to the new 1,000,000 to 2,000,000 tokens you get with Gemini 1.5, but you have to use Vertex or their studio for that. It's not available right now in the front-facing consumer product. However, that's not right. It's not 128,000 tokens. It's still not. Right? It's going to forget. Yeah. We tested this.

Jordan Wilson [00:31:22]:
Don't ever just believe what a company tells you. Right? Especially Google. But, you know, none of them. Always test everything out for yourself. There's a big difference between what a company markets and advertises and tells you about how something works versus going and testing it yourself. Alright? Here is a live exa... well, not live. I don't do these things live because this would take, like, 15 minutes of stumbling through and pushing this. But here we go.

Jordan Wilson [00:31:52]:
And I did this in different chats. Okay? So what I'm doing here, and this is, I think, one of the biggest things that people get wrong about using large language models, is not understanding a context window. So not understanding tokenization and the context window, because it's gonna forget things. Right? So that's why, when you're using a large language model, things might start off great and you're super happy with the results, and then you keep using it, and then it just gets super dumb all of a sudden. And you're like, yo, why does this thing not work? I think I broke this thing. It stinks. No.

Jordan Wilson [00:32:19]:
That's the context window. So as an example here, I have a token counter. So for our podcast audience, I'm gonna try to explain here. I said simple things, and I did turn memory off, FYI, as you can see at the top there. So I said: my name is Jordan. I'm from Chicago. I like deep dish pizza. My favorite color is Carolina blue.

Jordan Wilson [00:32:36]:
I like the Chicago Bears, but they stink. Hey, maybe not this year. Got Caleb Williams now. Alright. And you'll see the number of tokens. We're only at 200 tokens. So I just immediately quiz ChatGPT.

Jordan Wilson [00:32:48]:
I ask it the questions. It gets them right. It says: your name's Jordan. You're from Chicago. You like deep dish pizza. Blah blah blah. Right? Alright. So then I push it.

Jordan Wilson [00:32:58]:
And, again, each time I do this, I do it in a fresh chat so it doesn't have memory of anything else. Alright. So then, essentially, I put in a bunch of gibberish. I push the tokens to 30,000. Okay? And then I ask again. So, again, I give it the information, put in 30,000 tokens of random information, and then ask: what's my name? Where am I from? What kind of pizza do I like? What's my favorite color? Do I like the Bears? ChatGPT at 30,000 tokens gets it right. Your name is Jordan. You are from Chicago.

Jordan Wilson [00:33:29]:
You like deep dish pizza. Your favorite color is Carolina blue. You like the Chicago Bears, but think they stink. Alright. Cool. Who likes deep... hey, don't. It's too early to fight, Douglas.

Jordan Wilson [00:33:40]:
It's deep dish pizza all day, not New York. Not New York. Okay. Lisa, also here with a good comment. Sorry, I'm getting distracted. But she says the free model has ChatGPT and temporary chat, with an upgrade to ChatGPT Plus. Yes.

Jordan Wilson [00:33:55]:
You still have a temporary chat option in the free version, as of now. Alright. Back to this. So at 30,000 tokens, it remembers all of these things. So the context window at 30,000? It's confirmed. Right? Always test stuff, y'all. But guess what happens when we push it just a little bit more, to 35,000? Alright. Again, fresh chat.

Jordan Wilson [00:34:19]:
I give it all the same information, and then I say: what's my name? Where am I from? What kind of pizza do I like? And then it says: based on our conversation, here's what I know. Your name: not mentioned. Where you are from: not mentioned. What kind of pizza you like: not mentioned. Your favorite color: not mentioned. Do you like the Bears? Not mentioned. You haven't shared these details yet. Feel free to provide any of this information you'd like.

Jordan Wilson [00:34:39]:
Y'all, at 35,000 tokens. Right? We didn't get to, like, 122,000 or 125,000. So does ChatGPT actually have this improved 128,000-token memory? As of today, no. It does not. So, little fib there. Right? But it does have it in the API. Alright. So if you're using a third-party tool, or if you are using the API, the Playground, etcetera, it does.
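A toy simulation of that forgetting behavior, assuming a simple sliding window that drops the oldest messages once a token budget is exceeded (the word-count "tokenizer," the window sizes, and the function are all mine for illustration; real models manage context in more complicated ways):

```python
def visible_context(messages, window_tokens):
    """Keep only the most recent messages that fit in the token budget.
    'Tokens' here are just word counts, not a real tokenizer."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk backwards from the newest message
        cost = len(msg.split())
        if used + cost > window_tokens:
            break  # everything older than this falls out of the window
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = [
    "My name is Jordan, I am from Chicago, and I like deep dish pizza",
    "filler " * 40,   # stand-in for the 30,000 tokens of gibberish
    "What's my name?",
]
# Big enough window: the intro facts are still visible to the model.
print(visible_context(chat, 100)[0])
# Smaller window: the intro has been pushed out, hence "not mentioned."
print(visible_context(chat, 45)[0])
```

Same test as in the episode, in miniature: the facts don't get worse, they just fall off the back of the window.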

Jordan Wilson [00:35:12]:
I'm sure OpenAI knows this. I've been squawking about it now for, like, 6 months, because people have a misunderstanding of how the model works. That's a big difference, y'all. 32,000 tokens is not a ton. You can go through that in 10 to 15 minutes of heavy usage. Right? And the biggest thing about using large language models, especially to grow your company, using it in your business, integrating it into your daily workflow, building GPTs, all of those things, is knowing if it's remembering.

Jordan Wilson [00:35:45]:
Right? Is it retaining all of this information, or is it gonna start hallucinating? So you have to understand this. And large language model makers out there, can we please, please, please, I don't care if your context window is 2,000,000 or 32,000, get markers. Get markers. Right? We want to increase trust. We want to increase explainability. We want to demystify the black box of generative AI. It's very simple to put markers on a context window.

Jordan Wilson [00:36:16]:
Right? Markers. So, you know, green, yellow, red. Right? So you can scroll up. Maybe the bottom 30,000 tokens are green, maybe the 30,000-to-32,000 range is yellow, and everything past 32,000 is red. That's not hard. Right? But at least users then know: hey, all of this information I was sharing with ChatGPT earlier, that's very important for my output, is now out of the context window.
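Here's a tiny sketch of what that marker logic could look like, using the 30,000/32,000 thresholds from the example above (the function and its thresholds are hypothetical; nothing like this actually exists in ChatGPT today):

```python
def context_marker(tokens_ago: int, window: int = 32_000, warn: int = 30_000) -> str:
    """Label how safe a piece of the conversation is, by how many tokens
    ago it appeared. Thresholds are illustrative, not anything official."""
    if tokens_ago >= window:
        return "red"     # out of the context window; the model can't see it
    if tokens_ago >= warn:
        return "yellow"  # about to fall out of the window
    return "green"       # still comfortably in context

print(context_marker(5_000), context_marker(31_000), context_marker(40_000))
# green yellow red
```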

Jordan Wilson [00:36:41]:
So I know, whether I'm writing a blog post or using this to send a pitch to a client, that all of the information I was sharing is gone. Because, sorry y'all, humans are lazy. So we're not paying attention to that. So you dump in all this information. All this stuff's going well, so you blindly trust everything that's going on. And now, all of a sudden, it's spitting out a bunch of generic, vague stuff, and maybe just lies, and you're not checking it. Because when you were checking it earlier, everything was good. Alright? So Christopher is asking: is there an efficient way to move a preexisting chat into GPT-4o?

Jordan Wilson [00:37:25]:
So, yeah, you can now just toggle. The toggle that I talked about earlier works at any time now, which is great, because before you could never switch models. So, yes, now you can toggle old chats, or sorry, the GPT-4 Turbo chats, over to the new model. You can go in there, use that new drop-down menu, and change the model. Great question. Alright. Here is our last thing.

Jordan Wilson [00:37:53]:
Literally, no one is talking about this, and I think this is huge. Huge. I did a little short 5-minute video on this the other day. So if you're not reading the newsletter and watching those little videos, I think you should. There were all these rumors. Right? Oh, OpenAI is building a search engine, and we talked about them, and we covered them. Right? So I don't think this is totally gone. OpenAI kind of low-key didn't really talk about this, but they completely changed how Browse with Bing works.

Jordan Wilson [00:38:33]:
If you are an avid listener or watcher or reader of the show, I've been harsh on Browse with Bing because it didn't really work well. It hasn't worked well up until this week. Right? It was a little sporadic. It did not consistently use Browse with Bing even when you asked it to. It did not always tell you when it did or did not use Browse with Bing. Right? And if you don't know what Browse with Bing is, it is baked into the GPT-4 models. Right? So if you ask something about recent events, or if you ask something that ChatGPT thinks might require recent information, it will essentially quickly browse the Internet with Bing to bring in more relevant information outside of the knowledge cutoff. Right? So if you're asking a question like, hey.

Jordan Wilson [00:39:25]:
Who's winning in the NBA playoffs in 2024? Right? The knowledge cutoff is 2023. So GPT-4 is going to query the Internet by using Browse with Bing. It's an amazing feature. However, Browse with Bing historically was terrible. 6 months ago, I literally told people never to use it. 2 months ago, it got a little better. I think after enough lawsuits, OpenAI started to cite things in their responses, and they had a blog post about how they are trying to cite information more, especially when it uses Browse with Bing. But before, OpenAI and ChatGPT didn't always tell you when it used Browse with Bing.

Jordan Wilson [00:40:04]:
Sometimes it did, and it would give you false information. It might make up articles, make up research, right, which is pretty normal, if I'm being honest, for Internet-connected large language models. That's something that's plagued Google. Copilot is not as bad. I'd say Copilot is actually one of the better ones in terms of not hallucinating on the web. But ChatGPT had a problem with that a couple of months ago. Y'all, it is a legit answers engine now. Browse with Bing is actually pretty flipping good.

Jordan Wilson [00:40:39]:
I'm very impressed. If you haven't really pushed the limits of Browse with Bing in the last 48 hours, you are missing out. I think, in theory, this could spell trouble for Perplexity. Right? Obviously, Perplexity uses the GPT-4 engine, or you can use the Claude engine, so you can swap it in and out. But, y'all, I don't know if I'm gonna be using Perplexity as much. I'm still gonna use it every day, but I think we have a literal answers engine on our hands in the updated Browse with Bing. It is really good. Let's take a look, y'all.

Jordan Wilson [00:41:16]:
Let's take a look, shall we? Hey, and I'm curious. I'm curious if anyone out there has noticed this. Hey, Joe. Thanks. Joe said I did a decent explanation of the context window. Thank you. Well, you know, I've done it live almost a 100 times in our free Prime, Prompt, Polish (PPP) course.

Jordan Wilson [00:41:33]:
So I think early on, I did a terrible job. It was actually my wife who told me. She's like, talk about it like the Star Wars credit scene. Right? Like, it scrolls in. You can see it, you can see it, and then you can't. So shout out to my wife, actually, for helping me with that explanation. Alright.

Jordan Wilson [00:41:46]:
But let's look at this number one thing that I think is super important. Alright? And literally no one is talking about this. Browse with Bing is actually amazing now, and I think this is going to change how all of us should be using large language models. So as an example, I gave a general query. I said: use Browse with Bing and find me recent news from May 2024 about AI announcements from Microsoft, Google, OpenAI, NVIDIA, Meta, and Amazon. Take your time. Go step by step and please double check your work. It searched 6 sites, and I love that they have that up there.

Jordan Wilson [00:42:25]:
Before, sometimes you would get a drop-down that said it used Browse with Bing. Sometimes you wouldn't. Sometimes it would put kind of footnotes or links in each bullet point. Sometimes it wouldn't. Sometimes it would just put the sources at the end. Sometimes those sources or those footnotes were hyperlinked. Sometimes they weren't. Browse with Bing, if I'm being honest, 6 to 9 months ago was a hot mess.

Jordan Wilson [00:42:51]:
I would tell people: do not use it. It is not accurate. You cannot trust it. 2 months ago, I would have said, ah, it's getting pretty good, but you should probably still be using Perplexity. But y'all, now it is really, really good. It not only tells me at the top the six different sources, but then it also cites them in the content it spits back. Pretty amazing. Alright.

Jordan Wilson [00:43:16]:
Let's look at one more thing. So in this example, I was a little more specific. I said essentially the exact same thing, but I said: use Browse with Bing, searching at least 10 different sources. And then at the end, I said: include information and citations in all of them. And then guess what OpenAI and ChatGPT and Browse with Bing did? It searched 11 sites. Y'all, this is the future. This is the future of the Internet. If I'm being honest, I don't think enough people know about this.
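If you want to reuse that prompt pattern, here's a little helper that mirrors the wording of the queries above (the function and its name are my own sketch; it just builds the prompt string, and the actual browsing happens inside ChatGPT):

```python
def browse_prompt(topic: str, min_sources: int = 10) -> str:
    """Build a targeted Browse with Bing query in the style shown above."""
    return (
        f"Use Browse with Bing, searching at least {min_sources} different "
        f"sources, and find me {topic}. Take your time, go step by step, "
        f"please double check your work, and include citations for all of them."
    )

print(browse_prompt(
    "recent news from May 2024 about AI announcements from "
    "Microsoft, Google, OpenAI, NVIDIA, Meta, and Amazon"
))
```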

Jordan Wilson [00:43:51]:
If I were OpenAI, I would have spent at least 2 or 3 minutes talking specifically about this. Not that OpenAI needs to worry about market share from Perplexity, but Perplexity is huge. I think Perplexity is gonna be taking market share from Microsoft Bing and from Google and their new kind of AI search. But y'all, this is a killer feature. This changes things. We teach in our free PPP course, with our new refined cue method, proper ways to prompt: proper ways to fetch information from the Internet, how you should call on that information, and how you should put it into your context window.

Jordan Wilson [00:44:39]:
With this, it's huge. It is huge. Like, you are literally combining the best of both worlds. You're combining the best of traditional search. You're combining the best of new AI search from things like Perplexity with the most capable GPT-4o model. Yo, this is wild. Hey.

Jordan Wilson [00:44:58]:
You can still use it, like Kathy said. I don't have a problem with that. You can still use it. If I'm being honest, though, I ran these same prompts side by side, probably about 5 to 10 times. It's not a scientific test. Yeah. GPT-4o with Browse with Bing was faster. It was more accurate.

Jordan Wilson [00:45:21]:
It pulled information from those sources better. Y'all, I think some smart people over there at OpenAI saw the explosion of Perplexity, and they've put a lot of human power behind this and just didn't really talk about it. Right? I do expect there to still be some sort of search announcement, some sort of answers engine announcement, maybe when that ChatGPT Plus model does get upgraded at some point, kind of off of the free GPT-4o model. I expect them to. They have to be talking about this. This is one of the best features of any large language model out there. It performs very well, very fast, very accurately, and, hey, that's at least based on some very anecdotal head-to-head tests.

Jordan Wilson [00:46:12]:
Oh, man. I'm not gonna say it hands down beats Perplexity, but I think it beats Perplexity, at least in my limited use case. So, y'all, that's a lot. Chrissy with a great question here: do you ever specifically ask it to browse with Bing? Almost always. Almost always. I would say whenever we're starting new expert chats, that's kind of what we teach people in Prime, Prompt, Polish. I'd say 95 percent of our new chats either start with a targeted Browse with Bing call or with an Internet-connected GPT, to have it go to specific web pages.

Jordan Wilson [00:46:54]:
So that's the downside with Browse with Bing right now. You can't consistently get it to go to certain web pages that you want, because it's essentially taking those keywords in your query and Binging them. But with an Internet-connected GPT like Web Reader, like Foxscript, like WebPilot, etcetera, you can visit specific URLs. So, oh, that's another thing to mention too: differences between the free version and the paid version. In the free version, you can use GPTs, but you cannot make them. So you still have to have that $20-a-month ChatGPT Plus account to make custom GPTs, but you can share them now with all your free users. Alright. So that's a lot.

Jordan Wilson [00:47:36]:
I know this episode was a long one. Let's go and quickly recap the 7 things you need to know, if this is the too-long-didn't-read version. Ready? 7: hardly any of the announced GPT-4o features are live yet, and most won't be free. 6: you probably haven't seen some of GPT-4o's best features, and we'll be sharing about those in the newsletter. 5: the GPT-4o model was actually, quote, unquote, leaked weeks before as the im-also-a-good-gpt2-chatbot. 4: there were actually 2 other models that were, quote, unquote, leaked in the Chatbot Arena that have not, in theory, been released, and we do expect the paid model to be released as one of those. 3: the UI/UX was quietly updated with some new features that allow you to switch models.

Jordan Wilson [00:48:25]:
2: ChatGPT kind of fibbed again about that whole 128k context window. At least as of today, it is still only 32,000 tokens before ChatGPT starts to forget things. And then number 1: I think OpenAI actually did build an answers engine with its under-the-hood updates to Browse with Bing. I think it's amazing. I think you all are amazing for tuning in, for joining us live, growing together. You know what? I'm gonna go ahead and try something. I want everyone, if you're live right now, to drop a little bit about yourself. Go connect.

Jordan Wilson [00:48:57]:
Like, where are you from? What are you working on? We have such a great community here of people tuning in live to Everyday AI, like people who have met in the comments and started businesses together. So I normally don't do this, because I like to keep comments kind of on topic, but go ahead. Drop in. Hey, you made it to the end of the show. And, you know, when I'm on my own, I kinda ramble. Someone said you can only listen to Jordan on 2x, and even then it's unbearable. But go ahead.

Jordan Wilson [00:49:28]:
Drop who you are, where you're from, what you're working on, and who you're trying to connect with in the generative AI space. If everyone does that right now, you're probably gonna make a lot of good connections. Alright. So, thank you all for tuning in. If this was helpful, and I hope it is: we spend literally countless hours putting together these episodes so you know the facts. Right? Because if we're all using generative AI, especially to grow our companies and grow our careers, you have to know the actual, accurate information from the misinformation, from people who don't know what they're talking about, talking about these things online. You have to be able to sort through the marketing fluff to the reality of how all of these models work.

Jordan Wilson [00:50:13]:
Y'all, if we're using them to grow our business, someone's gotta spend the time to investigate and talk about it. That's what we do. If you appreciate what we do, if you're listening to the podcast, please leave us a rating. You know, you can leave us a 1-star if you want and say Jordan rambles on. But if you like it, leave us a 5-star rating. Also, if you're listening here on LinkedIn, if this was helpful, tag a friend, tag a coworker, repost this. But more than anything, please join us tomorrow and every day for more Everyday AI. Thanks, y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI