Ep 39: How To Keep ChatGPT From Lying




Jordan [00:00:16]:

Is your ChatGPT lying to you? That's called a hallucination, and chances are, if you've used ChatGPT, you might have experienced this. So we're going to talk about how to keep ChatGPT from lying to you today, and a lot more, on Everyday AI. This is your daily livestream, podcast, and newsletter to help everyday people like you and me not just get the most out of AI, but actually understand what's going on and how we can use it in our everyday lives. So before we start talking about ChatGPT and why it's hallucinating, let's go over what's happening in the AI world today. A lot of interesting news, and I think some important news. As a reminder, if you are listening to us live, please leave me a comment. It's just me today. I can talk by myself all day, but I'd rather answer your questions.

Oracle stock rises after announcement of generative AI

Jordan [00:01:13]:

So if you are tuning in, please leave a comment, and let's get into the news. So Oracle: their stock is rising after announcing earlier this week that they're getting into the generative AI game. Generative AI, obviously, is text-to-something: large language models, text-to-image, text-to-video, all that stuff. Oracle announced earlier this week that they were teaming up with Cohere, and their stock has soared. Oracle is, of course, the world's largest database management company, and it's up 20% in the days since announcing this new generative AI offering. It's crazy to see with these big companies, because Oracle has also continued to lay off hundreds of employees, yet their stock is at an all-time high, specifically since announcing this generative AI piece. We already have some comments in today.

AI is coming to Starbucks

Jordan [00:02:11]:

This is great. I'm going to get to these in a second, but let's get back to the news. So Starbucks. This one's good. AI is coming to a Starbucks near you. They just announced this; it's fresh, just announced like an hour ago, not even. They are getting into the AI chatbot game. So coming to a drive-through near you: you may or may not be talking to a large language model soon.

Jordan [00:02:37]:

So I think it's important to remember, and I'm going to give my take on this: we've all probably experienced these automated drive-throughs years ago, right? They've been testing this. But that was before large language models got as powerful as they are now. I think we all probably had an experience at some point, especially if you live in a major city here in the US. Years ago, they started to roll these out, but I think they're actually going to start to get good. So Starbucks just announced that; it should be interesting to see how that rollout goes.

Are AI watermarks coming to social media campaigns?

Jordan [00:03:11]:

And that's fresh, piping-hot news. It's hotter than my cup of coffee here, which is no longer hot. The third story I want to talk about today is another recent announcement: are AI watermarks going to be coming to social media campaigns? Ogilvy is one of the largest social media agencies in the world, and they just recently announced that they're pushing all advertisers to disclose when they use generative AI in social media campaigns. You may not know this, but agencies are sometimes working with hundreds or thousands of different clients who are taking out ads, and a lot of people are using generative AI models in their ad campaigns, right? So Ogilvy is leading the push to tag content as AI-generated and to disclose it.

Jordan [00:04:06]:

Will it happen? I don't think so. Why? Money. As soon as you start disclosing that something is AI, those brands start to lose what they feel is genuine credibility. So yeah, brands are creating what I think are pseudo-relationships with their end customers, because they are using generative AI and not real people. So I don't think this will stick. I love that Ogilvy is doing this and that they're pushing for it; I do think it's needed. In the end, money talks louder than regulation.

Jordan [00:04:40]:

There's no way around that: money talks louder than regulation. So it'll be interesting to see. Now, let's talk hallucination. If you are tuning in live, please give me an example; drop me a comment. Has a hallucination happened to you before? So let's talk. But first, a couple of comments already coming in. Harvey is saying, great to be here.

Jordan [00:05:07]:

Harvey gave a fantastic talk yesterday at the AI Summit, a pretty large online summit. Harvey was amazing; make sure to check him out. I'll probably throw a comment in here. Ryan is driving and listening. Ryan, be safe while driving, but thanks for joining us. All right, so let's get into it.

What are ChatGPT Hallucinations?

Jordan [00:05:27]:

Let's talk hallucinations. All right, so what is a hallucination? Before we even jump into what a hallucination is, we have to understand what a large language model is. ChatGPT is a large language model. Without getting into the specifics, think of it like this: we've all probably used Google, and when you start to type in a Google search, you'll see autocomplete, and it might suggest five, six, seven different things. With a large language model, when you start to type something, or if you ask something, instead of five or six or seven, think five, six, seven billion different combinations of things. So that's what a large language model is. It is essentially a level of autocomplete that is hard to fathom, because of the size of these large language models and all of the parameters that they have.
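The autocomplete analogy above can be sketched in a few lines of code. This is a deliberately tiny toy, not how GPT actually works internally: a real model scores billions of parameters over tokens, not word-pair counts. But the shape of the idea, "given the words so far, pick a likely next word," is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def autocomplete(following, word):
    """Suggest the most common next word, like a (very tiny) language model."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams(
    "the model predicts the next word and the model predicts the next token"
)
print(autocomplete(model, "predicts"))  # suggests "the", its most frequent follower
```

Notice that the toy model answers confidently even from two sentences of "training data," and it has no notion of true or false, only of what usually comes next. That is also, at a hand-wavy level, why a large language model can hallucinate: plausible is not the same as correct.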

Jordan [00:06:38]:

Another thing to keep in mind before we jump into hallucinations is this: it's a model. That's right. Everything that you get from ChatGPT, or if you're using Google Bard, Microsoft Bing Chat, or Anthropic's Claude, which we don't talk about on the show enough, whatever text-generating model you're using, that's all it is: a model. So that's why you are probably getting hallucinations. All right? Before I give you five tips on how to avoid hallucinations, I'm going to go ahead and tell you this: I've been using the GPT technology since 2020. Most people got introduced to ChatGPT, or the GPT technology, when ChatGPT came out, and that was, I think, late November of 2022.

Jordan [00:07:39]:

So it's been out for about eight months now. I've been using it since 2020, because OpenAI made the GPT technology available to a lot of third-party companies. So our team has a lot of experience using this technology from before ChatGPT came out. I did personally get some hallucinations, or some lies, early on, but in the last five months or so, I haven't gotten any. Because first you have to do what's on the screen here: you have to understand that ChatGPT is a large language model, okay? And then you have to use it correctly. So I'm going to give you five tips, but before I do, we have a couple of comments coming in that I want to make sure to talk about.

Jordan [00:08:28]:

So Dr. Harvey Castro is saying the scary part is that the reference and the article didn't match at all. Yes, those are hallucinations. If you're not using ChatGPT correctly, it can spit out the craziest things. And obviously, Samsung has unfortunately been in the news. There's also the case where a lawyer in New York submitted false documents to a court, because ChatGPT hallucinated in what this lawyer used in a court filing, made up fake cases, and they submitted it. Right, that's a hallucination. Not good.

Jordan [00:09:11]:

Another comment here saying, I use ChatGPT to generate Python web app ideas; it's 60/40 at providing good code. Yeah, same thing. That's a great point: it's not just text or paragraphs or content. It can be code as well. It can be math. If you're not using ChatGPT correctly, you're going to get hallucinations. It's going to lie to you.

5 Ways to Avoid ChatGPT Hallucinations

Jordan [00:09:36]:

All right? So without further ado, let's talk about those five ways you can stop getting hallucinations.

1. Using the Wrong Version

Number one is you're just using the wrong version, okay? If you're listening to this on the podcast, I have some slides up on my screen, but they're not terribly in depth; it's more just so people have something else to look at aside from my face, for those joining us live. So all I'm talking about here, using the wrong version, is this: if you're using the free version of ChatGPT, you're going to get more hallucinations than if you're using the paid version. That's just because the paid version, GPT-4, is immensely more powerful. A funny story: I was helping a family member with a project, and this person was using the free version, GPT-3.5, and I said, how can you do this? I use ChatGPT for hours every day. Using the free version, not only are you subjecting yourself to more hallucinations, but the redundancies are out of control and the quality is low. So: you're using the wrong version.

Jordan [00:10:52]:

I'm not a paid spokesperson, but if you're using the free version of ChatGPT, you're doing this all wrong. It's $20 a month to get the paid version. Think of what your time is worth; you will save literally hours in the first 30 minutes that you use the pro version correctly. Okay? So that's number one: you're using the wrong version. As a reminder, if you are tuning in live, thank you for the comments so far, but keep them coming, because I want to help you guys through this. That's what Everyday AI is about: helping everyday people understand and use AI.

2. Using the Wrong Mode

Jordan [00:11:27]:

So if you do have a question about something that's on screen, or just about hallucinations, make sure to drop it. The second reason why you're getting hallucinations from ChatGPT is that you're using the wrong mode. Okay? That's very important, and we're going to get into it more here in the show. So what does that mean? In the paid version, there are different modes. GPT-4 has different modes, and if you start a new chat, even if you click GPT-4, you may not realize this, but it always defaults to Default. So in ChatGPT with GPT-4, there are actually four different modes.

Jordan [00:12:03]:

I only have three, so I know no one from OpenAI is probably listening to the show, but if so, can I please get Code Interpreter already? The four different modes are: Default, that's number one. Number two is Browse with Bing, which is essentially web access. The third one is Plugins, and I think the last time I checked, there are 400-plus third-party plugins that can extend the functionality of GPT-4. And then the one I don't have access to yet is Code Interpreter, which helps you interpret code. Pretty straightforward. Christopher with a comment.

Jordan [00:12:41]:

I agree. On the difference between the free version and the paid version: there is a huge difference between 3.5 and 4. Chris, that's a great point. I'm going to go ahead and shout out Chris as well, because we still have to announce the giveaway winner, which we'll do next week, but Chris actually won the ChatGPT giveaway for the most referrals. So shout out Chris, you did an amazing job referring people to Everyday AI. We still are going to be giving away one more one-year license to a random person, so we're giving away two; Chris won one.

Jordan [00:13:16]:

So thanks for the comment, Chris. Pierre Paul says he only has two modes; he's missing out on the plugins and the betas. Yeah. Oh, man. Yeah, you've got to get the plugins. All right, so let's keep going.

3. Bad at Prompting

Jordan [00:13:29]:

So we said number one is you're using the wrong version, and number two is you're using the wrong mode within that version. Number three: you're bad at prompting. I'm sorry, you're bad at prompting. Rob, shout out, thanks for joining us. I'm going to leave a comment on this later.

Jordan [00:13:47]:

If you aren't following Rob on TikTok, you need to be following Rob on TikTok; he's giving out great advice to job seekers. So problem number three is you're bad at prompting, okay? And here's the problem: social media is the problem with prompting, period. I have a fake tweet on the screen, from every tweet ever, saying, "20 powerful ChatGPT prompts to save you 20 hours a week." If you go on any social media, especially Twitter and LinkedIn, you're probably seeing these every day. This is a problem with AI in general, and specifically ChatGPT, because you see these influencers and these strong voices in AI saying, this is all you need. And so what happens? These are what's referred to as super prompts; a prompt might be 20 to 50 words long, and they're like, oh, it's going to give you great results.

Jordan [00:14:45]:

It's not; it's bad. In most cases, those super prompts can still hallucinate. You can use this great prompt that's supposed to save you 20 hours a week, and it can still lie to you even if you use it exactly how they're saying. I'm not an advocate, if you couldn't tell, of using these super prompts that you see online, because they're not good, period. Another question coming in: is this the case if you use the API, which is not free? Sorry, Magel, I think I caught your question a little too late. If you explain it a little more, I can probably answer it. So the number three reason that your GPT is hallucinating is you're bad at prompting.

Jordan [00:15:29]:

We actually do a free course on this. It's called Prime, Prompt, Polish. Maybe leave me a comment if you're listening: if you've done the course, just drop me a comment and let me know, or let the people know, whether it's any good. I think it's good. Like I said, it's a free course. We've been using the GPT technology since 2020, so I feel that we're pretty good at getting the most out of GPT. And like I said, I haven't personally had a hallucination from GPT in months.

4. Not Specific Enough

Jordan [00:15:57]:

So let's go to number four. Looks like our little subhead got messed up there, but that's fine. You're not specific enough. And this goes back to prompting, because what you probably can't see there behind me is talking about priming. There's a whole step that people aren't doing in ChatGPT, which is the priming phase. Normally, even one of these super prompts says something like, oh, act as an expert copywriter, and here is some background, and then here is a specific prompt. Even if you do it that way, you're going to get terrible results and probably some hallucinations, period. So you need to be priming, okay? Before you even give a prompt, you need to tell ChatGPT its role, with specific examples.

Jordan [00:16:46]:

You need to give background on who you are, on your audience, on your USP, on their pain points, and then you also need to ask questions of ChatGPT before you prompt it. Say: what other information do you need to know? I know that sounds like a lot of work, but hallucinations are a problem, right? Anyone? Or just crazy redundancies, or getting information back from ChatGPT that is not useful. That's happening a lot. So the fourth reason is you are not specific enough. You need to go through multiple steps before you even prompt ChatGPT for information; you need to prime. All right?
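The priming steps described above can be sketched as code. This is a hypothetical helper, not an official ChatGPT feature; it assumes the role/content message format that chat-style model APIs generally use, and the function name and wording are my own illustration of the flow: role first, then background, then examples, then asking the model what else it needs before you ever give the real prompt.

```python
def build_priming_messages(role_description, background, examples):
    """Assemble a priming sequence to send before the real prompt.

    Mirrors the steps above: assign a role, give background (who you
    are, your audience, their pain points), show concrete examples, and
    finally ask the model what else it needs to know.
    """
    messages = [
        {"role": "system", "content": role_description},
        {"role": "user", "content": "Background before we start:\n" + background},
    ]
    for example in examples:
        messages.append(
            {"role": "user", "content": "Example of what I want:\n" + example}
        )
    # The key priming step: invite the model to ask for missing context.
    messages.append({
        "role": "user",
        "content": "Before I give you the task, what other information do you need from me?",
    })
    return messages

messages = build_priming_messages(
    role_description="You are an expert copywriter for small B2B SaaS companies.",
    background="I run a newsletter helping non-technical people learn AI.",
    examples=["Short, plain-language intros with one concrete takeaway."],
)
for m in messages:
    print(m["role"], "-", m["content"].splitlines()[0])
```

The point of the structure is that the prompt with your actual ask only comes after all of these messages, once the model has had a chance to request whatever context it's missing.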

5. Not Feeding ChatGPT Data

So let's go on to the last reason why you are probably still getting hallucinations from ChatGPT: you're not feeding ChatGPT data. You're probably saying, Jordan, if I'm using the paid version of ChatGPT and I have browsing mode activated, or I have plugins, ChatGPT should just know everything, right? Wrong.

Jordan [00:18:00]:

Because here's the thing: unless you are very specific in your prompt, which most people aren't, there are still billions of web pages on the internet, right? Even if ChatGPT has web browsing mode, say you have a small company, or you work at a Fortune 500 company and you're working on something about an annual report; that exists on the internet if you're filing it, right? But instead of just assuming that ChatGPT knows where to look, sometimes you need to feed ChatGPT data.

So in that priming process I talked about, giving specific information before you even prompt, if I have web browsing enabled, I will often say when giving background: more background can be found here for this piece of information I'm giving ChatGPT, and I will leave a link. Then ChatGPT, even if it's a long web page, will go and quickly become an expert in that web page. Another great thing, which we're going to feature in the newsletter today, is uploading PDFs to ChatGPT when you're priming it. So you have to be giving ChatGPT data, in a lot of cases, to avoid hallucinations. Those are the five reasons that you're not getting accurate information from ChatGPT, the five ways that you can stop getting hallucinations. All right? I'm going to go through them all very quickly.
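The feed-it-data idea above can be sketched as a prompt builder. This is an illustrative pattern, not a ChatGPT feature: you paste the source material (text you extracted from a web page or a PDF, e.g. with a library like pypdf, which is an assumption here, not something the episode prescribes) directly into the prompt, so the model answers from your data instead of guessing. The function name and the truncation budget are made up for the example; real token limits vary by model.

```python
def feed_data_prompt(document_text, question, max_chars=8000):
    """Stuff source material directly into the prompt instead of assuming
    the model 'just knows' it. Very long documents are truncated to fit a
    rough context budget."""
    context = document_text[:max_chars]
    return (
        "Use ONLY the reference material below to answer. "
        "If the answer is not in it, say you don't know.\n\n"
        "--- REFERENCE MATERIAL ---\n"
        f"{context}\n"
        "--- END REFERENCE MATERIAL ---\n\n"
        f"Question: {question}"
    )

report = "Q2 revenue was $4.2M, up 12% year over year. Churn fell to 3%."
prompt = feed_data_prompt(report, "What was Q2 revenue?")
print(prompt)
```

The "say you don't know" instruction matters: combined with supplying the actual data, it gives the model both the facts and permission to not invent an answer, which is the whole point of this fifth tip.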

Audience Questions about ChatGPT

Jordan [00:19:38]:

Now, to recap, let's go. Number one, you're using the wrong version. Number two, you're using the wrong mode. Number three, you're bad at prompting. Number four, you're not specific enough. And number five, you're not feeding ChatGPT data. All right, so thank you for joining me. I do have a couple of questions I'm going to go ahead and take here before we wrap up the show.

Jordan [00:20:01]:

But as a reminder, if you're still listening right now, whether live or on the podcast: this live stream is in a LinkedIn thread. So if you're watching now, just type in PPP. Go ahead; I'm going to send you information about our free course so you can stop getting hallucinations from ChatGPT. It's a free course. We don't sell anything at the end. We just like helping people.

Jordan [00:20:23]:

So let me get to these questions. Let's see: Magil is asking about the difference between 3.5 and 4 for the API, and whether the 3.5 API is bad in the free version. It's a highly technical question. I'll say this: in my experience, anyway, very few everyday people have access to the GPT-4 version of the API. As an example, I applied for API access as an individual, I think within three days of when it opened up, and I still don't have GPT-4. This is very important, and a little technical: even if you have the pro version of ChatGPT, with GPT-4, you probably don't have the GPT-4 API version.

Jordan [00:21:17]:

Usually they're only rolling those out to people that are using the API and have hundreds or thousands of paying customers. So most people don't have that version. You can go to the OpenAI Playground and click on the options, and most people won't have API version 4 there unless they have a service they're providing with paying clients. Oh, cool, we've got some people dropping in the PPP. Awesome. So Parvs.

Jordan [00:21:48]:

Shout out Parves. Follow Parvs on LinkedIn too; he's putting out great things about AI. So thank you. I'll send all the information to all of you as well. A couple of other questions. Okay, Chris. Chris is actually shouting out the GPT course.

Jordan [00:22:05]:

Thank you. He says, I highly recommend the PPP course, because you will definitely learn a lot of new information that's actually helpful. Thank you, Chris. Harvey with another comment, the million-dollar question: did ChatGPT help you come up with the top five? No. I know, crazy, right? ChatGPT is not great at coming up with its own problems or poking holes in its own armor. I've tried it before; it's not good.

Jordan [00:22:36]:

But yes, Harvey, I appreciate that question. ChatGPT did not help in the making of this podcast. Even this C-minus graphic design that you see here, that was all me. Monica asks: if you upload a PDF to ChatGPT, will that information become public? Oh my gosh, great question. The answer is essentially yes, and that's a fantastic question.

Jordan [00:23:04]:

And this is where I referenced the Samsung incident earlier; I should go into a little more depth. An employee at Samsung, I believe, essentially uploaded some proprietary code into ChatGPT to help work with the code. Not good. So yeah, a lot of companies are saying be careful when using ChatGPT or other large language models like Bard, because, yes, when you are uploading information, whether it's text-based or PDFs, in theory OpenAI does then have access to use that information to train its models. So will it become public? That's a tricky question. It's not like, if I upload a PDF or load information about my company, any of you will be able to see it. But as far as I know, and if someone in here knows more, please correct me in the comments, if you do upload any text or any PDFs, yes, OpenAI has access to that, and they will use it to help train and refine their models. So that information doesn't become public for anyone else per se, but it is being used by OpenAI to better train their models.

Jordan [00:24:22]:

So, Monica, fantastic question. It's something we talk about. And on Monica's point, there is a way to keep all of that private. There is a way in ChatGPT to do this; I'm going to see if I can multitask here and also share my screen, so give me a second. Yes, there is a way to keep everything private. Let me go ahead and share my screen here, give me a second, and I'll show you how to do this.

Jordan [00:24:57]:

If you go into your settings in ChatGPT and go to Data Controls, you can find the chat history and training setting and turn it off. At that point, your information is not being sent back to OpenAI to improve the model. However, the huge downside is that your history is not saved, right? So as soon as you close out, everything you worked on is gone. So yes, you can technically keep all of that information from being sent to OpenAI to improve their future models, but you are then losing the ability to save it. Monica, I hope that answered the question. Let me see. Yes, like what Harvey said there, thank you.

Jordan [00:25:43]:

Harvey also asked the question, or rather answered it: he said you can change your settings so that your information is not used for training. But yes, the obvious downside is that your chat is then not saved. So if you're really putting a lot of work into your individual chats and building what's called expert chats, which is what we teach people to do, you can't retain them. And after you go through the process and create a great chat that's spitting out great outputs, if you've toggled that setting off, you can't go back. So that's the downside. As a reminder, I see some PPPs in the comments; I'm going to send you guys some information.

Jordan [00:26:17]:

Harvey, thank you for the compliment saying great talk, I appreciate it. So yes, if this was helpful, if you are still getting hallucinations, type in PPP and I will send you this. But as a reminder, that was all for hallucinations, and I don't want to keep this going for three hours. So go to youreverydayai.com. We have a daily newsletter. It's free. Guys, there are plenty of daily newsletters out there.

Jordan [00:26:46]:

Maybe I'm biased, but I think we're doing a fantastic job of helping the everyday person. That's the thing: a lot of the resources out there for AI news, just to keep up with AI, are for very technical people, and it's hard to keep up. So for the everyday person, we have a free newsletter at youreverydayai.com. Sign up for that newsletter. We're going to be doing a little tutorial today on what Monica was asking about: uploading a PDF and being able to use that information in your ChatGPT chat, for fewer hallucinations.

Jordan [00:27:16]:

So thank you all for joining, and we'll see you on the next show of Everyday AI. Thank you.
