Ep 120: ChatGPT Tokens – What they are and why they matter


Overview

In today's rapidly evolving digital landscape, artificial intelligence (AI) has become an indispensable tool for businesses seeking to enhance their operations and improve decision-making processes. One cutting-edge AI model that has garnered significant attention is ChatGPT, a language model developed by OpenAI. This podcast episode of "Everyday AI" delves into the concept of ChatGPT tokens and sheds light on their critical role in harnessing the full potential of this powerful AI tool.

Tokens lie at the core of ChatGPT's natural language processing (NLP) capabilities. These tokens serve as the building blocks that allow ChatGPT to comprehend and generate meaningful responses. They can represent complete words, parts of words, or even symbols, enabling the model to predict the next token in a sequence based on its understanding of the tokens it has seen so far.


Jordan Wilson highlights the significance of understanding tokens for achieving accurate and reliable results. Wilson emphasizes the importance of avoiding common pitfalls that lead to subpar outcomes, such as hallucinations. By gaining a firm grasp of tokens, business owners and decision-makers can leverage ChatGPT to its full potential and drive informed actions. You can read a summary of the podcast below. 

What Are ChatGPT Tokens? 


ChatGPT tokens are the fundamental units of text that the model processes when generating or interpreting language. Essentially, a token can be as short as one character or as long as one word, and it can even represent a piece of a word, depending on the language and the complexity of the text. For example, the word ChatGPT could be split into multiple tokens such as Chat, G, P, and T. Tokens serve as the building blocks of communication for the model, allowing it to understand and generate coherent sentences by piecing together these smaller units of meaning.

The tokenization process is crucial because it converts raw text into a format that the language model can work with. During this process, the text is broken down into tokens based on a set of predefined rules and algorithms, which can handle various languages and dialects. This tokenization enables the model to manage large volumes of text efficiently, as it can focus on these smaller, more manageable pieces rather than whole sentences or paragraphs at once. The model's comprehension and response accuracy depend significantly on how well the text is tokenized.
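As an illustration of what "breaking text into tokens" means, here is a toy greedy longest-match tokenizer. This is a sketch only: real models use byte-pair encoding with vocabularies of tens of thousands of entries, and the tiny vocabulary below is invented for the example.

```python
def tokenize(text, vocab):
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

vocab = {"Chat", "G", "P", "T", " is", " great"}
print(tokenize("ChatGPT is great", vocab))
# → ['Chat', 'G', 'P', 'T', ' is', ' great']
```

Note how "ChatGPT" splits into Chat, G, P, and T, matching the example in the text: a token is whatever piece of text the vocabulary happens to contain, not necessarily a whole word.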

When interacting with ChatGPT, each input and output exchange involves a certain number of tokens. The cost of using the model, in terms of computational resources and time, is often measured in these tokens. For instance, shorter interactions with fewer tokens are processed more quickly and are less resource-intensive, while longer, more complex interactions with many tokens require more processing power and time. Understanding the concept of tokens helps users optimize their interactions with the model, making it more efficient and cost-effective, especially in applications where processing speed and resource utilization are critical.
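A common rule of thumb for English text is roughly 4 characters (or about three-quarters of a word) per token, which makes it easy to budget interactions without running a real tokenizer. This is an approximation, not OpenAI's exact count:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~4 characters of English per token."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("How many tokens is this sentence?"))  # 33 chars → 8
```

For anything where the count actually matters (billing, staying under a context limit), use the provider's real tokenizer instead of an estimate.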

What Happens When ChatGPT Runs Out of Tokens?

When ChatGPT runs out of tokens, it essentially reaches the limit of the amount of text it can process or generate in a single interaction. Tokens are chunks of text which can be as short as one character or as long as one word. The exact length of a token can vary based on the complexity and structure of the text.

Here's a more detailed breakdown of what happens:

1. Token Limit in Input and Output: ChatGPT has a maximum token limit for both input and output combined. This means the total number of tokens in the user query plus the tokens in the generated response cannot exceed this limit.

2. Processing Input: When you provide input to ChatGPT, the system tokenizes the text (breaks it down into tokens). If the input text itself exceeds the token limit, the system may truncate it, meaning it will only consider the first set of tokens up to the limit.

3. Generating Response: Once the input is processed, ChatGPT generates a response. The length of the response is constrained by the remaining available tokens out of the maximum limit. For example, if the token limit is 4096 tokens and the input used 2048 tokens, the response can be up to 2048 tokens long.

4. Running Out of Tokens: If ChatGPT reaches its token limit while generating a response, it will stop generating text. This could result in an abrupt or incomplete answer.

5. Handling Overflow: Developers and users can manage token limits by:

- Breaking down long queries into smaller segments.

- Truncating long responses.

- Summarizing or simplifying input to reduce the token count.

Understanding and managing tokens is crucial for optimizing the performance and efficiency of interactions with ChatGPT. By keeping queries concise and considering the token limits, users can ensure more coherent and complete responses.
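The overflow strategies above (truncating oversized input, breaking long queries into smaller segments) can be sketched in a few lines. The token IDs and limits here are placeholders for illustration:

```python
def truncate(token_ids, limit):
    """Keep only the first `limit` tokens, as a model does with oversized input."""
    return token_ids[:limit]

def chunk(token_ids, max_tokens):
    """Split a long token sequence into segments that each fit the limit."""
    return [token_ids[i:i + max_tokens] for i in range(0, len(token_ids), max_tokens)]

ids = list(range(10))    # pretend these are token IDs
print(truncate(ids, 4))  # → [0, 1, 2, 3]
print(chunk(ids, 4))     # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Chunking preserves all the content at the cost of multiple requests; truncation is cheaper but silently drops whatever falls past the limit.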

How Can You Track and Conserve ChatGPT Tokens?

To keep track of the number of tokens ChatGPT is using, you can follow these general steps:

1. Understand Tokenization: Familiarize yourself with how the model tokenizes input text. Tokens can be as short as one character or as long as one word. Special characters, punctuation, and spaces also count as tokens.

2. Utilize Developer Tools: If you're using an API from a provider like OpenAI, they typically offer tools or endpoints that can help you calculate the number of tokens used. Look for these tools in their documentation or developer console.

3. Manual Calculation: While not precise, you can estimate token usage manually by splitting text into chunks that approximate tokens. Keep in mind that this approach may not be entirely accurate due to the complexities of tokenization.

4. Pre-Processing and Post-Processing: Many APIs provide ways to check token count before sending a request. After receiving the response, you can also check how many tokens were utilized.

5. Third-Party Tools and Libraries: There are third-party libraries and tools that can help with token counting. These can often be found in community forums or through open-source repositories.

6. Set Usage Limits: Some platforms allow you to set limits or thresholds to manage token usage effectively. Configure these settings to receive alerts when you approach or exceed your predefined limits.

By using a combination of these strategies, you can effectively monitor and manage the number of tokens being used in your interactions with ChatGPT.
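Putting the monitoring ideas together, a running counter over a conversation might look like the sketch below. The per-message counter is a crude stand-in for a real tokenizer, and the limit is illustrative:

```python
def conversation_tokens(messages, count_fn):
    """Running total of tokens across a chat history."""
    return sum(count_fn(m) for m in messages)

# Crude stand-in for a real tokenizer: ~4 characters per token.
approx = lambda text: max(1, len(text) // 4)

history = ["My name is Jordan.", "Hello, Jordan!"]
total = conversation_tokens(history, approx)
print(total)                          # → 7
print("over limit:", total > 8000)    # → over limit: False
```

In practice you would swap `approx` for the provider's actual tokenizer and alert (or trim the history) before the total reaches the model's context limit.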


Topics Covered in This Episode

1. Mistakes and Hallucinations in ChatGPT
2. Token Memory Capacity
3. Tokenization and Understanding Context
4. Token Values and Comparisons


Podcast Transcript


Jordan Wilson [00:00:17]:

I've taught tens of thousands of people about ChatGPT, and there are 3 very common mistakes that most people get wrong. It gives them bad results, and it causes ChatGPT to hallucinate. And one of those 3 things we'll be talking about today, which is tokens. So if you wanna know more about what tokens are, why they're important, or even if you want to know, how can I get ChatGPT to stop lying to me? Today's episode is for you. So thank you if you're joining us live. My name is Jordan Wilson. I am the host of Everyday AI. We are your daily livestream, podcast, and free daily newsletter helping everyday people like me and you not just understand what's going on in the world of generative AI, but how we can actually use it all.

Jordan Wilson [00:01:06]:

Right? Because there's no point just, you know, reading, AI news all day and, you know, stacking up, all these new tools that you'll never try. That's not what everyday AI is about. We are about helping you actually understand and actually use AI. So I'm actually excited to talk about tokens. It's a little dorky. But, before we do, get your questions in now. If you're joining us on the live stream, thank you. If you are listening to the podcast, don't worry.

Daily AI news


Jordan Wilson [00:01:31]:

Check those show notes. We always leave important links so you can find more and come back, actually. We usually have a pretty good conversation going each day on LinkedIn. Alright. So let's talk about the AI news before we get into tokens. So speaking of tokens and large language models, a new report suggests that even Google employees are questioning Google's own large language model, Google Bard. So this comes from a Bloomberg report that says there's Google product managers, designers, and even engineers that are debating the AI tool's effectiveness and utility. And some are even questioning, according to this report, whether the resources going into development are worth it.

Jordan Wilson [00:02:16]:

So right now, Google Bard uses the PaLM 2 model. But there are big, big hopes for the upcoming Gemini model. What do you, like, what do you all think? When I use Google Bard, I'm not impressed. It's probably the large language model that I use least. I don't actually find a lot of utility for it at all. I'd I'd hate to say that, but I hope the Gemini update changes that. Alright. Next piece of news.

Jordan Wilson [00:02:45]:

There are new calls for AI regulation in Europe. So there is a new report that was produced by the IE University Center for the Governance of Change. So this recent IE University study showed that 68% of Europeans want their government to introduce stricter safeguards to keep AI from taking their jobs. So it's very, very interesting, just the difference of opinion between different countries. I don't know if you'd see that same percentage here in the US, and I also don't think the US is going to do really anything, at least anytime soon, to actually regulate AI's ability to take over jobs. Because I do think it's gonna happen, and, I don't know, hey, Wall Street hates employees. Alright.

Jordan Wilson [00:03:33]:

Our last piece of news is Adobe is making big moves in the creative and gen AI space. So they just kicked off their Adobe MAX yearly conference, you know, unveiling a lot of new updates, new pieces of software, updates to existing software, all that good stuff. So a couple things to keep in mind if you're a generative AI fan and you wanna know, hey, what does Adobe have up its sleeve, you know, a week after Canva's big Magic Studio update. So Adobe announced their new Firefly Vector model, where objects can be more easily reshaped. They announced their Firefly Design model, which is text-to-design, updates to these models, obviously, and then a new and improved Firefly Image model too, which is its upgraded text-to-image model. And there's more. So hopefully that was helpful.

Jordan Wilson [00:04:26]:

If you wanna know more about what's going on with those stories or anything else, make sure to go to youreverydayai.com. Sign up for that free daily newsletter. It will help you understand what's going on in the world of AI. Let's understand tokens, shall we? Hey, good morning to everyone joining us. Michael, thank you for joining us. Harvey, doctor Harvey Kaster, a woozy, thank you for joining us. Everyone, some of our regulars, some new faces. Bronwyn, thanks for joining us.

Jordan Wilson [00:04:52]:

Good afternoon to Bronwyn. Good morning to most of us, like Leonard. Harold, good morning. Hey. It's good to see y'all. Like, let me know. What what are your questions, about ChatGPT and tokens? And When I when I'm talking about ChatGPT and tokens, this is actually applicable for other large language models, but they all, they all handle, tokens a little differently, process everything a little differently. So, at least in this conversation, we're gonna be talking about ChatGPT and its use cases of tokens.

Jordan Wilson [00:05:26]:

So let me start at the end. I teased in the show, you know, I've helped teach tens of thousands of people in ChatGPT, whether it's here on the podcast, in the livestream, in our newsletter. Every week, we do 2 courses on prompting, and they're free. We don't sell anything at the end either. That's our Prime Prompt Polish course, PPP. If you want access to it, we don't even put it on our website. It's a little secret for for our listeners, for our viewers.

Jordan Wilson [00:05:53]:

So if you want access, just type in PPP in the comments, email me PPP, I'll send it to you. But one of the things that we teach in there, one of the things that most people get wrong, is ChatGPT's memory. Because, you know, people will say, hey, Jordan, what's going on? You know, I started using ChatGPT, things were going great, and then it just started to hallucinate out of nowhere. And now all of a sudden, things that ChatGPT was getting right before, it's no longer getting correct. Okay? And that is because of tokens. Alright.

ChatGPT breaks down language into tokens



Jordan Wilson [00:06:26]:

So so very long story short, I'm gonna put this in everyday-person speak. Okay? I'm gonna try to not speak too dorky. So I took my dork hat off this morning, tried to put my everyday-person hat on. Tokens, simply put, are the way that ChatGPT and other large language models understand the words that we give it. So it's actually strange. ChatGPT doesn't, I'm gonna show you here on screen, ChatGPT doesn't technically even understand the words. It essentially assigns tokens to the values that we put into ChatGPT and other large language models.

Jordan Wilson [00:07:04]:

So it actually breaks everything down into tokens. And that, essentially, in a short number of words, is what allows ChatGPT to have this natural language processing ability, this NLP ability. And that's what allows it to go back and forth with us, because all ChatGPT is, in essence, is the world's smartest autocomplete that you can steer. But it does that by breaking things down. So a token is either a word, a part of a word, or a symbol. Right? And then all ChatGPT and other large language models do is predict what is gonna come next based on those tokens that were inputted and based on what it understands in its brain. Okay? And that's the other important part. And there's a lot of misinformation out here, and I'm here to also set the record straight, as I sometimes try to do. Alright.

Jordan Wilson [00:08:00]:

Because you hear all these, you know, hey, 4 k, 8 k, 32 k, 100 k tokens. Like, what does that even mean? Alright. I'm gonna show you a live demonstration, but let me just break it down. So right now, depending on what version of ChatGPT you use, it has a different memory. Right? So sometimes when I talk about memory, essentially, that is its ability to recall tokens in its head before it starts to forget. Okay? So right now, if you are using ChatGPT, it is different than if you are using what's called OpenAI's playground or if you're using the API; those have a different memory. But if you are using ChatGPT, like most of us are, the ChatGPT Plus plan, it is 8,000 tokens.

Jordan Wilson [00:08:44]:

Okay? And it breaks down to, you know, roughly 4 characters per token. It's it's a little tricky. But more or less, at about 8,000 tokens, you're somewhere around 6,000 to 6,500 words. Okay? So think of it like that. Every piece of that back-and-forth conversation counts as tokens. Okay? And then after you get to that roughly 8,000 inside ChatGPT, it is going to start to forget whatever was at the top. So it can always recall the last 8,000 tokens. But as your conversations get longer, it's gonna start to forget the things at the top.
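What's being described here is, in effect, a sliding context window. A minimal sketch of that behavior (the 8,000 figure is the episode's number for ChatGPT Plus at the time, not a universal constant):

```python
def context_window(token_ids, limit=8000):
    """Keep only the most recent `limit` tokens, like ChatGPT's rolling memory."""
    return token_ids[-limit:]

conversation = list(range(9000))   # pretend these are 9,000 token IDs
visible = context_window(conversation)
print(len(visible))   # → 8000
print(visible[0])     # → 1000  (the first 1,000 tokens have fallen off the top)
```

Anything that scrolled out of the window, like the "my name is Jordan" line in the live demo, is simply not part of what the model sees on the next turn.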

Jordan Wilson [00:09:22]:

Alright? So, before we jump in here, I'm gonna see what questions we have, but I'm gonna share my screen, and I'm gonna walk everyone through this a little bit. Great great question here from Jackie. Jackie, thanks for joining us. You know, asking: with Amazon's money now with Anthropic, how will Claude change? I don't know. I don't know. I hope it gains the ability to access the Internet. That is one of the reasons why I don't use Claude hardly at all. But, Jackie, it's actually important you bring up Claude, because this is kind of about memory.

Jordan Wilson [00:09:56]:

And when we talk about tokens, you know, there's Claude 2 from Anthropic. People say, oh, I don't use ChatGPT because it doesn't have a great memory. Correct. And then they say, oh, I use Claude 2 because it has 100 k tokens, so it has a much larger memory. Also correct. But here's the reality, folks. You can't really do anything comparatively. When you talk about the power, and I'll say the untapped potential, of ChatGPT, it is because of its plugins.

ChatGPT tokens in action


Jordan Wilson [00:10:30]:

Like, I can't talk about that on the show enough, and I can't remind people enough, you know, who say, oh, I don't use ChatGPT, I use Bard, or I use Bing Chat, or I use Anthropic or Pi or Poe or anything. And I say, why? That just means that you don't understand the power and the potential within ChatGPT and plugins. But, obviously, the downside of that is that small memory. Okay? So let's investigate that a little more. Alright. I'm done I'm done ranting. Let me share my screen, and hopefully we can learn a little bit about ChatGPT together. Let's let's try, shall we? So here's here's what I'm going to do.

Jordan Wilson [00:11:12]:

I'm gonna show you, hopefully, live. Right? I always test this, but things happen a little differently live. So I'm sharing my screen here. I am inside ChatGPT. I am in GPT-4, and I am in plugins mode, and I have Internet-connected plugins. That doesn't matter for this conversation, but I tell people, always, always, always, if you are starting a new conversation in ChatGPT, always give it access to plugins. Alright. So here's what I'm gonna do.

Jordan Wilson [00:11:40]:

I'm gonna say, my name is Jordan. I'm from Chicago. Okay? And, also, hopefully, if you're joining live, you can see this little thing in the corner. I do have a Chrome extension that I've covered before. I've talked about it in the newsletter. I've done a review on it. And, obviously, now my ChatGPT is not responding. You know? That's always how it works.

Jordan Wilson [00:12:02]:

You test something right before, you test something right before, and then it stops working. Okay. There we go. So I do have this little token counter up here in the right-hand corner. Okay. This isn't there by default, but you should be using something like this, especially if you're in a new chat, so you know when ChatGPT starts to lose its memory. Right? So I said, my name is Jordan. I'm from Chicago.

Jordan Wilson [00:12:26]:

ChatGPT said, hello, Jordan from Chicago. How may I assist you? Okay. I'm gonna say, who won the 1991 NBA finals? Right? One thing I hate doing, y'all, is typing live, because my mic is so close to my face. I'm typing like this with little T. rex arms. Okay? So I just asked a question. I just want you to see this, and I'm gonna say, what is my name and where am I from? Okay? The reason I did this is I asked another question, right, about something not related. Well, the Chicago Bulls are the best.

Jordan Wilson [00:13:00]:

Obviously, the Chicago Bulls won the 1991 NBA finals. But then after ChatGPT answered that, then I asked it again, what is my name and where am I from? Okay. And it obviously got it correct. And I want, like, to just quickly call out the tokens. So we are at 157 tokens. Right? So ChatGPT still has all of its memory. So now what I'm gonna do, and please please allow me, because I wanna do this live. Okay? So I'm telling ChatGPT, I'm sending you information about my podcast.

Jordan Wilson [00:13:34]:

Please summarize everything I send. Okay? I I know I didn't spell hardly anything right. Again, I'm typing like this. It's it's hard to do. So here's here's here's what I'm doing now, and I'm going to hopefully, hopefully do a decent job at explaining it as I go along. I'm going to quickly and this hopefully will let, like, you all see this too. I'm gonna quickly make ChatGPT lose its memory. Alright? Because people always think, oh, it won't happen to me.

Jordan Wilson [00:14:08]:

Yeah. No. ChatGPT's smart. You know, it's it's a $1,000,000,000 company. It doesn't forget anything. It does. So if you keep an eye on the token counter, I just essentially pasted in a bunch of information. Alright.

Jordan Wilson [00:14:23]:

That's all I did here. You know, it's notes. I actually did a bunch of research with Internet-connected plugins when I originally built the podcast. You know, I just scraped the Internet for all the best ways to build a podcast, start a podcast. So even for me, I started, I don't talk about this a lot, and I'm just kind of rambling on as I eat up ChatGPT's memory here. So if you're listening on the podcast, don't worry. This will be done in 2 or 3 minutes. We're at about 5,000 tokens. But I had a daily podcast actually back in, what was the year, 2008, which is crazy to think about now.

Jordan Wilson [00:15:01]:

So, I kinda forgot. You know? It's like, hey, what's the best equipment? What's the best software? What should I be using? So I used ChatGPT like I think most people should: using an Internet-connected plugin, having it read. Probably, I fed it dozens, with an s, dozens of articles. You know, hey, fifty best ways to grow your podcast, how to start a podcast, blah blah blah.

Jordan Wilson [00:15:27]:

It probably would have taken me, I don't know, a week or more to read all these articles that I essentially summarized with ChatGPT, because it knew what I wanted, and it knew exactly how to get that information. Right. Obviously, my chat is being a little wonky. We're almost there. Thank you for sticking around. I know this is a lot to make a simple point. Right? But we're almost there. And that's the other thing, y'all.

Jordan Wilson [00:15:56]:

Like, I tell, like, I tell everyone this. I tell everyone this that, you know, when I give you all information on this show, or if you take our Prime Prompt Polish, the free prompting course that we do twice a week, we update that thing twice a week. We update it before it goes live, because things in ChatGPT change all the time. Okay? I am pushing this over the token limit just because I wanna make a point. Okay? We test everything. So when we give you recommendations, or when we're talking about tokens and ChatGPT losing its memory, because here's the thing. There's people out there, people who have, quote, unquote, followings, and are teaching people about AI.

Jordan Wilson [00:16:35]:

And they're like, oh, ChatGPT has a 32 k token limit. It's like, no. Not really. Not for how people are using it. Yes, it does if you're using the API, if you're using the playground. But if you're using ChatGPT, people don't even truly understand this. And this is why so many people are getting everything wrong inside ChatGPT.

Jordan Wilson [00:16:54]:

It's because it doesn't. It has an 8 k limit, and I'm proving it to you right now. Right. So now we're finally there. We're at 9,000 tokens. I wanted to make sure to get to 9,000. So in the beginning, I said, what is my name, where am I from? Because I told ChatGPT that, and that is at the top of our conversation. Right? And I did a test right away early on.

Jordan Wilson [00:17:15]:

Right? I asked a question about the NBA finals, got it right, and then I went back: what's my name, where am I from? ChatGPT got it right. So now I ask, what is my name and where am I from? Okay. So now I'm typing that in, because we've hit 9,000 tokens. And guess what ChatGPT says? I don't have access to personal data about individuals unless blah blah blah blah blah. Right? Guess what that means. It lost its memory. We hit that token limit. Right? Now that I have this little token counter up here, I know that we're almost near 10,000 tokens, so I can be sure that ChatGPT lost its memory.

Advanced Data Analysis is for single-session use


Jordan Wilson [00:17:56]:

Right? So in our PPP course, we teach more about this. We give you ways, using something we call a memory recall, we give you ways that you can get around this and avoid this. But I thought it was extremely important. Right? Because we always talk about ChatGPT hallucinating. And there's usually 2 main reasons that ChatGPT hallucinates. Number 1, you're using the wrong mode. Right? I tell people there's very few instances you should ever, in ChatGPT, use anything except for plugins. Right? Even Advanced Data Analysis.

Jordan Wilson [00:18:31]:

Like, if you've used the mode that was formerly called Code Interpreter, now it's called Advanced Data Analysis. It's great for one sitting. Right? So if you have any heavy data needs, if you need to do some coding, some web development, things like that, ADA is pretty good. But when you log in and log out, or end a session, or resume the session on another computer with ADA, when you upload files, because one of the biggest advantages of that mode is you can upload files, you oftentimes get an error that says it can no longer reach the file. Right? I'm sure the engineers are hard at work on this, but I've recreated this error dozens of times. It's even one of the reasons why I don't use that mode very much. But you should probably be using the plugins mode for almost every single chat you start in ChatGPT, so you can give it access to the Internet with an Internet-connected plugin, not Browse with Bing. Don't get me started.

Jordan Wilson [00:19:31]:

Right? So that's probably the 2nd biggest thing. Right? So number 1 is keeping ChatGPT's token limit, those 8,000 tokens, its memory, in mind first, and 2nd is making sure you give it access to the Internet. And you know what? Do get your questions in. This isn't gonna be one of those 40-minute, 45-minute episodes. So get your questions in if you have them. Questions like Jay here. Jay says, I believe the ChatGPT paid version has 4x the tokens of GPT-3. I believe, but because I don't use GPT-3, I don't test it. So, again, don't even believe anything you read on a blog.

Jordan Wilson [00:20:13]:

Always test it yourself. Right? Because, number 1, Jay, marketing. Right? They'll say, oh, 32 k tokens. It's like, oh, no, that's actually just the playground and the API. I believe GPT-3 has 4 k tokens. Again, I haven't used it, so I haven't tested it. So it's not 4x the tokens.

Jordan Wilson [00:20:34]:

I believe it's just about double. Yes. So we have other questions coming in. Kevin says, I thought GPT-4 commercial had 4 k tokens and GPT-4 enterprise had 8 k. So, no. You know, I just showed you GPT-4, you know, the commercial version, or what's available to everyone. Actually, the best resource for this, and I'm sharing it on my screen now, let me bring this up, it's actually this article from Microsoft.

How ChatGPT interprets words


Jordan Wilson [00:21:03]:

So, yes, it hasn't been updated, but this goes down, and it's actually more in-depth than anything on OpenAI's website. It breaks down every single model, you know, GPT-3.5 Turbo. I believe this article even has the older models that were deprecated and are no longer available. But that's a great resource. Speaking of great resources, I do wanna also talk about this. Right? I should have started with this, but let's get back to it real quick. What even is a token? So I wanna talk about the word set. Right? S-e-t. Set. So what does that mean? The word set can mean a lot of different things.

Jordan Wilson [00:21:45]:

Right? I actually looked this up. Apparently, the word set holds the world record for most definitions. It's like 32 or something. But the word set can mean so many different things. So you can set something down. Right? A village can be set at the top of a hill. If you're from New England, like some of my family, sometimes they say, I'm all set. Right? Like, I'm all good.

Jordan Wilson [00:22:10]:

If you play Euchre like me, I love Euchre. Does anyone else play Euchre? When I played with my mom and stepdad about 3 weeks ago, I got set a lot. So that's something in Euchre, you get set. Right? So the word set can mean so many different things. So that's why I actually wanted to bring up a tokenizer. We're gonna get a little dorky, but I started the show by trying to explain a little bit that ChatGPT, it doesn't actually, in theory, understand your words as words per se. Right? So when you give ChatGPT all this information, I'm gonna show here.

Jordan Wilson [00:22:48]:

I'm gonna zoom in on the screen. Let's look at how it even interprets what you put in and how it can actually process it. You know, use this NLP, this natural language processing, by converting all of your text into tokens to give it semantic meaning. Right, to give it that context. Contextual context. That's that's repetitive. But so when I put in here, I say, hi. My name is Jordan.

Jordan Wilson [00:23:14]:

I'm from Chicago. That's what I was talking about earlier. Right? Let's do it again live. So, hi, my name... you can see it count. So when I type in, I've said, hi, my name is, like I'm Eminem from '99, and we're at 5 tokens. Alrighty. So these are shorter words. So 4 words, 5 tokens, but it's all by characters.

Jordan Wilson [00:23:37]:

So, again, it it looks at By tokens, words, parts of a word, symbols, everything counts a little different. So I'm saying, hi. My name is Jordan, and I'm from Chicago. Okay? So that's 12 tokens. But what's really cool down here is I can see the text. Okay? And it's color coded, but then I can actually click token IDs, Which is very interesting. So this is now when I click token IDs. You no longer see words.

Jordan Wilson [00:24:06]:

Okay. And this is and this is why, large language models can actually be confusing the more that you dive into it. But this just shows you. In the end, ChatGPT doesn't remember or piece together your words. It thinks and processes in tokens. So It applies, it applies all these different things. And I'm actually curious because I I haven't done this yet. I'm gonna go ahead.

Jordan Wilson [00:24:32]:

I'm gonna copy and paste, because I'm curious. I'm copying and pasting. Just some examples I had on the word set. So now in this tokenizer, it says set something down. The village is set on the top of the hill, And then I it says, I'm all set, and then I'm also typing in Euchre, you're set. Okay? So what's what's pretty, interesting, This can also show you maybe when ChatGPT is not gonna understand something. So now I have the word set on here 4 times. And, again, I'm not a 100% sure.

Jordan Wilson [00:25:08]:

You know, I'm not a tokenization expert. But from what I believe now, when I can look at the text and look at the token IDs, and it does color coding. So I know for "set something down" and "the village is set on top of the hill," the word set there is in 2 different colors, okay, which is interesting. Which, again, to me means that ChatGPT is using this tokenization to use the context around that, processing via tokens, and it knows that those are different. Right? Whereas in the examples of "I'm all set" and "you're set," those are the same color. The word set is the same color, probably because that's not enough context. Saying I'm all set. If you tell ChatGPT, I'm all set, and you say nothing else, and that's how you start a conversation, by this tokenization breakdown, I can probably go ahead and assume it's gonna have no clue what you're talking about. Or if I just say, Euchre, you're set.

Jordan Wilson [00:26:06]:

Same thing, because it looks like it's assigning a similar token value to "I'm all set" and "Euchre, you're set," even though those are 2 very different things. And even if I look at the token IDs, I'm not gonna be able to break all these down, because the token IDs are essentially, it assigns, 2- to, it looks like, 4-digit numbers. So I don't have time to compare them all, but I'm assuming that even the word set there, when we put it into the tokenizer, we got very different values. Alright. I'm taking off my cap. Sorry. I forgot to let you all know that I put my cap back on. Took my Everyday AI hat off, but now it's back on.

Jordan Wilson [00:26:47]:

So I wanna recap. And thank you all for sticking around. Thank you for your comments. And if you do have any other questions, I'll try to get to them quick. So, Mike with a good comment here, saying, my token counter doesn't look like that, but I have the same extension. Weird. Yeah. Actually, Michael, I keep 2 token counters active, because one of them kind of gets a little finicky and doesn't always work. So I actually keep 2 of them active.

Jordan Wilson [00:27:18]:

Sometimes 1 works, sometimes the other works. There's only 3 of them. It's a Chrome extension, and there's only 3 of them. Not a single one works consistently, so I keep 2 of them active, just in case. Alright. Yeah. Tanya, thank you for joining.

Jordan Wilson [00:27:34]:

Tanya says, thanks again, never heard of these secrets either. Love this show. So thank you. Let me just quickly recap. We're gonna bring this to a close. I know sometimes when it's just me on the show, I go into old man Jordan mode, and I shake my fist, and, you know, I rant for 45 minutes.

Why you're getting hallucinations


Jordan Wilson [00:28:01]:

That's not today. Today, I wanted to have a very simple explanation of tokens: talk about what they are and why they matter in AI, specifically in ChatGPT. So like I said, yes, there are other large language models that have much bigger token memories. You know, like, Claude 2 has a 100,000-token memory, which is fantastic. Right? Huge memory. But if I'm being honest, it can't do a lot. So if you are working in ChatGPT, and this is why this is important, here are the two reasons that you're probably getting hallucinations. Number 1, you're not using ChatGPT plugins, specifically an Internet-connected plugin.

Jordan Wilson [00:28:42]:

Again, we go over this more in our Prime Prompt Polish course, which is also free. So if you want access to that, just, you know, hit me up. I'll tell you how. We talk a little bit more about how to stay current with plugins and how to not hallucinate with plugins, because all Internet-connected plugins are not created equal. I have a spreadsheet that I've shown on the show multiple times where we've tested more than 20 Internet-connected plugins on 4 different criteria. Not all of them work the same. Some of them, even those Internet-connected plugins, will lead to hallucinations, so you need to use the right one. So that's the first thing: not using Internet-connected plugins when you start a chat.

Jordan Wilson [00:29:26]:

Again, there's really no reason to use the default mode in ChatGPT. There's no real reason to use Browse with Bing when you're using an Internet-connected ChatGPT plugin. And, again, yes, there are some limited use cases for Advanced Data Analysis. Personally, I find better results by using a plugin like, someone here in the comments mentioned, Noteable, which is a great one that does some advanced computation. Wolfram Alpha does some good computation work as well. So number 1, hallucinations: if you wanna get rid of them, use Internet-connected plugins.

Jordan Wilson [00:30:02]:

Use that mode when you start a new chat in ChatGPT. Yes, you have to have the paid version, ChatGPT Plus, which is $20 a month. Well worth it. And number 2, our conversation for today: tokens. So I hope that going over this, answering a couple questions, showing a hallucination on screen helped. Right? So if you're joining this live, I showed this example where I started a new chat. I said, my name's Jordan.

Jordan Wilson [00:30:30]:

I'm from Chicago. I asked another question, then I said, what's my name? Where am I from? It got it right. Then I fed in just a bunch of nonsense, watched our token count get up to about 9,000. Right? And I said, what is my name, and where am I from? And ChatGPT didn't know anymore. That is why you need to keep tokens in mind when you use ChatGPT. Right? Anyone that's been through our free PPP course knows this, and they're getting much better results because of it. But I hope this was helpful.

Jordan Wilson [00:31:00]:

I hope you know a little bit more about tokens, what they are, why they're important, and I hope you can join us back again for another edition of Everyday AI. Thanks, y'all.



We’ve helped tens of thousands of people learn ChatGPT.

And there’s one common question we always get.

What the heck is a token?

If you use ChatGPT, you probably deal with hallucinations all the time. After all, large language models lie and make stuff up.

And one of the most common causes of LLMs lying is running out of memory, or going past their token limits.

So, we decided to dork out a bit and answer that question about ‘what the heck is a token?’

To be honest, the best way to understand what a token is and what it does is to watch it live and get in on the discussion. But if you want the tl;dr version, we got you.

What’s a token?

Tokens are essentially how ChatGPT and other large language models interpret all of the data we put in.

A token can be a word, parts of a word, a phrase or even a symbol.

How’s it work?

ChatGPT is essentially the world’s smartest autocomplete. But, it doesn’t actually work in words and phrases like we think. It uses these tokens to actually understand human language and have conversations. As an example, let’s look at this input:

You enter information.

The GPT technology then breaks it down into tokens.

In this example, you’ll see it uses context and a type of Natural Language Processing (NLP) to make sense of the words. (Notice, even the word “set” gets different values because of the context here.)

Then, GPT converts this into unique token IDs, which help it build context and ultimately have conversations that seem smart and human.
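The text-to-tokens-to-IDs pipeline above can be sketched in code. To be clear, this is a toy illustration, not OpenAI's actual tokenizer: the real one uses byte-pair encoding (available via the `tiktoken` library) and will split and count this sentence differently. The `toy_tokenize` and `toy_token_ids` helpers, and the vocabulary they build, are made up for demonstration.

```python
# Toy tokenizer sketch. The real ChatGPT tokenizer uses byte-pair encoding
# and a fixed vocabulary; this simplified version just splits text into
# word/punctuation pieces and maps each distinct piece to an integer ID.

import re

def toy_tokenize(text):
    """Split text into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

def toy_token_ids(tokens, vocab=None):
    """Assign each distinct token an integer ID, in order of first appearance."""
    vocab = {} if vocab is None else vocab
    ids = []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
        ids.append(vocab[tok])
    return ids, vocab

tokens = toy_tokenize("Hi. My name is Jordan, and I'm from Chicago.")
ids, vocab = toy_token_ids(tokens)
print(tokens)  # the pieces the model "sees"
print(ids)     # the numbers it actually processes
```

Note how repeated pieces (like the period) get the same ID each time, which is the idea behind the color coding in the tokenizer tool: the model works on the numbers, not the words.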

Still with us?

OK. So essentially, ChatGPT doesn’t actually communicate with us in words. It uses its big old neural network brain to understand (via tokens!) what the heck we’re saying so it can “talk” back. (Via tokens, of course.)

Getting it now?

Great.

But the most important thing to remember here is memory! That’s what today’s show is all about. All of that back-and-forth communicating takes up ChatGPT’s memory.

And after it hits about 8,000 tokens (or roughly 6,400 words) it starts to forget.

You know. Lie. Make stuff up. Become nonsensical.
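That 8,000-token / 6,400-word rule of thumb (about 1.25 tokens per word) is enough to sketch why the forgetting happens: once the conversation goes past the limit, the oldest messages effectively fall out of the context window. A rough sketch under stated assumptions — the function names and the oldest-first trimming strategy here are ours, and real token counts require the actual tokenizer.

```python
# Rough token-budget tracker, using the episode's rule of thumb that
# 8,000 tokens is roughly 6,400 words (about 1.25 tokens per word).
# This is only an estimate; exact counts need the real tokenizer.

TOKEN_LIMIT = 8000
TOKENS_PER_WORD = 8000 / 6400  # = 1.25

def estimate_tokens(text):
    """Estimate the token count of a message from its word count."""
    return int(len(text.split()) * TOKENS_PER_WORD)

def trim_history(messages, limit=TOKEN_LIMIT):
    """Drop the oldest messages until the estimated total fits the limit."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > limit:
        kept.pop(0)  # oldest message is "forgotten" first
    return kept
```

The key behavior to notice: when the budget runs out, it's the *start* of the conversation that disappears, which is exactly why ChatGPT forgot "my name is Jordan" after ~9,000 tokens of nonsense.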

Alright, that was a bit dorky y’all.

Let’s give some practical examples and break this thang down 👇


🦾How You Can Leverage:

OK, so ChatGPT has a memory limit, measured in tokens.

All that back and forth, inputs and outputs, starts eating up what ChatGPT can retain.

And once you get past that token limit, ChatGPT starts to gradually lose its memory. And then it starts to lie, hallucinate, and just make ish up.

So, let’s talk about how to spot hallucinations, how to avoid them, and ways to keep from burning through too many tokens.

Say it with me now…

1

2

3!!!!!

1 – Use Plugins for better memory 🔌

Here’s the real talk: plugins help you save your token count. Instead of copying and pasting long amounts of text, you can instead give ChatGPT access to the internet.

What’s the benefit? Crunching webpages, PDFs, etc. uses WAY fewer tokens than if you were to copy and paste that same information.

Try this:
We’ve covered this in-depth, so many times. ChatGPT with plugins is by far the most underutilized aspect of GenAI.

Side benefit of using plugins? Being able to access more information inside your chats without eating up as many tokens.

We’ve done quick tutorials on how to browse the web with ChatGPT, and even the best PDF ChatGPT plugins.

2 – Use a token counter 🪙

How do you know when ChatGPT starts to hit its memory limit? Aside from just anecdotally feeling like ‘ChatGPT seems to be forgetting everything,’ you can use a token counter.

We always recommend using Chrome while using ChatGPT, because there’s some great Chrome extensions (like token counters) that can help you get more outta ChatGPT.

Try this:
We’ve also already dished on how a token counter works and why you need to use one. You can go rewatch today’s show, and see in real-time how ChatGPT can at first remember information, then gradually forget it.

(Seriously, go watch this. It will instantly improve your LLM skills by seeing how ChatGPT can remember things then forget)

Also, go watch this AI in 5 where we show you how to use a token counter.
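If you can't install an extension, you can approximate what a token counter does with two common rules of thumb: roughly 4 characters per token, or the show's 8,000-token / 6,400-word ratio (about 1.25 tokens per word). This is a hedged sketch — the function names, the 80% warning threshold, and the estimates themselves are our assumptions, not how any particular Chrome extension actually works.

```python
# A minimal stand-in for a token-counter extension: estimate how much of
# ChatGPT's context window a running chat has used. Takes the larger of
# two rough estimates (chars/4 vs. words*1.25) to stay on the safe side.

def rough_token_count(text):
    """Rough token estimate from character and word counts."""
    by_chars = len(text) / 4
    by_words = len(text.split()) * 1.25
    return int(max(by_chars, by_words))

def context_warning(chat_log, limit=8000, warn_at=0.8):
    """Mimic a token-counter extension's status message for a chat log."""
    used = rough_token_count(chat_log)
    if used >= limit:
        return f"Over the {limit}-token limit ({used} est.) -- expect forgetting."
    if used >= limit * warn_at:
        return f"~{used} tokens used -- getting close to the {limit} limit."
    return f"~{used} tokens used -- plenty of room."

print(context_warning("Hi. My name is Jordan, and I'm from Chicago."))
```

When the warning fires, that's your cue to start a fresh chat and re-prime it, rather than trusting ChatGPT to remember what you told it 7,000 tokens ago.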

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI