Ep 254: Freestyle Friday – Ask Me Anything (about AI)

AI Memory Feature in Chatbots

The use of AI in business has skyrocketed with the introduction of conversational AI tools such as ChatGPT. ChatGPT now offers a memory feature that carries information across threads, enhancing the user experience, though it is limited by the size of the model's context window. The broader cross-chat memory feature raises questions about how significant those limitations will be in practice.

Dependency on Third-Party AI Tools

Depending on third-party AI tools brings a mix of benefits and downsides. The base models, accessed directly through APIs, often lack additional features such as real-time collaboration and integrations with services like Google Drive. In practice, AI in business applications reaches its full potential through third-party services like Microsoft Copilot, which transform the workplace ecosystem with customized solutions.

How To Utilize Custom GPTs without Prior Coding Experience

Many perceive AI integration as a complex process that requires coding skills, but this is a myth. Custom GPTs in ChatGPT offer great flexibility in how they are created and used. Users with no prior coding experience can upload their own data, knowledge bases, and documents, and create detailed configurations for unique requirements. Intuitive instruction commands also make the GPTs convenient to operate.

Balancing AI Applications and Portions of Private Life

Whether to rely on personal devices for AI applications is a matter of personal preference; drawbacks include small screens, which can make phones unappealing for many tasks. In addition, sharing AI chats and tools with people who do not have an account remains an under-explored feature of large language models.

AI and the Legal Conundrum

Large language models like Llama 3 are trained on data collected from across the open and closed Internet, leaving a potential trail for legal challenges over the use of copyrighted material. Comparisons between closed and open-source models, from Google Gemini to OpenAI's ChatGPT, reveal the complex and still-evolving legal framework surrounding AI.

Exciting Opportunities in the AI Market

The artificial intelligence market is brimming with opportunities, including the fight against disinformation and fake news. Noteworthy is the unprecedented growth of tech companies like Meta that leverage AI for large-scale solutions. AI applications in the education sector, such as personalized tutors, also stand out as a promising start-up niche.

AI's Potential in Business Applications

AI holds the power to revolutionize business operations across sectors. Tools such as Castmagic and Voila can assist with everyday tasks like creating newsletters and capturing details from crucial business meetings.

As machine learning advances, acquisition of AI startups becomes less crucial for tech giants. They are focusing on developing their own models, some even releasing them as open-source projects, thus dramatically altering the future dynamics of the AI industry. Despite the current AI anxiety and concerns over work-life balance, AI's role is growing exponentially and its implications are not confined solely to business but extend to daily life and routines.

Topics Covered in This Episode

1. AI-related queries
2. AI in business and startups
3. Use of AI models
4. Legal and ethical challenges in AI


Podcast Transcript

Jordan Wilson [00:00:16]:
This show is for you. There's no topic. It's anything goes. You asked for it, so we're here. Our first ever, and who knows, maybe this will become a thing, but our first ever Freestyle Friday: ask me anything about AI. What's going on, everyone? My name is Jordan Wilson, and this is for you. Everyday AI is your guide to AI. It's for us all to learn how to leverage generative AI to grow our companies and to grow our careers.

Jordan Wilson [00:00:52]:
So if that sounds like you, then you should be going to youreverydayai.com and signing up for our free daily newsletter, because the learning doesn't stop here with the livestream or the podcast. The newsletter is how you put that all into action. So make sure, if you haven't already, go sign up. And hey, like I said, when I say this is for you, it's literally for you, because we asked in our newsletter yesterday, hey, what should tomorrow's show be? And just narrowly, you all wanted to hear Freestyle Friday. So whatever that is, we're gonna find out. So hey.

Jordan Wilson [00:01:29]:
If you're on the podcast, this one's probably gonna go all over the place depending on what our livestream audience wants to hear about today. So, hey, if you are a normal podcast listener, if you're not doing anything at 7:30 AM Central Standard Time, it's a great time here on, you know, LinkedIn and YouTube. Some of our greatest guests that we've had over the last year, you know, now show up to the LinkedIn livestream. So it's a great place to network, meet other people who are helping push their organizations forward with AI. So hey, live audience. Whether it's Frank, who already has a question, Chrissy, our friends here on YouTube, Tara, Doug, everyone else, let me know: what is your biggest AI question? Right? If you don't ask it, it's gonna be a show just full of random rambling and rants. So get your questions in now.

Jordan Wilson [00:02:23]:
Before we do, though, let's start it off as we do every single day with the AI news. So Olympic organizers just released their plan to use AI in Olympic sports. The International Olympic Committee has just announced plans to use artificial intelligence in sports, with a focus on identifying promising athletes, personalizing training, and improving judging. Yeah, that'll be a good one. They aim to do this in a responsible way and also use AI to protect athletes from online harassment and enhance the viewing experience for spectators. So AI will play a significant role in the upcoming Paris Olympics, with plans to use it for athlete identification, training, judging, and security. The IOC, so that's the International Olympic Committee, is determined to take advantage of AI while being responsible and ensuring the uniqueness and relevance of the Olympic Games.

Jordan Wilson [00:03:14]:
So, yeah, we've seen AI used in a lot of different ways. You know, I think the Masters just had AI commentary. We've seen it used, you know, in tennis. You know, the NBA just kind of announced their AI plans a couple of months ago. So it should be interesting how the Olympics do it, and is this gonna be a fresh take that sticks around, or is this gonna be one of those things like, you know, how they used to have the hockey puck light up, you know, 20 years ago? So we'll see. Alright. Our next piece of AI news: the US military just conducted a human versus AI dogfight in the air.

Jordan Wilson [00:03:49]:
So an AI-controlled F-16 fighter jet has successfully engaged in a dogfight against a human-piloted jet. So pretty interesting there, marking a significant milestone in the use of machine learning in piloting military aircraft. So the first ever AI versus human dogfight occurred, apparently, in September at Edwards Air Force Base in California; this is just coming out now. The autonomous aircraft is called the X-62A VISTA, it has so far undergone 21 test flights, and it is a modified F-16 fitted with an AI program. I have no clue what any of those, you know, letters and numbers mean, by the way. I don't follow fighter jets, but maybe if you do, that means something. But the use of machine learning in piloting military aircraft has been historically prohibited due to safety concerns, and the success of this dogfight program represents progress in the area.

Jordan Wilson [00:04:44]:
Alright. Last, but I would say definitely not least, Meta has released Llama 3, its somewhat open-source model. So Meta's new large language model (and, you know, I guess medium language model) Llama 3 is actually now live. It is already released on cloud providers, model libraries, and on meta.ai. And so far, the company is boasting improved performance, outperforming other similarly sized models in benchmark tests and human evaluations. So the Llama 3 collection of models aims to be multilingual and multimodal, have longer context, and improve overall performance. So right now, there are just two released varieties.

Jordan Wilson [00:05:31]:
So there's the smaller 8-billion-parameter model, which is benchmarking above the smaller models from Google's Gemma and Mistral. There's also the medium model, the 70-billion-parameter model, which is already benchmarking above, you know, Google's mid-tier Gemini Pro 1.5 and Claude 3 Sonnet. And the model that I think most people are gonna be looking at, which is not yet released, is the 400-billion-parameter model, expected to be the biggest model, but it is still in training. Meta CEO Mark Zuckerberg did release a video yesterday talking about some of these human evaluation scores. Right? So we talk about this on the show a lot: the MMLU benchmark, which is one of the most, I don't know if famous is the right word, but one of the most widely regarded benchmarking tests. So that's the Massive Multitask Language Understanding test, and it's essentially a benchmark to say, like, hey, can this model think, understand, and reason similarly to a human? So most AI experts look at MMLU scores to see how capable a model is in terms of performing tasks like a human across a wide spectrum.
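Those parameter counts translate roughly into hardware needs, since the memory for a model's weights scales with parameter count times bytes per parameter. A back-of-the-envelope sketch (the per-parameter byte sizes are standard precision figures; the totals cover raw weights only, ignoring activations, KV cache, and other runtime overhead):

```python
# Rough memory needed just to hold a model's weights, by precision.
# Ballpark figures for the Llama 3 sizes mentioned above.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str = "fp16") -> float:
    """Approximate gigabytes of memory for the raw weights alone."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for size_name, params in [("8B", 8e9), ("70B", 70e9), ("400B", 400e9)]:
    print(f"Llama 3 {size_name} @ fp16: ~{weight_memory_gb(params):.0f} GB")
```

At fp16, the 8B model needs on the order of 16 GB for weights alone, which is why it is the variety people can realistically run locally, while the 400B model is firmly data-center territory.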

Jordan Wilson [00:06:52]:
So far, you know, Meta's smaller 8-billion-parameter model and 70-billion-parameter model, so if you think about it as small and medium, are already out-benchmarking the leading models out there right now, which is interesting. So we'll see if the 400B is going to out-benchmark, you know, Claude 3 Opus, which is kind of the leader now by, like, 0.1 over GPT-4. So there's gonna be a lot of model talk and a lot of Meta talk, and, you know, even what that means. Right? Meta's model is not completely open source, but it is essentially open source for all intents and purposes. That's another episode for another day. But, I mean, that's pretty big news, Meta going this open-source route with these pretty big models. Alright.

Jordan Wilson [00:07:40]:
So let's get into it. Let's talk about what you wanna talk about. So let me know, audience: what is your biggest AI question? So, hey, maybe you wanna know, I don't know, something random about AI. Maybe you wanna know something meaningful. Maybe you're still trying to implement AI in your company and you've hit a roadblock. Maybe you just wanna know what's better between these two tools.

Jordan Wilson [00:08:10]:
Maybe you wanna know about GPU chips or, you know, the energy required for the future of generative AI. Maybe you wanna know where to start. It doesn't matter what your question is. Today is the day you all are running the show. You're interviewing me, our livestream audience. So let's go ahead. If you don't have a question in now, please get it in.

Jordan Wilson [00:08:33]:
Please get it in. I want this to be a fun, interactive show. Don't be shy. No one in our audience is gonna laugh at you. People laugh at me, not you. So get your questions in now, and I'm going to try to answer them as I see them coming up. We already have a couple questions, so let's just start running them down. And, hey, if you wanna put me on the spot, put me on the spot.

Jordan Wilson [00:09:01]:
You know, you can grill me. You know, sometimes I grill our guests, so maybe this is your time to grill me. Doug's starting off topic, but that's fine. So Doug here is asking: coffee of choice? Hey, that question counts. You can ask about AI.

Jordan Wilson [00:09:20]:
You can ask about non-AI stuff too, I guess. I don't know if anyone's interested in that, but full Nespresso, Doug. Full Nespresso. That's what I have right here, and it's already kind of warm, not hot anymore. You know what? I actually didn't drink coffee until I was maybe 23 or 24, a little older than that now. And, you know, each time you try to do coffee, you're like, oh, these K-Cups, these are pretty good. And then you get your own beans or whatever and you're like, oh, these are great. And then when you have an espresso, there's no turning back.

Jordan Wilson [00:09:57]:
So it's espresso all day, but I fast. Right? So I fast. So it's not, like, a latte. It's just, you know, straight black with a little bit of sugar-free something syrup. Alright. So here we go into the AI. Frank. What's up, Frank? Frank, thanks for joining us from YouTube land.

Jordan Wilson [00:10:13]:
So Frank's asking: does ChatGPT-4 remember across threads now? That's a great question, Frank. So as far as I know (I haven't checked since last night), no. So there are actually two different things that you could, in theory, be asking about here. So there is a feature called Memory that ChatGPT, I wouldn't say they had a broad rollout, but a fairly broad rollout, but still not everyone has access to that yet. I was even checking as of last night. I usually check my ChatGPT settings for all my accounts every single day, just so that when new features or beta features or features that have been, you know, teased do get publicly released, I can know about it.

Jordan Wilson [00:11:01]:
So it looks like, at least right now, I don't have access on that account. Let me go ahead and check my other one. So if you wanna know, in your settings you should have a thing that says Personalization. So, yeah, I still don't have it. So there are two different things here, Frank, and let me try to explain this to our livestream audience. So there is just a Memory feature that OpenAI promoted. Right? But still not everyone has it. I'm not sure why.

Jordan Wilson [00:11:33]:
So with this one, I think there's good and there's bad. And I did kind of a 5-minute, you know, we do an AI in 5 almost every single day, I did a rundown on the different pros and cons of this. So, essentially, when you're having a chat within ChatGPT, ChatGPT is going to decide which things it wants to commit to memory. And then those memories are going to follow you around across all chats. So maybe it might be something personal, like, oh, I'm a marketing consultant and I care about, you know, edge computing, as an example. So then if you're asking ChatGPT in a new thread about something, it's going to keep that in mind. However, I don't like this implementation, or at least how it was previewed.

Jordan Wilson [00:12:15]:
But again, this hasn't had a public release yet either. So the bad thing is that ChatGPT picks up on a lot of those things on its own and commits them to your memory. You can obviously go in and click in your settings and delete things from your memory, but I don't like that. I don't like that ChatGPT automatically commits things to memory and then kind of takes that knowledge and applies it unilaterally to all your chats, because what if it's something that you don't necessarily care about? Right? What if I'm, you know, getting restaurant recommendations or doing things in my personal life, and then I'm working on something for Everyday AI or for my other company, Accelerant Agency, and I don't want that information following me around and influencing the output? So I don't personally like it. So there's that aspect. And then there's another thing called cross-chat memory, and I'd say even fewer people have access to this; it was previewed like four months ago.

Jordan Wilson [00:13:11]:
So what that is, it's more about the context window being shared between multiple chats, which I don't see working well in the short term until ChatGPT has a much larger memory. Right? OpenAI has said that you have 128,000 tokens of memory. You do not. At least the last time I checked, which was Tuesday, it was still stuck at 32,000. So if you're sharing a context window of 32,000, or even 128,000 tokens, for 128,000 that's roughly, let's just say, 100,000 words. So think 100,000 words of context between all of your chats. I don't think that's great.
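The token-to-word conversion behind those figures is a rule of thumb, not an exact count. A quick sketch (the 0.75 words-per-token ratio is a common heuristic for English; real numbers depend on the tokenizer):

```python
# Rough token-to-word arithmetic for the context windows discussed above.
# ~0.75 English words per token is a common heuristic; exact counts vary
# by tokenizer and by language.

WORDS_PER_TOKEN = 0.75  # heuristic, not exact

def tokens_to_words(tokens: int) -> int:
    """Approximate word capacity of a context window of `tokens` tokens."""
    return round(tokens * WORDS_PER_TOKEN)

for window in (32_000, 128_000, 1_000_000):
    print(f"{window:>9,} tokens = roughly {tokens_to_words(window):,} words")
```

That's why a 128,000-token window works out to roughly the 100,000-word figure mentioned, and a 32,000-token window to only about 24,000 words shared across every chat.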

Jordan Wilson [00:13:52]:
Right? If we were talking, you know, Google Gemini's 1-million-token context window? Perfect. You know, share that knowledge, that working context window, across all chats. Right? And it is different: cross-chat memory is remembering literally everything as a context window across all of your different chats, whereas this other feature, just called Memory, creates more of a memory bank that you can then go in and modify. So, two different things. So hopefully, Frank, that was a good answer to the question.
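The Memory-bank behavior described in this answer can be sketched in a few lines. This is an illustrative toy, not OpenAI's actual implementation: the idea is a small, user-editable list of extracted facts that gets prepended to every new chat, as opposed to cross-chat memory sharing the raw conversation context itself.

```python
# Toy sketch of a "memory bank" in the spirit of ChatGPT's Memory feature.
# Names and structure are invented for illustration only.

class MemoryBank:
    """A small, editable list of facts prepended to every new conversation."""

    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def forget(self, fact: str) -> None:
        # The user can delete memories they don't want following them around.
        self.facts.remove(fact)

    def build_prompt(self, user_message: str) -> str:
        # Every new chat starts with the remembered facts as context.
        context = "\n".join(f"- {f}" for f in self.facts)
        return f"Known about user:\n{context}\n\nUser: {user_message}"

bank = MemoryBank()
bank.remember("User is a marketing consultant")
bank.remember("User likes deep-dish pizza")  # picked up from a casual chat...
bank.forget("User likes deep-dish pizza")    # ...and removed so it can't bleed
                                             # into unrelated work threads
print(bank.build_prompt("Draft a newsletter intro"))
```

The forget step is exactly the manual cleanup the answer complains about: because facts are committed automatically, anything you don't prune gets applied to every future chat.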

Jordan Wilson [00:14:26]:
Alright. So now Doug. Doug asks, from yesterday: who can see custom GPTs in Copilot? Doug, that's a great question. You know what? You already stumped me. That's so easy, right? Just because our team right now is not using Microsoft 365 Copilot. You know, I think we are gonna start testing out Copilot Pro a little bit more; we're just kind of standard Copilot users right now. But I do believe, especially in the Pro and Enterprise versions, Doug, that in the same way that in ChatGPT and GPTs you have sharing controls,

Jordan Wilson [00:15:00]:
I do believe the same thing can be said for Copilot 365. You know what? I know we have a couple people from Microsoft who tune in to the show and the livestream, so maybe they'll answer that question a little better than I have, or than I can. Tara. Oh, good question here, Tara. So Tara's asking: what AI apps do you have on your phone? So I might get judged for this, Tara, but just ChatGPT and Copilot. That's it. Here's why.

Jordan Wilson [00:15:35]:
And, you know, anyone who knows me personally can attest to this: I suck at everything on the phone. I have hundreds of unread emails. Every day, I have hundreds of unread texts. So I'm everything Apple right now. Right? I'm on an Apple computer, I have an iPhone. So I do all of my texting, for the most part, on my computer.

Jordan Wilson [00:15:56]:
I very rarely text on my phone. I have fat fingers. You know, I'm 38 going on 83. I would say it's not that I don't like to be on my phone, it's that I don't like to use it, I guess. I've always been kind of spoiled by big screens. You know, I have two, I don't know, 27-inch screens in front of me. So when I try to do anything on my phone, I don't like it. I don't like doing tasks on my phone, but I do have, obviously, ChatGPT on my phone.

Jordan Wilson [00:16:26]:
I do have Copilot on my phone. I should probably get Perplexity on my phone, because I love Perplexity and I use it often. So I actually don't know why I don't have Perplexity on my phone. That's a great question, but, yeah, I don't really use apps on my phone a whole lot, which I know puts me in the minority, not just for my age group, whatever someone in their upper thirties is. I don't know what that is. I don't know if that's a millennial or a Gen-something. Is anyone else in their mid-thirties? What are we called? So, yeah, I don't actually use apps a lot on my phone in general.

Jordan Wilson [00:17:03]:
Alright. Chrissy. So Chrissy's asking: thoughts on how "open source" (in quotes) Llama 3 will impact the AI space? Do you think it can handle the PPP model? Oh, good question, Chrissy. Okay. So let me start with a very, very brief overview of this, you know, "open source" in quotes. Right? I said that to start with. So, essentially, most models that we all use and, you know, if you're a little bit of a newbie, I'm gonna generalize, so if you're an expert, some of this is probably a little wrong, but I'm just trying to generalize.

Jordan Wilson [00:17:43]:
Right? So most models that we talk about are closed. They're proprietary. Right? So, you know, Google Gemini; ChatGPT, ironically enough from OpenAI, is closed. So ChatGPT, Gemini, Copilot, Claude, those are all closed, proprietary models. Right? You can't go in and see how they work, or build off of them. You can with Llama. Llama is somewhat open source. Right? So I guess the one thing is it's not an approved open-source license.

Jordan Wilson [00:18:17]:
So there is the Open Source Initiative, or OSI, and Llama and Meta do, I guess, not adhere to those standards. So Meta is almost trying to redefine what open source means, which I guess you can do when you are one of the biggest companies in the world. And right now, they have probably one of the leading open-source AI pro... well, yeah, I don't know any other AI product that would compete with them in the open-source space. So maybe they get to redefine what open source is. But traditionalists will look and say: open source? Meta? Llama? Not really. Okay. But thoughts on how it will impact the AI space. Oh my gosh.

Jordan Wilson [00:18:55]:
This is a great question, Chrissy. So I'm actually gonna pause and go back to see how many more questions we have, so I know how long I can go on a small little rant on this. So, hey, livestream audience, keep the questions coming. Alright. Looks like we got a lot, so I'm gonna try to go a little quickly on these so I can get to them all. And if you could keep them a little shorter too, that would be helpful, just so I can kind of read them live and still, you know, multitask and process them here. So, Chrissy, how will it impact the space? Huge, huge power move from Meta, if I'm being honest.

Jordan Wilson [00:19:34]:
Right? So, again, these are Meta's benchmarks that they released, so we'll wait until we see everyone else's benchmarks. But it's very capable. Right? You can go right now, go to meta.ai. Right? So you have to sign in with, I think, a Facebook account. You know, I was playing around with it for 20, 30 minutes yesterday. It's a very capable model for being an open-source model. So this changes things.

Jordan Wilson [00:19:56]:
Right? I think this is one of those things where Meta is looking at the OpenAIs of the world. They're looking at the Anthropics of the world. They're even looking at, you know, Google and Microsoft, and they're saying Meta is in a place of dominance. Right? Because they're not as affected as everyone else by the potential downside, the potential financial downside, of how we as humanity start to use large language models. Specifically, I'm talking about search. So I'm talking, you know, about the big players in the room, Google and Microsoft. Right? Google more so than Microsoft, but those are two big competitors, and they get so much of their revenue from search.

Jordan Wilson [00:20:38]:
Right? So there's been so much of this uncertainty about how, you know, all these large language models are gonna monetize, how AI search is going to change, how traditional SEO is gonna change, how search engines are gonna continue to make money. So that's something that is top of mind for sure for Microsoft and Copilot, for Google and Gemini, and, by proxy, technically then Perplexity as well. Meta doesn't care. Right? I think this is one of those things where, you know, yes, OpenAI is now probably at the point where they're gonna be crossing about $1 billion in annual revenue, which is a ton for a company that just put out its first kind of quote-unquote flagship paid product, you know, like a year and a half ago. This is one of those things where Meta, I think, is just being like, hey, let's just create something better than everyone else. Not to, like, squash the competitors, but to make the competitors less meaningful to their overall strategy. Right? At least right now, the AI space is a very small space for Meta and its current monetization.

Jordan Wilson [00:21:44]:
Right? Because Meta, they obviously don't have a monopoly on social media, but they are bigger than everyone else. Right? Between Facebook and Instagram, and you throw in, you know, WhatsApp as well. They have a straight-up monopoly on that space, where everyone else is still competing in the search space, and, you know, the crossover between search and large language models is starting to blend as well. So I think this is a power move from Meta: just going out there and saying, hey, we're gonna release models that are at the same tier as, or punching above, the most powerful models out there, and we're gonna make them open source. Right? I think it almost builds this moat, so to speak, where they don't even necessarily need a moat, which is super interesting. So, you know, I'm fascinated by that.

Jordan Wilson [00:22:34]:
I'll probably have a dedicated show in the future about this strategy from Meta, but I like it. Right? At least right now, you know, they are rolling out AI in all of their platforms. Right? So it does make sense for them to make a heavy investment into AI, even though the AI model itself right now is, you know, open source. So they don't need to necessarily monetize it, but they wanna make it better. Right? That's the main advantage when you put something out there open source: then you have, you know, some of the world's brightest developers working essentially for you, making your model better, poking holes in it. You know, so I love the move. Alright. Lorena, joining us from Australia.

Jordan Wilson [00:23:17]:
Good evening! So, first time live. That's awesome. Is anyone else here live for the first time? So Lorena's asking: hearing about the benchmark scoring, the MMLU system, why is this important? Great question. So, you know, essentially, there's no one good way to measure models. Right? Even since the MMLU, which again is the Massive Multitask Language Understanding benchmark, there have been different benchmarks. So, to put it in the most oversimplified terms, think of this as a test that a college graduate would take.

Jordan Wilson [00:23:56]:
Right? One that is just testing their abilities as a human across so many different areas. You know, testing a human's ability to reason. Right? So imagine if there were a test above the ACT or SAT that we have here in the US for high school students, the equivalent of that which you had to take to get out of college, to say you are a very smart human being who is well educated, you understand how the world works, you can reason, you can understand, and you are the smartest of the smart. So you could think of MMLU as like that, but for AI models, for large language models. Right? So it's the ability to reason and really understand how all of the knowledge in its training data can apply to completing real-world tasks at a high level across multiple mediums. Right? So that's an easy way to think about it, and it's important because, at least for now, the MMLU is kind of the industry-standard benchmark, testing a model almost against human performance.
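Mechanically, the scoring behind a benchmark like MMLU is simple: the model answers multiple-choice questions across many subjects, and the reported number is the fraction it gets right. A toy sketch (the answer key and model predictions here are invented for illustration):

```python
# Minimal sketch of multiple-choice benchmark scoring, MMLU-style:
# each question has one correct letter (A-D), and the headline score
# is just accuracy over all questions.

def score(predictions: list[str], answer_key: list[str]) -> float:
    """Fraction of questions where the predicted letter matches the key."""
    correct = sum(p == a for p, a in zip(predictions, answer_key))
    return correct / len(answer_key)

answer_key  = ["B", "D", "A", "C"]   # gold answers across mixed subjects
predictions = ["B", "D", "A", "B"]   # letters some model picked

print(f"MMLU-style accuracy: {score(predictions, answer_key):.0%}")
```

Published MMLU scores are this same accuracy computed over roughly 14,000 questions spanning 57 subjects, which is what lets a single number stand in for "across-the-board, human-like" capability.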

Jordan Wilson [00:24:57]:
Right? Because there are other tests, you know, tests just on math, and other tests on specialized fields or, you know, kind of specialized sectors, so to speak. So MMLU is kind of an across-the-board measure: how close is this model to being able to reason like a human, to understand like a human, to, you know, handle trick questions, so to speak? Maybe I'll have to do a dedicated episode on MMLU in the future. Great question, though, Lorena. I love that. So, hey, if you are joining us midway through, thank you for tuning in. This is Freestyle Friday. You guys voted for this in the newsletter.

Jordan Wilson [00:25:33]:
This is what you wanted. I know it's a little random for our podcast audience, but, hopefully, this is fun for our livestream audience. So Christopher joined from LinkedIn. Thanks for tuning in. Can you say that? Are we on AM radio, tuning in? Shout out WBBM, that's my favorite AM radio station here in Chicago. So Christopher asks: what are some good resources to learn how to create a local or cloud model that can be pre-trained with my data? Yeah, this is a little more technical for the Everyday AI Show, but I'll tell you this.

Jordan Wilson [00:26:06]:
There have been so many new models recently that are now open source. So, you know, obviously, I would look at Mistral. I would look at Llama's new model, and then essentially applying RAG. So, you know, there are different processes, without getting too technical, that let you kind of apply your own data through RAG, and there are, you know, third-party systems that make that pretty easy now, even for people who aren't very technical. So, yeah, without getting too far into the details there, Christopher, I would say look at some of the Mistral and Llama models, and then look at some of the kind of popular third-party ways that you can bring in your own data via RAG. Also, Chat with RTX from NVIDIA: you can tap into other models that way, and it's kind of like a built-in, almost RAG system that you can run locally on your PC if you have a certain level of NVIDIA RTX chip. So hopefully that helps.
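The RAG flow mentioned here, retrieve the most relevant snippets from your own documents and feed them to the model alongside the question, can be sketched minimally. Real systems use embeddings and a vector store for the similarity step; plain word overlap stands in for that below, and the documents are invented:

```python
# Bare-bones sketch of retrieval-augmented generation (RAG).
# Word overlap stands in for embedding similarity purely for illustration.

import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = tokens(query)
    return sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Retrieved snippets become grounding context for the model.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
print(build_prompt("What is the refund policy?", docs))
```

In a production setup, `build_prompt`'s output would be sent to whichever model you chose (Mistral, Llama, etc.); the point is that your own data is injected at query time rather than baked into the model.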

Jordan Wilson [00:27:03]:
Brian. What's up, Brian? Long-time viewer here, so I appreciate your support as always. Okay. I like this. What is your number one go-to AI process? Interesting. Okay. I'd say a lot of them are tied.

Jordan Wilson [00:27:21]:
Right? And it depends on what I'm working on. But let's just say, hey, for Everyday AI. Right? Because that's what I spend, you know, a couple hours every single day working on. I'd say my go-to process, which is weird, is chatting with my transcript. Because one of the things that I still do manually, as a human, is write our newsletter every day. Right? And our newsletter usually goes out about 2 to 3 hours after the show is done. However, what usually happens in that time is I, a lot of times, have other meetings.

Jordan Wilson [00:27:54]:
You know, once I'm done with the show, I might be answering some, you know, LinkedIn comments, questions, etcetera. And then, you know, maybe I start writing the newsletter, which, you know, every single day recaps the show. So probably today, what we'll do is I'll pick my, you know, 3 to 5 favorite questions and just do a little bullet-point recap of our take on each. But, Brian, what happens is a lot of times I'm already forgetting. I'm already forgetting, right? Like, if I interview someone, you know, we had Darren on the show this week and we were talking about AI and education, and as I recall, I think I had a meeting right after. And so I'm going in to, you know, type up the newsletter.

Jordan Wilson [00:28:30]:
Yes, written by me, a human. And I'm forgetting things. So, two tools I love, aside from ChatGPT. If you're asking my go-to AI process, well, one of them is chatting with a conversation that I had. And also, full disclosure, sometimes when I'm interviewing someone here on the show, I'm typing down notes or looking at the comments, looking for a good question for the guest. So sometimes I might miss something a guest says, even though I'm trying my best to both tune in and to read and put up questions from you all. But to chat with the transcript, what that means is I use Cast Magic.

Jordan Wilson [00:29:14]:
Love Cast Magic. We had the cofounder and CEO, Blaine, on the show. Cast Magic is an AI-powered tool where, as soon as I'm done (my coworker Brandon, he's so quick), it's already there. A lot of times I'm chatting with the guest right after the show, and then I can jump into Cast Magic. It has a breakdown of all the key topics, which is great, but I can also have a conversation with it. I'm just using Darren as an example because he was a guest we had on the show this week: I can say, oh, what did Darren say about how his son was using ChatGPT? Because I wanna write about it in the newsletter.

Jordan Wilson [00:29:52]:
I know he referenced it, but there was a detail I forgot. So versus reading through the transcript, or hitting Command-F and going, what was he talking about, I can just ask the transcript a question in Cast Magic. I'll often do the same thing with Voila. Voila is another one; it's a Chrome extension that I use and like. I technically have a paid version of it, but I think there's a free version as well. It does something similar, so I can do the same thing. I can go on our website pages.

Jordan Wilson [00:30:21]:
So as an example, I spoke at a DePaul event here in Chicago. DePaul is one of the largest private universities in the country, and I was on an AI panel this week. So, hey, shout out if you're listening. Shout out, Jackie, for inviting me, one of our loyal livestream listeners. But I went through with Voila, looked at some of my old episodes, and just said, hey, recap the 5 main points we talked about here. Or I might remember, oh, a guest said this, or I was talking about a certain news item that day that I wanted to bring up at this event where I was part of the guest panel.

Jordan Wilson [00:31:02]:
So I like Voila. It can use the context of a web page, but you can also save different prompts, etcetera. That was a long one. I've gotta start going through these a little faster; otherwise, this is gonna turn into a 3-hour show. Alright, HeyTheme, thank you for the question.

Jordan Wilson [00:31:17]:
What are the top startup ideas to integrate AI with education, and do you suggest any? Okay, it's a great question. I would say this: if you're just looking for a startup idea in AI and education, go niche. Right? There's riches in the niches. So personalized tutors, I think, are gonna be great. There are already great versions, great GPTs that do this, Khan Academy, etcetera.

Jordan Wilson [00:31:43]:
So I would say do one that is very specific. As an example, create a character version of a tutor that is specific to studying for the ACT, or one specific to helping college students study for the LSAT. I would do something like that and make it very narrow, because there are already great wide applications. And like we talked about on the show, sometimes smaller, more refined models work better than large models. So maybe that's what I would do there. Woosie. What's up, Woosie? So Woosie says, the last stock in the tech space that everyone loved to tell me they were early on was NVIDIA. What's the next one no one's talking about yet, in your opinion, Jordan? Oh, gosh.

Jordan Wilson [00:32:29]:
Alright. Well, I'll say this: this is not financial advice. I'm not a financial adviser. Yes, I was obviously pretty early on NVIDIA. Well, I mean, there were obviously people earlier, but 9 months ago, I told everyone. I said NVIDIA is the most important company in AI, and they're going to blow up.

Jordan Wilson [00:32:47]:
And they did. In that time, they've had unprecedented, like, literally unprecedented growth that has never been seen before. Meta. Right? I'll say Meta. Alright, yeah, Meta's been around for a while, and Meta's stock is strong. Just yesterday, and hey, I should give full disclosure here.

Jordan Wilson [00:33:06]:
Right? After I saw what Meta did yesterday, I'm like, okay, yep, gonna put a little more Meta stock in my portfolio there. I'm not doing that a lot, but sometimes, if I see a company make a strong move or come out with a product, I might go adjust things. Right? Even my portfolio is obviously a little heavy on the tech and AI side. I think there's a fairly popular one, what is it, Invesco QQQ, which I think is more of a tech-heavy, AI-heavy ETF.

Jordan Wilson [00:33:48]:
So again, y'all, I'm probably the worst person to ask, hey, what stocks in the AI space are worth looking at? To tell you the truth, I don't follow these smaller, up-and-coming public companies. Don't follow them. For the most part, I'm looking at the big boys, the Fortune 500. So, yeah, I don't have any great takes there. Sorry, Woosie. Douglas, let's see.

Jordan Wilson [00:34:14]:
Are you thinking about a PPP course, that is, for Copilot? Oh, yes, Douglas, I am. You know what? I've gotta reach out to my friends at Microsoft. Yeah, I need more computers. I need more compute. I need to get a Windows machine. I haven't done it yet.

Jordan Wilson [00:34:27]:
There are a couple I want, and I'm like, that one's a little expensive. But so many people are asking about Copilot. At minimum, I think we're gonna get our team on Copilot Pro, just because I've personally been underwhelmed with Gemini. Although Google did just have their big Cloud Next announcement, and they're saying, hey, all these things are gonna get a public rollout. So I might give Gemini one last try across Google Workspace, across the quote-unquote old-school G Suite of products. But, yeah, we might become a Microsoft Copilot Pro team. So once we do, Douglas, I'm sure we're gonna have some Copilot training.

Jordan Wilson [00:35:12]:
And you know what? I know I've been talking about this for a while, but that is one thing we also have planned for our free community once we launch it. I know I've been talking about this since January, y'all. It's just that we've been getting these crazy opportunities that I'm super thankful for. Like, when NVIDIA says, hey, come out and partner with us for our conference, you've gotta sometimes prioritize those things. But, yeah, I think we'll be doing a lot more in-depth trainings, recorded trainings, in our community, which is going to be free to join. So I know most of you have already hit me up, but you can just hit me up and say "inner circle."

Jordan Wilson [00:35:50]:
We'll give you early access, which is, I know, like 3 months late now. Florent, what is the most effective approach to utilizing AI to counter disinformation? Florent, that's great. So we've had some great guests on the show. And actually, what a lot of companies in cybersecurity are doing is using AI to detect AI deepfakes, to detect AI threats in cybersecurity. I'm not necessarily an expert, and it is a little bit different, because misinformation or disinformation is a little harder to detect than deepfakes. So you know what, Florent, I'm gonna add that to my list of experts to find. Or, if anyone listening is an expert in that space, let me know. Maybe we'll get you on the show.

Jordan Wilson [00:36:33]:
So, yes, disinformation and misinformation are a little harder to detect than traditional deepfakes or AI-generated media, because a lot of misinformation and disinformation is text-based. Right? So there are different methods, and there are different watermarking systems that a lot of these companies are trying to implement, a little more on the media side, on the video and photo side. A lot of the big companies are trying to embed invisible watermarks, whereas on the text side, not as much. So I think it's gonna be a little more difficult to detect misinformation and disinformation that is text-based versus media-based, photo or video. But again, hey, if you're an expert on that, hit me up. We'd love to have you on the show. Chrissy: do you get AI anxiety? It changes so rapidly. How do you balance the urgency with your work-life balance? If I'm being honest, Chrissy, I don't have great work-life balance.

Jordan Wilson [00:37:29]:
Love my wife for that. She pushes me to always invest in Everyday AI, because we believe what we're doing is needed. I need more work-life balance, 100%. So I'd love to say I have a great answer for you on how I balance it; I don't necessarily. Do I get AI anxiety? Yeah, for sure. You know, I was on a pseudo-vacation for a couple of days this past week.

Jordan Wilson [00:38:01]:
And even though I was still kind of working every day, reading and writing the newsletter late at night, yeah, I felt behind. I'm like, man, even just working half days for a couple of days, am I losing my skill set? I was working 2 or 3 hours every day, and I still felt like I was gonna fall behind. So, yes, I do feel that. I think it's normal. I think as humans, it's something we're hopefully going to, quote, unquote, get over. I think what it is right now is that you see thousands of new products popping up every day. I don't see that continuing, because I think eventually venture capital firms and private equity firms are gonna stop investing in so many of these startups that I think don't have a moat, and they're just gonna get squashed by OpenAI, by Google Gemini, by Microsoft Copilot, by Anthropic's Claude. So I don't see this, like, oh my gosh.

Jordan Wilson [00:38:59]:
There are these 50 new AI writing tools, these 20 new AI image tools. I actually see that slowing down, because I see the VC and private equity money hopefully drying up. If I'm being honest, a lot of these companies, hey, you can judge me for it. Venture capital, private equity: a lot of you have no clue what you're doing. Sorry. A lot of you do. Right? Yes.

Jordan Wilson [00:39:24]:
Yes, the biggest ones. But still, to this day, I'm seeing these companies announce, because I follow these companies on LinkedIn and look at their websites, and it's like, hey, this company just raised a $2,300,000 seed round. And I'm looking at the companies that invested in them, and I'm like, that was a bad investment. Obviously, they know the company's road map, but I'm also saying, hey, I know this next version of GPT-4.5 or GPT-5 is going to kill thousands of these startups. So I don't understand. Sorry.

Jordan Wilson [00:39:58]:
I know this isn't Hot Take Tuesday. VCs, private equity: reach out to me. You should be talking to me, because you're making terrible investments. You're gonna be losing so much money. I'm sure you don't care, because you have $100,000,000,000 in your portfolios and you're just taking flyers on random companies thinking they'll get acquired. You know, that's the thing: the acquisition space is not what it used to be. With Meta and social media, these little startups would raise a couple million dollars, get a bunch of users, and Instagram or Facebook or Twitter would acquire them by the dozen.

Jordan Wilson [00:40:35]:
It's not happening in the generative AI space. That's not how it necessarily works. Yes, there are still going to be some acquisitions by these big tech companies, but for the most part, they don't need to, because with the next versions of their models, especially as they become more multimodal in input and output, and more Internet-connected, they don't need your startup. It's not gonna be worth anything. So hey, random Hot Take Tuesday there. But VC companies, private equity: you're losing a lot of your money, because you don't know what you're doing.

Jordan Wilson [00:41:07]:
Alright. I'm gonna try to get to 1 or 2 more questions. Some of these are long. Sorry. I'm not sure if they're questions or just comments, so I'm gonna go through at the end and see if I missed anything. I think we have 2 or 3 more here. So let's go ahead and tackle these, and then we'll wrap up the 1st edition of Ask Me Anything.

Jordan Wilson [00:41:30]:
So Don, wait, Don on Twitter. Wow, cool, someone's on our Twitter, or X, or whatever it's called. What's up, Don? You're our first. So Don says, what do the AI large language models do with all the data they collect? Yeah, well, it goes into their training set.

Jordan Wilson [00:41:48]:
Right? So as an example, OpenAI just updated its knowledge cutoff date to December 2023. When I tested Meta last night, it looks like their knowledge cutoff is December 2023, which, I love it. Now we have 2 different models that only have, you know, a 4-month gap in the knowledge cutoff. But essentially, Don, and this is a whole other conversation for another day, because I could talk for hours about this: more and more companies are gonna block all of these. Google has a separate scraper for SEO versus for Gemini. You have the OpenAI bot that scrapes websites for data. More and more publishing companies have already blocked these, so they're finding it harder to scrape data. But, essentially, it goes into their models.
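The blocking described here is typically done in a site's robots.txt file. The user-agent tokens below are the documented ones for OpenAI's crawler (GPTBot) and Google's AI-training opt-out (Google-Extended); a real publisher would tailor the rules to their own site.

```
# robots.txt at the site root: opt out of AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```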

Jordan Wilson [00:42:36]:
Right? Yes, copyrighted data. Hey, whether we wanna talk about it or not, it's the elephant in the room: these large language model companies are gonna be facing a lot of lawsuits, especially after we see whatever settlement, which I think is gonna be a settlement, comes out of OpenAI versus The New York Times. It's gonna be a cascading effect of dozens or hundreds of lawsuits against these companies that are scraping the open web. And I know companies like OpenAI are even making arguments, right, like, hey, let's try to argue against copyright law. I know we've talked about it in the newsletter before.

Jordan Wilson [00:43:15]:
You know, they put out a letter to the European Union, not quite saying, hey, yeah, we're using copyrighted data, but kinda challenging this copyrighted data law, challenging its application and its meaning. So I think that's gonna be the norm as well. But, essentially, all these companies are scraping everything on the open Internet, the closed Internet, works of art, copyrighted material. It goes into their data, into the training set.

Jordan Wilson [00:43:40]:
Humans are training the model on it. They're using a lot of different training techniques. But, essentially, what humans do is look at all this data, do question-and-answer pairings, and kind of train the model. And I guess what the companies are saying is, hey, we don't use any one piece of copyrighted material. It's a collection. Right? So it's like, oh, okay.
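To make the "question and answer pairings" concrete, here's a minimal sketch of what an instruction-tuning dataset can look like. The field names and the JSONL layout are illustrative assumptions; each lab uses its own format.

```python
import json

# Humans (or other models) turn raw text into prompt/response pairs;
# the model is then fine-tuned to imitate the responses.
pairs = [
    {"prompt": "What is the capital of France?", "response": "Paris."},
    {"prompt": "Summarize: the cat sat on the mat.", "response": "A cat sat on a mat."},
]

# Training data is commonly stored one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl.splitlines()[0])
```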

Jordan Wilson [00:44:02]:
As an example, if The New York Times releases something, it's obviously copyrighted. But then the whole world is talking about it, on podcasts, in blogs. So then they're saying, well, hey, this is public discourse. Yes, it was maybe originally copyrighted material, but now it's public discourse. So these models, which are essentially very advanced next-token prediction engines, are taking in all the public discourse, because once something is out there on the Internet, everyone else is talking about it.
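"Next-token prediction" can be demonstrated with a toy bigram model: count which word follows which, then predict the most frequent follower. This is a deliberately tiny stand-in, not how a real LLM works internally; real models do the same job with neural networks over subword tokens, at vastly larger scale.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count, for each word, which words follow it and how often.
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequent follower, or None for unseen words.
    options = counts.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the model predicts the next token and the next token wins")
print(predict_next(model, "next"))  # "token" follows "next" twice
```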

Jordan Wilson [00:44:32]:
Right? Social media, blogs. Are blogs still a thing? Did I just date myself? Other news websites, podcasts, YouTube, etcetera. Everyone's talking about these things, and that just becomes part of the public discourse, but all of that data becomes part of the model. So hopefully that's helpful. Douglas: given the acceleration of AI, any considerations for specialized livestreams in addition to the daily general one? For example, Everyday AI Health, AI Science, AI Wearables, etcetera. Douglas, great, I love that. I have all these ideas.

Jordan Wilson [00:45:10]:
Right? I use ClickUp, and I have a task in ClickUp just for great ideas. It's like, oh, I'd love to do this and this and this. So I've had thoughts of doing essentially a weekly series with other guests who come on once a month, with, like, 4 rotating categories. As an example, maybe creativity, health, finance, and general business, whatever. And then having rotating guests come on the show and do one deep dive, kind of a panel with the same group, once a week. Right now it's a capacity thing for myself.

Jordan Wilson [00:45:47]:
Right? If I'm being honest, it's because I have so many great ideas, and I love these great ideas, keep them coming. At the same time, we're also getting a lot of great opportunities, like training large companies on prompt engineering, which is something we're excited about. Great opportunities to speak at conferences, to speak on panels. So, yeah, it's balancing new initiatives with other opportunities that, quote, unquote, pay the bills. So, Douglas, I love the idea. It's been an idea for many months now, but I love your suggestion here. Even the wearables, I never thought about that one. That could be a cool, specialized thing there. Alright.

Jordan Wilson [00:46:27]:
Ty from YouTube. What's up, Ty? What are your suggestions about sharing AI chats and tools with people who don't necessarily share an account? I'm thinking about coworkers with separate accounts, or teachers and students. It's great. It's a great way to learn prompting, a great way to learn how to interact with the model. So if you have one person on your team who is great at prompt engineering, who's great at getting the most out of models, they can go through that whole chat.

Jordan Wilson [00:46:53]:
They can share that chat with others. But the others have to have an account. So as an example, if I have a great chat in GPT-4 and I'm calling on all these other GPTs, I can't just share that with someone who doesn't have an account. They have to have an account. And if they wanna use it the same way I'm using it, not only do they have to have a ChatGPT Plus account, because otherwise they won't be able to access it, but they'd also have to first install and use those GPTs if they want to keep using them in the chat context I share with them. So I think that's a great way to collaborate cross-team. If I'm being honest, the ability to share chats is one of the most underutilized features of most large language models.

Jordan Wilson [00:47:36]:
You know, they're not quite collaborative yet. It's essentially the equivalent of having a file on your desktop, like a Word doc, making a copy of it, and sending it to someone else. They only get what you did up until that point. It's not like you can both collaborate on a live chat; that's not how it works. I would have thought Google would have been the leader in that. Hasn't happened yet, but I do see that's where it's going. There are other third-party systems that allow that kind of real-time collaboration, working with APIs.

Jordan Wilson [00:48:10]:
I don't necessarily think those third-party tools are very good, mainly because you can't always use all the features. You're only getting the base model that's available via the API in those third-party tools that allow real-time collaboration between teams working in unison on the same chat. But you don't get to, as an example, connect with your Google Drive if you're talking Google Gemini, and you don't get to mention GPTs in ChatGPT. So when you're doing that in these third-party tools, you're using a much more limited version, because all you're getting is essentially the model, nothing else. And I think the true capabilities of large language models, especially in real business applications, come from these quote-unquote third-party features, integrations, whatever you wanna call them. You have a couple plugins from Microsoft Copilot, etcetera. That's where I think you get the real power and business utility. Whoo.
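The point about third-party tools only getting the "base model" follows from the shape of the API itself: a chat request carries a model name and a message list, and nothing about GPT mentions or Drive integrations. The sketch below builds such a payload in the style of OpenAI's chat-completions API; the exact fields vary by vendor, and "gpt-4" here is just a placeholder model name.

```python
# A chat-style API request is just a model name plus a message list.
# Product features (GPT mentions, Drive access) live in the vendor's
# own app, not in this payload, which is all a third-party tool gets.
def build_chat_request(model, user_text, system_text="You are a helpful assistant."):
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
        ],
    }

req = build_chat_request("gpt-4", "Summarize today's episode in three bullets.")
print(req["model"])
```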

Jordan Wilson [00:49:07]:
That was a lot. Alright, y'all, was this good? I probably missed a couple comments. I'll go through the livestream here and try to get to any questions. But let me know: yes or no, did you like this show? Was it terrible? Should we never do it again? Or maybe should it be a thing we do every once in a while? Right? Maybe every once in a while, if we don't have a show planned for Friday, we'll throw out a little Freestyle Friday, or whatever this thing's called.

Jordan Wilson [00:49:38]:
Hey, you guys voted for this. Right? So if it stinks, I can't take all the blame. You all wanted a little, like, ask-me-anything, informal Q&A. Hey, podcast audience, I know this one was all over the place, but hopefully you learned from some of the great questions from our livestream audience. So, hey, as a reminder, if you haven't already, please go to youreverydayai.com. I'm gonna pick either my favorite questions that we talked about here on the livestream, or maybe I'll actually pick 1 or 2 that I didn't get to and answer those in the newsletter.

Jordan Wilson [00:50:16]:
So it should be a fun, kind of random newsletter today, but there was also a lot of news that we're gonna dive into a bit deeper. So thank you for tuning in to our first Ask Me Anything Freestyle Friday show. Hope this was fun. Thanks for tuning in. We hope to see you back next time and every day for more Everyday AI. Thanks, y'all.
