Ep 272: How OpenAI’s Shot at Google (and the GPT-4o Model) Will Change How We All Work

OpenAI's GPT-4 Omni Model and Its Implications for Businesses

OpenAI's reported collaboration with industry giants like Microsoft and Apple marks a significant stride in the formidable world of artificial intelligence. With the unveiling of the robust GPT-4 Omni model, businesses may be poised to experience improved AI capabilities with minimal latency, potentially revolutionizing their operational landscape.

GenAI Impacting the Future of Work

Generative AI tools are no longer relegated to the background – they are taking center stage and significantly shaping the work environment. Their potential impact on productivity and efficiency is profound, and businesses not integrating these tools into their work processes may risk being left behind.


GPT-4o: Real-Time AI Assistance

Imagine a desktop app that helps with coding in real time, or a portable device, like an iPad, where the AI assistant solves a complex mathematical problem right before your eyes. This is a step toward artificial general intelligence (AGI). It holds the potential to transform the educational landscape and the future of work in diverse ways, catering to an array of tasks.


Artificial General Intelligence (AGI) and Google

Advancements in the realm of AGI carry the potential to reshape technological giants like Google. With the combination of a desktop app, iPhone and iPad apps, and an AI assistant, the course of daily work will certainly witness tectonic shifts.


GPT-4o Model

Designed to change how humans interact with technology, the new GPT-4o model by OpenAI has garnered significant attention. With companies like Klarna stating that almost 90% of their employees are now leveraging generative AI tools to enhance productivity, it is clear that AI will transform the business bottom line.


The Future of AI with GPT-4o

Meanwhile, OpenAI is excitedly progressing with its new vision: GPT-4o, the Omni Model. This new version combines transcription, intelligence, and text to speech into a single model, offering significantly reduced latency, real-time voice communication, and improved emotion in the AI voice. The Omni Model will surely redefine AI interactions.

The strategic push OpenAI has given its technology, and its not-so-subtle stance against Google's Gemini, might disrupt the AI landscape significantly. The AI revolution is knocking on our doors, ready to change the way businesses operate. The question every business decision-maker must ask now is, "Are we prepared for the new AI onslaught?"


Topics Covered in This Episode

1. Future impacts of GPT-4o
2. OpenAI, Microsoft, and Apple collaboration
3. Live demonstrations of GPT-4o
4. OpenAI's strategic releases
5. New features and plans for GPT-4o


Podcast Transcript

Jordan Wilson [00:00:17]:
The future of work was debuted yesterday. I'm not just talking about the new GPT-4o model from OpenAI. I'm talking about how this model can be used, and the reported kind of partnership or coming together of 3 of the biggest names in tech, and how this kind of unlikely but connected partnership of these 3 big companies is likely going to change the way that we interact with technology in the future. Alright. So we're gonna be talking about that today and more on everyday AI. What's going on y'all? My name is Jordan Wilson. I'm the host, and Everyday AI is for you. This is a daily livestream podcast and free daily newsletter that serves as your guide to how you can leverage generative AI to grow your companies and to grow your careers.

Jordan Wilson [00:01:11]:
I hope that's you, you know, so thanks for showing up. If you're listening on the podcast, make sure to check out your show notes as always and go to youreverydayai.com. Not just for a recap of today's show later in the newsletter, but you can visit an entire library of hundreds of hours of great, free generative AI content from experts in the field on our website. So make sure you go check that out. Alright. So before we get into how I really think this new announcement from OpenAI yesterday and their GPT-4o model is going to change the way we work, let's first start as we do every day by looking at the AI news. Alright.

Jordan Wilson [00:01:52]:
So the US and China will be meeting on AI in Geneva. High-level envoys from the US and China will meet in Geneva to discuss the risks and shared standards of artificial intelligence in the first meeting under an intergovernmental dialogue on AI. Both countries view AI as crucial for national security and economic growth. The US plans to focus on developing safe, secure, and trustworthy AI through voluntary commitments with leading companies and safety tests of AI products. China and the US will reportedly take up issues today including technological risks and global governance of AI during the meeting. With the talks, both sides are looking to build trust and understanding between the two countries on AI issues, but immediate binding agreements are not expected. Alright. Our next piece of AI news for the day: Klarna is all in on AI.

Jordan Wilson [00:02:47]:
So the Swedish fintech company has reported that almost 90% of its employees are now using generative AI tools in their daily work, with a high adoption rate even among nontechnical groups. The use of AI has been touted as a major boon to the company's bottom line, with its 1st quarterly profit in 4 years attributed to investments in AI. So a couple kind of key takeaways here. Klarna reported high adoption of generative AI tools among employees, with over 87% using tools such as OpenAI's ChatGPT and its own internal AI assistant. AI has been especially beneficial for the company's communication and legal teams, with tasks such as evaluating press articles and drafting contracts made much more efficient. Klarna has seen success in implementing AI in its business, with its 1st quarterly profit in 4 years and $40,000,000 in reported savings from its AI chatbot. Yeah. So we talk about that.

Jordan Wilson [00:03:43]:
People are always like, oh, how would I use this? Well, there you go. Klarna reported that their usage has saved them $40,000,000 in employees' time. Last but not least, hot on the heels of OpenAI's announcement, today we are expecting Google's announcements at their Google I/O developer conference as it kicks off in hours, and the focus will obviously be on AI. So Google's annual developer conference, Google I/O, will take place today and tomorrow in Mountain View, California. The event is expected to focus on artificial intelligence, with potential announcements regarding AI, apps and services, and possibly even debuting some new hardware. Google is expected to make major announcements regarding AI, potentially updating its Gemini model, and some other AI updates. Also, Google already teased a live multimodal AI video assistant similar to what OpenAI announced yesterday. We'll be talking about that more today.

Jordan Wilson [00:04:43]:
Alright. So, let's get into it y'all. Thank you for joining us live. And we're doing something special today. Well, something different anyways for our livestream audience. So, just go ahead. We're gonna do a giveaway at the end of the show. I'll tell you what it is then.

Jordan Wilson [00:04:57]:
So just go ahead and use the hashtag hot take Tuesday in your comments, and then we're going to be drawing someone live from those comments. So we're trying something new today, so go ahead and give that a try. And, you know, hey. If you're on the podcast, make sure to come join the livestream every once in a while. It's a lot of fun. I also wanna know from our audience, what do you think of OpenAI's new GPT-4o model? So, a, do you not know anything about it? B, are you not super impressed? C, are you pretty impressed? Or d, are you blown away? Alright. So, let's just start. I'm gonna give you our hot takes.

Jordan Wilson [00:05:34]:
It is Hot Take Tuesday after all. So here are my thoughts, and then we're gonna be diving into this in-depth here a little bit. So I'll say this is our first mainstream taste of AI agents. Yeah. This is it. I'm gonna describe what that means to you here in a second, but, you know, we've been talking about AI agents here on the show for probably a year plus. Right? Going back to the very beginning. I think this is our first taste of actually working with an AI agent and not just typing something to a large language model and hoping that you get a good response. I think this is also a big step toward AGI, artificial general intelligence.

Jordan Wilson [00:06:12]:
I don't think people want to admit this, but when you kind of see at least what was previewed yesterday, not all of this is out yet, I think you'll understand what I mean on that. Also, I think this is actually going to create huge gaps, whether it's between companies or departments who are using AI versus not using AI. You know, as an example, we talked about Klarna getting 90% of their employees using generative AI. I actually think this newest update is going to widen the divide between people who are using it and those who are not, and companies who are using it and companies who are not. So both in efficiency and productivity, you know, that's the downside that people don't talk about with generative AI and all of its great potential and upside in helping us be more efficient, more productive, etcetera: those not using it. Right? There's still gonna be companies and people and individuals and departments who are not using generative AI. And I think that this latest update is going to make that efficiency gap, that skill gap, much wider, which is actually extremely problematic for people. And I don't think enough people are realizing this or talking about it.

Jordan Wilson [00:07:26]:
It is a literacy issue. It's an educational issue, but, you know, as we take a look at this GPT-4o announcement from OpenAI, I think that will make sense. And then last but not least, Google is in big trouble. They are in huge trouble. So we're gonna talk a little bit. So, you know, obviously, it was reported over the last day or 2 that OpenAI and Apple are going to be working together and that Apple will reportedly be using OpenAI's models in their next iOS 18, which will bring AI for the first time to all of their billions of devices around the globe, which is huge. Obviously, we already know about the Microsoft investment.

Jordan Wilson [00:08:19]:
So combine that with the new model that we saw from OpenAI, the accessibility, them making major aspects of it free. Google's in trouble, y'all. Like, I know they have some announcements here in a couple of hours, and they're essentially gonna be debuting a lot of the things that OpenAI did yesterday, but I think they're in trouble. That's me. Alright. So let's first do a very quick recap. And, hey, thanks for everyone joining and trying to get into this Hot Take Tuesday. Hey.

Jordan Wilson [00:08:49]:
Make sure you get it all in one, kind of, sentence here, Rolando. I wanna make sure everyone who's putting it in gets entered. It's just software. So if you put a space, it's not gonna grab it. Alright. So let's go ahead. Let's go over a quick recap of what actually happened yesterday. We did an entire episode on this.

Jordan Wilson [00:09:08]:
We did actually 2 episodes yesterday, so I'm not gonna take a super long time. I just wanna do a very high-level and quick recap for those of you that maybe missed it. Alright. So, the new version of GPT-4 is called GPT-4o, which stands for the Omni Model. Alright. GPT-4o is already available right now to paid users, and it will be going out to free users as well. I checked last night.

Jordan Wilson [00:09:35]:
It wasn't available to free users yet. Could be as of this morning. So if you do have access to it on a free account, let me know. But the most powerful model as of now will be available to free users and paid users as well. Right now, paid users will have 5 times the capacity limit of free users. So it will kind of be throttled and capped for free users, and paid users won't have that same limitation. Number 4, even free users will soon be able to access the GPT Store. That is huge.

Jordan Wilson [00:10:04]:
Alright. And I'll probably have more on that in another episode. But, yes, even free users of ChatGPT will be able to use GPTs. So these custom GPTs that anyone will build, free users cannot build them, but they can use them, which I think really changes how ChatGPT will ultimately be used in a team environment. Another one: GPT-4o combines transcription, intelligence, and text to speech all in one model. Whereas before, you kind of had, behind the scenes, different modes working with each other, which created some latency. And hey, if you wanna talk about a helpful AI agent, you can't really have a lot of latency on multiple ends. So this new GPT-4o model changes that. So, kind of point 6 here.

Jordan Wilson [00:10:49]:
We're gonna see a new desktop assistant that can hear and see what you're working on, and I'm gonna show you guys a demo of that here in a second. 7, GPT-4o is rolling out to the API at a reduced cost. It is 50% the cost of the previous version, GPT-4 Turbo's API. So it is twice as fast and half the cost. Alright? So that's gonna change a lot for the tens of thousands of other products and services that use OpenAI's API. Number 8, OpenAI demoed a live view mode, presumably being able to use vision in real time, which is huge. And a lot of people are confused on, like, the free versus the paid: if GPT-4o is going out to all free users, why would I remain a paid user? Well, because these features that haven't been rolled out yet, specifically kind of this live view mode, will not, at least right now, be available to free users.
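To put that "half the cost" point in concrete numbers, here's a small sketch comparing per-request API cost. The per-1M-token prices below are the launch-era list prices as best I recall them; treat them as assumptions and check OpenAI's current pricing page before relying on them.

```python
# Assumed launch-era list prices per 1,000,000 tokens (verify against
# OpenAI's pricing page; these are illustrative, not authoritative).
PRICES = {
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},
    "gpt-4o":      {"input": 5.00,  "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call at the list prices above."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example call: 2,000 prompt tokens in, 500 completion tokens out.
turbo = request_cost("gpt-4-turbo", 2_000, 500)
omni = request_cost("gpt-4o", 2_000, 500)
print(f"gpt-4-turbo: ${turbo:.4f}, gpt-4o: ${omni:.4f}")
```

At these assumed prices, the GPT-4o call comes out to exactly half the GPT-4 Turbo call, which is the comparison the episode is making.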

Jordan Wilson [00:11:46]:
So there are features that have not been rolled out, and when they roll out in the coming weeks, they're only available to paid users. So a lot of people kind of missed that, and they're like, oh, what's the point? Well, that's the point. Alright. Number 9, things you need to know here: there's reduced latency with a real-time feel in voice-to-voice communication. Number 10, it is a much more human feel. I mean, the emotion and kind of the range in the voice of the AI agent is pretty outstanding. Is it scary? Yes. It is actually scary how emotive this new GPT-4o voice assistant is, and how responsive it is as well.

Jordan Wilson [00:12:29]:
Number 11, like I said, a lot of these newer features, aside from the actual base model, are gonna be rolling out to paid users in the coming weeks. Right? So we're gonna show some demos of this here in a second from OpenAI. And last but not least, hey, I already talked about this, but Google, I think, is in big trouble. Alright. So let's talk about that first. Alright. Let's talk about why Google is in trouble.

Jordan Wilson [00:12:58]:
And, hey, Woozy with the comment here is picking up on the point I'm about to make. Yeah. They were using Macs and Apple everywhere, which is interesting. Right? Because reportedly, right? Well, not reportedly: Microsoft has a huge equity stake in OpenAI. Reportedly, they've invested between $10,000,000,000 and $13,000,000,000 and have a 49% ownership or equity stake in OpenAI. So you would think, oh, okay. So, hey, in any demo, presumably OpenAI is gonna be using Windows machines.

Jordan Wilson [00:13:34]:
Right? They're gonna be using Microsoft products. No. They were using all Mac. Again, I think this is because we had our kind of first official-ish reports yesterday that Apple will be moving forward and announcing, at some point between now and its Worldwide Developers Conference, WWDC, in June, in less than a month, that they will be using OpenAI's GPT model in their devices, in their iOS, bringing kind of edge AI to Apple, which is a huge announcement. Right? So, yeah, an interesting takeaway there, but that's another huge shot at Google. But let's actually kind of look at the shots. Right? Because this was, I'd say, very intentional.

Jordan Wilson [00:14:18]:
Very intentional. And I don't think everyone picked up on it. So the thing that people did pick up on, obviously, was the timing. Right? So Google announced their I/O conference months ago. Right? I think at least 3 or 4 months ago, this date was set in stone. OpenAI, about 4 days ago, said, oh, we're gonna have an announcement, and it's gonna be Monday, literally 24 hours on the dot before Google's I/O announcements. So, presumably, right? And we talk about this here on the show all the time: these companies are poaching each other's top talent.

Jordan Wilson [00:14:57]:
So each company knows what the other is doing. They know what they're working on. Presumably, they know what they're going to announce. So I think this was a huge power move here by OpenAI, essentially coming in at the last minute and saying... like, I was thinking this. I don't know. Maybe I'm a dork. You know? I like nineties rap. I like early 2000s rap.

Jordan Wilson [00:15:18]:
But does anyone remember 8 Mile, the movie, you know, with Eminem? I kind of thought OpenAI did like a reverse, an inverse, of the Eminem 8 Mile scene, where essentially he's in this rap battle. And at the very end, he's battling someone, and he essentially says, hey, here's all the facts. Here's everything that you're gonna say about me. So good luck coming back at that. Right? I think actually OpenAI did the inverse, the reverse, of that, and they said, we already know your quote, unquote showstopper that you're going to announce. So we're gonna announce it first, and then we're gonna take some very subtle, not-so-subtle shots at you, because you guys really screwed up with your Gemini model and your original Gemini announcement.

Jordan Wilson [00:16:05]:
So that's the way that I took it, but I wanna show you a little bit of what that means. So number 1, it was the timing. My gosh. I mean, that was a direct, straight, you know, Will Smith slap in the face from OpenAI straight across the face of Google. Number 2, the 1x speed call out. Yeah. I'm coming in hot today, Chrissy. I must have slept enough and had the right amount of caffeine.

Jordan Wilson [00:16:33]:
So let's talk about this 1x call out. So, on the OpenAI blog post, and, again, make sure to go subscribe to our newsletter at youreverydayai.com, we're gonna be breaking this down in a lot more detail. But you gotta love to see this, because it said all videos on this page are at 1x real time. Which, if you don't really know anything, or maybe if you don't follow this space closely, you're like, okay, that's good to know. You know, that just tells us that any videos they put out there are real videos, and they're not sped up. They're not slowed down. You know, they're not edited and pieced together.

Jordan Wilson [00:17:14]:
Seems like a pretty common thing to say. Right? Well, not when you talk about what Google did. Alright. So, more on that right now. So let's talk about the demos chosen and even how that correlates to that. Hey, all these videos are at 1x speed. Right? It's definitely worth checking out on OpenAI's YouTube channel, and we're gonna be linking some of our favorites in today's newsletter.

Jordan Wilson [00:17:41]:
But one of the videos, and I didn't see anyone else talk about this, so one of the things they did is a rock, paper, scissors demo with GPT-4o. Right? So presumably, this is the feature that no one has access to yet, or, you know, not even paid users have access to. And this is one of those features that will kind of separate the paid tier from the free tier. Right? Because everyone gets access to the GPT-4o model, but not everyone is going to get access to essentially this live assistant, or what I am going to call a live agent. I really do think it's a live AI agent, but here it is. And I love how OpenAI did this. They showed a video of the program running on an iPhone.

Jordan Wilson [00:18:29]:
Right? So it's a front-facing camera that is showing 2 people playing. And I know I say this wrong: paper, rock, scissors. I know it's rock, paper, scissors. I always say paper, rock, scissors. Right? But you have this live demo of them going, you know, paper, rock, scissors in real time, and then ChatGPT, the voice assistant, says, oh, it looks like this person won. Right? And you might say, alright, well, what's the big deal? Well, because that is literally the exact demo that Google originally did with its initial Gemini release in the 1st week of December 2023, so about 6 months ago. So you might be like, alright.

Jordan Wilson [00:19:14]:
So why does this matter? Well, here's the thing. Google straight up lied. Right? They straight up lied, which is why I think a lot of people didn't really investigate Google's model at the level that they maybe could have. You know, Google's Gemini is a very powerful model, but I think the distrust was at an all-time high. Because, as an example, you know, people saw this Google marketing video, which, I will say, was not a demo video. Right? What we got from OpenAI presumably is live, 1x, real demo videos. What we initially got from Google Gemini in December was a marketing video. Right? So they have this same sequence of one person kind of doing this paper, rock, scissors.

Jordan Wilson [00:20:06]:
Right? And then the model saying, oh, it looks like you're playing a game. It looks like you're playing paper, rock, scissors. So here's the difference. It wasn't live. Right? What Google did is they put out this marketing video that made it seem that Google Gemini, you can interact with in real time. It can see a video live. It can process that video live and respond in voice live because that is what the marketing video showed. Here's the thing.

Jordan Wilson [00:20:36]:
It was all a farce. It was all a farce. It was deceptive, to say the least, because they actually shared in their research paper, here's actually how we did it. We actually took a bunch of screenshots, very strategic screenshots, and then we did a bunch of back-and-forth prompting, multiple-shot prompting, in order to get Google Gemini to say this. And then we just kind of, you know, put it in text to speech and made it look like it did this all on its own with its own reasoning. Nope. So, you know, OpenAI, jeez. I mean, my gosh.

Jordan Wilson [00:21:12]:
Just a straight flamethrower, with these subtle, or not-so-subtle, shots fired at Google, saying, yeah, these are 1x speed. And, hey, Google, ahead of your big announcement, here's the demo you were actually trying to do, and we got it figured out before you did. Right? And we're showing this live and in real time. Alright. And, hey, full disclosure. I think a lot of people aren't paying attention, or as much attention, to Google because of this.

Jordan Wilson [00:21:45]:
Because Google, I think, created this huge level of mistrust and distrust with their original Gemini rollout. You know, every single big publication essentially covered it verbatim, fell for it, got duped, and then had to run stories 2 to 3 days later. I'm saying the biggest news publications in the world either retracted their original reporting or had to run a story that said, oh, well, hey, Google essentially deceived us all, and this wasn't actually real. It was actually very manufactured. Right? So I think, you know, OpenAI saw that opportunity to really deliver the finishing blow. You know, Mortal Kombat, woozy opponent: finish him. That wasn't an actual wordplay there with Woozy Rodgers, but we'll say it was. But they knocked them out.

Jordan Wilson [00:22:35]:
I'd say when I saw this, I was like, jeez, they just went for Google's throat there. Alright. So, hey, as a reminder, if you joined late, make sure to comment hot take Tuesday. We're gonna be giving that away here at the end. Alright. So let's talk about what this actually means, the big picture.

Jordan Wilson [00:22:54]:
Right? Because we already talked about this partnership, right, between Microsoft and OpenAI. Microsoft reportedly has invested $10,000,000,000 to $13,000,000,000 for a 49% equity stake. And now you have this now-reported marriage between Apple and OpenAI. Again, that's not official, but the most recent round of reporting from Bloomberg makes it seem like, yep, this is a done deal. Apple is going to be using OpenAI for their next iPhone, which is a huge deal. So you now have, if you're an NBA fan, you know this term, a super team here. Now, so technically, they're not working together.

Jordan Wilson [00:23:34]:
And what's very interesting here is you have people that are technically enemies now becoming frenemies, right, in Microsoft and Apple. Right? So even when we talked about the demo, OpenAI was using Apple in everything. They were using MacBooks. They were using iPhones. They were using iPads. Even though, for the very company that invested more than $10,000,000,000, their main line of business is PCs. Their main line of business is the Windows operating system. Right? So this just goes to show you that OpenAI is in such a powerful position, because they are so far ahead of everyone.

Jordan Wilson [00:24:20]:
Right? And we'll be sharing the benchmarks. I didn't wanna make this a benchmark episode, talking about MMLU and, you know, HumanEval and all of these benchmarking tests of the large language models. But, obviously, I believe in every single benchmark except one, this new GPT-4o model is out-benchmarking every single model. You know, we'll obviously see. I'm sure Google Gemini will release a new one today. But regardless, now you have this almost unfair fight of these 3 companies that were kind of frenemies, but now they're all essentially on the same page. And here's why that matters, right, and something that people aren't taking into consideration.

Jordan Wilson [00:25:02]:
So a big part of large language models is usage statistics. Right? Feedback, reinforcement learning. With this partnership, if this is true, if OpenAI's GPT model is going to be used with the iPhone, they're gonna get so much training data. And guess what? Whether that's 3.5, whether it's the new 4 or 4o, we're not sure which model will be used on those devices. But all of that training data between those billions of devices that Apple has out in the wild, which are presumably gonna be using GPT technology, guess what? All that training data makes the GPT model and the future of that technology exponentially better. And guess who benefits from that? Microsoft. Yeah. Yeah.

Jordan Wilson [00:25:56]:
The Apple-OpenAI work, that partnership: Microsoft benefits, because the base of their Copilot is obviously GPT-4. So as that model improves from either hundreds of millions or billions of devices and all of this usage and the reinforcement learning that will come from it, Microsoft benefits. The 3 of them now have this almost unfair advantage. Right? Which I think is why you now have a lot of, you know, government scrutiny into these high-level partnerships, because it creates an almost unfair advantage. Google has an uphill battle. Right? I'm gonna be curious to see how Alphabet, you know, the parent company of Google, how their stock reacts, both today and in the coming weeks, and especially after the Apple WWDC announcement, if this does become official.

Jordan Wilson [00:26:54]:
Right? Because if you're a smart analyst, or if you're just someone that understands how technology and the world works, you're already seeing: if this is true, Google is not in good condition. You know, we're not even talking about the future of search necessarily, but it's a huge uphill battle. Alright. Let's talk now about how I think this is going to change the way that we work. Alright. So something that, again, not a lot of people are talking about, but I think is actually the biggest thing, is you have the desktop app. Alright? This is literally bringing what I would say are agent capabilities.

Jordan Wilson [00:27:32]:
What I would say is a hint of AGI. I know people are gonna disagree with me, but, I mean, as you see some demos, I think you'll start to understand what I'm saying when I say this brings in a hint of AGI. So I think the combination of the desktop app, the iPhone and the iPad app, and kind of this agent assistant is going to completely change the future of how we work. I do think so. Right? And again, we don't have all of these capabilities yet. OpenAI said that they will be rolling out in the coming weeks. All we have access to now is the base model, but we don't have all of these kind of agent or assistant capabilities yet. But I'm gonna go ahead.

Jordan Wilson [00:28:17]:
Hopefully, this will work for our live audience and for our podcast audience as well. So I'm gonna go ahead and play a video here from OpenAI. So let's go ahead and take a listen. So this is a quick demo of the desktop app, with someone presumably doing this live. So let's just go ahead, take a watch, take a listen, and, hey, for our commenters, I never know: can you let me know if you hear the audio on this? Alright. Here we go.

Person [00:28:46]:
Think so. Okay. Next, let's try something coding-related. So let me take out a computer for this. Okay. So I'm gonna open the ChatGPT desktop app, like Mira was talking about before. Okay. And to give a bit of background on what's going on: so here we have a computer, and on the screen we have some code, and then the ChatGPT voice app is on the right.

Person [00:29:13]:
So ChatGPT will be able to hear me, but it can't see anything on the screen. So I'm gonna highlight the code, Command-C it, and then that will send it to ChatGPT, and then I'm gonna talk about the code to ChatGPT. Hey, ChatGPT.

AI [00:29:26]:
Hey there. How's it going?

Person [00:29:28]:
Yeah. It's going really well. I was wondering if you could help me with a coding problem today.

AI [00:29:32]:
Of course. I'd love to help you out. What's the coding problem you're dealing with?

Person [00:29:37]:
Okay. I'm gonna share with you some code. One second.

AI [00:29:40]:
Sure thing. Take your time.

Person [00:29:44]:
Okay. So I just shared some code with you. Could you give me a really brief one sentence description of what's going on in the code?

AI [00:29:51]:
This code fetches daily weather data for a specific location and time period, smooths the temperature data using a rolling average, annotates a significant weather event on the resulting plot, and then displays the plot with the average minimum and maximum temperatures over the year.

Jordan Wilson [00:30:08]:
Awesome. Thanks. Alright. So hopefully, even for our podcast audience, that made sense. But essentially, what we had there was this demo where they launched the desktop app, and it has an overlay. Right? So we're gonna find out a little bit more about exactly how this desktop app works, because it does look like there's an icon you can click that will share your current screen. But in this case, they just copied the code. They didn't have to paste it in anywhere. The desktop app knew that they copied the code, and then they talked in real time to go through and change and kind of debug and decipher it, and ChatGPT told the person what the code actually meant, and they got their answers in real time.
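For listeners who couldn't see the screen, here's a minimal sketch of the kind of script ChatGPT described in the demo. The actual code was never shown in full, so everything here is an assumption: synthetic data stands in for the real weather fetch, and the column names, event date, and labels are invented for illustration.

```python
# Sketch of the demoed script: fetch daily weather data, smooth it with a
# rolling average, annotate a significant event, and plot yearly averages.
# Synthetic data stands in for the real (unseen) weather API call.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# One year of daily min/max temperatures for a hypothetical location.
dates = pd.date_range("2023-01-01", "2023-12-31", freq="D")
rng = np.random.default_rng(0)
seasonal = 15 + 10 * np.sin(2 * np.pi * (dates.dayofyear - 80) / 365)
df = pd.DataFrame({
    "t_min": seasonal - 5 + rng.normal(0, 2, len(dates)),
    "t_max": seasonal + 5 + rng.normal(0, 2, len(dates)),
}, index=dates)

# Smooth the temperature data with a 7-day rolling average.
smoothed = df.rolling(window=7, min_periods=1).mean()

# Plot, annotate a (hypothetical) significant weather event, and show the
# average min and max temperatures over the year in the title.
fig, ax = plt.subplots()
ax.plot(smoothed.index, smoothed["t_min"], label="min (7-day avg)")
ax.plot(smoothed.index, smoothed["t_max"], label="max (7-day avg)")
event_day = pd.Timestamp("2023-07-15")  # invented event date
ax.annotate("heat wave", xy=(event_day, smoothed.loc[event_day, "t_max"]))
ax.set_title(
    f"avg min {df['t_min'].mean():.1f} C / avg max {df['t_max'].mean():.1f} C"
)
ax.legend()
fig.savefig("temps.png")
```

That one-sentence summary ChatGPT gave maps onto the four steps here: fetch, smooth, annotate, plot.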

Jordan Wilson [00:30:51]:
Okay. So let's also now talk a little bit about this, because I think it has the potential to change education. So in this demo video, which we'll link to in the newsletter as well, you have what they said is a dad and his son working on an iPad. On the iPad it's kind of a split screen, so it looks like there are two different things running at once. There's a math problem on the right-hand side, you know, with a triangle and angles, etcetera. And then there's the left-hand side of the split screen on this iPad.

Jordan Wilson [00:31:28]:
Yeah, an iPad, not a Surface, you know. Again, interesting. And on the left side, you have the ChatGPT app with this feature that is not yet available: this AI assistant or AI agent that can see, hear, react, and talk in real time. This one's a little shorter, so let's go ahead and watch and listen to this one here.

AI [00:31:55]:
Let's look at the problems together. Can you first identify which sides of the triangle are the opposite, adjacent, and hypotenuse relative to angle alpha?

Person [00:32:09]:
Alright. So I'm pretty sure this is the angle alpha right here. Right?

AI [00:32:13]:
Correct. Correct. Now looking at the triangle, which side do you think is the hypotenuse?

Person [00:32:23]:
The hypotenuse? I'm not totally sure. I think it might be this one, but I really am not sure. This side, AC?

AI [00:32:32]:
You're close. Actually, side AC is called the adjacent side to the angle alpha. The hypotenuse is the longest side of a right triangle and is directly opposite the right angle. Can you find which one is the hypotenuse?

Person [00:32:48]:
Oh, okay. I see. So, I think the hypotenuse is this really long side from a to b. Would that be correct?

AI [00:32:56]:
Exactly. Well done. Now that you've

Jordan Wilson [00:33:00]:
Yeah. This is wild. This is wild. So, you know, again, hopefully even our podcast audience could understand that, but you have this student, who appears to be a teenager, maybe in high school, going through what could be a homework assignment on the iPad and learning literally in real time. He got something incorrect on the first take, and then you heard this assistant, in a very human tone, that could actually see what was going on on the iPad. That's wild. That is literally the equivalent of having a dedicated, live human being standing over your shoulder, watching you. Right? When you talk about access, part of this just has me so excited. You know, I think people always talk about the downsides of AI and will it take our jobs? Like, yeah, it's gonna take our jobs, obviously.

Jordan Wilson [00:34:00]:
But, you know, people don't understand, I think, or don't talk enough about the overwhelmingly positive aspects of this insanely powerful technology. Right? How about access to education? You know, I was lucky enough to go to a good school growing up, but there are millions, or hundreds of millions, of children around the world that either don't have access to high-quality education or don't have access to education at all. Right? So, yes, these devices are expensive. iPads aren't cheap, and $20 a month for certain people is maybe insurmountable. So there are challenges there. Yes. However, what this does, not just for the future of work but for the future of education, is hard to wrap your brain around.

Jordan Wilson [00:34:52]:
Right? And this is where I start talking about: are we witnessing kind of the first glimpses of AGI, of artificial general intelligence? And I say kind of, if I'm being honest. Right? And again, there are arguments on both sides. This isn't an "is this AGI or not" episode. But, I mean, when you have an agent that you can talk to in real time, and presumably with some training (this is the very first iteration of this; it's not live yet), essentially, if you know how to prompt it correctly, if you know how to use it, it is smarter than the average human at just about any general task. Right? Is that math tutor right there going to be smarter than the smartest math professor? Absolutely not. Is it instantly smarter than the average human being? Absolutely. Right? I have a master's degree.

Jordan Wilson [00:35:55]:
I don't remember any of that. It's far smarter than me. So when you think about the applications this can be used in, in how we work, how we learn, and how we connect with each other, it's truly mind-boggling. Right? As someone that covers generative AI on a daily basis, yes, a lot of my time is spent trying to understand the technology, but so much of my time is also spent thinking: how does this impact the future of how we work? Right? And I'll tell you this as we wrap up today's show. This changes everything. Right? And I'm not one of those people that speaks in hyperbole. Yes.

Jordan Wilson [00:36:42]:
This is hot take Tuesday, so I come in with hot takes, but I don't say this often. Right now, I think about the combination of this super trio, this super team, this kind of behind-the-scenes, unofficial pairing of OpenAI, Microsoft, and Apple, and what that partnership, the data sharing, and the improvements of the model mean for those three over time. And this is just a glimpse. This is just a glimpse. This isn't even, you know, GPT-5, or whatever the next model may be called. This is just a glimpse of what is possible. Right? These are just the first iterations. Yes.

Jordan Wilson [00:37:25]:
These are demos. We don't know. Did they have to do 50 takes? Yeah, they told us this is 1x, kind of unedited. Did they have to do 50 takes to get this? I don't know. But if this technology functions like it is functioning in these demos, the future of work as we know it is uncertain to me. Right? Hey, it's hot take Tuesday as we wrap this up.

Jordan Wilson [00:37:52]:
Here's what I'll say. If you aren't already using generative AI in your day-to-day, if your company hasn't already implemented generative AI, and I've been saying this for a long time, you are in for a tough rest of 2024. There's good and there's bad. Up until this release, there wasn't quite that line in the sand. Right? Essentially, if you were using generative AI, you were just much further ahead than your peers. But now we almost have this agent workflow, this assistant who can see, in real time, in real language. And this is one model. Right? Before, technically, ChatGPT could do all of these things, but it had to use different models.

Jordan Wilson [00:38:39]:
Right? So behind the scenes, there was this latency, this delay. Because if you were talking to ChatGPT, it had to use their Whisper technology to first change your voice into text, then it had to reason with one part of the model, and then it had to use kind of a separate model for text-to-speech to speak back to you. Now it's all one model with this GPT-4o, where the o is for Omni, right, as in everywhere. Now it's one model, and the latency is next to nothing. Right? I think there are some tricks they used to do that. I noticed that usually the first two words, or up to eight syllables, were just general responses. So I think that's one of the reasons they were able to get that latency down to, like, faster than a human.
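The architecture shift Jordan is describing can be sketched roughly like this. The stage names and latency numbers are made up for illustration; they are not OpenAI's actual API or measured figures.

```python
# Each stage returns (result, latency_seconds). All timings are invented
# for illustration; only the structure of the two flows matters.
def transcribe(audio):   # speech-to-text, e.g. a Whisper-style model
    return "what does this code do?", 0.4

def reason(text):        # the language model
    return "It fetches and plots weather data.", 0.9

def synthesize(text):    # text-to-speech
    return b"<audio bytes>", 0.5

def cascaded_reply(audio):
    # Old flow: three separate models run back to back, so their
    # latencies add up before the user hears anything.
    text, t1 = transcribe(audio)
    reply, t2 = reason(text)
    speech, t3 = synthesize(reply)
    return speech, t1 + t2 + t3

def omni_reply(audio):
    # GPT-4o-style flow: one model handles audio in and audio out
    # directly, so there is a single round trip instead of three.
    return b"<audio bytes>", 0.3  # hypothetical single-model latency
```

The point is structural: in the cascaded flow the three latencies add up, while an end-to-end audio model pays only one round trip.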

Jordan Wilson [00:39:21]:
Anyways, it literally does appear that you have an expert in whatever you need readily available at all times, one that can see, hear, reason, and speak with almost no latency. And if you know how to use that correctly, if you know prompt engineering 101, if you know the limitations of models, if you know their capabilities, and if you use this correctly, y'all, this changes everything about how we work. Alright, y'all. Let's go ahead and wrap this up. We're gonna try this for everyone that stuck around for this hot take Tuesday. So let's go ahead.

Jordan Wilson [00:40:09]:
Let's make sure we can do this. Hopefully, this doesn't blow up in my face. I've never done this before. So we're gonna start collecting comments here. Alright. Let's see if this works. Alright. So I'm gonna start doing this.

Jordan Wilson [00:40:24]:
Let me know if this is fun, but essentially, I'm gonna be giving away an hour consult to the winner. So if you are the winner, I'm gonna try to reach out to you. I don't know how this works; we're gonna find out live. Right? That's one of the things about a livestream. Who knows what works? But, you know, this is something we normally charge a decent amount of money for. So we're probably gonna do this once a week if y'all like it. So, are you ready? Should we draw for this? Alright.

Jordan Wilson [00:40:48]:
Let's go ahead and see if this works here. Alright. So, actually, hold on. Sorry, I know we started there, but I saw a quick problem. It looks like it wasn't bringing in all the comments.

Jordan Wilson [00:41:07]:
I tried to stop it right before it finished. So let's do this one more time. It looked like it was only grabbing three comments. So let's try it one more time. Alright. Here we go.

Jordan Wilson [00:41:24]:
I think I got to it right before it finished. So, alright, let's try it one more time. Here we go. Hopefully, this works better. There we go. Well, it seems like there are still only, like, four people. Alright.

Jordan Wilson [00:41:35]:
So whoever gets this is still gonna win. Alright, Kristen, we got you. Kristen, I can't reach out to you on YouTube, so make sure you reach out to us. You can just reply. And hey, I don't know if that got everyone, so we'll have to do this again next week and I'll have to test it out. It looks like it only pulled in a handful of them, so sorry, y'all.

Jordan Wilson [00:41:54]:
Alright. Yeah, I think it did just grab YouTube, Kevin. You're right. Alright. I'll have to make sure next time that we get all of our LinkedIn friends as well. Yeah.

Jordan Wilson [00:42:03]:
Total hot take Tuesday bias. I agree. Alright. So I hope this was helpful, y'all. Make sure to go to youreverydayai.com. If this was helpful, share it and, you know, leave us a review. Also, if you're listening on Spotify or Apple Podcasts, appreciate y'all.

Jordan Wilson [00:42:21]:
Make sure to join us back tomorrow and every day for more everyday AI. Thanks y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI