Ep 279: Google’s New AI Updates from I/O: the good, the bad, and the WTF

Unpacking the AI Announcements of Google's I/O Conference

At the recent Google I/O conference, a panoply of AI updates riveted the global tech community. From the unveiling of next-gen models such as Gemini Nano and Gemini 1.5 Pro to the debut of Project Astra, these advances are potential game-changers for the digital landscape.

Google Workspace and AI: The Nexus of Efficiency

Google Workspace demonstrated how the integration of Gemini can ignite workflow efficiency. Fast-moving enterprises need to understand what this means for their operations, team collaboration, and productivity. Leveraging AI in tools already being used daily, such as Google Workspace, is the way forward.

Generative AI in Google Search Results

Google unveiled its revamped search feature, utilizing generative AI for neatly organized search results. This advancement could mean faster, more accurate searches, prompting businesses to rethink their SEO strategies.

Veo and Imagen 3: How AI Shapes Media Creation

Google's foray into media creation through Veo, a text-to-video generator, and Imagen 3, an AI image generator, represents the dawn of a new era in content creation. Coupled with their AI music generator Lyria, there's a radical shift in how media content is assembled, making AI mastery a business necessity.

Project Astra: A New Interface with Generative AI

Through Project Astra, Google allows real-time interaction with large language models, competing with formidable rivals like OpenAI's new GPT-4o model. Being conversant with such developments can aid businesses in comprehending how AI will shape consumer behaviors.

Navigating Discoverability and Marketing Issues

Despite the promising advancements, businesses must be mindful of concerns over Google's deceptive marketing tactics and the difficulties in accessing new features. Knowing how to sift through the AI buzzwords, and understanding the true capabilities of tools like Google Gemini and Google Gems, plays a crucial role in AI deployment success.

Turning Skepticism into Wisdom: The Google Lesson

While skepticism is called for, so is wisdom. It's wise to cautiously examine Google's AI promises before integrating them into business strategies. A balanced approach will allow businesses to enjoy the benefits of AI effectively and ethically.

Criticism can be a tool for advancement. The critique of Google's overly polished presentations and the inaccessibility of new features is a beacon for other businesses. Honest critiques can help AI developers pursue efficiency and accessibility in their products, signaling a more fruitful AI future.

Turning the lens on Google's latest AI, companies are urged to remain savvy, discerning, and adaptive. The application of AI in the business world is inevitable; finding a winning formula amidst the hype is the real business advantage.

Topics Covered in This Episode

1. Google's Updates and Announcements
2. Google AI Evaluations
3. Concerns Over Google's AI Development and Marketing

Podcast Transcript

Jordan Wilson [00:00:16]:
Google just released more AI updates than I can even count. Right? So at their I/O conference, now about a week ago, Google announced literally AI everywhere in almost every single product and service that they offer. So today, we're gonna be doing a little bit of a deeper dive and talk about what's good, what's bad, and what just has us scratching our heads and saying WTF. Alright. So we're gonna be going over in-depth a little bit more today on the Google I/O conference and talking about what it all means and a little bit more on Everyday AI. So what's going on y'all? My name is Jordan Wilson, and I'm the host. And this show, it's for you. Right? So this is a daily livestream podcast and free daily newsletter helping people like you and me not just learn generative AI, but how we can leverage it to grow our companies and to grow our careers.

Jordan Wilson [00:01:06]:
So if that sounds like you, first of all, thank you so much for joining us. You know, always appreciate hearing from you all. So if you are new here, yeah, we do this live every single weekday, Monday through Friday, covering the latest in AI news. And speaking of the latest in AI news, you can always catch that in our newsletter as well. So make sure you go to youreverydayai.com. Sign up for that free daily newsletter. Alright. Let's start as we do every day going over what's happening in the world of AI news.

Jordan Wilson [00:01:35]:
So NVIDIA showed some staggering growth on its earnings call. So NVIDIA reported their net income in Q1 of 2024 rose more than sevenfold compared to a year earlier, reaching $14.8 billion, with revenue more than tripling to $26 billion. Yeah. Like, sevenfold. That's nuts. So NVIDIA CEO Jensen Huang foresees a new era of AI factories powered by NVIDIA chips designed to accelerate the production of artificial intelligence. So NVIDIA's early investment into AI technology has propelled its hardware and software in AI applications, gaming, automotive, all over the place. Right? And, as we talk about here on the show a lot, demand for NVIDIA's specialized GPU chips that power generative AI has soared due to the need for generative AI systems among tech giants like Amazon, Google, Meta, Microsoft, etcetera.

Jordan Wilson [00:02:27]:
Also, with NVIDIA's chips making such a huge profit, they are now planning to release new chips every year instead of every two years. So updating their production cycle, pretty big news there. Alright. Next, what's going on in the world is Meta may be looking to acquire an AI agent startup, according to reporting from The Information. So Adept, a two-year-old AI startup founded by ex-OpenAI and Google AI developers, is considering a sale or strategic partnership with tech giants. So investors valued Adept at over $1 billion last year, reflecting the high stakes in the AI startup landscape. So the heavy costs associated with training and maintaining AI models pose challenges for startups like Adept even with significant initial funding. And obviously, competition in the AI agent field is intense right now.

Jordan Wilson [00:03:18]:
Right? Even Google and Microsoft have announced agents in the last week, with these established players, you know, looking to get involved. Alright. Last but definitely not least, OpenAI has struck a $250 million deal with News Corp for AI training. So OpenAI has signed a deal with News Corp worth a quarter of a billion dollars. Yes. A $250 million licensing deal that lasts for the next five years. The agreement grants OpenAI access to current and archived articles from News Corp publications for AI training and user question answering. So we've talked about this a lot on the show, but major companies like the Associated Press, Financial Times, and Politico owner Axel Springer have already partnered in content deals with OpenAI.

Jordan Wilson [00:04:08]:
And on the flip side, other major outlets such as The New York Times and my newspaper up the street here, The Chicago Tribune, have filed lawsuits against OpenAI and Microsoft for alleged copyright infringement. So News Corp will provide journalistic expertise to OpenAI to uphold journalism standards. Also, this partnership, if you don't know about News Corp, that includes outlets such as Barron's, MarketWatch, Investor's Business Daily, FN, The Sunday Times, The Sun, The Australian, a lot of others. So, pretty interesting. And, you know, I said this about 6 or 7 months ago: the only way this shakes out is a ton of lawsuits and a ton of huge partnership deals because, hey, let's face it. The reality is large language models are largely built on copyrighted content. So, you know, more on that later, but let's talk about what we came here for today, which is to go over Google's new AI updates from their I/O conference last week and go over the good, the bad, and the what the freak is this.

Jordan Wilson [00:05:09]:
Alright. So, thanks to our live audience tuning in as always. Peter joining us from Belgium, Tara from Nashville, someone joining us on LinkedIn here from Florida, Woozi joining us from Kansas City. Thank you all for joining. We'd love to hear your questions and comments on what you thought of Google's announcements. So let's just start with a recap. Right? So, if you didn't catch our newsletter last week where we recapped it all, let's just go over what was kind of announced because, yeah, it was a lot. Like, I started the top of the show.

Jordan Wilson [00:05:41]:
It was maybe too much AI. Right? And that's a lot for me to say. I like AI everywhere, but I'm like, yo, is this too much? Alright. So a couple of the what I think are going to be the highlights and some of the AI initiatives from Google impacting everyday people like you and me. So, Google unveiled Project Astra, which is an impressive new kind of AI agent powered by Gemini. This is very similar to what OpenAI announced with their GPT-4o and the ability to see, hear, react, and interface with humans in real time in the app.

Jordan Wilson [00:06:19]:
So, we're gonna show an example of that here in a minute. Google also announced enhancements to Gemini models, including Gemini 1.5 Pro with between 1 and 2 million tokens of context. Right? One thing Google is absolutely crushing right now is the context window. Also, they announced Ask Photos powered by Gemini. It was revealed for Google Photos, providing enhanced photo memory summaries, which I thought was a pretty cool piece there. So if you use Google Photos, I do. Once this is released, you'll essentially just be able to talk or ask Google Photos to, you know, like, let's say, hey, I wanna see my pictures of my cat growing up over the years.

Jordan Wilson [00:07:02]:
Right? And once it identifies, hey, this is your cat, it's gonna show you, and then you can ask questions. Right? Like, what toys has my cat played with over the years, or whatever you might use it for. Right? That's a fun example, but imagine what that could do for, you know, even business. Alright? Like your screenshots. You know, if you're taking pictures out on a construction site or something like that, pretty big. So I do think Ask Photos will be a pretty popular feature.

Jordan Wilson [00:07:29]:
So Google announced the rollout of Gemini Nano with multimodality on Pixel phones. So, yeah, a lot of this, you know, geared toward edge AI on Samsung devices and Pixel phones. Android 15, another huge one, was presented with AI-powered search, with Gemini as the new AI assistant and on-device AI for new experiences. Yes. So that cannot be overlooked. So, you know, everything that we're kind of talking about Apple working on in the future, well, Google already just announced it. Right? I would say that, at least for smartphone dominance, I do believe that's Apple.

Jordan Wilson [00:08:09]:
Right? Or maybe I don't know. Maybe I live in a bubble here in the US living in a big city like Chicago, but, you know, I don't really know anyone personally that has an Android phone. I'm trying to think, maybe like a cousin or something like that. But basically everyone I know has an iPhone, which is why people are looking at WWDC in June for Apple to see what they announce. But, hey, Google just did it. Right? So Google just announced edge AI. It's already been rolled out in a phone. I believe it was the S24 a couple of months ago, but, you know, talking about Android 15 here, bringing it to the operating system.

Jordan Wilson [00:08:44]:
Google also announced Gems, which are customized versions of Gemini, its large language model, for specific tasks. So more or less, this is GPTs. This is the version of GPTs that the world has already had access to inside of OpenAI's ChatGPT, essentially creating a version of Google Gemini with some specific features, specific requests, etcetera. So they're calling that Gems. So that I think is gonna be pretty big. Also, Google demonstrated Google AI Teammate, a virtual teammate for work tasks. So we saw something similar with Team Copilot from Microsoft this week. You know, so Microsoft just had their Build conference, which is actually wrapping up today, I think, is the last day.

Jordan Wilson [00:09:31]:
So, Microsoft announced something similar, but, you know, we really are moving into this space now where you are literally assigning an AI, right, like, to your teams. Like, it has a license. You know, you grant it access to certain software. Think of it like, you know, if you've ever managed network access or something like that. You know, it's literally giving an AI a seat on your team and giving it access, you know, depending on if you're using Google Workspace's suite of products, you know, obviously, now you have that with this new kind of virtual teammate within Google AI Teammate. Similarly for Team Copilot that Microsoft just announced as well. Also, updates to Google Workspace were highlighted, integrating Gemini to enhance efficiency at work. Yeah.

Jordan Wilson [00:10:17]:
Just about every Google Workspace product, they announced some new AI feature, some Gemini integration. Don't have time to go over them all. Google has, like, dozens of products, and they rolled it out. They rolled AI integration or Gemini integration into just about everything. Also, new features for Google Search were introduced, leveraging generative AI for what they're calling organized search results. And that's actually gotten some mixed reaction so far, which has been kind of interesting, and we'll share about that in today's newsletter as well. Two other things worth noting, I think: Google revealed Veo, a text-to-video generator, and Imagen 3, an AI image generator. So Veo, you know, will in theory be a competitor to Sora.

Jordan Wilson [00:11:03]:
And I do think the quality we saw from Veo was pretty good. Right? Probably better than what we've seen from companies like Runway and Pika Labs. But I would say still very far behind OpenAI's Sora. But still, this was kind of a surprise to most. I would say with text-to-video, you know, there wasn't a lot of reporting out there on this. Obviously, we knew that Google already had their image generator. So whether you wanna call that a DALL-E 3 competitor or a Midjourney competitor, you could say a Stability competitor, even though they may, you know, get bought out. But still, so Imagen 3, some updated image capabilities, text-to-image within Google Gemini, and Veo.

Jordan Wilson [00:11:51]:
Veo is not released yet. And then Lyria. So last but not least on our update list, Lyria, which is Google's AI music generator, was demonstrated with musician Wyclef Jean. So yeah, and that's just some of the highlights that we picked. Right? There are literally you know, I watched the presentation and had dozens of bullet points of, like, main announcements. So that's just the ones that I think ultimately are gonna be affecting a lot of everyday people. Right? So, if your company especially, right, if your company uses, or if you personally use, Google's suite of products in Workspace, you know, that's what my companies, Accelerant Agency and Everyday AI, use. Right now, we use Google Workspace.

Jordan Wilson [00:12:38]:
Right? So some of these features have already been rolled out. Some of them, for whatever reason, are available on my free Gmail, you know, my personal Gmail, but not my work accounts, even though we pay extra. Right? So not only do we pay monthly for a seat on those, but we also pay extra for all these AI features, and they still haven't rolled them out. So, more on that here in a bit. But, you know, I'm curious from our livestream audience. What are your thoughts so far on Google's I/O announcements, literally sprinkling AI everywhere? I'm gonna go over now what I think is the good, the bad, and the WTF. But, you know, I'm curious from you all, or if you have questions, get them in. And to our podcast audience:

Jordan Wilson [00:13:25]:
Yeah. We always leave in the show notes ways that you can reach out. Send us an email. You can actually just send us a text message now, straight from the show notes. So if you are listening on the podcast, we'd love to hear from you as well. And, you know, we'll probably feature some of our favorite comments or feedback in today's newsletter. So let's just go ahead and talk a little bit about the good, the bad, and the what the freak. Alright.

Jordan Wilson [00:13:47]:
So, the good: Project Astra. Right? So Project Astra, I'd say, was the headlining feature because it was something that I think at the time was competing directly with OpenAI. Right? So OpenAI, obviously, and we covered this. Well, maybe not obviously. Let me tell you. So, you know, essentially, Google had their I/O date set 3 months prior. 3 days prior, OpenAI came and sprung a, quote, unquote, surprise announcement, even though it was kind of reported the day before Google. Right? And one of the big things that OpenAI released in its new GPT-4o model is what some people are calling, you know, "Her."

Jordan Wilson [00:14:31]:
There is no name for it. We're calling it kind of OmniLive, but, you know, the "o" in GPT-4o is for "omni." So we call it OmniLive. They didn't give it a name. You know, OpenAI didn't, but, essentially, it's the same thing here as Project Astra. So this is a new way to interact with generative AI, with large language models. And, essentially, Project Astra has the ability to see things in real time, and you can also interact with it in real time. Ask it questions.

Jordan Wilson [00:15:03]:
Essentially, think of, you know, how you FaceTime someone. Imagine FaceTiming an expert via a large language model. It can see what you see in real time, identify objects. You can talk to it. It can hear you. It can process, tap into, you know, the large language model and spit back answers. So think of you know, there are so many applications for a Project Astra type product or, you know, an OmniLive product in our everyday lives. But I think maybe it'll make sense once we play a little bit of a clip here.

Jordan Wilson [00:15:36]:
So I do hate this, and people might roll their eyes at me, but I do have to preface this. Right? Which I feel kind of bad doing because I'm not ragging on Google. However, we did see in December when Google announced Gemini, they released a marketing video, and it turns out that most of it was a fib. Most of it was made up. Most of it was overly produced. And Google I mean, yeah, they lied. Right? I can say that because just about every news organization literally said Google lied, Google, you know, deceptive marketing, whatever you wanna say.

Jordan Wilson [00:16:12]:
So I do have to say that out loud, and it feels like I feel bad saying it because, you know, I don't wanna come off too biased, you know, here on the Everyday AI show, but, you know, ever since that happened, right, you have to take everything. I'm sorry. You have to take everything that Google does with a grain of salt. Right? So if you missed that whole situation, you know, when Google announced their Gemini model, they showed this marketing video that essentially was Project Astra. It showed that Google Gemini had the ability to kind of see and interact with you in real time, which it didn't. And then later, they released a paper that said, oh, actually, it wasn't live. We just took all these screenshots, and then humans did a bunch of text prompting based on these screenshots over and over and over to produce the result. And then, you know, we essentially did some nice marketing and made it kind of look real time.

Jordan Wilson [00:17:03]:
Right? Which it wasn't. So I have to say that because, you know, again, Google says that this is, you know, all 1x, unedited, but yet you have to preface this. Right? So I don't 100% trust Google. You know, I don't know if I'm alone in that. Maybe someone else out there does, but still, I have to put that out there before I show the livestream audience this and also on the podcast. So I'm gonna just preface this here real quick. I'm not gonna play the entire piece here. Alright.

Jordan Wilson [00:17:36]:
So here's what we have. I'm gonna play just about, maybe, 45 seconds of it. So this is showing Project Astra. So like I said, the ability for Google Gemini to essentially see, to hear, and for you to interact with it in real time. So they said this is in real time. So someone is using Project Astra, and let's just go ahead. Take a watch, take a listen. We're gonna let it go a little bit here.

Person [00:18:00]:
K. Let's do some tests. Tell me when you see something that makes sound.

AI [00:18:07]:
I see a speaker which makes sound.

Person [00:18:12]:
What is that part of the speaker called?

AI [00:18:16]:
That is the tweeter. It produces high frequency sounds.

Person [00:18:23]:
Give me a creative alliteration about these.

AI [00:18:28]:
Creative crayons color cheerfully. They certainly craft colorful creations.

Person [00:18:38]:
What does that part of the code do?

AI [00:18:43]:
This code defines encryption and decryption functions. It seems to use AES-CBC encryption to encode and decode data based on a key and an initialization vector (IV).

Jordan Wilson [00:18:56]:
Alright. So, hopefully let me know, livestream audience hopefully, y'all could hear that audio, and you weren't just sitting there in silence for 40 seconds. So let me just kind of explain to our podcast audience what happened there. So, it started with someone going up to a speaker. Right? And then, on the screen, there's something I think is a great feature. If this works, this is pretty amazing, and we didn't see this out of OpenAI's announcement. It essentially looks like there is the ability to kind of draw live.

Jordan Wilson [00:19:26]:
So, you know, there was this speaker, and she kind of live drew with her finger on the Pixel phone an arrow and pointed to a certain part of that speaker while it was still live in real time, which I think is really cool. Next, she didn't say this, but the thing she was asking for a creative alliteration about was a jar full of crayons. Right? And then it said, creative crayons color cheerfully. They certainly craft colorful creations. Right? So, showing it a jar of crayons and, you know, responding reportedly in real time. And then last but not least, it looks like one of her coworkers was at the desk there writing some code, and then that's when she said, you know, what does this part of the code do? And then Gemini, via Project Astra, said this code defines encryption and decryption functions, etcetera. So, there are a couple of other examples of Project Astra that actually dropped about a day ago. So, really, all we had was 1 or 2 examples early on, but a couple more have dropped, and we'll share those in the newsletter today as well.
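For anyone curious what the code in that demo was doing: AES-CBC is a block-cipher mode where each plaintext block is chained to the previous ciphertext block, starting from the IV the demo mentioned. Here's a minimal toy sketch of that chaining pattern. To keep it self-contained, the block cipher step is a stand-in XOR with the key, not real AES, so this is for illustration only, never for actual encryption:

```python
# Toy illustration of CBC-mode chaining, the pattern the demo code
# reportedly used (AES-CBC). The "cipher" here is a stand-in XOR with
# the key -- NOT real AES and NOT secure. It only shows how the IV and
# chaining work: each plaintext block is XORed with the previous
# ciphertext block (the IV for the first block) before being "encrypted."

BLOCK = 16  # AES block size in bytes


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def toy_cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(key) == BLOCK and len(iv) == BLOCK
    # pad to a multiple of the block size (PKCS#7 style)
    pad = BLOCK - len(plaintext) % BLOCK
    plaintext += bytes([pad]) * pad
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        block = _xor(plaintext[i:i + BLOCK], prev)  # chain with previous block
        prev = _xor(block, key)                     # stand-in for AES encrypt
        out.append(prev)
    return b"".join(out)


def toy_cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(_xor(_xor(block, key), prev))    # undo cipher, then unchain
        prev = block
    plain = b"".join(out)
    return plain[:-plain[-1]]                       # strip PKCS#7 padding
```

A real implementation would swap the XOR step for actual AES from a vetted crypto library; the key/IV plumbing that Astra described would look the same.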

Jordan Wilson [00:20:35]:
Alright. So I'll say the good right away. Project Astra looks pretty amazing. Right? Again, if this is actually what is going to be rolled out, you know, this looks like it does have some real-time capabilities that OpenAI's GPT-4o might not have out of the gate. So pretty impressive. Something else that I think was good: Veo. Yeah. So let's take a look at this.

Jordan Wilson [00:21:03]:
So, similarly, I'm just gonna play just a portion. So, again, Google says this was all created, you know, with Veo, all stitched together. So let's just take a quick look. I don't even know if this has sound. It doesn't matter. It's just visuals. So I'm gonna try to do my best to kind of explain what's going on here. Alright.

Jordan Wilson [00:21:23]:
So, let's go ahead and click play. I forgot if this even has sound. Alright. It doesn't have sound, so I'll kind of narrate it. So, we have kind of what looks like a drone shot over, like, a neon type city. The visuals look pretty crisp. This shot looks very smooth. Alright.

Jordan Wilson [00:21:42]:
So now it's speeding up, and it is almost like you're kind of flying through a city that goes straight into this, you know, futuristic kind of Tron looking, you know, race scene, you know, again, in a futuristic city. That seems to be the theme here. A lot of neon lights, futuristic city. But, again, this video presumably was all from text prompts. So it almost looks like a car race scene. You can obviously tell this is not, quote, unquote, you know, real life, but it does look like extremely high quality. Right? Like, whether that was CGI or animation, it looks like something that would take a lot of money and a lot of skill, to build here. So, again, we're just kind of going through this race car scene, and now it does look a little more real life.

Jordan Wilson [00:22:24]:
So now it is driving through a city in the daytime, where the first two scenes of this looked more like CGI, things like that. And then they share the prompt here, the text prompt that they said created this, which was, you know, a fast tracking shot through a bustling dystopian sprawl with bright neon signs, flying cars, and mist, blah blah blah. So, that's all on the screen there, and we'll make sure to share the link to this in the newsletter as well. So, Veo, I think, was pretty impressive. So, you know, I'm curious for our livestream audience: were you impressed with Veo and with Project Astra? I personally was. Right? Again, I have to take things with a grain of salt with Google, but it seemed like there was a lot of hype. Right, especially with OpenAI's Sora.

Jordan Wilson [00:23:16]:
It kind of shook, if I'm being honest. Right? It shook the Internet. Right? I mean, it was on cable TV. Right? Hollywood was reacting to it. You know, there's the story where, you know, Tyler Perry had an 8 or 9 figure expansion to his studio that he reportedly put on hold after he saw Sora. Right? So I do think that there were a lot of eyes and ears and news coverage on OpenAI's Sora. So I don't think that Veo is really as good quality-wise. It doesn't look like it, but it's still pretty good.

Jordan Wilson [00:23:53]:
Right? I would say, you know, again, Sora is in its own category, but it looks like right now, at least what Google decided to show was pretty well ahead of every other, you know, at least publicly available text-to-video or photo-to-video product out there, such as Runway and Pika Labs. So, you know, this Veo product actually looks pretty good. So I'm surprised. Again, Google just dumped so much. So I think so many things are being overlooked here. Right? So, I think that's important to point out. There is, and I think rightfully so, a lot of hype and a lot of attention on Sora, but Veo literally just flew under the radar like that futuristic car in that scene.

Jordan Wilson [00:24:44]:
Another good thing that I like is Gems. So in the same way that I think that custom GPTs can kind of change how people interact with large language models, similarly, I think Google Gems can do that as well. Again, all we got was, you know, kind of a little demo video and a bunch of marketing. But, again, I think this is the future of how people interact with large language models. If I'm being honest, and this is another show for another day, I think the future is actually small language models and working with many of them that are highly tuned for specific purposes. However, you know, I think in the interim, you know, something like GPTs or Gems is how people are ultimately going to get the most out of large language models. Right? So, if you don't know, you know, GPTs and presumably Gems, it kinda looks like, you know, Google's Gems are just GPTs, which have already been announced and have been out for, you know, 6 months.

Jordan Wilson [00:25:42]:
So, here's one area where it looks like Google is actually playing catch-up compared to OpenAI. But, essentially, it's not custom instructions necessarily, but it allows you to configure a customized version of the model for different tasks. So let's say you wanted something that was a creative copywriter. You know, if you start a new chat, you know, in Gemini, you might have to kind of quote, unquote coach it to get it there. And then maybe you want something that helps you summarize long PDFs in a very formal tone in bullet points. Right? Like, as an example. So normally, you would have to start a new chat and kind of go through a process to get it there. And instead, this is a way to customize Google Gemini and kind of save it.

Jordan Wilson [00:26:29]:
I do believe you can also give it access to certain files and information in your Google Drive. So, also, you know, I call that mini-RAG, right, retrieval augmented generation. It's not true RAG, but I think when you can upload, you know, files, it is kind of this, you know, miniature RAG how you can bring in your company's data or, you know, personal data that you wanna work with into the model so you can improve, you know, the capabilities and the output out of the gates. So I do think Gems is another huge announcement that I'm pretty impressed by. Again, if it all comes to fruition as it was marketed, which is another story with Google sometimes. Alright. So let's talk the bad. Yeah.
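Conceptually, a "gem" sounds like a saved system instruction plus optional reference files that get folded into every chat, the mini-RAG idea above. Here's a hypothetical sketch of that idea. None of these names come from Google's actual Gems product, which wasn't released at the time; this is purely illustrative of the pattern:

```python
# Hypothetical sketch of what a "gem"-style custom assistant amounts to:
# a saved system instruction plus optional reference files whose text is
# prepended to each chat (the "mini-RAG" idea). These class and field
# names are invented for illustration, not Google's API.
from dataclasses import dataclass, field


@dataclass
class Gem:
    name: str
    system_instruction: str  # the saved "coaching" for the model
    reference_docs: dict[str, str] = field(default_factory=dict)  # filename -> text

    def build_prompt(self, user_message: str) -> str:
        """Assemble the text that would be sent to the underlying model."""
        parts = [f"SYSTEM: {self.system_instruction}"]
        for fname, text in self.reference_docs.items():
            parts.append(f"REFERENCE ({fname}):\n{text}")
        parts.append(f"USER: {user_message}")
        return "\n\n".join(parts)


# Two saved configurations, like the copywriter and PDF-summarizer
# examples from the episode:
copywriter = Gem(
    name="Creative Copywriter",
    system_instruction="You are a punchy, creative marketing copywriter.",
)
summarizer = Gem(
    name="PDF Summarizer",
    system_instruction="Summarize documents in a formal tone, in bullet points.",
    reference_docs={"style_guide.txt": "Keep bullets under 15 words."},
)
```

The point of the pattern is that the "coaching" happens once, at setup time, instead of at the top of every new chat.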

Jordan Wilson [00:27:12]:
And, hey, I agree with Adam here, joining us from YouTube, who said it would be cool if we knew that it was real, because Google fakes everything. Yeah. I will say they're very good at marketing. Right? Google is a marketing company that also has other products. Right? Like, I know that sounds wild, but at least when it comes to their AI, that's what I think it is. It's about marketing, branding, etcetera. Tara says clear and never a disappointment, great as usual.

Jordan Wilson [00:27:45]:
Alright. So let's go talk about the bad. I think one of the bad things here is access. Right? You know, as an example, all these Gemini updates stole the show, you know, especially with the long context window. But here's the thing. Google does not make it easy for the average person to get access to all of this. Right? And Google is also not very transparent. Right? So say you're just using the front end of Google Gemini. Right? So you just log on to Gemini and you're just using it like you use ChatGPT.

Jordan Wilson [00:28:24]:
Google is not very clear about what capabilities of Gemini you're actually getting. Another thing is, especially when it first came out, they were changing the name constantly. It was Gemini Advanced, and, you know, then it was, I think, Gemini Pro, and then they started using Gemini Ultra versus Pro, and then they took that away and went back to Advanced. Right? Like, from a branding standpoint, and even knowing what you're getting access to from the front end of Gemini, Google does not do a good job, if I'm being honest. Right? I think OpenAI and Claude, you know, do a much better job. They say, oh, here is GPT-4. You are selecting the model. Or here is GPT-4o.

Jordan Wilson [00:29:02]:
Here are the capabilities. Right? Within Anthropic's Claude, you go into the front end and you know: okay, I'm using Claude 3 Sonnet, I'm using Claude 3 Haiku, I'm using Claude 3 Opus. You know what you're getting. That's the thing: the access to all of this. Google makes a big deal and just says, oh, Gemini 1.5 Pro, 1.5 Ultra, whatever. And 1 million tokens, 2 million tokens.

Jordan Wilson [00:29:26]:
That's huge. Right? But here's the thing: you don't have access to that. The everyday person doesn't. You've got to be a little bit of a dork to access all of it. So, sharing this here on the screen for my livestream audience. They even announced a newer, lightweight model called Gemini 1.5 Flash, which is supposed to be lighter weight and faster than the 1.5 Ultra or 1.5 Pro, whatever they're calling it today. But if you want access, you have to actually go into Google AI Studio, and you have to create a new Google Cloud project to create an API key, and it's pay as you go.

Jordan Wilson [00:30:10]:
Right? So I do understand that, because they probably can't just release something with 1 million or 2 million tokens into the consumer front-end version of Google Gemini; from a billing standpoint, that's expensive. And if you don't know anything about tokens: first of all, the fact that Google is offering a 1 million token context window in its AI Studio at all is very impressive. It is. So, as an example, let me very quickly talk about what that means, because it is big. It's impressive. Let's compare it to ChatGPT. Right now, on the front end of ChatGPT, you have a 32,000 token memory.

Jordan Wilson [00:31:02]:
So what that means is that after about 26,000 words or so, ChatGPT is going to start forgetting things. It doesn't have a very good memory, but this does. Google Gemini, in AI Studio, does. A 1 million token memory is huge. Another great thing, aside from that memory, that context window Google is leading the pack in: they're also leading the pack in multimodal. You can input video. I did some testing on it last night. You can upload videos and ask questions of your videos, which is amazing.
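To put those context-window numbers in rough perspective: a common approximation is that one token works out to about 0.75 English words (the exact ratio varies by tokenizer and by text, so treat these as ballpark figures, not exact counts):

```python
# Rough rule of thumb for English text: 1 token is about 0.75 words.
# Ratios vary by tokenizer and language; this is only an estimate.
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Approximate how many English words fit in a given context window."""
    return int(tokens * WORDS_PER_TOKEN)

# Front-end ChatGPT at the time: a 32,000-token context window.
print(f"{approx_words(32_000):,} words")      # 24,000 words
# Gemini 1.5 Pro in Google AI Studio: a 1,000,000-token window.
print(f"{approx_words(1_000_000):,} words")   # 750,000 words
```

In other words, the jump from 32,000 to 1 million tokens is the difference between a long article and several full-length books worth of working memory.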

Jordan Wilson [00:31:51]:
And that's the future of large language models. But here's the thing: Everyday AI is for everyday people, and everyday people are not going inside Google AI Studio, or Vertex AI, which is more of a developer playground. You kind of have to be a little technical to use all of these features right now. The front end of Gemini is how I would believe 90% or more of our audience accesses Gemini. It's the same way there are capabilities with GPT-4 that you can only get through the playground or the API, but for the most part, what we get in the ChatGPT version is pretty similar. That's not the case with Google Gemini.

Jordan Wilson [00:32:44]:
So, yes, you always hear these things like, oh, you should use Gemini because of their million token context window. Okay, well, you've got to be a little bit of a dork and use Google AI Studio or Vertex AI. And that is, at least for now, pay as you go, even if you are on a monthly paid plan, because that context window, yes, is very expensive. So that's one thing: the access to all of these features right now is bad. And it's not just that. The vague release dates are also related to access.

Jordan Wilson [00:33:20]:
Right? So let's talk a little bit about that. Gemini 1.5 Pro with this 2 million token context window? When they addressed the release date, all they said was private preview, with no specific date provided. Ask Photos in Google? That looked awesome. No specific date provided or mentioned at the I/O conference. Project Astra? There was no date given.

Jordan Wilson [00:33:45]:
Right? So for a lot of these headlining features, one thing I think was bad about the announcement is they didn't do a good job of saying when we were getting them, or whether they would be publicly available at all. Which, again, goes to my point: Google is more marketing right now than meat, if that makes sense. There's not a lot of meat on the bones; it's just a lot of marketing. Obviously, if you watched Google's I/O like I did, it seems very impressive. But when will we get access to all of this? Will we? Or will it only be available for developers? Might it be another year or two? That's the other thing. When OpenAI had their announcement, they didn't roll out everything, but one of the main things was their model from last week, GPT-4o, and they rolled that out immediately for paid users. Literally, I had it within an hour.

Jordan Wilson [00:34:54]:
Right? And they said, hey, the rest of these features, this live omni mode and the desktop app, will be rolling out in the coming weeks. Okay? So although they didn't give a specific date, GPT-4o was available almost immediately to paid users, and they at least said all these other features would roll out in the coming weeks. Microsoft, similarly, at their Build conference, I think was a little bit better. They gave a date: a lot of these Copilot and Copilot+ PC features, June 18th. They gave a hard date.

Jordan Wilson [00:35:34]:
So I don't know why Google has always struggled with this. But actually, we're going to get into that in the WTF section, because I think the branding, the marketing-focused execution, and the discoverability were just head scratching. Legitimately head scratching. Alright, so let's go over some of the WTF stuff. And I know this has just become a joke, but Google said AI more than 140 times in its Google I/O keynote.

Jordan Wilson [00:36:14]:
And they made a joke about it. Google's CEO at the end said, oh, I know there was a meme last year at our conference about how many times we said AI, so we actually had AI count how many times we said it this time, and they put it up on the screen, and everyone laughed and clapped. Alright, well, I get it. But I also think companies need to start focusing less on all these buzzwords. We've actually covered this on the show before, looking at earnings calls and at how much more often the terms AI, large language models, and generative AI are being used. But, again, it was just so scripted.

Jordan Wilson [00:36:56]:
Right? Google's announcement, especially compared to the others. In nine days, it was kind of like the NBA Finals week of AI: we had big announcements and big events from OpenAI, from Microsoft, and from Google. And out of those three, Google's almost seemed like it was done by AI. It was overly polished. It was very robotic. It was almost overly sanitized, if that makes sense. And again, I think one thing about AI is trust. Right? And I don't know.

Jordan Wilson [00:37:36]:
I don't have a ton of trust in Google right now with their AI products. It seems overproduced. It seems like marketing. It seems robotic. It doesn't seem friendly. It doesn't seem inviting. Again, maybe that's just my personal opinion. And, you know, hey.

Jordan Wilson [00:37:52]:
I mean, am I a little biased? Sure. Because even this podcast, this livestream, is unscripted. Yeah, I sometimes have slides and notes, but it's unscripted. It's unedited. Yeah, I make a lot of mistakes sometimes. But especially when it comes to AI, I think you need that ability to seem human.

Jordan Wilson [00:38:10]:
You need that relatability. We saw it even in OpenAI's announcement. They demoed it live, and there were some hiccups. There were some things that went wrong, which I think is relatable. And people need to understand that large language models and AI are generative; it's normal to have things go off track. So when Google had this overly polished, overly marketed, very robotic presentation, I don't know.

Jordan Wilson [00:38:37]:
It rubbed me the wrong way. I didn't like it, and they were just saying AI, like, every third word. Also, pretty interesting, and this went a little viral: someone named Scott Jenson, a former Google employee of more than a decade. I'm going to share this; it came out right after Google's I/O. In his LinkedIn post that went super viral, he said: I just left Google last month. The, quote unquote, AI projects I was working on were poorly motivated and driven by this panic that as long as it had AI in it, it would be great.

Jordan Wilson [00:39:22]:
And then he said it is stone cold panic that they are getting left behind, and it's not being driven by user need. And that's at least the takeaway that I got from Google's I/O announcements. They're seeing things that OpenAI has released. They obviously have inside information; they hire employees from all the other companies, so I'm sure they know what other companies are working on. And it just seemed like a bunch of marketing. Let's throw some slides and some demos up there. Let's not really have a release date or an actual road map.

Jordan Wilson [00:39:59]:
Let's just essentially wow people with marketing. Hopefully analysts respond positively. Hopefully our stock price goes up. And we'll start delivering whenever. Just throw a bunch of AI in it. Make a cool marketing presentation, a cool demo video. Hopefully it works. Whether it's all real, live, unedited? Nah.

Jordan Wilson [00:40:19]:
I'm not sure. Was it at 1x speed? I don't know. But that's just the vibe that I got. And here you have a veteran Google employee who worked on AI projects, who had been at Google for more than a decade, saying it was just driven by panic: hey, as long as there's AI in there, it's going to be good.

Jordan Wilson [00:40:42]:
So I don't know. I kind of had this same feeling. And Scott, in his post, which we'll link to in our newsletter as well, said it's not just Google; he said Apple is no different. Alright. Here's another thing: discoverability. This is one thing that really irks me with Google. First of all, they change names for things all the time.

Jordan Wilson [00:41:06]:
All the time. Right? So, I don't know, maybe something wasn't working very well and they needed to give it a, quote unquote, facelift, make it better, etcetera. But here's an example: Google Gems. You want to find more information on Google Gems? Even Google Search can't find it easily, because when you search Google Gems, you get an autocorrect that says Google Games. You've got to scroll quite a bit to find information about Google Gems, which I think is going to be one of the biggest announcements coming out of this, because I think it's going to make Google Gemini actually usable for a lot of people. Right now, I don't think Google Gemini is a good model.

Jordan Wilson [00:41:50]:
I think it is in fourth place, behind GPT-4o, behind Windows Copilot, which obviously uses GPT-4 and GPT-4o, and then behind Claude. I don't think Google Gemini right now is a great large language model. Yes, it's got a very impressive context window, and it has multimodal input, but you've got to have that tech background to reach those features. The front-facing Gemini, I don't think, is very great. So I think Google Gems, the ability to quickly tailor it for your own use case and then reuse it via these Gems, is huge. But if you want to find information on Google Gems, you really have to do some digging.

Jordan Wilson [00:42:32]:
So even discoverability. When Google is constantly changing its branding, changing its marketing, changing names, and then you go look for something and can't even find it on Google's own search engine? Pretty bad. And, hey, for a laugh, go into Google Gemini and ask it about all of these individual products. Yeah, you're going to laugh about it. Much better results in ChatGPT, by the way. Alright. So that's it. We covered what's good, what's bad, and what's WTF. So, quick recap.

Jordan Wilson [00:43:05]:
So what I think is good from Google's I/O announcement: Project Astra, super impressive. Veo, their AI video product, and Gems, kind of their version of GPTs, all super impressive. Some bad things: access. It's not easy to access all of these great features, and the release dates are vague. You've got to at least say, hey, by fall, or by this date, or in the coming weeks, like OpenAI did. So I think that was pretty bad from the I/O announcement. And then the head-scratching WTF: the branding, being overly marketing-polished, and the discoverability.

Jordan Wilson [00:43:47]:
I just think those things were WTF. So, I hope this was helpful. Thank you for joining us. If you're on the podcast, make sure to check out your show notes. You can always reach out to us, and please leave us a review and rating on Spotify and Apple. If this was helpful, we'd appreciate that. Also, we'd appreciate it, if you haven't already, if you'd go to youreverydayai.com.

Jordan Wilson [00:44:05]:
Sign up for the free daily newsletter. Every single day, we recap the topic of the day and go over every single thing you need to keep up in the world of AI, to grow your company and grow your career. Thank you for joining us. We hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
