Ep 230: Google’s Video Game AI Agent, OpenAI Sora Data Concerns? AI News That Matters

Episode Categories:

The Future of the Conversational Humanoid Robot

Pioneering technologies are constantly pushing boundaries in the AI field. One such technology that has come to the fore is Figure 01. This incredible innovation enables real-time conversation and interaction with humans. More than this, it has the ability to explain its rationale. The robot is powered by the large language model behind ChatGPT, and major tech entities such as OpenAI are significant partners backing the development of this game-changing tech.

NVIDIA's Groundbreaking GTC Conference

A global pillar of technological innovation, NVIDIA is hosting its first in-person GTC conference in five years in San Jose, California. The event will showcase advancements in AI that are shaping the future of tech, marketing, and business. Discussions on AI in financial services, generative AI, robotics, and AI-powered transportation will form a significant part of the agenda. Attendees can also look forward to a keynote and an interactive Q&A with CEO Jensen Huang.

Apple's Large Language Model: MM1

Tech giant Apple recently released a preview of its new multimodal large language model, MM1. The model comes in three variants with 3 billion, 7 billion, and 30 billion parameters respectively. Though it stands toe-to-toe with major competitors on tasks such as image recognition and natural-language reasoning, it has some catching up to do when put up against Google's Gemini and OpenAI's GPT-4. The model's importance lies in the significant implications it holds for the future of edge computing and on-device AI.

Introducing Figure 01: The Humanoid Bot

Spotlighting the advancements in AI robotics, Figure 01, a humanoid bot, is making waves in the industry. Built by a team of developers who previously worked at prominent tech companies such as OpenAI, Boston Dynamics, and Tesla, this bot is a powerhouse of technology. Highly esteemed investors such as Microsoft, OpenAI, NVIDIA, and Salesforce back the ambitious project. The bot's real-time conversational ability and task-performing capabilities are a testament to the advancements made in AI technology.

Google's Foray into Smart AI with Sima

DeepMind, a subsidiary of Google, recently introduced Sima, a new AI agent. Unlike conventional models trained on a single game, Sima can play multiple video games based on natural-language instructions. The developers collaborated with 8 video game studios to train and refine the agent. The ultimate goal is for the AI to be able to play games autonomously. This marks Google's first official step into the world of agents, or smart AI.

A Controversy in AI Data Sources: OpenAI's Sora

On the other side of major strides in AI, we also see controversy over training data sources. OpenAI's new text-to-video model, Sora, came under fire for the data sources used in its training. The tech firm's Chief Technology Officer claimed the sources were publicly available and licensed data, yet her answers raised concerns and scrutiny over the provenance of the model's training data.

The world of AI continues to evolve rapidly with promises of extensive tech overhauls in numerous industries. With all these advancements and ongoing research, it's clear that the future holds countless exciting opportunities to explore.

Topics Covered in This Episode

1. Google DeepMind's New AI Model, Sima
2. Scrutiny Around OpenAI's Sora Model
3. Figure 01, by OpenAI and Figure
4. Apple's New Large Language Model, MM1
5. NVIDIA's In-Person GTC Conference


Podcast Transcript

Jordan Wilson [00:00:17]:
Is Google DeepMind building agents? And Apple finally has a large language model, kind of. And there's a new humanoid robot that uses ChatGPT and interacts in real time flawlessly. There's so much going on in the world of AI news, and you can spend hours every single day trying to keep up, or you can just tune in with us. We do this almost every Monday. So welcome to Everyday AI. My name is Jordan Wilson, and I am the host. And if you're new here, well, Everyday AI, it's for you. It's to help everyday people learn what's going on in the world of generative AI and how we can all leverage that to grow our companies and to grow our careers.

Jordan Wilson [00:00:57]:
So if you're new here joining us on the podcast, thank you. We normally do this every Monday as long as, you know, we're not on the road somewhere else. Speaking of being on the road, we are on the road, but more on that here in a bit. So let's get going. And just as a reminder, if you haven't already, go to youreverydayai.com. You know, if you can't catch us every day, don't worry. We always recap each and every show in our newsletter, and we now have more than 220 back episodes on our website. So it is a free generative AI university.

Jordan Wilson [00:01:27]:
Go check that out at youreverydayai.com. Alright. But let's get into the AI news that matters. And if you're joining us live, thank you. Like Peter Scooter, thanks for joining us. If you have any questions, get them in there. It should be a fun show. There's a lot going on.

Jordan Wilson [00:01:43]:
So let's just dive straight into the news that matters for the week of March 18th. Alright. So first and foremost, Google DeepMind has introduced Sima, an AI agent that plays video games. So Google DeepMind has announced this new AI model called Sima, and it can play video games based on natural language instructions. So this model has been trained with the help of 8 video game studios and can perform over 600 basic skills, making it more efficient than models trained on one specific game. So Sima is a generalist AI agent that can play video games based on natural language commands. So the ultimate goal from DeepMind here is for the AI to be able to play games by itself. So you might be thinking, alright.

Jordan Wilson [00:02:25]:
Why does this matter? Well, this is kind of, I think, the first official foray from Google into what could be considered agents. Right? So this new initiative from DeepMind is widely being reported as Google's first semi-official foray into agents, or smart AI that can make human-like decisions in real time without any human input. Right? So that is kind of the big trend of 2024 so far: autonomous agents and, you know, this AI being able to work without much human input really at all. So with this thing from Google DeepMind, a lot of people are just looking at it as, okay, Google's, you know, creating an AI that can play video games just to showcase its ability, but it's really much more than that. This is kind of a fun and engaging testing ground for what we all expect to be the next phase of generative AI, which is agents. All the big companies are working on this, and when you combine it with humanoids, yeah, it's gonna get wild. So more on that here in a second.
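
Since "agent" gets used loosely, here is a minimal, purely illustrative sketch of the observe-decide-act loop a game-playing agent of this kind runs. Every function here is a hypothetical stand-in, not DeepMind's actual API; it only shows the shape of the idea: read the screen, ask a model for the next action given a natural-language instruction, then send the input.

```python
# Illustrative observe-decide-act loop for a game-playing agent.
# All three helpers are hypothetical stand-ins, not a real API.

import time

def capture_screen() -> bytes:
    """Hypothetical: grab the current game frame as image bytes."""
    raise NotImplementedError

def choose_action(frame: bytes, instruction: str) -> str:
    """Hypothetical: a model maps (frame, instruction) to one basic skill."""
    raise NotImplementedError

def send_input(action: str) -> None:
    """Hypothetical: translate a skill like 'open the map' into key presses."""
    raise NotImplementedError

def run_agent(instruction: str, steps: int = 100) -> None:
    for _ in range(steps):
        frame = capture_screen()                      # observe
        action = choose_action(frame, instruction)    # decide
        send_input(action)                            # act
        time.sleep(0.1)                               # pace with the game loop
```

The point of the sketch is the absence of a human in the loop: once given an instruction, the agent keeps observing and acting on its own, which is what separates this from a model that only answers prompts.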

Jordan Wilson [00:03:25]:
Alright. Our next piece of AI news. And if you caught this a couple days ago, you might have been scratching your head kinda like I was. Alright. So OpenAI's CTO gave some mixed answers on training data for its AI video model, Sora. Alright. So if you don't know, OpenAI's new text-to-video model, Sora, has sparked controversy over the sources used for its training data, and CTO Mira Murati avoided questions about the subject in a recent interview.

Jordan Wilson [00:03:56]:
Alright. So if you don't know Sora, don't worry. I'll give you the super high level overview. So OpenAI just started releasing kind of new outputs just about a month or 2 ago. So right now, it is not open to the public, but OpenAI researchers, as well as a select few designers and motion graphics artists, have used, or have the ability to use, Sora right now. So essentially, you can, with text prompts, create some very impressive AI videos, and kind of the big thing that has separated Sora from other, you know, text-to-video offerings so far is the quality. Right? So we've seen from Runway, we've seen from Pika Labs and others, the ability to create some still, you know, very usable, very inspiring, I might even say, text-to-video content with their different generative AI models.

Jordan Wilson [00:04:50]:
However, Sora blows it out of the water. So we've, you know, shown comparisons before. That's not what this is about. But, you know, in general, a lot of people were wondering, because of the high quality of the output when OpenAI just released this, you know, how did they train this? And in this interview, it's a little interesting. So we're gonna play a clip here. But it was in a recent interview with The Wall Street Journal. So OpenAI CTO Mira Murati avoided questions about the sources used for Sora's training data, claiming that it was just publicly available and licensed. Alright.

Jordan Wilson [00:05:26]:
So I'm gonna go ahead and play a clip here and let you all hear it kind of in real time. So this is from the Wall Street Journal reporter asking OpenAI CTO Mira Murati about Sora's training data. Alright. Let's go ahead and take a listen.

Reporter [00:05:45]:
What data was used to train Sora?

Mira Murati [00:05:48]:
We used publicly available data and licensed data.

Reporter [00:05:53]:
So videos on YouTube?

Mira Murati [00:05:58]:
I'm actually not sure about that.

Reporter [00:06:00]:
Videos from Facebook, Instagram?

Mira Murati [00:06:05]:
You know, if they were publicly available... yeah, publicly available to use. There might be the data, but I'm not sure.

Jordan Wilson [00:06:18]:
Yeah. Alright. So we'll go ahead and link this out from The Wall Street Journal reporter in today's newsletter. But, yeah, a lot of people were kind of taken aback by her response, as was I. You know, it almost seemed like she wasn't fully prepared for that question, despite probably knowing that would be one of the main things someone would be asking about when sitting down, just because, you know, all of the conversation so far around OpenAI's Sora has been how are they so far ahead of everyone else in terms of quality. So the training data has been a topic of debate before this interview with The Wall Street Journal. So it's definitely worth talking about. And, yeah, if you're listening on the podcast, Juan here is joining live.

Jordan Wilson [00:07:06]:
So thanks for joining. Juan said the facial reaction says it all. So, yes, when asked about the training data, Mira had a very uncomfortable look on her face, and, yeah, it's already been making the meme rounds on the Internet. But, yeah, this, to me, is a pretty big miss from OpenAI. I think if you're gonna go out and have a sit-down interview about a model that has really taken, you know, the creative, the advertising, the marketing, and the AI worlds by storm, you have to be prepared to answer some basic questions about training data. So, yeah, the facial reaction, it's kind of like what Mike 4G here is saying live as well. That mouth drop is a guilty signal. Yeah.

Jordan Wilson [00:07:49]:
It was a little cringey if you were watching on the livestream. But, yeah, don't worry. If you didn't see that yet, we're gonna be linking that out in the newsletter. Alright. Let's keep going with more AI news that matters, and a big one. A big one. I actually intentionally plopped this in the middle of the show just in case a couple of y'all joined in late. Alright.

Jordan Wilson [00:08:09]:
So Apple has released a paper previewing its large language model, MM1. Yes, that's right. So it's an interesting approach here on Apple's release of this, but let's just go ahead and go over the details first. So Apple researchers have released a paper previewing their new large language model, or actually their new MLLM, or multimodal large language model. So, yeah, we're gonna be hearing and shifting that conversation, probably over the next year, from LLMs to MLLMs, the difference being, you know, the multimodal inputs. So Apple's research team has developed a new, highly capable multimodal large language model called MM1.

Jordan Wilson [00:08:54]:
Alright. So, interesting naming so far. So we're not sure if this is gonna be the name when and if Apple finally releases this. However, it's a little confusing, you know, in my opinion, just because Apple's new chips, you know, that they debuted about 2 or 3 years ago, were called the M1. So there was a lot of talk around M1 chips, and now you have the MM1 model. So, again, we're not sure if that's what it will ultimately be called, but that is what it is being referred to as now. So right now, this model comes in 3 sizes and outperforms most competitors on tasks such as image recognition and natural language reasoning, but it still lags behind Google's Gemini and OpenAI's GPT-4.

Jordan Wilson [00:09:40]:
So right now, MM1 is a multimodal large language model developed by Apple, like I said, in 3 different sizes. So they have a 3 billion parameter model, so much smaller, a 7 billion parameter model, and a 30 billion parameter model. So even the larger one, at least right now at 30 billion parameters, is still a fraction of the size of, you know, Google Gemini Ultra 1.5, as well as GPT-4 Turbo, which is reportedly 1.8 trillion parameters. Alright. So, again, right now, as far as I know, at least as of, you know, a couple hours ago when I checked last, you can't go out and use this model. It is not publicly available. So all we have right now, and which is important to talk about, and which I think is why it hasn't been grabbing so many headlines just yet, is a research paper kind of showing some different results, inputs that they generated and outputs that they were able to get from those inputs, and kind of looking at the multimodal aspect of it. And I think there's one thing that Apple is seemingly stressing about its model's focus, in the paper at least.

Jordan Wilson [00:10:51]:
And as you'll see on screen here, it is the ability to work seamlessly both with text within images and with images within text, which I think, you know, if you want to have a highly capable multimodal large language model, it has to be able to both read and understand text in photos as well. So it seemed like that was one key differentiator that Apple was really pushing with its new MM1. But, yeah, I'd love to hear... yeah, a big agree with Carolyn here saying it's a big reveal. So I did kind of mention that because it was interesting. Right? Because we've been hearing about this now for months. Right? And I've said it all along. Apple is never first to the party.

Jordan Wilson [00:11:39]:
Right? So, you know, we didn't expect Apple to release a large language model, you know, months after ChatGPT or anything like that. Apple has been historically known to not be the first person at the party, but to be the coolest kid, and to usually have the most polished interface, the best user experience, etcetera. So there have been many reports that Apple has been spending millions, yes, millions with an s, millions of dollars a day on development of their generative AI, on development of their large language model. So, again, we're not sure if this is all they have, if this is just an early iteration, and what may eventually make its way to our devices. Right? That's what is most important. And I think, you know, the obvious thing on why this matters for all of you out there listening is because of Siri and because of the future of edge computing, edge AI, or on-device AI. Right? And I think, you know, even by looking at the parameters of the model, you have to think that that's where this is heading.

Jordan Wilson [00:12:42]:
Right? So the reason why, you know, something like GPT-4 Turbo is so incredibly powerful is because it is a huge language model with, you know, reportedly 1.8 trillion parameters. So when you look at these 3 reported sizes, or these 3 published sizes that were in the paper, of 3 billion, 7 billion, and 30 billion parameters, presumably these are models that could fit on devices, similar to how Google's Gemini Nano now lives locally on the Samsung S24 phone. And that really changes what you can do with a large language model. It changes, you know, the capabilities of generative AI, by being able to run something locally. Right? Like, as an example, over the last 2 weeks, I've been on probably 25 hours' worth of flights, or maybe 20 hours of flights, I can't do the math right now. But, you know, I couldn't use a large language model. I probably would have liked to, but, you know, you have to have a very fast, you know, Internet connection, which, as an example, airplanes don't.
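
To put rough numbers on why those parameter counts matter for on-device AI: weight memory scales with parameter count times bytes per parameter. Here is a minimal back-of-envelope sketch, assuming 16-bit and 4-bit weights and ignoring activation and KV-cache overhead; the 1.8 trillion figure is just the reported, unconfirmed GPT-4 size mentioned above.

```python
# Back-of-envelope weight memory: params x bytes per parameter.
# Real runtimes need extra room for activations, KV cache, and buffers.

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes."""
    return params * bytes_per_param / 1e9

models = {
    "MM1-3B": 3e9,
    "MM1-7B": 7e9,
    "MM1-30B": 30e9,
    "GPT-4 (reported)": 1.8e12,
}

for name, params in models.items():
    fp16 = weight_memory_gb(params, 2.0)  # 16-bit weights
    q4 = weight_memory_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{name:>18}: ~{fp16:,.1f} GB fp16, ~{q4:,.1f} GB at 4-bit")
```

By this rough math, a 3 billion parameter model needs only around 1.5 GB of weight storage at 4-bit, which plausibly fits in a phone's memory, while a reported 1.8 trillion parameter model would need terabyte-scale storage for weights alone, which is why it stays in the data center.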

Jordan Wilson [00:13:46]:
But, you know, I think when you talk about Apple, where this really is important is the next iPhone. Right? So we've been talking about WWDC, Apple's Worldwide Developers Conference, in June. Presumably, they will be announcing what they're going to be doing with this, whether it is this MM1 large language model, whether they're going to be releasing a couple, but presumably what's gonna happen is this: you're gonna have a Siri that is actually smart. Right? So, you know, I know I'm hard sometimes on smart assistants like Siri and Alexa. But if you use large language models a lot, like I know a lot of our audience does, and then you use something like Siri or Alexa, there's a lot to be desired. However, I do think, even just by looking at the parameters and the sizes of these three flavors of MM1 from Apple, you have to think that this is coming to the edge. This is coming to on-device AI.

Jordan Wilson [00:14:42]:
And to be able to run a model locally, as an example, on your iPhone, on your smartphone, is really a game changer in terms of productivity and also how we all interact with generative AI. And that's something that I talk about a lot here on the show, because I don't think people fully realize or understand even the importance of something like prompting. Right? So what happens now when we have, you know, maybe MM1 coming to our phones? You have to be able to learn how a model works and how to work with it to get the most out of it. So this is really gonna be a common theme that we're gonna continue to see. But we did expect, you know, kind of some more announcements from Apple. So it was an interesting announcement that they did this via a research paper release. It's not the normal Apple playbook. Right? So, normally, you get complete silence and you get a big reveal at a conference, or maybe you get some leaks ahead of time, something like that, you know, a spicy promo video that makes people ooh and aah. So this is an interesting approach here from Apple to go the scientific paper route.

Jordan Wilson [00:15:50]:
Personally, I like it. Right? We even saw with Google, with their original Gemini, I think they really botched its release. You know, a lot of people, myself included, really criticized Google heavily for a heavily edited and, to tell you the truth, not super truthful marketing representation of their Gemini model, you know, in their original marketing video that came out, I believe, like, December 5th. It was kind of showing all these, you know, capabilities, or reported capabilities, for Google Gemini that it could not actually do. So I don't hate the approach here from Apple, but it's an interesting approach, nonetheless, to go the scientific paper route. Alright. And hey, let me know from the livestream audience.

Jordan Wilson [00:16:35]:
What do you guys think? What do you guys think of Apple's approach here? Do you like it? Is it confusing? Have you read the paper? Again, we'll be linking to the paper. If you can't find it, don't worry. We already dug it up off the Internet. So we'll be linking to that in our newsletter today as well. So make sure you sign up at youreverydayai.com. Alright. Our next piece of news, and this one, as crazy as it sounds, this one might be the biggest piece of news that we're talking about today. So Figure has demoed its ChatGPT-powered humanoid bot, Figure 01.

Jordan Wilson [00:17:12]:
So we've talked about Figure 01 here on the Everyday AI show a couple of times, but up until a couple of days ago, when this demo was first released, all we really had, or all we really saw, were specs. You know, all we saw and heard were promises. So, anyways, let's go into a little bit on Figure 01, and then we're actually gonna play a little demo as well, so you guys can hear and see and watch this too. But Figure 01 is equipped with advanced AI and visual recognition capabilities, and it showcases its ability to perform tasks and engage in conversations with a human. That's the key differentiator, and it's potentially revolutionizing many industries. Alright. We have to pay attention to this. It is going to, I think, be a... I hate using the word game changer, so I will just say this.

Jordan Wilson [00:18:04]:
I think it is going to greatly impact the future of how we all work. So the Figure 01 robot is equipped with what Figure is calling a new visual language model, or VLM. We have so many acronyms today. Right? We have LLM, MLLM, and now we have visual language model. So the visual language model allows Figure 01 to smoothly interact with humans and perform tasks simultaneously. So its developers over at Figure include those that have worked at companies like OpenAI, Boston Dynamics, and Tesla, and they have very ambitious goals for their humanoid robots. And yeah, billions with a b. Billions of robots.

Jordan Wilson [00:18:48]:
Yeah. Crazy. Right? But this is also why they have some of the largest investors in the world. I believe we talked about this on the show 2 weeks ago, but Microsoft and OpenAI are both huge investors, along with NVIDIA and Salesforce. I mean, it's a who's who of who is investing in Figure and their new Figure 01. So let's just go ahead and show a quick little demo here. And if you're listening on the podcast, I'm gonna play about a minute of this. Like I said, we're gonna go ahead and link this full video in today's newsletter. But, hopefully, even if you're listening on the podcast, you can kind of understand what we're talking about just from this short little clip.

Jordan Wilson [00:19:29]:
Let's go ahead and take a watch and take a quick listen.

Robot [00:19:42]:
I see a red apple on a plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table.

Person [00:19:52]:
Great. Can I have something to eat?

Robot [00:19:57]:
Sure thing.

Person [00:20:05]:
Great. Can you explain why you did what you just did while you pick up this trash?

Robot [00:21:21]:
On it. So I gave you the apple because it's the only edible item I could provide you with from the table.

Person [00:21:34]:
Great. So based on the scene right now, where do you think the dishes in front of you go next?

Robot [00:21:45]:
The dishes on the table like that plate and cup are likely to go into the drying rack next.

Jordan Wilson [00:21:53]:
Y'all, was anyone else just extremely impressed with that demo, or is it just me? So let's talk about it if you weren't joining us live. If you're listening on the podcast, you can probably assume what was happening there. But, again, this is said to be real time, which is always important, because, you know, again, like we saw with Google Gemini and with just all new generative AI systems in general, you always have to know exactly what is going on behind the scenes. So presumably, this is in real time. Yeah. And like Juan is saying here, and everyone... yeah, everyone's saying wow. Monica's saying unreal.

Jordan Wilson [00:22:35]:
Cecilia with the PG-13 language, edited though, so I love it. Yeah. And Tara says sign me up, love his voice. So, I mean, what is literally happening here, presumably in real time, is that Figure 01 is powered by ChatGPT. So aside from the fact that, yes, this seems like a very realistic conversation, what's wild to me is, you know, when we talk about AI and the capabilities of AI, I think something that we overlook is having someone that can help with everyday tasks.

Jordan Wilson [00:23:09]:
So on that side, hey, who wouldn't love, you know, a humanoid robot? I think all it takes right now is, like, a quarter million dollars and probably a minimum order. But who wouldn't love, you know, a humanoid robot to be able to go around and put your dishes away? Right? Like, that would be fantastic. Or to be able to fold laundry. So speaking of that, I do have to now draw a comparison, because with Figure 01, at least with the demo so far, it can operate independently and is not necessarily preprogrammed. Speaking of folding laundry, it was Tesla's Optimus bots that went viral a couple months ago for simply folding a t-shirt, but that was something that it was programmed to do. So I can't state loudly enough how impressive this demo is with Figure 01, powered by ChatGPT.

Jordan Wilson [00:24:03]:
Here's why. We've been talking about, you know, robotics on this show for a while. We actually have a great robotics show for you later this week, which I'll get to here in a second. But I think that for the most part, you know, these humanoids, or robots, or whatever we're supposed to be calling them, have been very limited in scope. So they've maybe been trained to perform a series of tasks. But for the most part, that is not a relationship per se, or that is not something that is really applicable across different fields, whether it's in the home, work automation, manufacturing, etcetera. Because guess what happens in real life? Things don't go according to plan. So, you know, as an example with Tesla's Optimus bots, if it was folding laundry and, you know, there was a huge gust of wind, could it continue to fold the laundry? I don't know.

Jordan Wilson [00:24:55]:
Presumably, it could. Right? But that's why I'm super impressed so far with Figure 01 and what it's been able to do, because, as you just saw there in the demo, or as you just listened to, it is having a real-time conversation with a human. Again, presumably it's real time, a real-time human conversation and interaction. And not just that, but it's able to explain its rationale. The thing I love is in that demo, when the gentleman asked Figure 01 kind of why it did what it did, he asked it to answer while putting trash away. Guess what? I hate to admit this, but I wouldn't be able to do that. I would make a mistake.

Jordan Wilson [00:25:36]:
I am terrible at multitasking. Right? Like, if my wife is asking me, you know, a question while I'm putting dishes away, I'm definitely either gonna put the dish away in the wrong spot or I'm not gonna be able to fully process her question. So that is why I think it is really impressive, because in this demo, Figure 01 was able to not only complete a task, but also to hold a conversation about something that was unrelated to the task. So when we talk about the future of generative AI, when we talk about even, you know, the real-world application of large language models. Right? This is powered by ChatGPT. This does not work without, you know, a large language model like ChatGPT. So it is using, you know, its ability to process information and then to speak that information as well. And if you didn't check out our kind of AI in 5 video last week, OpenAI and ChatGPT did just unveil a whole lot of new features for its ability to speak back with you, with some much more realistic human voices.

Jordan Wilson [00:26:42]:
So if you heard that voice there, super realistic. It doesn't sound very AI-generated. Kind of a smooth voice. Right? Yeah. It's kind of like what Juan's saying, having Rosie the Robot back from the Jetsons. Yeah. And Tanya also can't wait till she can afford one of these to clean her garage. Yeah, I would love for it to clean my garage.

Jordan Wilson [00:27:01]:
Alright. Let's wrap this up and talk about one more piece of AI news that matters, and that is the NVIDIA GTC. Alright. If you are joining live, you might notice my setup's a little different here. Or maybe even if you're listening on the podcast, maybe my audio quality isn't as crisp as normal. But that is because I'm on the road right now, just a couple blocks away from NVIDIA's GTC. But let's talk about what's going on. So NVIDIA is having its first in-person GTC conference in 5 years, held right here where I am today in San Jose, California.

Jordan Wilson [00:27:39]:
And this event will bring together AI leaders and top companies across various industries to showcase breakthroughs and advancements in AI. You know, we have the Jensen Huang keynote today, which should be extremely exciting. I'll be there. So if you have questions, I'm probably either gonna be on LinkedIn or maybe on Twitter, so make sure that you're following us there and ask questions too. What do you want to know? I believe I'm gonna have an opportunity tomorrow to do a kind of informal Q&A in a small group with the NVIDIA CEO. So I'd love to hear from you. What do you want to know from NVIDIA? What questions do you have? Where do you see everything going? So, pretty exciting. And this is, I would say, one of the most anticipated tech conferences. I'd say it's gotta be top 5 in the last 10 years. Right? NVIDIA hasn't had an in-person conference since 2019 because of the pandemic.

Jordan Wilson [00:28:38]:
But also, I mean, here's the reality as well. Right? 5 or 6 years ago, a lot of people maybe thought of NVIDIA just as a chip maker. Right? Maybe you thought of NVIDIA if you're a gamer and, you know, you need a certain chip, or if you're a video editor and you need a better graphics card. Right? That is not what NVIDIA is anymore. I had an episode last year where I literally told people that NVIDIA was the most important company in the United States, and it was the most important company to the American economy. Guess what happened after, you know, after that show? NVIDIA has almost tripled in market cap in less than a year, which is historic. And the reason why this is such a highly anticipated conference is because right now, NVIDIA has an unfair advantage over everyone else. All of the, you know, large language models.

Jordan Wilson [00:29:25]:
Right? So even, you know, presumably, the Figure 01, because it's powered by OpenAI, and one of OpenAI's, you know, biggest partners is NVIDIA. So everyone is using NVIDIA's GPUs to power the future of AI, to power large language models, to power generative AI image and video models, everything. So NVIDIA is actually the epicenter of the future of tech, marketing, and business and how business is getting done. So it is a highly anticipated conference. I'm extremely excited for today's keynote as well as the conference. Alright. So speaking of the NVIDIA GTC conference, if you haven't already, go ahead and check out our show notes.

Jordan Wilson [00:30:06]:
We have a link on there where you can register for free, unless you're from San Jose or San Francisco. Alright. So again, check out the link we have here in the description of the livestream and in our newsletter and our podcast. You can go ahead and sign up. And at the same time, you know, we have instructions there to enter into a giveaway for a free GPU. So, yeah, maybe your computer's a little slow, or maybe you wanna run NVIDIA's new Chat with RTX, but you need a certain chip to do that. So go ahead and, you know, just by signing up for the free conference, you can enter into the giveaway. Alright.

Jordan Wilson [00:30:45]:
So here's what we have this week. Speaking of NVIDIA, I told you I'm in a different location. Yeah. So here's what we have: a lot going on this week, some special shows, some exclusive interviews that I am extremely excited about. And also, if you are an avid, you know, viewer of our livestream, I can't thank you enough. But today, or this week, we're gonna be doing double duty, even today. So make sure to, you know, check out our newsletter that we sent out last night with our complete schedule.

Jordan Wilson [00:31:16]:
But tomorrow, we will be having Malcolm deMayo, the vice president of global financial services at NVIDIA, talking about making money moves and how NVIDIA is using AI to change financial services. Wednesday, we're gonna be talking with an NVIDIA partner, Evan Sparks, about how to create and capture value throughout your biz with generative AI. That is one I am extremely excited about. Everyone's always asking, how can I create value, or how can I actually use generative AI, and then how can you tell its impact? We're gonna be bringing that to you on Wednesday. And then on Thursday, speaking of robotics, we are literally talking with the director of robotics at NVIDIA. This is gonna be a great conversation with Amit Goel, talking about robots among us and how NVIDIA is building the future of robotics. And then, last but not least, on Friday: driving the future forward, NVIDIA's vision for AI-powered transportation. Alright.

Jordan Wilson [00:32:16]:
So many great conversations planned for this week. We're going double time. So maybe we had a little break from livestreams, and maybe you missed, you know, the livestreams. So we're just doubling down. We have a lot of great shows planned for you this week. I hope this show was helpful. If so, please consider sharing this.

Jordan Wilson [00:32:35]:
Repost this if you're listening here on LinkedIn. We sometimes spend hours on each show to put it together, and it takes you about 10 seconds to go ahead and click repost, or maybe share on Twitter, etcetera. Or if this is helpful, please leave us a review. So that is it. I hope to see you back even today and every single day this week for more info, more action, announcements, and breaking news from the NVIDIA GTC. And we hope to see you back every day for more Everyday AI. Thanks, y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI