Ep 233: Robots Among Us – How NVIDIA is building the future of robotics

Resources

Join the discussion: Ask Jordan and Amit questions on AI and robotics

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

Connect with Jordan Wilson: LinkedIn Profile


How NVIDIA is Transforming the Future of Robotics

If robotics is the vehicle to the future, then NVIDIA is building the engine that drives it. AI and related technological advances have produced remarkable strides in robotics, and with NVIDIA providing the underlying platform for robotics developers and companies, that future feels closer than ever.

The Power of Large Language Models

The robotics landscape is being reshaped, largely due to advancements in large language models. These AI systems can process vast amounts of data, enabling a leap in robot capabilities. From translating complex human language into robot-specific commands to simulating intricate digital environments for robotics applications, large language models are playing a game-changing role.

Emphasizing Human-Robot Interaction

A pivotal aspect of the progression toward humanoid robots is the emphasis on human-robot interaction. As robots become more prevalent, it is vital to consider how they operate in shared spaces. Robots can take dull, dangerous, and dirty tasks off human hands, freeing up time and enhancing productivity. These advances, however, call for preparing and adapting operations and IT systems for effective robot integration.

Simulation and Digital Twin Creation

Partnerships between NVIDIA and industry stalwarts such as Yaskawa and Boston Dynamics are transforming how industrial robots are programmed. A digital twin, a virtual replica of a physical robot, allows skills to be fine-tuned in a controlled environment before deployment in the real world. This not only improves the robots themselves but also saves time and resources in industries like manufacturing and logistics.
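The digital-twin workflow described above can be sketched, under heavy assumptions, as a loop that evaluates a candidate policy in simulation and gates real-world deployment on a success threshold. The simulator below is a random stand-in, not the Omniverse or Isaac Lab API, and the 95% bar is an illustrative choice, not an NVIDIA figure.

```python
import random

def simulate_trial(policy_skill: float, rng: random.Random) -> bool:
    """One simulated episode; success probability stands in for policy quality."""
    return rng.random() < policy_skill

def evaluate_in_sim(policy_skill: float, trials: int = 1000, seed: int = 0) -> float:
    """Run many randomized trials in the 'digital twin' and report the success rate."""
    rng = random.Random(seed)
    successes = sum(simulate_trial(policy_skill, rng) for _ in range(trials))
    return successes / trials

DEPLOY_THRESHOLD = 0.95  # assumed acceptance bar for illustration only

rate = evaluate_in_sim(policy_skill=0.97)
print(f"sim success rate: {rate:.3f}")
print("deploy to real robot" if rate >= DEPLOY_THRESHOLD else "keep fine-tuning in sim")
```

The point of the gate is economic as much as technical: thousands of cheap simulated trials replace expensive, potentially unsafe runs on physical hardware.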

Clearing the Path to Mass Deployment

The journey to mass robot deployment is not without challenges. The complexity of the physical world, the need for large amounts of training data, and the gaps that appear in real physical implementation all have to be addressed. However, the pace of innovation in robotics is impressive, and rapid iteration, enabled by faster training, is accelerating the deployment of these technologies.

Bridging Gaps with Generative AI and Language Models

One challenge in deploying robots is translating human instructions into a format the robot can act on. The advent of generative AI and large language models is bridging this gap, making it possible for everyday people to interact with robots using natural language.
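One practical consequence of natural-language control is that a model's output has to be checked before a robot executes it. The sketch below is a hypothetical validation layer: the model is asked to reply in JSON, and the robot side accepts only a whitelisted set of actions, so malformed or unsafe output is rejected rather than executed. The action set and schema are invented for illustration.

```python
import json

ALLOWED_ACTIONS = {"pick", "place", "move", "stop"}  # assumed action set

def validate_command(raw: str) -> dict:
    """Parse and validate the model's JSON reply; raise on anything unsafe."""
    cmd = json.loads(raw)
    if cmd.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')!r}")
    if not isinstance(cmd.get("target"), str) or not cmd["target"]:
        raise ValueError("missing target")
    return cmd

# A well-formed reply passes through...
ok = validate_command('{"action": "pick", "target": "can"}')
print(ok)  # {'action': 'pick', 'target': 'can'}

# ...while an unknown action is refused before it reaches the hardware.
try:
    validate_command('{"action": "self_destruct", "target": "all"}')
except ValueError as e:
    print("rejected:", e)
```

This kind of narrow, auditable boundary between a generative model and a physical actuator is one plausible way to build the trust the article describes.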

While the rise of robot technology may feel unsettling to some, it is worth recognizing the opportunities such advancement offers. Robots, powered by AI, are going to be part of our future, and the sooner we embrace this evolution, the more we stand to gain. NVIDIA is building the foundation for robotics today, bringing our wildest imaginings closer to reality. Let's be ready to welcome the era of robotics!

Topics Covered in This Episode

1. Robotics and AI Advancements
2. Public Perception Towards Robotics
3. Deploying Tasks to Robots
4. Use of Generative AI in Robotics
5. NVIDIA's Role and Impact in Robotics


Podcast Transcript

Jordan Wilson [00:00:03]:
It seems like in the past couple of weeks, robotics have been everywhere. And that couldn't be truer than this week, as we've seen here at the GTC conference with NVIDIA. I'm extremely excited today to talk about the robotics industry, what's been going on, and some recent announcements from NVIDIA with the man himself, but more on that in a second. As a reminder, if you're joining us live, thank you very much. Please make sure to check out the show notes if you're on the podcast, because it is not too late to register for free for the NVIDIA GTC conference. So you can do that. You can go watch sessions, win a GPU. You know, there's DLI credits so you can go out and learn a lot of things, so make sure to check that out in the show notes and in the newsletter as well.

Jordan Wilson [00:00:54]:
Alright. And you can always get that information on our website too at youreverydayai.com. Alright. Enough of that. Let's get into why you showed up today, why you're listening to the podcast. It's to hear about the future of robotics, and there is so much going on, especially, it seems like, in the past couple of weeks and in the past couple of days here at NVIDIA GTC. So with that, please help me welcome our guest for today, Amit Goel, the Director of Robotics at NVIDIA. Amit, welcome to the Everyday AI Show.

Amit Goel [00:01:22]:
Thanks, Jordan. Thanks. My pleasure to be here today. Yeah.

Jordan Wilson [00:01:24]:
Can you tell us just a little bit about what you do in your role at NVIDIA?

Amit Goel [00:01:29]:
Sure. So I head up our ecosystem team for robotics, and product management for our embedded products that go into the robots. Been at NVIDIA for about 13 years. And so my team is working with all the partners that you see that are building the next generation of AI robots.

Jordan Wilson [00:01:48]:
And I'm sure, you know, over the course of 13 years working at NVIDIA, there's been a fair share of excitement and new advancements. But how do you feel about the state of robotics today? At least from an outsider, from an everyday person like myself, it seems like the sector is just exploding. Is that how you feel as well, or does it always just feel like this?

Amit Goel [00:02:14]:
No. I think, you know, with the coming of AI, robotics has been making steady progress. But over the last 6, 12 months, we have seen sort of a shift that has happened, an inflection point, if you may, where people are starting to realize that there is a lot that can be done, and a lot of that can be attributed to the large language models and generative AI. The fact that you can now have a lot of data and use that to teach robots new tasks is just a turning point. The robots have been there for a while. AI has been helping, but they have typically still been single-function robots. Right? Amazon talks a lot about their robots in the warehouses that are picking and packaging things for you. But that robot, if you ask it to make a coffee for you, can't do it.

Amit Goel [00:03:13]:
Right? So that is the change that has happened. The change where now, with the large models, you can really train a robot on multiple modalities, understand from what you and I are doing. Right? You don't need a PhD to be programming a robot. That's what is going to really unlock the capabilities of robots for everyone.

Jordan Wilson [00:03:37]:
That's like, if you're watching now, what Amit just said is amazingly accurate in putting everything into perspective and why now. But, you know, I wanna dig in even deeper, because what people may or may not realize, robotics is not new. Right? Robotics have been used across many industries and verticals for decades. Why, though, that inflection point now? You know, you said over the last couple of months. Is it just because of large language models, or are there other factors that are kind of bringing the robotics industry to this inflection point?

Amit Goel [00:04:11]:
I would say it's a confluence of several things that have made this possible. Right? At the heart of it is, of course, this ability of large language models to consume huge amounts of data and learn from it without somebody actually having to annotate everything. Right? You may have seen the work from NVIDIA Research called Eureka, where we taught a hand to spin a pen. In the past, even just telling the robot what it needs to do was so hard. Right? Now you can type, I want you to move this pen with your fingers, and all the complex math underneath it is taken care of by a large model. Right? So that's one thing. The second thing that has happened is, in order to develop these robots and test these robots, you can't really do that in the physical world. These robots are expensive.

Amit Goel [00:05:13]:
In the physical world, you have to be careful, be safe around it. So simulation has come a long way. We started working on this platform called Omniverse, which is designed for simulating things, and we announced, you know, Isaac Lab. This is where you can safely and very quickly test these robots. Right? So that's the second thing that came out. And the third thing was, in general, the computing power. Right? To run these big models, to run all the AI capabilities, you need to have the computing power in the robot. You cannot be running its brain somewhere else in the cloud, because it needs to just make a decision if it needs to stop moving its hand or not.

Amit Goel [00:05:57]:
So I think the combination of those three things: the large language models, a great simulation environment to test these robots before you put them in the real world, and the third, all the horsepower that you need to do the processing. It all came together, and that's what has kicked off this, you know, big bang of a new generation of robots.

Jordan Wilson [00:06:20]:
Yeah. And it's definitely that. You know, even if we look out here on the GTC floor, you see robotics everywhere, maybe that you wouldn't expect if you were at the GTC conference 10 years ago. But I do wanna zoom out a little bit and talk specifically about NVIDIA's role in all this, because you're not technically making the robots. So tell us a little bit about how NVIDIA works. It seems like there's so many moving parts, both literally and physically. Right? Or, you know, metaphorically, I guess. But how does NVIDIA's role play into the grander scheme?

Amit Goel [00:06:57]:
That's a great question. Yeah. I mean, when people look at all our demos and things that we show for our technology, they often get confused. Like, look, NVIDIA's making a robot. We're making a humanoid. We're making a mobile robot. That's not what we do.

Amit Goel [00:07:14]:
Our mission is to be the foundry. Right? Be that platform where the robotics developers, the people who are making the hardware, the people who are making the end application, can come to this foundry with their use case, with their data, with their robot hardware, and take back the intelligence for those robots. Right? So that's NVIDIA's role: being that underlying platform on which people can bring their stuff, build the final thing, take it back, and deploy it in the real world.

Jordan Wilson [00:07:48]:
What does that actually mean? Right? Like, you know, when we talk about Isaac Lab, I guess, inside or underneath the NVIDIA Omniverse, what does that actually mean for, you know, the world of manufacturing, production, etcetera? How does that change what companies and enterprises can do if they bring their simulation into Isaac Lab?

Amit Goel [00:08:16]:
That's a great question. For example, if you see the announcements that we had this week at GTC, I'll highlight a few. We had an announcement with Yaskawa. Right? Industrial robots that you will see in a lot of manufacturing, a lot of logistics places. And typically, the way those robots were programmed was you get the robot, somebody goes there, spends several weeks, months programming them, and then you have one application done. Now with Isaac Lab and the work that we're doing with Yaskawa, they can create a model, a digital twin of their robot in Omniverse. They can use the foundation models. These models already know how to pick things, but they don't really know how to pick things for that particular robot. Right? So that's the adaptation that they need to do specific to their hardware. So these are things that we call fine tuning. Right? So they can create a digital twin. They can fine tune them and take out of it a working skill or working task. And that's what we demonstrated here with Yaskawa.

Amit Goel [00:09:28]:
They worked with us, brought their robot in, all the secret sauce of their robot, trained it in the lab, right, in the virtual lab, and then deployed it to the real world. Similar things that we did with Boston Dynamics. They have probably the best team to make some of the amazing robots; their mechanical design, the dynamics, the software that runs on the robot is fantastic. But every new skill to be added to the robots took a long time, and together with them, we announced an end-to-end platform. You can take the Boston Dynamics robot. They worked with us to create a simulation digital twin of their robot. And now you can make it run on the sand. You can create a snow environment.

Amit Goel [00:10:16]:
You can create a forest environment, learn all those things. And when it's learned, you take this brain back and put it on an NVIDIA Jetson, right, put it on the robot, and you have your skill. So that's essentially what it, you know, looks like. Bring your hardware, bring your use case, create the environment, test it, take it back to the real world.

Jordan Wilson [00:10:38]:
Yeah. I think the only downside of that is we may get fewer of those viral videos that we always see of the Boston Dynamics, you know, robot dog running across all different surfaces. But, you know, I'm really interested, because even public perception around robotics, it seems, is also changing at the same time. What does it mean as we transition from, you know, general robotics to what people are now calling humanoids? Is that single-use, you know, Amazon warehouse robot still general robotics, but when a humanoid-type robot can do anything, is that when it's a humanoid? Can you explain, you know, what that even means as we kind of look forward into a future of these humanoids?

Amit Goel [00:11:25]:
Yeah. If you look around you, you don't see many robots. Right? And even the ones that you see, probably the Roomba in your house, the challenge has been human-robot interaction. Right? You and I cannot even explain what's going on in the Roomba's mind; it just keeps going around the same place sometimes, over and over again. Right? So, in order for robots and people to coexist, there are 2 important things. They need to operate in shared spaces. The world was created for us, for people. Most of the world has been created for us.

Amit Goel [00:12:08]:
Some factories are created for a lot of robots. The new warehouses are created for robots, but the majority of the world is created for humans. So they need to exist in a form factor that can live around, you know, people. And the second thing that is important is they should be able to engage with people. You should be able to understand what the robot is doing, and the robot should be able to understand what you want it to do. Right? So that's essentially why the humanoid form factor is so interesting. We have a lot of data on humans.

Amit Goel [00:12:45]:
Right? You and I doing our things, there is an ample amount of data for that. And for generative AI, the one thing that you need is a lot of data. So there is a lot of data for humans, and that helps with the humanoid form factor. Again, these are designed with the dexterity and the capabilities so they can exist around people in the shared spaces. And the last piece is, with the large models, with this generative AI, you can really engage with them. You've seen some of the recent work where you can tell the robot what to do. You can even ask how it did it. Right? So you can have more trust and comfort around these machines.

Jordan Wilson [00:13:26]:
Yeah. You talked about the need for, you know, a space for humans and robots to kind of coexist. I think maybe for people like you and I, that's exciting. Right? But for some people, it might make them feel uncomfortable. I mean, should people feel afraid of coexisting, you know, with humanoids? And what does that look like? Or maybe what can you say to those people who do feel a little uneasy sharing that space?

Amit Goel [00:13:57]:
I think, you know, if you look at the data right now, there are just so many jobs where we don't have people to work on them. And all of these robots will need human supervisors. Right? So, instead of actually doing some of that work, anybody who's worried, they should not be, because they would actually be telling these robots what to do. Right? So you will still have human supervisors determining the task, determining what the robots are expected to do. So there's just a lot of need for those skills. Robots have been great at, you know, dull, dangerous, and dirty tasks that we'll be able to offload to them, and still keep monitoring them and control them and tell them what is needed.

Jordan Wilson [00:14:47]:
You know, aside from those, you know, dangerous and dirty tasks, which, I just love the alliteration there, it makes it so easy to see where robots can go in the future. But I also assume that there's just more, you know, repetitive tasks that a lot of humans are doing right now. And maybe that's the reason why a lot of people, you know, are scared of AI or are not fully on board with robotics. You know, I guess in that situation, when you're talking about manufacturing, production plants, etcetera, especially that kind of low-hanging fruit where these humanoids can be placed, for those humans in those industries, where should they even be focusing their time and attention right now? Because you can sit around and say, oh, well, you know, what happens if a humanoid is out here doing this in 2 years, or you can do something about it. So where would you focus, or where would you recommend people, you know, maybe in those industries, start focusing their time and attention?

Amit Goel [00:15:49]:
I think the first thing is to identify these tasks. Right? That's the number one thing to do: identify what are the pieces of your operations that you could automate, and start preparing for it. Right? You know, bringing automation is not just buying a robot and putting it out there. You need to prepare your operation, your IT systems, to work with those robots, and you need to prepare your workforce. So I think the first thing is, you know, knowing what these robots are gonna be capable of, right? Not assuming that they'll do everything, you know, all your job; that's not gonna happen anytime soon. But they will be really good at, you know, picking things up from a cart, loading it up somewhere else. Right? We're seeing what's happening in warehousing: picking things, putting them in the packages, shipping them out.

Amit Goel [00:16:42]:
So there are a lot of these things which are gonna happen in the near term. Right? So identifying those tasks, and then working with this ecosystem to help define those tasks, and preparing the environment for those things. So I think those are some of the areas where they should focus, and not be, you know, worried about it at all.

Jordan Wilson [00:17:05]:
Yeah. You know, the NVIDIA CEO, your CEO Jensen, said yesterday in a session that the ChatGPT moment for robotics is near. It's soon. What are your thoughts on that, and what do you think that moment will actually be for the robotics industry, or the humanoids industry? Like, what is that ChatGPT moment, you think?

Amit Goel [00:17:29]:
Yeah. You know, what was different about ChatGPT versus, you know, prior AI was who could use it. Right? I think the capabilities were, of course, amazing. Right? But how accessible it was, was the important piece as well. So I think for me, when I think about the ChatGPT moment of robotics, I think about, you know, somebody who does not have a robotics background. If they feel comfortable that I can use a robot, I can program a robot, I can tell it what I want it to do, I think that would be the big moment where you'd say, okay, you know, we've got there.

Amit Goel [00:18:13]:
Right? Instead of that handful of specialist robotics PhDs and master's degree people who can really work with these robots, when most people would feel comfortable using these robots, operating these robots, telling them what to do, I think that's what would be, in my mind, the moment.

Jordan Wilson [00:18:34]:
What do you think still has to be done or still has to be accomplished in order to get there? Because, you know, at least for me, when I was listening to the keynote earlier this week at GTC. And again, as a reminder, if you're listening live, you can sign up and rewatch that keynote. Great 2 hours. You won't regret it. But it seems like all of the pieces to get to what you just described are either there or were recently announced. So what are still the obstacles to get to that moment?

Amit Goel [00:19:05]:
I think we've seen, like, from the AV industry, for example. Right? The physical world is a lot more complex. There are just a lot of variations, and there are real implications of things not working out. Right? So, you know, while we have the key technology pieces, getting to that mass deployment will require a lot, especially because these are physical things that have to exist around people and be doing stuff. I think there is a lot to be discovered on, actually, you know, what happens when you put them in the real world. So that's that aspect. The second is the data. Right? In order to realize these language-model-based, and really generative-AI-based, robots, we'll need a lot of data.

Amit Goel [00:20:00]:
And a lot of work is happening right now on how we can bring that data. And, again, you know, we are seeing people creating big farms of teleoperated robots so they can just keep them doing things and collect that data. There's work happening on learning from imitation. Right? How are we doing those things and translating that? So I think there's a lot of work that needs to happen on the data side, and a lot of work that has to happen on real physical implementation, the gaps that we will see when we have put it on the physical system. And then, yeah, I think, you know, those are the 2 big things that we still need to solve. But I think, as you said, all the key building blocks are there to get us to that promised land.

Jordan Wilson [00:20:44]:
Yeah. And I think, you know, in his keynote, Jensen talked about, you know, even this transition from Hopper to Blackwell, and what that means. You know, the example that he gave is, you know, training a 1.8 trillion parameter GPT-4 model in, you know, I think a fourth of the time or something like that. What does that mean, though, for other industries such as robotics? Right? Like, when we look over the course of the coming months or quarters or year, with these GPU chips and the Jetson, you know, what does that mean for your industry and what you maybe were not capable of doing before, just because of compute? What does that open up? What do these announcements open up?

Amit Goel [00:21:32]:
It's just a question of iterations. Right? The way to get to perfection for this robot is to iterate quickly. And if you had to wait for 6 to 12 months to train a policy, and now you can do that in 6 weeks, that just means you can iterate faster. And that's exactly what we are starting to see now, right, with some of these early partners who are adopting these technologies. The pace has picked up. Right? Something that took 2 years has come down to 2 months, and, you know, over time, it'll come down to 2 weeks. So you can iterate faster, faster, faster. And that's the big impact this technology has.

Amit Goel [00:22:14]:
And the second thing it'll do is also bring down the cost. Right? In order to really deploy these, build these capabilities at scale, you need to do it in a way that is also, you know, cost effective. Right? So if it's taking you 2 years, there is no way it can be cost effective. Right? But if it's happening in 2 months, 2 days, it's gonna be cost effective. So the scale will come in. It'll allow people to iterate fast. It'll allow people to scale their, you know, learning pieces.

Amit Goel [00:22:48]:
And the 3rd piece is it'll help bring the cost down to something that is, you know, more commercially viable in the long term.

Jordan Wilson [00:22:57]:
Yeah. Even with that speed of development and deployment, let's say you and I are having this exact same conversation next year at GTC.

Amit Goel [00:23:06]:
Yeah.

Jordan Wilson [00:23:07]:
What do you hope we're talking about next year? Maybe just for NVIDIA, or maybe just for, you know, robotics and humanoids in general. Where do you hope we are at this point next year?

Amit Goel [00:23:16]:
Well, I think we saw a lot of humanoid robots today that were standing and doing some stuff. But I think next year when we are here, we might be talking about why are they doing that, why are they not doing something. Right? Okay. So I don't know. I think, you know, I don't wanna be forecasting things here, but the pace at which innovation is happening is amazing. And I'm super excited about what it is going to, you know, unlock for robotics, and looking forward to that. Yeah.

Jordan Wilson [00:23:51]:
And I do wanna get back to, you know, something that we talked about earlier. You know, we kind of talked about why now and why this inflection point, and one of those factors was large language models. It's generative AI. You know, I've seen some demos recently, you know, the Figure and ChatGPT kind of collaboration. Is that what you kind of see happening, you know, in the near future? Just, you know, everyday people having the ability, like you said, without necessarily a master's degree in deep learning or robotics. Is that kind of what we should be looking at as, you know, maybe being the near future, the technology's kind of already there, being able to have a robot or a humanoid help with everyday tasks just by using natural language?

Amit Goel [00:24:42]:
I think that's absolutely correct, Jordan. The way generative AI is actually helping this next generation of robots is in multiple ways. Right? One is, of course, how you are engaging with the robot. How are you telling it the tasks to do? How are you building that human-robot interaction? But in the background, there is a lot of work that's happening on, well, what does it mean for a robot when you say, hey, go find that can and put it in the trash? What does it mean in the robot's language? Right? So, under the hood, I think there is a lot of research that is happening. There's a lot of core engineering that has to happen to translate that into a robot-specific language, and this is where we announced the GR00T model. Right? The foundation model which can take text, it can take videos, demonstrations.

Amit Goel [00:25:39]:
Okay, I want you to stack this thing this way. Right? So there's a lot of that work that needs to happen in the background for translating that into a robot-specific language. And then also, the way generative AI is helping is creating all this world. The human world is big and complex. How are you gonna create all these environments in simulation? So generative AI is gonna help there as well, with the generative capabilities to create environments, create settings where you can just say, hey, I'm trying to deploy this in a kitchen with, you know, shelves this high, all these utensils should be there, and boom, it creates an environment for you to go play with. So I think there's just a lot of things that are happening on the core technology side as well to bring that future forward.

Jordan Wilson [00:26:30]:
Yeah. You know, speaking of that future forward, you know, you're in a very unique position as the Director of Robotics here at NVIDIA, and even NVIDIA's place in the grand scheme of things. Right? So you get a little bit of a taste of, you know, what your hardware partners are doing, you know, other partners you're working with in cloud and the AI side. But what's the thing that excites you? Right? Like, once we wrap up the craziness of the GTC conference, what are you gonna be excited to be working on, and what are you gonna be paying attention to in the coming weeks and months?

Amit Goel [00:27:03]:
I think, for me, in the coming weeks and months, the exciting part is the traditional robotics, like the industrial robot arms, the collaborative robots, the AMRs; they are already there today. They have a lot of potential. Right? The new things with humanoids and, you know, other embodiments will come, but we also announced 2 key things. Right? Isaac Manipulator and Isaac Perceptor, which apply to the robots of today, the robots that are operating today. So in the coming months, that is what I'm most excited about: how we will make these existing robots smarter. How can we make them easy for people to use? How can we help these companies go from hundreds and thousands to tens of thousands of robots, with AI and with the tools that we are providing? So I think that's what I'm most excited about, making those tangible things happen in the near term with the existing robots.

Jordan Wilson [00:28:08]:
And so, Amit, we've talked about so many things on today's show, from, you know, new announcements that you had here at NVIDIA GTC, to partnerships that you just unveiled, to the future of robotics and humanoids. But maybe as we wrap this up, what is the one big takeaway that you want people to walk away from this conversation with, you know, specifically on where NVIDIA is focusing and doing their work right now? What's that one big takeaway message you have for people?

Amit Goel [00:28:38]:
I think the one takeaway would be that we are building the foundry and the foundation for the world's roboticists to come and realize their dreams. Right? Building all of it takes a long, long time, a lot of key technologies. As Jensen said, the soul of NVIDIA, right? Graphics and AI and simulation. Right? All physics. These three pieces are core building blocks of robotics. We've been investing in them for so many years. So now I think we have that solid foundation on which, you know, the new generation of robots can be built.

Jordan Wilson [00:29:30]:
It's such an exciting time, not just for the industry, but how it's ultimately going to, you know, play out in the real world, in the business sector, and everything else. So, Amit, thank you so much for joining the Everyday AI Show. We really appreciate your time.

Amit Goel [00:29:46]:
Thanks for having me. I really enjoyed this conversation.

Jordan Wilson [00:29:48]:
Alright. And, hey, as a reminder, there is so much more. Make sure to not only check out the show notes or the livestream or the podcast, but also the newsletter. So go to youreverydayai.com, sign up for the newsletter, and we'll have information in there as well where you can still register for the NVIDIA GTC conference. Don't worry. It's not too late. If you slept through the first two and a half days, you can go back. Some of the sessions that we were talking about here, you can go back, watch those replays, sign up.

Jordan Wilson [00:30:16]:
You can do it for free, as well as enter into our giveaway to win a GPU, as well as DLI credits. You gotta go check that out. And you gotta also continue to join us. We have a lot more here from the GTC conference. Thank you for tuning in. We hope to see you back later today and tomorrow and every day for more Everyday AI. Thanks y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI