Ep 244: Accelerate your GenAI journey with AWS

A Roadmap to Accelerate Your Generative AI Journey

The world of generative AI can seem daunting. As businesses explore strategies to leverage this technology, the options and choices can often prove overwhelming. Harnessing the power of generative AI need not be a challenge, however, and technology giants like Amazon Web Services (AWS) are coming to the fore as accelerators on this journey.

Navigating the Generative AI Landscape with AWS

In recent years, AWS has built a significant footprint in the generative AI landscape. This is due, in part, to its multitude of services addressing the pivotal considerations organizations face when integrating generative AI. These services extend from providing a diverse set of foundation models through Amazon Bedrock to allowing organizations to differentiate in their sector by customizing models with their domain-specific data.
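Bedrock's "diverse set of foundation models behind a single API" claim is easiest to see in code: switching models amounts to changing one identifier string. The sketch below is illustrative, not authoritative — the model IDs, region, and prompt are assumptions, and the final call requires an AWS account with Bedrock model access enabled.

```python
def build_messages(prompt: str) -> list:
    """Shape a single user prompt into the Bedrock Converse API message format."""
    return [{"role": "user", "content": [{"text": prompt}]}]


def extract_text(response: dict) -> str:
    """Pull the first text block out of a Converse API response."""
    return response["output"]["message"]["content"][0]["text"]


def ask(client, model_id: str, prompt: str) -> str:
    """Send the same prompt to any Bedrock model through the single Converse API."""
    response = client.converse(modelId=model_id, messages=build_messages(prompt))
    return extract_text(response)


if __name__ == "__main__":
    # Requires boto3 and AWS credentials with Bedrock access; imported here
    # so the helpers above remain importable and testable without AWS.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    # Example model IDs only -- available models vary by account and region.
    for model_id in (
        "anthropic.claude-3-haiku-20240307-v1:0",
        "meta.llama3-8b-instruct-v1:0",
    ):
        answer = ask(bedrock, model_id, "Summarize retrieval-augmented generation in one sentence.")
        print(model_id, "->", answer)
```

Because every model sits behind the same request and response shape, comparing candidates for a use case is a loop over model IDs rather than a per-vendor integration.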

AWS also offers out-of-the-box applications, designed for those who simply require ready-made generative AI solutions. Additionally, with AWS's reliable and scalable infrastructure, businesses can feel assured that their AI journey is well-supported.

Role of Partnerships in Scaling AI Solutions

Scalability is integral to any business adopting AI. AWS has recognized this and partnered with industry giants like NVIDIA to deliver high-performance infrastructure. These partnerships play a key role in accommodating the ever-growing influx of generative AI companies and the demands they bring. They ensure that more computationally heavy models can be supported and that workloads are reliably distributed.

Keeping Up with Evolving AI Trends

The rapid evolution of AI creates a challenging environment, with new models, techniques, and software emerging frequently. The key to staying abreast of these advances lies in the hands of businesses willing to invest time in understanding the intricacies of the AI landscape. AWS offers several services accommodating this need for education and upskilling, such as Amazon PartyRock, a platform through which individuals can build apps through simple, conversational interaction.

Final Thoughts

The journey to generative AI adoption is complex yet filled with immense potential. A vast array of tools and services provided by companies like AWS removes barriers and accelerates this journey. Businesses willing to delve into this AI landscape can leverage these tools to carve out a niche for themselves in their respective sectors and stay ahead in the race of digital evolution. By starting small and steadily growing, organizations can take advantage of the power and potential generative AI presents.

Topics Covered in This Episode

1. AWS and its Role in Generative AI
2. AWS and Foundation Models
3. AWS's Involvement with Companies Implementing Generative AI
4. Future Preparations of AWS for Generative AI Developments


Podcast Transcript

Jordan Wilson [00:00:15]:
So many businesses are trying to figure out how they can accelerate their journey with generative AI. And you can look in all different directions, and sometimes the more you look, the more confusing it might be. It seems like there's so much information, so many new pieces of software and services that we should be looking at. So today we're going to be talking with an industry insider on the correct way, and one of the best ways that I think you can accelerate your generative AI journey. Thank you for joining us. My name is Jordan Wilson, and I'm the host of Everyday AI, your daily livestream, podcast, and free daily newsletter helping everyday people like you and me not just learn generative AI, but how we can all actually leverage it. And if you are joining us on the podcast, thank you.

Jordan Wilson [00:01:01]:
As always, make sure to check out your show notes for more information. And if you're joining us live, you probably see something different. Yes, we are live here in person at NVIDIA's GTC conference, where our guest today, AWS, has one of the big partner booths. I can see it from here. So, without further ado, I'm very excited to introduce our guest for today. Shruti Koparkar is a senior product manager at AWS. Shruti, thank you for joining us.

Shruti Koparkar [00:01:25]:
Thank you. Thank you for having me, Jordan.

Jordan Wilson [00:01:27]:
Alright. Can you tell us a little bit about what your role at AWS is made up of?

Shruti Koparkar [00:01:33]:
Yeah. So my role is leading product marketing for accelerated computing at AWS. And that means basically helping our customers explore, evaluate, and adopt accelerated computing solutions, powered by NVIDIA GPUs as an example, to help power their AI/ML applications, their graphics applications, their high performance computing applications. So that's my role at AWS. And for your listeners, hopefully a lot of them know who AWS is, but AWS is Amazon Web Services. In simple terms, we are basically the cloud computing division of Amazon.

Jordan Wilson [00:02:16]:
Yeah. And probably anyone watching or listening to this, whether you know it or not, AWS is probably involved in this process somewhere. Right? Depending on where you're listening, how you're listening, you're probably getting a lot of this through AWS. You just may not know it. So, Shruti, I'd love to talk a little bit about how AWS shows up in generative AI, because a lot of people may not fully understand how big of a footprint AWS actually has in the generative AI landscape. Can you tell us a little bit about how AWS actually shows up in today's version of generative AI?

Shruti Koparkar [00:02:55]:
Absolutely. And this is something that we get asked about by our customers as well, because they are trying to figure out how to adopt generative AI, how to take advantage of this technology, and get started quickly. So what I would like to do is talk about AWS and what we are doing, but from the lens of four important considerations that we've identified through our conversations with customers and with our internal experts. These are the four considerations that are important when getting started with generative AI. The first one is that there is no one foundation model that is going to rule the world, that is the right fit for every use case. And, again, for the listeners, foundation models are these big models, the large language models, the LLMs. These are all foundation models that are pretrained on terabytes of data.

Shruti Koparkar [00:03:58]:
And there are so many of them out there. There's Llama 2 from Meta. There is Claude from Anthropic. You know, every week there is a new model.

Jordan Wilson [00:04:06]:
Literally.

Shruti Koparkar [00:04:07]:
Yeah. I mean, literally. Right? And so customers really need to identify which of those models are the right fit for their use case. And for that, we have a service called Amazon Bedrock, which makes a diverse set of foundation models available via a single API. So it's just a simple API call, and you can choose which model you want and test it out with the application you're building for your specific use cases. That's the easiest place to start, especially for developers, because all they have to do is make a call to this API and they can get going. Now the second important consideration is that customers need to differentiate with their own data. Because if you think about it, all these models are available to everyone.

Shruti Koparkar [00:04:57]:
So how do customers differentiate and gain competitive advantage? It's with their own data. And so this is where, again, we have managed services such as Amazon Bedrock or Amazon SageMaker, which allow our customers to customize their models and their applications on their own domain-specific data. So think financial data, or legal data. In some cases, healthcare and life sciences, and that's a completely different modality of data. It's language, but it's sort of the language of life, through genetics and things like that. So allowing customers to fine-tune and customize their models on their own data is something we do really well, in a very private and secure manner. We often say security is job number one at AWS, because customers are trusting us with their applications and their workloads and their data. So we take security really, really seriously.

Shruti Koparkar [00:06:02]:
And so that's the second consideration. The third consideration is that customers may just want to use out-of-the-box applications, just like everyday people. Right? Like, I use a lot of generative AI applications. I didn't build those. I'm just using those. There's lots of people using ChatGPT.

Shruti Koparkar [00:06:21]:
One of my favorite applications is Perplexity. So similarly, our customers might also want just out-of-the-box applications. And this is where we have Amazon CodeWhisperer, which is basically a coding assistant. It's your coding companion. It will help coders and developers write code, be much more effective, and focus on the innovation and not the rote task of writing the code. And then finally, the fourth consideration, and this is more applicable to people who are building generative AI pipelines with us, but it's important for everyday folks to know, is that ultimately what powers all of this is reliable and scalable infrastructure, and AWS excels at this. We have infrastructure that delivers the highest performance while keeping costs as low as possible, helping customers achieve their goals in terms of the services or the applications they are trying to build. And NVIDIA, of course, is a huge partner for us in this space.

Shruti Koparkar [00:07:27]:
And that's actually the accelerated computing portfolio that I focus on. So, yeah, those are a few ways in which we show up. AWS has 200-plus services, and honestly, each of them, I would imagine, touches generative AI in one way or the other. So it's really hard to pick my favorites, but these were some of the ones that came to mind. And I think that lens of the four considerations maybe helps make it a little bit easier to follow along.

Jordan Wilson [00:08:00]:
Yeah. And you named some very well-known large language models there, ChatGPT, and Perplexity as an example, which is actually one of your customers. Right? So I'm wondering if you can walk us through an example of maybe Perplexity or something like that, of how AWS is actually powering it. Because I think a lot of our listeners, myself included, use Perplexity every single day. So I'm even curious, how does AWS accelerate, as an example, Perplexity's journey in Gen AI?

Shruti Koparkar [00:08:32]:
Yes. Absolutely. Happy to share it. So Perplexity actually spoke at our flagship event, AWS re:Invent, last year, and they shared their journey. So, for all of you folks, go check out that video. Honestly, that will do so much more justice than I can, but I'm happy to talk about it. So Perplexity, again, for some of the listeners who may not know, is basically like an alternative to a traditional search engine. Right now, when you search and you wanna learn about something, you have to go through many links and figure out which one of them has the right information and all of that.

Shruti Koparkar [00:09:10]:
Perplexity has simplified it, where you can ask the app a question and it'll come back to you with a really well-structured answer with curated sources. And all of this is powered by their large language models. So how do we help them? They've done many things, so I'm just giving one example. They fine-tuned models like Llama 2, these open source models that are available, on our P4d and P4de instances. These names sound really complicated, but what they basically mean is that these are servers powered by AWS technology as well as NVIDIA GPUs.

Shruti Koparkar [00:09:54]:
So they use this accelerated computing to fine-tune those models for their own application, for what they were trying to do. Another service, as an example, that they used was Amazon SageMaker HyperPod. Because when you are fine-tuning or training these really large models, they can't fit on one server. You have to do it across many, many nodes. Right? And you need to be able to do it in a way where, if a particular node goes down, it's brought back up really quickly or replaced with something else. SageMaker HyperPod makes that really easy. It makes this multi-node training very resilient, because it auto-detects any failures and replaces the failed node with a new one.

Shruti Koparkar [00:10:42]:
It makes the distribution easy. It optimizes performance. So those are just some ways in which we help our customers: we take care of the heavy lifting so they can focus on their innovation and their use cases. But, again, Perplexity, I just love the app, use it so much, and please go check it out. I think their CEO spoke at re:Invent last year.

Jordan Wilson [00:11:06]:
Yeah. And maybe we'll zoom in and then zoom out on this Perplexity example. It's one of these startups, talk about being accelerated, right, that's gone from launch to one of the most visited generative AI websites in the world in just about a year, give or take. But I'm sure a lot of that is being able to scale on AWS.

Jordan Wilson [00:11:31]:
Can you talk a little bit about, maybe especially, I know we have a lot of people in startups, who work at now well-funded startups, you know, series B, C, D, etcetera. You've mentioned so many different AWS services that companies and enterprises can leverage to really grow. But maybe walk us through, let's say, a Perplexity or another big company like that. How important is it to have something like AWS, where you can start small and, as you grow, instantly start using all of these other different services and scale? Can you walk us through what that looks like?

Shruti Koparkar [00:12:11]:
Yeah. Absolutely. So when you talked about scaling, the first example that came to my mind is Adobe. Adobe is also a really important AWS customer, and their VP of generative AI, Alexandru, also spoke at re:Invent. This is where I learned about our own stuff, at our own event; it's really great, such an educational event. So he spoke there, and he talked about their journey on AWS, because Alexandru's team had invested in machine learning for a long time.

Shruti Koparkar [00:12:46]:
Even before generative AI became a term, they were already using some of these techniques to create tools for creators in Photoshop, where people could have this generative fill or things like that. But then once some of these generative AI models came on the market, they doubled down. And Alexandru talked about what he called building an AI superhighway internally within Adobe, so that the infrastructure and the services are all in place and his teams can innovate. He wanted to take the heavy lifting off of his teams, and he used AWS for that. He used AWS to build that AI superhighway. And what is that superhighway? Like I said, it's the servers at the very foundation. But then it's all of the other things. Like, it's storage.

Shruti Koparkar [00:13:41]:
It's networking. Because where's the data living? It's living somewhere, in storage. You need storage solutions that can feed these models as fast as possible, because they are processing data as fast as possible. GPUs are expensive; you don't want them to sit idle. You want them to be working. So you wanna make sure that your storage solutions can feed them data as fast as possible too. You also want really good networking, as I mentioned, because you have to distribute the work across multiple nodes.

Shruti Koparkar [00:14:11]:
So they use all of this. They use the compute services, the storage services, the networking services, and many of our orchestration services, which is how you make all of this work together, come together, work really well. So Adobe Firefly is a really, really great example. And he also talked about how, when they started out, they thought, oh, this is how much compute capacity we will need, because this is how much user response we would get. But it just went viral. People love the product. Creators love the product, being able to just bring their ideas to life so quickly.

Shruti Koparkar [00:14:48]:
And this is where the beauty of AWS comes in: once they realized that there were a lot more users they needed to serve, they needed to scale quickly. And we worked closely with them. I think they 20x'd their capacity, all of it, or rather most of it anyway, on GPUs, and they deployed. The other thing that AWS obviously also does is provide a lot of optionality on all levels, on the type of compute solutions you have. So we have GPU-based solutions, but we also have our own chips, Trainium and Inferentia. And Adobe, for example, used Inferentia-powered solutions as well. And then we have a lot of optionality on the storage side.

Shruti Koparkar [00:15:35]:
We have a lot of optionality from a managed services perspective. You can use SageMaker, or you can build your own machine learning pipelines, so to speak. So, bringing it back a little bit to the everyday application, it's like you said earlier: you may not even know, but AWS is powering it. So many creators today are using Photoshop. Another example is Leonardo.Ai, a startup out of Australia. And they are doing something very similar: image generation tools, generative tools for creators.

Shruti Koparkar [00:16:12]:
And they're a smaller company, but, similar to Adobe, they're using a lot of our infrastructure as well.

Jordan Wilson [00:16:19]:
You know, even when you talk about generative AI, I think it's hard to keep up, even for me. Right? I talk about this every day. And every day you see, well, not quite a new Leonardo AI every day, but you see new AI image generators and new language models all the time. So I'm curious, what is AWS doing to prepare for this influx? Because we talk about compute, and we talk about not wanting all these GPUs to go to waste and sit idly by. And it is getting easier and easier, right, to build generative AI software. So I'm sure you guys are seeing this nonstop influx of new customers.

Jordan Wilson [00:17:15]:
So what is AWS doing, or what can it do, to really prepare for this influx of generative AI companies wanting to grow and scale?

Shruti Koparkar [00:17:22]:
Right. Great question. And, again, I think I'll have to bring it back to what people who are familiar with AWS will have seen, what we like to call the three-layer cake. At the very foundation, as I mentioned, is the infrastructure. And in terms of preparing for that, our partnership with NVIDIA is a really important piece. I mean, we've heard the announcement at GTC this year with the GB200 Grace Blackwell.

Shruti Koparkar [00:17:53]:
That's coming to AWS, and that's going to definitely power a lot of generative AI innovation. We also, like I said, continue to invest in our own silicon to offer more options, more optionality to optimize cost and performance. So that's what we are doing on the infrastructure side, in addition to many innovations in the storage and networking pieces. But then it's also going to be about making it really easy to use. And this is, again, where Amazon Bedrock comes in. We also have this service called Amazon PartyRock, and I highly encourage all of your listeners to check it out. It's basically a service through which anybody can build an app.

Shruti Koparkar [00:18:42]:
You just have to go there and talk to it as if you're talking to a friend and basically say, hey, this is the app I wanna build. Just try it out. Give it a shot. It's fun. It's so much fun, that's why we call it PartyRock. So I highly encourage you and your listeners to check it out. So Amazon Bedrock, of course, for developers who want to access foundation models through a single API, but check out Amazon PartyRock and have some fun with it. And so that's what we are doing.

Shruti Koparkar [00:19:12]:
Right? PartyRock is also a way for us to educate a larger set of people about this technology, about what it can do, and sort of pull them into the orbit so that they feel empowered and ready to go to, say, a Bedrock and start building their own applications.

Jordan Wilson [00:19:33]:
Mhmm.

Shruti Koparkar [00:19:35]:
Yeah. And then finally, we talked about Amazon Q and Amazon CodeWhisperer. These are some of the applications where you don't need to do anything. They're already generative AI applications. You just need to use them. So you're just a user, but they increase your productivity. They help you interact.

Shruti Koparkar [00:19:52]:
Like, Amazon Q, for example, can help you interact with your business data, and can really help you even navigate AWS services. So those are the three ways in which we are innovating across those three layers to help prepare for this influx, because you're absolutely right. We see a lot of interest coming in.

Jordan Wilson [00:20:14]:
And, you know, you talked earlier about being able to learn from so many of your customers at the AWS conference. And here we are at NVIDIA GTC. If you're listening on the podcast, you probably can't see this, but over here we have the entire exhibit hall. We can see probably hundreds of billions of dollars in market cap just sitting right there. And the future is really being built just out this window that we're overlooking here. So as we talk about the GTC conference, even for you personally, what's catching your eye, specifically when it comes to the future of accelerated computing? Because there's so much going on. What's catching your eye this week at the NVIDIA GTC conference?

Shruti Koparkar [00:21:01]:
Yeah. That's a really great question. So, a little bit about my background: I worked at ARM for 8 years. For those of you who don't know, ARM is a really big company in the semiconductor space. They pioneered the RISC-based ARM architecture, which is, in a way, a competitor to the Intel x86 architecture. Anyway, point being, I come from this really hardware background, and I am a big fan of NVIDIA in that sense, because their roots lie in that chip design and GPU design space as well. And what has been really exciting for me, not just about GTC but in general in this space, and it's not just NVIDIA or even us with our own chips.

Shruti Koparkar [00:21:55]:
It's the idea that a lot of this software innovation is interlocked now with the hardware innovation. There's so much synergy between what happens at your chip level, at your system level, and what happens at your software stack and application level, the end user experience. There's so much codependence and synergy. Maybe it's always existed, but I feel like it's true a lot more now. I mean, I was in a talk earlier where Jensen hosted the authors of the paper "Attention Is All You Need." It was a paper that changed the world. Right? It was the paper where the transformer architecture that is the foundation for generative AI was proposed. And he just interviewed everyone.

Shruti Koparkar [00:22:43]:
And they joked about how Jensen was like, we are building the next GPU to be the size of your next model. And then the authors joked back, well, we are building the models to be the size of your GPU. So this sort of innovation, which is bringing together the hardware architects with the deep learning engineers, with the machine learning scientists, all of them coming together, that synergy is very exciting to me. So I don't know if that's one thing; I guess that was a cop-out. I'm basically saying it's all of it. But yeah.

Jordan Wilson [00:23:18]:
Yeah. And there's no doubt there's always so much excitement at big events like this. But, Shruti, we've talked about a lot when it comes to being able to accelerate your generative AI journey. As we wrap up, what's maybe the one important take-home that you really want listeners and viewers to take away from our time here, when it comes to really helping them leverage generative AI in their journey?

Shruti Koparkar [00:23:44]:
Yeah. I think my first thing would be learn about it, which they're already doing by listening to this podcast. Right? So I would say be curious, learn about it, start in small ways, figure out what you can do today to learn more about it, to maybe try it out. You know, start using Perplexity. Maybe go check out PartyRock, which I just mentioned.

Jordan Wilson [00:24:07]:
There we go.

Shruti Koparkar [00:24:08]:
So start small. And then once you figure out what the right use cases are, these could be use cases in your business applications; you could suggest them to your colleagues, you could brainstorm, you could think about how you could advance those within your profession. Or they could be use cases in your own personal life. For example, I use Perplexity all the time to learn; a lot of what I know, not related to AWS but in general related to this field, is through that app. So you could identify use cases within your personal life as well. I've used generative AI services to think about ideas for my child's birthday party. The opportunities are endless.

Shruti Koparkar [00:24:58]:
So start small and then go from there.

Jordan Wilson [00:25:01]:
I love it. And there's no greater advice because, yeah, we covered so much here, but I love it: just start small and go from there. Well, thank you for tuning in. Shruti, thank you so much for joining the Everyday AI Show. We really appreciate your time.

Shruti Koparkar [00:25:14]:
Thank you again for having me, Jordan.

Jordan Wilson [00:25:15]:
Alright. And as a reminder, there's gonna be a lot more, so make sure to go to youreverydayai.com. We talked about a lot, and we're gonna be recapping it all in our newsletter as we always do. So thanks for tuning in today. We hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI