Ep 161: Product Strategy in the Age of AI

 


Overview

In the age of rapidly advancing artificial intelligence, businesses are navigating the complexities of integrating AI into their product strategies. While the potential benefits of AI are vast, the successful incorporation of AI technologies entails a user-centric approach, careful consideration of use cases, and the prioritization of human needs.

Understanding User-Centric AI Integration

In today's episode of Everyday AI, host Jordan Wilson and guest Svetlana Makarova, AI Group Product Manager at Mayo Clinic, dug into the significance of prioritizing user-centric approaches when integrating AI into product strategies. Highlighting the importance of seamless integration and minimal user impact, the episode emphasized the need for AI technologies to augment user experiences without causing disruption.

Considerations for AI Implementation

The conversation also touched on notable examples from industry giants, showcasing the contrasting effects of AI integration on user experiences. Drawing on Google and Amazon as examples, the discussion shed light on the effectiveness and drawbacks of different AI implementation approaches.

Navigating the Unique Landscape of Generative AI

The episode also explored the intricacies of generative AI, offering insights into its strengths and limitations. While generative AI excels with unstructured data, the discussion underscored its limits for tasks such as predictive analytics, prompting businesses to carefully evaluate where it fits within their product strategies.

Strategic Decision-Making and AI Integration

Addressing the decision-making process for implementing AI, the podcast highlighted the importance of prioritizing use cases that provide substantial returns on investment. De-risking the integration of AI was also emphasized, underscoring the potential cost implications and the necessity of careful planning and execution.

Embracing a User-Centric Approach to AI Integration

As businesses continue to harness the potential of AI in their product strategies, the lessons from the podcast episode echo the importance of embracing a user-centric approach. Prioritizing user experiences, carefully evaluating use cases, and integrating AI technologies seamlessly are crucial steps in navigating the complex landscape of AI integration.


Topics Covered in This Episode

1. Importance of user-centric AI
2. Decision-making process for implementing AI
3. Product development methodology
4. Importance of explainable AI in building trust


Podcast Transcript

Jordan Wilson [00:00:17]:

How can we create better AI that's really centered around users? You know, so often, it's everywhere you look and turn and listen and watch and feel. There's AI literally everywhere, but what does it mean for users, and how can we create better products that have AI in them, that we can explain, and that make sense, not only to the users, but also to everyone that's benefiting from these products and services as well? That's what we're gonna be diving into today on Everyday AI. Thank you for joining us. My name's Jordan Wilson. If you're new here, thanks for joining us.

Jordan Wilson [00:00:58]:

But Everyday AI is a daily livestream, podcast, and free daily newsletter, helping everyday people like you and me not just learn what's going on in the world of generative AI, but how we can actually leverage it all. How we can use it to grow our companies, to grow our careers, to get ahead, to outsmart the future together. Right? Like, that's what we're all about at Everyday AI. This is, technically, hey, technicalities here, we are debuting this show live, but it is prerecorded. Not everyone can do the, you know, 7:30 AM Central Standard Time slot, it doesn't always work, and there are amazing guests out there who are doing fantastic things in the world of generative AI.

About Svetlana and AI product management at Mayo Clinic


Jordan Wilson [00:01:36]:

So, today is no different. So please, with that windup, help me welcome to the show, and please still get your comments in, we're still gonna be, you know, responding to your comments if you do have questions. But, with that, please help me welcome to the show Svetlana Makarova, who is the AI Group Product Manager at Mayo Clinic. Svetlana, thanks for joining us.

Svetlana Makarova [00:01:59]:

Thank you so much for having me.

Jordan Wilson [00:02:01]:

Absolutely. So, hey, tell everyone real quick a little bit about what you do as an AI Group Product Manager at Mayo Clinic.

Svetlana Makarova [00:02:08]:

So as an AI Group Product Manager at Mayo Clinic, consider my role the player and the coach. I lead a team of product managers, product owners, you know, delivery leads and development teams, but I also implement AI solutions on my own. So I actually have development teams that I lead, and I develop product strategies that utilize AI. It's not anything specific, per se. I have experience working with deep learning, machine learning, natural language processing, generative AI, you name it. So, yeah, I think that's basically what I do, and I'm happy to dive into it a little bit more.

Jordan Wilson [00:02:54]:

Yeah. Absolutely. We're, like, a minute in already, and we've already dropped so many acronyms. So let's rewind a little bit. Maybe explain, if people aren't super familiar, even, like, what is AI product management? Right? Like, are you helping create products and then integrating AI into them? Is there already an AI algorithm or, you know, a deep learning model, and you're trying to bring it to market? Or you can even just speak in general, because I know we can't always talk about everything people are working on behind the scenes. But what does that even mean, AI product management?

Svetlana Makarova [00:03:30]:

Yes. So I think it depends on the use case. So, working on existing systems and then making them more intelligent, or, you know, I have experience working from a complete concept and then taking that all the way to market. So it really depends again on the use case and how you would approach it, but it always starts with the strategy aspect of it, and this is where I'm most involved. You know, trying to discover what are the needs, what are the problems to solve. Is AI even the best solution for that specific use case? Not always. And so I think, you know, some of the standard product management practices are still at play here. The only thing that changes is that I have an expanded tool set, is what I call it.

Svetlana Makarova [00:04:20]:

You know, I just have more tools under my belt that I have experience with implementing. Now I understand, and I have an eye out for, which product could benefit from the efficiencies that AI could bring, finding potential use cases. Right? So, understanding, from interviews and things like that, where AI could really bring those efficiencies into, in our case, the clinical practice, the research, and then the education.

Jordan Wilson [00:04:51]:

You know, as someone that both uses AI and helps build it into products, I'm curious, because I don't build a lot of AI, you know, a little toying around with simple stuff here and there. But is there too much? Right? Is there too much AI in products? It seems like every single product out there, hardware, software, you know, there's generative AI in it for some reason. Like, is there too much AI out there in products right now?

Svetlana Makarova [00:05:20]:

I do think so. You know, a lot of companies are riding the hype quite heavily. I think generative AI, AI in general, is basically a buzzword. Anywhere you throw that in, it basically embellishes every product. And I think OpenAI has made it much easier to bring it into digital products. And I wanna caveat that: I think for small companies, or companies that are selling, you know, quite streamlined products, right, things that are like automation tools, being able to provide summaries and things like that, it can work well. But for enterprises, I think there are still a lot of challenges in bringing in AI technologies because of privacy, data security, and other ethical considerations, reasons why you'd want to go about it a little bit more carefully.

Svetlana Makarova [00:06:11]:

So I think B2C products and things that we, you and I, are much more exposed to on this platform, and I think elsewhere, of course, you know, it's a buzzword. But I think enterprises are still encountering issues with scaling efforts, I think costs, and things like that, to be able to implement at that scale. So yeah. But, nonetheless, I think it is sprinkled throughout all of the products at this point. For sure. It's a busy place out there.

User-centric AI


Jordan Wilson [00:06:38]:

Yeah. And it seems like, and I'm sure there are, you know, other factors, you know, that are tugging at big companies or, you know, product managers to maybe implement AI when they maybe don't need it. Maybe it's because they have to raise funds, or maybe, you know, users are just, you know, demanding it in small numbers. But do you think that if product managers thought more about the user-centric approach, that might allow us to more sparingly or more effectively implement AI into products? Because, yeah, I feel overwhelmed. Because I love AI. I love using it. I talk about it every day, but there's so many things out there where I'm just like, we don't need AI in that.

Svetlana Makarova [00:07:25]:

Oh, absolutely. And I think that's where user-centric AI kind of comes in. It's basically a user-centered approach to developing products. And I think, again, it's not unique to AI specifically. It's just a concept for making sure that whatever you're developing, the solution that you're building, is centered around user needs. And you're not bringing this technology in for the sake of saying, hey, this tool is powered by AI. You really are looking to the user.

Svetlana Makarova [00:07:59]:

Is it helping this product? Is it helping to solve that particular need of that user? Whether it's AI, whether it's a rule-based engine, it doesn't really matter. But to the user, that workflow should seem seamless. Right? So, if you're introducing AI and it's a new place for a person to access to be able to get the benefit of your solution, you're doing it wrong. So I think a key part of user-centric AI is being able to bring in these solutions so that they're embedded into the workflow. Folks should not be noticing, like, okay, well, now you're entering the space of AI tool land, and you have to, you know, click this button or interact with the solution a different way. How do you truly embed it in a way that is almost invisible to the user? Right? One example that I can bring up is, you know, Google.

Svetlana Makarova [00:08:48]:

Right? So as Google evolved over the past decades, you know, they've brought more and more AI technologies into their tool sets. Right? So behind the scenes, they continue to evolve and improve their algorithms. But to the user, they're still interacting with it in the same way. Right? They're still typing into Google, but the difference is that as a result of those technologies, they're getting better searches. They're getting more accurate results. And so to be user-centric, you need to understand the needs of those users and then how to deliver for them in the most efficient, most fluid way possible. But then you have the other extreme, where some of these AI technologies do get cluttered. And I think another example of AI, in my opinion, not done so well is Amazon.

Svetlana Makarova [00:09:37]:

Right? Because I think when you go on Amazon, their system is so cluttered, and I feel like, you know, they're probably running some large development teams that have certain components broken out into separate teams, and so they're kind of doing their own thing with AI, and then they're launching and testing. And so every day that you come onto the platform, something changes. And so you want to control for that, and you want to make sure, again: is the person who came to your platform or to your product getting their task accomplished in a much more efficient way without having to leave their workflow, that platform, or what have you? That's user-centric AI done right.

Should we incorporate AI into everything?



Jordan Wilson [00:10:18]:

You know, Svetlana, you brought up a great point that I hadn't even thought of yet. I love that terminology that you kind of use, you know, that it's invisible to the user. So, you know, these large companies, the biggest in the world, have been the ones pushing AI for decades, but generative AI for the last couple of years: your Google, your Amazon, you know, now your OpenAI, your Microsoft. I'm wondering, if we talk about user-centric AI being, you know, kind of invisible, quote unquote, to the end user, I guess what happens when end users are now very used to generative AI everywhere? Right? So I'm almost going against what I just said 5 minutes ago, but you brought up a good point. You know? If we become accustomed to having, you know, essentially large language models that we talk to for everything, for our financial institution, for our insurance, you know, all over the board, then is it okay...

Jordan Wilson [00:11:27]:

Is it then very pro-user-centric to incorporate, you know, AI into everything? I know I'm contradicting myself, but now I'm curious.

Svetlana Makarova [00:11:36]:

Yeah. So I think generative AI is really great at certain things at this point in time, and maybe not so great at other things. Right? So you may be able to get specific insights or complete creativity tasks and things like that, but there are certain other things that it's not yet equipped to do really well. It works really well on unstructured data, and I think that's the biggest use case, being able to provide insights, summarization tasks, and things like that. But then there are things like predictive analytics. Right? So if you think of a use case such as, again, going back to Amazon: if you're searching the website and, you know, you're shopping for something, you know how you have suggested items that you should probably look at? They're looking at your engagement, what you have a tendency to shop for, and maybe your propensity to buy at that particular point, and then they recommend things to you at that point. So things like that, LLMs would not be able to solve. So from the user standpoint, you know, LLMs can serve specific functions, but businesses use AI too.

Svetlana Makarova [00:12:51]:

Right. They need to meet their objectives, and so recommendation engines are there for a reason. And those use machine learning recommendation systems, too, to be able to, again, surface items to those users at the right time. And there's a book, I'm forgetting what it's called, where they talk about the evolution of AI even within the scope of Amazon. Right? How can these predictive analytics inform some of the business decisions over time so that they could benefit the users? As an example, and going back to your invisible question: I think over time, you could basically predict, based on your shopping behaviors, what items you would need in the future. So why do you even need to access Amazon to be able to have certain items sent to you? You might just wake up in the morning and get, like, a box of coffee delivered at your door, because chances are you're running low and you were going to go order it anyway. So I think that's the beauty of the data that Amazon is collecting and all of these data systems: being able to predict future behavior.

Svetlana Makarova [00:14:04]:

Again, LLMs will not be able to do that, but you need some infrastructure in place to be able to accommodate that kind of, you know, scale, basically predictive analytics, which is why it's not rolled out or scaled at this point. But that's where the trend is going, really, towards that invisible AI. How can you leverage more of that AI to predict certain behaviors? But as you've mentioned, you know, some of these other tasks, such as summarization, being able to retrieve specific information from documents, getting quick prompt answers and things like that, I think that's also a tendency that we're gonna see. But, again, I think there are different use cases, and the tendency toward embedded, user-centric AI is where the trend is going.
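To make the contrast concrete, here is a minimal sketch of the kind of classic machine-learning recommendation logic described above: item-to-item co-occurrence over purchase history rather than a language model. The products and interaction data are hypothetical placeholders, and a production recommender at Amazon's scale works very differently; the point is only that this is a prediction problem over structured behavioral data, not a text-generation task.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "user_a": {"coffee", "filters", "mug"},
    "user_b": {"coffee", "filters", "grinder"},
    "user_c": {"coffee", "mug"},
}

# Count how often two items were bought by the same user (co-occurrence).
co_counts = defaultdict(int)
for items in purchases.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(basket: set[str], top_n: int = 3) -> list[str]:
    """Suggest items that frequently co-occur with what's already in the basket."""
    scores = defaultdict(int)
    for owned in basket:
        for (a, b), count in co_counts.items():
            if a == owned and b not in basket:
                scores[b] += count
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

print(recommend({"coffee"}))  # e.g. ['filters', 'mug', 'grinder']
```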

How to implement AI in product strategy


Jordan Wilson [00:15:58]:

You know, if you're a large company, or maybe you're a decision maker, right, a business leader who's making decisions for your company, and you're figuring out how to implement a solid AI product strategy into your product or your service offering, what's the best way to go about that process? Right? Because we haven't even gone into, you know, explainable AI and all of those things. But as a business leader, if you are in the seat where you have to say, alright, our customers, our clients are needing, you know, some sort of generative AI to help us make sense of this unstructured data and to, you know, have a better consumer, customer experience, how do business leaders go about making the right decision on putting the right type of AI into their product at the right time in the right place? It's not easy.

Svetlana Makarova [00:16:52]:

Yeah. So if it's an organization that has not embraced AI yet, I think it starts with a first use case. Whether you want it customer facing or internally facing, I think that's a business decision you'll have to make: where can you provide the biggest ROI behind that investment? Because, bottom line, implementing AI is not cheap. So you wanna make sure that you de-risk your first implementation of AI, basically that first use case, as much as you can. So, you know, it probably makes sense to start with an internally facing product that is focused on streamlining specific tasks. Are there repetitive tasks that, you know, the teams consistently do across different verticals where you could provide some efficiency?

Svetlana Makarova [00:17:37]:

So, being able to provide those efficiencies versus not having them, something is better than nothing. Being able to implement a solution that provides 60% or even 40% of those efficiencies is still a win. So I think it's about lowering your expectations for what that first use case is, and then seeing: is there ROI behind that investment? I think it's starting with that first use case and understanding, okay, here are the business objectives, and then being able to measure those efficiencies and that ROI, doing that pilot, and then seeing, does it make sense for us to scale and then try out, you know, new use cases. And I think part of that strategic approach, you know, that I've spoken about on my page too, is that you really need 3 pieces to be able to scale AI in the enterprise. You need data. All AI is heavily dependent on data, so you need to democratize access to that data.

Svetlana Makarova [00:18:42]:

You need to take a platform approach to developing AI applications. And what that means is, instead of building every machine learning or generative AI solution, like a RAG LLM solution, as a one-off in your enterprise, you would find platform use cases, basically reusable use cases that are applicable across different verticals. So you get the solution developed up to a point, and then you customize it to a unique use case. If you have a need for a recommendation engine or some predictive analytics, you know, I'm sure that there are multiple use cases for it across the enterprise. So you build it once, but then you customize it across different verticals. And, you know, number 3 is the infrastructure. You need flexible infrastructure that allows you to experiment with the technologies by bringing them in and really testing, validating, and iterating quickly.

Svetlana Makarova [00:19:35]:

And part of that approach is being able to develop in a way that's modular. And what modular means, with the pace at which AI is evolving right now, right, there's Llama 2, and then there's PaLM 2 that just got released, you know, there are new models basically popping up, is that the development aspect of your solution needs to be able to swap out some of these components, to be able to say, okay, well, this model no longer works. So instead of me starting from scratch, I need to just take that module out, put a new one in, and still have the entire solution work from beginning to end. So I think those are really the 3 core components, from being able to identify the first use case to then really scaling it through the enterprise.
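One way to picture the modular point in code is to keep the model behind a small interface so a newly released model can be swapped in without rebuilding the rest of the solution. This is a minimal sketch under assumed names (the `Summarizer` protocol and both backend classes are hypothetical stand-ins, not any particular vendor SDK):

```python
from typing import Protocol

class Summarizer(Protocol):
    """The rest of the product depends only on this interface."""
    def summarize(self, text: str) -> str: ...

class LlamaSummarizer:
    def summarize(self, text: str) -> str:
        # Call whatever hosted or local Llama endpoint you actually use here.
        raise NotImplementedError

class PalmSummarizer:
    def summarize(self, text: str) -> str:
        # Call the newer model's API here when you swap it in.
        raise NotImplementedError

def build_report(notes: list[str], model: Summarizer) -> str:
    """Application logic stays identical no matter which model is plugged in."""
    return model.summarize("\n".join(notes))

# Swapping models is a one-line change at the composition root:
# report = build_report(notes, LlamaSummarizer())
# report = build_report(notes, PalmSummarizer())
```

The design choice is that the application code never imports a specific model, so retiring one model and adopting another is a change in one place rather than a rewrite from beginning to end.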

Jordan Wilson [00:20:20]:

Yeah. And I love what you said there, and this is something I talk about all the time, because, you know, individuals, companies, everyone's saying, where do we start with AI? And it seems like most people, I think, make the mistake of looking at the platform first, or they look at what everyone else is doing and they try to follow their lead. But I love what you said: you have to see where you're doing that repeatable, you know, sometimes mundane work across verticals. And so it's a great point that you brought up that I just wanna really hammer home to the audience. You know, focus on where you're spending the most repeatable time doing that manual work, work that has data too. Right? That has data. Yeah.
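As a back-of-envelope illustration of the ROI framing above, the sketch below scores a candidate first use case by the hours of repetitive work it could save against a rough pilot cost. Every number and field name here is a hypothetical placeholder; real figures would come out of the discovery work described in the conversation.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    hours_per_week: float   # repetitive manual effort today
    hourly_cost: float      # loaded cost of the people doing it
    efficiency_gain: float  # e.g. 0.4 means the pilot removes 40% of the work
    build_cost: float       # rough cost to pilot the solution

    def annual_savings(self) -> float:
        return self.hours_per_week * 52 * self.hourly_cost * self.efficiency_gain

    def payback_months(self) -> float:
        return self.build_cost / (self.annual_savings() / 12)

# Hypothetical internal automation pilot: even a 40% gain may justify the spend.
pilot = UseCase("report triage", hours_per_week=60, hourly_cost=55,
                efficiency_gain=0.4, build_cost=120_000)
print(f"annual savings = ${pilot.annual_savings():,.0f}, "
      f"payback = {pilot.payback_months():.1f} months")
```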

Importance of explainable AI


Jordan Wilson [00:21:02]:

But I do wanna ask you, like, how important is it to be able to explain it? Right? To be able to explain what happens inside of the AI black box, you know, both before you kind of go through that 3-step process that you just laid out for us, but also on the back end. Right? And to be able to kind of say, hey, here's the impact. How important is that, and how do you go about doing it?

Svetlana Makarova [00:21:26]:

Yeah. And I think that's a great question. I think it's at the core of being able to truly practice user-centered AI: being able to explain how the engine, basically that solution, really works end to end. So explainable AI basically opens up the black box and shows the users, this is how the engine, or whatever AI, came up with the recommendation the way that it did. You'll notice that, you know, Bard and ChatGPT started to include references. And one of the purposes, or kind of the needs that solves, is being able to build trust. Right? People are not trusting these systems because they don't know where that data came from. So being able to surface evidence back to the user: this is the data that went into the system, and this is how the machine weighted those signals in that data, and then here are the recommendations.

Svetlana Makarova [00:22:23]:

And this is why this was a better recommendation than the other. And then, again, depending on the type of system that you're implementing, there are different ways of being able to surface that, and then you invite in feedback. Right? So, again, going back to OpenAI's ChatGPT example, you have the thumbs up, thumbs down. Was the answer valid? Did that build trust, or did people find it helpful or useful? Right? So you take that feedback and implement it back into your system, again, with fine-tunes. And, again, you bring those results back to the users, and you really show them, kind of open all of your cards and say, this is what it is. Do you still feel like this isn't an accurate answer? And then you just go back and iterate. But I feel like that's really helped with implementation, I think, with rolling out the solution.

Svetlana Makarova [00:23:13]:

So I think this is more of a user-centric, like a UI, piece where you have to really bring that evidence back to that user to instill trust in the results that AI is providing.
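As a rough sketch of what "surface the evidence and invite feedback" can look like inside a product, here is a minimal answer payload that carries its source citations plus a thumbs-up/thumbs-down recorder. The structure and names are illustrative assumptions, not the pattern any specific product uses.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str   # e.g. the document title or URL the answer drew from
    snippet: str  # the passage shown to the user as evidence

@dataclass
class ExplainedAnswer:
    question: str
    answer: str
    citations: list[Citation] = field(default_factory=list)

feedback_log: list[dict] = []

def record_feedback(answer: ExplainedAnswer, thumbs_up: bool, comment: str = "") -> None:
    """Capture the user's judgment so it can feed later evaluation or fine-tuning."""
    feedback_log.append({
        "question": answer.question,
        "answer": answer.answer,
        "thumbs_up": thumbs_up,
        "comment": comment,
    })

# Usage: show the answer with its evidence, then log whether it built trust.
resp = ExplainedAnswer(
    question="What is our refund window?",
    answer="Refunds are accepted within 30 days.",
    citations=[Citation("Returns policy v3", "...within 30 days of delivery...")],
)
record_feedback(resp, thumbs_up=True)
```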

Jordan Wilson [00:23:27]:

You know, such a good example too, because if you're joining on the podcast, I was snickering, you know, a little bit as she was talking about that, because I remember during the earlier days, I'm like, there was no user-centric, like, in this AI originally. It did take, you know, the big companies, you know, like Google, like OpenAI, kind of a long time to start saying, hey, here's sources, or even, hey, a simple thumbs up or thumbs down inside of ChatGPT, or sometimes you get, you know, 2 options. I guess maybe can you help explain, because I know it's easier said than done. Right? So from a product management perspective, I guess what goes into that decision, right, of how you go about, you know, creating a user-centric product, what type of feedback you need, how you get that feedback, what you do with it? So, you know, without going, you know, like, into too crazy of details, like, how does that process work, and why does it sometimes take a little bit longer, I feel, at least, to really see that user-centric piece?

Creating user-centric AI


Svetlana Makarova [00:24:31]:

Yes. And I think, again, as I've mentioned, I don't think it's any different being in AI land than any other kind of digital product. It's really investing that upfront time, understanding the users, understanding their workflows. But, as I think you've mentioned, part of that discovery process is also understanding the paper trail. So anything that you want to automate with AI needs to have some trackable mechanism or some data behind it for the machine learning to then learn, let's say, patterns from, or to be able to use that data to mimic those tasks and really automate. So part of the discovery process is really, again, understanding the intent. What is the user trying to do? It's not making those assumptions, but, like, really putting those users in front of you and asking them, well, what are you trying to do with this? What kind of use cases are you trying to solve? And then you would invite as many of those users as you can and try to see what's the overlap. What efficiencies can I provide to those users? And for the paper trail, as an example, you know, we have close to a 100 specialties at Mayo Clinic. So everyone kind of does things slightly differently.

Svetlana Makarova [00:25:56]:

Right? But during some of our discovery processes, we've identified that there are certain tasks that, again, you know, certain groups were doing manually. We sat them down, and we invited them for a conversation to understand: okay, well, what can we truly automate? What data overlaps really exist across the specialties? We were able to leverage that. One of the other things that we do where we've implemented products, even when we go into production, is weekly work shares. And I think that's been a key in really practicing this, not just talking the talk, but walking the walk. Instead of doing these on a sprint-wide basis, which is typically, like, 2-week cycles, we do these weekly.

Svetlana Makarova [00:26:40]:

We put whatever we've done in that week in front of our users. So we have, you know, the folks who would be our target users of that solution really see the progress that we're making, and then they provide us real-time feedback. You know, are we in the right direction? Are we completely off, or, you know, do we need to pivot? So before it even actually reaches production, or a potential release into the live production environment, we also have a mechanism to, again, pressure test this with users to see if they still feel that whatever we're putting out into the market is valuable, is something that they could see themselves using, or is it noise. Right? So it allows us to, again, pressure test this on an ongoing basis. One of the other things that I highly recommend doing, again as part of that user-centric methodology, is friending your users. Do you have an easily accessible channel where you could phone a friend, basically, and say, hey, could you just check this, validate this concept for me quickly? And I think you just need 4 to 6 users to be able to validate quickly, you know, more conceptually, whether it's something that's worth even pursuing from a strategy standpoint. So, again, find ways to friend users, be able to share progress, and then be accepting of that feedback.

Svetlana Makarova [00:28:08]:

Don't take it as criticism. I think your users are gonna be the ones using your product, so you don't develop things in a silo, basically. If you can, create touchpoints with your users along the way. I think that's the best way to implement some of these technologies.

Jordan Wilson [00:28:25]:

Yes. Please, please phone a friend. You know? Get real human users involved. You know? I think there's also this rush toward, you know, like, synthetic data and, you know, these AI synthetic user groups, which is like, alright, that's great and all, but, yeah, at some point, you have to talk to human users. So I'm glad you brought that up. Alright. So we've gone all over the place in a fun way.

Svetlana's final takeaway


Jordan Wilson [00:28:49]:

We've explored, you know, creating a better product strategy, you know, everything from structured and unstructured data to, you know, the right platform approach, and so many other things. But, Svetlana, as we wrap up, what's maybe the one big takeaway that you want to leave with other people out there, whether they're, you know, decision makers trying to implement AI into their product, into their organization, or maybe, you know, people who are in your shoes, those actually managing the products and building AI into them? What's the biggest takeaway or the best piece of advice that you can give everyone?

Svetlana Makarova [00:29:30]:

Yeah. Don't ride the hype. I feel like, you know, just because there's a buzzword out in the market doesn't mean that you really need it in your business. So I do see it a lot. I hear about it a lot: hey, I need AI in my business. Where does it fit? You'd think it's a joke or people just kinda make this up, but I've heard this myself, people trying to fit the technology into specific use cases. I need it in my business.

Svetlana Makarova [00:30:01]:

Well, when you ask them, what do you need it for? What do you think it could provide from a business standpoint? Or what value could it bring to your users? Well, I don't know, I just need AI, because it's a cool thing. It's the coolest thing on the block. So I feel like, you know, you kinda have to pause and figure out: what value can I provide, and what solution can help with that? And, again, the answer is not always AI. Sometimes you can find more efficient ways of solving a particular problem. One thing I was brainstorming more recently was, you know, does this use case need generative AI, or could we build something much simpler, you know, maybe a machine learning algorithm or some more streamlined technology like a rule-based engine? And if you think about it, even from a compute, storage, and just efficiency-cost standpoint, the time that it takes for that engine or agent to complete that task is the difference of, again, starting up a motorcycle versus starting up a boat. Right? So you may not need to access an entire large language model to be able to complete that task.

Svetlana Makarova [00:31:14]:

Sometimes a motorcycle type of engine, just a small thing, would do, because, you know, the type of task that your solution needs may not require that much data and does not require that much sophistication. So I think you have to go case by case and really evaluate different solution approaches. Don't ride the hype. Just because generative AI is the coolest kid on the block doesn't mean that you need it.
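In code, the motorcycle-versus-boat decision often ends up as a simple router that tries the cheapest adequate tool first and only escalates to a large language model when simpler tiers decline. The sketch below is a hypothetical illustration of that idea; the rule, the small-model stub, and the LLM call are all stand-ins rather than real services.

```python
import re

def rule_based_answer(query: str) -> str | None:
    """Cheapest tier: a rule-based engine for well-known, structured requests."""
    if re.fullmatch(r"order status #\d+", query.lower()):
        return "Looking up order status..."  # deterministic handling, no model needed
    return None

def small_model_answer(query: str) -> str | None:
    """Middle tier: e.g. a lightweight classifier or small fine-tuned model (stubbed here)."""
    return None  # pretend the small model declined this query

def llm_answer(query: str) -> str:
    """Most expensive tier: only reached when the simpler tools can't handle the task."""
    return f"[LLM drafts a free-form answer to: {query!r}]"

def route(query: str) -> str:
    # Try the cheapest tools first and fall through to the LLM as a last resort.
    for handler in (rule_based_answer, small_model_answer):
        result = handler(query)
        if result is not None:
            return result
    return llm_answer(query)

print(route("Order status #1234"))             # handled by the rule-based tier
print(route("Summarize this customer email"))  # falls through to the LLM
```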

Jordan Wilson [00:31:38]:

I love that. I love that. Yeah. Because people are always, you know, trying to get on the big yacht, but you just might need the motorcycle. That's such a good point. Or, hey, maybe the electric scooter. It could even be smaller.

Jordan Wilson [00:31:50]:

Right? There we go. Hey, so thank you, Svetlana, so much for joining the Everyday AI Show. We really appreciate your time in helping us really dive into everything that is going on in AI product strategy. Thank you so much for joining us.

Svetlana Makarova [00:32:07]:

Thank you so much for having me.

Jordan Wilson [00:32:08]:

Alright. And, hey, as a reminder, we still have the news. If you're looking for the news, we still have it. Make sure to go to youreverydayai.com and sign up for the free daily newsletter, and we'll be back live again. Don't you worry. Thanks for joining us, and we hope to see you back for another episode of Everyday AI. Thanks, y'all.

Svetlana Makarova [00:32:26]:

Thank you.
