Ep 298: Going from Everyday AI to Game-Changing AI

Making the Leap From Everyday AI To Game-Changing AI

As the corporate world evolves at a rapid pace, small, specialized AI models are becoming increasingly prominent, interconnected with the growing open-source movement in AI and machine learning. This trend paints an exciting picture for midsize business owners and decision-makers looking to adopt AI solutions that can transform their business landscapes.

Internal AI Policies: Preparation and Implementation

Companies interested in implementing AI solutions should establish an internal AI policy. Creating an AI steering committee can make it easier to evaluate point solutions tailored to specific business problems. This planning process can take roughly three to four months, enabling businesses to prepare for AI adoption, identify use cases for their organization, and adequately prepare their teams for onboarding.

Workforce Transformation and Risk Mitigation

As AI becomes an integral part of daily operations, businesses must prepare for the dependency it will inevitably create. Leaders should focus on workforce transformation, providing digital literacy training before moving to AI literacy, to ensure that employees can effectively interact with AI systems. Risk mitigation plans should be put in place for the challenges of implementing generative AI and its related security concerns.

AI Adoption: Risks and Challenges

Generative AI is already making strides in sectors such as customer service, automotive, and healthcare. Despite its potential, however, company-wide implementation of AI remains at a low 4%. Resistance to adoption can stem from concerns about unpredictability; the lack of processes for evaluating, monitoring, and governing AI systems; and security risks around data privacy, ownership, and storage.

The Impact of Generative AI: Measuring Outcomes

The real challenge lies in measuring the impact of generative AI. Because it is less quantifiable than traditional AI, companies must establish baseline measures and consider its influence on decision-making and outcomes. The levers for impact analysis depend on the industry: in healthcare, a key performance indicator might be a reduction in patient return rates, while in the automotive industry it could be the quality of context provided to field technicians.
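To make the baseline idea concrete, here is a minimal sketch in Python. The metric, the figures, and the `lift` helper are hypothetical illustrations for establishing a baseline and measuring change against it, not numbers from the episode:

```python
# Hypothetical sketch: comparing a KPI against its pre-AI baseline.
# Metric names and numbers are illustrative only.

def lift(baseline: float, current: float) -> float:
    """Relative change versus the pre-adoption baseline."""
    return (current - baseline) / baseline

# Example: a healthcare KPI -- 30-day patient return rate.
baseline_return_rate = 0.18   # measured before the AI rollout
current_return_rate = 0.15    # measured after

change = lift(baseline_return_rate, current_return_rate)
print(f"Return rate changed by {change:.1%}")  # prints: Return rate changed by -16.7%
```

The point, echoed later in the interview, is that without a recorded baseline there is nothing to compute the lift against, so the measurement has to start before the rollout.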

Building AI Capabilities Internally

An effective way for businesses to adapt to and embrace AI is by incorporating it into various operational activities like sales, marketing, and strategic brainstorming. Businesses should identify critical areas within their organization where AI augmentation would enhance operational efficiency.

AI: The Game Changer

Consultancies like AlignAI are enabling businesses to adopt AI responsibly, helping to mitigate the associated risks, particularly in regulated environments. They also play an essential role in understanding why AI hasn't yet become a game-changer for many businesses: mainly fear of the unknown and workforce transformation challenges.

Generative AI, used correctly, can provide valuable context for a variety of roles, from customer service to high-level business strategies. Its use can potentially lead to increased revenue and cost-savings, making it a game-changer within the industry. By recognizing the benefits and mitigating the risks, businesses can make the leap from everyday AI to something truly transformative.

Topics Covered in This Episode

1. Internal AI Policies and Steering Committees
2. Workforce Transformation and Risk Mitigation
3. Measuring the Impact of Generative AI
4. Security, Governance, and the Path to Company-Wide Adoption

Podcast Transcript

Jordan Wilson [00:00:15]:
Yeah. We get it. AI is great, and it's here to stay. I think businesses everywhere have kind of embraced this concept of everyday AI. Not us, but using AI every day, because they understand the business productivity. But why is it not more than that? Right? Why does AI still sometimes seem like it's just this nice productivity feature that sits on the shelf but doesn't get fully implemented? We're gonna be addressing that and talking about some of the reasons why and what your company can do to fix that and move on to game-changing AI. I'm excited for today's conversation.

Jordan Wilson [00:00:56]:
I hope you are too. So thank you for joining us. My name is Jordan Wilson, and I'm the host of Everyday AI. And this whole thing, it's for you. It's for me. It's for us. It's for everyday people, so we can learn to leverage generative AI to grow our companies and grow our careers. So if you're listening on the podcast, thank you for joining us.

Jordan Wilson [00:01:14]:
As always, make sure to check out the show notes for a lot more. If you're on the livestream, thanks for joining us as well. Get your questions in, and we can take them. But before we dive in, let's just start, as we do, with the AI news. And if you haven't already, make sure to go to youreverydayai.com to sign up for the newsletter for more on today's news, a lot more, and a recap of our show. Alright. Let's get into the AI news for today.

Jordan Wilson [00:01:41]:
A couple things to look at. One, pretty big. So a former OpenAI cofounder has started a new AI company. Former OpenAI cofounder Ilya Sutskever has launched a new company called Safe Superintelligence Inc., with a focus on advancing AI capabilities while also ensuring safety measures are in place to prevent potential harm from superintelligent AI systems. The company is founded by Ilya along with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy. As one of the more notable names in AI, Ilya's new company is one worth keeping an eye on, in a pretty big move for the AI safety space. It's been a very public thing in the AI world since Ilya left OpenAI. Everyone's been wondering, what is he working on? Because he's widely regarded as one of the most brilliant people in AI.

Jordan Wilson [00:02:43]:
So pretty big news there with this new company, SSI. You know, it seems like they're just skipping AGI and going straight to a focus on superintelligence. So, pretty interesting move there. Alright. Next piece of AI news: Accenture is forecasting huge revenue growth thanks to AI. Accenture, a leading global consultancy, has projected annual revenue growth above estimates due to growing adoption of artificial intelligence. Despite economic uncertainty, the company has seen an increase in new bookings and expects 1.5 to 2.5 percent revenue growth for the fiscal year. Accenture's annual revenue growth is expected to be much higher than normal because of that demand for AI technologies and cloud migration services.

Jordan Wilson [00:03:32]:
So despite a strong dollar and economic uncertainty, the company has seen growth in new bookings and consistent demand for its services, including a lot of work in and around generative AI. So it's pretty interesting. My take on this: very early on, you had a lot of these big consulting companies writing off AI. I said at the time, that's not gonna last long, and, sure enough, they finally got it together and are embracing AI. Alright. So for more news, as always, go to youreverydayai.com. We'll be recapping today's show and more. But we're not here to talk about AI news, although we always do a little bit.

Jordan Wilson [00:04:10]:
But today, we're here to talk about how we can go from everyday AI to game-changing AI. That's what we're all trying to figure out. I think we all understand the power of generative AI. So how can you take it past that and really have it be a transforming force for your business? I'm excited today to have our guest on the show. Let's go ahead. Bring her on. There we go.

Jordan Wilson [00:04:34]:
Thank you. So we have Rehgan Bleile, the CEO at AlignAI. Rehgan, thank you so much for joining the Everyday AI Show.

Rehgan Bleile [00:04:41]:
Thanks so much for having me. I'm excited.

Jordan Wilson [00:04:43]:
Alright. So can you tell us just a little bit about what you do at AlignAI?

Rehgan Bleile [00:04:48]:
Absolutely. So we work with enterprises to help them responsibly adopt AI quickly. And we do that by helping them identify the areas where value is going to be amplified and multiplied by inserting AI into that process, while also helping them think about the risks associated with it. This is kind of a major identity crisis a lot of companies are having: inserting AI into their organization safely and making sure that they don't have any reputational damage or financial damage, and, especially in regulated environments, making sure that they're not going to get fined. So that's what we help organizations do all the time.

Jordan Wilson [00:05:28]:
Yeah. And hey, if you are joining us live, like Douglas or Jason, Kobe, Woozy, Tara, Jennifer, whoever, make sure to get your questions in now. I'd love to hear from our audience and get you some answers live. So, Rehgan, I'm curious. In your work at AlignAI, it seems like a big goal, right, for probably yourselves and just about everyone out there, is turning AI from kind of this novelty or productivity tool into something really transformative. What are some of the reasons that you would say we are still in this phase? Right? Like, large language models have been out for years, yet so many people are still trying to make this into a game-changing technology. What are some of the reasons you're seeing that maybe this hasn't happened in most businesses?

Rehgan Bleile [00:06:20]:
This is not gonna be the most exciting answer, but it is honest, it's what we're observing, and it has to do with how companies think about risk. So number one, people are afraid of what they don't know. You've got all of your risk teams, your cyber teams, and they're really uncertain about how this is going to make an impact. And then they hear all of these horror stories or war stories about AI, and people are still trying to wrap their heads around it. How does it work? If I'm on a cybersecurity team, how do I think about the threat landscape for something like this? There are a lot of new processes that they need to put in place. And the second piece is around workforce transformation. You've got a lot of nontechnical users who are trying to adopt and leverage these tools, and they're still really struggling with: how do I use it? What do I use it for? What are the game-changing use cases that I can actually leverage AI for?

Jordan Wilson [00:07:14]:
Maybe let's go into a couple use cases, because that's ultimately what I think people care about. Right? Because generative AI still seems like a black box to many, and I think we learn from hearing success stories. So maybe could you give us a use case or two from your work that has shown kind of the path forward for how companies can turn generative AI into something more than just this huge productivity boost?

Rehgan Bleile [00:07:44]:
Yeah. So there are a couple of lenses to that before I jump into a story. The first is specifically around the fact that generative AI is really good at providing tons of context. In hyper-siloed enterprises, this is actually really useful. So for example, in customer service, for routing complaints, there's a lot of context that we can start to pull, things we can start to understand, and we can start to smartly route the different complaints to the different types of people who are available and who have the skill set to answer those types of questions. So when we start to think about it: if your game-changing element as a company is customer service, and that's what's gonna help you penetrate the market and get more market share and acquire new customers, that's game changing. Game changing for companies is when you can increase revenue or save costs on a really large scale.

Rehgan Bleile [00:08:33]:
And so if we're inserting AI or generative AI into customer service use cases, we need to think about not just customer service, but what specifically about customer service. What metric can we move to make it game changing? For some companies, it is that experience, like routing complaints to the right person. Other industries, like automotive, are looking at a bunch of different types of use cases that are game changing for them. Number one, supply chain: it's a huge, huge problem in automotive to be able to hit your production numbers. The second, customer service: giving people visibility into the automotive process and where their vehicle is in that entire process. And the third is the telematics data coming off of those vehicles. How can we create hyper-personalized experiences inside the vehicle and start getting people more comfortable with some of that autonomous functionality over time, without just kind of plunging people into the deep end of an autonomous vehicle?

Rehgan Bleile [00:09:37]:
So that's what automakers are thinking a lot about at the moment.

Jordan Wilson [00:09:41]:
You know, one thing I picked up on there is data. Right? In my personal experience, we work with a lot of companies, and it seems like companies are having a hard time measuring what matters when it comes to generative AI. I think a lot of it is that maybe it helps people think faster, process faster, kind of connect the dots faster, and sometimes that's hard to measure. You gave an example of customer service, but what really is the metric inside of customer service that you can actually measure? From a data perspective, what recommendations do you have, or where should companies be looking, so they can actually know if they're getting a return on generative AI and if it is actually a game-changing feature for them?

Rehgan Bleile [00:10:29]:
Yeah. Great question. This is the first place we start. It's what we call our baseline: what are we doing today? What does normal look like today? A lot of people define game changing as kind of the sexy use case, the interesting one that looks great in a news article. Those are fine, and that can be one dimension that helps with your board or your C-level. But the reality is: is it affecting the bottom line? Your CFO is gonna care if it's increasing revenue drastically or cutting costs drastically and creating that differentiating element for you in the market. So when we think about it, we think about baseline.

Rehgan Bleile [00:11:05]:
What are the things that we could move? What are the levers we could move? And how does AI enable that lever? If you don't have a baseline today, you're never gonna be able to understand if it made a huge impact. Now, if you look at generative AI versus traditional AI, where you have more of a discrete way of looking at predictions and accuracy, it's a lot easier to calculate an ROI because you can look at a lift. But generative AI is a little more squishy. You're basically looking at, if you're a marketing person, how much of an influence did that thing have on the piece I created that actually made a difference in people converting? That's a really hard thing to track and measure and manage. I often think a lot about how Google Analytics did this for marketing. Right? We started to create things like multi-touch attribution. We started to look at the influence of different media that people were observing before they actually made a purchase. So when we think about generative AI, it's going to have an influence on people so that they make different decisions. How do we measure that influence? How do we measure the outcome? That's something we think about all the time.

Jordan Wilson [00:12:15]:
Yeah. And I like something you said there, Rehgan, about the baseline and then measuring the levers that move or maybe don't move. I know you can't give blanket advice, right, because it depends: there are so many industries, so many sectors, and generative AI is used all over the place, from top to bottom. But what are some of the more common levers that companies should or could be looking at to see if certain generative AI initiatives are actually moving that lever or not?

Rehgan Bleile [00:12:48]:
Yeah. I think it comes down to the core service that they're actually providing. A healthcare provider and, you know, an auto glass repair company are gonna have two different types of problems. For healthcare providers, you're looking for patients who leave and come back who shouldn't be coming back, and how do we prevent that from happening? Or even just taking notes for the doctor, making sure they're accurate, and streamlining that entire process of getting patients through that experience in a hospital setting, right? Because they're resource constrained. Each industry is gonna have a very specific problem. For the auto glass repair type of example, it would be people in the field who are dealing with a huge variety of different types of technology in these new vehicles that they need to be able to understand. So how do we give them that context while they're in the field, fixing those vehicles correctly? There's a bunch of different things that we think about that impact customer service, that impact sales, that impact the ability for a service or product to be delivered to the customer. And if we break those down and look at the biggest areas of opportunity in each of them, depending on the industry, that's when we can start to find these game-changing use cases.

Rehgan Bleile [00:14:07]:
I think, you know, email summarization and that kind of stuff is super helpful and really great. And actually, for me, it's a tool to help companies build the muscle around AI without a ton of risk associated with it. But think about the things where, if that part of your company goes down, it's a big problem. How do we help augment some of those workflows with AI?

Jordan Wilson [00:14:31]:
You know, speaking of your company, I always love hearing how companies that work in and around AI are actually leveraging AI internally, because I think that's very telling. Right? So I think you just gave a great example of building that muscle, because teams need to build the muscle. Right? If it's a new workout, you can't just join at an expert level. You have to start somewhere. So even for you internally, your team, where did you all start to build that muscle? And then, anecdotally speaking, how has your strength grown in those areas since you did start?

Rehgan Bleile [00:15:14]:
Yeah. We are trying our best to be an AI-native company, meaning we use AI in every element of our company. So when I think about how we're leveraging AI, we actually use our own platform internally, which is great because we can start to get value out of our own platform and understand how our customers are using it. But some of the areas where we use it are absolutely in sales and marketing, around copy and use case generation. We've created custom GPTs. We have our own kind of Team version of OpenAI that we're using. We use it in co-development on our platform, of course. I use it all the time for strategic brainstorming, creating company on-site agendas and ideas for activities, team-building activities, ways to keep our remote team engaged with each other.

Rehgan Bleile [00:16:04]:
You know, I use it all the time for things like that. Analyzing data before our quarterly on-sites, around metrics for the quarter. These types of things are really, really useful. For market research, I use a bunch of different types of models to fact-check different market research numbers when we're thinking about the opportunities to go after, creating ideal customer personas. There are all sorts of different things we use it for, and we use it every single day. Every single person at our company uses it every single day. And if someone asks me for an AI tool, it's almost a no-brainer. We do a risk evaluation on it, of course, but I'm happy to pay for it for our employees.

Jordan Wilson [00:16:48]:
Yeah. I think that's great. Right? And it's an ongoing conversation. Some of the most successful medium-sized companies I've talked to who are implementing generative AI are doing it in an open fashion, kind of like what you said. It's bringing ideas to the table, facing problems head-on, and finding a generative AI solution that can help there. You know, one thing that I always think about, Rehgan, is kind of this AI implementation paradox, because studies show that, I think it was, 83% of companies say that AI is a top or the top priority, yet only 4% of companies have implemented it company-wide. I'll just leave it open-ended. Why?

Rehgan Bleile [00:17:34]:
Yeah. It's risk. It's definitely risk. The idea is, and I used to think about this a lot: if you create a pillar of your company that is now AI, and that pillar falls down for some reason, that can be catastrophic. So companies, from a business continuity perspective, are just fearful that these systems will be unpredictable and will make mistakes that humans aren't used to making. At least with people, when we hire them, we have this process of evaluating them and their skill sets. And we're really detailed about the types of jobs and tasks that they're supposed to do.

Rehgan Bleile [00:18:11]:
We measure their success. We give them promotions. We don't have that kind of evaluation process for AI systems yet. And we don't have a really good way of monitoring those types of things. So companies, because they don't have that structure in place, are a little more hesitant to be reliant on an AI system over a person to actually make that workflow or that function happen. So we're often putting a lot of human-in-the-loop in there to start, just to gut check: where is this thing going to be wrong? Right? I think with people, we can anticipate where it goes wrong. In fact, there are entire security teams dedicated to, you know, insider threats at companies.

Rehgan Bleile [00:18:52]:
And we don't have systems like that for AI yet. We don't have AI systems that are checking other AI systems on whether or not they're pulling a bunch of data and sending it out to somebody outside of the company. Right? So I think a lot of organizations are still trying to think about what that, quote, governance process looks like. How do we red-team these models and these systems to make sure that we can break them in all the ways that we can then anticipate in the future and monitor for in the future? So that is the reason, I can tell you, hands down. The second one is just: how do we get people to use this? We do trainings all the time for AI. There are folks who legitimately don't know how to use Microsoft online. So there's a level of digital literacy that has to come before AI literacy for a lot of individuals who work at companies. For having them interface with AI systems, we have to make sure that they know how to use them appropriately, especially until these systems get good enough from a user experience perspective that we don't have to put so much onus on the person.

Jordan Wilson [00:19:56]:
And hey, if you're joining us live, we have Rehgan Bleile, CEO at AlignAI. Rehgan, you said two very important things there when talking about this kind of implementation paradox. Right? Risk and dependency. And I wanna dive into both of those a little bit more. The human in the loop is always a conversation you have to have: can humans still drive this the right way? Can they still have checkpoints on generative AI at the right times, at the right checkpoints, but also potentially become overreliant? How should business leaders be looking at this factor of dependency, Rehgan? Because I get it. Right? Some people might say, oh, we're giving generative AI too much control.

Jordan Wilson [00:20:43]:
We're giving it too much leeway to make business decisions when we don't fully understand it. So how can they find that sweet spot of kind of relying on generative AI, but, you know, how do they deal with those fears of maybe becoming too dependent on it?

Rehgan Bleile [00:20:59]:
Yeah. I think this is, from a paradigm perspective, not necessarily new. Right? We are very reliant on a lot of things, like the electrical grid. We don't think about the electrical grid very often. Some people do, but we don't usually. Right? We just assume that it's up and running, it's gonna work, our electricity is gonna turn on, and we're gonna be able to do what we need to do. So there's maybe an example of an overreliance, to an extent. Right? If that goes down, that could be pretty catastrophic, and there would be a lot of people kind of not prepared for it. So when we think about the spectrum of reliance on systems, not just AI but systems in general, what makes us comfortable being reliant on a system? Typically, consistency and oversight.

Rehgan Bleile [00:21:46]:
Right? So consistency that it's gonna perform the way we anticipate it to perform, which today these AI systems don't do. And the second is oversight. We have trust in somebody somewhere who's overseeing the grid, that there's oversight and they're going to be able to anticipate things that could go wrong, that they're thinking about the things that can go wrong and can prevent those from happening. And so, same thing with these AI systems. It's the oversight piece. And I can tell you, not a lot of companies have the right structure in place for the appropriate oversight. So one, the systems are still a little shaky. They're still pretty unpredictable.

Rehgan Bleile [00:22:25]:
Like, we don't know what we're gonna get out of them some of the time, and actually a lot of the time. So, you know, that's a problem. And the second piece is being able to anticipate, and having comfort that there are the right monitoring components in place to be able to anticipate when it will go awry and try to prevent that from happening. Those are the two core reasons why companies are not going straight to this reliance mechanism for AI systems.

Jordan Wilson [00:22:55]:
And a great answer there on the dependency side. So, on the risk side, maybe we'll just toss it to a question here from Douglas, kind of aligned with risk, so we can kill two birds with one stone. Douglas was asking: how do you handle generative AI for companies with security concerns? Do you recommend RAG or local large language models like, you know, Llama as an example?

Rehgan Bleile [00:23:19]:
Great question. I talk about security all day long. In fact, I married somebody who is in security, so we have fun conversations at home. I would say, from a security lens, there are things that people are usually pretty nervous about. One is you can go and look at the OWASP top 10 for LLM applications, which gives kind of the top 10 vulnerabilities that they see out in the field. So things like prompt injection, things like overreliance, things like excessive agency. These are types of things you can think about from a design perspective. So regardless of whether you're building or buying your own system, whether you're using an API like OpenAI's API or using something local like Llama and kind of building your own system around that, those are things you should think about regardless.

Rehgan Bleile [00:24:14]:
The second lens of that is really data privacy, data ownership, and copyright and IP protection. This is when you start to get security plus risk plus legal, plus these other kinds of groups inside the company that need to think about the implications of that. So where is the model running? Where is your data stored? Is it segmented from other customers? What environment is it stored in? Things like the Azure OpenAI service actually run locally in the Azure environment; the data is not going to OpenAI. So things like that, just knowing how it works and asking those specific questions of those systems, will help you do risk profiles on whatever you're building.

Jordan Wilson [00:24:57]:
Another great question here from our livestream audience. And hey, if you're a podcast listener, you should come join us live and get your questions answered. But a great one here from Rolando, asking: what emerging AI trends are you most excited about, and how might they impact enterprises?

Rehgan Bleile [00:25:14]:
Yeah. I'm most excited about small models. So going from these really large generalized models down to more kind of hyper-specific models, and really the open-source movement. When we looked at traditional machine learning and AI, there was this movement from big platforms like SAS over to open source and building out a technological ecosystem to support those open-source components. So I'm very excited about the trend toward open source. I'm very excited about the trend toward these more niche, specific models, and about the agent architecture that a lot of people are looking into, to be able to kind of fact-check and quality-check and do multiple subtasks inside of an AI system without just using one giant model.

Jordan Wilson [00:26:05]:
And if you're a new listener here, Rehgan just crushed that question. I couldn't agree more. I've been saying for, like, a year that the future of large language models is small language models. So yes, I'm totally on board with that. So many great questions today from our audience. So Liz is asking: for companies that are midsize and ready to adopt the use of AI, how soon can this be implemented, and what can be done to prepare users to adopt quickly?

Rehgan Bleile [00:26:33]:
Estimated timelines. Yeah. We've done this a lot, so I'm happy to give you some discrete answers on this. For midsize companies, I'd say number one, the first thing you should do is have AI policies internally and have at least a small steering committee of individuals who can think through the risks when you're looking at solutions. Because midsize companies are gonna really struggle to get enough budget to build their own kind of custom solutions internally, at least for now. So my suggestion often is to buy, and to think about point solutions that solve very specific problems internally. And as you're going through those point solutions, it's really important to understand how to ask the right security and risk questions of those vendors, because you don't get to control that. They do.

Rehgan Bleile [00:27:20]:
So that would be my number one: just have that in place, and be able to workshop and brainstorm and scope use cases appropriately internally. Start with a quick win, not a big one, and then identify a big one that's going to get your executives super excited so you can get more budget. That's what I would do, and start with something maybe really low risk as well. And then timeline-wise, I would say to set up a group like that probably takes about 4 to 6 weeks. And then use case ideation and scoping, maybe another couple of weeks if you're dedicated to it. Prepping users for onboarding is super important. So education, just giving people a general understanding of what AI is, how it works, why it will benefit you, why you should care, what risks you should think about. That can be done depending on how big your company is. We've done rollouts of 600, 1,000, 1,500 people in a matter of 3 to 4 weeks, getting people exposed to that and doing more of a communications campaign.

Rehgan Bleile [00:28:21]:
And then finally, the evaluation process, or proof of concept for a first one, should take a couple of weeks max, and then getting something onboarded and rolled out, probably another 3 to 4 weeks. So you're looking at maybe a 3 to 4 month timeline to get something rigorously thought through: a good plan in place for adoption, a good plan in place for risk, and identifying some of those use cases.

Jordan Wilson [00:28:49]:
So, Rehgan, we've talked about a lot in today's episode. We went over some of even your team's internal use cases, talked about risk and dependency, AI governance, the rollout process. We've covered a lot, and I'm wondering, what is your one takeaway as we wrap up here that companies and business leaders should be looking toward? Especially those that have already realized the power and promise of generative AI, but still haven't been able to roll it out company-wide, and still haven't maybe found it to be a game changer yet. What's the one takeaway for companies and business leaders to do today?

Rehgan Bleile [00:29:27]:
Yeah. I'd say number one, my hot take is going to be that the dependency is going to happen. It's going to happen. So just get ready for that. Think about how you're going to prepare for that. And the workforce transformation is going to be significant. So those are my two things to think about. The takeaway there is: work with your risk teams to try to identify where you can be preventative and where you can keep these roadblocks from happening.

Rehgan Bleile [00:29:56]:
I can tell you, a lot of AI initiatives are getting stuck at security. They're getting stuck at legal. They're getting stuck at risk. And if you can work with those teams, get everybody educated and on the same page, have a risk mitigation plan in place, identify your risk areas and surface them, and work constantly alongside them, you will move much faster. Much faster. So I think that is my biggest takeaway. I know it's not fun sometimes to sit down and talk to the people that are going to block you, but I can guarantee you'll move much faster if you do.

Jordan Wilson [00:30:31]:
So much good advice there. So, Rehgan, thank you so much for joining the Everyday AI Show. You gave us a great blueprint forward on how we can turn everyday AI into game-changing AI. We appreciate your time.

Rehgan Bleile [00:30:45]:
Thanks so much. This was fun.

Jordan Wilson [00:30:47]:
And, hey, everyone. There was a lot there. Yeah. Rehgan dropped bullet point after bullet point of great advice on how you can leverage AI in your business. We're going to be recapping it, and a lot more, in our newsletter. So if you haven't already, make sure to go to youreverydayai.com and read today's newsletter. It's going to be a good one.

Jordan Wilson [00:31:08]:
If you're listening on the podcast and found this helpful, please leave us a review, share with your friends, and make sure to join us tomorrow and every day for more everyday AI. Thanks, y'all.
