Ep 202: The Holy Grail of AI Mass Adoption – Governance

Resources

Join the discussion: Ask Jordan and Gabriella questions on AI governance

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

Connect with Jordan Wilson: LinkedIn Profile


Aligning AI and ESG Strategies: A Lesson from Emerging Areas

The journey toward systematic adoption of artificial intelligence (AI) mirrors that of Environmental, Social, and Governance (ESG) strategies in business. Emerging areas often need a dedicated drive, whether from individuals, committees, or entire departments, and learning to instill order in a chaotic landscape is a necessity, not an option. As organizations grasp the true potential of AI, the lessons learned from ESG adoption offer a practical guide to harmonizing AI with existing business frameworks.

Rapid Pilot Programs: A Dual Approach to Governance and ROI

Adopting AI calls for an astute approach that draws on insights from both government bodies and the private sector. Key to this is a short pilot program, two to three months, with clearly defined success metrics. A well-bounded pilot minimizes any potential 'leakage' into the broader organization and paves the way for comprehensive risk identification and mitigation. Return on investment should never be far off the radar, offering insight into the fiscal implications of AI implementation.
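To make the idea of a bounded pilot concrete, here is a minimal sketch in Python of what a pilot charter with pre-agreed success metrics, an explicit scope boundary, and an ROI check might look like. All field names, metrics, and figures are hypothetical illustrations, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SuccessMetric:
    """One pre-agreed, measurable outcome for the pilot."""
    name: str
    target: float                 # threshold fixed before the pilot starts
    actual: float | None = None   # filled in at the review

    def met(self) -> bool:
        return self.actual is not None and self.actual >= self.target

@dataclass
class PilotCharter:
    """A short, bounded generative-AI pilot with explicit exit criteria."""
    team: str
    start: date
    end: date                     # two to three months out, never open-ended
    in_scope: list[str]           # systems and data the pilot MAY touch
    out_of_scope: list[str]       # what it must NOT touch (the 'leakage' guard)
    metrics: list[SuccessMetric] = field(default_factory=list)
    cost: float = 0.0             # time, tooling, and license spend
    value: float = 0.0            # estimated benefit at review time

    def roi(self) -> float:
        return (self.value - self.cost) / self.cost if self.cost else 0.0

    def passed(self) -> bool:
        return bool(self.metrics) and all(m.met() for m in self.metrics)

# Hypothetical ten-week pilot of an email-drafting assistant:
pilot = PilotCharter(
    team="client services",
    start=date(2024, 3, 1),
    end=date(2024, 5, 10),
    in_scope=["internal draft emails"],
    out_of_scope=["client PII", "production CRM"],
    metrics=[SuccessMetric("drafting_hours_saved_pct", target=20.0, actual=27.5)],
    cost=8_000.0,
    value=14_000.0,
)
print(pilot.passed(), round(pilot.roi(), 2))  # True 0.75
```

The point of writing the charter down is that the exit criteria exist before the pilot starts, so the review is a comparison, not a debate.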

Corporations and Federal Government: Collaboration Towards AI Governance

The roles of corporations and the federal government in instituting AI governance cannot be overstated. Initially, corporations are likely to bear the burden of spearheading AI governance, setting the pace for eventual governmental input. The blend of industry self-regulation and societal pressure will ultimately shape formal legislation on AI usage.

Greater Inclusion and Learning: The Micro and Macro Governance Levels

Governance should not be reserved for the global or national stage; it should extend down to the micro, organizational level. Governance at that level should appreciate the interplay between departments and foster an inclusive environment, coupled with continuous learning and growth, including from younger teams raised in a technologically advanced era.

Embracing the Learning Curve: From Errors to Improvement

Tech neophytes may err on the side of blind idealism about new developments, while technology conservatives may seal off any advances entirely. The middle ground lies in rigorous testing, learning, and the courage to embrace inevitable errors. Real cases already illustrate the pitfalls, such as poorly governed AI bots causing financial damage.

Ethical Standards and Global Technical Concepts: The Future of AI Governance

Ethical frameworks, stakeholder collaboration, risk mitigation, and an inclusive approach should be the cornerstones of AI governance. The key is to learn from implementation failures rather than discard the technology; few lessons are as valuable as those drawn from practical failure.

In conclusion, the journey towards complete AI adoption will not happen overnight. However, with sustained efforts and an open-minded approach, AI can truly transform the way we do business. Be on the lookout for more insights on AI governance.


Video Insights

Topics Covered in This Episode

1. Role of Corporations and Government in AI Governance
2. Organizational Governance Structures
3. Risks of AI and Generative AI
4. Practical Tips for AI Governance
5. Ethical and Global Technical Standards


Podcast Transcript

Jordan Wilson [00:00:16]:
It's that tricky topic that it seems like no one can get ahold of. Governance. Right? There's always these these hurdles and these roadblocks that you have to go over or go through until you can properly implement and properly adopt generative AI in your organization, and one of those is governance. So we're gonna be tackling that today and more on Everyday AI. Thank you for joining us. My name's Jordan Wilson, and I am the host of Everyday AI. If you're new here, thanks for joining us. This is for you.

Jordan Wilson [00:00:51]:
This this daily livestream, daily podcast, daily newsletter, it's all for you. There's always so much happening in the world of generative AI, and it's hard to tackle it all alone, so that's why we are here to help you grow your company and grow your career by understanding and using generative AI. Alright. So we're I'm super excited to talk about governance. It's something we literally cannot talk about enough. But before we do that, I'm going to go over, as we do every day, the AI news. So if you maybe are joining this live or you're listening to us on your commute to work, there's always additional info and links in the show notes. Make sure to check those out, including to our website at youreverydayai.com, where we will be recapping these news stories as well as the show for today.

Jordan Wilson [00:01:36]:
Alright. So let's talk about what is new in the AI world news. Well, AI news in the world. Right? Alright. So, a Texas company is to blame for the AI robocall impersonating President Joe Biden. So Texas-based telecom company Life Corporation and its owner, Walter Monk, were identified as the source behind the AI-generated robocalls impersonating President Joe Biden during the New Hampshire presidential primary, according to reports. So, this call was reportedly created with the voice cloning software from AI startup ElevenLabs, but the company has denied responsibility. So the FCC and the New Hampshire attorney general's office have taken action against the source of this fake robocall.

Jordan Wilson [00:02:21]:
So I've been talking about this literally for, like, 9 months, that the 2024 presidential election is gonna be a lot of people's 1st foray into generative AI. So, yeah, you're gonna be seeing a lot of these, probably on a daily basis, starting here in a couple of weeks as primary season starts to heat up. Alright. Next, there is a new and updated AI model for video that was just released. So Stability AI has announced an upgrade to its image-to-video latent diffusion model, SVD, which promises better motion and consistency in short AI videos. So this model is available for public use now, and subscription members can also access it to use it for commercial purposes. So free for education and research, but you obviously have to have a paid account if you're using it for commercial purposes. So the new model is called SVD 1.1, and it's a fine-tuned version of SVD 1.0, and it's optimized to generate, like I said, more consistent and photorealistic AI video.
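For readers who want to experiment, usage would presumably look something like the sketch below, assuming Hugging Face's diffusers library and its StableVideoDiffusionPipeline; the model id is a guess based on Stability AI's naming for SVD 1.1, and the weights are gated, so you may need to accept a license on Hugging Face first.

```python
# A minimal sketch, assuming the diffusers StableVideoDiffusionPipeline API
# and a CUDA GPU; the model id below is an assumption, not confirmed here.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("input.jpg")                  # one still image in...
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)  # ...a short video out
```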

Jordan Wilson [00:03:18]:
So this space is crowded. You know, I talked about that in the, you know, bold takes for 2024, that this is gonna be a space to keep an eye on. So not only do you have your Runway and your Pika, but now you have the the new updated video model from Stability AI, as well as you have Google Lumiere and Meta Emu Video as well. It's gonna get crowded there. And, hey, there's already reports of AI video in Super Bowl ads big game ads. Right? What can you say? Alright. Last piece of AI news. The attorney who infamously submitted made-up information from an AI chatbot is now going, undergoing some, disciplinary action.

Jordan Wilson [00:03:55]:
So, the New York lawyer is facing disciplinary action for using a fictitious case generated by AI in a medical malpractice lawsuit. So the lawyer admitted when this happened a couple of months ago, admitted to using AI for research and failing to verify the results, submitting a fabricated citation in her legal brief. So the use of AI in the legal profession has raised concerns about competency, complacency, and the need for transparency and verification. Like, y'all, increase the quality of your input, and you're not gonna have hallucinations like this. So, yeah, this case did happen a couple of months ago, but the new news here is that these disciplinary actions are apparently now underway. Alright. That was a lot in a very short amount of time. So if something there caught your ear, we always have a lot more that's going on in the world of AI news, and just kind of fresh finds from across the web, and recapping our show for today.

Jordan Wilson [00:04:52]:
So as always, you can go to youreverydayai.com and sign up for that free daily newsletter. But now let's talk governance. Right? It's it's something we've talked about here on the show a couple of times, but I think you can't hear it enough. So if you're someone that is in charge of implementing AI within your organization, if you're trying to think of the best ways to, you know, govern generative AI, today's show is for you. So, it's not just me. I'm I'm very excited for our guest today.

Jordan Wilson [00:05:20]:
So, please help me welcome to the show. There we go. Alright. So we have Gabriella Kusz, senior fellow at AI 2030. Gabriella, thank you for joining the show.

Gabriella Kusz [00:05:32]:
Yeah. Thank you for having me. It's a pleasure to be here today.

Jordan Wilson [00:05:36]:
Absolutely. It's great to have you, and, hey, thanks for everyone joining us live. Tara from Nashville and Woosie from Kansas City. Everyone, thank you for joining us. As always, if you have a question, make sure to get it in now. Right? Don't wait. Alright. But, Gabriella, tell us a little bit about what you do, as a senior fellow at AI 2030.

Gabriella Kusz [00:05:55]:
Sure. So my background, I'm a governance subject matter expert, so in international financial sector and economic development programming around the world, 56 different countries, I built governance institutions. I helped to strengthen legal and regulatory frameworks in order to ensure that there were good ethical and standards principles. And now at AI 2030, I'm applying those same concepts to the emerging edge technology field of artificial intelligence. So, working to get the word out, share perspectives through different events, programming, and activities like today's, and just really looking to help shape standard-setting discussions around legal and regulatory, and ensuring responsible innovation in the AI space.

Jordan Wilson [00:06:43]:
Now I'm sure if you are actively trying to implement generative AI in your organization, you are very aware of of the the concept of governance and why it's needed. But But, Gabriella, maybe let's let's hit rewind and zoom out a little bit. But talk about what governance even is for those maybe who are unaware and why it's so important.

Gabriella Kusz [00:07:04]:
Sure. So governance usually encompasses kind of 2 core areas. 1 deals with, you know, technical standards or practices, and the other really deals with your ethical behaviors or your moral assumptions for use of a given technology or an emerging area. Right? So when we talk about governance, we're talking about the rules or the framework that shapes both the design. Okay? So it goes all the way back and rewinds to some of the engineering days. So it covers some of the design. It also covers the application, okay, so where you can use that technology, and then it it also covers any outputs from that technology and their use, how they're made available to the public or in what ways you can use those particular outputs. So it's really at each stage: design, build, and then actual use and application.

Jordan Wilson [00:08:04]:
Yeah. And I think it's important to talk about and investigate a little bit deeper, because I think sometimes people just have bad instruction or, you know, people just think, oh, well, we can just throw a bunch of generative AI into our organization just like it's a new piece of software. But it's not really like that, because, you know, Gabriella, could you talk a little bit about, you know, kind of the the dangers of AI, especially if you don't know what you're doing? You know, obviously, we talked about, you know, the updated piece of news here with the, the attorney submitting, you know, kind of hallucinations to to to the courts. But what are some maybe, dangers on, like, okay, well, here's why you actually need AI governance.

Gabriella Kusz [00:08:44]:
Yeah. So I think there's been, you know, a few cases that have come up so far. I believe it was a car dealership that had an AI bot that sold a car for $1. Or you have, and, you know, some of these things, they're funny, and then others aren't quite so funny. So more recently, you had the gentleman who thought he was speaking to a room full of his c-suite board, and, in fact, it was, you know, an AI-developed board scene, and he lost $25,000,000. So I think, you know, we're starting to see what some of the potential negative implications are, and, correctly so, you know, those of us who have dealt with emerging issue areas or emerging technologies are trying to work together collaboratively, to try to both create a framework and an opportunity for stepping forward in a way that positively allows companies, organizations, and individuals to know what is the healthy, constructive way to use this technology and what is not. And I think what you'll see is a need to revamp some of the legal and regulatory framework around that in a way that's not prohibitive for technology to develop, but also in a way that we start to protect consumers and the general public. So, really, when we talk about, you know, why governance is important, it comes down to this concept of, well, it's a new space.

Gabriella Kusz [00:10:17]:
We know that it's going to be an emerging opportunity, but with opportunities come threats. Right? And so that's where we need to start to look at both, you know, some of the security aspects, some of the ethical aspects of using somebody's image without their permission, or using images that you've shared publicly, right, through your Facebook account, through your other social media accounts. Who owns that? How can that be used or manipulated? Should it be labeled that it has been AI generated or AI manipulated? And and what implications does that have for society at large? So that's just a bit of a deep dive using some, like, very recent, somewhat funny, others not so funny, examples. Yeah.

Jordan Wilson [00:11:02]:
Yeah. Yeah. And, you know, we talked about this on the show yesterday, the example that you brought up, the, the individual who accidentally, you know, transferred $25,000,000 to a company because they were speaking with a very convincing AI deepfake board, right, which is something even a year ago, you know, or a year and a half ago, I don't think people envisioned could be a problem for their company. Right? So maybe let's even before we even dive in a little bit deeper on, you you know, governance and ethical frameworks, maybe let's just talk about and I I'd love to hear your perspective on just the speed of all of this technology and how that actually complicates governance, because, you know, you can make, you know, the best, you know, rules and regulations, but then there's a maybe a brand new technology that didn't exist 2 months ago when you were putting your framework together. So how can companies really tackle that breakneck speed of of new generative AI technologies while being safe yet using gen AI to stay ahead? Yeah.

Gabriella Kusz [00:12:02]:
So I'll talk through some of this at a macro level, and then we'll go into sort of the company level. At the macro level, what you have to understand is that this is the speed with which this is moving. So so I came from, like, the blockchain and digital asset space, and that was moving fast. This is just taking off. It's like one day, there's, you know, nothing there, and the next day, you have a deepfake board that's now shorting or stealing $25,000,000 from me. Right? The pace is so fast that it is extremely difficult to build out a framework in rapid pace. Right? So just to give you an example, to create either ethical standards or global technical standards, you're talking about a period that usually would take, at the very fastest pace, 5 years, at the, like, longer end and probably median, 7, and then longer term, 10 years to get one standard through a due process, a standard-setting board, committees, advisory groups, ensuring all stakeholders have had the opportunity to give feedback and input, and that it's not going to have any unintended consequences when it comes to the market, the technology, or the geography where it's being applied. Alright? 7 7 years.

Gabriella Kusz [00:13:26]:
Let's give it on average. Now if it takes 7 years to create a new standard, that creates a lot of bottleneck, and it's also not relevant or timely for the purposes of users or companies that are trying to have a competitive advantage, okay, while protecting their own interests as well as the interests of their customers and clients. So what you're gonna see is most likely this gap that exists, and that's where you're going to have industry sort of coming around and coming together and and starting to understand that they themselves need to start to come forward with, at the very minimum, a framework, and more likely some level of principles-based approach to standard setting, that will allow for companies, especially listed companies that have additional burdens and responsibilities, that they can use this technology responsibly. And so, you know, I think that's 1 piece of it. When it comes to your everyday company, I strongly encourage people to look for guidance from Fintech for Good and from AI 2030. We've recently produced a response to NIST, which is our US national standard-setting body, that goes through and provides sort of a high-level sketch around the ethical and appropriate application of AI technology and practice. So, you know, again, you can Google it. Fintech, with the number 4, Good.

Gabriella Kusz [00:14:58]:
AI 2030. It should give you some level of direction. If you yourself are tasked, if you're, you know, sort of your company's tech ombudsman or you're sort of in in charge of some of the edge technology ethical or standards governance procedures, I would strongly take a look at some of that and also follow NIST. So that's a piece that I think you want to look at as well, and follow what some of our national standard-setting boards are doing, the direction they're taking, and that should give you some insight as to, like, how you yourself maybe need to strengthen order and promote the use of that in a responsible way.

Jordan Wilson [00:15:37]:
So so much good information there. Yeah. Don't worry. This is one of those when I tell people, like, check the show notes. Check the show notes. Yeah, Gabriella just just dropped a lot of great information. And, yeah, we've had other, other guests on the show from AI 2030, Fintech for Good.

Jordan Wilson [00:15:50]:
So we'll, we'll put their episodes in there as well. So yeah. Jeez. So so many nuggets there. My gosh. Alright. So, Gabriella, let's let's maybe, hit hit rewind now and and and talk just about governance, because, like, what you said right there, I think well, first, we just have to tackle that. Like, you can't businesses, like, whether you're a small, medium, or enterprise, you can't operate the same way you've always operated, with these 5 to 7 years going through committees, etcetera.

Jordan Wilson [00:16:13]:
Even with pilot programs, I tell big companies, if your 1st foray into generative AI is a 1-year pilot program, you're gonna fail. You gotta have something short and measurable. Anyways, let's talk about just the the actual process of governance. Like, what is it? How does it work? You know? Because like what you said, Gabriella, like doing the 5 to 7 year route, you can't really do that. So what's the responsible way to govern generative AI in your organization?

Gabriella Kusz [00:16:41]:
Yeah. So I'll draw some parallels to some of what we're seeing with regards to the adoption and implementation of ESG, because I feel like a lot of emerging areas have a lot in common when it comes to having individuals or departments with responsibility for really trying to create order out of what may appear to be somewhat chaotic, you know, surroundings. And I think that in this instance, you can draw a lot of parallels to that. You're going to have a lot of emerging, likely private sector entities that stand forward and try to create a level of organized framework, okay, or principles. And I think that if if you are an individual who's tasked with helping to ensure the appropriate approach for your company or your division when it comes to AI, the 1st and best place to look, I think, will be towards some of, like, the government, right, and see what's coming out of there. If that's too slow, then start to look at some of the private sector organizations that are trying to advance responsible AI usage. Right? The other thing you can do is look at some of the larger companies, so the Amazons, the Googles, the, Microsofts, the approaches that they're taking, because that can give you some level of insight into how you can build and at least, you know, learn in a similar manner how to structure an approach that helps to hopefully protect and support your customers and clients and your staff and organization.

Gabriella Kusz [00:18:19]:
I think the last piece I'll say is that you need to be doing things like listening to Everyday AI, and you need to be, you know, subscribing to different news feeds and other, you know, organizational feeds that provide you with insight on how other pilots are being run and structured. I agree with the fact that a year is way too long to kind of test some of this. I think you're looking at more like a, you know, 2 to 3 month period, which I think is much more reasonable and also minimizes any of the potential negative impacts that you would have. You need to design that pilot in a way that you have previously identified what your metrics for measuring success are. You need to bound it so that it doesn't have leakage into some of the broader activities of your organization in the event that there are negative impacts, and I think you also need to be honest about what the true value is: the return on investing time, effort, energy, and capital. Okay? So what your return on investment is. And then ultimately, understanding and doing a very strong identification of risks and actions that you're going to take to mitigate those risks if you're going to push that towards, like, a next level pilot.
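One way to make that risk identification and mitigation concrete is a simple risk register that gets re-scored at each pilot iteration before anything is allowed to scale. A minimal sketch, where the risks, scores, and threshold are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One identified risk, scored before the pilot may advance."""
    name: str
    likelihood: int   # 1 (rare) .. 5 (near-certain)
    impact: int       # 1 (trivial) .. 5 (severe)
    mitigation: str   # the concrete action (and, in practice, an owner)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def ready_to_scale(register: list[Risk], threshold: int = 9) -> bool:
    """Advance only if every residual risk scores below the threshold."""
    return all(r.score < threshold for r in register)

register = [
    Risk("hallucinated citations in client documents", 4, 4,
         "human review of every outbound document"),
    Risk("prompt leakage of proprietary data", 2, 5,
         "redact sensitive fields before any model call"),
]
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name} -> {r.mitigation}")
print("ready to scale:", ready_to_scale(register))  # False until mitigated
```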

Gabriella Kusz [00:19:40]:
And I would say, you know, again, it's one of these things where you're not gonna, like, do a pilot and then push go. I think you're gonna do a bunch of iterative piloting until you and your organization feel comfortable with the framework and, structure that you've designed that appropriately takes into consideration some of the litigation risks, some of the, you know, challenges that you're gonna see around privacy and protection. So these are not necessarily gonna be things that will be litigated, but it's going to be in the court of public opinion whether or not the way that you've designed and sought to use AI is appropriate socially. Okay? And that's why I say I think there's a lot of parallels to some of what we've seen roll out with ESG, because you're in sort of this gray space of what is and isn't appropriate, and that is ever changing. And so that's why staying abreast of, you know, daily news on AI, understanding what public attitude is towards AI and its applications, is gonna be crucially important to designing something that in the end is not only acceptable from a social perspective, but legal, when it comes to the shifting legal and regulatory framework, and, I think, suitable in terms of the appetite for risk and appropriate application of that for your own company, its leadership, its shareholders, clients, customers, and staff.

Jordan Wilson [00:21:04]:
Yeah. Yeah. And I I think, Gabriella, that it also depends on where. Right? Like, a a lot of our audience is is here from in the US, but we have, jeez, I think just I looked this week, we have people listening from 160 different countries.

Gabriella Kusz [00:21:17]:
Oh.

Jordan Wilson [00:21:17]:
So it it it depends on where you are. Right? But, like, just let's say for the US because that's where the majority of our, of our audience is from. So let's you know, you kinda mentioned government. So, Raul here has a great question. Raul, thanks for the question. So asking, do you believe that governance will first be implemented in corporations, or do you believe the federal government will have to intervene to make this happen within corporations? What's your take on that, Gabriella?

Gabriella Kusz [00:21:41]:
Sure. So I think it depends on the country. I think if I'm looking at the US, you know, if we're also understanding that we have a global audience, I think in the US, what you're gonna see is, due to some of the protracted nature of the legal and regulatory system here in the US, the very first steps are going to be corporations that will work to minimize any potential negative impacts to their own bottom line and to their staff and to their products. So it's going to be somewhat self interested, but the first steps are gonna be, you know, risk identification and mitigation in order to ensure that things like the $25,000,000 loss doesn't happen again. Right? So that's gonna be the 1st wave of governance. Most individuals who are tasked with some role in creating a governance framework for AI are going to be told, this can't happen to us. Make sure you find a way so that this doesn't happen.

Gabriella Kusz [00:22:33]:
Yeah. Then I think what you're going to see is an ongoing which you've already started to see, right, with the utilization of the robocalls impersonating President Biden, for example. Now you're gonna have some of the societal pressure, which will, I think, again, before laws are created, okay, there's always some level of industry self regulation. And whether it's effective or not, you know, that's for the audience to judge. But before there is usually some form of formal law, the ways that formal laws typically get made is through almost piloting in real life in many cases, from industry. And so industry will give input and feedback into that process before a law is, you know, undertaken, usually, in the US. Overseas, you may have, especially depending on the type of government, if it is more, centralized, if it is more, command and control, then you will likely see laws being moved very fast to ensure that, you know, some of the top-down powers that be are protected, that those risks are identified and mitigated so as to ensure no undue or inadvertent unseating of power or disruption to the order. But I think when it comes to, like, more free societies, free markets, you're gonna see it happen first in some of the corporations.

Jordan Wilson [00:24:00]:
Yeah. Oh, yeah. I couldn't I couldn't agree more. Yeah. Normally, I don't interject, but, you know, I'll say at least here in the US because I think people don't have a good picture of this. I don't think there's gonna be any meaningful legislation around AI anytime soon, at least here in the US when we're talking about legislation. Will there be executive orders? Yes. Will there be legislation? Probably not.

Jordan Wilson [00:24:21]:
People don't even realize that there hasn't even been meaningful legislation passed around social media. Right? We're still debating the merits of Section 230, which is from 1996. So, yeah, I don't think, you know, kind of kind of what Gabriella said, it might be best to to tackle this one from the from the corporate side. I love this question here from Tara. So asking, what are the best practices for change management involved in introducing new governance? Yeah. I think people just kind of, skip over the fact that those 2 are very interconnected. Gabriella, what's what's your thoughts on, you know, introducing kind of AI governance, but then also still prioritizing proper change management?

Gabriella Kusz [00:25:01]:
Sure. So I think in that sense, when you're looking at so I would say that there's almost, like, 2 different levels. 1 is, like, again, I do, like, a macro and then I do sort of an organizational micro view. At a macro level, I think we're talking about advancing some of the need for standards and for practices. It's not just with this, it's with a lot of things, for that to be more timely and relevant and to be reviewed and amended more rapidly for obsolescence and applicability. Okay? So that's kind of 1 piece of that. When you talk about, you know, internally, I always go through a process whereby, you know, you're looking at a multi-stakeholder group. So it is going to be some sort of unit internally that helps to provide input and feedback into the individual or the department that's tasked with creating that governance structure.

Gabriella Kusz [00:26:00]:
So ensuring that you have somebody from marketing, someone from product, somebody from finance who's part of the conversation around what governance should look like for this space. Because you then have individuals from different departments, those people become a natural delivery channel down into their units and departments, and they've also felt bought into the process. They understand why it's important and applicable to their business function. And so, one of the most important things for change management involved in new governance is inclusion. So that's why I think it's really important to make sure that you have that multidisciplinary, multidepartmental team that talks through what this is as a technology, why we are moving forward with governance, and explains, you know, what some of the tangible next steps for people who are head of finance need to be. Maybe it's that, you know, when we have important board calls, we also have to, you know, do a text just to double-check, or another mode of communication that's off of that particular Zoom meeting, so that we now are able to double-check some of this. We look through some of the risks both as, like, the individual who's tasked with the governance setup, but then also ask for feedback from some of those department heads: now that they have been introduced to this technology, now that we've started to see what some of the risks are, what do they see as some of the weaknesses or areas or threats that we need to look at? I think in addition to that, doing some sort of like a basic tutorial that both, you know, educates the departmental head around AI so that they feel comfortable and confident with using that technology, but also in taking a role in, not necessarily calling out, but in shepherding their staff in using that technology and saying these are the things that we're generally seeing are appropriate ways for applying this.
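The off-channel double-check described here can be written down as a hard rule rather than left to individual judgment in the moment. A minimal sketch of such an out-of-band confirmation gate, where the channel names and the dollar threshold are hypothetical:

```python
HIGH_VALUE_THRESHOLD = 10_000  # hypothetical; set by your own risk appetite

def approve_transfer(amount: float,
                     requested_via: str,
                     confirmed_via: set[str]) -> bool:
    """Require at least one confirmation on a channel OTHER than the one
    the request arrived on: a video call can be deep-faked, so the
    callback goes over a pre-registered phone number or in person."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    return len(confirmed_via - {requested_via}) >= 1

# A $25M request made on a video call is not enough on its own:
print(approve_transfer(25_000_000, "video_call", {"video_call"}))           # False
print(approve_transfer(25_000_000, "video_call", {"video_call", "phone"}))  # True
```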

Gabriella Kusz [00:27:57]:
Here's where, from, like, an IT or our CSO, you know, talked about these are the ways that we are not going to use this technology. I just wanna make very clear, you know, the bounds of how and in what ways we can leverage this so that we are not inadvertently exposing ourselves to unnecessary risk, but at the same time that we're remaining competitively advantageous with regards to early adoption and application of what we know to be a very powerful and useful technology. So I think inclusion is part of this. Education is another part, which I just mentioned. And then I think, lastly, because this is an emerging space, it's ongoing education. So that little task force that you'll develop that's multidisciplinary, multidepartmental, it is not going to be like a one-and-done. Like, glad we had this talk, guys. Let's never let's never waste people's time again.

Gabriella Kusz [00:28:48]:
It's kind of like a, hey, once a month, you know, I'm the head of IT here. I'm the person who's been tasked with governance. My job is to listen to you, learn from, like, you know, again, with emerging technologies, YouTube videos, podcasts like this one. And then, you know, to be responsible for really kind of giving people I don't know, do, like, a, you know, brown bag lunch session once a month so that people don't feel like it's like they're being forced to do more work. I mean, I always pay people off in cupcakes and cookies. So, you know, that's kind of an easy one. And then just say, like, this is what we're seeing on the horizon.

Gabriella Kusz [00:29:24]:
These are the key trends. I wanna make sure that people are aware of what these things are, you know, and just keep people up to speed on some of this. Encourage, especially younger people in your staff who you're prepping for succession planning and future leadership, that they're aware of this, that they're keeping pace with it, and that you also have somebody else who's kinda out there watching, especially as it relates to their particular vertical.

Jordan Wilson [00:29:48]:
Yeah. Something something you said in there, Gabriella, that I love, just the the risk mitigation. And it seems like sometimes it it's people are so risk averse with generative AI, which makes sense, but then it just leads to inaction. Right? And then it leads to, you know, losing ground on your competitors. So, yeah, you have to balance risk mitigation with proper implementation, I think, is huge. Great question here from Monica. Thanks thanks for this one, Monica. So asking, what are some of the mistakes that maybe you've seen that that we can learn from, Gabriella?

Gabriella Kusz [00:30:17]:
Yeah. Well, again, we'll do some of those that are publicly available for consumption, just to, you know, be sensitive to some of the firms that have, you know, forayed into this space and have had some kind of negative experiences so far. But I think one thing is sort of like a blind idealism towards what this can and should do. I think just like when the Internet was first coming forward, there's gonna be a lot of ways that this can be applied. Right? So it always has to be like a cost benefit of, like, whether it actually makes sense for your department, for a particular product, for its particular customer segment. Right? So I think that one of the things is just like, yay. We have a new technology.

Gabriella Kusz [00:31:03]:
It solves everything now. I expect you to get your work done in 10 minutes instead of 10 hours. Right? So I think, you know, some of those pieces are going to be important. So blind adherence to or excitement around a technology and its application: mistake number 1. I I think mistake number 2 is, like, the exact opposite of that, which is a complete and utter clampdown, which is that we don't know what this is. No one should use it at all. It should be.

Gabriella Kusz [00:31:35]:
So it's almost like, I think, the main mistake here is kind of around balancing it, being a learning organization that sees the value, knows how to dip their toes in, and is doing it to the best degree possible with regards to responsibility. I think you want people to test and play with this technology, and it shouldn't just be at some of the higher levels. You want it to be encouraged with younger staff. Why? Well, most likely, the younger staff is gonna be more open to using it. They're going to be the ones for whom this will dictate sort of their trajectory professionally and success. They're also going to see it as an early indicator of the degree to which your firm and organization is open to transformation, opportunity, and kind of the next wave of where your product, market, customer, client is going. So I think that, you know, if I'm looking at some of this, it's, you know: one, blind adherence, big mistake. 2, ultimate clampdown, 2nd mistake.

Gabriella Kusz [00:32:36]:
3, though, is keeping it almost in, like, an ivory tower, right, and saying only this high-level group should be able to apply it, because probably the better applications of it and the people who are going to be more savvy around it are going to maybe be some of your, like, newer hires or your, like, lower level management. And so I think that making sure that those people feel both empowered as well as supported. And then lastly, I think, you know, if I had to name a 4th issue that I think comes up, it's that, like, all mistakes are, you know, irreversible, and no one should be forgiven. It's a new space. There's gonna be lots of mistakes that are made. And if the reaction from senior leadership is a complete like, you're fired, you you tried to do this, it failed or like you were saying, around piloting.

Gabriella Kusz [00:33:29]:
There's gonna be lots of pilots that will fail. Acknowledge it's a failure. Don't just stop, you know, iterating or playing with what could be potential applications of AI, but really start to think about what it is that would have been better, you know, and really do a true debrief from that. Right? Because a lot of times we're like, oh, AI didn't work, so the entire technology can be thrown out, and you're like, well, hold on. Like, maybe it didn't work in finance, but it definitely could work in, you know, engaging around business development and creating prompts and scripts and, you know, more personalized tailored emails, saving time, you know, efficiency, effectiveness. You know, what are some low-risk areas that we can apply this first that don't unnecessarily, you know, disclose proprietary or sensitive information, but can enhance people's daily job experience so that they're they're able to be freed up to do some of those higher value-added tasks? Right? And I think that's where you're gonna find those, like, kinda like lower management, maybe even higher level early hire people who I think you know, they're gonna have some really cool ways to work with this, and I think it's really important to empower them and to support them in sort of a lot of that inventing. Yeah?
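The "start in low-risk areas" advice can be captured as an explicit, queryable usage policy so staff at every level know where experimentation is welcome. A minimal sketch; the use cases and tiers below are illustrative, not a recommended policy:

```python
from enum import Enum

class Tier(Enum):
    ALLOWED = "allowed"             # low risk: experiment freely
    REVIEW = "human review first"   # allowed, but a person signs off
    PROHIBITED = "prohibited"       # exposes sensitive data or high risk

# Illustrative only: every organization draws these lines differently.
USAGE_POLICY = {
    "brainstorming and internal drafts": Tier.ALLOWED,
    "summarizing public research": Tier.ALLOWED,
    "personalized outreach emails": Tier.REVIEW,
    "pasting client financials into a public chatbot": Tier.PROHIBITED,
    "citations in legal filings": Tier.PROHIBITED,
}

def check(use_case: str) -> Tier:
    # Default unknown cases to review, not prohibition: a learning
    # organization wants questions surfaced, not hidden workarounds.
    return USAGE_POLICY.get(use_case, Tier.REVIEW)

print(check("brainstorming and internal drafts").value)  # allowed
print(check("a brand-new use case").value)               # human review first
```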

Jordan Wilson [00:34:48]:
So so so many I mean, this this episode, Gabriella, has been like a a goodie bag of governance. You know? We've talked about how to set up ethical frameworks, best practices on working with stakeholders, balancing risk risk mitigation and timely implementation and, you know, inclusion. You're right. Involving all levels of an organization. But, you know, as we wrap up here, maybe what's what's your one best piece of advice for someone? Maybe they are in charge of, you know, governance or they're in charge of implementation. What is that one very specific, practical piece of advice that you can give for people, that they can start implementing AI responsibly with governance in mind?

Gabriella Kusz [00:35:27]:
Yeah. So I think I'll give you 2, and I'll make them really fast because I know we're getting close to wrap up. But I think the first one is to make sure that you're learning. And, you know, it's not gonna be your traditional learning. By the time formal educational resources are available, you know, they're gonna be so watered down in general. You really do need to start to do your homework by listening to podcasts, by going on to the you know, on to YouTube, watching videos. You need to self-educate on this so that you're ahead of the curve. That's number 1.

Gabriella Kusz [00:36:00]:
And number 2 is get a task force set up immediately. You know, start pulling people to the center. Start talking about what this is, and don't be afraid to admit when things fail. You know? You learn so much more when a, you know, pilot fails. And I actually don't believe in the concept of failure, to be quite honest with you. I think everything is just a part of the ultimate success story. So, you know, there's setbacks, but I don't know that you could ever consider a failure a failure if you learn something from it. So that's why I think it's really important to see that as a progression towards a successful application of this.

Jordan Wilson [00:36:37]:
This is jeez. This has been a lesson and a half. Thank you, Gabriella Kusz, senior fellow at AI 2030. Thank you so much for joining the Everyday AI Show to talk governance. We appreciate your time.

Gabriella Kusz [00:36:50]:
Thank you. This has been fun.

Jordan Wilson [00:36:52]:
Hey. And as as a reminder, everyone, this was a lot. Make sure to check out today's newsletter, so go to youreverydayai.com. We're gonna be recapping that. Tomorrow, join us for our show on translation in the world of AI. Will we even have a job tomorrow? Tune in and find out. But, thank you so much for joining us. Hey.

Jordan Wilson [00:37:10]:
Hey. You know what? I'm gonna go ahead and shout out another episode. So if this was up your alley, if this was super helpful, what you heard Gabriella talking about, go check out episode 197, which was our 5 simple steps to using gen AI at your business today. These 2 episodes work well together. So thank you for joining us, and we hope to see you back tomorrow and every day for more Everyday AI. Thanks y'all.
