Ep 250: Google’s AI announcements, Adobe training on AI images and more. AI News That Matters

Google Amps Up Its Generative AI Game

At its recent Google Cloud Next conference, the tech giant unveiled new generative AI features aimed at enhancing productivity for Google Workspace users.

The new capabilities include ‘Help Me Write’, which offers voice-triggered assistance for writing. Gmail users can also refine their email drafts using Google's AI tool, Gemini. Furthermore, Google Sheets now offers alerts for cell changes and new templates to simplify spreadsheet creation.

Elsewhere in Google Workspace, Google Docs now supports tabs for handling multiple documents within the same file, and Google Chat integrates Gemini to summarize messages and translate conversations into 69 languages. Google Meet offers automatic transcription and translation features and provides an option for note-taking during meetings.

A key highlight is Google Vids, an AI-powered tool that facilitates video creation by generating storyboards and compiling rough drafts from different media elements.

These advancements may be a response to criticism that Google's generative AI offerings have been difficult to access, even for paying customers.

OpenAI Data Leaks

Two prominent researchers at OpenAI, a leading player in the AI world, were reportedly dismissed over alleged information leaks. The dismissed employees were part of OpenAI's safety team, the group responsible for AI safety and security.

Although the type and scale of the leaked information remain undisclosed, it serves as a warning for business owners regarding data security while exploring AI tools. Despite the confidentiality issues, OpenAI remains committed to enhancing the applicability and safety of AI systems.

YouTube CEO Weighs in On Content Use Ethics

In recent developments, YouTube CEO Neal Mohan issued a warning regarding how OpenAI may have trained its AI video program, Sora. The platform head said that if Sora was trained on YouTube content, that would violate the platform's terms of service.

Emphasizing creators' expectation that their work be used in compliance with YouTube's terms of service, Mohan stated that any unauthorized scraping or downloading would breach those rules. Despite the controversy, OpenAI continues its efforts to use AI technology efficiently for global benefit.

Adobe's Controversial Training Practices

Adobe, well known for its strong stance on ethical AI training, now faces controversy. The company reportedly trained its AI image generator, Firefly, partly on AI-generated images, including some produced by competitors' tools, potentially without proper licensing.

Adobe defends itself, maintaining that every image submitted to Adobe Stock, including those generated with AI, undergoes strict moderation for legal compliance.

Adobe Offers Incentives for AI Training Data Collection

In an effort to ethically train its AI systems, Adobe is incentivizing creators to submit short video clips of everyday actions and emotions for use in AI training, rewarding contributors for their content. The move opens a new revenue stream for artists while encouraging ethical data sourcing.

Conclusion

In a fast-moving AI landscape, the approaches of various tech leaders offer crucial lessons for business owners. These developments show how ethics, transparency, respect for legal and copyright norms, and safety are shaping the industry, and they underscore the importance of understanding the specifics and nuances behind AI tools in order to make informed decisions about leveraging this technology effectively.

Topics Covered in This Episode

1. Google's AI announcements
2. OpenAI's data leak and staffing changes
3. YouTube CEO's concerns about OpenAI
4. Controversies around Adobe's AI image model, Firefly
5. Adobe's incentives for AI training data collection


Podcast Transcript

Jordan Wilson [00:00:15]:
Will Google's AI announcements finally make them a little more relevant in the generative AI game? What's going on with Adobe and, also, OpenAI in a data leak? All right. There's a lot going on in the world of AI news. We're going to get to all of that today and more. What's going on y'all? My name's Jordan Wilson. I'm the host of Everyday AI. Thanks for joining us. This is for you. We are a daily livestream podcast and free daily newsletter helping everyday people like you and me, not just learn generative AI and what's going on there, but how we can use all of these news developments, new tools, new tips, new software, how we can actually leverage this to grow our companies and to grow our careers.

Jordan Wilson [00:01:00]:
So on most Mondays, we come to you live and bring you the AI news that matters. So that's what we're doing today. Well, a little bit. If you're joining on the podcast, you probably won't see or, you know, feel anything different. But for our livestream audience, you might notice I have these kind of vacation vibes going on. That's because, yes, I'm actually on vacation, but it is so important to stay up with the AI news that I'm taking a little break on my Sunday afternoon to make sure that you have the news that you need for the rest of the week. So, hey, if you're joining us live Monday, I might technically be on a roller coaster or something, but make sure you still leave your comment. Let me know what is the news that matters to you, what questions, what is actually gonna impact your company or your career? And, hey, who knows? Given the fact that I'm, you know, recording this on a Sunday.

Jordan Wilson [00:01:48]:
There's probably gonna be some huge breaking news, but don't worry. We'll still have it in our newsletter. So make sure if you haven't already, please go to youreverydayai.com and sign up for that free daily newsletter. Every single Monday through Friday, it comes out so you can not just keep up, but get ahead. And I tell people, it's like a free generative AI university on there. We now have access to, like, 250 backlog episodes of great content around generative AI from some global thought leaders. Alright. But let's just get straight into it.

Jordan Wilson [00:02:21]:
What is the AI news that matters for you for the week of April 15th? So let's start with, I guess, the big piece of news here. So Google. Google, from its Google Cloud Next conference, introduced a lot of new generative AI features. So we're gonna go over them kind of in bullet points. We've already talked about them multiple times in the newsletter already, but let's just dive straight into it. So Google has unveiled a lot of new AI announcements from its Cloud Next 2024 presentation. So Google Workspace users will soon have access to these new AI-powered tools to enhance their productivity. Let me just hit pause there and preface this.

Jordan Wilson [00:03:04]:
We hope. Right? So, you know, Google said this will be a general release, so it should, in theory, be rolling out to everyone. If you listen to the show at all, you know that I normally have a little bit of a bone to pick with Google because, you know, as great as their new generative AI features seem or sound, it has been so far very difficult even if you are on a paid enterprise account or on a paid advanced account. It's been very difficult so far if you are a Google Workspace user to take advantage of all of these generative AI features from Google. Right? So as an example, here at Everyday AI, our team uses Google Workspace, and we pay, I don't know, 15, $20 a month, something like that, to use all these features, but we still can't connect our data. So, you know, this has been 6 months in the making. So, hopefully, this signaling from Google means that this is going to be a general release. That means that, you know, a lot of people that have been waiting months or more than a year to use all of these Google AI features at their, you know, small or medium-sized business, connect all their data.

Jordan Wilson [00:04:08]:
If they wish to, we'll have that option. So we'll see. Just had to get that out of the way. Right? We don't just spoon-feed you marketing talk. We like to tell it to you how it is. Alright. So let's go over some of these newer AI features. So the Help Me Write feature, which a lot of people have had technically beta access to for many months, should be rolling out in a general release. And the Help Me Write feature inside Google will now allow users to trigger generative AI with their voice, making it even more convenient to start writing or to continue writing.

Jordan Wilson [00:04:37]:
So whether you're writing an email inside Google, Google Docs, etcetera, Gmail users will now have the option to polish drafts. Hey, look at that, Google using our polish from Prime Prompt Polish. Right? So Gmail users will have the option to polish drafts using Gemini to enhance the quality of their email drafts. Google Sheets is also getting some updates. So they've introduced alerts for cell changes to keep everyone updated and new templates to streamline spreadsheet creation. So if you're working with your team, Google Docs will now support tabs, about time, alright? Allowing users to work on multiple documents within the same file efficiently. Google Chat is now integrating Gemini to summarize messages and translate conversations into 69 languages, as well as Google Meet, which should be offering automatic transcription, about time, and translation features with the ability to take notes during meetings as an optional add-on for $10 per user per month.

Jordan Wilson [00:05:40]:
So, you know, there's a lot more, but probably one of the biggest ones I'm saving for last here. So the highlight, I would say, of their announcement was Google Vids, which is a new AI-powered tool that helps users create videos by generating storyboards and compiling rough drafts using various media elements. So there was actually a lot more happening at Google that we, you know, already talked about in the newsletter, but let me just tell you a couple more. So obviously, the general availability of Gemini 1.5 Pro, so that's Google's large language model with a 1-million-token context window, which is huge. You know, Google also talked about agents, actually a lot of talk around agents and the ability to kind of build, you know, agents that can connect with your data and perform certain tasks. Right? So, right now, I think when people think of generative AI, they think of it, oh, it just kind of, you know, helps me write content or, you know, photos or videos, and that's definitely not the case. Right? I've been talking about that here on the Everyday AI Show now for a year, you know, talking about how agents, I think, are going to be much more impactful much sooner than people realize. So I believe that is reflected here, right, in Google's really just heavy marketing of their new agents.

Jordan Wilson [00:06:56]:
Also support for NVIDIA's Blackwell GPU system, which I think should be pretty huge. Also the Vertex AI Model Garden, which, you know, so in Google's AI, kind of their Vertex, I won't call it a sandbox per se, but it's where all their different AI models are connected. The ability to switch between, I think it was, dozens of different models. So, obviously, Google's Gemini, Anthropic's Claude, Llama from Meta, Mistral, etcetera. So a lot of open models as well as Claude from Anthropic, as well as their Vertex AI Agent Builder. So a lot going on at this Google Cloud Next conference. Again, we'll be detailing it even more in the newsletter. Alright.

Jordan Wilson [00:07:42]:
Next but not least. So OpenAI has fired 2 prominent researchers over alleged information leaks. So according to a report from The Information, OpenAI has dismissed 2 researchers, and, hopefully, I get the names right. I'm probably not going to, but, Leopold Aschenbrenner and Pavel Izmailov. Definitely didn't get those right. But, you know, OpenAI has fired Leopold and Pavel for suspected information leaks. Again, that is according to reports from The Information. So Leopold is known for his work on AI safety and was reportedly aligned with OpenAI's chief scientist, Ilya Sutskever.

Jordan Wilson [00:08:23]:
So this incident marked one of the first public staffing changes since CEO Sam Altman's return to the board in March following an inquiry by OpenAI's nonprofit board that cleared him of previous allegations going all the way back to, you know, his November firing and rehiring. So, yeah, there's been a lot of kind of messy behind-the-scenes stuff going on in the last couple of months at OpenAI, but it had kind of been quiet after, you know, Sam Altman was, you know, rehired back on as CEO. You know, he kind of made the media rounds. He, you know, really, I think, did a really good job kind of controlling that narrative. So it's been a little silent in the, you know, OpenAI drama now until these recent firings. And, so reportedly, the internal investigation revealed that both of these individuals were part of OpenAI's safety team, adding a layer of complexity to the company's internal dynamics. So, you know, right now we don't really know what data was leaked, but, I mean, you have to be paying attention to this, and it matters. Right? Because OpenAI has nearly 200,000,000 users.

Jordan Wilson [00:09:31]:
Right? And it is the fastest growing, you know, app or software ever, you know, in human history. I mean, people talk about Threads from Meta. I don't know if that counts, per se. Right? Because Meta, all these new users that they had signed up for Threads were essentially existing users from their other platforms. So I would still argue that OpenAI and ChatGPT is the largest or the fastest growing software ever. So anytime a company that is so young, it's still, you know, still in startup mode technically. I think it's, you know, somewhere around 500 employees. First of all, I don't think OpenAI gets enough credit, firstly, for what they've accomplished with such a small team, but also the fact that we haven't had a data leak in the past, you know, 2 years, since the world has been talking about OpenAI.

Jordan Wilson [00:10:23]:
So, again, we don't know exactly what was leaked here. This is all just according to reports from The Information, so we'll be linking that if you do want to read it. But it could honestly have, you know, huge rippling impacts across the industry, and here's why. I think there are, you know, still hundreds of thousands of large companies here in the United States that are kind of, you know, 1 foot in the generative AI pool and 1 foot out. And I think that this is going to cause a little bit of hesitation for some of these companies to kind of go all in into the generative AI pool, so to speak, which, you know, that's on them to make their own calls on that. But I think that you have to look at this as part of a broader picture of data security. Right? I mean, look at over the past couple of years. There's been huge confirmed data leaks from, you know, big Fortune 100 companies.

Jordan Wilson [00:11:26]:
You know, I feel like you hear about these almost every month. So is it alarming to reportedly have some data leaks that lead to firings? Not necessarily, but I think it is different when you're talking about a company like OpenAI. And I think that people, for the most part, haven't really been sure. Like, oh, should I upload data? Right? Like, you know, I know there was a big Home Depot, you know, data leak a couple of years ago, but it's like, alright. Well, you know, aside from if you, you know, have an account on homedepot.com, which I don't know how many people do, you know, it's like, okay. You know, maybe your credit card information was in there somewhere. Not good. But aside from that, it's like people aren't necessarily uploading a lot of their information to Home Depot or to a lot of these other companies, you know, that have gone through these kind of data leaks or these scandals in terms of, you know, keeping consumer and client information private.

Jordan Wilson [00:12:22]:
So I think that's a little different because people haven't really known how they should be using, you know, large language models like ChatGPT, like Anthropic's Claude, you know, all these kind of generative AI tools that allow you to upload information. I think a lot of people have been uploading their information, maybe in a lot of cases, when they shouldn't. Right? So, I should, as I normally do, put the preface out there that unless you're on an enterprise plan, you should not be uploading confidential, sensitive, proprietary information into these large language models, but I think so many people are. Right? So that's why I think you do have to pay attention to this report. And I'm sure in the coming weeks and months, we're gonna see a little bit more information come out of it, but you have to be paying attention to it. Alright. Next, speaking of OpenAI. So, YouTube's CEO kind of fired some warning shots at OpenAI for its reported training practices for its AI image or, sorry, its AI video model, Sora.

Jordan Wilson [00:13:23]:
Alright. So YouTube CEO, Neal Mohan, has mentioned that if OpenAI had used content from YouTube to train its new AI video program, Sora, it would be considered a clear violation, in quotes, of YouTube's terms of service. So this is following the story from last month where OpenAI's CTO, Mira Murati, was unable to confirm the type of content used to train Sora in an interview with, I believe it was, The Wall Street Journal, and this was raising a lot of transparency issues. Right? So we actually played a short clip here on the show a couple of weeks ago, where she was asked a somewhat simple question about, hey, how is this, you know, this new model from OpenAI called Sora, which produces amazing results, so, you know, text to video. And she was asked just kind of point blank, hey. How was this trained? Was it trained on, you know, YouTube videos? Is it trained on social media? And she kinda hesitated, almost in a, like, why did she hesitate like that kind of way? And I think that this kind of implied to everyone that Sora's trained on the open Internet.

Jordan Wilson [00:14:31]:
But guess what? The open Internet obviously trains, or, sorry, the open Internet obviously contains a lot of copyrighted materials. Right? So this is where we get into this new kind of report featuring this quote that we just heard, or this kind of warning shot that we just heard from YouTube CEO Neal Mohan. So, Mohan has emphasized that creators who upload their content to YouTube expect their work to be used in accordance with the platform's terms of service, and any unauthorized scraping or downloading of YouTube's content would breach those terms. So although Mohan did not explicitly confirm whether OpenAI did actually use YouTube content to develop Sora, he did highlight that such an action would pose a significant issue. So, obviously, OpenAI has faced a lot of scrutiny over the training of its new AI video model, Sora, amidst, you know, all these concerns about copyright infringement and data sourcing. Right? So especially, you know, in the couple of weeks since this interview with Mira Murati, like, a lot of people have just been saying, okay. Well, if she stumbled on a very basic question like that, you have to kind of assume that, you know, OpenAI just used the open Internet, everything out there.

Jordan Wilson [00:15:46]:
So, copyrighted materials and everything. Again, that's the assumption that, you know, kind of everyone is under now. So we'll see where this goes. And maybe we'll save this for a Hot Take Tuesday, but I would really keep an eye on this kind of, you know, YouTube, which is obviously owned by Google, which is a big competitor now to OpenAI. Right? Because OpenAI has a, you know, partnership with Microsoft, which is probably Google's chief competitor. So here you kind of have, you know, Google versus Microsoft in the lens of YouTube versus OpenAI. So, I do see, you know, I've been saying this for a long time, look at the kind of the OpenAI versus New York Times lawsuit.

Jordan Wilson [00:16:28]:
Whenever that may get settled, I doubt it's actually going to go to a trial. But whenever that gets settled, that's the first big domino to fall. So maybe, in theory, this could be one of the next ones, you know, kind of this YouTube versus OpenAI. And, you know, I know OpenAI has kinda made the case for, like, hey, what is actually copyrighted? Right? Like, is copyrighted material what it meant decades ago? So I think we're gonna be hearing a lot about the concept of copyright law. What does it mean if you share something online, even if you are, you know, protected by a, you know, YouTube or a Meta when you upload this content there? What does it mean, or what will it mean, when, you know, so much of the content that we are going to be uploading will be AI generated? Right? I can see a time in the very near future, whether it's months or years from now, when so much of the, you know, information that is even uploaded to a YouTube or something like that is from a, you know, program such as Sora or maybe Runway or Midjourney. Right? Like, as these other companies are starting to add kind of this video capability after everyone saw the literal jaw-dropping capabilities of Sora.

Jordan Wilson [00:17:42]:
So I do think that so much of what's going to be playing out online, you know, at least YouTube, social media, and, who knows, maybe, you know, eventually we'll see it in, you know, mainstream media. Right? When you have your B-roll, right, when someone's talking on the screen, maybe like right now. Right? Maybe in the future when I'm talking about something, we're gonna see real-time AI-generated video that's splashed up there. So it's like, okay. Well, who owns it? What was it trained on, and who's ultimately getting paid for it? Right? So definitely something to keep an eye on. Hey. Speaking of that, related story. Right? So Adobe has been accused of training its ethical AI image model on AI images.

Jordan Wilson [00:18:24]:
Yeah. Kind of inception here, but let's break it down. So, there have been a lot of ethical concerns arising over Adobe's AI image generator, Firefly. So, Adobe's AI image generator, Firefly, is known for its ethical training on licensed stock images, but it has recently faced controversy as it was revealed in a recent report that some images were sourced from a competitor, Midjourney, and potentially without proper licensing. But then also it's like, how can you license images from a company like Midjourney that is technically training, in theory, off copyrighted materials? Right? So you get this whole cycle of, like, where does the new content end and the AI image generation begin? Right? So it's a little messy. But, according to the report, approximately 5% of the images used to train Firefly were from questionable sources, such as other AI image generators, but Adobe assures us that all nonhuman pictures are still copyright safe.

Jordan Wilson [00:19:29]:
That's the other thing. Right? So with so many of these AI image generators, like, you technically don't own the copyright. Right? So if I go in there and if I, you know, create a bunch of images with a Midjourney or, you know, DALL-E from OpenAI, right, I don't technically own the copyright on those. They're technically kind of copyrighted by no one because, you know, there's no technical original work being made there, or at least that's the argument that people are making. So, it does get a little confusing, but Adobe has claimed that every image submitted to Adobe Stock, which is what they are training their Firefly model on, including those generated with AI, undergoes a strict moderation process to ensure legal compliance. So despite the training data controversy, Adobe maintains that images created with Firefly are safe to use without copyright infringement. Also worth noting, Adobe is reportedly working on an AI video generator and is rumored to be compensating artists per minute for video clips, showcasing a potential shift toward more artist-friendly practices.

Jordan Wilson [00:20:38]:
So hey. Speaking of that, let's just go ahead and wrap up our AI news that matters with exactly that. So, Adobe is offering incentives for AI training data collection. So Adobe is investing in its generative AI platform, Firefly, like we just talked about, by offering up to $120 to photographers, videographers, and other artists for submitting short video clips of everyday actions and emotions for AI training. This is according to a Bloomberg report. So the submitted videos should showcase everyday actions, emotions, basic anatomy, and people using objects like smartphones, fitness gear, etcetera, while avoiding copyrighted material, nudity, or offensive content. So, according to the Bloomberg report, contributors can earn between $2.60 to $7.25 per minute of submitted video, providing an opportunity to earn money from existing video content. So, again, you know, our last kind of story led straight into this one.

Jordan Wilson [00:21:45]:
Right? So first, we talked about how Adobe is, you know, kind of facing a little bit of heat for reportedly using AI images to train its AI model, Firefly. Right? And you have to hand it to Adobe because I think they've probably been one of the leaders in terms of creating models in an ethical way. Right? Like, we don't hear a whole lot from the, you know, the Googles, the Microsofts, OpenAI, Anthropic, Midjourney, Runway, Pika Labs. Right? Like, we don't hear a lot specifically on, okay, you're training on the Internet, but doesn't that include a lot of copyrighted materials? Right? It seems like a lot of the bigger companies aren't really talking about that, or they're just kind of waiting to, you know, settle some of these cases. Right? And there have obviously been some huge multimillion-dollar annual partnerships, as these big companies, you know, like Google, Anthropic, OpenAI, etcetera, are forming these multimillion-dollar annual partnerships with these big, you know, content agencies, essentially, right, to use all of their content to train their models. But what about all the models that were trained before these partnerships were even formed? Right? So Adobe has been going about it, reportedly, a different way, you know, really just training its models on all of the information and all of the content that it had access to, or had the legal rights to train its models on, until these stories came out, like, one right after another. Right? Where first, Adobe was apparently using a, you know, small percentage, but reportedly 5% of the total images it was training its model on were from Midjourney. Right? Which is tricky because all of those were, you know, in theory, trained on copyrighted materials.

Jordan Wilson [00:23:35]:
Right? We talk about that here on the show all the time. You can just say, you know, give me a picture of a superhero in Midjourney, and it's gonna spit out copyrighted materials. Right? You're gonna get an Iron Man and, you know, Superman and all these things even if you aren't asking for it. Right? Because the model has been trained on, presumably, a lot of copyrighted material. So that does really throw a wrench in Adobe's whole, you know, super ethical, like, ethically trained campaign. I think it would have been better for them to say, hey, we're gonna, quote, unquote, suffer in quality, or we're gonna be a little slower, by not using any of these, you know, any of these AI images at all. Because once you do that, I'm not saying you close the book on your case that, oh, we're going about this in the right ethical way, but I think you really soften your ability to take a hard stance and say, hey, we are the super ethical AI model.

Jordan Wilson [00:24:24]:
So I don't know if this was intentional or if this was accidental. It could be hard to tell. Right? Because it does appear, according to this report, that some users are submitting AI-generated content to Adobe's stock services. So, you know, really, it's going to be interesting how this plays out, as well as, you know, kind of interesting also that Adobe is paying, you know, paying creators to say, like, hey, go shoot some videos. We're working on an AI model, so, you know, we're gonna pay you. Which I like that concept, but, also, hey, way back in my former life, I did a lot of, you know, videography, photography. Right? So, you know, this was obviously, like, you know, almost 15-plus years ago, but still.

Jordan Wilson [00:25:10]:
Right? So they are paying a lot of these creators, a lot of these videographers, you know, dimes on the dollar to essentially say, like, hey, in the future, a lot of the clients that are maybe paying you for this right now, right, are going to just not be paying you. Right? They're gonna be using our models instead. So it is this kind of weird state of creativity that we're in right now where, hey, I do like that Adobe is, you know, making this very ethical effort to pay creators for specific footage. But all they're really doing is saying, hey, creators, we're gonna pay you for this because in 2 years or 18 months or 5 years, all the people that are paying you to create content right now may not be paying you. They may just be using our services as well. So it is kind of this sticky gray area that we're in right now in the creative space where, you know, either these companies are just training models off the open Internet, or they're paying creators, but, you know, essentially, that means that those creators are gonna be getting paid less by their current clients, the current businesses that they're working with, in the future.

Jordan Wilson [00:26:15]:
That's definitely a reality. Alright. There's always more. So if you haven't already, there's a lot of news there. There's more. So make sure if you haven't already, please go to youreverydayai.com. Sign up for that free daily newsletter. Yes.

Jordan Wilson [00:26:32]:
I took a little break on my Sunday afternoon here. Normally, we do this show live Monday. So if anything is late and breaking, don't worry. We'll still have that in the newsletter. So make sure that you go subscribe to that. If this was helpful, we'd really appreciate you leaving us a rating if you listen to us on Spotify or Apple Podcasts. Or if you're watching here on social media, if you could, you know, share this, repost it to your friends to keep them up to date, we'd appreciate that as well. And, hey, we'd appreciate you tuning in for the rest of the week for more Everyday AI. Thanks y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI