Ep 282: AI’s Role in Scam Detection and Prevention

Episode Categories:

AI's Role in Scam Detection and Prevention

As every business owner knows, scams are part and parcel of the business world. But have you ever imagined being scammed out of $25 million during a typical Zoom call with seemingly familiar colleagues? It’s unnerving, but it's happening. Sophisticated scammers are using advanced AI to deploy deepfakes, an innovative yet menacing tool, making it extremely challenging to differentiate between real and fake communications.

AI in Scam Prevention: Using Technology to Fight Technology

In our increasingly interconnected world, the prevalence of scams and insider cyberattacks is rising. Interestingly, though, the very technology being used to deceive individuals and businesses can also be leveraged for protection. Implementing AI-assisted verification systems, similar to a "family password," could provide an extra layer of security against AI-based scams.
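As an illustration only (not any particular vendor's implementation), a digital "family password" can be formalized as a shared-secret challenge-response: the person being verified must answer a fresh random challenge with a keyed hash only the real holder of the secret can compute, so a cloned voice cannot simply replay an old answer. The function names and secret below are invented for the sketch:

```python
import hashlib
import hmac
import secrets


def make_challenge() -> str:
    # A fresh random nonce, so a previously recorded answer can't be replayed.
    return secrets.token_hex(8)


def answer(shared_secret: bytes, challenge: str) -> str:
    # The real person computes this on their own device using the shared secret.
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()


def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    expected = answer(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

The point of the sketch is the protocol shape, not the crypto details: the challenge changes every time, so knowing what a colleague sounds like (or having a deepfake of them) is not enough to pass verification.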


Beating Scammers at Their Own Game: Multichannel Safeguards

In today’s digital age, scams have gone multichannel: less frequent, but more targeted and sophisticated. For businesses, especially medium to large enterprises, vigilance across all platforms is paramount. Safeguards such as two-factor authentication (2FA), verifying requests before taking risky actions, and designating a point of contact for reporting suspicious activity help protect valuable resources and data.
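For a concrete sense of one of those safeguards, the time-based one-time passwords behind many 2FA apps follow RFC 6238 and can be sketched with the Python standard library alone. This toy version is for illustration, not a substitute for an audited implementation:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and derives from a secret the scammer never sees, a spoofed caller ID or cloned voice alone is not enough to authorize an action gated behind it.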


Hone Your Critical Thinking for Scam Detection

Understanding the signs of a scam is critical in safeguarding businesses from falling prey to cybercriminals. Obvious red flags include a sense of urgency, psychological pressure, and being prompted to act without verification. Training in critical thinking and evaluation of trust in digital content helps build scam detection acumen. Remember, it’s increasingly essential to verify before you trust.
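Those same red flags, urgency language, psychological pressure, and unfamiliar or lookalike senders, can even be sketched as a toy scoring heuristic. The keywords, weights, and similarity threshold below are invented for illustration; real detection systems combine far more signals:

```python
import difflib

# Illustrative pressure phrases; a real system would use a much richer model.
URGENCY_PHRASES = ("act now", "immediately", "urgent", "wire", "gift card", "do not tell")


def risk_score(body: str, sender: str, known_senders: set) -> float:
    """Toy heuristic risk score in [0, 1]; higher means riskier."""
    score = 0.0
    text = body.lower()
    # Signal 1: urgency / pressure language.
    score += 0.2 * sum(p in text for p in URGENCY_PHRASES)
    if sender not in known_senders:
        # Signal 2: first contact from an unknown sender.
        score += 0.3
        # Signal 3: sender address is confusingly similar to a known one.
        for known in known_senders:
            if difflib.SequenceMatcher(None, sender, known).ratio() > 0.85:
                score += 0.4
                break
    return min(score, 1.0)
```

A message like "URGENT: wire the funds immediately" from `ce0@example.com` (note the zero) scores near the maximum, while routine mail from a known address scores zero, which mirrors the "verify before you trust" habit in code form.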


Seeing Through the Deepfakes: The Seriousness of the Situation

It's crucial to remember that scams have evolved from fake job postings to sophisticated corporate scams that take advantage of advanced AI. And as the technology develops, so do the scams. Scammers now have access to voice-synthesis technology, making it difficult to authenticate audio conversations, and generative AI can create near-real deepfakes that are hard for businesses to identify and counter.


Stay Vigilant, Stay Safe Against AI Scams

Technology is a double-edged sword. As AI amplifies the sophistication of scams, it also strengthens scam detection and prevention techniques. The road to cyber safety is tricky but not impossible. By adopting AI-driven strategies, instilling a culture of vigilance, staying updated on cybersecurity trends, and making well-informed decisions, your business can stay one step ahead of the scammers.

Current trends and cases underline the evolving complexity of staying safe online amid the rising tide of scams and AI-enhanced cybercriminal activity. As the saying goes, prevention is better than cure, and it's high time we incorporated AI into our cybersecurity strategies to fight these sophisticated scams. Staying vigilant, using the right technology, and staying informed can help businesses large and small navigate the choppy waters of cyber safety.

Topics Covered in This Episode

1. Sophistication of AI in Scams
2. Countermeasures to Combat AI Scams
3. Deepfakes and Their Increasing Prevalence


Podcast Transcript

Jordan Wilson [00:00:16]:
Scams are not easy to detect anymore. They used to be extremely easy to spot. You could spot them miles away, but AI means more sophisticated scams. So now more than ever, it's so important for your business to really take scam detection and prevention extremely seriously when you're talking about cybersecurity. So that's what we're gonna be talking about today on Everyday AI. So thanks for joining us. I'm excited for today's conversation. If you're new here, my name is Jordan.

Jordan Wilson [00:00:48]:
I'm the host, and we do this every day. And we bring this to you live, unedited, pretty much unscripted as well, helping you understand AI to grow your company and to grow your career. So we are gonna be talking today on AI's role in scam detection and prevention with a guest. But before we do, we're gonna start as we do every single day going over the AI news. So if you haven't already, make sure you go to youreverydayai.com and sign up for the free daily newsletter where we're gonna be going over both today's episode and a lot of AI news, because there's more than we can get to in our little recap here. But let's start at the top. So an AI safety researcher has left OpenAI for rival Anthropic. The former lead safety researcher at OpenAI, Jan Leike, has joined rival AI startup Anthropic after resigning from OpenAI earlier this month.

Jordan Wilson [00:01:38]:
Yeah. Just in a 2-week span, we saw a very, very public resignation from Leike on Twitter, and he's already joined rival Anthropic. So Anthropic is obviously backed by Amazon with $4,000,000,000 in funding from them, and they focus on superalignment. So in his new role, he'll be focusing on the superalignment mission: scalable oversight, weak-to-strong generalization, and automated alignment research. So OpenAI did dissolve their superalignment group, which was strange, while emphasizing the importance of AI safety in the tech sector at the same time. Alright. Our next piece of AI news is, well, NVIDIA is closing in on Apple and could quickly become the 2nd largest company in the United States. So NVIDIA's shares surged by about 6%, a huge jump, reaching a record high and bringing its market capitalization to $2,800,000,000,000, just shy of Apple's $2,900,000,000,000 valuation.

Jordan Wilson [00:02:36]:
So who knows? That could happen today, or in about 30 minutes when the market opens here in the US. So following a strong Q2 revenue forecast, NVIDIA stock did hit an intraday peak of $1,149, up 9, sorry, up 8% during the session. NVIDIA's remarkable growth this year has seen its share value more than double and is obviously attributed to its success in the AI industry and a significant fivefold increase in its data center segment. Huge. If only someone out there, you know, I don't know, a year ago told you that NVIDIA was the most important company in the US that no one was talking about. Oh, wait. We did that.

Jordan Wilson [00:03:16]:
Alright. And last but not least, a former OpenAI board member has dished on why Sam Altman was fired from the company. So in a podcast interview, former OpenAI board member Helen Toner gave some more details on the drama that went down in November 2023 when we saw Sam Altman getting fired and rehired days later. So if you did miss that, OpenAI's CEO and cofounder Sam Altman was unexpectedly fired November 17th, but then was rehired November 20th, 3 days later. So at the time, the board said Altman had been, quote, consistently not candid. And so Toner was one that voted to remove him, him being Sam Altman, and then she later resigned from the board. So in this new interview, Toner revealed some new details and said the board was not informed about the launch of ChatGPT and that they found out about it on Twitter, which is wild to think about. Toner also accused Altman of lying to the board on multiple occasions and said the board at that time could no longer trust him.

Jordan Wilson [00:04:20]:
She also detailed other employees' accounts of Altman's reported toxic atmosphere and psychological abuse. So, the drama is always brewing in AI, but hey, that's why we cover it every single day. So we're not here to chat about AI news all day. We're here to talk about, and I'm extremely excited about this, AI's role in scam detection and prevention. So if you're joining us live, get your questions in now. So thanks to everyone: Brian from Minnesota and Woozy from Kansas City, Fred, and everyone else.

Jordan Wilson [00:04:52]:
Thanks for joining us. But please help me in welcoming our guest for today. I'm extremely excited to bring him on to the show. There we go. We have him there. So please help me welcome Yuri Dvoinos, the chief innovation officer at Aura. Yuri, thank you so much for joining the Everyday AI Show.

Yuri Dvoinos [00:05:12]:
Thank you for having me.

Jordan Wilson [00:05:13]:
Alright. Absolutely. So, I mean, let's just start at the top. Tell me a little bit about what you do, at Aura.

Yuri Dvoinos [00:05:19]:
Right. So I drive innovation functions at Aura, which is a Boston-based cybersecurity company for individuals and families. We help everyone stay safe online, and it's just becoming such a massively difficult job to stay safe online. One of the things that I have actually worked on lately is scam-protection technologies, and scams are on the rise. And, as you can imagine, I'm gonna chat about using a lot of AI to help everyone be safer online. But as a matter of fact, a lot of bad guys are equally using AI, are equally empowered with all those technologies and tools, and they make our job much harder. So it's a very interesting dynamic. It's been up and down, and we see a lot of changes within those last several years.

Jordan Wilson [00:06:22]:
Yeah. And maybe walk us through those changes. I can only imagine what the role of a chief innovation officer at a company providing cybersecurity looks like and how much it has changed over the past couple of years, especially with this, you know, kind of surge of large language models and generative AI. But, Yuri, maybe just walk us through how has not just your role, but the industry changed with generative AI and large language models?

Yuri Dvoinos [00:06:50]:
Yeah. Absolutely, Jordan, and that is the right question. I think the nature of scams, you know, scams are not a new concept. We've all known scams for many years. A lot of times when we hear that something is a scam, we think about something like a spammy message or something ridiculously dumb that somehow we tend to click. We rarely think about extremely sophisticated attacks that you can barely recognize. Well, now we got to the point where they get smarter and smarter every single day. And, by the way, scams are on the rise.

Yuri Dvoinos [00:07:32]:
Right? So we know that the monetary impact and the number of victims are growing year over year, so this problem is only getting bigger. We first saw a very interesting tendency with mimicking writing styles. And just some quick background information: many decades ago, everyone was trying to hack a computer or a system. Well, now that it's become very difficult to hack a system, everyone tries to hack a human being, typically by impersonating. So their first goal is to impersonate someone you trust, whether it's a bank representative, your friend, or anyone else, customer support doing a refund, whatever this is. Right? A financial institution. They just want to establish trust and credibility, so they literally are trying to hack our trust-by-default, sort of, mental disposition. And there is a very interesting thing that is hardwired in our brains: we tend to authenticate a person over texting through their writing style. So, for example, your mom writes you, "hey, Jordi," and that is how she starts every single text message.

Yuri Dvoinos [00:08:59]:
And, you know, there is a sender that says "Mom," and the message starts with "hey, Jordi." And then, you know, they might use some specific language in addition to that. You almost immediately start to ignore everything else. Like, you have complete trust in that message, and that is exactly what's being exploited right now by these scammers, as they are using large language models to mimic someone's writing style. All it takes, by the way, is to have several references, sample communications, or examples of how someone writes. So basically, the scammer needs to see how someone communicates, and once they know how that person writes, and, of course, how they write to you, they can mimic that style and try to impersonate someone that you know.

Jordan Wilson [00:09:51]:
You know, I have so many questions, and I love that example that you gave. Right? Like, yeah, someone could just very easily impersonate you just by looking at your writing style. Right? Like, I have so much of my own writing online. I was a former journalist. It's out there, so it's pretty easy. But, you know, I'm curious, because I think that previously, scams maybe targeted larger companies, enterprise companies, because there was more to gain. Right? And maybe that's why scams had to be more sophisticated. But, you know, I've even had friends who are small business owners, you know, not thousands of employees, just small ones, now being targeted by scams.

Jordan Wilson [00:10:28]:
You know, with AI, do companies of all sizes now, Yuri, need to be on the lookout, whereas before maybe it was just larger companies? Like, are you seeing that in your role?

Yuri Dvoinos [00:10:38]:
We definitely are, and this is anecdotal. Like, I don't have hard data on that, but I have a feeling that, have you followed any of those rumors about losing jobs to AI, Jordan?

Jordan Wilson [00:10:55]:
Yeah. I might have talked about that once or twice here on the show.

Yuri Dvoinos [00:10:59]:
I think this is what has happened to scammers. I think their job has been taken over by AI, and I'm joking about this, of course, but I think what's happening is that, in the earlier days, we were seeing someone physically writing all those messages. So there were some sort of call-center, chat-center people that were trying to create all that communication. Right now, that's not what we see happening. Right now, most of those communications are being automated. Think about someone's Instagram account being hacked. What they do: they take everyone within the contact list, they analyze the profile of each contact, and they create a tailored message using AI, so a hacked account can send a message to your entire contact base within 5 minutes.

Yuri Dvoinos [00:11:46]:
Obviously, this is automated. This is a large language model, Jordan.

Jordan Wilson [00:11:52]:
Yep. You know, I'm curious. Maybe could you walk us through an actual use case or example, you know, at Aura? I'm sure that you have many clients, some you can talk about, some you can't, but maybe just walk us through typically what scam detection looks like for businesses. Because I can only imagine there's, you know, so many different ways now that companies can fall victim to these scams. Whereas maybe before it was a little easier to spot, now I feel it can hit you from all sides with generative AI. But maybe could you just walk us through, you know, like, a use case of, essentially, here's what companies, like our types of clients, are seeing, and here is how we specifically act against it.

Yuri Dvoinos [00:12:35]:
Absolutely. So we have a consumer offering, which is a consumer app with a suite of tools that has everything you need in order to stay safe online, an absolutely state-of-the-art set of technologies that will help you, including scam protection. The way we help businesses is by making every single employee of those businesses more protected, and we know that we all do some portion of our work on our personal devices. So we believe that by protecting every single employee, and the families of your employees, the overall security posture of the organization is gonna be way stronger and way better. Now, what we are offering within Aura is a message-protection technology that filters all incoming messages from unknown contacts. We do the same thing for calls, which is where it gets very interesting, as we can understand the intention of the call. We can understand the early signals of the scam. We can also understand that something funky is happening in the middle of a conversation with an unknown number, and then we can identify that as well.

Yuri Dvoinos [00:13:48]:
And lastly, we're just about to release our latest technology that scans every single email in your inbox; our inboxes are overcluttered. We believe that you have to protect yourself within every single communication channel. And for every single communication channel, we have created a little AI assistant that will meticulously check every single thing you can possibly think of and see if it is similar to other scams. Is it trying to exploit your psychological biases? Like, is there any sense of urgency? Is there any psychological pressure happening? And all of the other things: is this the first time you're receiving a message from that specific email address? Does that address look similar to one you're constantly exchanging messages with, even though you have never received email from it before? I can keep going on and on. There are dozens of those triggers that we're looking at. And, of course, it is so difficult for us to look at every single one of them for every incoming message, but it is so easy for AI to augment your ability to recognize riskier communications.

Jordan Wilson [00:15:08]:
Yeah. Exactly. I'm sure it's a challenging task to take on just because of how easy scams are to run now. And speaking of that, you know, maybe with a little bit of humor here, but I love this question from Woozy. And I'm gonna ask the flip side after we hear your answer, Yuri. So Woozy here is asking, what's the worst scam attempt that you have seen or heard about? Because I'm sure they're getting lazier and lazier and not very good because of AI, but maybe what's the worst one that you've seen or heard about?

Yuri Dvoinos [00:15:42]:
Can I give you something that is somewhat related to Aura, but more anecdotal?

Jordan Wilson [00:15:50]:
Yeah. Yeah. Absolutely.

Yuri Dvoinos [00:15:51]:
Yeah. And, it's probably more, I'm answering the question, the worst scam, the most ridiculous scam I've seen happened in Indonesia. Very recently, someone had been using online dating, and they found a potential partner through an online dating platform, and then they got married. And, I think it was like a week or 2 weeks after the marriage, the person realized that it was another man that he had married and all of it was a scam. So someone was trying to impersonate someone else, and this is how far it went, which was absolutely ridiculous. So this struck me, you know, with just how creative some people are when they're dealing with scams. In terms of AI and the most ridiculous scam that I've seen, in a more serious manner, I was pretty surprised by the deepfakes. I think the deepfakes are becoming real, and we can chat about this, Jordan, if you want. But

Jordan Wilson [00:17:03]:
Yeah. Yeah. Absolutely. Because, you know, there's different things. Right? So there's AI clones, right, which a lot of companies are using, I think, in smart, responsible ways. Right? Like, to be able to provide training and personalized learning and development. So there's authorized ways that you can essentially, you know, clone yourself, but then there's the unauthorized way, right, which is deepfakes, when someone, without your permission, makes a version of you. So, yeah, I mean, let's maybe talk about the technology side first, and then we can get into maybe some things to look out for, Yuri.

Jordan Wilson [00:17:36]:
But, you know, how has just the availability of this generative AI technology changed deepfakes? Because, yeah, it seems like it's easier than ever before to create something like this.

Yuri Dvoinos [00:17:48]:
Absolutely. So if you think about how human beings communicate with each other, at least online, it's really three things. It's texting, it's phone or audio-only conversation, and then it's video-and-audio conversation. So, of course, with texting, it gets a little bit easier. It's getting more complicated with mimicking someone else's writing style, but it is still manageable to detect the scam. Now with voice, things become murky. Right? Because one can generate a synthetic voice using, like, less than a minute of someone's talking as a reference. And then people think, oh, but that still doesn't sound right.

Yuri Dvoinos [00:18:43]:
Right? You can still pick up those little bits. So it's a voice synthesizer. Yes. That is right. But this technology is getting so much better so aggressively that I'm not sure if that's gonna be right in 1 year. And just to add to that, people add background noise to mask those imperfections of the voice synthesizer. So you might receive a call from your CEO, and the phone number was also spoofed, and you're not using call protection.

Yuri Dvoinos [00:19:12]:
Okay. You can get a spoofed incoming phone number, a random number that says the name of your CEO, and then someone who talks with a voice very similar to your CEO's with a lot of background noise, like a New York street or something. There's a lot of background noise, and they say, hey, I just sent you something. I need you to do blah. How likely are you gonna obey?

Jordan Wilson [00:19:36]:
You know? And it might seem, if you're not following the space, something like this might seem far-fetched, but it's really not. Right? Like I said, I've had friends, you know, even get hit with these types of scams, whether it's just email, but on multiple platforms as well. So maybe, Yuri, can we talk about that? And maybe some safeguards that people need to take. Because I guess if you get a scam, it might be easier to detect if it's one platform. But what happens when it starts hitting us from multiple places, and it does seem very sophisticated and maybe coordinated across multiple platforms? So maybe, number 1, how can business leaders look out for that? And then, number 2, what should they be doing about that, or what common-sense steps should they be taking to protect themselves and their businesses?

Yuri Dvoinos [00:20:33]:
Right. So the good news is that multichannel scams are still rare, or much less frequent than single-channel scams. So my first advice to just everyone across the board is that, as dark as it may sound, don't trust anything that you might think is risky or suspicious. Like, if it involves clicking on a link, sending money, or doing something like that, say you just received a WhatsApp message, pick up the phone, call that person and say, hey, I received that, what do you mean? If you received an iMessage, text them over Insta, whatever, but hacking 2 channels at the same time is much less likely in terms of impersonation. So that is very simple advice that all of us can follow, and that will hopefully eliminate most of the scams. But, of course, for businesses, I think the more targeted, sophisticated, multichannel attacks are becoming more frequent, and then the question becomes, like, what do you do? So, obviously, you have to train the people you work with. And I gotta say that cybersecurity trainings might be the most boring trainings in the world.

Yuri Dvoinos [00:21:53]:
Like, we all know that that is a necessary evil, but I think there is a massive opportunity to make them much more entertaining, memorable, joyful, and I think we should absolutely do that. So just be a little bit more empathetic with your workforce. No one wants to read a huge manual; the question is, how might we make this truly enjoyable, truly memorable? I think that's one thing. The second thing, obviously, you have to have a person inside your organization that everyone knows they can report to, they can write to, if they see something suspicious. So someone has to be physically present there to protect you. And I think between those three things, 2FA, a responsible person, and, of course, some training in terms of critical thinking. Not all of us know that we can no longer trust by default.

Yuri Dvoinos [00:22:47]:
Like, we used to trust everything we see. Now it's just not the case, and we're not wired that way. So I think there has to be some training, of course. You should be safe.

Jordan Wilson [00:22:58]:
Safer. You know, Yuri, you bring up a good point, because I think that, even with social media, over the last 20 or 30 years when it comes to consuming information from the Internet, I think we give it just a one-off, you know, does this pass my own human detection, like, yes or no? It seems like maybe that's a good habit that might become a bad habit, right, where we just look at something quickly and we're like, yep, this seems legit, I trust it. It almost seems like we might have to rewire our brains as scams get more sophisticated to really look into, like, hey, yeah, this video looks real, this voice sounds real.

Jordan Wilson [00:23:39]:
This text looks real, but we might have to unlearn some things. So what are maybe some of those key things to look out for? You already mentioned, you know, single-channel scams, hit them up on a different channel. But maybe what are some telltale signs, or, you know, something that businesses should be looking out for that normally, like, I normally wouldn't look at this or look into this? So maybe what are some of those, not red flags, but some of those things that might be under the surface that we should be paying closer attention to?

Yuri Dvoinos [00:25:08]:
There are lots of signs of a scam. And just, I think, adjusting to the reality that you cannot trust most of the things that you see and hear out of the gate is a good rule of thumb. And then, of course, how can you be on even higher alert? Right? So another good sign of a scam is a sense of urgency. Like, if someone is asking you to do it right away, this might be exactly where you should wait and hold off. If someone is applying any psychological pressure, like, hey, you know, I just had a call from the CEO, you have to do it now. Like, don't buy that. Just don't react to that.

Yuri Dvoinos [00:25:57]:
You have to learn to ignore that, because that is more likely to be a sign of a scam, and this, of course, happens when you believe you are talking to someone you can trust. So calling back, involving anyone else within the organization, double-checking, being extra careful is, of course, my large advice. And then, scams are extremely creative, anywhere from fake job postings and interviews to sophisticated corporate scams. It's difficult for me to chat about all of that because the variety is high, but the principles always stay unchanged.

Jordan Wilson [00:26:44]:
Yeah. And, you know, even when you talk there about just the sophistication, I mean, it's getting really good, and I think people need to be aware that even if you are a, you know, smart, intelligent businessperson, you can easily be duped. Right? There was something we talked about on the show a couple months ago, I believe back in February, where a finance worker essentially got on a Zoom call with what he thought were all his coworkers. It looked like them, talked like them. It was video, and they were interfacing. Right? And a very sophisticated scam ended up essentially stealing $25,000,000 from this company just because of the level of sophistication. So, you know, in instances like that, Yuri, I know there's no, you know, catchall, but even just when we talk about the future of scams. Because I think we've talked about, you know, as an example, GPT-4o and Google's Project Astra, where AI models can kind of see and react in real time, but you have to think that this type of technology will also exist for the bad guys as well.

Jordan Wilson [00:27:48]:
So, you know, when it comes to the future of scams, how should we be looking at that? What should we be paying attention to? And how can we actually detect them as they become more and more sophisticated?

Yuri Dvoinos [00:28:02]:
Absolutely. I fundamentally believe you have to use technology. If someone is using AI against you and you are, you know, not using AI protection, for a better say, I think you become more vulnerable. That's just the reality of things. So I think using AI back, which is almost the beauty of AI, it helps you augment your vision. It just helps you to recognize the risk. It's almost like coming up with this risk score.

Yuri Dvoinos [00:28:36]:
Right? This communication seems more risky, or less risky, and here's why. I think that is a very good thing. I also think that, unfortunately, we don't have a really good answer to deepfakes right now. That's just the reality of things. Someone is forging videos. It's absolutely doable to impersonate someone's voice, someone's video appearance. That's massive.

Yuri Dvoinos [00:29:02]:
Right? The potential impact of misuse, misinformation, and disinformation is massive, as we believe what we see. I think we all just have to become much more critical thinkers and just know about that. Know that you cannot trust things that you see online, or through digital or recorded communication, out of the gate, as we used to many years ago.

Jordan Wilson [00:29:33]:
Yeah. Yeah. And, you know, I like what Monica is saying here, kind of this concept. I think, you know, people come up with, like, a family password, right, in case they're ever, you know, being targeted. But maybe, I mean, should businesses be doing something similar? Like, you know, before something is signed off, should businesses have almost like a family password that says, like, yes, this is real, you know, this isn't a scammer? Is that taking it too far, or might we see something like that being commonplace in the future, Yuri?

Yuri Dvoinos [00:30:05]:
I don't know. I think we might. I think organizations are different by their nature, and everyone is different. If you are a very small business, you're in one place. But then if you're growing rapidly, but you still don't have a CISO or someone who's responsible for information security in your company, I think you become higher on the radar for sophisticated targeted scams, but you haven't necessarily buttoned up your security, and that makes you even more vulnerable. So I think, you know, if you are an average consumer, just, again, following those basic rules will help you to eliminate most of the nonsense. If you are a larger company or a larger org, definitely use some tools, or better, invest in security, invest in someone who can take care of this, because those things get more sophisticated, and it's just difficult to manage this part-time.

Jordan Wilson [00:31:05]:
Yeah. You know, I'm curious. What, not keeps you up at night, but what do you worry about? Obviously, you and your company provide solutions. But what is that thing that you still worry about specifically when it comes to AI being used in business scams? What kind of keeps you up at night in that regard?

Yuri Dvoinos [00:31:25]:
Like I said, I think deepfakes are something we just haven't cracked. We haven't figured this out yet. It's very difficult to recognize the scam, and very difficult to understand how to deal with it when someone is impersonating a person with the exact same appearance, the exact same voice. How do you differentiate? What's the ground source of truth? The Internet used to be the most free place in the world, and now there is so much misinformation and garbage information there. It's not just something I'm worried about from a corporate standpoint. A lot of extremely smart people are working to solve these problems, but I do tend to think it got to the point where you can't trust what you see, and I don't like that, of course, and I don't like where it's going.

Jordan Wilson [00:32:31]:
Yeah. Great. Another great question here. So Monica is asking: do you see insider cyberattacks at medium and large businesses? Something I didn't even think about, right? We're always thinking about these threats from the outside. But what about at large enterprise companies? Is that something that comes up? And if so, what steps should we be taking to mitigate or help prevent that?

Yuri Dvoinos [00:32:57]:
Well, I think this is a separate topic from scams, right? Insider threats have been here forever. Again, I don't have hard data on that, but these Trojan horses, if you will, have been happening for decades. What I have seen are mass targeted scam attacks. Imagine that someone has identified 25 key employees of your company and sent them, almost simultaneously, a message from the CEO saying to do something immediately. All it takes is for one person out of 25 to buy it, and you might be compromising the company's security. So that is something that happens quite often, but unfortunately, I can't comment on the insider threat. I think it's a different beast.

Jordan Wilson [00:34:02]:
Sure. No, that makes sense. So, Yuri, we've talked about a lot in today's conversation. We went over, as an example, not paying as much attention to single-channel attacks and instead verifying them on multiple channels; the concept that with AI, people are hacking humans instead of hacking systems; how easy it can be to replicate someone's writing style; and how people are using large language models to do that and to launch more and more cyberattacks. But as we wrap up, what is the one most important piece of tactical advice that you have for business leaders when it comes to understanding AI's role in scam detection and prevention?

Yuri Dvoinos [00:34:47]:
Just imagine that it is impossible to recognize whether a communication is a scam. It is so good that no matter how critical a thinker you are, it's just not feasible for you to differentiate what's real from what's not. So how can you protect yourself in such an environment? I think, again, it goes back to 2FA, not making hasty decisions, calling back, using a second communication channel. That is what will actually help you verify that you are communicating with someone you can, in fact, trust. I think trust is something everyone online will have to work a little bit harder to gain, and we just have to stay cautious about that.
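Yuri's closing advice, treat every request as potentially fake and require confirmation on a second channel before acting, can be expressed as a simple policy rule. Here is a hedged sketch in Python; the action names, channel labels, and `Request` structure are hypothetical examples for illustration, not a description of any real system from the episode.

```python
from dataclasses import dataclass, field

# Illustrative list of actions that should never be approved
# on the strength of a single communication channel.
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "vendor_bank_change"}

@dataclass
class Request:
    action: str                      # what is being asked for
    arrival_channel: str             # channel the request came in on, e.g. "email"
    confirmed_channels: set = field(default_factory=set)

def requires_out_of_band(req: Request) -> bool:
    """A high-risk action must be confirmed on at least one channel
    other than the one the request arrived on (a call-back, for example).
    Confirmation on the same channel doesn't count: a compromised inbox
    can 'confirm' its own request."""
    if req.action not in HIGH_RISK_ACTIONS:
        return False
    independent = req.confirmed_channels - {req.arrival_channel}
    return len(independent) == 0
```

The key design choice is subtracting the arrival channel before counting confirmations: replying "yes, that was me" to the same email thread adds no security, which is exactly the scenario the multichannel advice guards against.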

Jordan Wilson [00:35:44]:
I love that. So much good advice for business leaders, especially with this sweeping wave of very sophisticated AI scams out there. Some great advice here today on the show. So, Yuri, thank you so much for joining the Everyday AI Show. We really appreciate your time.

Yuri Dvoinos [00:36:07]:
Thank you for having me.

Jordan Wilson [00:36:08]:
Alright. Hey, as a reminder, everyone, there's a lot more. If this was helpful, please let us know. Repost this, tag someone in your organization who needs to hear this, and also go to youreverydayai.com. We're going to be sharing a lot more about what Yuri talked about in today's episode, a lot of links to some of the stories we discussed, as well as more about Aura so you can check out a little bit more about what they do. So thank you for joining us, and we hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
