Ep 26: The Impact of AI on Society: Making Us Smart or Dumb?




Will AI ever make its way into financial tech? And if so, what does that mean for the rest of us? That's one of the things we're going to be talking about today on Everyday AI. This is a daily livestream, podcast, and newsletter for everyday people trying to figure out what the heck is going on in the world of AI. Maybe we're going to get some answers, because today I have a guest, and he's a special guest, and I'll tell you why. Piotr Warchol is joining us. Piotr is a programmer analyst at FIS. Piotr, what's up? Good morning. Thanks for joining us.

Piotr [00:00:48]:

Good morning, Jordan. How are you doing?

AI Leaders Warn of Extinction Risk

Jordan [00:00:50]:

Great. The reason why Piotr is a special guest, and I'm very open and honest on the show: Piotr is married to my cousin. But that's not the reason he's on the show. The reason he's on the show is that he has a master's in computer science with a concentration in artificial intelligence. So I'm like, Piotr, you have to come on the show, because this is kind of what you do and what you know. So just FYI, that's why Piotr's on the show. And it's going to be great, because I know he has some insights. As a reminder, this is a live show. If you have a question for me or Piotr, please drop a comment and we'll do our best to tackle it.

So before we get into all that, let's talk about what's actually happening in the world of AI. This is fascinating: a 22-word statement warning against the risk of extinction. All right, so let's talk about this news piece. This is some of the top AI researchers and CEOs warning about a risk of extinction.

So not just some random guys. We're talking about the Google DeepMind CEO and the OpenAI CEO, Sam Altman. These are the people probably most responsible for driving the tech forward. Let me read this 22-word statement, and then I'm going to get Piotr's take on it. The statement reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Piotr, seeing that statement, what are your thoughts? Are we worried about a risk of extinction yet?

Piotr [00:02:25]:

I don't think a mass-level extinction is imminent yet, or something we should be scared of. But if we don't do anything about it, there's definitely something to be said about how effective AI already is at doing everyday tasks. We're allowing it to learn from trillions of data points, and allowing it to move any further unchecked can be very dangerous. I agree with that statement.

Jordan [00:02:53]:

Yeah, it's going to be very interesting to see how that pans out. Christopher dropped a comment. Good morning, Chris, thanks for being here. I hope you enjoy today's conversation. The second news piece comes as no surprise to me, but it's still worth highlighting. A recent survey says students are ditching their tutors at an all-time high and just going with ChatGPT. This study came from Intelligence, and it said about 40% of students have already replaced human tutoring with ChatGPT. Piotr, what's your take on that?

Piotr [00:03:26]:

Coming from someone who did their master's completely online, and never even went into a single classroom for it, I can see the benefits of this immensely. Large language models are able to give you anything you want at a moment's notice. You don't have to text someone and hope they're awake, or hope they're available to talk to you for half an hour. You can just type it into a little chat window and say, hey, I want to know about this topic, this is what I think I know, what's wrong, what's right. And it can give you that information quickly and accurately. It's just amazing.

Deepfakes at an All-Time High

Jordan [00:04:03]:

Yeah, it is amazing. Sayetta, we're going to get to that question; it's a great one, asking if there are potential risks or downsides. We're going to get to that, don't worry. Our next piece of news.

A recent Reuters report looked at deepfakes in politics. If you think it's bad now, just wait until the US primary election cycle heats up here in a couple of months. Anyway, this Reuters report looked at deepfakes being at an all-time high, being used, as an example, for a Democrat saying great things about a Republican, or a Republican saying great things about a Democrat, statements that obviously don't exist. Piotr, this deepfake technology, where it looks like someone's actually saying something they're not, cloning voices, all of this, with your background, do you see it just becoming a bigger and bigger problem as time goes on?

Piotr [00:05:01]:

No doubt in my mind that this is going to be a major issue, and really what we're going to have to do is lean on news outlets to be very thorough with what they allow to be published and what they don't. Platforms like Twitter and Facebook are going to have to become, well, it's hard to say, because censoring is a very dangerous topic to broach, but you have to think about the implications of what some of these deepfakes could do. You could swing markets by points. Just the implications of how these deepfakes can be used is really terrifying, and it should be taken seriously by the Senate and by the government in how we approach these kinds of problems.

Jordan [00:05:47]:

Yeah, and Piotr's not exaggerating when he says this can cause a swing in the market. We actually saw this, and we shared it in the newsletter last week: someone made a deepfake, and not even a great one, of the Pentagon being attacked, and the market started to tank, even if only momentarily. So that is a real concern, and a great point that Piotr brings up.

Amex Turns to AI for Customer Service

This is kind of our last news piece for the day, and I'm sure people will love it, because there's nothing we love more in customer service than talking to chatbots. American Express just said today that they're going to start using generative AI in customer service. A lot of fintech companies, which is a bit of Piotr's recent background, have been very cautious about using generative AI, and that makes sense. But Piotr, what's your thought on Amex, one of the biggest companies in fintech, saying, hey, we're going to start using generative AI now?

Chatbots in Customer Service: Finding Limits

Piotr [00:06:45]:

Sure. I guess it really depends on their customer, and how inward-facing the product is that they're going to allow a chatbot to take over troubleshooting for. Let's say it's something very general, like a customer calling in about their credit card being unable to be used; that, I think, could very easily be taken over by an AI bot. But for someone who is working with their technology internally, someone like me who's writing code, when we run into an issue, it's different.

I can't talk to an AI, because you guys don't share information about your proprietary software with that AI, so that AI is going to be useless to me. I need a human being on the other end who knows their system and knows how to work with it. So I guess a line has to be drawn about which customer-facing roles are allowed to use an AI, and which ones should have a dedicated person who's trained and able to answer questions thoroughly, quickly, and effectively, without me sitting on a phone line for hours on end.

Piotr's Role as a Programmer Analyst

Jordan [00:07:51]:

Yeah, that's a good point. You actually touched on open access versus closed access there. So talk a little bit about your role as a programmer analyst at FIS, for the everyday person. We hear those buzzwords; what is it that you're actually doing on a day-to-day basis as a programmer analyst?

Piotr [00:08:14]:

Sure. I'm still kind of going through and learning, but at FIS, what I do is write in a language called COBOL, primarily. We take tasks or projects from the bank; the bank I work with is TD Bank, and they have issues with certain applications or programs. I have a project manager, and we talk back and forth and figure out the issue. We write the code, we produce test artifacts for all the test cases we need, and then we implement that code. It just goes through the pretty standard lifecycle of code.

Jordan [00:08:52]:

Yeah, kind of explain to people, without getting into the specifics of COBOL, how this technology even works. I might get this wrong, Piotr, which is why I'm counting on you to fact-check me here. The entire Internet runs on different code, right? If you're watching this on social media, there are obviously hundreds of lines of code running in the background.

But with financial institutions, a lot of the time they're not using the same type of code, or the same language, that runs the rest of the internet. A lot of the time that code is proprietary, something a financial institution created so that different banks can talk to each other, or sometimes it's a language a company itself helped create just for specific use cases for its clients. So, at least in the fintech space, is it pretty common to have code that is exclusive to banks, and companies that only use that specific code? Is that right?

Large Language Models Revolutionizing Code Efficiency

Piotr [00:10:03]:

I would say yeah, absolutely. And it can be very frustrating, because there's been such a boom in large language models like Bard and ChatGPT, where I would love to feed in my code and make my daily tasks significantly more efficient, and be able to play around with things, because the beauty of code is that you're able to solve a problem many different ways, right? And I would love a different perspective on how to write code.

ChatGPT could do that in seconds. I'll say, this is how I wrote my code, is there another way to do it? And it can give me a different way to attack the problem. I'll say, I don't like that, can you change this, can you change that? And that can be done in seconds, versus me struggling through the logic for hours. The efficiency of a job like mine would be amazing if I were able to feed in proprietary code safely, obviously in a way where the rest of the world can't access that code, but where I could still use the technologies that are becoming available and become much more efficient at my job. That'd be very exciting.
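Piotr's point, that the same problem can be solved many different ways and a model can propose an alternative in seconds, can be sketched with a toy example. The function names and the task here are hypothetical, purely for illustration, not code from the episode:

```python
# Two equivalent ways to sum the squares of the even numbers in a list.

def sum_even_squares_loop(nums):
    # The "first draft" a developer might write by hand.
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_expr(nums):
    # The alternative a tool like ChatGPT might suggest: same result,
    # expressed as a single generator expression.
    return sum(n * n for n in nums if n % 2 == 0)

print(sum_even_squares_loop([1, 2, 3, 4]))  # 4 + 16 = 20
```

Both versions compute the same answer; the back-and-forth Piotr describes is essentially asking the model to move between forms like these until one fits.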

Jordan [00:11:14]:

Piotr, you bring up a great point there, because I think there might be a digital divide being created. Again, with companies like yours at FIS, there's a reason they don't want their code...

Piotr [00:11:30]:

I agree.

Jordan [00:11:31]:

Even being used to train a model, right? There's certain code that just needs to stay in house. But do you see a great divide happening? And I'm not just talking about programmers, but in general, because a lot of big companies are banning the use; there were stories that came out, I think Chase banned ChatGPT and other large language models. Part of it makes sense, right? But do you think there's going to be a divide in access to information, growth, all of these different things, between companies or sectors that allow this technology and those that don't?

Piotr [00:12:11]:

Sure. And I think this feeds back into the apocalypse that AI could cause. I could really see the Senate, or some bigger governing body, swinging very hard the other way and saying, we don't know how this is going to affect companies. If a lot of companies build their business on a need for AI right now, and then the Senate says, we can't use AI, we want to stop and see what's going to happen, that could swing very hard the other way, and companies could fall apocalyptically because of their dependence on AI.

Jordan [00:12:47]:

That's a very real possibility, too, because you see it now at these bigger companies, like Nvidia and Microsoft. Some of them have nearly doubled in the past six to nine months. I'll have to look it up, but I think Nvidia is now worth almost a trillion dollars and has almost doubled in nine months. I think a large part of this growth is in the companies that are very open about not just using the technology, but developing it.

But yeah, then what happens if there's regulation that just shuts it down? It's hard to play that scenario out in your mind, but what would happen if there were a huge ban tomorrow, and all these companies creating hundreds of billions of dollars of new business suddenly hear, oh, this technology is banned?

Piotr [00:13:42]:

Yeah, right, I agree.

Jordan [00:13:44]:

I don't know. Would it be an apocalypse? Maybe not, but it would potentially be an economic collapse.

Piotr [00:13:51]:

I mean, it depends on what happens, in which sectors it happens, and on what kind of scale. But something has to be done, and it'll be interesting to see how companies pivot, what they allow the technology to be used for, and what they don't. I can see certain big corporations that are being hesitant falling behind if they don't catch up quickly enough and don't allow integration in the components of the business that would let them get certain jobs done much more quickly and effectively.

Jordan [00:14:25]:

Yeah. Piotr, I think that you're in a very unique position because you graduated with your Master's in 2022, correct?

Piotr [00:14:36]:


Jordan [00:14:36]:

Right. Okay. So you came out as a master's graduate in 2022 with a concentration in artificial intelligence, and then a matter of months later there's this huge boom in AI. Did you find people looking to you for answers, whether at your company, or friends, family, all that kind of stuff? Was there a certain, not pressure, but a weight that came with it, where all of a sudden artificial intelligence is booming, and here you are, a recent graduate with a master's focused on AI? What has that been like for you so far?

The Fascinating Growth of Large Language Models

Piotr [00:15:20]:

I mean, it's really been fascinating. Just as I was graduating, I took my natural language processing course, which deals with large language models and things like that. And the way it was explained to me, the technologies obviously existed, but they weren't nearly effective enough to replace jobs yet; they were struggling with certain things, yada, yada, yada. And then give it a couple of months, and all of a sudden you have ChatGPT, which can write an essay on any topic in the world in seconds.

The growth of large language models has been truly amazing. And hearing the people around me talk about it, they bring up Terminator a lot: salvation is coming, the world is ending, and the robots are going to take over. I don't think the robots are going to take over unless we allow them to. But it's been fascinating to see what I learned about become so commonplace, with AI used as such a buzzword now. It's been cool.

Jordan [00:16:30]:

Yeah. If any company wants to see their stock go up, just mention AI in your earnings calls as much as possible; it works, there are studies on it. So another question, Piotr. As someone who, throughout your education, had to do a lot of coding, a lot of problem solving and building applications, what was your first reaction when you saw ChatGPT able to do things in a matter of seconds or minutes that might take even a very skilled developer hours? What was your first reaction when you saw what this GPT technology was capable of?

ChatGPT Speeds Up Coding Troubleshooting Process

Piotr [00:17:14]:

It was fascinating. A lot of how I learned to code was basically Googling a lot. You write your code, and then you become very good at troubleshooting it. You can either interactively debug it, or you can take the error message, throw it into Google, and figure out what causes that error. And basically what ChatGPT does is eliminate my need to copy that error, throw it into Google, and go back and forth with that process, which can take very long.

I can just throw my problem into ChatGPT, and it won't write the code perfectly for me by any means, but it gives me such a good baseline of where I need to start. It takes away so much of that, like, do I use this kind of statement, do I use an if-else statement here? It just takes away so much of the guesswork of how to attack a problem, and it makes you so much more efficient at attacking it. To answer your question: my homework would have been done so much faster.
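The troubleshooting loop Piotr describes, write code, hit an error, paste the error into Google or ChatGPT, apply the suggested fix, can be sketched with a deliberately common Python error. This is a hypothetical toy, not code from FIS:

```python
nums = [10, 20, 30]

# The kind of one-liner that produces the error you'd paste into a search:
#   nums[3]  ->  IndexError: list index out of range

# A typical fix such a search turns up: bounds-check before indexing.
def safe_get(seq, i, default=None):
    # Return seq[i] if i is a valid index, otherwise the default.
    return seq[i] if 0 <= i < len(seq) else default

print(safe_get(nums, 1))   # 20
print(safe_get(nums, 3))   # None
```

The value of the chat workflow is that the error message, the explanation, and a candidate fix like `safe_get` arrive in one round trip instead of several searches.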

Jordan [00:18:19]:

Yeah. I go back to when I was personally just starting to use ChatGPT a couple of months ago. I'm not a programmer, I'm not a developer, but I knew it could work for that, and I was just trying to build little games or applications. I built something in three minutes and sent it to Piotr. I don't know if you remember this, Piotr; I think it was a Pong clone, or some 80s arcade game. And I sent it to Piotr and said, hey, this took me three minutes. Do you remember what your reaction was?

Piotr [00:18:52]:

That would have taken me weeks to code.

Jordan [00:18:54]:

Which is wild, right?

Piotr [00:18:57]:


Jordan [00:18:58]:

Someone like me, with very little technical knowledge, can go in and create something that works. We'll have more about that in the newsletter. But I did want to get to Sayetta; you had two questions, actually, so let's tackle them one by one. Piotr, I'll throw this to you. Sayetta asks: what emerging AI technologies do you think have the potential to revolutionize everyday life in the near future? Great question.

Piotr [00:19:23]:

It's a fantastic question. What I'm really interested in, and I haven't delved too deep into the topic, is allowing plugins to work in your browser. I think that would be fascinating; the possibilities are endless. It basically becomes an app store, where whoever is more creative with how to use AI in an everyday sense, when we Google things and go through our day-to-day lives, shapes what we allow AI to do. Plugins are already so fantastic without AI; they've made my life so much better. And with how you can bring plugins and AI into your day-to-day life, you could have it write your emails for you automatically, on a whim. It's fascinating.

Jordan [00:20:12]:

Yeah, and I think that's what's coming. So, Sayetta, even my take on that: you saw Microsoft last week unveil, I think it was called Copilot, which essentially works AI into the operating system itself. Not just a browser, not just, oh, I'm going to ChatGPT, but what Piotr referenced there; that's where I think it's going. It's going to be baked into your operating system, and at any point, no matter what application you're in, you're always going to have an AI assistant there, which I think is really going to change the game.

Christopher, another great question: how can we ensure that the development and use of AI systems are inclusive and unbiased, given that AI algorithms are trained on data sets that could potentially reflect and perpetuate existing human biases? Christopher, that's a fantastic, hard-hitting question at 7:50 in the morning, but let's try to tackle it. Piotr, how can we ensure that AI systems aren't just going to reflect human biases? Or is there no way?

Piotr [00:21:22]:

I tackled a very similar issue in my studies, and a lot of what we were doing was with neural networks. Essentially, there are multiple ways you can code AI. You can code it in a way where it's very easy to tell someone nontechnical: this is exactly why the machine says this data point works this way. Whereas a lot of what's being coded now, neural networks, which have had such a boom, is literally a black box: you give it data and it's going to spit out data.

We don't exactly know why it's giving us this data, but it's very good at giving it to us. And I think a fantastic way of attacking these kinds of problems is to create a hard set of rules or questions, where, after we iterate on the AI, we ask it the exact same questions. And if it deviates too hard one way or the other, if it isn't able to answer that question repeatedly in the same way, then it's biased. Again, it's so difficult to tackle, but we want the AI to answer questions in ways we feel comfortable releasing to the general public, without biases or things like that perpetuating.

Because really, the most important people in the new world with AI are the ones feeding it the data, and why they're giving it that data. You can take away certain data and completely change how the world thinks, literally, because people are going to be so heavily reliant on AI.
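The fixed-question check Piotr sketches, re-asking a model the same probe questions after each iteration and flagging answers that drift from an approved baseline, could look something like this. The model's answers are stubbed out as plain dictionaries; the probe questions are invented for illustration, and nothing here is a real API:

```python
def drifted_questions(baseline, current):
    # Both arguments map probe question -> the model's answer.
    # Returns the set of questions whose answers changed (or went
    # missing) since the approved baseline: candidates for bias review.
    return {q for q, a in baseline.items() if current.get(q) != a}

baseline = {
    "Should loan approval depend on the applicant's zip code?": "no",
    "Summarize this resume.": "summary-v1",
}
after_retraining = {
    "Should loan approval depend on the applicant's zip code?": "it depends",
    "Summarize this resume.": "summary-v1",
}

flagged = drifted_questions(baseline, after_retraining)
print(flagged)  # the zip-code question drifted and gets flagged
```

In practice the comparison would likely be fuzzier than exact string equality, but the idea is the same: a fixed probe set turns "has the model's behavior shifted?" into a repeatable regression test.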

Jordan [00:22:54]:

Yeah, and Chris, to follow up, I'll make sure to go to the thread on LinkedIn and leave a comment. There was a great one, and I've referenced this on the show a couple of times: the OpenAI CEO, Sam Altman, did a two-and-a-half-hour podcast with Lex Fridman, and he probably spent half an hour specifically talking about these biases and human feedback and all that.

Is AI Making Us Smarter or Dumber?

It's a fascinating follow-up to the answer Piotr gave as well. Another question; Sayetta always comes with great questions, so thank you for always tuning in. She asks: are there any potential risks or downsides to relying too heavily on AI for decision making in our own personal lives? Oh, I have a great take on that. But Piotr, what's your take? Are there downsides to using AI too much?

Piotr [00:23:45]:

Even personally, I think a great way to approach that is that people should be taught how to do things without AI, the same way we're taught how to do math without a calculator. Then you let AI help you in your day-to-day life. You should still understand the core of what you're doing and how it works before just tuning solely into AI, because otherwise you're literally a slave to the system.

You don't really know why it's giving you these answers, and you're not able to interpret those answers yourself. You should definitely be able to do research on any subject and have a general baseline understanding of it before you just let some search hand you whatever it wants, without doing your due diligence.

Jordan [00:24:27]:

Yeah, I think that's a good point. I've talked about this before, but Sayetta, to get back to that question, I think AI is kind of like the Internet, right? I always think, did the Internet make us smart or dumb? And I think the same thing about AI: is AI making us smart or dumb? I think it depends on what you're doing outside of your usage of AI.

I think if you're still seeking to understand the knowledge, so even if we go back to the news story about people ditching their tutors and just using ChatGPT, my thing is: are you actually learning whatever your ChatGPT tutor tells you, or are you just using it for, okay, there's my answer for this paper, there's my answer for this test? Or are you actually ingesting the information and understanding it? Kind of like what Piotr said about doing math without a calculator, I think it all depends on whether you're actually using that information. I know we went a little long, Piotr, but I think we made it through all the questions. I want to get one more question in before we end this episode.

Piotr [00:25:40]:


Jordan [00:25:41]:

As someone in the field, you understand AI probably better than almost anyone listening to this show, just because this is where your education is and what you're doing on a day-to-day basis as a developer and programmer. But where do you see this technology going in the next couple of years? Will it get to the point where many jobs, in theory, could be automated? Do you think we're at the height of the climb of AI? What's your take on where this technology is heading in the near future?

Piotr [00:26:21]:

I mean, the growth is truly exponential. There are a ton of brilliant people who are going to come up with so many fascinating ways to use these large language models to help everyday people. The real question is, and there are so many memes about how the Senate handled, like, the Snapchat hearings and all that, but we really need a governing body to come in responsibly, with a knowledge base, and help guide where this technology goes.

Because if it goes unfettered, apocalyptic destruction of jobs is definitely not off the table, I don't think. It's going to be very fascinating to see how the problem is tackled, and I'm very excited to see where it goes.

Jordan [00:27:11]:

Yeah, you bring up a great point, which should be a whole other conversation, but just in general: you have these big CEOs, the Microsofts, the OpenAIs, everyone, openly saying, yes, regulate this industry. But on the flip side, like you said, Piotr, and I'm never shy about this, I don't think our current government in the US is set up to not just regulate, but even understand AI, if you've ever tuned into any line of questioning in the US Congress.

Piotr [00:27:50]:

Google, Snapchat; there were a couple of instances, right?

Jordan [00:27:55]:

There are a couple of people who understand, but for the overwhelming majority, you have actual US senators asking questions of these CEOs in facepalm moments. A lot of the people running our country don't even understand how the Internet works.

Piotr [00:28:09]:

Most of them need their kids to help them get onto the Internet and stuff.

Jordan [00:28:13]:

Yeah, it's crazy. All right, well, we made it to the end of the show. Piotr, thank you so much for jumping on the show. Really appreciate having you on.

Piotr [00:28:24]:

No, it's been great. Thank you so much.

Jordan [00:28:25]:

Yeah. All right, so as a reminder, if you are still watching or listening, please go to youreverydayai.com. We have a daily newsletter, and we're giving away two year-long subscriptions to ChatGPT Plus, the premium version of ChatGPT. It's a little expensive, but we're going to pay for it for you. So go to youreverydayai.com and sign up for the newsletter; we have that information in there. Thank you so much for tuning in today, and we hope to see you tomorrow and every day at Your Everyday AI. Thank you.

AI [00:29:01]:

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating; it helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.
