Ep 188: AI in the Classroom – Focus on literacy, not detection

Episode Categories:

Navigating the Ethical Integration of AI in Education

As artificial intelligence (AI) continues to reshape various industries, its impact on education is a topic that demands attention. In a rapidly evolving landscape, businesses, educational institutions, and decision-makers face the challenge of effectively integrating AI into educational settings while upholding ethical standards and preparing students for the future. Today's episode of the "Everyday AI" podcast sheds light on the ethical considerations and challenges surrounding AI integration in educational settings, providing valuable insights for businesses and decision-makers.

Educational challenges in AI integration

The podcast episode delves into the complexities of integrating AI technology into classroom settings. The guest speaker, an experienced educator, emphasizes the need for clear guidelines to navigate the use of AI in education. The discussion highlights the challenges associated with creating and implementing policies that govern student AI usage, addressing concerns related to academic integrity, transparency, and the evolving nature of generative AI technology.

Ethical considerations and student preparedness

The conversation underscores the importance of preparing students for the impact of AI on writing and editing. The guest speaker calls attention to the lack of dedicated AI courses in higher education and advocates integrating AI literacy into existing courses to equip students with the necessary skills for the future workforce. The episode also notes that instructors must adapt to the evolving AI landscape and incorporate AI ethics into the curriculum to guide students in using AI tools ethically and effectively.

Challenges and opportunities for businesses

For businesses and decision-makers, understanding the ethical considerations and challenges associated with AI integration in education is crucial. The podcast episode highlights the slow adoption of AI in education and the need for faster ways to introduce AI-related learning opportunities, such as microcredentials and workshops. By recognizing the challenges faced by educators and institutions, business leaders can identify opportunities to support the ethical and effective use of AI in educational settings.

Future-proofing policies and collaboration

The conversation between the podcast host and the guest speaker emphasizes the need for future-proof guidelines and continuous updates to accommodate the speed of change in AI technology. Furthermore, the episode underscores the importance of collaboration between tech companies and educators to create walled gardens and closed systems specifically trained for their intended purpose in educational settings. This collaboration can facilitate the development of ethical AI programs tailored to educational needs, ensuring that students are equipped with the skills required for an AI-integrated future.

Empowering educators and decision-makers

Ultimately, the "Everyday AI" podcast episode provides valuable insights for businesses and decision-makers seeking to navigate the ethical integration of AI in education. By understanding the challenges, ethical considerations, and opportunities discussed in the episode, decision-makers can empower educators with the resources and support needed to effectively integrate AI into educational settings while prioritizing ethical usage and student preparedness.

In conclusion, the ethical integration of AI in education requires thoughtful considerations, collaboration, and continuous adaptation to the evolving AI landscape. By leveraging the insights from the "Everyday AI" podcast episode, businesses and decision-makers can play a pivotal role in supporting ethical AI integration in educational settings, ultimately preparing students for the AI-driven future.

Video Insights

Topics Covered in This Episode

1. Challenges of Integrating AI into Education
2. Ethical Use of AI in Education
3. Preparing Students for AI Integration
4. Educator's Perspective on AI in Education


Podcast Transcript

Jordan Wilson [00:00:16]:
Here in the US, a new school semester has started, which means the conversation is going to start again around generative AI in the classroom. Should it be used? Should it be banned? How should you monitor it? How should you govern it? How can we do this ethically? There's so many questions, and it's a never-ending discussion and one that I'm personally passionate about, but I'm not the only one. So we're gonna be having a discussion today on AI in the classroom and how we should be focusing on literacy and not AI detection. I'm excited for today's show. I hope you are too. So if you're joining us on the livestream, thank you for joining us. Please make sure to get your questions in. As always, if you're joining us on the podcast, thank you for that as well. Your support has made us a top 10 tech podcast on Spotify, so we super appreciate that.

Jordan Wilson [00:01:07]:
As always, I say this every day: check your show notes. We hide so many more great resources in the show notes. So if you care about AI in education and how it's working and different takes on it, you know, we have a whole section on our website of other podcasts where we've talked to other experts on their take on AI in education. So if you enjoy this one, make sure to check out your show notes as well. Alright. Before we get to that, we're gonna do what we do every single day and start with what's going on in the AI news.

Jordan Wilson [00:01:38]:
So if you haven't already, make sure to go to youreverydayai.com. Sign up for the free daily newsletter. We'll be recapping not just this AI news, but also some depth and detail from our conversation today. So let's start with Sam Altman. Yeah, Sam Altman. So Sam Altman says in a new interview that OpenAI doesn't need New York Times data. Sam Altman, CEO of OpenAI, addressed the New York Times lawsuit against the company for alleged copyright infringement and expressed surprise at the legal action.

Jordan Wilson [00:02:09]:
So Altman spoke at the World Economic Forum in Davos, and he stated that training on the New York Times data is not a priority for OpenAI and that they do not need to use the New York Times data to train their AI models. Altman also, in related, I guess unrelated, but related for sure, news, changed his tone a little bit on artificial general intelligence. So in an interview, he did say that he no longer believes in a sudden and radical disruption by AGI, but stated that it will be a continuous improvement process and humans will adapt. He'd previously written about the potential harms and benefits of AGI, but now has a much more measured outlook. Alright. Next piece of news that matters is Google DeepMind is again flexing its math skills. A new artificial intelligence system, developed by Google DeepMind, which is Google's AI arm, is called AlphaGeometry, and it can solve complex geometry problems at a level comparable to a human gold medalist at the International Mathematical Olympiad.

Jordan Wilson [00:03:19]:
I honestly did not know there was a gold medal at the International Mathematical Olympiad, but there is. AlphaGeometry combines a neural language model with a symbolic deduction engine to solve geometry problems. This new system performed well on challenging problems and can even discover new mathematical theorems. Yeah, I had a whole show on that once, and it kind of blows your mind to think about how generative AI can create new math problems. It's fascinating. Alright. Our last piece of news is Samsung and Google Cloud made an announcement on their multiyear agreement for more AI smartphones. So Samsung has announced their partnership with Google Cloud to integrate generative AI technologies into their latest smartphone series, including the much-talked-about Samsung Galaxy S24, which is probably gonna be kind of the most popular and first smartphone to market with that AI edge, or, you know, edge AI on the device.
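For the technically curious, here is a rough sketch of the propose-and-deduce loop that kind of neuro-symbolic system is described as using: a symbolic engine derives everything it can, and when it stalls, a language model proposes an auxiliary construction and deduction resumes. This is our own illustration under those assumptions, not DeepMind's code; `deduce`, `propose`, and the toy facts below are hypothetical stand-ins.

```python
from typing import Callable, Optional, Set

def solve(facts: Set[str], goal: str,
          deduce: Callable[[Set[str]], Set[str]],
          propose: Callable[[Set[str]], str],
          max_steps: int = 10) -> Optional[Set[str]]:
    """Alternate symbolic deduction with model-proposed constructions."""
    facts = set(facts)
    for _ in range(max_steps):
        facts |= deduce(facts)          # saturate: derive everything the engine can
        if goal in facts:
            return facts                # goal reached by pure deduction
        facts.add(propose(facts))       # neural model adds an auxiliary construction
    return None                         # unsolved within the step budget

# Toy usage with stand-in callables (the real system pairs a geometry
# deduction engine with a trained language model):
proof = solve({"A", "B"}, "C",
              deduce=lambda f: {"C"} if "M" in f else set(),
              propose=lambda f: "M")
print("solved" if proof else "unsolved")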

Jordan Wilson [00:04:17]:
So this partnership aims to enhance the user experience by providing advanced AI features such as text and image editing, real-time translations, and intelligent search capabilities. Samsung is also exploring the use of Google's Gemini models for more complex tasks and on-device language processing, enhancing the efficiency and capabilities of their smartphones. That was a mouthful, y'all. A lot going on as always in AI news. So if you wanna know more about those, and we always have so much more, make sure to go to youreverydayai.com. Sign up for that free daily newsletter. And hey, while you're there, it's literally like a free generative AI university. We have more than 180 shows with expert guests taking deep dives into all kinds of topics like we're doing today with our guest.

Jordan Wilson [00:05:03]:
So I'm ready to talk AI in the classroom. Right? New semesters, you know, just kicked off either this week or are just around the corner. And this is going to continue to be a topic. If I'm being honest, y'all, I thought we would have already figured this out at the college level, but it's not that easy. Right? There's so many complexities and difficulties that educators are having to go through. So with that, I'm excited for today's guest. So please help me welcome as we bring her onto the show. There we go.

Jordan Wilson [00:05:34]:
We got her. Laura Dumin. Laura Dumin is a professor of English and technical writing at the University of Central Oklahoma. Laura, thank you so much for joining the show.

Laura Dumin [00:05:43]:
It's good to be here.

Jordan Wilson [00:05:44]:
Alright. Hey, can you tell everyone a little bit about your role, and, you know, a little bit about kind of the courses that you teach there at the University of Central Oklahoma?

Laura Dumin [00:05:54]:
Sure. I have a lot of different roles, but the one that kinda matters today is that I am the AI coordinator for our university, which means that I work with a group of people to make sure that if there are issues happening, we have a chance to address them. If there are educational opportunities for faculty or staff or for students, I can help put those together, or I can be there to help moderate, or, you know, whatever's needed for that. And with my students right now, I'm actually back in the first-year English composition classroom. I'm very excited about that. And I am also working with graduate students. So I teach a range of intro to English and then technical writing courses as well as a few graduate classes.

Jordan Wilson [00:06:45]:
All over the place. All over the place. I love that. Yeah. And, you know, hey, if you're joining us live like Tara, who says she's excited for the conversation, thanks for tuning in. Make sure to get your questions in now. If you have any questions about how AI is used or should be used or shouldn't be used, we'd love to hear your comments, feedback, and questions.

Jordan Wilson [00:07:01]:
So let's just start at the top, maybe, Laura. At least at your university, help us understand, like, how is generative AI being viewed right now?

Laura Dumin [00:07:13]:
Sure. I think at our university, and probably most universities, you've got some faculty who are really on board. You've got some faculty who are kind of in the middle, and then you've got faculty who haven't quite gotten on board yet for whatever reason. And I think that spectrum makes it both difficult and also a really rich space for us to engage with each other, because there are some really great conversations that are happening. There are faculty who aren't on board yet, but who have really good reasons for it. And so those conversations can get really rich. But I think, you know, as we see that spectrum, it also can make it difficult for students, because they don't necessarily know what is allowed in each class, because, you know, professors can do whatever they want to with AI. So, you know, my big thing is let's get guidelines in place.

Laura Dumin [00:08:07]:
Have a syllabus statement of some kind that talks about AI, have guidelines on your assignment sheets so that students know what's allowed or disallowed in the class. And that way, they're less likely to accidentally step across the line into academic misconduct, and, hopefully, they're also less likely to purposely step across the line into academic misconduct.

Jordan Wilson [00:08:28]:
Yeah. Not that students would ever try to always be one step ahead of the teachers in the world. Right? That's never a thing. You know, I'm curious, Laura, because what you said there, to some, it might sound so simple. Right? Get guidelines on AI. But I'd imagine that it is multifaceted and not very easy. Can you talk a little bit about the challenges, and, you know, maybe not just specifically at the University of Central Oklahoma: what are the challenges of getting guidelines on generative AI usage in higher education?

Laura Dumin [00:09:03]:
Sure. I will take one of my assignment sheets as an example. So in a comp one class, this is a space where students are usually learning how to write at the college level. They're learning to find their voice. They might be learning to do a little bit of research and do good citation. Some faculty feel like brainstorming and drafting have to be done by a human, possibly by hand. And other faculty don't see that as such an issue. So as you're thinking about putting guidelines together, they might even be very different across classes.

Laura Dumin [00:09:39]:
Even in the same class, just different instructors. And it can also be one of those things where, if you haven't had a chance to play around and experiment with the different AI programs, maybe you don't know what they can and can't do, which also makes it hard for you to know what your students should or shouldn't do. So for a lot of us, it takes time and real thought to go back to: what do I actually want my students to come away from this assignment with? Okay, so how might AI augment some of that? How might we need to have human-only moments? And so it's not just as simple as saying, yes, you can use AI, no, you can't use AI. It's really sitting down and thinking about it. So for my assignment, we've got topics, so brainstorming, rough draft, peer review, final draft, and then a reflective memo on the writing process.

Laura Dumin [00:10:28]:
And AI fits into those spaces for me in different ways. So each piece of the assignment sheet has AI guidelines for how students can or can't use it in that moment. And, you know, again, that's not something that you just sit down and draft in 3 minutes. That's, you know, maybe a day-long or a week-long process to really think about each assignment sheet.

Jordan Wilson [00:10:48]:
Yeah. And, you know, I'm curious, because what Alfonso is saying here is, with the speed of change, policy can be difficult. How much does that complicate things? Because, you know, as a daily podcast that covers generative AI, we can barely keep up with all these big advancements, new models, new capabilities, whether we're talking, you know, the text-to-text of the ChatGPT and Google Bard world or, you know, the Midjourney, DALL-E side. Like, how difficult is the speed of generative AI technology? How difficult is that for the education system to try to continually create policies that are both effective, ethical, but also that empower students?

Laura Dumin [00:11:31]:
Yeah. I think as much as possible, if institutions and instructors can come up with somewhat future-proof guidelines. So instead of saying we are going to use ChatGPT at this time in this way, we say, alright, with large language models, we might do this. We could do this. Check with your instructor, number one. And then number two is, instructors, I think we need to be willing to put in the time throughout the semester to update our guidelines if we need to. Right now, mine are pretty wide in that I say, you know, brainstorming if it makes sense for you to do so. So everything is with transparency.

Laura Dumin [00:12:12]:
I want to know which program my students used, how they used it, if they liked it, did they get what they wanted out of it. And then we do AI literacy throughout the semester, and it's not always big things. Sometimes it's just, hey, this new program came out. Look what it can do. Let's play with ChatGPT and see what it says. Let's compare it to Claude. You know, those kinds of things.

Laura Dumin [00:12:33]:
So for me, it's having that constant curiosity to see what's going on, but I also realize that's exhausting for a lot of instructors. So I think if we can just find some high-level things, like, you know, I don't want you using AI for the problem set on this assignment; we'll check it, you know, in 3 weeks and see what AI is doing at that time. You know, like just yesterday with the math one that you were talking about. You know, that was brand new. We didn't know it could do that.

Jordan Wilson [00:13:01]:
Yeah. Yeah. It's literally just developing new capabilities day by day. So yeah, the policy side is understandably difficult. So I'm curious. You know, it sounds like your situation is kind of like professors, you know, kind of being on their own a little bit, or, you know, having to come up with their own policies on a class-by-class or department-by-department basis, which I'm sure is, you know, pretty standard across the country. But at least for your own classroom, I would love to hear more. What are your guidelines for the classes that you are creating? How are you telling students, you know, to use it, to not use it, and how are you even focusing on literacy?

Laura Dumin [00:13:42]:
Sure. So I'll go back to that one assignment sheet. And this is the basic guidelines for all of my assignment sheets. So brainstorming, I'm cool with AI if students need it. You know, I realize that the brainstorming is part of the learning process, but sometimes, especially if we're thinking about accessibility, students get caught at that cursor, that "I don't know what to do." And then sometimes they freak out. They procrastinate. So if we can use AI so that they're not procrastinating, that to me is a good thing, but, again, transparency.

Laura Dumin [00:14:11]:
With drafting, they're allowed to use AI to help with their drafts. They can have up to 40% of their drafts be AI-generated, and they need to highlight that text in red. And I do that... I know there are citation guidelines. That's great. But with the red text, we can see if there are large blocks of text that are AI-written, and that becomes a learning space. Why are there large blocks of AI-written text? Was it 11 o'clock at night? The paper's due at 11:59. You didn't know what to write. Did you like the AI better than what you wrote? Did you not know what it was talking about? You just set it in there.

Laura Dumin [00:14:45]:
You know, there are a lot of reasons. And then in peer review, they can put their own work into the AI to get feedback, but they may not put other people's work in. And that's something that I continue to talk to both students and faculty about, because, due to privacy issues, we don't wanna stick anybody else's work in. With the final draft, 15% of that can be AI-generated. Again, red text. With their reflective memos to me, because that is a personal reflection, I don't want them to use AI there. They're also talking about their AI use, what they did, what they didn't do. Same thing with the peer review.

Laura Dumin [00:15:21]:
They actually will give me reflection on how good the AI information was compared to the human-generated feedback. And then I've also added something that I call annotated PDFs. And that information, I have just a real brief discussion of that on my website as well, in the Creative Commons files area. But what I have students do is, every source that they use, they have to turn it into a PDF, and then they have to highlight the quotes that they used, talk about why they used them, and then highlight any other spaces in those articles that maybe gave them an idea or made them think something interesting or led them to another topic. And, yes, I know that AI can do that for them. But for the most part, like, at this point, I have not had any obvious cheating through AI at all since it came online, which I know is a big thing. So my students seem to be willing to go on this journey with me.
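As a practical aside: if an instructor wanted to spot-check a disclosure policy like this one, a minimal sketch might tally the share of red text in a submitted .docx against the 40% draft and 15% final-draft caps described above. This is our own illustration, not Laura's actual workflow; it assumes students submit .docx files, mark AI text in pure red (FF0000), and that the python-docx library is installed. The `ai_share` helper and `draft.docx` file name are hypothetical.

```python
# A rough tally of how much of a .docx is colored red (the class convention
# above for AI-generated text). Scans only body paragraphs; tables,
# footnotes, etc. are ignored for simplicity.

from docx import Document            # pip install python-docx
from docx.shared import RGBColor

AI_RED = RGBColor(0xFF, 0x00, 0x00)  # pure red, per the disclosure convention

def ai_share(path: str) -> float:
    """Return the fraction of words whose run is colored pure red."""
    doc = Document(path)
    ai_words = total_words = 0
    for para in doc.paragraphs:
        for run in para.runs:
            n = len(run.text.split())
            total_words += n
            if run.font.color.rgb == AI_RED:  # rgb is None for uncolored runs
                ai_words += n
    return ai_words / total_words if total_words else 0.0

share = ai_share("draft.docx")  # hypothetical student file
print(f"AI-highlighted share: {share:.0%} (caps: 40% draft, 15% final)")
```

A check like this would only flag drafts for a human look; the pedagogical point above is the conversation the red text starts, not automated enforcement.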

Jordan Wilson [00:16:17]:
Yeah. Yeah. I was curious. I love the approach. Right? It's super specific, but also doesn't seem terribly hard to implement. You know, I'm sure there's, you know, extra challenges on the back end, you know, for you or, you know, anyone else that may be grading or checking it over. But what's the feedback from the students so far for that kind of policy?

Laura Dumin [00:16:39]:
So far, my students have been fine with it. We struggled a little bit last semester implementing the annotated PDFs, because students hadn't done that before, and so there was a little bit of a learning curve for them. But once they got into it, especially my graduate students last semester really appreciated the opportunity to sit down with the PDFs and mark them up. Because, you know, a lot of people say, well, let's just do an annotated bibliography, which is great, but most of us as researchers are not going to sit down and do an annotated bibliography as we're working through a topic. That's something we might publish, but not something we would often do on our own. And I think a lot of us do just go out and read the articles and mark them up and highlight things that we're interested in. So this also gets to a more authentic research process for a lot of students. And I think when students see that a process is authentic to what they actually need to do, as opposed to feeling like busywork, they're also more likely to go on that journey.

Jordan Wilson [00:17:38]:
It's interesting, because I feel that your approach here, Laura, is forcing students to use AI in the correct way. Right? Because I think if colleges and universities just say, oh, you know, we don't really care, just tell us if you do... You know, obviously, I would say the vast majority... you know, we had a comment from Wuzy here saying, I can't imagine being in college right now and not using AI for everything. Right? Yeah. But I guess that leads me to the next thing, kind of the elephant in the room. And I'm gonna try my hardest, Laura, to not go on an accidental hot take on this, but...

Laura Dumin [00:18:26]:
That's okay. Go for it.

Jordan Wilson [00:18:27]:
Yeah. Even Tara is asking about it. So, have you seen issues with plagiarism checkers? Because that's something that universities are using across the country, where you take the text and plug it in and it says, this is this percent AI-generated. Is that something you all are using? And if so, have you seen problems?

Laura Dumin [00:18:46]:
And I'm gonna try not to go on a hot take too here. Okay. So AI detectors. Can we please, as instructors, stop using them? And I say that for a number of reasons. We watched OpenAI pull their AI detector last fall, saying, hey, it doesn't actually work. We've seen all sorts of problems with AI detectors. And some people will say, well, it gets it right most of the time. You know, there's a 1% error rate or a 6% error rate.

Laura Dumin [00:19:13]:
That's not bad. But what happens to the students that we're falsely accusing? And I have watched this over and over in the groups that I'm in. I've had students come and talk to me about it. And what happens is we lose our students' trust, and they stop being willing to go on that learning journey with us if they have to spend time defending themselves. They also... you know, even Jason, our mutual friend, you know, is talking about being accused himself of using AI to write something that he didn't, and how emotionally impactful that is, and not in a good way. So if we want to create a space for our students to learn, to grow, to have a chance to make mistakes and recover from them, I don't think AI detectors are the space. It's a policing space that as instructors we shouldn't be in. So instead, I take the time, you know, with those guidelines that I have, talking to my students constantly about, hey...

Laura Dumin [00:20:19]:
Look what the AI is doing this week. Check out Claude. See what Bard's doing. You know? And we play with these technologies, and we talk about them, and we learn. Hey, ChatGPT gave me this answer, but it really wasn't that great, and I had to spend an hour fixing it. And we also, you know, through the transparency, that gives us the ethical space to say, yes...

Laura Dumin [00:20:40]:
I used it. And then we've got things like Grammarly AI, where students are using it. And a lot of times, we're going to see students who are non-native English speakers using something like Grammarly to, quote, unquote, fix their language, or students who are developmental writers trying to sound more academic, and they're going to get caught up in the AI detector. We've got students who are neurodiverse. We're seeing those students get caught up more in the AI detector. And, of course, as more information gets fed into the AI programs, more of what we write on our own is going to get flagged as AI, because it's going to say, oh, I saw another person write like this, therefore I'm gonna flag it. So if we can just move away from the idea of policing and spend time... and, again, I know we're exhausted.

Laura Dumin [00:21:32]:
I get that, but we have to spend time creating spaces in our classrooms where our students learn how to use the AI ethically and effectively and are willing to tell us where they've used it.

Jordan Wilson [00:21:43]:
You said it perfectly. I'm gonna try to only follow up with facts. Yeah. So even OpenAI's detector, they did shut it down because it was 26% accurate in testing. So, you know, you're more likely to just flip a coin, or to, you know, have a cat walk over and choose between two boxes, than using any AI detector out there. They are fake. They don't work. Alright.

Jordan Wilson [00:22:08]:
That's enough. I got it out of my system.
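To put those detector numbers in perspective, here is a quick back-of-the-envelope sketch (our illustration, not from the episode) of how even the "small" 1% and 6% error rates Laura mentioned translate into falsely accused students at classroom scale. The 500-paper figure is an assumed example.

```python
# Expected false accusations = honest submissions x false positive rate.
# The submission count below is illustrative; only the 1% and 6% rates
# come from the conversation above.

def expected_false_accusations(honest_papers: int, false_positive_rate: float) -> float:
    """Expected number of honest papers wrongly flagged as AI-written."""
    return honest_papers * false_positive_rate

for fpr in (0.01, 0.06):
    flagged = expected_false_accusations(500, fpr)
    print(f"500 honest papers at a {fpr:.0%} false positive rate -> "
          f"~{flagged:.0f} students falsely accused")
```

Run it and the "pretty accurate" framing inverts: a 1% rate still falsely accuses about 5 of every 500 honest students, and 6% accuses about 30, which is the trust cost Laura describes.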

Laura Dumin [00:22:09]:
So Okay.

Jordan Wilson [00:22:10]:
But yeah, that was good. I was hoping we didn't accidentally, you know, go 30 minutes on that one. So with that in mind, Laura, because not everyone is gonna have, you know, your very calculated, multi-tiered, well-thought-out approach to implementing AI in the classroom. So just as a whole, how can other university instructors, you know, prepare students? Right? Because that's the other thing I talk about all the time. Gen AI skills are the most in-demand skills out there right now, but so many universities aren't doing anything about it. So how do universities find that balance, especially when it is also new, so fresh?

Laura Dumin [00:22:51]:
Yeah. Again, I think instructors have to be out there experimenting with the AI in their field and knowing what can and can't happen. I think it's helpful if they are able to talk to actual practitioners in their field. So what's going on in nursing right now with AI? What's going on in dietetics? What's going on in history? Right? And so we're out there talking to people and saying, okay, what skills do our students need? How can we implement them? And just taking that time. And I know, like, I keep coming back to that, but we really have to take the time to figure out what our students are gonna do with it. And if it can solve math problems, and, you know, we're looking at the possibility of AI being able to reason. You know, people are questioning right now if it can reason.

Laura Dumin [00:23:45]:
And, you know, with GPT-5, people are saying, yeah, that next leap is gonna be reasoning. So what skills do our students need to thrive in this space? And then we can just focus on those. So, you know, I teach technical writing, and I want my students to be able to show that they still have value when AI can write most things. Maybe not well, but it can write them. And, you know, if we can go out to Grammarly and have it edit, you know, what's the point of an editor? If you've ever even used Word and you've gone, what is that suggestion, and why does it think that this verb should be that? Right? You know that these programs don't always get things right. So helping students to understand where their value lies, and helping students to learn how to learn about the AI programs. So, yeah, I'm gonna talk about, you know, four major large language models.

Laura Dumin [00:24:34]:
But I want students to know how to go out and find other programs and learn how to use them as it benefits them in their work, so that they can again show value. So that's kind of my approach: instructors need to figure out what creates value for their students.

Jordan Wilson [00:24:49]:
Yeah, it's great advice. So, a two-part question here. We'll tackle it one at a time. So, Monica is asking: are there specific AI courses right now being taught at your university?

Laura Dumin [00:25:02]:
Sadly, no. Not really. Not that I know of. Let me put it that way. Somebody might have one on the books that I don't know about. I did try to offer an AI in writing course for the spring, and unfortunately, it did not make. So what I am doing is I'm just putting AI literacy into my courses. I've got other colleagues who are putting AI literacy into their courses, but, no, not that I know of.

Jordan Wilson [00:25:25]:
And then the second part of her question is asking, what are some of the more unique pros and cons for AI in the college setting that maybe we haven't heard?

Laura Dumin [00:25:35]:
That's an interesting question. Let me think on that. I think some of the pros are that we get to be out there playing and learning at the same time as our students, which I think also might be a con. Right? Because we have to get really comfortable with that gray space of not knowing everything. And I think as instructors, we're used to walking into the classroom and being like, boom, I know this material, and now I'm going to transfer it to you. But with the AI, I think it's one of those challenges where you walk into the classroom maybe and you say, okay, hey...

Laura Dumin [00:26:09]:
What have you all used it for this week? What's Snapchat up to this week? Anybody using it for LinkedIn? You know, that kind of thing. And so the pro is that we get to be on that journey with our students; the con is that we don't necessarily know that much more than they do about this topic. So I think that's gonna be the one I'm gonna sit with.

Jordan Wilson [00:26:27]:
No, that's good. So my thought on maybe how this could be done or should be done, and I'd love to get your feedback, is I think universities should have dedicated and required courses in generative AI. You know, my thought is, in the future, it's gonna be just as important as, you know, math or science or English or, you know, business. Right? That's my thought: there should be required courses in different areas of generative AI, and then, hopefully, those learnings should be carried, you know, as you go into your more specialized colleges and departments. Is that a good approach? Is that feasible? Or is that just, you know, not really how you think it might shake out?

Laura Dumin [00:27:09]:
Sadly, I'm gonna disagree with you here. So for us...

Jordan Wilson [00:27:13]:
This is good. We need this on the show. It's always too much agreeing.

Laura Dumin [00:27:17]:
The thing about the curriculum cycle for us, and I think we may have a longer one than a lot of people do, but it's about a one-and-a-half to two-year process. So we get classes that we start putting together summer or fall of year one. Then spring of year one, they go to committees. And then fall of year two, they go out to the academic affairs curriculum committee. And so then summer of year two, they're finally on the books. So it's a two-year process. Wow. And I think right now what we are dealing with is we are dealing with the students who didn't learn how to use AI in high school, because K through 12 was banning it at first, and now you've got people who are on board.

Laura Dumin [00:27:59]:
But there's the concern about, well, you know, people under 13 shouldn't be using them, and how are we gonna get this into the standard curriculum in high school? So right now, we've got students who don't necessarily know how to use it ethically. My guess is that that's going to shift in 3 to 5 years as high schools get on board with this and have a curriculum that they can use to implement good AI usage. So I think in about 3 to 5 years, our students are gonna come to us with better ethical understandings of AI. And I think what that means is that any courses that we get on the books, a, they would be slow, and b, because the technology is changing, you know, every day, every week, it's really hard to have a curriculum that's going to move forward that way. And, you know, in a lot of universities, you can't just have a special topics course that's open to anything. You have to have, like, a somewhat normative curriculum that's gonna carry from each time you teach it to the next.

Laura Dumin [00:29:00]:
So I think that's a huge challenge as well. And I think that's also unfortunate, because it means that instructors are left to try and figure out what fits into their field, and students may not get some of that in-depth ethical training that they need. So I don't know what the answer is, aside from maybe trying to get into early first-year courses and have some sort of dedicated learning model that can change every semester, but we go into those courses and maybe we have a day on AI where we talk to students.

Jordan Wilson [00:29:34]:
Oh, yeah.

Laura Dumin [00:29:34]:
But I don't know what the other solutions might be at this point.

Jordan Wilson [00:29:37]:
Yeah, that makes sense. But this also brings me to, you know, a point here that Alfonso was making about slow adoption, people always being in that "this is the way we've always done it" mode. Does higher education, even the way that it operates, the way it works, the way it approves courses, does it need to change because things are just moving faster than they have been before?

Laura Dumin [00:30:01]:
There's a lot of conversation around that, and I think it would be great if we had a faster way to get things going. And to an extent, we do have some things we can do. So microcredentials we can get out there really quickly for students and faculty and staff. So, you know, maybe 6 weeks to 3 months to get something together, which is pretty quick for academic purposes. I think things like that, workshops where people can attend but they don't have to, if we can give students some kind of credit, you know, maybe a badge of some kind. You know, I love badges. I'll go to all sorts of workshops for badges. You know, that works. So with things like that, we have to get a little bit creative, doing things outside of the box and not doing things the way that we've always done them, because things are shifting so fast now.

Jordan Wilson [00:30:57]:
So, you know, there's so many good questions, it's hard to stop here, Laura. So, you know, literacy, you talked about it. Right? Like, it starts obviously before, you know, higher education. It starts before college. A great question from Cecilia here asking: what are your thoughts on the best ways to use AI in elementary and secondary education?

Laura Dumin [00:31:19]:
Okay. So I have an 8-year-old and an 11-year-old.

Jordan Wilson [00:31:21]:
There we go.

Laura Dumin [00:31:22]:
And my 8-year-old is not really interested in AI yet, and that's great. That's fine. I think some of the learning programs that learn with your kid can be really helpful as long as they are walled gardens. They're closed off. They've been trained on the datasets, and we know they're giving the right answers, which is not what happens with large language models right now. With my older son, he has a coding class that he goes to, and sometimes he has questions that are something that the teacher can't help him with. So I've actually taught him how to get into Bing, because he can use Bing for free, and he can use it at his coding class on the computer. So he can pull up Bing, and he can ask it his coding questions as he's going through.

Laura Dumin [00:32:06]:
And I think that's a really good use of it, because the program that they're using had an AI assistant built in, but you only had so many tokens, and he ran out of tokens. So I think things like that can be really helpful. You know, I'm wary of things like your AI best friend that we've seen in some of the social media apps, because we've seen that those don't always have really good guardrails, and they might be talking to children about things that are grown-up topics and that children certainly don't need to be discussing in the way that these AI programs are discussing. So I think the biggest thing is that if tech companies want to help educators, they need to work with educators. And then, again, they do need to be those walled gardens, closed systems trained on the data specific for that topic or specific for that grade level, so that we know that whatever information is coming back is valid.

Jordan Wilson [00:33:04]:
So, Laura, we talked about so much here. We've talked about, you know, ethical and responsible use of AI, literacy, content detectors. You walked us through your own process, which I think is fantastic. But maybe, you know, as we wrap up here, because I'm sure there's a lot of educators who are gonna be listening: what's your one takeaway? What's the one piece of advice that you'd give people going forward in order to bring more AI literacy into the classroom?

Laura Dumin [00:33:33]:
Oh, man. Only one. Okay. I think that my advice is that AI is here whether we like it or not, and our students are going to be using it whether we like it or not. And that means that if we want our students to use it ethically and effectively, we have to spend some time with these programs. We have to spend some time with our assignment sheets. And, you know, even if that means taking time over winter break or over summer break to just really sit down with it and think about it, we have to figure out where it works for us. And I think my biggest takeaway from that is that every instructor in every course is going to have slightly different needs and values from the AI, and it's great for us to be having conversations with each other, but we also have to dig deep and figure out what works for us and go forward from there.

Jordan Wilson [00:34:26]:
Alright. Class is closed, y'all. We just got a great 30-minute education from Laura. Laura, thank you so much for coming on the show and sharing your insights and expertise with us. We really appreciate it.

Laura Dumin [00:34:40]:
Thank you.

Jordan Wilson [00:34:40]:
And hey, as a reminder, we went over a lot, so make sure to go to youreverydayai.com. We're gonna be recapping everything that we talked about with Laura and a lot more, sharing some other resources. So: youreverydayai.com. Check it out. Sign up for the newsletter, and come back. See you tomorrow and every day with more Everyday AI. Thanks!
