Ep 168: AI in Higher Education is Broken. How to Fix It.


Embracing Responsible AI in Higher Education

In today's podcast episode of "Everyday AI," a thought-provoking discussion unfolded around the challenges and opportunities of integrating artificial intelligence (AI) into higher education. The conversation shed light on the evolving landscape of AI usage in academic settings and its potential to reshape learning experiences for students and educators alike. As the business world continues to intersect with academia, it is crucial for decision-makers and business leaders to understand the implications of responsible AI implementation in higher education.

Fostering Trust and Transparency

One of the key takeaways from the podcast episode was the emphasis on fostering trust and transparency in the integration of AI within higher education. It is paramount for colleges and universities to prioritize building and maintaining public trust, especially amidst rising concerns about tuition costs and the need for job preparation. Responsible AI practices, clear policies, and meaningful engagement with students can serve as foundations for establishing trust within academic communities.

Empowering Students in AI Governance

The discussion highlighted the importance of involving students in the development of AI policies and practices within higher education. By empowering students to contribute to AI-related committees and conferences, educational institutions can foster a culture of shared responsibility and inclusion. Engaging students in conversations about AI usage and support can provide valuable insights and feedback, ultimately shaping a more collaborative and student-centric approach to AI education.

Addressing the AI Skills Gap

The podcast delved into the evolving nature of AI skills and their relevance in preparing students for the job market. As AI continues to reshape industries, colleges and universities must address the teaching and use of generative AI skills to ensure that students remain competitive and adaptable in a rapidly changing professional landscape. Recognizing and bridging the AI skills gap can equip students with the necessary competencies to thrive in diverse career paths.

Nurturing Responsible AI Experimentation

Embracing responsible AI experimentation within academic settings emerged as a crucial theme. Encouraging professors to integrate AI into their teaching methodologies and involving students in the process can pave the way for innovative approaches to learning. Creating spaces for AI experimentation and collaboration can ignite a culture of continuous improvement and adaptation, benefiting both educators and students.

Propelling Innovation and Collaboration

The insights from the podcast underscored the significance of propelling innovation and collaboration in the realm of AI and higher education. As businesses and educational institutions converge, there exists a remarkable opportunity for fostering collaborative partnerships centered on responsible AI integration. Embracing evolving technologies, inspiring a community of practice, and breaking free from traditional academic silos can stimulate a culture of innovation and enriched educational experiences.

Looking to the Future

The podcast conversation concluded with a call to pressure educational institutions to embrace change and advocate for the responsible use of AI in education. The future of AI in higher education holds promise for personalized learning experiences, empowered students, and a reimagined approach to teaching and learning. As the landscape of AI in education continues to evolve, it is imperative for decision-makers to actively engage in conversations and initiatives aimed at propelling responsible AI integration within higher education.

In conclusion, the podcast episode provided valuable insights into the transformative potential of responsible AI in higher education. By fostering trust, empowering students, addressing the AI skills gap, nurturing responsible AI experimentation, and propelling collaboration, educational institutions can pave the way for enriched learning experiences and innovative academic landscapes. As the business community engages with academia, embracing responsible AI in higher education is not only a necessity but a pathway to shaping a future where technology and education intersect seamlessly.


Topics Covered in This Episode

1. The Changing Landscape of Higher Education and AI
2. Challenges and Opportunities in Integrating AI in Education
3. Ethical Use of AI in Education
4. Utilizing AI for Teaching and Learning Enhancement
5. Collaborative Approach and Future Vision for AI in Education

Podcast Transcript

Jordan Wilson [00:00:18]:

I have a lot of hot takes today, especially when it comes to how AI is being used in higher education. Because if you've listened to the Everyday AI Show before, you kinda know that I have a feeling that so many universities and colleges, especially here in the US, are letting our students down. And I think that there's a great conversation going on about how we should, or shouldn't, incorporate AI into higher education. And luckily, it's not just me today talking about this. We're gonna be bringing on an expert who actually is doing this, and not just teaching his students how to properly use ChatGPT and generative AI, but also consulting other teachers, which I'm extremely excited for in today's conversation. And if you are new here, welcome. My name's Jordan Wilson, and this is Everyday AI. Everyday AI is a daily livestream, podcast, and free daily newsletter helping everyday people learn and leverage generative AI.

Jordan Wilson [00:01:23]:

So, normally, we do this super live. Today, hey, not everyone can always fit into that 7:30 AM Central Standard Time slot. You know, sometimes there are people who are teachers and maybe are busy at that time. So this is technically prerecorded. Don't worry. It's still debuting to you live, and we're gonna be in the comments answering questions as well. So with that, no further ado.

Jordan Wilson [00:01:46]:

If you do want the AI news, it's still there for you. It's still there. It's still fresh. So go to youreverydayai.com and sign up for that free newsletter. We'll be recapping this conversation as well as sharing the freshest news and the freshest finds from across the Internet. But let's talk about AI in higher education: what's working, what's not, and how we can fix it.

About Jason and his role at Berkeley College

Jordan Wilson [00:02:08]:

So, again, I'm excited for today's guest. If you're on LinkedIn at all, and if you are, you know, reading or following anything in AI, you probably see a lot from our guest today. So let's go ahead and bring him onto the show. Jason Gulya, not only an English professor, but also an AI consultant for higher education. Jason, thank you so much for joining us.

Jason Gulya [00:02:29]:

Thank you so much, Jordan. It's such a pleasure to actually meet you and get to see you, even if just virtually, and get to have this conversation, because I think that, you know, there's a lot we have to talk about with how higher ed is using AI or not using AI. Oh, yeah. Hey, and if you've listened to

Jordan Wilson [00:02:46]:

the show before, you know this is one of my favorite subjects to talk about, because I have a lot of hot takes. But that's why I also bring on someone who knows what he's doing, someone that's doing it correctly. So, Jason, can you give everyone just a brief overview of what you're doing both, like, as an actual professor, and then what you're doing on the, you know, AI consultant side as well?

Jason Gulya [00:03:05]:

Yeah. So my everyday job is I'm an English professor, so I teach basically anything related to writing, whatever that is, and I'll put that in giant scare quotes because I think that is changing. How we understand writing is changing, especially in the age of AI. And so I teach anything related to writing and the humanities. So I teach film, and I teach basically anything related to the liberal arts and what it means, however you use this phrase, to be human, right, whatever that is in this new world. And so that's my day job. And, you know, my side gig for the last year, basically since right after ChatGPT came out, is to consult with colleges and universities and also help students. I try to walk students, whether they're my students or they're at another college or institution, through how to use AI, how to use it responsibly, how to use it to empower our voices, all that sort of stuff.

Jason Gulya [00:03:57]:

And one of the things that I've learned is that these 2 positions, the side gig that I have and the everyday full-time gig I have, are very much bleeding into each other. I can't keep them separate in any way. And I've actually found that very empowering, that I learn a lot by going in, giving a keynote, or doing a training on AI at another college, another university, and then bringing that into my own college and into my own classroom, and vice versa. Because I think that a lot of us, especially in higher ed, even if we've been convinced for a while, as I've been, that AI is here to stay and we should use it in productive ways, I think that we're still, and I am still, in this experimental phase, where I'm still just learning from things out there and trying to play with things and experiment, seeing what works and what doesn't work, and just iterating it. And being a consultant has really, I think, helped me be a better professor, and that's what I do.

Attitudes toward LLMs in Higher Ed

Jordan Wilson [00:04:50]:

I love that. So, Jason, I'm curious. You know? So you mentioned you've been on the consulting side doing this for just over a year. You know, we're on, you know, month 13 and a half or something like that of just ChatGPT being, you know, out in the wild, so to speak. Speaking specifically on the higher education side, right, not just your own role, how have you seen the attitudes toward ChatGPT or, you know, large language models, specifically their use in higher education, how have you seen those change at all over the past year, or have they not changed?

Jason Gulya [00:05:26]:

I think they are changing, but much more slowly than I would have liked, and I actually think there is a weird trajectory that a lot of colleges and universities went through. When ChatGPT came out in November of last year, I think a lot of us started in that space of worry. And I'll be totally honest. The first time I used ChatGPT, I came across it right after it came out. I played with it probably for about an hour, and I turned to my wife, who's on the other side of the table, and I said to her, the most awful thing just happened. I found out how everyone is going to cheat.

Jason Gulya [00:06:01]:

That was my immediate knee-jerk reaction. And then I put it aside probably for about 2 days. I went back to it, and I changed my perspective. Because what I did is I forced myself to not think of it as a professor, but to think of it as a learner. And for me, that fundamentally changed how I approach this technology. And so that's where I started from, and I try to tell a lot of professors that that's where I started from, and this is how I now understand this technology. And one of the things that astounds me every time I do a training is I encounter basically past versions of myself. Because what I think happened was that after ChatGPT came out, a lot of us who were following it

Jason Gulya [00:06:45]:

became very, very worried. And then about 2 months in, I think there was all this optimism, that professors were playing with it, colleges were playing with it, and so I became optimistic because I thought, oh, maybe this is going to be the push to change. And then it stopped. I think we went into the summer, and many professors and colleges hit the pause button, and I don't think we quite got back on that trajectory, and that's sort of my worry, because I think that we sort of plateaued. And there's a lot of evidence to suggest this, that, you know, last year in the spring, a lot of professors were sort of on top of it, at least playing with it and thinking about it. And now, if you read around, students are way more advanced than professors and faculty members and administrators in this technology. I think that we just have to get caught up. I think we have to get back on that trajectory as we're playing with things. Because I think that things are changing, but they're changing very, very slowly. I'm trying to push things along as much as I can.

Jordan Wilson [00:07:47]:

Why do you think the shift happened? Why was there this initial kind of optimism and excitement, and then why did that kind of, you know, even itself out? Why do you think that happened?

Jason Gulya [00:07:59]:

I think the newness and the hype, at least in higher ed, wore off a little bit. I think that as long as there was a sense of novelty, professors really were saying, you know, oh, maybe I can reimagine my syllabus now, because now I have time to do it if I'm able to offload certain things to AI. And so I think there was this birth of optimism, but then, as that newness went away, we went off and we all did our own thing, our own separate thing, in the summer, and then we came back. I think that newness was lost. And then, I'll put it this way, I think just everyday life got in the way. AI can make us efficient. You can save a ton of time.

Jason Gulya [00:08:41]:

I probably save, I'd say, at least 5 to 10 hours a week because of how I use AI and how it's worked into my workflow, and I can repurpose that, but I had to put work into figuring that out. And so now it becomes a hard sell if the technology doesn't feel new anymore, and then you suddenly go to a professor and say, you know, you can save 10 hours a week, but you're gonna have to put 10 hours into it, right, to learn something. Right? Some skill, whether we call it, you know, prompt engineering or whatever language we wanna give it, anything that describes interacting with an AI. That becomes a harder sell. Now the semesters are just kind of picking up, and we all just wanna go back. There's this temptation, we wanna go back to normal, and that's something that really, really worries me. And I think that it was a combination of that desire to get back to normal and that kind of hard sell saying, you know, you need to put 10 hours or even more into learning how to engage with these systems. I think that those 3 things, at least, explain a lot of what I think has happened.

Why universities use AI or ban it

Jordan Wilson [00:09:44]:

And, you know, to catch everyone up, and I wanna get your view on this as well, Jason, but it seems like there is a great divide. Right? It doesn't seem like there's a lot of gray area when it comes to higher education's stance on generative AI tools like ChatGPT. It seems like universities are either going all in, and they're trying to teach it and incorporate it into their curriculum, or they are just banning it. What would you say, in your experience, you know, consulting universities? What's maybe the reason why universities are, you know, being open to it, and what's the reason universities are, you know, banning it or just pushing it off?

Jason Gulya [00:10:27]:

I think the big difference is whether the college believes that they can ban it. That's the big divide. So we have, on the one hand, colleges, universities, and professors who believe in AI detection programs and other methods that will allow them to keep AI out of the classroom, and they really, really believe it. I do not. I do not buy into AI detection programs. I don't think they work. I think they're super easy to fool, and also there are ethical concerns I have with using them even if they were accurate. So I think that some colleges think that, and then others, and I think LSU is now working AI into 20 of their courses and stuff like that.

Jason Gulya [00:11:14]:

Others have started to recognize that that may not be practical. Right? It's not a long-term strategy. Even if you think that AI detection programs work, I think it's hard to say that they're gonna work a year from now or 5 years from now, certainly 10 years from now. They're going to break apart, at least in some way. And I think that's the big divide. And one of the things that I try to emphasize whenever I work with a college, or certainly an administrator, is that keeping AI out of the classroom, at least for me, is not possible. You can't do it, especially as it gets worked into more and more programs.

Jason Gulya [00:11:51]:

I mean, right now, I am working on my desktop at home. I have Microsoft, and I literally have Copilot built into the system that can do certain things. And that's gonna become more and more the case, that we don't actually have to go on to a computer and log in to ChatGPT anymore. It will just come along with us as we have more and more copilots to choose from, regardless of whatever system we're using. And as that happens, I think that we need to think more practically. And so I do think that that trust in AI detection will start to fade away. Right? At least I hope it will. But at least for me, that's the big divide between colleges that are innovating and trying to figure out long-term strategies for not just coexisting with AI, but actually using it and excelling with it.

Jason Gulya [00:12:41]:

You know, that's one kind of college, and others that are trying to ban it. And, yeah, I imagine that will change as we go along, but who really knows how colleges will respond or won't respond?

Jordan Wilson [00:12:53]:

I'm gonna do something I don't normally do here, Jason: I'm gonna go on a very hard 60-second rant. And I'm gonna put myself on the clock here because I think this is important to know. And maybe, you know, Jason doesn't wanna ruffle any feathers, but I'm fine ruffling feathers. Ready? So here we go. Colleges out there, AI detection tools are fake. They don't work. They are a marketing ploy from companies who want to make money.

Jordan Wilson [00:13:21]:

OpenAI themselves, right? I even think, initially, they had their own detection program. I think, if I'm being honest, they put that out there to give people the ease to use it, but then, obviously, you know, 6 months ago or so, they shut it down and said, hey, it's not actually accurate. It was only accurate 26% of the time. Right? Which is actually a much higher accuracy than I would give these AI detection tools. So, right there, just the fact that OpenAI essentially said, no, they don't work.

Jordan Wilson [00:13:56]:

They had their own tool. They shut it down. They said it's only 26% accurate. We've tried here at Everyday AI, so shout out to our producer, Brandon, who did a lot of work on this. We've busted every single one of them. They are fake. They do not work, period. I will just put that out there.

Jordan Wilson [00:14:13]:

And if you have one of those companies and you disagree with me, I would love to have you on the show. It probably won't be good for your company, because I will show you very easily. I've been getting paid to write for 20 years. They're fake. They don't work. These companies essentially kind of pay colleges to use them. Sorry. That's all.

Using AI in education responsibly

Jordan Wilson [00:14:32]:

I will get off my steamy rant now, Jason, and ask you actual questions. But, so how can we, you know, regardless of what happens in the AI detection scene, how can we still be responsible, right, about AI usage and ChatGPT usage? You know, how can you find that balance of still encouraging students to learn, to read on their own, to write on their own? How can you do it when you do have these tools that are so powerful and so useful for doing those things?

Jason Gulya [00:15:06]:

Yeah. And first, I do wanna very quickly do my own mini rant. I agree with you. I am on board with this. I'm never going to get my Turnitin endorsement, or whatever detection they're using. I don't use them. They're out of my courses. I have phased them all out. I don't click on them.

Jason Gulya [00:15:23]:

I don't look at those percentages, and I tell my students that I do not. And I tell my students why I do not. And that's going to be my transition to answering your questions. So for me, the key to responsibly using AI is transparency. I don't think you can teach it without having that. And I have, kind of, what I've been told is a radical version of it. So one of the things that I truly, truly believe: if we want to move the needle in our courses in terms of getting students to use AI responsibly, step number 1 needs to be that we have to use it responsibly, and that means if we have an AI policy, and I create my AI policy with my students.

Jason Gulya [00:16:08]:

We actually create it together. I ask them what they think about it. They're able to have their voice heard. I talk as well, and we work together to create that AI policy. And when we have that AI policy, whatever it is, I have to live up to it too. So if we decide, and a lot of my students want this, if we decide that if a student uses ChatGPT for some part of the paper, right, that they announce it, that they have, you know, something at the end saying what prompt they used, everything like that. If I do something, I have to do that.

Jason Gulya [00:16:42]:

So if I use it for a course material, which I do, I let them know: this is what I used, this is how I used it. And so I think that we have to all play by the same rules. Right? If I'm asking my students to do something, I have to do the same thing, and that for me has to be step number 1, because a lot of what we're dealing with, a lot of what colleges and professors are dealing with, is the need to rebuild trust. And there is a long history of this, and I'll be totally honest. I'm okay with ruffling feathers too. Many colleges have lost the trust of the public, and there are a lot of reasons for it.

Jason Gulya [00:17:21]:

And a lot of them are colleges' fault. Right? Tuition has gotten astoundingly expensive. Right? So when I went to college, I think my tuition per year was $8,000. I graduated with my degree. I, you know, didn't get any help. I paid for it entirely myself.

Jason Gulya [00:17:46]:

I graduated with $7,000 in debt. That's it. Enough that when I was a grad student, I paid it off. That is unheard of now. And, you know, rising tuition costs, and, for all other sorts of reasons, a lot of which have to do with job preparation or the lack of job preparation, mean that colleges have lost trust. And so I think that on a very micro level, finding a way to rebuild that trust is key. Right? And that means being transparent. Because if you're not transparent, you're not gonna be able to encourage responsible AI use.

Jason Gulya [00:18:20]:

I just don't think you can. If you are, say, for example, and this is happening in some classrooms, banning or supposedly banning AI, and then you go write a lesson plan with AI, that doesn't sit well with me. There needs to be a great deal of consistency if we actually want to rebuild that trust from the bottom up. And so that has to be step number 1, and then also just making it clear that the classes that we teach are connected to what students want to do in the world. For me, it comes down to an ethical responsibility. I have an ethical responsibility to prepare my students for the workforce. It's not the only thing I do, but if I'm not doing that, I'm failing at my job, and I try to make it clear how I'm doing that using AI and getting my students to use it in certain ways.

Jason Gulya [00:19:11]:

And that just means using it responsibly and knowing really what AI literacy is. Right? That's what a lot of it comes down to, so that a student doesn't just know how to use blank tool, insert whatever tool is helpful for them, but has actually thought about what's happening in the background, how that tool is working, how they can improve it, and all of those sorts of questions that come down to basically just an awareness of what they're doing when they use AI. And so for me, it's about building trust, that's the only way to encourage responsible use of AI, and, again, linking the classroom to the workforce, really, more than anything else.

Jordan Wilson [00:19:50]:

You know, Jason, something very important you said there is trust and transparency, because I think especially when it comes to AI, it has to be there. That's foundational. It's not a nice-to-have. Right? Here I go again burning down bridges with any potential sponsor, but I think you look at the way, as an example, you know, Google, you know, kind of, like, quote, unquote, released Gemini to the world with very misleading information. Right? They showed a live video. It wasn't actually live. Even how they built it, you know, they built it in, I would say, almost a shady way.

Jordan Wilson [00:20:25]:

You know, so when you talk about trust in the classroom, I think it's important, and I'll point this out because, obviously, there's gonna be a lot of educators and students listening to this episode. I think what you said there is so important because it's two-way trust. Right? And it's establishing those rules with your students and saying, hey, I'm going to, you know, adhere to the same rules that you are, in creating them together. Well, I'm gonna come up with one more opinion here and ask how colleges should address this. Because my opinion is, if colleges and universities aren't already teaching and encouraging the use of generative AI, those students are going to leave the university at an extreme disadvantage.

Advice for universities not wanting to use AI

Jordan Wilson [00:21:13]:

The job market, more so than at any time in our lifetime, is demanding generative AI skills, almost across all roles now. So what would your advice be to maybe a university administrator or a decision maker at a college that still thinks, you know, hey, generative AI isn't for us; we don't know how to police it, we don't know how to control it, we don't know how to, you know, still give our students a high enough level of education? What would your piece of advice be, which I know is, like, the billion-dollar question? But how do they fix this? If they don't think it's for them, are students screwed? Is there a way to fix it?

Jason Gulya [00:21:56]:

I actually have 2 levels of advice. So the first level of advice is on the micro level, so individual professors, the individuals that students are interacting with far more frequently than any other level of the college. So to them: experiment and play. Try something out and tell your students that you're trying it out. When I was a grad student, or even a young professor when I first got my role, I was very worried about experimenting with my students. I thought that my students were gonna feel cheated, that, oh, he's just kind of, you know, doing everything off the cuff and not really thinking about things. But exactly the opposite has happened.

Jason Gulya [00:22:33]:

If I start experimenting with something, and I found this with AI too, if I'm experimenting with ChatGPT or Gamma or any other AI program, and I tell my students, that fundamentally changes the learning culture. So on a micro level, get professors who are experimenting, and hopefully they've been doing that already. Right? Maybe for some of us, I've been experimenting with this technology for a year, you just learn so much by doing that and rerunning experiments. And then on a macro level, so for those colleges and universities: find the professors experimenting and bring them into the conversation. One of the things that's happening on a big level at colleges and universities is that we have all these knee-jerk reactions, all these assumptions, all these biases in terms of how AI is being used. And that's another thing I wanna focus on. Don't just think about how professors are using it.

Jason Gulya [00:23:27]:

Think about how students are using it. I really think that if you are at a college or working at a college right now as an administrator, you should have, if you can find them, a student on everything. If there is an AI committee, have a student on it. If you are organizing a conference on AI in education, have students on it. I think they should be all over the place. And if you ask a student to be on, like, a committee or in a conference, they just, like, light up. Right? So excited.

Jason Gulya [00:23:59]:

They've never been asked, like, oh, what do you think about AI? And how are you using AI, and how can I support you? And I actually start every course there: I ask my students how they're using this technology, because there's such a huge range, and colleges are not unique in this way at all, that some of us are very familiar with it, and some of us, whether a student or professor, are just getting started with it, just finding that space of experimenting, and then finding a way to showcase them. So one of the examples I give when I go and talk to faculty members, or students too, is I talk about alternatives to essays. So I teach English and writing, and so for a long time, I've taught argumentation. I believe it's a huge skill going forward. Just being able to look, especially now, at the amount of information out there, being able to synthesize and come up with, like, a core idea that you believe, right, it's going to be a big skill going forward, and we need practice in doing that.

Jason Gulya [00:24:59]:

So a traditional way of doing that is to create an essay So you create a thesis, link everything to it, and I now have moved away from that in a lot of my courses. And so instead, I give my students a chat GPT prompt, and it's like a 2 page long mega prompt. And I can actually work it into a chatbot, but I want them to see the Right? Most of them will not read it, but they pop it into chat gpt, and what it does is it forces chat gpt To be a contrarian, and I'll give it very specific instructions that their goal is to basically look at the argument and start to poke holes in it. Right. Finding assumptions, figuring out where they are, and I give it a lot of examples for what that can look like. And I have my students run that. And then at the end, I have them click that button that allows them to share the link with me. And I look at that instead of an essay, and I'll be totally honest.

Jason Gulya [00:25:54]:

I learn 10 times more from that, because I see not just how they can create an argument, but how they can back it up, how they can respond when there's something that doesn't have feelings, that will just put pressure on you. Just seeing how they respond to it, and having those sorts of examples, I think, is gonna be the way that we get AI, and the responsible use of AI, into colleges, into the classroom, and into students' hands. Because I think that you're right: until we do that, we are not preparing our students for the future, because after they graduate, regardless of what they're going into, they're going to need to be proficient in AI and really learn how to learn AI, learn new programs, and all this sort of stuff. And I think it's our ethical responsibility to do that as colleges and as professors.

Jordan Wilson [00:26:43]:

Hey, professors, deans, administrators, whoever you are: Jason just gave you one very simple way to not only increase transparency and trust, but also, I think, create a much better learning environment for your students. Right? Instead of, hey, we're working on this, here's the worksheet, you give them the prompt, and then they share what they actually did inside it. I love that.

Jordan Wilson [00:27:12]:

And, Jason, one thing that reflects even my own experience of how I'm learning at a much deeper level with generative AI: I liken it to when I was a kid. If I wanted to learn something, I had to go to the library and find a book, or at home we might have some encyclopedias, and that was really it. Even three years ago, look at how much learning has changed. There's obviously online learning, e-learning. You have interactive modules, podcasts, YouTube videos, PDFs, so many different formats or media for learning. But what you just said changes learning even a little bit more.

Jordan Wilson [00:27:54]:

So can we talk about how much using gen AI tools like ChatGPT enriches the learning experience for students compared to traditional methods, which are generally reading, assignments, tests?

How GenAI enriches learning

Jason Gulya [00:28:12]:

One of the things that I emphasize for colleges and universities is that AI gives us the ability, hopefully, if we use it correctly and responsibly, to actually stick to best practices in teaching that we've known for a while but have not been following. We have known for decades that, for the vast majority of learners, lecturing doesn't work. We've known for decades that certain kinds of learning actually ignite a passion for learning. And there's been a divide for a long time between what learning science tells us and what colleges and universities have been doing. Right? If you go into a room and say personalized learning works, no one will argue with you. We've known that for a long time. So now the power of something like AI is that we can actually do it, because I don't think the obstacle was ever that a professor didn't recognize that personalized learning works. It's that you can do it with 10 students, but maybe not 20 or 30 or 100 or 500, depending on where you're teaching.

Jason Gulya [00:29:23]:

And so now you actually can, to a reasonable extent. You can personalize the learning experience, and that is so powerful. Start small with something as simple as a chatbot. It doesn't have to be a chatbot, but that's one way you can have learning basically feel more like a game, so students can enter and learn regardless of their skill level. Right? When you go into a game, you might be able to pick beginner, or whatever the levels are in that particular game. In many ways, learning should be like that, and it hasn't been. And we know it hasn't been. So I try to emphasize that, in many ways, the way that we learn hasn't quite changed. What has changed is our ability to deliver a learning experience that actually sticks to learning science, in a way that makes it easier and makes it scalable. And part of that is that it takes away the fear a little bit.
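The game-style "pick your level" idea can be prototyped with nothing more than a small prompt table. A minimal sketch, where the level names and prompt wording are invented for illustration (no specific chatbot product or prompt text is described in the episode):

```python
# Hypothetical sketch: choose a tutoring system prompt by skill level,
# mimicking a game's beginner/intermediate/advanced selector. The prompt
# text below is illustrative only.

LEVEL_PROMPTS = {
    "beginner": (
        "You are a patient tutor. Define every term, use everyday "
        "analogies, and check understanding after each step."
    ),
    "intermediate": (
        "You are a coach. Give hints before answers, and ask the "
        "student to attempt each step first."
    ),
    "advanced": (
        "You are a sparring partner. Skip the basics, challenge the "
        "student's reasoning, and introduce edge cases."
    ),
}

def tutor_prompt(level: str) -> str:
    """Return the system prompt for a level, defaulting to beginner."""
    return LEVEL_PROMPTS.get(level.lower(), LEVEL_PROMPTS["beginner"])

print(tutor_prompt("Advanced"))
```

The same pattern extends naturally to the multi-version idea raised later in the conversation: a short intake assessment picks which prompt variant a given student starts from.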

Jason Gulya [00:30:25]:

But, yeah, it's a tool. Right? It's a tool that we can use to actually make learning better.

Future of Higher Ed and AI

Jordan Wilson [00:30:30]:

Yeah. It's crazy, because I think that L&D, learning and development companies, or departments within larger companies, have already started to do this. To your point, Jason, with personalized learning, I go back to your example of this long prompt that you give to students that kicks off the learning journey. You know, hey, start off the semester with a quick online assessment of how students learn, and then maybe instead of that one prompt, there are three, four, five different versions of it based on a student's own preferences or how they learn best, which I think is another great, easy way to scale a more personalized learning experience. But one thing I want to ask you, Jason, is, looking at how much things have changed. Right? We started the show by talking about how everything has changed so much in one year. Looking a year into the future, I know no one has a crystal ball, no one can accurately predict it. But normally at this time of year, with 2023 wrapping up, we talk about 2024.

Jordan Wilson [00:31:37]:

What's this gonna look like in higher education a year from now?

Jason Gulya [00:31:41]:

Alright. So can I do different versions? Yes. Please do. My dream future is that we come to our senses, we move away from AI detection, and we rethink the learning experiences we can give to our students; that we actually have individuals who are, and have been, experimenting with AI; that we work with them, we work with students, and we create this community of practice where we're actually honoring these experiments. That's my dream scenario. My nightmare scenario is that these programs get more and more powerful, and we see some colleges starting to dig their heels in, and that worries me. Because I do think there is a way we can shape how this technology is used, but it needs to happen now. I think we are very much at that critical point, if we want to encourage students to, say, use AI to encourage critical thought, analytical thinking, and connective thinking.

Jason Gulya [00:32:42]:

Right? Everything we know works and actually helps us not just learn, but be passionate about what we're doing. If we can do that in the next couple of years, I think it's gonna make such a huge difference. So one of the things I'm trying to do is make that dream scenario happen, because that's what I want to see: we're actually using AI in a way that is productive, good, and responsible. But there's always this nightmare scenario that I also see playing out. And the weird thing about colleges, and I guess it's not weird, because companies go through the same thing, is that they're also drastically different. And we know we're about to hit, in the next couple of years, what's called the enrollment cliff. We've known for a while that it was gonna hit around 2026.

Jason Gulya [00:33:30]:

It might have moved a little bit earlier, depending on who you read. But colleges are all so different, and so I think there's gonna be this weird jostling for power in the next couple of years. I'm hoping that dream scenario is the one that plays out, and I'm trying to make that happen.

Jason's final takeaway

Jordan Wilson [00:33:45]:

There you go. Hey, everyone listening to this: if you work in higher education, I hope you have a pen, pencil, word processor, typewriter, computer, something for taking notes, because Jason's literally giving you the blueprint. And, FYI, I agree, because I think all the universities and colleges that for the last year have been encouraging and implementing AI are going to see their job placement rates, which universities care so much about, especially specialized colleges within the university, continue to go up or stay steady. Whereas the schools that are putting hard bans on AI are gonna see it both in their job placement rates and in the reputations they've worked decades to build, which are gonna start to suffer. But hey, as we wrap this up, Jason, because I could literally talk about this for hours, and we've covered so much here: what is your best piece of practical and actionable advice, both for teachers and for students, on ethically using ChatGPT in the classroom?

Jordan Wilson [00:35:01]:

Right? The how-we-fix-it. What's your one best piece of advice for teachers and for students?

Jason Gulya [00:35:07]:

So if you are on the professor and college side: ask students. Bring them into the conversation. I do think that step number one of moving forward is going to be getting as many people into the conversation as you possibly can. If you are at an academic institution, there is a tradition of living in your own silo. If you live in your own silo, I don't think you're gonna make it. I don't think you're going to be able to survive. Find ways to get out of it. Ask students about what they're doing.

Jason Gulya [00:35:41]:

Right? If you're a professor, ask students. If you're a student, ask a professor. Go out there. Reach over to that other person, regardless of who they are, and ask them if they're using AI and how they're using AI. Because one of the things I think is happening is that we're staying in our silos. We're staying in our little cardboard boxes, and we're just not talking about this. And I think that is a huge problem. I've actually learned a lot from just talking to my students, from having honest conversations about how we're using it for email or writing or media, whatever we're doing with it, or now with things like videos and images.

Jason Gulya [00:36:16]:

I use it all the time. So I would say: just talk. Find out how you can have as many conversations as possible with people in different groups, with different jobs. I think that's going to help us out a lot, because a lot of colleges are not listening to students, and that's the big problem, especially because that is colleges' whole thing. Our thing is to serve our students, and if we are not listening to them, we cannot serve them.

Jordan Wilson [00:36:45]:

So good. My mic is unfortunately on a stand; otherwise, I would literally drop it right now. Jason Gulya, thank you so much for coming on the Everyday AI Show. You fixed AI in higher education for all of us, so thank you. We appreciate your time.

Jason Gulya [00:37:02]:

I don't know about that, but I appreciate it. And thank you for the rant, actually. Keep putting pressure on institutions that refuse to change.

Jordan Wilson [00:37:12]:

Oh, I will. I will. Don't you worry. And, hey, everyone, thank you for joining us. And there is still the daily AI news, don't worry. So maybe you weren't able to get every single tidbit, every single golden nugget, that Jason just dropped.

Jordan Wilson [00:37:28]:

We recap every single podcast episode, so go to youreverydayai.com. Sign up for that free daily newsletter, and get your daily AI news and everything else there as well. Thank you for tuning in today, and we hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI