AI in the Classroom: Navigating the Future of Learning with Dr. Marc Wolpoff
57 min
• Sep 9, 2025
Summary
This episode explores AI's rapid adoption in college classrooms, where 86% of students use AI in their studies and 56% have used it on assignments. Dr. Marc Wolpoff discusses the tension between leveraging AI as a learning tool and protecting critical thinking skills, proposing a framework of shared responsibility between educators, students, and employers to teach ethical AI use rather than ban it outright.
Insights
- AI adoption in education is moving faster than institutional policy can address, creating inconsistent standards across disciplines and institutions that confuse students about appropriate use
- Students recognize the problem themselves—over 50% worry AI overreliance harms their development—but face cognitive offloading temptation when overwhelmed, suggesting education should focus on judgment rather than prohibition
- The real workforce risk isn't students using AI, but graduating workers who can't think critically without it, making process-based learning (how to think) more valuable than outcome-based assessment
- AI creep (invisible AI in everyday tools) is harder to address than intentional misuse, requiring transparency and interrogation of when and why AI is used rather than detection-based policing
- Corporate involvement in AI literacy education is essential but must be balanced through multi-stakeholder consortiums to prevent vendor lock-in while preparing students for real-world tool use
Trends
- Shift from punishment-based academic integrity enforcement to curiosity-driven learning frameworks that treat AI misuse as teaching moments
- Growing recognition that AI literacy should start in elementary/middle school, not just college, to build foundational judgment before tools become ubiquitous
- Emergence of process-versus-outcome educational philosophy as the key differentiator in AI-era learning design
- Corporate-education partnerships moving beyond CSR into co-curriculum development and advisory boards to align academic preparation with workforce needs
- AI detection tools and plagiarism systems becoming unreliable, shifting focus from detection to transparent use policies and student self-auditing
- Neuroscience research showing critical thinking and brain development suffer when students outsource cognitive work to AI from the start
- Decoupling of tool availability from tool appropriateness, recognizing that just because AI exists doesn't mean it should be used in every context
- Rise of AI-integrated disciplines (media arts, design) versus AI-incompatible disciplines (botany, medical diagnosis) requiring discipline-specific policies
- Student demand for clarity on honor codes and AI use, indicating generational confusion about academic integrity in the AI era
- Emphasis on developing a unique personal value proposition and 'human stamp' on work as a competitive advantage in an AI-augmented workforce
Topics
- AI in Higher Education Policy and Implementation
- Academic Integrity and Honor Codes in the AI Era
- Critical Thinking Development vs. AI Cognitive Offloading
- AI Literacy and Ethical Tool Use Training
- Discipline-Specific AI Integration Frameworks
- Corporate-Education Partnerships for Workforce Preparation
- AI Creep and Invisible Automation in Daily Tools
- Process-Based Learning vs. Outcome-Based Assessment
- Neuroscience of AI Use and Brain Development
- Student Autonomy and Judgment in Tool Selection
- Plagiarism Detection and AI Detection Tool Limitations
- Transparency and Interrogation of AI Use
- Access Equity and AI Tool Availability
- Faculty Consensus and Institutional Inconsistency
- Real-World Application of AI Skills in Professional Settings
Companies
OpenAI
ChatGPT discussed as primary AI tool students use for assignments, study aids, and cognitive offloading in academic s...
Anthropic
Claude mentioned as alternative AI system students have access to, part of competitive AI landscape in education
Microsoft
Co-Pilot referenced as AI tool available to students alongside ChatGPT and Claude for academic work
Google
Google Translate cited as example of imperfect AI translation; Google search compared to AI prompt use by students
Grammarly
AI grammar tool mentioned as example of AI creep where students don't realize AI is rewriting their authentic voice
Apple
Apple Watch text suggestions discussed as example of AI creep in consumer devices that reduces thinking
Riverside Community College
Institution where Dr. Marc Wolpoff has taught for 15 years, observing AI adoption patterns among students
UC Riverside
University where Dr. Marc Wolpoff earned his PhD in developmental psychology
MIT
Recent research cited showing students using large language models struggle to answer follow-up questions on their ow...
Digital Education Council
Research organization cited for data showing 66% of bachelor's, master's, and doctoral students use ChatGPT regularly
People
Dr. Marc Wolpoff
Expert guest discussing AI adoption in classrooms, student behavior, and framework for ethical AI use in education
Kedira
Co-host framing AI in education as workforce preparation issue and advocating for corporate-education partnerships
Melissa
Co-host discussing customer experience lens, corporate responsibility, and need for transparent AI frameworks
Erin
Co-host sharing personal experience with daughter in college, honor codes, and practical examples of AI use in classr...
Quotes
"Going to college is about learning how to think, not just what to think. And students are missing out on that opportunity if they're using particularly artificial intelligence to circumvent that thinking process."
Dr. Marc Wolpoff
"It's like they're having this real time internal debate about whether they're getting smarter, or just getting better at prompting an AI to do the thinking for them."
Kedira
"We're handing this generation a powerful tool, a power tool, to be specific. And the question isn't if they should use it, but it's how they should use it."
Erin
"Why would you call an Uber for the last two, three miles, right? Like we live in a world of Ubers. Yes, we don't want Ubers to go away. There's a time and a place if I'm late to get to the airport, I'm calling an Uber. Absolutely. But doing a final exam or writing a final paper for college is not the time."
Dr. Marc Wolpoff
"If you don't know when's the right time to call the Uber, how are you going to make that decision when you're in the workforce?"
Dr. Marc Wolpoff
Full Transcript
Welcome to We Fixed It. You're welcome. The show where we take over companies, you come along for the ride, and we try to put them back better than we found them. Back to school season is in full swing, so we're going back to college. Nearly 20 million students in the US are starting college this fall. They've got their books, their student loans, their hopes and dreams, and they've also got something else: AI. Now us fearless fixers, we didn't graduate college in caveman times, but we certainly didn't have all the benefits or the complexities of AI in the classroom. The college students of today are immersed in AI. It's tempting. It's a click away. Here's an answer. Here's a solution. Just copy, paste me. It'll be all right. What's troubling, aside from maybe not actually learning anything in college, is that this generation is the workforce of the future. So the question is, should students be plugged into AI 24-7? Does that actually prepare them for the realities of the working world as we see it today? Or should AI be banished from schools so students can think for themselves? Is there a happy medium? That's what we're here to fix. And to do that, we'll need someone who sees firsthand how students are using AI, understands the pressures of those trying to teach those students, and maybe has some ideas about what to do here. Let's turn it up by a few degrees. Say hello to Dr. Marc Wolpoff, Professor of Psychology at Riverside Community College. Hi, Marc. Hey, nice to be here. Good to see you all. Give us a little bit about your background, tell us a little bit about yourself. Yeah, so I'm a professor of psychology. As you said, I teach at Riverside City College. I've taught previously at four-year universities, public school, private school, religious, secular. I've taught at other community colleges. Been at Riverside for about 15 years. My background is actually developmental psychology.
I went to school at UC Riverside, got a PhD in developmental psychology there. And I like to study higher level thinking like planning and time management and reasoning, those types of problem solving skills. Thanks, Marc. You sound like the right person to be here with us on this one. Thanks. Let's get into what we're here to talk about. So for everyone listening who thinks AI adoption is moving too fast in the business world, just wait till you hear what's happening on college campuses. We're talking about a generation that's basically growing up with ChatGPT as their study buddy. And the numbers were honestly pretty wild. Recent surveys show that an overwhelming 86% of students now use AI in their studies. And here's the kicker, 56% of college students have specifically used AI on assignments or exams. So this isn't just, hey, can you help me brainstorm anymore? Students are going all in, using AI on everything from writing first drafts to turning in finished work with their name on it. And they're doing it with the same amount of effort as a Google search. Instead of typing into Google, I mean, we know how it works, they're throwing it into a chat system of their choice, and some of them, not all, are saying, here's my finished work. But let's be fair, not all students are plugging prompts into AI like that and calling it a day. There's actually a lot more nuance here than you might expect. According to research by the Digital Education Council, 66% of students in bachelor's, master's and doctoral programs are using ChatGPT regularly. But get this, over 50% of students believe too much reliance on AI could negatively impact their academic development. They get the problem. They're using AI, and they're also worried about using AI.
It's like they're having this real time internal debate about whether they're getting smarter, or just getting better at prompting an AI to do the thinking for them, which maybe is a good skill to have, too, and one that should be taught at the college level. We don't know. But what's really going on? What do we see here? Start us off, Marc. Yeah, it's pretty bad in terms of the accessibility for these tools to be integrated by the student. You made it sound like they even have to copy and paste. Sometimes they could even use screen readers now or just voice dictation. They could be in a meeting or an interview or oral exam where, let's say, the questioner is asking them questions. The AI will actually produce suggestions on their screen in real time so they don't even have to necessarily contemplate what they're even hearing. So it's not even a copy paste situation anymore. AI is moving so quickly that the tools are developing to the point where the students who are intent on using them will find ways to do that that are undetectable. But as you mentioned, not everybody is doing that. And I think that's an important part of the process, saying, this is a really important tool and it's not going to go away. And so how do we understand maybe how to use it, when to use it, when it's appropriate, and then understand how it's circumventing our opportunity to learn how to think during the training process. And to me, you know, going to college is about learning how to think, not just what to think. And students are missing out on that opportunity if they're using particularly artificial intelligence to circumvent that thinking process.
Well, let me ask you, it's a broad generalization, but from what you're seeing, do students come in relying on AI and then realize this is on them, they've got to step up and do the thinking themselves? Or is it the other way around, where they come in full of aspirations and a full course load, fully prepared to keep up, and then halfway through, maybe not even halfway through, part way through, they realize AI is going to do this for me. What do you say? Yeah, a little bit of both. I'm going to divide AI use into two categories. Okay, so the first category I call AI creep, which is where it's built into our technologies that we're accustomed to using. And so for example, when you start to text somebody, and it makes a suggestion for your next word or finishes that thought, or for example, if I check my texts on my Apple Watch, and it gives me five choices of how to reply to a text. So I don't have to think about what should I say, I just think which one is the best response out of these choices. I call that AI creep, because we don't even notice how it's intruding on our ability to think, it's just simplifying our process in terms of the communication in those cases. And then the other category is called cognitive offloading, which is where you're purposefully using AI to help alleviate some of that mental fatigue that you're experiencing by having to make many decisions or maybe just solve complicated problems. And so that's where the students come in, usually with pretty good intentions of I'm going to do this work and I'm going to nail it, and then life happens and things get busy. And now it's a choice, should I finish my homework tonight or not? And if I have to finish my homework tonight, I might need a little bit of help. And at that point, it's one click away. And I don't envy them at all knowing that all their solutions are literally one click away.
But that's the challenge. And then they start doing this cognitive offloading, where they're not really thinking about their way to solve the problem. They're thinking of how do I use the tool to solve the problem. And so that's where we start to see the accumulation of the effect of the AI, where now they're just thinking about the tool use as opposed to actually developing the skills. I think, Marc, you've really set the stage for us, because this is a really fascinating topic. And it's really important for us to speak about it. From my seat, from a customer experience lens, I also don't see this only as an academic debate, meaning the things that are going on on campuses right now and through schools, but as a looming reality for every business that hires students or hires this generation. And I love what you say, Marc, it's not all doom and gloom. There's a lot of potential here, but it's like harvesting and honing in on that potential. So the way I kind of see it is we're handing this generation a powerful tool, a power tool, to be specific. And the question isn't if they should use it, but it's how they should use it. So if you use the analogy of a power tool and you put it in the hands of a trained carpenter, they can build a masterpiece. But if you put it in the hands of someone who's never held a hammer, doesn't really know what it is, it's dangerous and it can be destructive and there can be consequences. So I think it's the job of educators like you, Marc, and employers like all of us to be those kinds of trainers and guides, to teach them that craft alongside the tool. And I think you've brought up all the different reasons why today, in the environment of academia, it's very difficult to put your arms around it, because you're trying to teach them an actual subject, right? And then you have to think about all of these kinds of things. I personally have a daughter who's a senior in college right now.
And one of the things that they're facing, and I would love to hear, Marc, your opinion on this, is the honor codes versus AI, right? So each of these kids signs up, and when they go into college, the honor code is kind of beaten into their head, right? So this really ties directly into that kind of giant elephant in the lecture hall. Like, how do you address the honor code? I mean, you know, they've got the clickers now for quizzes. And there was an incident at her college campus where people brought their roommates' clickers so their roommates could sleep in, and they had a huge thing where they failed all those people that weren't actually at the class. So plagiarism detectors are there in different forms all the time. And they're kind of like the digital police. But I kind of feel like it's a losing battle, because it's almost like you're trying to put the genie back in the bottle. Like, it's not going to happen. And that conflict is really exhausting and trying on both the faculty, who are trying to do a job by teaching this generation of students, and the students, who are feeling like they're failures because, you know, these tools are so readily available. And I love what you said about AI creep, because you don't even realize it's there, right? And so I think it's about redefining this. And the old model was, you know, no cheating, no plagiarism, everything has to be an original idea. I think the new model has to be more about use your best judgment, use curation, use unique critical thinking. And how do we flip that script? And so how do we make it more transparent? How do we have faculty being very honest in their conversations with the students, and I'm sure they are, instead of saying, don't use it, but here's how we'll use it in class. And let me show you why what you're doing right now is not getting you the results you want, how to use it as a personal audit as well, right?
So again, trying to kind of push those critical thinking aspects of it, Marc, that you mentioned. But again, I just wonder how they're going to be able to come together on this honor code that's been in institutions forever versus, you know, this whole thing around plagiarism, around using AI. I mean, I don't want to date us, but, you know, going back to when you got the trig calculator, and then they say, you can't use the trig calculator on your final. It's like, what? Where in real life would you not have the calculator, right? Like, if you were an accountant or doing any of these kinds of things. And so I kind of wonder if we're kind of in that space again, but this is a much more powerful, I wouldn't actually say it's a more powerful tool. It's a different tool. Yeah. Well, you say real life, Kedira, and we're seeing both of these things in the corporate world too, right? We're seeing the creep, where all of a sudden all the applications that you use are starting to prompt you and give you quick cut and paste responses or one click responses. And you're seeing the cognitive offloading, right? Where you're just tired, you've made a lot of decisions, you have to write an HR memo and you don't want to write it, you want to write a couple words and have it do the rest. And companies have been caught pretty blatantly with AI tags on it, or they forget to remove the footprints or fingerprints. Not always to that degree, but what we're seeing on campus, we're seeing it play out in the professional world too, right? You know, when you see people with phones with cracked screens and you think, whoops, they weren't careful, well, that's something you can see on the outside. But what you can't see is how careful they're being with their online data.
Because whenever someone goes online without ExpressVPN, it could mean trouble, like passwords and logins all out in the open. If a screen cracks, you can fix it. But once your personal data is out there, it's out there. You can protect your own data with ExpressVPN and feel great about it. ExpressVPN creates a secure encrypted tunnel between your device and the internet. You can use it on your phone, tablet, and laptop at its lowest price ever, with plans starting at around 12 cents a day. It matters to me that your data is protected. I love fixing problems, and this one's easy to solve. And it's rated number one by top tech reviewers like CNET and The Verge. Secure your online data today by visiting expressvpn.com slash fixed. That's expressvpn.com slash fixed to find out how you can get up to four extra months. Expressvpn.com slash fixed. Absolutely. I think there's almost this opportunity for corporations to partner with schools to help with some of that ethics training and coaching and education, so that students aren't having to unlearn once they get on the job. Or even to your point, Erin, as professionals, we're already seeing some of that stuff happening, kind of those slap your hand things, but can you imagine a world where companies are like, we're going to take a little bit of shared responsibility and get involved with students sooner? So it kind of has my wheels spinning from a social responsibility perspective, that companies have to jump in and say, okay, look, from a school or education perspective, the role there is to really provide that foundation, to provide the foundation around critical thinking, communication, lifelong learning habits, even teaching those core principles. But then the company is there to provide that kind of real world application. So moving from theory to practice, we hear kind of that terminology a lot.
And I think it will be really interesting to see the companies that have for the last several years focused their CSR in the technology space, how they are going to expand to tackle AI. That would be my advice for the company leaders that are out there listening, to really start thinking about what is this shared responsibility, how can you start to lean your strategies toward helping to prepare this next generation of your workforce. Because ready or not, they're going to be graduating. And again, this is an opportunity to help prepare them and not have to carry that burden of unlearning once they onboard, versus you can do some of that preemptively. So absolutely. Yeah. Yeah. Well, and the challenge too, I hear everything you're saying. Companies are not big on self policing and restraint. So if there's a market opportunity and a way to capitalize, they have LTV, or a lifetime value of a customer. Presuming these AI companies are going to be around for a long time, so ChatGPT and Claude and Anthropic and Copilot, what have you, they have an incentive to get students hooked as early as possible on their flavor of AI. And companies by and large, I'm not saying these are good or bad companies, by and large are not good at holding back and saying, well, let's let our customer mature. Let's step back for a while. They want to, Marc, I think they would want to be in the classroom as quick as possible. So I want to just draw one distinction, because in the corporate world, it's a little bit more results oriented, and where I come from, it's a lot more process based. So you come to school, you need to learn the content to show mastery and then get your degree so that you can enter that workforce. I'm trying to teach the process of the critical thinking, as well as, as you guys were mentioning, the worker needs to use the tools in a way that they can demonstrate the critical thinking.
But if they haven't developed that along the way, it's going to be a big challenge for them to do that on the job. And it's going to be a much deeper learning curve. I use the analogy of my class is kind of like a decathlon where I'm going to create a whole bunch of events for you. And as the semester gets on, the events might get a little more challenging. And the last event might be, let's just call it a 10k where you're going to have to run farther than you've ever run. But it's a community college, just finish every event and you're going to be fine, right? You're going to show you've done everything I've asked you to do. And you start that 10k and you're two miles in, three miles in, it's getting harder and it's getting harder. Why would you call an Uber for the last two, three miles, right? Like we live in a world of Ubers. Yes, we don't want Ubers to go away. There's a time and a place if I'm late to get to the airport, I'm calling an Uber. Absolutely. But doing a final exam or writing a final paper for college is not the time to say, I don't know how to finish my last two miles, I'm going to call an Uber. And so I don't, to me, it's a learning process. They're not publishing these papers. They're not trying to get grants or anything like that. There is absolutely no outcome aside from their grade that they're going to get from showing that they demonstrated this process. And then they can move on and then they can show the mastery at the next level. Whereas once they get the job, they may care less about how they get the results that Uber might work just fine some of the time and they may not care. But if you don't know when's the right time to call the Uber, how are you going to make that decision when you're in the workforce? I love how you've been framing it in these real life types of scenarios where I think people can see and I think that's the thing that I feel that college campuses need to do. 
I mean, I think, you know, when a student comes onto campus, they have to go through some orientation type courses, and most of them are digital, online, and you have the alcohol awareness, all these kinds of things. I think it would be great for them to do an AI class, right? Like how to utilize this tool, how you can use it, and have it be very curated for the student and the situation. You know, you've got somebody coming from a high school setting going into college, they're already going to go crazy, because it's the first taste of independence they've ever had, and especially in this generation with all the helicopter parents, they've never been away. They've had everything taken care of for them. So I think it would be an amazing opportunity for, you know, the faculty to come together and say, this is how we would like our students to be able to utilize AI, and how we would like to incorporate that into their overall, you know, journey as a student, their four years, two years, however long it's going to take them, and, you know, kind of give them the building blocks to understand that we realize that this is important, but it's more important that you understand the best way to utilize it. So how to write the correct kind of prompts. You know, if I'm asking you the themes of Macbeth, that's the assignment, don't just type that in, right? You need to actually generate an essay that has, you know, pieces to it that we can critique. We can find the flaws, we can find the voice and tone that don't sound like you, right? Where it lacks depth, originality, et cetera, et cetera. How do you tie it and connect it to you as a student, right? And, you know, if you could show the kids examples, I think that they would then be able to take that and run with it.
And then, Kedira, you know, and Erin, that would kind of graduate into, you know, the workplace, where you say, okay, I know how to be accountable and responsible and how to utilize these tools so that it is helpful in the overall, you know, world. But like, as I know in customer experience, for sure, if somebody calls into the call center complaining that they can't connect to their internet, and I just type in, connect to the internet, right? Like, I'm going to get three things, and I'm not listening for the tone of the customer, why they're mad, why, you know, maybe they're tech savvy and it's more than just those three points, right? And now you've really pissed them off. So I think that there's just ways in which we could create this, you know, beginning part of the journey that's so important, but align it with the point, like Marc said, the point of them coming to college: it's just a think tank. They're going to be learning and curious, and they're learning so much. And one of the things that I love about educators is that they're not stifling, they're growing, right? You're growing a person. It's like, oh my gosh, you're opening up these worlds to these students. And I just feel like, you know, by people saying, we're not going to use AI at all, you're not being realistic. And I mean, we're seeing that in companies too, right? You know, there are companies that have policies against it, right? But, you know, copywriters, you know, it's the way things are going, right? You know, repetitive tasks, all those kinds of things you're looking at, that's what the future looks like. But I wonder, Marc, you know, when you're in a faculty meeting or committee meeting, what's the commentary? Is it just frustration? What are you guys thinking is the best path forward? Marc.
So that's a great question. And unfortunately, there's no consensus among faculty, because different subjects and different disciplines require different use of AI. So for example, media arts, they can't exist without AI. They absolutely integrate from day one, because the students have far more opportunity to develop really creative projects through the use of AI. Whereas imagine a botany class where you have to identify plants, and now your app just does it for you, you have to memorize nothing. There's no value in the AI there, because it completely destroys the thinking process. So it's sort of the wild west, right? We're left on our own. Administration typically is more friendly towards it than faculty, because they see it as a tool that students can use to do better. And faculty want to make sure we put the guard rails up so that when they come to us, at least we can explain what they're doing and why they're doing it, and how to use it properly. But no, there's no consensus. I attended a statewide artificial intelligence meeting this past spring. And it was really interesting, because you have the folks in there who are 100% on board, let's get this as quick as we can, as much as we can. And you had folks like me that were a little more on the reluctant side saying, let's think about the consequences before we dive in, because yes, it's a tool that's available to us, but we have lots of tools we don't use for every problem that we need to solve. And so just because it exists doesn't mean we need to implement it in every possible corner. And I don't know where we're headed if the technology keeps going as fast as it's going. Right now, it only functions for me, for my students, as a pretty low level starting point. But I show them, by the way, I do a project in my class on AI, which is really cool. And they really get to dig in deep and see how it affects people. It's a developmental psychology class.
So I have them look at different age groups and how AI affects people of different ages. And then they try to project where those cohorts are going to end up in a few years. And they really like that, because they're like, this is the first time I had to stop and think about what's helping us and what's not helping us. Because they have to look at their nine year old brother or, you know, 30 year old cousin or somebody who's different from them, saying, wow, they can use this to help them. But most of the way they're using it is just to keep them entertained or busy or, you know, take them away from doing productive things. But in terms of implementing in the classroom, seeing their ability to know when it's the appropriate time to use it is really the most important thing that we can do. So I have them do that project so that they can say, okay, I understand why, let's say, copying and pasting exam questions doesn't help me. Right. And just like in the workforce, if they are using AI to answer their questions, what value do they add above and beyond the fact that they click the button, which, by the way, in a few years, we won't even need a human to click the button, the button will click itself probably. And so the biggest thing I work with them on is to almost think about themselves as, you know, independent companies, where they have to sell themselves as a small business. Here's the value I add to this. And a lot of my students are bilingual. So I encourage them to, for example, study sign language, both American Sign Language and Spanish sign language, so they can be interpreters. A lot of them go into nursing or medical facilities, so they can work with multiple communities and do things where maybe they have a value that they can add, because AI is never going to be as good at translation as humans will. Because, you know, if you try Google Translate, you'll know what I'm talking about.
It's rough around the edges. But so, knowing what value they add, what makes them human. I really ask them that question: what makes you human? If I can get all the information I want in one click, what do you add to that? And how do you use the higher-level thinking that you're trying to develop now to take all this information that's available to you and come out with a better outcome than just the lower-level things that the large language models can produce? You know, my husband and my brother are both professors as well. And they have said the same thing, too. And they've also said that kids are not smart about it; they're not really asking the right prompts in AI. They are just copying and pasting. They're not even thinking about that. And I know some professors have gotten smarter, and they've actually put text in white font, so you can't see it, in the prompt for their essays or whatever. And then when the kids cut and paste it, it'll go into ChatGPT, and it'll be, like I said, a prompt about Macbeth, and then all of a sudden there'll be a whole thing about the apple from the Disney Snow White or something, right? And then they just cut and paste the whole essay and turn it in, right? And my husband's like, you know, they're not that smart. Unfortunately, our kids are not that smart. So I love that idea that you have to balance when it is the right type of thing. And so it leads me back to the question of helping them understand how to ask the right questions. I know there are tools out there like Prep AI that generate quizzes and things like that to help the students, right? It's not actually answering everything. I mean, of course, it gives them the answers later, but it helps them in their study process, right?
And so, you know, in the old days we created flashcards, right? So now the students go in, put in their lecture notes, and then go through Prep AI to create quizzes and things like that, right? But again, I think that there are so many different tools for learning. You know, Marc, I'm sure you see this in your classroom. When you were talking about the decathlon, I think that's perfect: there's going to be somebody who's great at the javelin, somebody who's really good at the long jump, somebody who's good at running, and they're all in your class for the decathlon, right? And so you have to figure out ways in which you adjust. So just like that, that's the beauty of having a powerful tool like AI, which can help a student who may be struggling with just a portion of it. But, you know, to replace their thinking? I think we're getting to a place where, really, from a people-and-culture perspective, this is definitely a turning point for us, because this is not something that most of the professors who are teaching ever had to deal with, except in a classroom setting from this side, not from the other side, right? You know, coming into it. Going online without ExpressVPN is like driving without a seatbelt. You might be careful, but if something risky happens, wouldn't you want to feel more secure? Well, every time you connect to public Wi-Fi, it's like you're not wearing a seatbelt, because your data is vulnerable and valuable. Your logins and credit cards, people want them. Learning how to steal your data is easy, but guess what? So is protecting it. ExpressVPN creates a secure encrypted tunnel between your device and the internet. Whether you're on a phone, a laptop, or a tablet, you can rest easy wherever you go. And when I say easy, I mean easy. You open the app, click a button, and that's it.
Look, hackers gotta hack, but it's important to me that you don't fall victim to them. This one's obvious. If you could protect your data anywhere you go for about 12 cents a day, why wouldn't you? So buckle up and secure your online data today by visiting expressvpn.com slash fixed. That's expressvpn.com slash fixed to find out how you can get up to four extra months. Expressvpn.com slash fixed. Yeah. Well, Kadira, you remember college? Let's take it back to notes, right? So some professors would say, no notes ever in my class. And some professors, when it came to the exam, would say one index card, front and back, and everyone would write as tiny as they possibly could. Some would say open book, open notes. So we could do these preps, and we could carry it over to the corporate world, too. You can get trainings about how to navigate AI and hear the policies, and you sign something. And then someone says, I don't care about that, do what you want. And from one professor to the next, or one employer to the next, it becomes tricky to navigate. So does it just get tuned out at a certain point? Yeah, I mean, just listening to the conversation, I think there's some opportunity. The way I would describe it is, there's opportunity where there's appetite. Because, Marc, you make a really good point that different colleges, different groups, even within a university, have different appetites and interests to use AI. But I'm like, okay, you know, I think companies, again, have this opportunity to kind of plant seeds and do some of that. You know, here's what's going to be important when you graduate. Here's what's going to be important when you get ready to practice. You're obviously practicing at the university and college level. But once you get a job, from an AI perspective, here's really what we're looking for. Here's what's going to be important. Here's what being on the right path looks like.
And so I wonder if there's an opportunity, and Melissa, you kind of teased this out as well, for companies to help with some of that co-course creation, co-curriculum creation and development, participating on advisory boards to kind of help steer some of that in orientation. I think you made that point, which I think would be beautiful. You know, Marc, even to your point, there can't be this blanket approach. But I think that there is opportunity, again, based on the type of field, based on the type of degree, based on the type of college, to help set the expectations and the understanding for the importance and relevance and appetite for AI in college, and for companies to help develop those skill sets and expectations among students. Again, I can't stress enough: having that earlier, versus waiting until they graduate and then trying to get them on board with what that needs to look like. But I think, you know, Melissa, you made the point about what this looks like at a college level. I was kind of laughing to myself when I think about how there's going to be this requirement to balance what this looks like in the workforce. And again, if we're not preparing students to balance that human judgment with, whether it's the fatigue or just cutting and pasting into AI, that's happening in the workforce. It's not just our students that are doing this. And so I think that we also need to be thinking about how we're going to, again, train them earlier, because that means we're looking at a culture shift not just in the college and academic world, but in the corporate world as well. Right. And I think there's a couple of things that need to happen. I mean, we really need to, and I'm all about frameworks.
But, you know, is there a framework, either again at the college level or at the corporate level, where we are really helping our students who become employees to think about being transparent when AI is used, and interrogating all of those AI processes when AI is used? Because in some ways, in some companies and, I think, in some academic settings, AI is almost like this dirty word. Everybody's using it and doing it, but we don't want to talk about it. Let's be transparent about it. Let's interrogate when we're using it. Let's actually ask the question and encourage our students and our employees to actually ask the questions when you notice that AI was used. Okay, great. Why was it used? How was it used? Is there a better way? How do we apply some of that critical thinking to make the output even better? And then I think the other thing is, look, again, we're going to be using AI, but let's reward it when we see students step up, and that in turn is going to become our employees stepping up, and say, okay, I didn't just plug a prompt in and have it spit out this output; I actually interrogated what came out from it. So I think, again, there's kind of this framework of creating the best practices of how to use it. And again, we should be starting that earlier, in college settings; I would even push high school settings. But then also rewarding when our students are actually using that critical thinking that they are learning in college settings to go one step beyond just the outputs that are coming from the tools that, again, everybody's using. It's not a secret. In some cases, folks are rewarded for using it. But again, we want to balance that with the human judgment and the critical thinking that everyone should be applying. Yeah, I love what you're saying. And I think when you think about it in practicality, I don't know, Marc, if this makes sense.
But like, you know, using it not as a punishment tool, right? Like, in college right now, like I mentioned with the honor code and plagiarism, there's a lot of punishment that's associated with it. So instead, potentially, the professor or the teacher or lecturer says, okay, everyone, before you submit your first draft, we want you to run it through the AI detection tool. And it's not to punish you, it's to learn. It's a personal audit. So as it flags sections, we want you to go back in and ask yourself why. Does this section really sound like me? Does this even make sense? Did I integrate my answers and cite the right sources? Right? Like, remember when we had to go to the library and you had to do footnotes and all those things? Kids don't know what that means today, right? So, citing your sources, where they're from, and then using that to refine your own voice and ensure the integrity of your work is authentically yours. And so that's, Kadira, kind of getting to building that muscle so that it can transfer over further down. It teaches that critical self-reflection, and the student becomes more of an active participant in evaluating their work, not just trying to evade the plagiarism system, right? You know, we've all seen it where the teacher puts it up in the lecture hall and it's like 99%, and that student's like, oh my God, right? And it kind of demystifies that entire process. And it's not that the teacher's using it to be a snitch. It's that the teacher's using it to be a mirror, to reflect the student's work, right? To say, hey, look, this doesn't feel like it's really you, right? You're a javelin thrower, so why'd you spend so much time talking about the race? Right? Yeah. So again, I think the goal is to create kind of this culture of accountability and not policing.
But I think, again, the challenge is what Marc just said, which is the diversity of the campuses, the teachers, the subject matter. You know, my daughter's an engineer. So it was very funny. In one of her first classes, the professor threw out everybody's finals and was like, I think you guys were cheating. Wow. And it's a 250-person class for aerospace engineering; everybody has to take it. They were freaking out over Christmas break, because they were like, what the heck? And I'm like, what class was this? She's like, coding. Well, hopefully they'll all get the same answer, my gosh, you know what I mean? And she's like, we weren't cheating, we were all working with our TAs in our small groups, just like she asked us to. And there's certain things, though, like coding, where, I mean, you still don't want them using it. You want them to be able to understand how to code, right? You know, I'm not getting on a rocket ship if she doesn't know how to do that. But I just feel like it's an interesting thing, because, Marc, like you were saying with visual media, in other arenas it's a different type of thing that's used with AI. So I think we have to be very careful. And because of that diversity, it's going to be really hard to come up with a cohesive fix, to be honest. This is a huge challenge for a nation. Yeah, and I've had students who tell me in other classes they're told they need to use Grammarly, because in this day and age, there's no excuse to have grammar mistakes. And then in my class, I say, I don't really care how poor your grammar is; I want to see your authentic writing. And they are not used to that. And then, with the creep coming back in, where AI just automatically changes your words, I can't tell if it's adding commas and periods, or if it's completely rewriting sentences for them.
And then the AI detectors aren't amazing. So they'll flag it either way; it'll say, oh, it detects algorithmic use right there. I can't tell if that's a small grammar fix, or if it's completing sentences and thoughts and rearranging paragraphs. And so it's a big challenge, because in one class it might be completely fine, and not only legal and acceptable, it is expected. And in another class they might get, I wouldn't say punished, but maybe talked to about what's going on with their work in this class. For me, it's not punitive. It's more of a let's-have-a-conversation, let's see what's going on. I try my best to put the expectations forward from day one. I have them go through a whole list of questions on, you know, is this permissible in my class or not. I remind them on every assessment, I remind them on every assignment. And then, inevitably, we're going to have that conversation if it happens. But, you know, let's figure this out. What happened there? Why did we take that shortcut? Or why did we think that that was an appropriate way to use a tool when the instructions were pretty clear that it wasn't appropriate for this instance? But I can see the confusion, right? Because in another class, they might be working on their homework the same night, and in that class it was perfectly fine. And so we're not going to find the solution that everybody's going to agree upon. And I think that's fine. It's, again, going back to tool use. We live in a world with lots of tools that we don't use all the time, right? So know when this really powerful tool we have is appropriate, and know how to use it. And there's just something fun I have them do: they make up a joke about two different types of psychologists who walk into a bar, and they have to just come up with a punchline. And most of them really struggle with that. And they say, I don't really think like that.
And so I just say, just do your best, have fun with it. I don't care, dad jokes, totally fine, corny jokes, totally fine, whatever it is, come up with something. And then when it's over, I run it through ChatGPT and I show them. I said, in five seconds, I got 10 jokes that were better than probably all of us could have come up with in a month. But that doesn't mean the process wasn't valuable, to try to think about what would happen if these two psychologists had to come into a room together, right? And they're like, oh, you're right, that was really neat to think about, even if the joke that came out wasn't great. You're not professional joke writers; you're community college students just learning about psychology. And so I don't expect you to go on stage and perform these jokes. I just want you to think about what it would look like, in a funny way, for these two people who have opposing viewpoints to sit down and have a beer together, right? So, to me, again, I do a lot of project-based learning, so they are integrated in the process. More so than that professor who says, I want Grammarly because I don't want to read poorly written essays, who I think is missing the point to some degree: this is where they learn to do that writing, because when they walk away, they're neither going to have the skill nor the confidence to write in their own voice if they don't develop it in college. And I have a funny story about that. It's pre-Grammarly. So my son, when he applied to college, I read his essay, right? Of course, as a good mom would. And he had a great story, which was fabulous, but he used a word that was not a real-world word. He made it up, I think, and it sounded okay, but it was not a real, in-the-dictionary word. It was kind of an adjective or something. And so I flagged it, right? I underlined it and I said, hey, that's not a real word, you should change it to, you know, something like that.
And I remember him saying, it's not your essay, you already went to college. It's my essay, and I want to put that word in there. And I just remember thinking, God, I really hope there still are people reading these essays. You know, they're getting a thousand applications, right, wherever he applied. Because I just thought, you'll be able to tell that a 17-year-old boy wrote this, and not his, you know, 50-year-old mother, right? You know what I mean? Or that he didn't hire somebody to write his essay for him. You know, now they have all these people that do that. So, I was proud of him, and he did fine. And now it's kind of funny, because he's in marketing. So I'm like, hmm, are you still using made-up words? And so I just think it's interesting, Marc, because I feel like the application, like you said, is that it doesn't have to be perfect. It has to be you. Right? That's the thing. I mean, it doesn't have to be the perfect joke. It's the process. And in that process, you're probably going to find your javelin thrower, right? You're going to find somebody and you're like, oh my gosh, this person is the quietest person, but they're the funniest person. Right? Yeah. I mean, I think that's really cool. Yeah, it's a really good point, because, look, these tools, AI, the machine, whatever, it is designed for good, right? I mean, I think the intentions, in terms of the original design, are for good. But I think the points that we're raising are around the when and the how and the what. And just this fear and concern, I think rightfully so, around overreliance. Whether, again, it's our students, and what that looks like when they enter the workforce, or those of us that are already in the workforce.
And I think, you know, just, Melissa and Marc, hearing you talk, I'm thinking about the stamp, the personal stamp that, as a student but also as a professional, you get to put on your work, right? And when you are overly relying on AI, and again, there's a place for it, but when you're overly relying on it, where is your personal stamp on your work? And I think that's, you know, the beauty of the story about your son, Melissa, and the examples of your students, Marc. Like, that's the cool stuff. That's the sexy stuff. That's what makes your work your own, right? When you, again, are applying that critical thinking, when you're questioning, when you're interrogating, when you're making the mistakes. That's where you're developing your IP. And it just makes me wonder, again, if this kind of goes unchecked, or we're not starting to address this sooner with our students, what is this going to look like for all of us? Are we all just kind of walking around as AI machines? Which we know is not what companies are hiring for, right? They're not just hiring a person in a seat to be an AI operator. It's really, how are you going to leverage the tool to innovate, to be better, to be faster, et cetera? So I think this is a conversation to have. It is a topic to watch and listen in for, because I think, among the myriad of things that we are faced with on the educational front, this will be one that is going to define this generation for sure, like how we handle it, how we respond to it. Well, let's cap it on that, and let's fix the situation. So I'm gonna pull us back together with everything that I've been hearing. So, at the educational level, at every college across the country, let's just make it unilateral.
If we set policies and help students navigate, maybe there's a pledge involved, and it's going to be situational, because one instructor to the next and one situation to the next are going to be different, but there's some kind of accountability. If there are infractions or blatant misuse of AI, recognize we're in a learning phase, so approach it with curiosity and not just discipline and punishment. Bring in companies to help shape that, companies that are involved in AI and have some kind of stake in it, and are also the ones maturing this technology. Help them shape AI literacy at the student level. I think that letting them come in with their corporate interests isn't the right way to go about it. Probably put it in some kind of consortium or some type of multi-stakeholder group to strike the right balance there. I think getting students involved in that would be a really good thing to do, so it's not just handed down top-down, you know, you have to sit through a week of AI literacy; students teaching students how to do this would be really powerful. Watch out for AI creep, and don't let it kill your own thought and turn your brain off and lull you into saying yes to the prompt. You're there to think, not just to click next. And then at the academic level, as at the corporate level, put a premium on original thought, and put a reward there, and recognize those who are not just accepting what's in front of them, but are using tools in the right way and bringing their own perspectives and their own resourcefulness to the situations. And like Marc said, even clicking a button, in short order, is not going to be a role anymore. The button can click itself. It's what can you do with the resources available to you. So, if we did that, here you go. Did we fix the situation? Melissa?
I don't think we fixed the situation, but I do think we brought up huge, important topics for discussion. So, first of all, I want to say thank you to Marc and all educators out there, because you're not just preparing your students for finals or for a grade; you're preparing them, for us, for their first day on the job. When you teach them to use AI ethically and critically, you're giving them a massive head start. You're sending us people who are more efficient, more creative, better communicators. I think one of the things, Erin, that you brought up that is really important is to understand the diversity, not just in the tools and the students, but in the subjects and in the faculty, because I think Marc brought that to the forefront. And I think that's important, that it's not just about a policy that's overarching for everybody, because it's going to be messy. It's going to be messy, and that's what the real world is like. I mean, I think, Kadira, we can all agree that the real world is not easy. It's not a playbook. It's messy. So I think that's really important to acknowledge and to recognize, and to allow that leeway and flexibility within those AI committees out there on campus, to allow for the professors to know what's best for their students in their classroom. And for businesses, I think we need to get ready. Spaces need to be ready for these new tools. Let's not fear the power tool. Let's build the workshops where we can use it for creative and amazing outcomes and spaces, and a great culture that's keeping up with the times. What about you, Kadira? You like playbooks. Did we create enough for a playbook? Did we strike the right balance here? Yeah, I think I'm cautiously optimistic. I think that we have the start of a playbook. I mean, I use the term shared responsibility, right?
You know, creating this public-private partnership between corporations and the education system, schools, universities. Honestly, I would start even earlier than college. I would start in elementary school, middle school, where some of these students have access to these types of tools whether we want them to or not. So I can't emphasize that enough. And we put a lot of responsibility and weight on corporations, but when you think about it, that's where AI is really going to be implemented at scale. And so if you want this next generation, this workforce, these folks coming out of college, to do good with these tools for your company, then you absolutely carry some responsibility and weight to make sure that they're doing it responsibly. And then the other thing, and we didn't get to go too far into this, I think we started talking about it at the beginning, but I would really emphasize, I'm thinking about the inequality of access from an AI perspective. I mean, I think AI can be an equalizer, and we've seen really good examples of that, whether it's with people with disabilities, or with different languages and things like that. But if we are not careful, there are some haves and have-nots in terms of who has rich access to the tools, paid features, all the things. And then that just snowballs: it creates a skills disparity, it creates a confidence disparity. And so I think we have to be really mindful, as we're going to fix it, to also be thinking about how we create access equality with a tool like AI. So I say all that to say, I'm cautiously optimistic. I think we have the start of a framework of a playbook. Oh, I like it. Marc, how far did we get?
We've got a long way to go, but one last word of caution: some of the research that's coming out actually shows a pretty strong negative relationship between AI use in general and critical thinking. And some really interesting studies have been done recently. One was done at MIT, and it showed that the students who used the large language models to produce their work couldn't even answer follow-up questions on their own work; basically, they were struggling to even understand what they came up with. Students who looked it up on Google did a little bit better. But the students who had to do it on their own, and they had actual brain scans happening while they were doing this work, their brains were lighting up a whole lot more, making a whole lot more connections. And following up on those students, the ones who had to generate the ideas on their own, their brains continued to stay active even when a large language model was introduced later. And so, particularly at the creativity and problem-solving levels, I would strongly encourage breaking away from having that as your first go-to move. And as all three of you mentioned, this is happening at younger levels. By the time they get to college, they're well, well trained, from their peers if not from the tools themselves, on how to use these tools, maybe not in the way I would use them, but through the apps on their phones or just the screen grabs or things like that. So I would say, as has come up in this conversation, you can't ignore it. You have to start from day one saying, this is part of the world that we live in. How are we going to deal with that? And it's okay to say, let's pretend it doesn't exist for the purposes of this class, if that's the expectation. Because, like I said, if you're a track coach and you're teaching them how to run, you're not going to say, let's just not run today and take Ubers. It's just not going to happen.
So it's okay as a writing instructor to say, we're going to write today and not use the large language model to do our writing for us. And I don't want people to feel like they're doing their students a disservice by taking away a tool just because it exists. But at the same time, you have to say, well, if this is something that in the real world people don't really do, they just use shortcuts, then it's okay to learn how to use those shortcuts, when the process matters less than the outcome. So we're closer if we start thinking in terms of process versus outcomes. I think that really helps. What do we want our students to learn, versus what has to be done? And what tools will they have when they do this in the real world? You know, we train a lot of medical students. If you went to a doctor, and the doctor just plugged your symptoms into ChatGPT, you'd say, why did I go to the doctor? I want a doctor who can listen to my symptoms and come up with ideas, and then look that up and say, is this a reasonable diagnosis? But if we train them right now to just plug it into ChatGPT, then that's going to be the way that they solve that problem. That doesn't mean that you can't use it. You know, somebody won a Nobel Prize recently by using AI to break down some really complex chemistry that we can't do as humans, because it would take our whole lives. But AI can run simulations and break it down to the point where we can solve these unbelievably complex problems. A fantastic tool to have when we know how to apply it. But yeah, I would say it's okay to not use it, and it's okay to encourage them to learn how to develop those skills on their own, first and foremost, and then start integrating it. I like that a lot. So, it's not a full fix. We won't put it away with a lid on it. This is going to keep happening, keep being a part of our daily conversations and daily lives, at the school level, at the work level.
I mean, either way, AI is spreading everywhere, for better or worse, and it's an evolving case study. We'll have to see what happens as it happens, right? Absolutely. All right. Well, thank you again to Dr. Marc Wolpoff for joining. Let's give you a test. True or false: are you also my brother? I think that one's an easy one for me. Let me quickly run it through the AI. Let's see... yes, I am. And I'm so happy to see this podcast doing great. I listen to it as well, and I'm very supportive of it. So keep up the great work. Oh, thank you. Do you have anything you'd like to share or shout out? Not in particular. Just, we stay off the grid the best we can, and keep tool use to when it's appropriate, for us here too. So thanks for having me on. All right. Thanks very much. That's going to have to do it for this back-to-school episode of We Fixed It, You're Welcome. Marc, thank you for bringing us intelligence that was anything but artificial. Likewise, Kadira and Melissa. For our audience of Fixaholics listening at home, on the job, or while studying, we appreciate you. If you're enrolled in a business course, tell your professor to make us required curriculum. If you get an A on the final, you're welcome. Until next time, class dismissed. We hope you enjoyed this episode of We Fixed It, You're Welcome. We go into every episode somewhat cold, and nothing we say should be construed as legal advice, financial advice, or anything that would get us in trouble. All trademarks, IP, and brand elements remain property of their respective owners.