VOLUME 104
ISSUE 09
The Student Movement

Humans

Technology to the Rescue: AI Use in the Classroom

Marco Sciarabba


Photo by Choong Deng Xiang on Unsplash

As the semester quickly comes to a close, projects, papers, assignments and tests flood students’ schedules. With the increase in busyness and the arrival of deadlines, students are turning to artificial intelligence (AI). More and more students and schools across the country are using AI programs to lessen their workload and increase their efficiency. Andrews University is one of the schools that offers students access to AI-based technology such as Grammarly. But are these programs ethical? What do teachers think of them? Should students be allowed to use them?

AI has been around since the mid-1900s. What started as a programmable digital computer in the 1940s has now grown into highly complex large language models (LLMs) like ChatGPT. Since its release on Nov. 30, 2022, ChatGPT has become one of the most used artificial intelligence applications in the world, hitting 200 million users in October of 2024. Since then, its usage has more than doubled, and it is expected to reach 1 billion users by the end of 2025.

With its ability to summarize information and present a seemingly infinite range of ideas, there is no doubt that students have taken full advantage of its benefits. 

“The main things that have helped me through school this year are definitely ChatGPT and when the teacher uploads the class slides,” said Edward Cervantes (freshman, business). More specifically, he has been using ChatGPT to “break down concepts and help walk [him] through problems.” 

Another student, Joseph Nordgren (freshman, computer science), has been using AI to develop study strategies for his classes. 

“I remember in history class, there was an AI program I used that analyzed PDFs and generated questions about them, which was very helpful in making sure that I remembered all the details,” he said. 

Additionally, when his teacher gave the class a study guide for an upcoming test, he would ask ChatGPT for questions similar to those on the study guide so he could get extra practice beforehand. 

While the responses above are largely positive and illustrate the benefits of AI use in the classroom, there are other uses of AI that are less commendable, like cheating, plagiarizing and letting AI do your work for you. With these and many other risks in mind, how should teachers go about handling the topic of AI in their classrooms? 

Tamara Watson, assistant professor of communication, gave a positive outlook on AI use in the classroom. “It’s a tool, so use it as a tool,” she repeatedly said. She is fully in favor of AI, so long as students use it ethically in the setting they are in. To address the problem of AI being misused to replace human thought, she typically asks her students for personal reflections on class topics, which makes it harder to use ChatGPT to respond. 

Furthermore, she has been incorporating AI into one of her social media classes by challenging her students to use it for their assignments. However, she is very clear with her students about the rules for using AI in her class, saying, “How are you using AI? It’s fine that you use it, just tell me how you're using it.” Overall, Watson is strongly in favor of AI use. Although there are many skeptics, she confidently asserts, “I don’t think we should hide from it at all.”

Another pro-AI professor is Kylene Cave, assistant professor of English. Cave gained considerable experience with AI while working in the digital humanities at Michigan State University during her doctorate. While there, she earned a graduate degree in digital humanities, a field that studies how digital tools and technology intersect with humanities research. She explored the ways digital tools can open up non-traditional avenues for that research. 

So, when AI came onto the scene, it didn’t surprise her too much.

“I didn’t find it earth-shattering,” she said, “because I’ve already been having similar conversations and thinking along the same lines through my training in digital humanities.” 

When she looked deeper into it, she saw it as a new tool. But, as with every new tool, she said, “we have to think about: what are the downsides? How can we use it ethically? What do we need to be thinking about? How can it be taught in the classroom in an ethical way?” 

In her classroom, Cave has an open AI policy. 

“I want students to see ChatGPT or any AI as a tool or a resource. Not something that substitutes them putting in the work, having original thoughts, and creating unique things, but as a supplement and a tool that helps them produce interesting and creative work,” she said. 

She made sure to stress that as long as teachers are effectively communicating their AI policy to their students, AI shouldn’t be feared or avoided.

On the flip side, there are also some professors who are against AI use in their classrooms.

Stephanie Carpenter, professor of history, highlights the risks that come with using AI. 

“I have no problem with using it like Wikipedia to find out ‘what’s the answer to this?’” she said. “It's when students use those words as their own and turn it in [where it becomes a problem].” 

More importantly, she doesn’t view AI as helpful in her area of study, the humanities, or helpful for her students in general. 

“It is a shortcut that I don’t think is beneficial to a growing, questioning college student that is trying to expand their own knowledge. It’s not helping,” she said. 

At the end of the day, Carpenter wants what’s most beneficial to her students’ growth. With that, she stresses the importance of student independence. 

“I just want students to think on their own,” Carpenter said.

Gary Wood, associate professor of political science, shares this view, but from a broader perspective. He starts by asking himself, “What, as a teacher at a university, do I aspire to for my students? What do I aspire to teach students?” His answer: “What I really want for my students is to encourage and inspire them to achieve excellence in a given field.” 

He continued, saying, “If I can use this calculator to do arithmetic, am I a knower of arithmetic? If I can implant computer chips in my brain that allow me to download a chemistry textbook, am I a knower of chemistry?” 

The problem at stake here is whether students are actually learning or not. He is concerned with the individual growth of the students, not the results they can bring.

“If you can use AI to write your paper, do you know how to write? Have I taught you how to write? And just as importantly, are you a knower of the subject of that paper? Or have you just used a shortcut (an extraneous intervention)? And, am I, as a teacher, able to allow you to actually achieve that excellence? That is my goal as a teacher.”

Excellence is the focus for Wood. He is not necessarily against AI, but he fears it will have a negative impact on the students’ personal growth towards excellence. 

“It could be very valuable,” he said. “It’s a tool. But, if it becomes how people in the future are writing their papers, [with AI], then, as far as I’m concerned, that’s a big problem.”

So, is AI bad? Not necessarily. Some say it is a tool, while others say it is a risk. If used well, AI could be one of the greatest forms of assistance to humans. If used poorly, people can become overly reliant on it, stunting future intelligence and innovation. The choice of how best to use AI, if at all, is up to each of us.


The Student Movement is the official student newspaper of Andrews University. Opinions expressed in the Student Movement are those of the authors and do not necessarily reflect the opinions of the editors, Andrews University or the Seventh-day Adventist church.