How can generative AI impact students’ learning? Four student panelists weigh in.

A Sheridan Center panel centered on the benefits and limitations of using AI to supplement learning.

Looking to the future, students ended the panel by discussing the need for stricter restrictions on AI and technology usage for younger students.

Students and faculty gathered Wednesday to discuss how generative AI can impact learning in an event hosted by the Sheridan Center for Teaching and Learning. Moderated by Mary Wright, the center’s executive director, the event included a panel featuring four students, followed by a Q&A.

The panel began with a discussion of how generative AI can supplement learning in the panelists' fields. All four panelists are writing fellows and associates at the Sheridan Center.

Prudence Ross GS, a fifth-year English PhD candidate, noted that while AI can help organize ideas and edit writing, its output needs to be double-checked and scrutinized.

When using AI, Ross said that students should ask themselves questions to ensure that any AI-generated content accurately reflects their intentions. “Is this word choice actually what I want to say? Is this organization that it’s given me for an outline really emphasizing the thing I want to emphasize?” she asked.

Ross also noted that generative AI struggles with close-reading assignments, as it regurgitates facts instead of breaking down and analyzing a text.

Angela Lian ’26 agreed with Ross, noting that tools such as Grammarly can be particularly useful for improving students’ writing. But she emphasized that the use of AI to generate entire assignments — which she said is uncommon — is detrimental to students’ skill development.

Abby Katz GS, a third-year PhD student at the School of Public Health, noted that AI is becoming "ubiquitous" within society and stressed the importance of getting familiar with its benefits and limitations. To emphasize the technology's faults, she cited a viral example in which AI failed to correctly identify how many r's are in the word "strawberry."
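
For reference, the answer the model famously missed is easy to verify with an ordinary string operation. The short Python sketch below is purely illustrative and was not part of the panel discussion:

# Illustrative check, not from the panel: count the letter "r"
# in "strawberry" with a basic string method.
word = "strawberry"
print(word.count("r"))  # prints 3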

Da-Young Kim ’25 highlighted the importance of gaining a fundamental understanding of a field before trying to use AI to aid with comprehension and problem-solving. 

Kim argued that AI lacks the critical-thinking fundamentals needed to solve mathematical proofs and write code. "It just doesn't have the capabilities to weave together such complex ideas," Kim said.

Next, the panel turned to classroom policies, with panelists arguing that professors need to make their AI usage rules clearer to students to avoid academic misconduct. Before the panel, the four students agreed not to mention their personal usage of AI to avoid any potential consequences from the University.

Katz noted that some assignments flagged as being generated by AI are actually products of the students’ original work. To combat these false positives, she suggested that professors require students to turn in drafts of their work as they complete the assignment.

To ensure that students properly cite AI-generated content within their work, Kim suggested that instructors require students to indicate how they used AI to complete the assignment, whether to generate ideas, edit text or organize their thoughts. 

Students ended the panel by discussing the need for stricter restrictions on AI and technology usage for younger students. 

“The pervasiveness of technology and AI is dissuading people from developing critical skills like social interaction and critical thinking because that stuff is hard and uncomfortable,” Kim said. “You get a lot of good things out of being in discomfort.”

