
‘Pay attention to it, ignore it or push back on it’: Brown professors discuss AI’s impact on academic integrity

New artificial intelligence chatbot ChatGPT creates waves in academic world


ChatGPT, a chatbot released Nov. 30 by artificial intelligence company OpenAI, has prompted professors at colleges across the country to change the way they teach and assign work.

Advertised as a program that users can interact with in a "conversational way," ChatGPT has earned attention for its ability to write concise and convincing essays and fix code. When prompted, the bot can write anything from a textbook intro to a love poem. Some school districts in the United States have banned ChatGPT completely.

According to five professors interviewed by The Herald, whether the bot’s expansive abilities offer a supplementary educational tool or constitute unacceptable outside help remains up for debate.

At the University, the academic code remains unchanged, according to Steven Reiss, chair of the University’s Standing Committee on the Academic Code and a professor of computer science. 


Reiss emphasized the “Basic Policy” passage in the code, which states that a student’s name on academic work assures that the work is “the result of the student’s own thoughts and study,” unless the work acknowledges external assistance.

“We do not discourage or encourage the use of (ChatGPT),” Reiss said. He added that professors’ individual policies always override the academic code, which means that professors can choose to allow the use of ChatGPT.

The University’s Harriet W. Sheridan Center for Teaching has also released a guide for professors that offers direction on dealing with new developments in AI.

With the final decision left to individual professors, some professors told The Herald that they plan to alter their syllabi to include language on ChatGPT.

Jason Harry, professor of the practice of technology and entrepreneurship, teaches ENGN 2125: “Engineering Management and Decision Making.” This year, the course has an updated syllabus that adapts to ChatGPT: “Submitting work that has been substantially created using AI technology” is considered “unacceptable behavior.”

Students “will turn to ChatGPT to rough draft an answer, and then they'll mask the fact that it was done substantially by AI,” Harry added. “So, we've disallowed it.”

Benjamin Parker, an assistant professor of English, said he experimented with feeding ChatGPT essay prompts he had previously used in his classes and found the responses underwhelming. Based on his trials, Parker concluded that ChatGPT was not capable of close reading, creatively analyzing quotations or paying attention to “patterns of significance.”

“I would convey to my students who are tempted to have computers think for them that they are only cheating themselves,” Parker said.

Even if students do submit ChatGPT’s work as their own, there are limitations to the bot’s abilities, according to Assistant Professor of Computer Science Stephen Bach.

While ChatGPT can gather information, it lacks many of the complex reasoning skills that humans possess, Bach said. Asking the AI to reason through hypothetical scenarios often causes it to “hallucinate” — a term used in natural language processing to describe AI-generated content that is incorrect, though confidently stated, he added.


“ChatGPT can be eager to please,” Bach explained. “In some cases, if you say ‘prove this statement,’ and it's not a true statement, it'll still act as if it's true and give you something that sounds plausible, but is wrong.”

According to Bach, no widespread and reliable method exists to detect ChatGPT-generated content, though OpenAI recently released an AI Text Classifier that “predicts how likely it is that a piece of text was generated by AI” — which comes with the caveat that the tool “isn’t always accurate.”

“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” an OpenAI spokesperson wrote in an email to The Herald. “We look forward to working with educators on useful solutions and (finding) other ways to help teachers and students benefit from artificial intelligence.”

Rather than focusing on detecting or banning AI use, Professor of American Studies and History Steven Lubar — who specializes in the history of technology — said he plans to experiment with allowing students to use ChatGPT for brainstorming and first drafts. 


“When to pay attention to it, ignore it or push back on it … those are the kinds of things you need to learn with any tool,” Lubar said. “My hope is to learn to teach students to use (tools like ChatGPT) wisely.”

Lubar said he modified his syllabus to tell students they are welcome to use ChatGPT, as long as they clearly explain how they used it and provide a detailed discussion of how it helped them.

“My guess is it may be more work to use ChatGPT wisely than to not use it at all,” Lubar said.

During computer science teaching assistant training camp over winter break, ChatGPT became “the elephant in the room” due to its ability to write code, according to Samantha Gundotra ’24, a CS concentrator and head teaching assistant for CSCI 0320: “Introduction to Software Engineering.”

ChatGPT “is a great tool to get an outline down or get a block of code,” Gundotra said. “It’s really nice to overcome that barrier of starting something, which can give students anxiety. I hope that professors will take that into account.” 

Harry likened the tool to technologies such as calculators or Wikipedia, citing how ChatGPT can break down confusing topics and help students who are not native English speakers. In many cases, Harry said, new technology such as ChatGPT could be “liberating to the human condition.”

“In some circumstances, it may be ultimately unshackling. In other ways, it may just be the opposite,” Harry added. “It may shackle people to be the human front-end to an AI engine.”


Maya Davis

Maya is a staff writer for The Brown Daily Herald covering science and research, metro and university news. She previously reported health news for WebMD and Medscape, and is pursuing degrees in Biology and International Affairs. 


Anisha Kumar

Anisha Kumar is a section editor covering University Hall. She is a junior from Menlo Park, California concentrating in English and Political Science who loves speed-crosswording and rewatching sitcoms.



All Content © 2024 The Brown Daily Herald, Inc.