Students discuss AI amid changing policies

New school year opens with reminder on artificial intelligence

During the first regular Upper School (US) assembly of the 2024-2025 academic year, US Dean of Students Rory Morton ’81 and US Director Jessica Keimowitz spoke to students about academic honesty and the school’s policies on the use of artificial intelligence (AI). Mr. Morton provided specific examples of scenarios in which students might decide to use AI against school rules to assist them in completing assignments. These situations can be avoided if students manage their time well and reach out to teachers for help, he said.

Mr. Morton’s comments reminded Brandon Xie ’27 to prioritize planning ahead of deadlines and communicating effectively with his teachers.

“Mr. Morton said that often, when it’s late at night and there is a project due the next day, you get desperate,” Brandon said. “The better thing to do is to ask your teacher for an extension because using AI to complete the assignment will lead you into a deeper hole.”

Students who use AI dishonestly deprive themselves of learning opportunities, Brandon said.

“Even if you don’t get caught when using AI, you hop over learning processes that you need to achieve at certain times.”

Leo Song ’27 likened the use of AI in school to taking shortcuts while cooking a meal. Students who rely on AI to complete their work will ultimately face consequences, he said.

“It’s like setting a rice cooker at 500 degrees Celsius to heat it up faster, but in the end it just gets burned.”

Hannah Rosado ’26 said she has noticed the school’s focus on academic honesty in response to the increased brainstorming capabilities and intelligence of AI chatbots.

“With ChatGPT, you can brainstorm ideas such that they aren’t truly yours. It’s become a question of, ‘Are your ideas actually your own?’ rather than merely, ‘Are the words that you’re using actually your own?’”

Hannah agreed that AI is detrimental to students’ education in the long term.

“You can plagiarize your way to a job, but once you’re there, you’ll discover you can’t use AI to brainstorm ideas. You need critical thinking skills in order to be successful.”

This year, Hannah, among other students, has noticed the implementation of new policies and guidelines to mitigate the use of AI in her classes, most notably the “One Doc” policy utilized by some English teachers, she said. One Doc writing assignments require that students complete all of their brainstorming and drafts in a single Google document without any copying and pasting.

Sensing confusion from students, US English Department Head Ariel Duddy said her department met to create clear rules regarding AI, including the One Doc policy.

“Given the ubiquity of AI and our sense that students felt confused about the English Department’s views on AI, we wanted to craft a policy and pledge that made our expectations clear. The English Department has always prohibited the use of secondary sources. We view our AI policy as an extension of that long-standing policy.”

Josh Curhan ’25 said he has observed other new policies related to academic honesty in his English class.

“Once we turned an essay in, my teacher went through our version history and looked at all of our peer edits, which I don’t think any of my teachers have ever done before.”

Josh said he believes these policies will be successful in helping teachers prevent AI use.

“If the teachers are really looking into the version history and checking it for every student, it will scare people into not using AI.”

Josh added that students are now officially prohibited from using Grammarly, a writing tool with AI-powered generative features. Increased academic honesty regulations, however, may be an indicator of deteriorating relationships between teachers and students, he said.

“Maybe we don’t deserve their trust, but they’re doing all these things to watch us and survey us. They do not seem to have any trust in us.”

A sophomore, who requested anonymity to protect her classmates, said she believes the school is increasingly focused on preventing academic dishonesty because many US students use AI without their teachers’ permission.

“Before, the school didn’t know much about AI-related academic dishonesty, and it was hard to navigate because AI was evolving rapidly. They thought honesty would work, and it doesn’t necessarily because I know a lot of people who use it.”

AI use among students is a more prevalent problem than many teachers and students think it is, she claimed.

“Last year, in my history class, about 50% used AI for outlining or for their drafts. Some of them actually got caught, but I know other people who did not get caught. The people who are using AI have always been finding a way to cheat.”

Effectively monitoring and preventing students’ AI use is difficult, she said.

“I think BB&N is doing the most it can, but it’s so easy to maneuver around even the strictest rules. It doesn’t help that the AI writing detectors online don’t work all the time.”

Students who are less interested in learning than in receiving a high grade are often the ones who use AI, she said.

“Some people work to get a good grade and some people to gain skills. The people who work only for a good grade are the ones who are using AI.”
