By Steven Zhang.
With the introduction of ChatGPT, academic dishonesty has become a common concern within the educational community, as repeated demonstrations have shown that the program is capable of replicating human work. One example comes from the findings of Dr. David Kiping at Columbia University, in which ChatGPT scored similarly to real students on a college-level astrophysics exam (Kiping). English teachers have raised a similar concern over ChatGPT’s potential to produce original writing that is indistinguishable from that of a human author. Because guarding against plagiarism is a critical component of maintaining academic integrity, the inability to distinguish between the work of a human student and that of a computer program puts many of the principles that academia is built upon at risk.
A natural opposing argument labels such accusations as too eager to cast technological assistance solely as a facilitator of plagiarism. As these critics propose, to equate the use of technological assistance with the violation of academic integrity, as if one inevitably precipitates the other, is a fallacy built upon misguided assumptions. The use of programs such as ChatGPT, its proponents argue, is no more radical than similar tools introduced in the past, such as the calculator. After all, upon the introduction of pocket calculators, the response was not to ban their use in exams, but to change teaching methods so that students are tested on more abstract mathematical concepts rather than their ability to perform rote arithmetic. In this case, the pocket calculator’s potential to replace the student’s role in computation was deemed secondary to its potential to expedite mundane calculations that provide no educational value when performed by hand. Similarly, assistance in transcribing thoughts through ChatGPT could be a catalyst for new instructional methods, freeing time for students to engage in more productive aspects of the writing process by greatly expediting the nonproductive ones. A survey of college English professors likewise concluded that “[s]tudents shared that using these apps in scaffolded assignments can enhance their creative process, a promising outcome… What we should instead focus on is teaching our students data literacy so that they can use this technology to engage human creativity and thought” (Watkins). Therefore, according to Watkins, if the end product is more meaningful and effective, the fact that it was produced by the student in conjunction with AI assistance should be judged no differently than if a student’s algebra work was done with the assistance of a calculator.
One common argument already adopted by many educators is that having access to reference materials during examinations more realistically approximates the conditions under which a student will be required to apply their knowledge, since in the real world the student is unlikely to be without such materials. Educators who embrace this philosophy have started to offer open-note or take-home exams in place of traditional examinations, believing that access to such materials incentivizes students to pursue a deeper understanding of the material rather than testing their ability to memorize. By extension, it can be argued that AI programs are simply another form of reference material, alongside notes or textbooks. In response, Dr. Kiping at Columbia University raises the question of whether such AI programs should be allowed in exams, or whether educators should set limits on the degree of assistance students are allowed to reference (Kiping).
Rather than providing his own opinion, Professor Kiping poses the question openly, offering reasoning from both perspectives. Yet when considering the applications of ChatGPT and similar forms of AI assistance in an academic setting, a number of other concerns arise. For one, opponents of the use of ChatGPT in exams argue that, just as students must demonstrate their ability to perform multiplication and long division before advancing to algebra or calculus, students must first demonstrate their knowledge of any subject in the absence of technological assistance. In addition, Dr. Kiping’s experiments showed that, in an entry-level astrophysics course, ChatGPT attained an exam score of 73.9%. While this is comparable to the student median of 75.6%, it is by no means an infallible result. Furthermore, when asked to explain its answers, ChatGPT frequently made errors in reasoning or calculation (Kiping). Thus, if a student were to rely solely on ChatGPT, their results would likely be no better than if they had no access at all.
In determining the role of ChatGPT in academia, it is imperative to recognize the futility of entrenched opposition, so as not to fall victim to irrelevant pedantry. Neither argument offers an infallible defense against every counterpoint and exception, nor can a single approach be so comprehensive as to resolve every matter of concern. The objective of offering open-note examinations or access to reference materials is to deepen a student’s understanding of the material so that more productive aspects of the subject can be pursued, not to disincentivize students from committing information to memory. Likewise, the purpose of implementing ChatGPT should not be to invite students to freely violate academic integrity, but to enable them to engage in more productive aspects of the curriculum so that they are tested in situations where the practical extent of their knowledge can be utilized.
Works Consulted
D’Agostino, Susan. “Designing Assignments in the ChatGPT Era.” Inside Higher Ed, 31 Jan. 2023, https://www.insidehighered.com/news/2023/01/31/chatgpt-sparks-debate-how-design-student-assignments-now.
Herman, Daniel. “The End of High-School English.” The Atlantic, Atlantic Media Company, 16 Dec. 2022, https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/.
Kiping, David. “ChatGPT Takes a College Level Astrophysics Exam.” YouTube, 7 Jan. 2023, https://www.youtube.com/watch?v=K0cmmKPklp4.
Pittalwala, Iqbal. “Is ChatGPT a Threat to Education?” UCR Magazine, University of California, Riverside, 25 Jan. 2023, https://news.ucr.edu/articles/2023/01/24/chatgpt-threat-education.
Roose, Kevin. “Don’t Ban ChatGPT in Schools. Teach with It.” The New York Times, 12 Jan. 2023, https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html.
Watkins, Marc. “Guest Post: AI Will Augment, Not Replace.” Inside Higher Ed, 14 Dec. 2022, https://www.insidehighered.com/blogs/just-visiting/guest-post-ai-will-augment-not-replace.
“What Students Are Saying about ChatGPT.” The Learning Network, The New York Times, 2 Feb. 2023, https://www.nytimes.com/2023/02/02/learning/students-chatgpt.html.