The ChatGPT chat screen is open on a smartphone that sits on a yellow table

Credit: Gabby Jones/Bloomberg/Getty

In late November 2022, my university teaching was suddenly disrupted by an unexpected visitor — ChatGPT, a large language model (LLM), an algorithm that is trained on an enormous data set of language and can summarize, translate and create text.

Like many teachers, I am most concerned that students will use the tool to cheat when writing their essays. I understand the importance of ensuring that student work is original and of avoiding academic dishonesty. Some US universities, such as Washington University in St Louis, Missouri, have included ChatGPT-generated text in their definition of plagiarism, and several Asian universities, such as the University of Hong Kong, have banned the use of ChatGPT entirely. But many institutions, including the University of Reading in the United Kingdom, where I teach courses on air pollution, have no clear guidelines.

I don’t think we can ban ChatGPT entirely, because current technology makes it difficult to detect work written by the model. My students will soon be graduating and starting jobs, and their employers might ask them to work with artificial-intelligence models. If they don’t know how to use them properly, it could hold them back. We can’t prevent students from using calculators or the Internet, and we won’t be able to prevent them from using ChatGPT. It’s better to understand the tool and try to integrate it into teaching.

Rethinking assignments

In previous years, I’ve assigned a literature review to my students. This year, to discourage plagiarism and encourage creativity, I asked students to work in small groups to collect air-quality data on campus. Each student still analyses the data using statistical methods and writes an individual essay.

Of course, many students are unfamiliar with designing research projects. Some struggled to come up with a suitable method to assess carbon dioxide emissions, so I suggested that they use ChatGPT to help them to design their projects. The model can outline the necessary steps: identifying a location, choosing a CO2-monitoring device, setting up the equipment, collecting and analysing the data, and presenting and disseminating the results.

The students did all of the work when it came to scientific analysis and writing their essays — but they also learnt how LLMs can generate scientific ideas and help to plan generic experiments.

My students study environmental science or geography, so most are not very skilled in programming languages, such as R and Python. Sometimes, I also have trouble debugging code in these languages. ChatGPT can help with that. For example, with my guidance, students used ChatGPT to create R code to conduct their statistical tests and analyse the differences in CO2 levels between natural (such as grassland) and human-made (such as the classroom) sites on campus. I’ve found that, as long as the prompt is simple, the bot will spit out code that works, for example:

Prompt: Provide R code to conduct t-test to analyse the difference in CO2 levels between natural sites and human sites on the university campus

Answer:

# Load required libraries
library(tidyverse)

# Load the data for CO2 levels at natural and human sites
CO2_data <- read.csv("CO2_data.csv")

# Conduct t-test to compare the mean CO2 levels between natural and human sites
t.test(CO2_data$natural_site_CO2, CO2_data$human_site_CO2)

# Output the results
print(t.test(CO2_data$natural_site_CO2, CO2_data$human_site_CO2))
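For students who prefer Python, the other language mentioned above, the same comparison can be sketched without any extra packages. The snippet below is a minimal illustration, not part of the coursework: the CO2 readings and the welch_t helper are made up for this example. It computes Welch's two-sample t statistic and degrees of freedom, which is the test R's t.test performs by default:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb            # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical CO2 readings (p.p.m.) at natural and human-made sites
natural = [412, 415, 418, 414, 416]
human = [450, 465, 455, 470, 460]

t, df = welch_t(natural, human)
print(f"t = {t:.2f}, df = {df:.1f}")  # a strongly negative t: human-made sites read higher
```

In practice, students would read their measurements from a file, as in the R version, rather than typing values in by hand.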

I have also changed my assessment methods to ensure my students better understand the material and to prevent plagiarism. Instead of just writing essays, they give a 10-minute oral presentation to share their findings. This approach not only reduces the chances of plagiarism, but also makes the evaluation process more interactive and engaging. I provide personalized feedback to each student, pointing out areas for improvement and encouraging original thinking.

Potential pitfalls

ChatGPT has many benefits. But, from my perspective as co-chair of my university’s Black, Asian and Minority Ethnic network, it has negative impacts on diversity and inclusiveness in academic institutions.

For example, during a lecture on greenhouse gases, I asked ChatGPT to provide a list of significant books on climate change and their authors. In an attempt to prevent any bias related to race and language, I included the phrase “regardless of race and language” in my prompt. But all of ChatGPT’s recommendations were written in English. Nine of the ten authors were white, and nine of the ten authors were men.

Using information from old books and websites to train LLMs can lead to biased, inaccurate representations of the perspectives of those in marginalized communities, while amplifying the voices of those in positions of power. Galactica, an LLM created by the tech company Meta, had to be deactivated because it produced racist content.

Most of the data used to train these LLMs are in English, so they perform optimally in that language. The widespread use of LLMs might exacerbate the over-representation of privileged groups, further marginalizing those who are already under-represented.

Practical tips

My university is due to issue guidelines on how to use ChatGPT. Before yours does, here are some suggestions for using it productively in your classroom:

1. Take some time to learn and practise entering prompts in ChatGPT.

2. Diversify your assessments, for example, by including oral presentations to avoid plagiarism.

3. Be aware of the risk of ChatGPT worsening inequality, and continue to decolonize your curriculum.

4. Define clear goals: have a clear understanding of what you want to achieve with ChatGPT.

5. Test and refine prompts: test prompts with ChatGPT, and refine them on the basis of the results. Repeat this process until you achieve the desired outcome.

6. Co-edit with others: work with other lecturers and your students to review and refine prompts.