Department of History Policy on the Use of Generative Artificial Intelligence[1] in the Classroom
The Western Washington University Department of History does not permit the use of generative AI in its courses at any stage of the conception, research, or writing process without the clear written and verbal permission of the instructor.
We do so for the following reasons:
- The use of generative AI in student work runs counter to our goal of teaching and supporting foundational skills in reading, writing, research, and the critical analysis of information.
- As publishing scholars and educators, we place a high value on human feedback and communication in the production and publication of historical research and writing. These values are incompatible with the use of generative AI as a replacement for the generative, collaborative, and editorial processes of our discipline.
- As publishing scholars, we likewise are acutely aware of the risks and harm generative AI technologies pose to intellectual property rights of writers, artists, and other creators, including our students.[2]
- We are aware of the risks and harm generative AI technologies pose with regard to the spread of misinformation.[3]
- We are aware of the ways generative AI reproduces and amplifies biases of language, culture, gender, ethnicity, and other social categories.[4]
- We are concerned by the negative impact of the growth of generative AI technologies on the environment.[5]
[1] Artificial intelligence (AI) refers to systems that predict outcomes based on statistical models derived from large datasets. Generative AI produces text, images, and videos in response to prompts. Its responses are based on characteristics of its training data, often drawn from online materials, as well as on user input.
[2] Pamela Samuelson, “Generative AI Meets Copyright,” Science 381, no. 6654 (2023).
[3] Charlotte Bird, et al., “Typology of Risks of Generative Text-to-Image Models,” in Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (New York: ACM, 2023); Laura Weidinger, et al., “A Taxonomy of Risks Posed by Language Models,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (New York: ACM, 2022); Chenshuang Zhang, et al., “Text-to-Image Diffusion Models in Generative AI: A Survey,” arXiv preprint arXiv:2303.07909 (2023).
[4] Bird, et al. “Typology of Risks”; Weidinger, et al., “A Taxonomy of Risks.”
[5] Noman Bashir, et al., “The Climate and Sustainability Implications of Generative AI,” in An MIT Exploration of Generative AI: From Novel Chemicals to Opera (2024), <https://doi.org/10.21428/e4baedd9.9070dfe7>; K. Crawford, “Generative AI is Guzzling Water and Energy,” Nature 626, no. 8000 (2024): 693; Matthias C. Rillig, et al., “Risks and Benefits of Large Language Models for the Environment,” Environmental Science and Technology 57, no. 9 (2023); Matthias C. Rillig, “How Widespread Use of Generative AI for Images and Video Can Affect the Environment and the Science of Ecology,” Ecology Letters 27, no. 3 (March 2024); Wim Vanderbauwhede, “The Climate Cost of the AI Revolution,” RIPE Labs (May 2023), <https://labs.ripe.net/author/wim-vanderbauwhede/the-climate-cost-of-the-ai-revolution/>, accessed November 21, 2024; Weidinger, et al., “A Taxonomy of Risks.”