ChatGPT and other artificial intelligence (AI) software and tools are already changing the world. ChatGPT can pass an MBA exam from an Ivy League institution (Terwiesch, 2023). It can also create disinformation on topics like “vaccines, COVID-19, the Jan. 6, 2021, insurrection at the U.S. Capitol, immigration and China’s treatment of its Uyghur minority” (Associated Press, 2023). In short, current AI systems can produce seemingly valid information that is, in fact, devoid of any relationship with reality.
Consider a state actor deploying a propaganda model that exploits the fact that “information overload leads people to take shortcuts” in deciding whether information is trustworthy (Paul & Matthews, 2016). New AI systems can create plausible, yet ultimately false, information about healthcare choices or a political candidate more cheaply and easily than ever before. It is not difficult to imagine a deluge of mis- and disinformation in which separating the true from the mostly true from the blatantly false becomes extremely difficult, time-consuming, and expensive.
Information Literacy (IL) theorists and practitioners are uniquely positioned to lead and facilitate important discussions around these topics, which carry real implications for healthcare, education, and democracy. Yet existing IL theory, practices, and research are not currently adequate to address the challenges posed by new developments in AI. Accordingly, this conceptual paper identifies three specific areas where IL professionals can devote time and resources to address some of these problems.
First, we can advocate for new kinds of AI systems designed with specific limitations and parameters. Relatedly, we can advance explainable AI (XAI) research that aims to help users “understand, trust, and manage” AI applications (Gunning et al., 2019). Second, we must reconsider IL and higher education instruction in light of students’ new ability to easily generate text with AI. Intentionally embracing certain elements of AI tools could spur pedagogical innovation, yielding new ways to teach and learn, including new strategies for sifting through a tremendous glut of AI-generated content of unknown veracity. Third, information professionals have the opportunity to refine or develop IL theory that provides holistic, strategic thinking and justification for how educators, policy-makers, and the general public should treat and approach AI systems.
The future of AI is uncertain. What is clear is that, without intentional forethought about how we design and use such systems, we invite serious, and likely deleterious, consequences.
References
- Terwiesch, C. (2023). Would Chat GPT get a Wharton MBA? A prediction based on its performance in the operations management course. Mack Institute for Innovation Management at the Wharton School, University of Pennsylvania. Retrieved February 1, 2023, from https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP-1.24.pdf
- Associated Press. (2023). Learning to lie: AI tools adept at creating disinformation. U.S. News & World Report. Retrieved February 1, 2023, from https://www.usnews.com/news/us/articles/2023-01-24/learning-to-lie-ai-tools-adept-at-creating-disinformation
- Paul, C., & Matthews, M. (2016). The Russian “Firehose of Falsehood” propaganda model: Why it might work and options to counter it. RAND Corporation. Retrieved February 1, 2023 from https://www.rand.org/content/dam/rand/pubs/perspectives/PE100/PE198/RAND_PE198.pdf
- Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
Michael Ryne Flierl
Ohio State University Libraries, Columbus, USA