To Find the “Rotten Apple” – Information Ethical Requirements for the Information Literacy of Autonomous Writing Engines

Since the chatbot ChatGPT became available in 2022, there has been a heated discussion, especially in didactics and media education research, about what conclusions should be drawn from students completing tasks with the chatbot (Baidoo-Anu & Owusu Ansah, 2023). In this paper, we give this question an ethical turn. Not only the writer’s truthfulness but also the truthfulness and accuracy of the sources are information ethical requirements. The author is therefore obliged to vouch for the quality of the sources used. Meeting this obligation, however, is bound to a specific competence, namely information literacy. When autonomous writing engines are used, this competence is shared: users of these machines usually do not apply their own information literacy but leave responsibility for the sources to the AI. This delegation of information literacy can only succeed if the machine itself possesses information literacy. This has information ethical consequences for both the use and the development of such technology: autonomous writing machines must be information literate in order to meaningfully and efficiently find the information they have been trained to process, for example on the Internet, according to a given task or question, and then to combine it accordingly.

Let us imagine that machines like ChatGPT will be used increasingly or even ubiquitously in the future. Applications that produce research reports or journalistic texts are already a reality today (Pavlik, 2023). In the future, such machines will independently search for information online in order to process multiple queries. It is important to remember that the web contains not only correct and up-to-date information but also intentionally or accidentally incorrect, tendentious, or falsified sources. Successful information literacy includes constantly checking sources for their truthfulness and reliability. If the old saying “one bad apple spoils the whole barrel” is true, then the ability to distinguish good apples from bad is a core information competency. After all, if increasingly used autonomous machines pick up rotten apples and incorporate them into their texts, not only will these texts themselves become rotten and wrong; the net itself will become infested with this rot, since, as digitization advances, machine-generated texts will be ever more present online.

This paper discusses what information ethical requirements must be placed on the design, programming, and use of autonomous writing machines so that they themselves can actively identify and avoid the rotten apples. It is concerned not with technology as such but, more fundamentally, with explicitly normative demands on the development and use of autonomous writing machines.

References

  • Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Retrieved February 14, 2023, from https://ssrn.com/abstract=4337484
  • Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism & Mass Communication Educator, 78(1). https://doi.org/10.1177/10776958221149577

Matthias Otto Rath
Ludwigsburg University of Education, Germany
