I’ve been meaning to update this thread for a while.
—
Via Paul Prinsloo – “Open, Distributed and Digital Learning Researcher Consultant (ex Research Professor, University of South Africa, Unisa)” – on LinkedIn: “The Core of Gen-AI is Incompatible with Academic Integrity” (2024/12/24), a commentary by Ulises A. Mejias for Future U:
“My main concern is that, by encouraging the adoption of GenAI, we in the educational field are directly undermining the principles we have been trying to instill in our students. On the one hand, we tell them that plagiarism is bad. On the other hand, we give them a plagiarism machine, which, as an aside, may reduce their chances of getting a job, damage the environment, and widen inequality gaps in the process.”
—
Via Ben Williamson – “Higher education teacher and researcher working on education, technology, data and policy [at the University of Edinburgh]” – on LinkedIn (and many more posts besides): “And on we go: The truth is sacked, the elephants are in the room, and tomorrow belongs to tech” (2025/01/09) by Helen Beetham – “Lecturer, researcher and consultant in digital education [at Manchester University]” – for her imperfect offerings newsletter:
“No, it doesn’t matter how destructive generative AI turns out to be for the environment, how damaging to knowledge systems such as search, journalism, publishing, translation, scientific scholarship and information more generally. It doesn’t matter how exploitative AI may be of data workers, or how it may be taken up by other employers to deskill and precaritise their own staff. Despite AI’s known biases and colonial histories, its entirely predictable use to target women and minorities for violence, to erode democratic debate and degrade human rights; and despite the toxic politics of AI’s owners and CEOs, including outright attacks on higher education – still people will walk around the herd of elephants in the room to get to the bright box marked ‘AI’ in the corner. And when I say ‘people’ I mean, all too often, people with ‘AI in education’ in their LinkedIn profiles.”
—
Finally, for now, via Dagmar Monett – “Director of Computer Science Dept., Prof. Dr. Computer Science (Artificial Intelligence, Software Engineering) [at Berlin School of Economics and Law]” – on LinkedIn: “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (2025/01/03) by Michael Gerlich – “Head of Center for Strategic Corporate Foresight and Sustainability / Head of Executive Education / Professor of Management, Sociology and Behavioural Science Researcher, Author, Keynote Speaker, Leadership Coach [at SBS Swiss Business School]” – published in Societies (MDPI):
“Hypothesis 1: Higher AI tool usage is associated with reduced critical thinking skills. The findings confirm this hypothesis. The correlation analysis and multiple regression results indicate a significant negative relationship between AI tool usage and critical thinking skills. Participants who reported higher usage of AI tools consistently showed lower scores on critical thinking assessments.”
“Hypothesis 2: Cognitive offloading mediates the relationship between AI tool usage and critical thinking skills. This hypothesis is also confirmed. The mediation analysis demonstrates that cognitive offloading significantly mediates the relationship between AI tool usage and critical thinking. Participants who engaged in higher levels of cognitive offloading due to AI tool usage exhibited lower critical thinking skills, indicating that the reduction in cognitive load from AI tools adversely affects critical thinking development.”
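A quick aside on what “mediates” means in that second finding, since the term does a lot of work: a mediation analysis splits the total effect of AI tool usage on critical thinking into a direct path and an indirect path that runs through cognitive offloading. The sketch below is a minimal illustration in Python of the standard three-regression decomposition; the data are simulated and the variable names and effect sizes are invented for the example, not taken from Gerlich’s paper.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for the study's constructs (all hypothetical):
# X = AI tool usage, M = cognitive offloading, Y = critical thinking score.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)               # offloading rises with usage
Y = -0.5 * M - 0.1 * X + rng.normal(size=n)    # thinking falls, mostly via M

# Path c: total effect of usage on critical thinking.
c = sm.OLS(Y, sm.add_constant(X)).fit().params[1]

# Path a: effect of usage on the mediator.
a = sm.OLS(M, sm.add_constant(X)).fit().params[1]

# Paths c' and b: usage and mediator together predicting the outcome.
fit = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"total effect     c   = {c:+.3f}")
print(f"direct effect    c'  = {c_prime:+.3f}")
print(f"indirect effect  a*b = {a * b:+.3f}  (the part 'mediated' by offloading)")
```

If the indirect effect a*b accounts for most of the total effect c, the mediator is doing the explanatory work, which is what the paper reports for cognitive offloading.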