Another interesting article: Why ChatGPT lies in some languages more than others in TechCrunch (26 April 2023, by Devin Coldewey) | “AI is very much a work in progress, and we should all be wary of its potential for confidently spouting misinformation. But it seems to be more likely to do so in some languages than others. Why is that? … It’s already hard enough to tell whether a language model is answering accurately, hallucinating wildly or even regurgitating exactly — and adding the uncertainty of a language barrier in there only makes it harder. … It reinforces the notion that when ChatGPT or some other model gives you an answer, it’s always worth asking yourself (not the model) where that answer came from and if the data it is based on is itself trustworthy.”
—
Assuming that we can overcome these and other legal and ethical objections to using ChatGPT et al (see posts above), what practical objections might we need to overcome, bearing in mind that our fundamental concern here is with augmenting the intellect of our students?
Inquiry, remember, is a response to the desire to know and understand the world and ourselves in it – in other words, reality. This point is worth labouring because everything else hinges on it, and Charles Sanders Peirce (1955, p. 54) makes it powerfully:
Upon this first, and in one sense this sole, rule of reason – that in order to learn you must desire to learn, and in so desiring not be satisfied with what you already incline to think – there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.
Peirce draws our attention to two deeply interrelated things. Firstly, without a desire to learn, the process of learning about reality, which is an inquiry process, is not possible – this should be obvious, but apparently isn’t. Secondly, in desiring to learn, we must also be willing to learn, which is to be open to being changed by what we learn. So, without a desire to learn and an openness to being changed by what we learn, learning is not possible.
These insights highlight the importance and value of the Connect stage, which serves to engage students in the learning process – fanning, or more likely awakening, the desire to learn. Having engaged students in the learning process, Connect also serves to ground and orientate the learning process – identifying what is known, or seems familiar, and gaining a sense of what is unknown, or unfamiliar. This is crucial, because the process of learning about the world and ourselves in it, which is a knowledge-building process, requires information about the world and our place in it. If this knowledge-building process is to be sound, then the information that it depends on needs to be reliable.

Danny Hillis (2012), in The Learning Map, develops the idea that we ought to be able to learn anything for ourselves with suitable guidance. ChatGPT et al has an obvious role to play here as a guide, but its unreliability as a guide severely limits this role, at least for the time being. As the article referenced above puts it, “it’s already hard enough to tell whether a language model is answering accurately, hallucinating wildly or even regurgitating exactly — and adding the uncertainty of a language barrier in there only makes it harder”. Given this, and again in the words of the article, “when ChatGPT or some other model gives you an answer, it’s always worth asking yourself (not the model) where that answer came from and if the data it is based on is itself trustworthy”. From what age can we reasonably expect our students to be able to ask these questions, let alone answer them, and from what age can we realistically expect our students to make the effort to do so? In this regard, ChatGPT et al seems to combine the greatest temptations and pitfalls of Google and Wikipedia. This unreliability as a guide in the Connect stage similarly limits the role of ChatGPT et al elsewhere, particularly in the Investigate stage.
The process of gaining a sense of what is unknown, or unfamiliar, in Connect inevitably raises questions that then lead naturally into the Wonder stage, and this transition is crucial, for, as Hans-Georg Gadamer (1994, p. 365) points out, “to question means to lay open, to place in the open [and] only a person who has questions can have [real understanding]”. This presents another seemingly obvious role for ChatGPT et al, which is to help raise questions that are worthy of investigation. However, as Jerome Bruner (1960, p. 40) points out:
Given particular subject matter or a particular concept, it is easy to ask trivial questions…. It is also easy to ask impossibly difficult questions. The trick is to find the medium questions that can be answered and that take you somewhere.
These medium questions that can be answered and that take you somewhere are fruitful questions, and they must belong to our students if they are to be fruitful: being able to answer them must matter more to our students than merely finding answers to them. This distinction is crucial, because it makes the difference between finding information that appears to answer a question you do not care about, and have therefore invested no thought in, and finding information that helps you to formulate a thoughtful answer to a question you care about answering. It also makes the difference in whether or not you are likely to care about the reliability of your information. We have already noted the questionable reliability of ChatGPT et al as a guide to the information available in response to a question, and this brings into focus some of its limitations in helping us to raise fruitful questions.
Given the extent to which our educational systems are likely to be focussed on students ‘learning’ answers to questions that are not their own, students are likely to need much guidance and support in developing fruitful questions of their own. This will require personal knowledge of the students that grows out of a relationship with them, which will be difficult, if not impossible, for ChatGPT et al to acquire. It is conceivable that, given a certain level of maturity, fruitful questions could emerge out of a ‘discussion’ between a student and ChatGPT, but, as above, we need to ask from what age we can reasonably expect our students to be able to have this kind of ‘discussion’, and from what age we can realistically expect them to make the effort to do so.
So far, it seems to me that the role of ChatGPT et al in actually augmenting student intellect is very limited, and that it is therefore, at best, an unhelpful distraction in our work with students.
References