Two apologies – first that it has taken me a while to post because I have been unwell, and second that because it has taken me so long I have done an awful lot of thinking, so this has become a very long post!
I’ve been reading and thinking a lot about this topic recently – thanks Elizabeth for the interesting podcast link[i] (I’ve listened to that twice now!), and Darryl for the rather disturbing articles. I must preface my thoughts by making it clear that I am not a Luddite. I left school at the point the world wide web was really taking off – email was a flashy new technology that I first encountered at university (which makes me seem very old to the children I teach – maybe I am!). But the web has changed education for the better. Children (and adults) now are free to pursue their interests and teach themselves about topics that fascinate them to a level that was just impossible for previous generations. That very freedom brings a dizzying array of choices, some of which are more productive and some more ethical than others. Our role as librarians has changed subtly over the years from gatekeeper and curator of scarce resources to expert guide to overabundant resources – but it is still just as important as it was, if not more so. I should also mention that although I refer to ChatGPT explicitly because a lot has already been written about it, much of my thinking applies to other AIs such as Google Bard and Bing’s AI.
I was fascinated by Elizabeth Hutchinson, John Royce, Jeri Hurd, Susan Merrick and Sabrina Cox’s discussion. It was helpful to hear five information professionals who have been actively exploring this area reflecting on their experiences. I must confess that while I have been following the story carefully, I have not yet signed up to ChatGPT – largely because I have real data privacy concerns about giving my personal phone number to a company that has already used personal data in a way that is so ethically questionable that a few countries are banning it outright[ii] (although Italy has revised that decision[iii] now that OpenAI has put some extra privacy features in place). I can see why they might want my email address. I cannot fathom an ethical reason why they need my phone number, and until I understand that I don’t want to give it to them. I will also not be setting any assignments that require my students to sign up. This interesting article highlights some of these concerns: ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned[iv].
Beyond that, I think there are educational opportunities and also very serious educational concerns (beyond the obvious one of children getting ChatGPT to do their homework for them).
In-built racism (and other forms of discrimination)
Jeri raised the important point in the podcast that the plagiarism checkers that have sprung up to spot AI-generated work are much more likely to produce false positives for second-language learners because their writing is less ‘natural’ and more like an AI’s. Unlike with previous plagiarism checkers (where you could often compare the work and the original sources side by side if necessary), it is impossible to prove either way whether a piece of work is AI-generated because the answers are ephemeral. This seems to stack the odds against non-native speakers. There are also plenty of issues with the secrecy around the training material. It is already widely recognised that the training sets used for facial recognition software led to inherent racism in the products and their use[v]. Without some scrutiny of the training material for AI systems, there is a strong likelihood (and a fair amount of evidence already) of inherent bias (see, for example, the answer to Steven T. Piantadosi’s question[vi] about how to decide whether someone would make a good scientist).
None of this would matter so much if it was just an interesting ‘toy’, but early indications as far as I can see are that it is being adopted unquestioningly and enthusiastically in a wide range of areas at an alarming rate[vii]. We absolutely need to be educating our students about the moral and ethical issues involved and explaining the potential real-world consequences – this is NOT just about them cheating in homework assignments or even coursework. The challenge is that it is hugely attractive in a wide range of applications because it offers easy but deeply flawed solutions.
Source credibility and downright lies!
In the podcast Susan describes asking ChatGPT and Google Bard to recommend fantasy series for children aged 11–18 from 2015–202 and to produce book blurbs for a display. It was astonishing to hear that ChatGPT made every single one of the book recommendations up (less astonishing to hear that the book summaries were poor). But how did Susan know this? Because she is an experienced librarian who explored the book recommendations and quickly discovered they were fake, and who had read the books so knew the summaries/recommendations were poor. She was starting from a position of knowledge. Our children are usually starting from a position of relative ignorance about their topic and are much less likely to spot the blatant lies (sorry – I believe technical specialists prefer the term ‘hallucinations’).
Even the CEO of OpenAI said on Twitter[viii] in December 2022 that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
As EPQ co-ordinator (as well as further down the school) an important part of my job is teaching students to evaluate their sources – to look at where their information has come from, whether the sources are trustworthy for factual accuracy and what their bias is likely to be.
ChatGPT blows this out of the water.
It does not reveal its sources (and even if you ask it to, it is likely to make them up[ix]). So a student (or anyone else) getting information from ChatGPT cannot evaluate their sources for accuracy and bias because they have no idea what they are. Even worse, ChatGPT is very plausible but often just plain wrong. You cannot actually rely on anything that it tells you. If you need to verify everything it says (which I am not sure many students would do in reality) then I don’t see how it saves any time at all. It is a great deal worse than Wikipedia (which can – sometimes – be a helpful starting point, if only because it directs you to external sources which you might then be able to visit and use).
It plagiarises pretty much all its content
There has been a lot of talk about plagiarism by students and plagiarism detection (an arms race that I am not sure we can win), but rather less about plagiarism by ChatGPT itself. Noam Chomsky got a fair amount of press attention recently when he said that ChatGPT is “basically high-tech plagiarism”[x]. Now, although Professor Chomsky is an exceptionally experienced and highly respected educational thinker and linguist, a 94-year-old may not be the first person you would turn to for opinions on emerging technologies. He is, however, spot on. ChatGPT takes information from unknown sources and presents it as its own work. And even when asked it is often unable to cite all (or even any) of its sources with any accuracy. How is that not plagiarism? How can we direct students to use a source that is itself so unethical? While I am a big fan of the IBO in many areas, I did find it hugely depressing that they are suggesting that ChatGPT should be a citable source in coursework[xi]. Of course, if you use a source you must cite it, but I think it is important to provide context (as they might with Wikipedia) about what circumstances (if any) might make this an appropriate source to cite (e.g., if you were writing an essay ABOUT ChatGPT, perhaps).
To me citing an AI such as ChatGPT feels like citing ‘Google’ as a source. It isn’t a source. It is essentially a highly sophisticated search engine. You might as well cite ‘the Library’ or ‘Amazon’ as your source. It doesn’t tell your reader anything about the actual provenance of the information which, surely, is the main point of source citation. Having read the actual statement from the IB[xii] (and associated blog post[xiii]) on this, however, I understand the sentiment behind it. Better to find a legitimate way to allow students to acknowledge use of a new technology than to ban it, which is likely to mean they will just use it without acknowledgement.
Note that even where AIs do cite actual sources, their inability to actually understand them means that it is easy for them to cite sources out of context – as in this hilariously depressing example[xiv] where Bing’s AI cited Google’s Bard as it erroneously claimed it had itself already been shut down.
It stunts thinking
Our SLS recently produced a very interesting infographic on ChatGPT based on Jeri Hurd’s work. I’m very grateful to them and her for the effort – I haven’t had time to produce anything similar – but I don’t feel I can use it because it is so overwhelmingly positive about ChatGPT and (as you can tell) I don’t feel that way. One of the suggestions is to ask it to produce an essay plan/outline for you. Now, it may depend on exactly what is meant by that, but as a bare statement I find it very uncomfortable. An essay plan, produced at the Construct stage, is a very personal path through your own journey from information to knowledge. It is your chance to pull together all the ideas you have discovered into something that is meaningful and coherent to you and to others. It is, in many ways, more important than the essay (or other product) itself.
If you outsource this thinking, you might as well not bother. As an educator I don’t want to know what ChatGPT ‘thinks’, I want to know what YOU think. There are many other subtle and not-so-subtle ways in which AI tempts students because it will do their thinking for them (I was shown an ad on my Duolingo app the other day, proudly telling me that I never needed to write an essay again if I downloaded a particular AI app). In Chomsky’s interview with EduKitchen[xv] he says that he does not feel that he would have had a problem with ChatGPT plagiarism with his students at MIT because they wanted to LEARN, so they wouldn’t have touched it – he describes AI systems like ChatGPT as “a way of avoiding learning”.
It potentially ‘blocks the way of inquiry’
Many have suggested that the rise of AI such as ChatGPT will spell the end for coursework and make exams paramount again[xvi]. Now it is important to make it VERY clear that inquiry is not all about coursework – some of the richest experiences I have had with inquiry have been preparing students for A-level exams. The potential demise of coursework would not reduce the importance of inquiry as an educational tool, but it would make it harder for it to get a foothold in an educational system already biased towards instructionist methods. At the moment skills lessons for coursework are a helpful way in for many librarians to forge new relationships with curricular colleagues and demonstrate what they can offer. With this avenue potentially closing, we as a profession will need to get more creative!
The other suggestion from the podcast that filled me with dismay was the suggestion that essays (including perhaps something like the Extended Essay) could be replaced by being given the draft of an essay and being asked to write the prompt to fix what was wrong with it. As one teaching tool among many, I can see how that could be useful, alongside comparing AI written sources with human written ones, but not to replace essay writing and inquiry themselves. The EE can be a life-changing experience of inquiry for some students, allowing them (often for the first time) to immerse themselves in pursuing their own lines of inquiry on a topic important to them over an extended period of time to produce something new and original. The magic of this would be completely lost if it was only about writing good questions or only about evaluating and improving AI answers to someone else’s question. Extended inquiry definitely still has a vital place – but we do have to work out how AI changes the landscape and how to work with new technologies without destroying what we had. One possible option is robust vivas (or presentations as is the case in the EPQ) accompanying all coursework submissions to make it clear which students actually understand their topics and which do not. But this is very staff and time intensive so I can’t see it catching on wholesale beyond the EE and EPQ.
Of course, there are some amazing inquiry opportunities investigating the technology itself, looking at what it can and can’t do, ethics and potential future uses and, as Jeri found, real opportunities for librarians to take a lead in their schools in terms of how to deal with this new reality. As long as ‘taking the lead’ doesn’t equate to cheerleading for it. I’m also quite interested in the idea of encouraging older students to ask it their inquiry questions and to critique the answers it gives – although I would rather they did that towards the end of their inquiries than at the beginning because I think this is best done from a position of knowledge rather than one of relative ignorance. I’d also have to choose an AI that I had fewer data privacy concerns about than ChatGPT, and use it as an opportunity to explore my other ethical concerns with students.
Alongside issues of inherent bias in the systems themselves, I do worry about overenthusiastic adoption of AI in education exacerbating the current digital divide. If AI becomes part of assessment (e.g., students being assessed on how well they write prompts, being allowed or even encouraged to use it in coursework, etc.) then it does cause issues for those with less ready access to technology at home, who have less opportunity to practise. In one sense this is an argument for using and teaching with AI in school to make sure everyone has some degree of access, but making it an inherent part of assessment has issues. Particularly because there are already early steps to monetise access[xvii], which is not unreasonable because it is very costly to produce and run, but will create a two-tier system. Perhaps it won’t be long before we start seeing ‘institutional access’ options for AI, alongside our existing databases…
So what can it and can’t helpfully do?
One of the most balanced analyses I have come across – neither enthusiastically pro nor rabidly anti – is from Duke University in North Carolina. The full article[xviii] is definitely worth reading, but the summary of their suggestions is that ChatGPT could be good for:
Helping you to broaden your existing keyword set at the start of an inquiry (‘What keywords would you use for a search on…?’).
Identifying places to search for material (‘Which databases would you recommend for a project on…?’). [Note that this does not take account of paid databases that the user actually has access to via their Library but might be a good way to find free databases.]
Summarising sources or writing a literature review [This reminded me of Jeri’s comments on the podcast about Elicit, which I was keen to have a play with. I will include a few comments on this in a separate post.]
Making future predictions, or giving up-to-date information about current events
[i] Hutchinson, E., & Cox, S. (Hosts). (2023, March 8). Empowering Learning Through ChatGPT and AI: Insights from School Librarians [Audio podcast episode]. In School Library Podcast. https://ehutchinson44.podbean.com/