Thanks Matt. That’s a really interesting reflection on NotebookLM as a Construct tool, and I think I agree with you – it is a tool I am only really comfortable using with material that I already know quite a lot about. A tool for thinking with. Like having a conversation with a friend or colleague who has read the same material you have: not an infallible source, but someone who helps you kick ideas around. My problem with introducing it to students is that I’m not sure the time it would take to teach them to use it properly is worth it (particularly given the pace at which the AI field is changing; I can never tell what will be monetized next).
I am also not convinced that most of my students yet have either the maturity or the motivation to appreciate the difference between using a tool like this to help them get their own thoughts in order (in Construct) and using it to do their thinking for them (in Investigate). That being said, we do need to engage with AI; ignoring it completely is not an option. Given that, perhaps introducing NotebookLM during Construct is one way for them to play with it in a relatively safe way, where they largely know the sources it is drawing on. I say largely because my experience with the Battle of Hastings experiment suggests that for at least some of NotebookLM’s functions it is drawing on the wider internet (even though it claims not to).
The other sticky issue with using a tool like this in coursework/EE/EPQ is the JCQ guidelines on AI use in assessments which state very clearly that:
“where students use AI, they must acknowledge its use and show clearly how they have used it” (p6)
“The student must retain a copy of the question(s) and computer-generated content for reference and authentication purposes, in a non-editable format (such as a screenshot) and provide a brief explanation of how it has been used.” (p6)
“Students should also be reminded that if they use AI so that they have not independently met the marking criteria, they will not be rewarded.” (p6)
An example is given of this on p16: “However, for the section in the work in which the candidate discusses some key points and differences between three historical resources, the candidate has relied solely upon an AI tool. This use has been appropriately acknowledged and a copy of the input to and output from the AI tool has been submitted with the work. As the candidate has not independently met the marking criteria they cannot be rewarded for this aspect of the descriptor (i.e. the third bullet point above).”
So not only does the candidate have the onerous task of keeping screenshots of every conversation they have with the AI and submitting them with the work, but if the AI is deemed to have played a significant role in helping them construct their ideas, they may lose significant higher-level reasoning marks. That seems too high a risk to me, given the relatively minimal reward available. It is a shame, because the adult world is increasingly adopting AI in all sorts of areas, and it would probably be a good idea for schools to give students safe and (relatively!) ethical ways to practise using it. But I’m not sure the current JCQ regulations make it worth the risk.
Of course, since with this use of NotebookLM the student does the final writing rather than the AI, the AI’s contribution to the end product is pretty much undetectable, so students could do this without acknowledging it and there would be next to no chance of being caught. But academic integrity is about following the letter and the spirit of the rules because it is the right thing to do, not because you think you might be caught, so there is absolutely no way I could suggest that course of action to my students.
Which leaves us back where we started…
(Very happy to chat about the podcast Elizabeth)