SARAH PAVEY, JUNE 2023
So summer is here, and once again schools are surfacing after the deluge of student examinations. School halls and gyms are returning to their intended purpose, and attention turns to sports days and end-of-term celebrations. This is also the time of year for education trade shows, and I decided to attend the Schools & Academies Show (2023) at ExCeL London in mid-May. Being an inquisitive sort of information literacy specialist, I thought I would take the trouble to ask the English examination boards how the advent of ChatGPT (OpenAI, 2023) and similar Artificial Intelligence (AI) products would impact the future of national assessments in England.
In my previous ILG blog post of February 2023, “Watch Out! The Robots are Coming to School”, I pondered how AI might revolutionise learning to encompass critical analytical values, with emphasis placed at last on information literacy competencies. Certainly, the International Baccalaureate Organisation has already embraced AI as an opportunity to move students on from rote learning of facts to thinking about how the concepts they uncover through independent research can be questioned, endorsed, or rejected. Their statement about ChatGPT and artificial intelligence in assessment and education (updated in June 2023) reads:
“The IB believes that artificial intelligence (AI) technology will become part of our everyday lives—like spell checkers, translation software and calculators. We, therefore, need to adapt and transform our educational programmes and assessment practices so that students can use these new AI tools ethically and effectively. The IB is not going to ban the use of such software but will work with schools to help them support their students on how to use these tools ethically in line with our principles of academic integrity.” (IBO, 2023)
But unlike the IB Diploma, A Levels and GCSEs in England do not contain a coursework element, unless it is completed under examination conditions without access to the internet. With this in mind, it is perhaps understandable why the Joint Council for Qualifications (JCQ), who oversee the assessment standards for these awards, state in their advice about AI:
“Students complete the majority of their exams and a large number of other assessments under close staff supervision with limited access to authorised materials and no permitted access to the internet. The delivery of these assessments will be unaffected by developments in AI tools as students will not be able to use such tools when completing these assessments.” (JCQ, 2023)
At the show, I thought it would be interesting to hear the views of two of the major examination boards who were exhibiting regarding these statements. The first (OCR) said it was not an issue for concern and that they were adhering to the guidance from JCQ. I asked about the future, but they reiterated that there was no plan to re-introduce coursework; indeed, it might well be reduced further in the areas where it still exists, and this was unlikely to change.
JCQ in their advice do address the issue of AI malpractice for courses that require project work. However, the document directs teachers towards undertaking such work in controlled classroom environments, restricting access to the internet unless supervised. Thus, the issue is avoided. More worryingly, from an information literacy viewpoint, the advice underpins the importance of schools having a “plagiarism” policy rather than the call for “academic integrity” advocated by the IBO. The focus on catching the culprits of cheating suggests to students that research is insular, and the marks awarded for opinion without reference (because no material is permitted in the examination room) support this assumption. It is disappointing that JCQ does not recognise that AI can be used as a teaching tool to demonstrate an understanding of information literacy and the importance of learning through building on the work of others.
Moving on to the AQA stand, I was met with much the same response in terms of the use of chatbots by students. Having dismissed student use of AI as a non-issue because of the examination environment, this company's interest rested instead in the use of AI for assessment. Currently, AQA are piloting “online examinations”. In a recent blog on the subject, AQA’s Head of Research and Development, Cesare Aloisi, aired his views on using AI to mark papers:
“Even if future AI systems can be trained to become more ethical and principled, these unresolved fundamental questions about who or what we consider to be an expert, who or what we can trust, and who or what can be found ‘guilty’ of a bad decision will require further research and stakeholder engagement to arrive at workable solutions.” (Aloisi, 2023)
If AI is not to be trusted to do a good job in marking texts due to bias and questionable ethics, I wondered how well current “analogue” examination markers deemed “experts” perform. The Government publishes statistics on examination board grade changes following a request for review (Ofqual, 2023). In Tables 9 and 10, we can see that in more subjective subjects such as English Literature, English Language and Art & Design, awards were raised by two grades for some students. Although the numbers were small (2% at GCSE and 1% at A Level across all exams fell into this bracket), the changes would have had a huge personal impact on those students affected; ergo, human markers are obviously not totally reliable either. This seems a weak argument for dismissing AI intervention when many of the examination mark schemes rely on yes/no answers or a controlled vocabulary anyway. In my opinion, it shouts the same mantra of Avoid! Avoid! rather than investigating potential benefits.
What concerns me most about the examination board and JCQ attitudes is that AI, even with all its current limitations, still helps students to understand that there is more to information handling than regurgitating facts. Maybe the technology will force national examinations taken in England to begin to value process and critical thinking alongside the end product, as found in the IB qualifications. Driving these information literacy competencies further underground will only serve to widen the gap between school students and higher education, and maybe undergraduates will find it much harder to adjust to academic writing demands in higher education – harder even than they find it now.
References
Aloisi, C. (2023) AI and Exam Marking: Exploring the Difficult Questions of Trust and Accountability. Available at: https://www.aqa.org.uk/about-us/our-research/blog/ai-and-exam-marking-exploring-the-difficult-questions-of-trust-and-accountability
International Baccalaureate Organisation (2023) Statement from the IB about ChatGPT and Artificial Intelligence in Assessment and Education. Available at: https://www.ibo.org/news/news-about-the-ib/statement-from-the-ib-about-chatgpt-and-artificial-intelligence-in-assessment-and-education/
Joint Council for Qualifications (JCQ) (2023) Artificial Intelligence (AI) Use in Assessments: Protecting the Integrity of Qualifications. Available at: https://www.jcq.org.uk/exams-office/malpractice/artificial-intelligence/
Ofqual (2023) Reviews of Marking and Moderation for GCSE, AS and A Level: Summer 2022 Exam Series. Available at: https://www.gov.uk/government/statistics/reviews-of-marking-and-moderation-for-gcse-as-and-a-level-summer-2022-exam-series/reviews-of-marking-and-moderation-for-gcse-as-and-a-level-summer-2022-exam-series#number-of-grade-changes-of-2-or-more-grades-by-subject-june-2022
OpenAI (2023) ChatGPT: Optimizing Language Models for Dialogue. Available at: https://openai.com/blog/chatgpt/
Pavey, S. (2023) Watch Out! The Robots are Coming to School. Available at: https://infolit.org.uk/watchout-the-robots-are-coming-to-school-sarah-pavey/
Schools & Academies Show (2023) The Agenda: London 2023. Available at: https://schoolsandacademiesshow.co.uk/schools-academies-show-agenda/