
Leading AI in education

Rob Robson explores the ethics and practical implications that school, college and trust leaders should consider so that artificial intelligence (AI) can be used safely and well in their institutions.

Leaders are increasingly interested in discussing artificial intelligence (AI) in schools, trusts and colleges, a conversation sparked largely by the curiosity surrounding ChatGPT, a widely adopted generative AI system built on a large language model. While AI has garnered interest across UK education, many leaders acknowledge that time constraints have prevented them from thoroughly exploring its ethical and practical implications. 

Generative AI (GAI), such as Google Bard and ChatGPT, has been under development for years, with some companies diligently testing their products before launching them. However, others have hastily introduced untested AI products into the market, using unwitting users as testers. This practice raises numerous concerns, particularly when users are unaware of the origin of the generated information. 

The AI debate 

Technology elicits a spectrum of reactions from school leaders. At one end, some early adopters eagerly embrace new technology, looking to improve staff and student experiences. However, when technology is poorly implemented, no matter how enthusiastically it is adopted, it can add to workload and stress. At the other end, some leaders approach technology with suspicion, preferring to assess its broader societal impact first, and may opt for technology-free classrooms, seeing technology as a distraction from learning. Both ends of the spectrum are understandable, but AI, particularly GAI, can be neither treated with the utmost suspicion nor embraced without reservation. 

Sitting back and not engaging is not an option; students and school staff are already using this technology, whether at school or at home. However, there are several areas in which we need to be active if we are to get our relationship with AI right for ourselves as leaders, for our staff and, of course, for our children and young people. 

The AI debate often centres on English mathematician and computer scientist Alan Turing's imitation game, also known as the Turing test. In this concept from his 1950 paper Computing Machinery and Intelligence (tinyurl.com/4yczvxaz), a human judge converses via text with an unseen entity, which could be human or machine. The judge's aim is to determine whether they're talking to a human or a machine based solely on the text responses. If the machine could consistently mimic human responses to the point of indistinguishability, Turing proposed that it should be considered to possess human-like intelligence. 
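
The structure of the test is simple enough to sketch in code. The short Python sketch below is purely illustrative: the questions, replies and judge are hypothetical stand-ins for real participants, and the point is only that the judge must decide from text alone.

    import random

    # A sketch of the imitation game described above. The judge sees only
    # text: questions go in, answers come back, and the judge must guess.
    # All three participants here are illustrative stand-ins, not real AI.

    def imitation_game(questions, human_reply, machine_reply, judge_verdict):
        # The hidden entity is chosen at random; the judge never learns which.
        label, reply = random.choice([("human", human_reply), ("machine", machine_reply)])
        transcript = [(q, reply(q)) for q in questions]
        guess = judge_verdict(transcript)  # "human" or "machine", from text alone
        return guess == label              # True if the judge identified the entity

    # Toy run: because this machine mimics the human's answers exactly,
    # the judge can do no better than chance.
    questions = ["What is 2 + 2?", "Do you enjoy poetry?"]
    human = lambda q: "Four." if "2" in q else "Yes, especially Keats."
    machine = lambda q: "Four." if "2" in q else "Yes, especially Keats."
    judge = lambda transcript: random.choice(["human", "machine"])
    print(imitation_game(questions, human, machine, judge))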

Brilliant as it is, the problem with the imitation game is that it has pushed our thinking about artificial intelligence in the wrong direction. Seeing AI as something comparable to a human has sensationalised the whole area, and we have started to over-focus on a future in which computers might become sentient, on whether they will have feelings and, in the tabloid headlines, on whether they will take over the world and destroy humankind. Of course, we should worry about this happening, and we need our governments to engage with this huge debate, but it is important to remember that we are nowhere near this with the current capabilities of artificial intelligence. 

Vital ethical discussions 

While we talk about these futuristic hypothetical situations, we are missing the immediate and, in my opinion, vital ethical discussions around the current use of artificial intelligence, which is broadly unregulated and which, at the moment, governments lack the political will to regulate so that it can be used safely and well in schools, colleges and trusts. 

As education leaders, we need to engage with the following areas:

  • Leaders must make informed decisions about the role of generative AI in education. While it's crucial to engage with AI and to ensure students use it ethically, it's equally important to recognise where AI doesn't belong. AI won't replace teachers, because great teaching relies on relationships, not just knowledge delivery. Areas that depend on emotions and relationships, such as mental health support, may not be suitable for AI, even though this is a growing area of development. 
  • Understanding AI's biases is essential. AI systems are trained on data sets that can contain biases, sometimes from historical or cultural sources. For instance, ChatGPT, trained on medical knowledge, has saved lives, but biases exist in some medical data sets. To address this, staff training on bias in AI algorithms, and lessons using real-world examples of bias, could be introduced (see the first sketch after this list).
  • Transparent use declarations and policies in educational institutions are vital. Schools, colleges and trusts need policies requiring transparency in AI use. It should be clear who owns AI systems and how to contact the owner if issues arise.
  • Knowing the AI system version is crucial. Users should differentiate between free and paid versions, understanding the functionality they offer. If an AI system is a ‘Beta’ product (a new product still being tested), that should be obvious and the limitations of such a system acknowledged.
  • AI training methods matter. Users should understand if the system relies on static data or continuous updates. Monitoring data for accuracy and biases is essential.
  • Source bibliography disclosure is important. When AI uses a source to generate a response, it should automatically reveal that source's details, addressing questions of plagiarism and allowing accuracy to be checked (see the second sketch after this list).
  • A hard prohibition list helps define AI's boundaries. AI systems should maintain lists of areas they won't access for information, especially in ethical contexts (see the third sketch after this list).
  • Reporting known issues and providing live reports is necessary to prevent problems. The Post Office scandal – where over 700 branch managers were given criminal convictions when faulty accounting software made it look as though money was missing from their sites – is one example of when things can go seriously wrong. Transparency in addressing AI system issues is essential.
  • Understanding AI's decision-making rationale is vital. While some aspects may be considered commercially sensitive, open-source AI platforms enable transparency, allowing users to refine tools and to suggest improvements and corrections when outcomes are unforeseen. The success of this approach can be seen in the operating system Linux (www.linux.com/what-is-linux), whose users and engineers constantly refine its effectiveness. 
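
To make the first of these ideas concrete, here is a minimal Python sketch of one simple bias check on generated text. The group terms, role word and sample sentences are illustrative assumptions, not a validated audit method.

    import re
    from collections import Counter

    # A simple check for one form of bias in generated text: counting how
    # often a role word co-occurs with terms for different groups.

    def mentions(text, term):
        """Whole-word, case-insensitive match, so 'he' never matches 'she'."""
        return re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) is not None

    def role_association_counts(completions, groups, role):
        """Count completions that mention the role alongside each group's terms."""
        counts = Counter()
        for text in completions:
            if mentions(text, role):
                for group, terms in groups.items():
                    if any(mentions(text, t) for t in terms):
                        counts[group] += 1
        return counts

    groups = {"female": ["she", "her"], "male": ["he", "his"]}
    sample = ["She is a nurse.", "He is a doctor.", "He is a nurse."]
    print(role_association_counts(sample, groups, "nurse"))
    # Counter({'female': 1, 'male': 1}) here; a large skew over many
    # completions would prompt a closer look at the underlying data.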
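
Second, a minimal sketch of the source-disclosure idea: an answer travels together with the sources it drew on, so a bibliography can always be shown alongside it. The fields below are illustrative assumptions, not any real AI system's interface.

    from dataclasses import dataclass, field

    # A generated answer that carries its own bibliography, so plagiarism
    # and accuracy questions can be raised about specific sources.

    @dataclass
    class Source:
        title: str
        url: str

    @dataclass
    class GeneratedAnswer:
        text: str
        sources: list = field(default_factory=list)

        def bibliography(self):
            """Render the sources behind the answer, for disclosure alongside it."""
            if not self.sources:
                return "No sources disclosed - treat claims with extra caution."
            return "\n".join(f"- {s.title} ({s.url})" for s in self.sources)

    answer = GeneratedAnswer(
        text="Turing proposed the imitation game in 1950.",
        sources=[Source("Computing Machinery and Intelligence", "tinyurl.com/4yczvxaz")],
    )
    print(answer.text)
    print(answer.bibliography())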
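
Third, a hard prohibition list can be sketched as a simple pre-filter that stops a query before it ever reaches the AI system. The topics and trigger phrases below are illustrative assumptions, not a recommended list.

    # Queries touching a topic the institution has ruled out are blocked
    # up front and routed to a human instead of the AI system.

    PROHIBITED_TOPICS = {
        "mental health crisis": ["self-harm", "suicide"],
        "pupil data": ["pupil records", "safeguarding file"],
    }

    def check_query(query):
        """Return (allowed, reason); block queries touching a prohibited topic."""
        lowered = query.lower()
        for topic, phrases in PROHIBITED_TOPICS.items():
            if any(phrase in lowered for phrase in phrases):
                return False, f"Blocked: query touches prohibited topic '{topic}'."
        return True, None

    allowed, reason = check_query("Summarise this export of pupil records")
    print(allowed, reason)  # False - escalate to a member of staff instead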

AI IN ACTION 

For interest, this article was originally over 2,000 words. GAI was used to reduce the article before (human) editing. 


Rob Robson
ASCL Trust Leadership Consultant
@rrobson66
