My EduTECH epiphany: The word is ‘mandate’


By: Dr Kayt Davies

November 7, 2024

Note: As a journalism educator, my perspective is tilted towards my field, but the insights can be generalised to other fields.

The EduTECH 2024 Conference, August 13 to 14, was weird in the way that big conferences are: a flurry of corporate hustle, while amazing people rush through condensed versions of years of research in back-to-back sessions. Through the glaze of overwhelm that inevitably sets in, I got the message. Something needs to change. We need to change. We need to go beyond tolerating AI use, or regarding it as just a form of cheating to be deterred and detected. We need to lean in and mandate the use of it.

Image credit: EduTech

The message came from people who have spent time with the issue at the highest levels: We need to be producing graduates who are comfortable with AI, fluent in understanding its strengths and weaknesses, and who can safely and ethically deploy it as a tool.

The people making these calls included Professor Rose Luckin, one of the founders of the UK-based Institute for Ethical AI in Education and an author of some of its key documents.

And Professor Phillip Dawson from Deakin University, who was involved in writing a document published by the Tertiary Education Quality and Standards Agency (TEQSA) in late 2023 called Assessment Reform for the Age of Artificial Intelligence.

Apparently ‘experience with AI’ is the new tech skill employers say they are going to be looking for in graduate employees.

So, “mandate it,” the experts say.

The revolution’s here

But this raises a flurry of questions in me: What would we use it for, specifically? How much time do I have to test or practise putting it to that use (whatever I come up with) before I can teach and assess it?

And what about the very real problem that, while I am teaching at a university, I am not “out there” in industry, doing it for real? How can I speak with authenticity about the way it is being used?

Last week, the Poynter Institute advertised a short course, delivered via four webinars for $649, called Level Up: AI for Journalists, to help journalists, and maybe educators, “get the lay of the land” via “weekly hands-on, expert-guided tours of AI tools they can immediately begin implementing in their work”.

Courses like this may be part of the solution but, more broadly, their existence speaks to a widespread need to get up to speed, because the revolution’s here.

Perhaps one of the ways to understand the task ahead is to break it down into parts, or types of AI, or use cases. This might make it easier to start spooning it into our courses.

Maybe breaking it up into:

  • How are journalists using it in industry?
  • How should we be using it as educators in our work?
  • And how are we teaching students about AI, as a transformer of work practices, industries, and culture more broadly?

On the first point, as well as teaching how it is being used, we need to talk about what it is not acceptable to use AI for, and why. The obvious issue is trust: AI sometimes gets things wrong, so how can we teach students to verify its outputs? And how can we assess their verification work?

There are also core skills that journalists need in order to assess whether an AI assistant is doing a good job or not. Our students need to learn what good journalism is so they can recognise it when they see it and call out bad journalism, whether it comes from AIs or from people.

Our challenge is to make sure we can assess their ability to recognise good journalism, even when AI assistants have been used to do some of the tasks that went into creating the content.

Our new AI colleagues

Then there is the teaching side of the issue, because the potential productivity gains mean we will be expected to use it. To put it bluntly, our workloads are likely to increase because of the expectation that AI will make us able to do some things faster.

So, we need to ponder how we can use it to develop our teaching materials, to develop materials for use in assessments, and to help us mark those assessments.

I have mixed feelings about doing this because, deep down, it feels a bit like cheating, but this is the kind of feeling we are being challenged to get over. Using an AI to do parts of a task is not cheating if you still have control over the whole task.

My other problem at this juncture was imagining what tasks I could outsource to an AI assistant. My days are about teaching and assessing certain specific things and helping students with specific questions. I couldn’t see how generic AI assistant answers could be helpful with that.

I raised this question with a Google staffer at EduTECH, and he typed my question into Google’s Gemini AI, which promptly produced a list of the ways it could help a journalism educator. These are the things we need to delve into, unpack and test drive.

Rethinking assessment

All well and good, but do I need extra ideas at this point? I have courses that I’m teaching, activities scheduled for each week, assessments that have been refined over years. I’m not starting from scratch.

Deakin’s Professor Phillip Dawson was thought-provoking on this front. He said we need to look at what our students need to be able to do in a world where AI assistance is ubiquitous, and that our assessments need to prepare them for this world.

A wall-sized poster outside the Google room threw down a challenge on this front, quoting a teacher/blogger known as the Fearful Biologist, who asks: “If AI can do it, why would I ask the student to do it instead?”

Dawson argued that because AI can now do much of what we have previously assessed, we need to reconsider both what and how we assess.

It is a topic he has given a good deal of thought to, as one of the authors of a 2023 TEQSA position paper on assessment reform.

The authors of the eight-page document call AI “an urgent catalyst for change” but say there is considerable expertise, based on evidence, theory and practice, about how to design assessments for a digital world, which includes artificial intelligence.

They acknowledge that generative AI use may make it hard to assess students’ personal learning attainment, but argue that as AI use becomes more common and harder to detect, we need to reconsider the nature of our assessments in relation to generative AI.

They also said there is little value in ignoring AI or implementing bans, calling these approaches oversimplifications, and they warned against setting restrictions that cannot be enforced, as that damages the validity of the assessment. Therefore, while some assessment tasks need to be secured against AI use, in others AI use is to be expected and accepted.

Their counterargument to over-restriction was that “forming trustworthy judgements about student learning in a time of AI requires multiple, inclusive and contextualised approaches to assessment.”

Specifically, they called for assessments that “encourage students to critically analyse AI’s role in, and value for, work and study, aligned with disciplinary or professional values”.

They also argued that assessment should aim to engage students in learning via partnerships between teachers and students, in which students participate in feedback.

This notion quelled any fears I had about AI threatening my job. It seems that the way to tackle AI is to have teachers and assessors who actually know their students, what they are working on, how they are working and what they are learning. This seems like a move away from the ‘massive’ class model that was being spruiked a few years ago, and towards a more intimate, boutique educational experience. Together, students, teachers and AI assistants will do interesting things, and I am looking forward to finding out what that is like.

Professor Shelley Kinash from Universal Higher Education (UHE), a new start-up private tertiary college, explained that as a new institution with new courses, they had been able to build consideration of AI in from the ground up, so all of their assessments take AI assistance into account. She said UHE has two types of assessments in its courses. The first type asks students to use generative AI to start answering the question (writing base code or a draft answer), then debug or ramp it up, and submit it along with a reflection on what they did and why. The second type asks students to draft an answer, polish it with AI and submit it with a reflection on the whole process.

This sweeping approach is more prescriptive than Dawson’s call for many and varied types of assessment, and it runs counter to his conclusion that not all assessments of disciplinary outcomes should be replaced with assessments of AI use or critiques of its outputs. But Kinash said it suited the courses UHE is offering.

The kids will get there first

The room next to the one where these discussions about AI and the tertiary sector were underway had a cool big-screen stage, pumped-up music and a bunch of cheerful Google staff handing out lollies and conference novelty gifts. It was good to pop into for a change of pace.

Several of the presenters on the Google stage were teachers from Google Reference Schools: schools that get early access to new Google teaching tools so that they can pop into conferences like this and talk about the cool ways they are using Gemini AI, Google Vids and other tools from Google Workspace with their grade fives, grade threes and grade ones.

The tools were cool. Grade fives were using templates to make videos that integrated footage, still images, text, voiceovers and background music. Grade ones were painstakingly spelling out words letter by letter to an AI assistant called Thea who congratulated them with tireless enthusiasm and corrected them when they stumbled. Thea also offered to send the teacher a summary of words the class stumbled over, and some activities that would help the class master the missing skills.

Image credit: EduTech

It’s worth noting that Google is not alone in the high-tech/coding/AI classroom space. Grok Academy and Khan Academy are also there, along with a host of other providers who filled the exhibition hall pitching their teaching aids to the thousands of teachers in attendance.

The week I was watching these wonders, the Australian and WA Governments jointly announced that they would co-fund an AI-in-education pilot program to reduce teacher workloads in WA.

Their statement says: “The $4.7 million initiative will use AI at eight WA schools to reduce lesson planning time so teachers can spend more time in the classroom and less time doing admin.”

The initiative is informed by the Australian Framework for Generative Artificial Intelligence in Schools, and the language is all about “workload reduction”, which sounds suspiciously like payroll reduction to me, but the upshot is that it won’t be long before the students in our tertiary classes are people who’ve been working with AI for most of their lives.

What is HI?

The third point is one that UK Professor Rose Luckin drove home.

She said that while we are busy exploring and learning about what AI can do, we will concurrently be refining our understanding of human intelligence (HI) and how it differs from AI.

She said there is so much that people can do that we don’t yet really appreciate: “Embodiment matters: We are complex living, feeling beings. Feeling matters and it is something AI can’t do.”

With decades of experience in this field behind her, she forecast immense change in all fields of human endeavour and stressed the importance of caution and vigilance as we proceed through these turbulent times. This is why talking about it in humanities classes is important.

She worries about AI being advertised as “effortless”. She said the drive by corporations to commercialise and monetise AI would see people being encouraged to simply off-load tasks and that if we do this too fast and too willingly, it may result in widespread skill loss. She cited examples of this already having happened with some skills, and wondered if the effect of more widespread skill loss would be AI dependency.

She also worries that we will overestimate what AI can do and says that working together is the way forward. She said that while she is broadly optimistic, failing to pay enough attention as monetised AI swoops through our culture could have dire consequences.
