Face-to-face interviews, during which students would have to demonstrate their work is their own, could become a routine part of third-level assessments under recommendations aimed at combating the improper use of artificial intelligence (AI).
Institutions have been encouraged to redesign courses in ways that ensure work is authentic, with a potential return to more old-fashioned written exams alongside “oral verification”.
The proposals are made under new guidelines published by the Higher Education Authority (HEA), the State body that regulates third-level institutions.
A 20-page report from the HEA says universities and other third-level institutions need to embrace the potential of generative AI to contribute to education – while ensuring that students are learning more than simply how to use the new technology.
Institutions should support both staff and students in getting the best out of generative AI models but must step up efforts to prevent their output being passed off as a student’s work, says the report.
Various generative AI models can create text, images and other material based on prompts from the user. Widely available text-based models can now produce outputs that mirror the style and content of assigned student work in many fields.
At present, academics routinely complain that they are presented with essays or other assignments students claim as their own but which the academics believe to be AI-generated. The situation can be difficult to resolve where the student denies wrongdoing.
The report is authored by project lead Dr James O’Sullivan of University College Cork and the Higher Education Authority’s Colin Lowry, Ross Woods and Tim Conlon, with support from members of the Government’s AI Advisory Council and others.
It encourages third-level institutions to “redesign” assessment procedures in order to “prioritise authenticity, foregrounding student authorship and human judgment, as well as process-based learning”. Institutions are also encouraged to design courses and their delivery to ensure that subsequent assessments are fair.
As a backup, the report advises institutions to establish an “institution-wide oral assessment safeguard that enables staff … to demonstrate authorship directly, with the outcome of this process taking precedence over any existing written artefacts”.
“Oral verification can help ensure authenticity without recourse to unreliable detection technologies. AI detectors and probabilistic tools should not be treated as determinative evidence of misconduct, and all integrity processes should rest on dialogue and evidence-based evaluation consistent with natural justice.”
The recommendations could provide the basis for a greater reliance on old-fashioned exams conducted without recourse to technology, or on face-to-face interviews at which students would be required to demonstrate in conversation that the work presented in assessments was their own.
The authors of the HEA-commissioned report suggest AI has a key role to play in the future of the sector and that training for both students and staff will be essential. There must, they say, be an open understanding among all parties as to where AI has been used and where the student is responsible for work submitted for assessment.
At present, they contend, there is a lack of cohesion in policies regarding AI.
The report raises issues, meanwhile, about the procurement of access to AI systems and the retention of data generated by students, which it suggests should not be used by the tech companies for training purposes. It also raises the issue of equitable access to the technology, so that it does not reinforce inequality.
Providing widespread access to the developing technology, along with the training required to ensure up-to-date AI literacy, will require substantial investment across the sector, it is suggested.