A call for a vision and a policy

Fact, fake, and our academic future with AI. We need some action!

The AI landscape is set to evolve at an unprecedented pace. Organisations have to adapt, to harness AI's potential as well as to meet the challenges this new technology brings. The educational sector is no exception.

Expecting Executive Board members to keep abreast of these advances while also taking care of their core responsibilities is neither realistic nor optimal. Moreover, AI adoption requires more than textbook knowledge of AI or having typed a few prompts into ChatGPT.

Students have grown up in this digital age and have no problem embracing AI. Most university staff members have a harder time adapting and are lagging behind their students. They ask for guidance on how to deploy AI tools: which to use and how to use them.

Devising Generative AI (GenAI) guidelines is a first step, but not the end goal. We need a university-wide vision on AI and a supporting AI policy, if only because there are more types of AI, and more topics to cover, than GenAI guidelines and AI fraud alone, to name but two examples. Besides, AI will not go away; it will only continue to advance.

Keep pace
Whereas businesses in sectors other than education can choose whether or not to go with the AI flow, the educational sector cannot. We need to keep pace with our students: by preventing fraud in academic writing, for example, and by preparing them for an AI-assisted workplace.

We also have to keep the data of our students, educators, and staff safe. It is not wise to enter all sorts of data into an AI tool without careful consideration. Everybody knows that by now. At least, most students and educators do. 

But what about all the other employees in finance, communication, student administration, and so on? What about the Executive Board members? Everyone will use tools like ChatGPT, not just our students and educators.

Make use of policy officers
The development of this AI policy requires cross-disciplinary understanding of educational management and administration, policy making, student perceptions, ethics, and data management. It also requires legal knowledge and knowledge of privacy and security, all while striving for alignment with the university strategy. We cannot expect teachers to handle all these responsibilities.

Finally, it requires awareness of the cultural aspects and sensitivities that determine how the organisation would best respond to change, and a willingness to adapt. It needs a roadmap towards the implementation of AI in education and research. We cannot do it all at once. And we cannot all make AI policies: we already have so many versions of GenAI guidelines.

Why not make use of policy officers, as we normally would for policy development? We could, for example, appoint one “AI policy problem owner” per faculty, building on the university's vision on AI.

Human or machine
We also need guardrails to protect our intellectual property and our academic values. What's more, academic truth-finding is under pressure. Soon, we might no longer be able to distinguish which text, images, video, code or audio are real (evidence- and fact-based) and which are fake or produced by an AI tool.

For example, I asked a colleague, an expert on the subject, to check a short piece of text for errors. I got it back with the remark that he had run the text through an AI tool… Is that our future: not knowing whether we have been communicating with a human or a machine?

Algorithms have potential everywhere, from learning analytics to stimulating study success to proctoring during examinations. What do we consider responsible and ethical use, and what not? The EU AI Act is designed to protect our (human) rights, but we need to organise our universities to accommodate it appropriately and in good time. As a first step, we could adopt an AI ethical code of conduct.

What we need
It all starts with rethinking the university as an AI-augmented organisation, where there is more to go by than the Higher Education and Research Act (WHW) and the Education and Examination Regulation (OER). We have a younger sister coming up: the university AI policy. Whereas the others are more or less carved in stone, this “new kid on the block” is growing up at a rapid pace.

We need to educate Executive Board members, educators, other staff and students so that they become AI-aware and AI-literate. This also applies to our colleagues in IT, since AI is not a regular type of software and comes with additional challenges.

We need a UU vision on AI adoption in our business operations, and some budget to make this happen.

We also need AI policy officers and AI compliance officers, who will become the cornerstones protecting the conditions in which our students and staff can innovate with AI. Furthermore, we need to protect the academic ground we walk on: that of fact rather than fake.

As Hannah Arendt put it, “A people that no longer can believe anything cannot make up its own mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.”

Oh, and by the way, this text was not created by ChatGPT but by a human being.
