Institutional AI policy
I believe that an institutional AI policy exists to protect the institution; it's not aimed at supporting student learning (even though many will claim that it is).
When I started recommending that my students explore the use of generative AI (gAI) to support their learning, some of the feedback I got was that they were uncertain about how and when to use it. We had an institutional AI policy, but the students didn't find it useful for guiding their use. That made me think about the challenges of developing an institutional AI policy for staff use in the classroom, and the longer I reflect on it, the more I think that it can't work.
An institutional policy is a set of guidelines, rules, or principles that governs operations, activities, procedures, and so on. Policies are often legally binding and can be enforced. And they're inwardly focused on regulating institutional operations and members.
This is different to an institutional position statement on AI, which is useful for influencing the institutional culture around the technology. A position statement is not legally enforceable but represents the institution's views and perspectives, and it therefore tends to be broader and more general. For example: Are we open to the idea of generative AI for learning? Is this a place where we can talk about AI in assessment? Do we encourage student use of generative AI? These are all questions that an institutional position statement can inform, which is why position statements seem to be useful.
I think that universities probably shouldn't have an AI policy, just as they shouldn't have a pencil policy. Institutions don't regulate the use of pencils in learning, teaching, and assessment because pencils are general-purpose technologies that can be used for many different purposes in many different contexts. Regulating pencil use would be a frustrating exercise because there cannot be a one-size-fits-all approach. But that's exactly what policies aim to do; they describe the regulatory framework around the way that tools can be used by lecturers and students.
And I think that's the key here: tools that are used by lecturers and students.
We need classroom-level AI policies that are ideally developed by lecturers in collaboration with students. Let's have those who are actually using the tools come up with a framework and decide what works for them, in their context.
But this means we need to trust lecturers and students.
Additional resources
- Rowe, M. (2023). Institutional AI policy or classroom AI policy. usr/space blog.