So, what’s the deal with AI?
Ever since Arnold Schwarzenegger popped onto our screens wearing cool sunglasses and declaring himself the Terminator, artificial intelligence (AI) has been ingrained in popular culture as both exciting and dangerous. Most of the discussion around AI nowadays isn’t about unstoppable overlords and armies of robots; instead, it is largely about art and writing. Discussions about ChatGPT are appearing on every social media channel and conference agenda, and you have most likely experimented with it yourself.
We’re not here to delve into the ethical implications of using the software or explore the practical uses AI might have in a business environment. There is plenty of chatter about that. Instead, let’s take a look at a specific impact AI use has on the education sector: the interaction between AI and assessment.
Regardless of whether you see ChatGPT as a sign of a burgeoning utopia or a herald of the apocalypse, knowing how to design assessment with AI in mind grows continually more vital in ensuring fair and accurate testing of learners.
How does AI interact with assessment?
There is no shortage of negative examples of the interaction between students, AI and assessment. With the busy lives we all lead, entering a few words into ChatGPT and having it churn out a fully fledged answer in ten seconds is all too tempting. Learners and professionals usually know the extent to which they should use AI, but it’s all too easy to push the boundaries beyond what is acceptable.
AI detection software is in place in many assessment bodies, but people still try their luck submitting artificially generated writing. AI detectors are good, but imperfect: some people will get through; some will get caught out. Sometimes it’s difficult to actually prove the use of AI, and that can lead to inaction, as no one wants to make a false accusation of cheating.
Does that make dealing with AI generated writing impossible for assessors?
The short answer is no. The longer answer is that combatting AI use may mean overhauling your approach to assessment to make it as resistant as you can to the inclusion of artificially generated text, removing the reliance on AI detectors altogether. Revamping assessment takes time and focus, but the development of tools like ChatGPT is charging forward at a rapid pace, so, as learning designers, we must approach assessment differently. As these systems become increasingly indistinguishable from the writing of real humans, the temptation to use them to achieve competence will only continue to grow. The only way forward is to design assessments in a way that prevents the inappropriate use of AI.
So, how do I make an assessment AI-proof?
This is a popular research topic, especially in academic and compliance environments. Here are two ways to help you navigate the dilemmas of ChatGPT. Although they come from academic settings, the lessons apply just as well to the corporate world of learning.
Make assessment process-based
Both the University of Melbourne and Boston College emphasise the importance of making assessment process-based. Instead of placing the emphasis on one final assessment, they found it is better to spread the assessment across more, smaller activities. The constant encouragement to keep working on the assessment reduces last-minute pressure. This creates a better environment in which the learner can thrive and reduces both the desire and the demand for programs like ChatGPT.
Often these tasks will be reflective, such as writing learning journals, presenting progress reports, and engaging in peer-based feedback sessions. This holds the learner accountable for the entire process of the assessment, rather than just the final outcome. A well-designed set of assessment activities will also ask learners to explain the thought process behind their final piece of assessment, which makes it harder to craft AI prompts while keeping the answers consistent across multiple separate questions. Achieving this level of consistency through AI is relatively difficult in its current iterations, and when the learner also has to explain the choices the AI made, they end up having to put in more effort anyway!
Deliver real-time assessment
Advice from Innovations in Education and Teaching International states that hosting real-time assessments reduces students’ ability to use AI programmes such as ChatGPT. Where the process-based option isn’t possible, hosting the test in person, or in a live digital space, gives the assessor more control, allowing them to observe any possible instances of cheating.
This idea can be extended beyond simply hosting events live. Instead, learners could be asked to complete the task within a defined time frame and record their screen as they do so. The recording can then be replayed if there are concerns about the integrity of the assessment. This method also accommodates the flexibility learners often want, letting them complete tasks at a time that suits them.
While either of these options may still be subverted by those particularly determined to use AI, for the vast majority the effort required to cheat will be more than the effort required to simply do the work. Making the honest option the path of least resistance disincentivises the use of AI and encourages more effort to be put into actually learning the content of the assessment, creating the ideal win-win scenario.
The secret is to put a degree of control back into assessors’ hands.
As AI continues to develop and becomes more accessible, it will become harder and harder to prevent programs like ChatGPT from interfering with assessment, but the most effective way to combat AI is to keep trying to understand it. Knowing the strengths, weaknesses, benefits, and pitfalls of AI all contributes to a holistic approach to navigating the precarious, ever-changing digital landscape. So, while you bear in mind the ways in which AI can both harm and help, remember that with knowledge comes control, and at the end of the day, we are just looking for some control over the chaos!
Explore ethical AI practices for fair assessments and authentic learning experiences. Let’s create a more equitable education system together.
References
Cotton, Debby R. E., et al. “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT.” Innovations in Education and Teaching International, Mar. 2023, pp. 1–12. DOI.org (Crossref), https://doi.org/10.1080/14703297.2023.2190148.
Reimagining Your Assessments in Light of AI – Digital Learning Design Toolkit. 22 Jan. 2024, https://cdil.bc.edu/resources/emerging-technologies/engaging-with-ai/ai-assessments/.
“Shift the Emphasis from Assessing Product to Assessing Process.” Assessment, AI and Academic Integrity, 27 July 2023, https://melbourne-cshe.unimelb.edu.au/ai-aai/home/ai-assessment/designing-assessment-tasks-that-are-less-vulnerable-to-ai/seven-practical-strategies/1.-shift-the-emphasis-from-assessing-product-to-assessing-process.