Critical AI Comparison Reports are intended to serve as mechanisms for students to consider how AI tools perform in comparison with human-led outputs. The goal is for students to develop evaluative judgement in the context of AI and to hone their own responsible approach to AI for their future professional practice.
Key features
Lane 2: Non-secure assessment
Supported by both in-person and online teaching modes
Scope for students to participate as individuals, pairs or teams
Encourages students to build confidence in their own analytical capabilities before exploring AI tool contributions
Develops professional digital literacy through hands-on experimentation with various AI platforms and approaches
Builds evaluative judgement by requiring students to critically assess AI outputs against their own reasoning
Prepares students for responsible AI integration in their future professional contexts
Promotes understanding of AI tool diversity and the importance of strategic prompting
How it works
Educators provide datasets, case studies, or scenarios suitable for analysis using unit concepts
Students are encouraged to approach the analytical challenge initially through human reasoning and disciplinary knowledge, which will aid their later reflection
Curriculum and in-class time are used to provide initial support for AI tool experimentation, including guidance on testing different platforms and prompting approaches
Assessment framework emphasises learning through comparison and developing professional AI literacy
Students begin analysis using their own disciplinary knowledge and reasoning skills (ideally without AI assistance)
Students then use a range of AI tools to analyse the same materials, documenting prompts and outputs to understand the range of AI capabilities
Students systematically compare findings, identifying where AI succeeded, failed, or provided different perspectives
Students critically evaluate findings, considering where AI insights have complemented or challenged their human analysis
Regular check-ins give students a peer network in which to share their reflections on appropriate AI integration for their future professional practice
Students submit comprehensive reports documenting their initial human-led analysis, their subsequent AI exploration, a critical comparison of the various approaches, and their professional insights about the responsible use of AI
Documentation also includes examples of students’ AI ‘fails’ as well as AI ‘wins’, demonstrating their understanding of AI diversity and strategic application
Curtin snapshot
Case study
Dr Jose Loureiro
“While AI tools like ChatGPT can quickly provide insightful and data-driven suggestions, it’s valuable for students – our future investors – to understand the underlying assumptions and calculation methods.”
Faculty of Business and Law
Jose’s example assessment
About my unit: Faculty of Business and Law | Over 1000 students | Hybrid | Group work
I redesigned the analytical report assessment in ECOM1000 to help students develop both financial analysis skills and AI literacy simultaneously. Student groups receive two investment portfolios and must determine which offers better returns using Modern Portfolio Theory and Sharpe Ratio calculations.
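For context, the comparison hinges on the Sharpe Ratio: a portfolio’s excess return over the risk-free rate, divided by the volatility of its returns. Below is a minimal sketch of the calculation students might perform by hand before turning to AI tools; the return series, portfolio names, and risk-free rate are illustrative assumptions, not Jose’s actual assessment data.

    import statistics

    def sharpe_ratio(returns, risk_free_rate):
        # Excess return per unit of volatility:
        # (mean return - risk-free rate) / standard deviation of returns
        excess = [r - risk_free_rate for r in returns]
        return statistics.mean(excess) / statistics.stdev(returns)

    # Hypothetical annual returns for two portfolios (0.08 = 8%); illustrative only
    portfolio_a = [0.08, 0.12, -0.03, 0.10, 0.07]
    portfolio_b = [0.15, 0.22, -0.12, 0.18, 0.05]
    risk_free = 0.03  # assumed risk-free rate

    for name, returns in [("A", portfolio_a), ("B", portfolio_b)]:
        print(f"Portfolio {name}: Sharpe ratio = {sharpe_ratio(returns, risk_free):.2f}")

A higher ratio indicates better risk-adjusted returns, which is the basis on which groups rank the two portfolios before checking whether AI tools reach the same conclusion using the same assumptions.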
The assessment follows a strict sequence: groups first complete their own calculations and analysis without AI assistance, documenting their methodology and reasoning. Only after submitting their independent analysis do they use AI tools to analyse the same portfolios. Groups then critically compare the approaches, identifying where AI provided valuable insights, made errors, or used different assumptions.
This comparative approach revealed fascinating learning opportunities. For example, one group discovered that AI made calculation errors but provided useful context about market conditions they hadn’t considered. Another group found that AI used different risk assumptions, leading them to better understand the importance of clearly defining parameters in financial modelling.
The assessment culminates in groups presenting refined recommendations that integrate the strongest elements from both their original analysis and AI insights while clearly articulating the limitations they identified in AI outputs.
My advice
Encourage students to complete independent work without AI first – this is crucial for developing their own analytical capabilities before AI comparison. Provide standard prompts for initial AI interactions to ensure consistency, but require students to experiment with follow-up questions.
Focus marking on student reasoning and critical evaluation rather than the quality of AI outputs. The learning happens in the comparison and synthesis phases, where students must defend their methodology and integrate insights. I also recommend requiring students to identify specific examples where they trusted their analysis over AI recommendations and explain their reasoning.
Suggested marking criteria
AI exploration: Shows comprehensive exploration of AI tool capabilities through varied platforms, prompting strategies, and approaches. Documents the experimentation process effectively, demonstrating understanding of how different tools and prompts produce different outputs.
Critical evaluation: Demonstrates sophisticated analysis of AI tool strengths, limitations, and inconsistencies. Shows strong evaluative judgement in identifying where AI provides valuable insights versus where it produces errors, biases, or superficial responses. Articulates a clear understanding of AI capabilities across different analytical contexts.
Responsible integration: Develops a nuanced understanding of when and how to integrate AI tools responsibly in professional practice. Shows ability to make informed decisions about AI use, demonstrates awareness of ethical considerations, and articulates a sophisticated approach to AI integration that maintains analytical integrity.
Communication: Clearly presents comparative analysis and experimental findings. Effective documentation of the AI exploration process enables understanding of the methodology. Professional presentation of complex comparative insights and decision-making frameworks.
Note: Marking criteria and weighting are suggested guidelines. Specific descriptions should be adapted to relevant content and learning objectives.