Critical AI Comparison Report

Lane 2 Assessment

Students conduct independent analysis, then compare with AI outputs to develop critical evaluation skills and digital literacy.

Overview

Critical AI Comparison Reports give students a structured way to examine how AI tools perform in comparison to human-led outputs. The goal is for students to develop evaluative judgement in the context of AI and to hone a responsible approach to using AI in their future professional practice.


Curtin snapshot

Case Study

Dr Jose Loureiro

“While AI tools like ChatGPT can quickly provide insightful and data-driven suggestions, it’s valuable for students – our future investors – to understand the underlying assumptions and calculation methods.”

Faculty of Business and Law  

Jose’s example assessment

About my unit: Faculty of Business and Law | 1000+ students | Hybrid | Group work

I redesigned the analytical report assessment in ECOM1000 to help students develop both financial analysis skills and AI literacy simultaneously. Student groups receive two investment portfolios and must determine which offers better returns using Modern Portfolio Theory and Sharpe Ratio calculations.
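
For context, a minimal sketch of the kind of Sharpe Ratio comparison the groups are asked to perform is shown below. The return series, risk-free rate, and function names are illustrative assumptions, not data or code from the actual assessment.

```python
import numpy as np

def sharpe_ratio(returns, rf_annual=0.02, periods_per_year=252):
    """Annualised Sharpe ratio of a series of periodic returns."""
    excess = np.asarray(returns) - rf_annual / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Hypothetical daily returns for two candidate portfolios (invented data)
rng = np.random.default_rng(42)
portfolio_a = rng.normal(0.0005, 0.010, size=252)  # higher return, higher risk
portfolio_b = rng.normal(0.0003, 0.005, size=252)  # lower return, lower risk

# The portfolio with the higher Sharpe ratio offers the better risk-adjusted return
print(f"Portfolio A Sharpe: {sharpe_ratio(portfolio_a):.2f}")
print(f"Portfolio B Sharpe: {sharpe_ratio(portfolio_b):.2f}")
```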

The assessment follows a strict sequence: groups first complete their own calculations and analysis without AI assistance, documenting their methodology and reasoning. Only after submitting their independent analysis do they use AI tools to analyse the same portfolios. Groups then critically compare the two approaches, identifying where AI provided valuable insights, made errors, or used different assumptions.

This comparative approach revealed fascinating learning opportunities. For example, one group discovered that AI made calculation errors but provided useful context about market conditions they hadn’t considered. Another group found that AI used different risk assumptions, leading them to better understand the importance of clearly defining parameters in financial modelling.
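
To make the second group’s discovery concrete, here is a minimal sketch of how the ranking of two portfolios can flip purely because of the assumed risk-free rate. The summary statistics below are invented for illustration and are not data from the assessment.

```python
import numpy as np

def sharpe(mean_daily, std_daily, rf_annual, periods=252):
    """Annualised Sharpe ratio from daily summary statistics."""
    return np.sqrt(periods) * (mean_daily - rf_annual / periods) / std_daily

# Illustrative (mean, standard deviation) of daily returns for two portfolios
a = (0.0005, 0.010)  # higher return, higher volatility
b = (0.0003, 0.005)  # lower return, lower volatility

# Which portfolio "wins" depends on the assumed annual risk-free rate
for rf in (0.00, 0.02, 0.05):
    better = "A" if sharpe(*a, rf) > sharpe(*b, rf) else "B"
    print(f"rf={rf:.0%}: A={sharpe(*a, rf):.2f}, B={sharpe(*b, rf):.2f} -> prefer {better}")
```

Running this shows Portfolio B preferred at low risk-free rates and Portfolio A preferred at higher ones, which is exactly why an AI output built on undeclared parameter assumptions can disagree with a group’s own analysis.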

The assessment culminates in groups presenting refined recommendations that integrate the strongest elements from both their original analysis and AI insights while clearly articulating the limitations they identified in AI outputs.

My advice 

Encourage students to complete independent work without AI first – this is crucial for developing their own analytical capabilities before AI comparison. Provide standard prompts for initial AI interactions to ensure consistency, but require students to experiment with follow-up questions.

Focus marking on student reasoning and critical evaluation rather than the quality of AI outputs. The learning happens in the comparison and synthesis phases, where students must defend their methodology and integrate insights. I also recommend requiring students to identify specific examples where they trusted their analysis over AI recommendations and explain their reasoning.

Suggested marking criteria

Note: Marking criteria and weighting are suggested guidelines. Specific descriptions should be adapted to relevant content and learning objectives.