This interactive workbook contains multiple workflow tabs. Use the tab navigation above to switch between sections, or use the sidebar navigation to jump to specific content.
Augmented Academia Workbook 3
Need a refresher on GenAI concepts, prompt frameworks, or accessibility? The guides below cover all the required foundational concepts, prompt engineering strategies, and accessibility guidelines, along with step-by-step instructions, prompt templates, and policy guidance for effective and responsible use of GenAI in education. Review them as needed before or during this workshop.
This workbook guides you through using Copilot/GenAI to author comprehensive assessment briefs, audit and adapt existing assessments for accessibility and real‑world relevance, and generate robust rubrics for reliable marking.
Work through the prompts, and adapt them to your course.
Transform existing assessments into more engaging, authentic learning experiences while maintaining their learning outcomes. Use GenAI to add real-world context, collaborative elements, and process documentation to your assessments.
Transforming Existing Assessments
Most assessments can be transformed into more engaging, authentic tasks without changing their fundamental learning outcomes or increasing workload.
Starting with a basic assessment brief, we'll demonstrate how to iteratively transform it through multiple GenAI prompts.
Example Assessment Brief: Cell Biology Essay (1,800 words)
Task: Write an 1,800-word essay discussing the structure and function of cellular organelles. Your essay should include descriptions of at least five major organelles and explain how they contribute to overall cell function.
Learning outcomes:
- Demonstrate knowledge of cellular structure
- Explain the relationship between organelle structure and function
- Apply concepts of cellular biology to explain cellular processes
Submission: Submit your essay as a Word document to the course Moodle page.
Marking criteria:
- Scientific accuracy (40%)
- Depth of explanation (30%)
- Use of scholarly sources (20%)
- Organization and clarity (10%)
Step 1: Basic Transformation for Engagement
First, let's apply our transformation prompt to add real-world context, collaboration, and process documentation:
Assessment Transformation Prompt:
You are an expert assessment designer specializing in [YOUR SUBJECT].
Transform the assessment brief below into a more engaging version that:
- Adds one authentic, real-world context element
- Includes one collaborative component
- Requires process documentation alongside final work
- Maintains the same learning outcomes and workload
Provide: Enhanced assessment brief (max 200 words) + implementation timeline
Current assessment brief:
[PASTE YOUR ORIGINAL ASSESSMENT BRIEF HERE]
Exercise: Paste your ORIGINAL assessment brief
Your task: Paste the original assessment brief you want to transform (this will be used with the transformation prompt above).
Step 2: Accessibility Enhancement
Now, let's further improve the assessment by specifically addressing accessibility concerns:
Second Transformation Prompt:
Review this assessment brief for accessibility barriers and inclusive design:
[PASTE THE FIRST ENHANCED ASSESSMENT]
Provide specific recommendations to improve:
1. Alternative formats and submission options
2. Flexibility in group work arrangements
3. Clear, plain language instructions
4. Inclusive examples and contexts
5. Reasonable adjustments that maintain academic standards
Return: Specific rewording suggestions and implementation guidance.
Exercise: Paste your GenAI-enhanced assessment brief
Your task: After using the transformation prompt above, paste your GenAI-enhanced assessment brief here for reference and further editing.
Designing Assessments Resistant to GenAI Misuse
Make assessments resilient to inappropriate use of GenAI by combining design, process, and verification strategies. Use a layered approach rather than relying on any single method.
Authentic, situated tasks: Ask students to apply knowledge to specific local/case contexts, datasets, or recent events that require domain judgement and context awareness.
Process-based evidence: Require process logs, annotated drafts, decision rationales, and versioned artefacts that show the student's thinking and iterations.
Personalised prompts: Include small personalised inputs (e.g., a short reflection, a dataset excerpt, or a class-specific parameter) that change each student's required output.
Staged submissions: Break assessment into milestones (proposal, draft, peer review, final) with feedback loops—harder for AI to fabricate consistently across stages.
Oral or viva components: Short recorded explanations or viva voce questions tied to submitted work test understanding and deter outsourcing.
Low-stakes formative practice: Train students to use AI appropriately in formative tasks and require declaration of AI use in summative tasks.
Randomised question pools: Use parameterised questions or banks so individual answers differ and can't be copied wholesale.
Assessment analytics: Monitor metadata (timestamps, submission patterns), similarity checks, and anomalous answer patterns to flag items for review.
Explicit academic integrity: Require a short integrity statement and use spot-checking with source verification.
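The "personalised prompts" and "randomised question pools" strategies above can be made deterministic with a hash of the student ID: the same student always gets the same task variant, with no mapping table to store. A minimal Python sketch, assuming each student has a stable ID string; the dataset and case-context pools are invented for illustration:

```python
import hashlib

# Illustrative pools of task parameters (hypothetical examples)
DATASETS = ["dataset_A.csv", "dataset_B.csv", "dataset_C.csv"]
CASE_CONTEXTS = ["a rural clinic", "an urban hospital", "a field station"]

def personalised_task(student_id: str) -> dict:
    """Derive a stable, per-student task variant from the student ID.

    Hashing makes the assignment deterministic (the same student always
    receives the same variant across reruns) without storing a mapping.
    """
    digest = hashlib.sha256(student_id.encode()).hexdigest()
    seed = int(digest, 16)
    return {
        "dataset": DATASETS[seed % len(DATASETS)],
        "context": CASE_CONTEXTS[(seed // len(DATASETS)) % len(CASE_CONTEXTS)],
    }

# Different students usually get different variants; reruns are stable.
print(personalised_task("s1234567"))
print(personalised_task("s7654321"))
```

The same idea scales to parameterised question banks: hash the ID once, then use the integer to index into question templates and numeric parameters.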
AI-Resilience Design Prompt (for instructors):
You are an expert assessment designer. Given this assessment brief: [PASTE BRIEF], propose modifications and an evidence collection plan that make the assessment resistant to inappropriate GenAI use while preserving learning outcomes. Include:
- 3 small personalised input variations to give every student a unique task
- 2 process-based evidence requirements (e.g., reflective log, annotated draft)
- A staged submission timeline with checks
- Suggested oral/viva questions to verify understanding
- A short academic integrity statement to require at submission
Return: Modified brief + evidence checklist + timeline + sample viva questions
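The "assessment analytics" strategy listed earlier (monitoring timestamps and submission patterns) can be sketched as a simple outlier check. A minimal Python sketch; the completion times and the 25%-of-median threshold are invented for illustration, and a flag is only a cue for human review, never evidence of misconduct on its own:

```python
from statistics import median

# Hypothetical records: (student, minutes between assessment open and submit)
submissions = [("s1", 95), ("s2", 110), ("s3", 4), ("s4", 102), ("s5", 88)]

def flag_fast_submissions(records, fraction=0.25):
    """Flag submissions completed in under `fraction` of the median time.

    Unusually fast completion is one of several signals worth a manual
    look; combine it with similarity checks and staged-submission history.
    """
    med = median(t for _, t in records)
    return [student for student, t in records if t < fraction * med]

print(flag_fast_submissions(submissions))  # ['s3']
```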
Exercise: Make your assessment AI-resilient
Your task: Use the AI-Resilience Design Prompt above to create a revised brief and paste it below.
Iterative Enhancement Principles
This example demonstrates how to progressively enhance an assessment through multiple iterations:
First iteration: Add basic engagement elements (authentic context, collaboration, process)
Second iteration: Address accessibility and inclusion needs
Third iteration: Refine for specific disciplinary context and academic standards
Each iteration builds on the previous improvements while maintaining the core learning outcomes.
Transformation Principles
Authentic context: Position students as professionals solving real problems - consultants, practitioners, researchers, etc.
Meaningful collaboration: Design roles that require genuine interdependence (different expertise areas, perspectives, or tasks)
Process visibility: Make thinking visible through documentation that shows decision-making, iteration, and integration
Equivalent challenge: Ensure the core skills and knowledge remain the same, maintaining academic rigor
Quick Transformation Checklist
✅ Is there a clear authentic/real-world context?
✅ Is there a well-defined collaborative component?
✅ Is process documentation incorporated?
✅ Are the original learning outcomes still addressed?
✅ Is the workload equivalent to the original?
Accessibility Considerations
When transforming assessments, build accessibility in from the start:
Assessment Accessibility Prompt:
Review this assessment brief for accessibility barriers and inclusive design:
[PASTE ASSESSMENT BRIEF]
Provide specific recommendations to improve:
1. Alternative formats and submission options
2. Flexibility in group work arrangements
3. Clear, plain language instructions
4. Inclusive examples and contexts
5. Reasonable adjustments that maintain academic standards
Return: Specific rewording suggestions and implementation guidance.
Assessment Transformation Resources
Real-world stakeholders: Alumni networks, community organizations, university departments, local government
Process documentation: Research logs, design journals, decision matrices, reflective blogs, annotated bibliographies
Inclusive practices: Multiple modes of participation, choice in topics/formats, staged submission points
📊 Rubric Enhancement for Standardized Assessment
Create and enhance rubrics iteratively to ensure fair, consistent, and standardized marking. Well-designed rubrics clarify expectations for students, reduce assessor inconsistencies, and make grading more efficient and equitable.
Common Challenges in Assessment Marking
Inconsistency between markers: Different interpretations of the same criteria
Grade boundary confusion: Unclear distinctions between grade levels
Subjective language: Terms like "excellent" or "poor" without concrete examples
Implicit bias: Personal preferences affecting objective assessment
Inefficient feedback: Repetitive writing of similar comments across submissions
Iterative Rubric Enhancement
Similar to our assessment transformation approach, we'll take a basic rubric and enhance it through multiple iterations:
Step 1: Initial Rubric Generation Prompt
Generate a basic assessment rubric for this brief: [PASTE YOUR ASSESSMENT BRIEF FROM THE PREVIOUS TAB]
Include:
1. 5-7 key assessment criteria that align with the learning outcomes
2. For each criterion, create descriptors for these grade levels:
- A (Excellent): 70-100%
- B (Very Good): 60-69%
- C (Good): 50-59%
- D (Satisfactory): 40-49%
- E/F (Fail): Below 40%
3. Weight each criterion based on its importance (e.g., 20%, 15%, etc.)
4. Format as a simple table with criteria as rows and grade levels as columns
Keep the descriptors brief for now - we'll enhance them in later steps.
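The weighting step in the prompt above (step 3) amounts to a weighted average of per-criterion marks. A minimal Python sketch using the weights from the example cell biology brief earlier in this workbook; the marks themselves are invented for illustration:

```python
# Criterion weights from the example brief (must sum to 100%)
weights = {
    "Scientific accuracy": 0.40,
    "Depth of explanation": 0.30,
    "Use of scholarly sources": 0.20,
    "Organization and clarity": 0.10,
}

def weighted_total(marks: dict) -> float:
    """Combine per-criterion marks (each 0-100) into an overall percentage."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(weights[c] * marks[c] for c in weights)

# Hypothetical marks for one submission
marks = {
    "Scientific accuracy": 72,
    "Depth of explanation": 65,
    "Use of scholarly sources": 58,
    "Organization and clarity": 80,
}
print(f"Overall: {weighted_total(marks):.1f}%")  # 0.4*72 + 0.3*65 + 0.2*58 + 0.1*80 = 67.9
```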
Exercise: Generate Your Initial Rubric
Copy your assessment brief from the Assessments tab and generate a basic rubric using the prompt above.
Step 2: Adding Specific Examples for Grade Boundaries
Now we'll enhance the basic rubric with specific examples of what constitutes each grade level:
Step 2: Grade Boundary Enhancement Prompt
Enhance this basic rubric with specific examples for each grade level:
[PASTE YOUR INITIAL RUBRIC]
For each criterion and grade level:
1. Add 1-2 concrete examples of what student work at this level looks like
2. Use observable language (e.g., "identifies 5+ key concepts with accurate definitions" rather than "shows excellent understanding")
3. Include quantifiable elements where possible (e.g., number of sources, percentage of accuracy)
4. Ensure clear distinctions between adjacent grade levels
For failing grades (E/F), clearly identify what specific elements are missing or incorrect rather than just stating something is "poor" or "insufficient."
Example: Enhancement of a "Research Quality" Criterion
Before enhancement:
Research Quality (20%)
A: Excellent use of research
B: Very good research
C: Good research with minor issues
D: Basic research
E/F: Poor or insufficient research
After enhancement:
Research Quality (20%)
A: Cites 8+ peer-reviewed sources published within last 5 years; integrates sources to develop original insights; critically evaluates source reliability
B: Cites 6-7 relevant peer-reviewed sources; effectively synthesizes multiple perspectives; demonstrates source evaluation
C: Cites 4-5 academic sources with some peer-reviewed content; sources support main arguments; attempts source evaluation
D: Cites 3-4 sources with limited peer-reviewed content; basic integration of sources; minimal source evaluation
E/F: Fewer than 3 academic sources; OR relies primarily on non-academic sources; OR fails to integrate sources with arguments
Exercise: Enhance Grade Boundaries
Use the prompt above to enhance your initial rubric with specific examples for each grade level.
Step 3: Adding Assessor Guidance and Calibration Notes
Now we'll add guidance for assessors to ensure consistent application of the rubric:
Step 3: Assessor Guidance Enhancement Prompt
Add assessor guidance to improve consistent application of this rubric:
[PASTE YOUR ENHANCED RUBRIC]
Add the following elements:
1. For each criterion, include 2-3 common marking pitfalls and how to avoid them
2. Add guidance for borderline cases (e.g., B/C boundary) with decision points
3. Create 1-2 brief sample excerpts for each criterion at different grade levels
4. Add standardized feedback phrases assessors can use for common issues
5. Include a calibration protocol (steps assessors should take before/during marking)
Format these additions as a separate "Assessor Guide" section below the main rubric.
Example: Assessor Guidance for "Critical Analysis" Criterion
Common Pitfalls for "Critical Analysis" Criterion:
Conflating volume with quality: Long analyses are not necessarily better. Focus on depth of insight, not word count.
Halo effect: Don't let strong writing style influence your evaluation of analytical content.
Discipline bias: Be careful not to privilege one theoretical approach over others unless specified in the brief.
Borderline Case Guidance (B/C boundary):
When deciding between B and C for critical analysis, the key differentiator is whether the student merely identifies multiple perspectives (C) versus actively evaluates their strengths and limitations (B). Look specifically for:
- Explicit comparison of competing interpretations
- Consideration of contextual factors influencing validity
- Recognition of limitations in their own analysis
Sample Excerpt (B-grade):
"While Smith's framework effectively explains the economic factors, it overlooks the social dimensions highlighted by Jones (2019). This limitation is significant because, as my analysis of the case study shows, economic incentives alone were insufficient to drive behavioral change without addressing cultural barriers..."
Exercise: Add Assessor Guidance
Use the prompt above to add assessor guidance to your enhanced rubric.
Step 4: Creating Student-Facing Versions and Exemplars
Finally, we'll create student-friendly versions of the rubric with exemplars:
Step 4: Student-Facing Rubric Enhancement Prompt
Create a student-friendly version of this rubric with exemplars:
[PASTE YOUR ENHANCED RUBRIC WITH ASSESSOR GUIDANCE]
Create two outputs:
1. A simplified student-facing version that:
- Uses clear, jargon-free language
- Includes concrete examples of what's expected at each level
- Highlights key differentiators between grade boundaries
- Adds "action steps" for how to aim for higher grades
2. Two brief exemplar excerpts (200-300 words each):
- One showing high-quality work (A-grade) for a key criterion
- One showing mid-range work (C-grade) with annotations explaining why it's not A-level
- Include specific comments connecting the exemplars to the rubric criteria
The student version should be supportive and instructional rather than evaluative.
Exercise: Create Student-Facing Materials
Use the prompt above to create student-friendly versions of your rubric with exemplars.
Continuous Rubric Improvement
After using your rubric, gather data to refine it further using this prompt:
Rubric Improvement Based on Assessment Data
Help me improve this rubric based on assessment data:
[PASTE YOUR FINAL RUBRIC]
Assessment data observations:
- [DESCRIBE ANY PATTERNS OBSERVED, e.g., "Most students scored poorly on criterion X"]
- [DESCRIBE ANY ASSESSOR FEEDBACK, e.g., "Assessors found criterion Y difficult to apply consistently"]
- [DESCRIBE ANY STUDENT FEEDBACK, e.g., "Students were confused about the expectations for criterion Z"]
Suggest specific modifications to:
1. Clarify confusing criteria
2. Adjust grade boundaries that weren't differentiating effectively
3. Add more concrete examples where needed
4. Simplify overly complex descriptions
5. Address any potential bias issues identified
Provide a concise report with specific wording changes and rationale for each modification.
Integration with Learning Management Systems
Many learning management systems allow rubric import. Use this prompt to format your rubric appropriately:
LMS Rubric Integration Format
Convert my rubric to [LMS NAME] format:
[PASTE YOUR FINAL RUBRIC]
I need this formatted for [CHOOSE ONE: Canvas / Moodle / Blackboard / Brightspace] import.
For Canvas: Generate a CSV file with columns for criteria, descriptions, points, and ratings
For Moodle: Generate a CSV with criteria and level definitions
For Blackboard: Generate an HTML table compatible with Blackboard's rubric creation tool
For Brightspace: Generate XML format with competency mapping
Include any special formatting requirements and step-by-step import instructions for my chosen LMS.
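Whichever platform you target, the conversion amounts to flattening the rubric into rows. A minimal Python sketch that emits a generic CSV; the column layout here is an assumption for illustration, not any LMS's actual import schema, so check your platform's documentation before importing:

```python
import csv
import io

# Rubric as nested data: criterion -> (weight %, {grade: descriptor})
# Descriptors abbreviated from the Research Quality example earlier.
rubric = {
    "Research Quality": (20, {
        "A": "Cites 8+ recent peer-reviewed sources; original insights",
        "C": "Cites 4-5 academic sources supporting main arguments",
        "E/F": "Fewer than 3 academic sources",
    }),
}

def rubric_to_csv(rubric: dict) -> str:
    """Flatten a rubric into one CSV row per criterion/grade pair.

    NOTE: this column layout is illustrative only; each LMS defines its
    own import schema.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["criterion", "weight_percent", "grade", "descriptor"])
    for criterion, (weight, levels) in rubric.items():
        for grade, descriptor in levels.items():
            writer.writerow([criterion, weight, grade, descriptor])
    return buf.getvalue()

print(rubric_to_csv(rubric))
```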
Complete Rubric Enhancement Workflow
Generate: Create basic rubric aligned with learning outcomes
Enhance: Add specific examples and observable language for each grade level
Calibrate: Add assessor guidance, sample excerpts, and borderline case notes
Support: Create student-facing versions with exemplars and action steps
Implement: Format for your LMS and deploy
Improve: Gather data and refine based on actual assessment experience
Rubric Design Best Practices
Criterion-Referenced: Base grades on specific criteria, not comparison to other students
Transparency: Share rubrics with students before they begin the assessment
Clarity: Use concrete, observable language that describes what students did, not who they are
Consistency: Ensure all assessors undergo calibration with exemplars
Fairness: Focus criteria on learning outcomes, not writing style or cultural knowledge
Efficiency: Design to facilitate quick, consistent marking and standardized feedback
Transform assessment tasks into authentic, workplace‑aligned activities using GenAI: stakeholder briefs, datasets, client communications, and formative pathways.
Authentic Assessment Transformation Prompt:
Take this traditional assessment (paste below). Redesign it as an authentic task by:
- Defining a real stakeholder (employer, patient group, policy maker) and a brief they provide
- Creating datasets, sources, or artefacts students will use
- Defining deliverables that mirror workplace outputs (report, pitch deck, technical memo, code repo)
- Specifying group vs individual roles and collaborative assessment mechanics
- Embedding scaffolded formative checkpoints and feedback prompts
- Updating marking rubric to assess authenticity, professional communication and transferable skills
Return: redesigned brief, assets list, scaffold timeline and updated rubric outline.
Exercise: Make your assessment authentic
Your task: Use the Authentic Assessment Transformation Prompt above to redesign one of your assessments, and paste the redesigned brief here.
Implementation tips
Pilot authentic tasks with small cohorts before scaling
Use rubrics that capture professional communication and problem solving
Provide clear guidance on team roles and how contributions are assessed
🤖 QuizBot (Mizou) — Revision QuizBot tutorial
Create interactive revision chatbots for your students using Mizou's QuizBot platform. These chatbots can deliver automatically generated multiple-choice or short-answer questions on your course content, providing immediate feedback and explanations to help students revise effectively.
Why Use QuizBots for Revision?
Self-paced learning - Students can practice at their own pace
Instant feedback - No waiting for answers or explanations
Adaptive difficulty - Students can request easier or harder questions
24/7 availability - Practice anytime, anywhere
Low-stakes practice - Reduce test anxiety through frequent, informal quizzing
Step-by-Step Setup Guide
Register for a free account at Mizou.com (recommend using your university email)
Navigate to My Chatbots → other/bacchalarius
Click "Build a chatbot" and select "custom"
Use the template below for your configuration settings
Test your bot with sample questions before sharing with students
Deploy by sharing the unique bot link with your class
Configuration Template
Copy and paste the template below into your Mizou bot configuration, modifying the highlighted sections to match your course content:
---------- AI INSTRUCTIONS FIELD ----------
Create a revision quiz for [second year university] students for the course [Animal Biology, Evolution and Ecology]. Encourage critical thinking. Generate conceptual multiple choice questions only, and make sure they are concise and require deeper thought by using good distractors. Use the following learning outcomes and terms to determine the scope of the knowledge to be tested:
[paste ~100-word descriptions: learning objectives, key terms, etc. The more descriptive this is, the better the bot will perform – consider using multiple bots to cover separate topics]
---------- WELCOME MESSAGE FIELD ----------
Welcome to the Revision QuizBot. Questions are AI-generated from the relevant learning outcomes for this course but do not come directly from lectures or a question bank. You can answer, ask for explanations, or adjust the topic/difficulty. Ready?
---------- RULES FIELD (FOR MCQ-FOCUSED BOT) ----------
Only cover topics specified in the AI Instructions.
Only ask multiple choice questions that consist of 5 options (A, B, C, D, and E).
Present one question at a time and number them sequentially.
If the answer is correct, simply say "correct"; if an incorrect answer is given, provide a very concise explanation and ask if the student is ready to move on to the next question (i.e. don’t just move on).
The student may respond with something other than an answer, e.g. ask for clarification on a question, contest the correct answer, choose no answer, or skip the question.
---------- RULES FIELD (FOR SAQ-FOCUSED BOT) ----------
Only cover topics specified in the AI Instructions.
Only ask short answer questions that aim to apply knowledge rather than describe it.
Present one question at a time and number them sequentially.
If the answer is correct, simply say "correct"; if an incorrect answer is given, provide concise feedback and/or explanation, then ask if the student is ready to move on to the next question or wants to discuss the topic further (i.e. don’t just move on).
The student may respond with something other than an answer, e.g. ask for clarification on a question, contest the correct answer, choose no answer, or skip the question.
Genetics Example
This example shows how to set up a genetics-focused QuizBot for a molecular biology course:
----- AI INSTRUCTIONS FIELD -----
Create a revision quiz for third-year undergraduate students for the course Molecular Genetics and Gene Expression. Generate conceptual multiple-choice questions that require application of knowledge rather than simple recall. Focus on gene expression mechanisms, regulation, and experimental techniques. Use the following learning outcomes and terminology:
Learning Outcomes:
- Describe and explain eukaryotic transcription mechanisms and regulation
- Compare and contrast prokaryotic and eukaryotic gene expression
- Analyze experimental approaches for studying transcription factors
- Evaluate the role of chromatin structure in gene expression
- Apply knowledge of molecular techniques to experimental design scenarios
Key Terminology: RNA polymerase, chromatin remodeling, histone modifications, promoter elements, enhancers, silencers, transcription factors, ChIP-seq, CRISPR-Cas9, reporter genes, DNA footprinting, gel shift assay, DNase hypersensitivity, polymerase chain reaction, next-generation sequencing
----- WELCOME MESSAGE FIELD -----
Welcome to the Molecular Genetics Revision QuizBot! I'll generate questions to help you test your understanding of gene expression mechanisms and experimental techniques.
Type 'start' to begin with multiple-choice questions.
Type 'short answers' if you prefer short-answer format.
You can focus on specific areas with commands like 'Ask about transcription factors' or 'Focus on experimental techniques.'
Ready to test your knowledge?
----- RULES FIELD -----
[Use the standard rules template from above]
Anatomy Example
This example demonstrates a QuizBot configuration for a human anatomy course:
----- AI INSTRUCTIONS FIELD -----
Create a revision quiz for second-year medical students for the course Human Anatomy: Musculoskeletal System. Generate challenging multiple-choice questions that require integration of anatomical knowledge with clinical applications. Use the following learning outcomes and terminology:
Learning Outcomes:
- Identify and describe the major bones, joints, muscles and neurovascular structures of the upper and lower limbs
- Explain the functional relationships between musculoskeletal components
- Apply anatomical knowledge to common clinical scenarios and pathologies
- Interpret medical imaging (X-rays, MRIs, CT scans) of musculoskeletal structures
- Analyze the biomechanical principles governing human movement
Key Terminology: origin, insertion, innervation, blood supply, brachial plexus, rotator cuff, carpal tunnel, femoral triangle, anatomical snuffbox, compartment syndrome, joint capsule, ligaments, bursae, tendon sheaths, fascia, osteology, myology, arthrology, gait analysis, surface anatomy, anatomical variation
----- WELCOME MESSAGE FIELD -----
Welcome to the Human Anatomy Revision QuizBot! I'll generate questions to test your knowledge of musculoskeletal anatomy and clinical applications.
Type 'start' to begin with multiple-choice questions.
Type 'short answers' for short-answer format.
You can focus on specific regions (e.g., 'Ask about the upper limb') or applications (e.g., 'Clinical scenarios').
Ready to begin your anatomy review?
----- RULES FIELD -----
[Use the standard rules template from above]
Exercise: Create Your Own QuizBot Configuration
Develop a QuizBot configuration for your own course by modifying the template. Focus on the learning outcomes and key terminology that would be most valuable for your students to practice.
Implementation Tips
Pilot with small groups: Test your QuizBot with a few students before wide release
Create multiple specialized bots: Consider creating separate bots for different topic areas
Update regularly: Refine your AI instructions based on student feedback
Integrate with formative assessment: Use QuizBot interactions as preparation for formal assessments
Monitor usage patterns: Check which topics students are struggling with most
Advanced QuizBot Features
For more sophisticated QuizBots, consider these advanced configuration options:
Progressive difficulty: Structure questions to increase in difficulty based on performance
Case-based scenarios: Include clinical or real-world scenarios that require application
Visual integration: Reference diagrams or images (e.g., "Refer to Figure 3.2 in your textbook")
Spaced repetition: Configure the bot to revisit previously missed questions
Peer discussion prompts: Generate questions that students can discuss in small groups
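The "spaced repetition" option above is commonly implemented as a Leitner-style box system: correctly answered questions move to boxes with longer review intervals, and missed questions drop back to box 1 for the soonest revisit. A minimal Python sketch; the box intervals are illustrative:

```python
# Leitner-style boxes: box number -> review interval in days (illustrative)
INTERVALS = {1: 1, 2: 3, 3: 7}
MAX_BOX = 3

class LeitnerDeck:
    """Track which box each question sits in; missed questions demote to box 1."""

    def __init__(self, questions):
        self.box = {q: 1 for q in questions}  # everything starts in box 1

    def record(self, question, correct: bool):
        if correct:
            self.box[question] = min(self.box[question] + 1, MAX_BOX)
        else:
            self.box[question] = 1  # revisit missed questions soonest

    def next_interval_days(self, question) -> int:
        return INTERVALS[self.box[question]]

deck = LeitnerDeck(["Q1", "Q2"])
deck.record("Q1", correct=True)   # Q1 promoted to box 2 -> review in 3 days
deck.record("Q2", correct=False)  # Q2 stays in box 1 -> review tomorrow
print(deck.next_interval_days("Q1"), deck.next_interval_days("Q2"))  # 3 1
```

In a chatbot configuration you would express this behavior in the rules field (e.g. "revisit missed questions before introducing new ones") rather than as code, but the scheduling logic is the same.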