Grading

Perfect—then let’s turn this into a self‑grading kit.

1. How to Use This

1. Answer a question.
2. Run the checklist for that question (below). Tick items, assign the listed points.
3. Apply the generic rubric criteria for quality (analysis, clarity, etc.).
4. Sum to 100 → convert to a German grade (table in §4).
5. (Optional) Paste your answer + rubric into an LLM with the prompt template in §5.

2. Generic Analytic Rubric (use for every subpart)

Table 57

| Criterion | Weight* | 4 – Exemplary | 3 – Solid | 2 – Thin | 1 – Minimal | 0 – Missing |
| --- | --- | --- | --- | --- | --- | --- |
| Accuracy & Completeness (did you name ALL required elements?) | 35% | All required items, precise | 1 minor miss | Several misses | Major gaps | None |
| Application / Analysis / Justification | 30% | Clear theory→case link + reasoning | Link present, light reasoning | Vague/partial link | Misapplied | None |
| Use of Specific Evidence/Examples | 15% | Concrete, relevant example(s) | Example generic | Weak/ill-fitting | Name-drop only | None |
| Structure & Clarity (logical order, word/time limits) | 10% | Tight, well-structured | Minor clutter | Hard to follow | Disorganized | Off-topic |
| Critical Reflection / Evaluation (when asked) | 10% | Weighs perspectives, justified stance | Some evaluation | Mostly descriptive | No evaluation | None |
*Weight = share of the “quality” slice for that subpart. First award the element points from the checklists, then apply these percentages to the remaining quality points.
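The footnote above can be made concrete with a small sketch (the criterion keys and function name are my own, not from the kit): rubric levels 0–4 are converted into the quality-slice points of one subpart, assuming all five criteria apply to it.

```python
# Sketch: turn 0-4 rubric levels into the "quality slice" points for
# one subpart. Weights follow Table 57; qual_slice comes from the
# per-question checklists (e.g. Q1a -> 1). Names are illustrative.

WEIGHTS = {
    "accuracy": 0.35,
    "application": 0.30,
    "evidence": 0.15,
    "structure": 0.10,
    "reflection": 0.10,
}

def quality_points(levels: dict, qual_slice: float) -> float:
    """levels maps criterion -> 0..4 score; returns points earned."""
    frac = sum(WEIGHTS[c] * (lvl / 4) for c, lvl in levels.items())
    return round(frac * qual_slice, 2)

# Example: top level on every criterion earns the full slice.
print(quality_points({c: 4 for c in WEIGHTS}, qual_slice=3))  # 3.0
```

If a criterion does not apply (e.g. no reflection asked), you would drop it and renormalise the remaining weights; the kit leaves that choice to the marker.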

3. Question-Specific Checklists & Points

(You can edit numbers if your prof’s allocation differs. “Q” = total question points; “Elem” = element points for required content; “Qual” = quality slice scored with the rubric above.)

Q1 (16 pts)

a) Define info asymmetry + link to decision problems (4 pts = Elem 3, Qual 1)
□ Defines asymmetry (unequal info) (2)
□ Mentions market/decision inefficiency or bounded rationality (1)
□ Clear, concise phrasing (Qual 1)
b) Two reduction mechanisms (6 pts = Elem 5, Qual 1)
□ Mechanism 1 named + explained (e.g., signalling) (2)
□ Mechanism 2 named + explained (e.g., screening/standardised labels) (2)
□ Ties to food labels/sector concretely (1)
□ Qual score (structure, clarity) (1)
c) Apply one model (AIDA/ACCA/etc.) to your example (6 pts = Elem 5, Qual 1)
□ Correctly names model + stages (2)
□ Maps each stage to the example (2)
□ Justifies why this model fits (1)
□ Qual (analysis depth/clarity) (1)

Q2 (17 pts)

a) Two key constraints from EU regs (4 = Elem 4)
□ Names constraint 1 (evidence/authorisation) (2)
□ Names constraint 2 (mandatory nutrition info / wording rules) (2)
b) Effect on claim design & autonomy (7 = Elem 5, Qual 2)
□ Explains influence on wording/creative limits (2)
□ Links to consumer autonomy/sovereignty (2)
□ Notes tension (protection vs creativity) (1)
□ Qual (coherence/analysis) (2)
c) Compliant persuasive strategy (6 = Elem 4, Qual 2)
□ Uses authorised claim type (nutrition/health) correctly (2)
□ Shows product qualifies (nutrient present) (1)
□ Adds persuasive but legal framing (story/context) (1)
□ Qual (application clarity, brevity) (2)

Q3 (17 pts)

a) Define nudge + intrusiveness (4 = Elem 3, Qual 1)
□ Nudge definition (choice architecture, no bans/price) (2)
□ Intrusiveness defined (bypass autonomy/deliberation) (1)
□ Qual (clarity) (1)
b) Two perspectives (8 = Elem 6, Qual 2)
□ Pro argument (health outcomes, overload reduction) (2)
□ Critical argument (manipulation/autonomy) (2)
□ Uses food-sector example(s) for both (2)
□ Qual (balanced, structured) (2)
c) Your position (5 = Elem 3, Qual 2)
□ Clear stance + justification (2)
□ Concrete policy suggestion (e.g., transparent FOP labels) (1)
□ Qual (concise synthesis) (2)

Q4 (16 pts)

a) Theory vs model + 2 features of good theory (6 = Elem 5, Qual 1)
□ Defines theory (systematic explanation) (2)
□ Defines model (simplified representation/tool) (1)
□ Two features (e.g., parsimony, scope, falsifiability) (2)
□ Qual (clarity) (1)
b) Pick model + assign tradition/context + justify (6 = Elem 5, Qual 1)
□ Correct model from course (ELM etc.) (1)
□ Tradition named & fits (1)
□ Context named & fits (1)
□ Justification (how/why) (2)
□ Qual (structure) (1)
c) Research question (4 = Elem 3, Qual 1)
□ Testable, specific RQ (2)
□ Links variables to model (1)
□ Qual (clear phrasing) (1)

Q5 (17 pts)

a) Dissonance in ad (5 = Elem 3, Qual 2)
□ Identifies conflict (belief vs behaviour, etc.) (2)
□ Notes determinants (importance, ratio, rationale) (1)
□ Qual (application clarity) (2)
b) Narrative coherence & fidelity (6 = Elem 4, Qual 2)
□ Defines coherence (story hangs together) (1)
□ Defines fidelity (rings true to values) (1)
□ Applies both to ad (specifics) (2)
□ Qual (depth, clarity) (2)
c) Combining CDT + Narrative (6 = Elem 3, Qual 3)
□ Shows synergy (narrative pre-empts/reduces dissonance) (2)
□ Gives tactical implication for messaging (1)
□ Qual (analysis/structure) (3)

Q6 (17 pts)

a) Pre- & post-test metric (6 = Elem 5, Qual 1)
□ Pre-test metric named + purpose (2)
□ Post-test metric named + purpose (2)
□ Both fit hierarchy/effects logic (1)
□ Qual (clarity) (1)
b) Sampling & data collection (4 = Elem 3, Qual 1)
□ Sampling approach (quota/target) (2)
□ Method (lab/online survey etc.) (1)
□ Qual (brevity) (1)
c) Consequential error policy (7 = Elem 4, Qual 3)
□ States “penalise once, carry forward” (2)
□ Links to specific rubric criteria (accuracy vs application) (1)
□ Describes implementation briefly (marker note) (1)
□ Qual (meta-clarity, structure) (3)

4. Quick Grade Conversion (Adjust if your uni differs)

Table 58

| Points | Grade (DE) |
| --- | --- |
| 95–100 | 1.0 |
| 90–94 | 1.3 |
| 85–89 | 1.7 |
| 80–84 | 2.0 |
| 75–79 | 2.3 |
| 70–74 | 2.7 |
| 65–69 | 3.0 |
| 60–64 | 3.3 |
| 55–59 | 3.7 |
| 50–54 | 4.0 |
| <50 | 5.0 (fail) |
(Check your exam regulations; adjust thresholds if needed.)
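As a quick helper, the bands in Table 58 can be looked up mechanically. A minimal sketch (function and table names are my own; thresholds are the ones listed above, so adjust them with your regulations):

```python
# Sketch: map a 0-100 point total to the German grade bands of Table 58.
# Thresholds are illustrative copies of the table above.

BANDS = [
    (95, "1.0"), (90, "1.3"), (85, "1.7"), (80, "2.0"),
    (75, "2.3"), (70, "2.7"), (65, "3.0"), (60, "3.3"),
    (55, "3.7"), (50, "4.0"),
]

def german_grade(points: float) -> str:
    """Return the first band whose threshold the total reaches."""
    for threshold, grade in BANDS:
        if points >= threshold:
            return grade
    return "5.0"  # below 50: fail

print(german_grade(87))  # 1.7
```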

5. LLM Self-Grading Prompt (copy/paste)

You are grading my answer to an exam question using this rubric and checklist:

[PASTE the relevant question’s checklist + generic rubric]

My answer:
<<<
[PASTE YOUR ANSWER]
>>>

Tasks:
1. Tick each checklist item Y/N and assign those element points.
2. Score each rubric criterion 0–4, multiply by its weight, show math.
3. Give total points for the question.
4. Briefly list what to improve to reach 4-level in any weak criterion.
Be concise, no fluff.

Want a spreadsheet or auto-summing template?

Say the word—I’ll give you a Google Sheets layout or a tiny Python script to total and map grades.
Good? Or need tweaks for any single question?
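A minimal sketch of the offered totaling script (the per-question scores below are hypothetical placeholders; the maxima are the ones from §3):

```python
# Sketch: sum per-question earned points against the checklist maxima.
# Earned values here are made-up examples; replace with your own ticks.
QUESTIONS = {  # question -> (earned, max)
    "Q1": (13, 16), "Q2": (15, 17), "Q3": (12, 17),
    "Q4": (14, 16), "Q5": (16, 17), "Q6": (11, 17),
}

earned = sum(e for e, _ in QUESTIONS.values())
maximum = sum(m for _, m in QUESTIONS.values())
assert maximum == 100, "checklist maxima should sum to 100"
print(f"Total: {earned}/{maximum}")
```

Feed the resulting total into the §4 table to read off the grade.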

Recommended Analytic Grid (5 criteria × 0–4 scale) → convert to points per question

Table 61

| Criterion | 4 – Exemplary | 3 – Competent | 2 – Developing | 1 – Minimal | 0 – Missing/incorrect |
| --- | --- | --- | --- | --- | --- |
| Accuracy & Completeness (did they name all required items, correct definitions?) | All required elements precise; no substantive error | Minor omission or minor error | Several omissions; some inaccuracies | Major elements missing/wrong | Nothing relevant |
| Application/Analysis (theory→case) | Insightful, well-justified application; clear reasoning | Adequate link; justification present | Surface-level link; weak reasoning | Vague or misapplied | None |
| Use of Evidence/Examples | Specific, relevant examples/data | Example present but generic | Weak/irrelevant example | Name-drops only | None |
| Structure & Clarity | Logical flow, headings/bullets as instructed, within word/time limits | Generally clear; minor organization issues | Hard to follow; exceeds limits | Disorganized | Illegible/off-topic |
| Critical Reflection / Evaluation | Weighs perspectives; justified stance | Some evaluation present | Mostly descriptive | No evaluation | None |
Assign point values per subpart → map each criterion slice.
Consequential errors: deduct once, then carry forward logic without double-penalizing.

6. Marking Model (example)

Total 100 pts. Pass ≥50. Suggested grade bands (adjust to your uni scale):
95–100 = 1.0; 90–94 = 1.3; 85–89 = 1.7; 80–84 = 2.0; 75–79 = 2.3; … 50–54 = 4.0; <50 fail. (Placeholder—confirm your faculty’s exact conversion.)