# Computing MiniProject Rubric
Note: This rubric is derived from the MulQuaBio MiniProject appendix.
The MiniProject asks students to answer the biological question “What mathematical models best fit an empirical dataset?” in a fully reproducible way. Students choose (or are given) an empirical dataset, fit and compare ≥2 alternative mathematical models (at least one nonlinear/mechanistic), and produce a LaTeX report. The project must be fully reproducible (runnable) end-to-end.
Summative marking rubric — total = 100 marks (Part A: 50 marks; Part B: 50 marks)
## Part A — Computing & Workflow (50 marks)
| # | Criterion | Weight | What earns full marks | Typical reasons for lost marks |
|---|---|---|---|---|
| A1 | Project organisation & README | 10 marks | Clear, conventional project structure with correctly named subdirectories and an informative README. | • Missing or misnamed subdirectories. |
| A2 | Single-script reproducibility | 15 marks | A single run script (…) runs the whole project end-to-end. | • Run script absent or empty. |
| A3 | Code quality & style | 10 marks | Code is readable: meaningful variable/function names, consistent style (PEP 8 for Python, tidyverse/Google style for R). | • Meaningless variable names or no comments. |
| A4 | Model fitting & statistical analysis | 10 marks | ≥2 mathematical models fitted (at least one nonlinear/mechanistic model via NLLS or equivalent). | • Only trivial linear models fitted (no NLLS attempted). |
| A5 | Version control & workflow discipline | 5 marks | Regular commits throughout development with descriptive messages. | • Generic or absent commit messages. |
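As an illustration of what A4 expects, the sketch below fits a linear model and a nonlinear mechanistic-style model by NLLS using `scipy.optimize.curve_fit`, then compares them with AIC. The Ricker-type model, the synthetic data, and the AIC formula used here are illustrative assumptions, not the required models or dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Synthetic data with a hump-shaped (nonlinear) signal -- purely illustrative
x = np.linspace(0, 10, 50)
y = 2.0 * x * np.exp(-0.3 * x) + rng.normal(0, 0.1, x.size)

def linear(x, a, b):
    """Trivial baseline model."""
    return a + b * x

def ricker(x, a, b):
    """A simple mechanistic-style nonlinear model (Ricker form)."""
    return a * x * np.exp(-b * x)

def aic(y, yhat, k):
    """Gaussian-likelihood AIC up to a constant: n*log(RSS/n) + 2k."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Fit both models; the nonlinear fit needs a sensible starting guess (p0)
p_lin, _ = curve_fit(linear, x, y)
p_ric, _ = curve_fit(ricker, x, y, p0=[1.0, 0.1])

aic_lin = aic(y, linear(x, *p_lin), k=2)
aic_ric = aic(y, ricker(x, *p_ric), k=2)
print(f"AIC linear: {aic_lin:.1f}  AIC Ricker-type: {aic_ric:.1f}")
```

Lower AIC indicates better support; with hump-shaped data the nonlinear model should win comfortably. The same pattern extends to fitting each model across many curves in a dataset.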
## Part B — Written Report (50 marks)
The report must be written in LaTeX (article class, 11pt, 1.5-spaced, continuous line numbers, ≤3500 words excluding title page, references, and captions). It must include a separate Title page (title, author, affiliation, word count), Abstract, and sections: Introduction, Methods (with a Computing Tools sub-section), Results, and Discussion. References must use a non-numeric in-text citation format (e.g. apalike) compiled with BibTeX.
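A minimal preamble consistent with this specification might look like the following. The package choices (`setspace`, `lineno`, `natbib`) are one common way to meet the requirements, not the only one, and the bibliography filename is hypothetical:

```latex
\documentclass[11pt]{article}
\usepackage{setspace}  % 1.5 line spacing
\usepackage{lineno}    % continuous (running) line numbers
\usepackage{natbib}    % author-year, non-numeric in-text citations

\onehalfspacing
\linenumbers

\begin{document}
% Title page, Abstract, Introduction, Methods (incl. Computing Tools),
% Results, Discussion ...
\bibliographystyle{apalike}
\bibliography{miniproject}  % hypothetical .bib filename
\end{document}
```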
Key Principle: The narrative must flow coherently from title through discussion, with hypotheses/questions naturally emerging from biological context rather than appearing disconnected. Display items (4–6 figures/tables) should tell most of the story on their own.
| # | Criterion | Weight | What earns full marks | Typical reasons for lost marks |
|---|---|---|---|---|
| B1 | Report format & presentation | 10 marks | LaTeX report follows the required format (article class, 11pt, 1.5-spaced, continuous line numbers, within the word limit). | • Missing or incorrectly configured LaTeX formatting. |
| B2 | Introduction & objectives | 10 marks | Opens with sufficient biological context, supported by citations, that motivates the study topic. | • Context too brief, too generic, or disconnected from study focus. |
| B3 | Methods (including Computing Tools) | 10 marks | Data and its provenance clearly described (source, units, how unique datasets/curves are identified). | • Data provenance or description absent. |
| B4 | Results & display items | 10 marks | Results presented clearly and in the same logical order as the objectives (Introduction→Results alignment). | • Results not related back to stated objectives. |
| B5 | Discussion, conclusions & abstract | 10 marks | Opens by reminding the reader of the original goals; key findings stated succinctly. | • Discussion fails to return to original objectives or biological context. |
## Mark classification
| Total mark | Classification |
|---|---|
| 70–100 | Distinction |
| 60–69 | Merit |
| 50–59 | Pass |
| < 50 | Below Pass threshold |
Provisional mark format (for assessor use):

- Part A (Computing): XX/50
- Part B (Report): XX/50
- Total Mark: XX/100
- Classification: Distinction / Merit / Pass / Below Pass threshold
## Engagement-level anchors
| Band | Typical profile |
|---|---|
| Strong Distinction (75–90) | Complete end-to-end reproducible workflow with no errors; NLLS correctly implemented with ≥2 models (including ≥1 mechanistic); appropriate model comparison metrics; well-crafted Introduction with natural narrative funnel to hypotheses; substantive, concrete Discussion engagement with advanced methods; well-structured LaTeX report showing original synthesis; professional display items (4–6 figures with effective visual communication); clean project organisation; excellent Git history. |
| Solid Distinction (70–74) | Complete or near-complete reproducible workflow; NLLS with ≥2 models and appropriate comparison; Introduction logically structured with clear hypotheses; Discussion explicitly engages advanced methods with concrete reasoning; all required sections present with good depth; clear Computing Tools justification; reasonable display items; solid organisation. |
| Solid Merit (62–69) | Working workflow (possibly minor issues); ≥2 models fitted with comparison metrics; Introduction covers biological context and hypotheses; Discussion acknowledges advanced methods; adequate report with all sections present; Computing Tools section included; reasonable display items; competent organisation. |
| Pass (50–61) | Partially working workflow; some model fitting and comparison attempted; report present with Introduction/Results/Discussion but lacking depth or narrative flow; limited advanced methods engagement; minimal display items; basic organisation; some Computing Tools documentation. |
| Below Pass (<50) | Workflow broken or absent; minimal model fitting; report missing, critically incomplete, or incoherent; no advanced methods engagement; poor project organisation; Computing workflow unclear. |
## Important Note: Ambition vs. Coherence Trade-off
While extra credit is available for attempting more challenging models (multiple nonlinear/mechanistic models), choosing overly ambitious projects risks losing marks overall. Students who spend excessive time on complex model fitting and run out of time to write a coherent, well-structured report with clear narrative flow will score lower than those who tackle a simpler problem well. Coherence and completeness take priority over model complexity. Start with a tractable problem (e.g., two linear models), establish a working workflow end-to-end, then iteratively add model complexity.
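A working end-to-end workflow of the kind described above is usually anchored by the A2 run script. One minimal shape for it is sketched below; every file and directory name here is an assumption for illustration, not a required layout:

```shell
#!/bin/bash
# run_MiniProject.sh -- hypothetical single entry point for the whole project
set -euo pipefail   # stop immediately if any step fails

# 1. Fit and compare the candidate models (hypothetical script names)
python3 code/fit_models.py

# 2. Generate the figures used as display items in the report
python3 code/plot_results.py

# 3. Compile the LaTeX report with BibTeX (repeat passes resolve references)
cd report
pdflatex -interaction=nonstopmode report.tex
bibtex report
pdflatex -interaction=nonstopmode report.tex
pdflatex -interaction=nonstopmode report.tex
```

Starting from a skeleton like this, with simple models, lets the complexity of step 1 grow iteratively without ever breaking reproducibility.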
## Missing submissions policy
| Situation | Deduction |
|---|---|
| … | Treat A2 as 0 |
| … | Up to −10 marks (A1) |
| LaTeX report absent | All B criteria scored 0 |
| Required report section absent | Up to −3 marks per section (B2–B5) |
| Results committed to repo | −2 marks (A1) |
Partial credit is always available where effort is clearly demonstrated.
## Efficiency fairness note
Computational efficiency is assessed proportionately and in context: correctness and reproducibility remain primary, and minor runtime differences are not heavily penalised. Efficiency judgements should be made relative to project scope, dataset size, and model complexity.