# Prompt library

Reusable prompt templates for the four bioinformatics tasks, plus one for paragraph-level writing feedback

**Note: how to use this page**

Each template below distils a worked pattern from the corresponding bioinformatics page into a fill-in-the-blank prompt. They are starting points, not magic strings. The value is the structure (specifying intent, environment, and constraints up front), not the exact wording. Adapt to your task and verify the output as the linked page describes.

## 1. Debugging a stack trace

Source pattern: [Code assistance](../bioinformatics/code-assistance.qmd), the four-part debugging pattern.

When to use: something failed with a traceback you do not immediately understand and you want a focused first-pass diagnosis.

Template:

````
I am running {what the script is doing in one sentence} and {package} is failing.

Full traceback:

{paste the entire traceback, not just the last line}

Minimal failing code (the smallest block that reproduces the error):

```python
{the smallest reproducer; strip everything else out}
```

Intent: I expect {what the function is supposed to do and what you expected to happen}.

Environment: Python {version}, {package}=={version}, {related packages}=={versions}, {dataset / organism / platform context if relevant}.
````


Verify before trusting: the AI's first guess is often a defensive try/except that hides the bug. Make sure the suggested fix addresses the *cause* (the missing pre-condition) and not just the symptom.
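A minimal sketch of the difference, using a toy pandas frame (hypothetical code, not from the linked page): the symptom fix silences the exception, while the cause fix creates the state the code assumed.

```python
import pandas as pd

df = pd.DataFrame({"gene": ["A", "B"], "counts": [3, 5]})

# Symptom fix (a common first suggestion): swallow the error.
try:
    scaled = df["scaled"]
except KeyError:
    scaled = None  # hides the bug; downstream code silently receives None

# Cause fix: satisfy the missing pre-condition (the "scaled" column was
# never computed) instead of catching the exception.
df["scaled"] = df["counts"] / df["counts"].sum()
scaled = df["scaled"]
```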

---

## 2. Test writing for a function with a docstring

Source pattern: [Code assistance](../bioinformatics/code-assistance.qmd), the `validate_qc_outputs` worked example.

When to use: you have a function whose contract you have written yourself, and you want pytest scaffolding around it without losing the contract.

Template:

```
Function {name} has the signature and docstring below. {One sentence on the dataset and the realistic input the function will see.} Environment: Python {version}, {packages}=={versions}, pytest {version}.

{paste your signature and docstring; you wrote the contract, not the AI}

Draft pytest test cases covering the contract in the docstring. Use a fixture where appropriate. Do not invent additional contract beyond the docstring.
```


Verify before trusting: prune tests that validate *dataset identity* rather than *contract* (a literal "exactly 2,638 cells" assertion is fragile). Add the test the AI missed for your domain (dtype, organism convention, view-vs-copy semantics for AnnData).
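For illustration, the difference between a contract test and a dataset-identity test might look like this. The `filter_cells` function and the QC fixture are toy stand-ins, not the worked example's actual code:

```python
import pandas as pd
import pytest


def filter_cells(df: pd.DataFrame, min_genes: int = 200) -> pd.DataFrame:
    """Keep cells with at least ``min_genes`` detected genes (toy contract)."""
    return df[df["n_genes"] >= min_genes]


@pytest.fixture
def qc_table() -> pd.DataFrame:
    # Small synthetic stand-in for a per-cell QC metrics table.
    return pd.DataFrame({"cell_id": ["a", "b", "c"], "n_genes": [50, 600, 1200]})


def test_threshold_is_enforced(qc_table):
    # Contract test: checks the documented behavior and survives a dataset swap.
    kept = filter_cells(qc_table, min_genes=200)
    assert (kept["n_genes"] >= 200).all()


# Dataset-identity test (fragile, prune it): pins one dataset, not the contract.
# def test_exact_cell_count(full_dataset):
#     assert len(filter_cells(full_dataset)) == 2638
```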

---

## 3. Literature triage on an unfamiliar topic

Source pattern: [Literature review](../bioinformatics/literature-review.qmd), the five-step verification workflow.

When to use: scoping a review or background section in a subfield you do not know yet. Use a grounded tool (Elicit, Consensus, SciSpace, Perplexity with sources) for the citation list itself. Use the prompt below for the vocabulary scaffold.

Template:

```
I am scoping a review of {topic} for {audience: lab meeting / introduction / grant specific aims}. Time horizon: {date range}. {Application context: e.g., “applied to the tumor microenvironment” / “in mouse models” / “in primary tissue”.}

1. List 8 to 10 key concepts and methods in this area in 2026.
2. List 3 to 5 landmark or foundational papers.
3. List 3 to 5 currently active research groups or recent reviews (last 2 to 3 years).

For each citation, give: title, first author, year, journal, and DOI if known.
```


Verify every citation with the five-step check on [Literature review](../bioinformatics/literature-review.qmd): copy DOI, resolve via doi.org, confirm title and authors match, check the abstract on PubMed, and only then incorporate. Budget about three minutes per citation. Refuse to cite anything you cannot verify.

Switch tools if the ungrounded LLM returns even one citation whose DOI fails to resolve or resolves to a different paper. Re-run the same query in Elicit or Consensus. When the list is long, a script can pre-screen the resolution step; see the sketch below.
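A minimal sketch of that pre-screen, using the public Crossref REST API (the DOI in the list is a placeholder; swap in the ones you were given). This only automates step two of the five-step check; matching titles, authors, and abstracts stays manual. Note that most journal articles are registered with Crossref, but not all DOIs are, so treat a miss here as a flag to check doi.org by hand rather than proof of fabrication.

```python
import requests


def crossref_title(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if it does not resolve."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None


# Placeholder DOI; replace with the citations the model produced.
for doi in ["10.1000/demo"]:
    title = crossref_title(doi)
    print(doi, "->", title if title else "DID NOT RESOLVE: do not cite")
```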

---

## 4. Protocol critique mode

Source pattern: [Protocol design](../bioinformatics/protocol-design.qmd), the sparring-partner workflow, step 4.

When to use: you have drafted (or generated) a protocol and want a fast critique pass for missing controls and likely failure modes before showing it to your PI.

Template:

```
Below is a draft protocol for {scientific question, one sentence}. Constraints: {platform / organism / sample size / budget / time window}.

{paste the full draft protocol here, including reagent versions, concentrations, and timing}

Critique this protocol. Specifically:

1. List potential failure modes (technical and biological) and their likelihood.
2. List controls that are missing or under-specified.
3. List version-dependent parameters (chemistries, kit versions, software releases) that I should verify against the current vendor or developer documentation before ordering reagents or running.
4. Flag anything that depends on tacit lab knowledge an AI cannot have.
```

Verify before trusting:

- Cross-check every reagent version, kit code, or instrument firmware against the *current* vendor documentation. Training data lags by months to years, and version numbers update silently. (See the GEM-X v4 vs. v3.1 example on [Protocol design](../bioinformatics/protocol-design.qmd).)
- Cell-loading concentrations, cycle counts, and timing parameters depend on cell type and platform. The AI gives the textbook number, not necessarily the right one.
- Treat the critique as a starting list, not a clearance. Your PI's review and a small dry run remain non-optional.

---

## 5. Paragraph feedback using the MEAL framework

Source pattern: adapted from VIB/ELIXIR "Introduction to Generative AI for Research" (CC-BY 4.0), exercise 07.

When to use: you have drafted a paragraph for a methods section, introduction, or grant specific-aims page and want structured feedback before sharing it. Works best on a single paragraph at a time.

The MEAL framework asks whether a paragraph does four things:

| Component | Question to ask |
|:----------|:----------------|
| **M**ain idea | Is there one clear claim the paragraph is making? |
| **E**vidence | Is that claim supported by data, citations, or worked examples? |
| **A**nalysis | Is it explained *why* the evidence supports the claim? |
| **L**ink | Does the paragraph transition to what comes next? |

Template:

```
Evaluate the paragraph below using the MEAL framework:

- Main idea: is there a clear, singular claim?
- Evidence: is the claim supported by data or citations?
- Analysis: is there explicit reasoning connecting the evidence to the claim?
- Link: does the paragraph connect to what follows?

For each MEAL component: state whether it is present, partially present, or absent, and give one concrete revision suggestion if it is missing or weak.

Paragraph:

{paste one paragraph here}
```

Verify before trusting: the AI will often flag “analysis” as weak even when it is present but implicit. Check whether the suggested revision actually adds analytical depth, or merely makes the structure more visible. Accept the suggestion only if the paragraph is genuinely clearer afterwards.


## A note on temperature and reproducibility

If you are documenting a prompt for a deliverable’s disclosure, record (a minimal example follows the list):

- The exact tool, model, and version (“Claude Sonnet 4.6, web UI, default settings”).
- The full prompt text.
- The date of the session.
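A log entry can be as short as this (values are illustrative; keep the prompt text verbatim):

```
Tool: Claude Sonnet 4.6, web UI, default settings
Date: 2026-02-03
Prompt: "I am scoping a review of {topic} for {audience}..." (full text, pasted verbatim)
```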

LLM outputs are non-deterministic at default settings. Documenting the prompt does not guarantee reproducibility, but it does enable replication and critique. The disclosure rubric in the Syllabus requires “tools listed” and “use described”. These notes satisfy both.