# How to use this course
*Self-paced. Learning is the goal; weeks are guidance, not gates.*
This is a self-paced course. There is no instructor, no schedule, and no grades. The four weeks are recommended pacing: they exist so you can budget your time, not as deadlines you can miss.
## The shape of the course
The course has four units, each labelled “Week N”. Each unit takes most learners about three to five hours of focused work, so a week is a comfortable cadence. Plenty of learners spread a unit over two weeks, and some compress it into a long weekend. The total volume is roughly the same as a typical four-week graduate seminar, so allow yourself four to eight weeks end-to-end depending on your background and time.
| Unit | Theme | Hands-on |
|---|---|---|
| Week 1 | The 4 D’s of AI fluency | Workflow audit |
| Week 2 | LLM literacy | Prompt engineering |
| Week 3 | scRNA-seq I: QC and clustering on PBMC 3k | AI-free baseline plus AI-assisted analysis |
| Week 4 | scRNA-seq II: annotation, literature, protocols | Final project (one of three paths) |
## What’s on each week page
Every week page (`weeks/week-N.qmd`) has the same structure:
- Learning objectives. What you should be able to do after the unit.
- Suggested pacing. A rough breakdown of where the three to five hours go.
- Readings. The conceptual pages for the unit, each with a one-line “what to focus on” pointer.
- Hands-on practice. Two or three small applied exercises with collapsible self-check answers. Do these against your own work, not toy examples.
- Knowledge check. A 5- to 8-question self-test covering recall, applied scenarios, and “diagnose the failure” prompts. Worked answers sit in a collapsible callout. Try to answer first, then check.
- Project. A larger applied piece. Weeks 1 and 2 are short reflections, Week 3 is the PBMC 3k mini-project, and Week 4 is your final project.
- Self-rubric. A small table you use to grade your own project against the same dimensions an instructor would.
- Going further. Optional extensions and pointers.
## How to self-assess
You get four kinds of feedback, all self-driven:
- Per-page “Check your understanding” callouts. Every conceptual page (in `fluency/`, `literacy/`, `bioinformatics/`, and the module READMEs) has 3 to 5 questions with worked answers tucked into a collapsible callout. Use them to confirm a page landed before moving on.
- Per-week knowledge checks at the end of each week page. These are cumulative across the week’s readings.
- Per-week practice exercises. Applied tasks against your own data and questions. The point is not to get a “right answer”. The point is to make the patterns muscle memory.
- Project self-rubrics. For each of the three projects (Week 1 reflection, Week 3 mini-project, Week 4 final project), a learner-facing rubric lets you grade your own work against the same dimensions an instructor would.
If you miss two or more knowledge-check questions on a topic, revisit the reading and redo the practice. The check is the signal. The reading is the fix.
## Prerequisites
See the Syllabus for the full list. Briefly: comfort reading and modifying R or Python at the 100-line-script level; an LLM chat account (Claude, ChatGPT, or Gemini; the free tier is fine for Weeks 1 and 2); a coding assistant (Claude Code, Cursor, or VS Code with Copilot); and a GitHub account.
The hands-on track for Weeks 3 and 4 runs in Google Colab out of the box, so no local install is needed for the core exercises.
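If you want a feel for the Week 3 hands-on track before you start, the sketch below shows the rough shape of a PBMC 3k QC-and-clustering pass. It is not the course notebook, just a minimal illustration assuming the standard Scanpy toolchain (`scanpy` plus `leidenalg`, both pip-installable in Colab) and illustrative thresholds.

```python
# Minimal sketch of a PBMC 3k QC + clustering pass (not the course notebook).
# Assumes: pip install scanpy leidenalg
import scanpy as sc

adata = sc.datasets.pbmc3k()  # downloads the 10x PBMC 3k dataset (~2,700 cells)

# QC: drop near-empty cells, rarely detected genes, and high-mitochondrial-fraction cells
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None, log1p=False, inplace=True)
adata = adata[adata.obs["pct_counts_mt"] < 5].copy()  # 5% cutoff is illustrative, not prescriptive

# Normalize, select variable genes, embed, cluster, visualize
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```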
## A learner-progress checklist
Tick these off as you go. Markdown checkboxes won’t render as interactive on the site, but they make a useful copy-paste artifact for a notebook or issue.
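- [ ] Week 1: readings, workflow-audit practice, knowledge check, reflection project, self-rubric
- [ ] Week 2: readings, prompt-engineering practice, knowledge check, reflection project
- [ ] Week 3: readings, AI-free baseline plus AI-assisted PBMC 3k mini-project, knowledge check, self-rubric
- [ ] Week 4: readings, knowledge check, final project (one of three paths), self-rubric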
## Doing this with peers (optional)
Self-paced doesn’t mean you have to be alone. The course works well as a small study group of 3 to 5 people who agree on a cadence and meet weekly to compare notes. If you do this:
- Share prompt logs and discernment notes. That is where the learning happens.
- Use the project self-rubrics on each other’s work. They are the same rubrics an instructor would use, and peer review against them is the highest-bandwidth feedback you can get.
- Resist the urge to consensus-grade. Disagreement on what “done well” looks like is data, not a problem.
There is no instructor mode, no facilitator guide, and no answer key beyond what is on the page itself. The repository is open (github.com/mdmanurung/ai-fluency-for-bio). Fork it, adapt it, run it.
## Honest scope
This is a foundations course. Real AI fluency develops over months of supervised practice on real research problems. Four weeks (or eight, or twelve at your pace) gives you the framework, vocabulary, and habits to start that practice deliberately, not to finish it. Plan accordingly.