A practical tool in your R workflow — not a replacement for your expertise
RRC-HTA, AIIMS Bhopal | HTAIn, DHR
AI IS:
- A coding assistant for translating your HTA logic into R
- A tool to speed up model development
- Useful for debugging errors and learning R syntax
- One tool among many in your workflow

AI ISN'T:
- Doing your HTA for you
- A substitute for clinical expertise
- A replacement for validation and peer review
- Able to choose model structure or parameters for you
Mindset: AI is a very fast but occasionally unreliable research assistant who knows R syntax but nothing about your clinical question.
AI excels when:
- You know what to calculate but not how to write it in R
- You have working code and need to modify it
- You get an error and need help understanding it
- You're converting Excel-based logic into R
- You're looking up which function or package does something

AI is NOT reliable for:
- Choosing model structures
- Selecting clinical parameters
- Validating whether your model is correct
- Replacing peer review or expert input
You’ve seen both approaches in this workshop:
- A naive prompt → raw loops with hidden bugs.
- A package-constrained prompt (heemod) → the package handled everything; a validated engine does the heavy lifting.

When using AI, always tell it which package to use.
Always include this sentence in your prompt:
“Use the [package name] package. Do not write manual loops.”
| Model Type | Package |
|---|---|
| Decision Tree | rdecision |
| Markov | heemod |
| PSM | flexsurv + manual wrapper |
| DSA/PSA | Same as base case package |
| CE Visualization | BCEA or dampack |
Step 1: Prepare parameter file in Excel (your single source of truth)
Step 2: Prompt for base case → "I've attached my parameter file. Use heemod."
Step 3: Validate — cross-check against hand calculations
Step 4: Prompt for DSA + PSA → “Base case works. Add sensitivity analysis.”
Step 5: Prompt for Shiny app → “Wrap validated model in interactive dashboard.”
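Step 3's hand-check can be sketched in base R: run a few cycles of a toy Markov trace by matrix multiplication and compare it against what the package reports. The transition probabilities below are purely illustrative, not from any real parameter file.

```r
# Hand-calculation cross-check for a toy 3-state Markov model.
# Illustrative transition probabilities only.
P <- matrix(c(0.85, 0.10, 0.05,   # Stable -> Stable, Progressed, Dead
              0.00, 0.80, 0.20,   # Progressed
              0.00, 0.00, 1.00),  # Dead (absorbing)
            nrow = 3, byrow = TRUE,
            dimnames = list(NULL, c("Stable", "Progressed", "Dead")))
state <- c(1, 0, 0)                # whole cohort starts in Stable
trace <- matrix(NA_real_, nrow = 5, ncol = 3)
for (cycle in 1:5) {
  state <- as.vector(state %*% P)  # one cycle of the chain
  trace[cycle, ] <- state
}
stopifnot(all(abs(rowSums(trace) - 1) < 1e-9))  # cohort is conserved
round(trace, 4)
```

If the package's trace disagrees with this by more than rounding, investigate before moving to Step 4.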
```
I've attached my parameter file (Excel).
Build a Markov cost-effectiveness model using the
heemod package. Do NOT write manual matrix
multiplication. Report ICER with 4-quadrant
interpretation and NMB. Use kable() and ggplot2.
```
The parameter file carries the detail. The prompt just says what to do with it.
```
The base case works. Now add sensitivity analysis
using the same model object.
DSA: use define_dsa() with low/high from my file.
PSA: use define_psa() with distributions from my file.
Report mean incremental NMB, P(cost-effective).
Do NOT average ICERs — use NMB instead.
```
```
I have a validated Markov model built with heemod.
Create a Shiny app with:
- Sidebar: sliders for key parameters + "Run PSA" button
- Tab 1: Base case (table + trace plot), updates reactively
- Tab 2: Tornado diagram, updates reactively
- Tab 3: PSA (CE plane + CEAC), updates on button click
```
Critical questions to ask:
- Did it use the package's engine? If you see `for (cycle in 1:n)` instead of `define_transition()`, push back.
- How does it judge cost-effectiveness? A bare `icer < wtp` comparison misbehaves when incremental QALYs are negative or near zero; compare NMB to zero instead.
- Do the distribution arguments match your source data? AI might plug a mean and standard error straight into a distribution call; the correct approach first converts them into the distribution's native parameters.

Lesson: Always check if distribution parameterization matches your source data.
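The parameterization lesson can be made concrete with a method-of-moments conversion: a published mean utility and standard error must be turned into Beta shape parameters before they can be sampled. The numbers below are illustrative, not from the workshop parameter file.

```r
# Method-of-moments conversion from a published mean and standard error
# to Beta(alpha, beta) shape parameters. Illustrative values only.
mean_u <- 0.78   # mean utility from the source
se_u   <- 0.04   # its standard error
var_u  <- se_u^2
alpha  <- mean_u * (mean_u * (1 - mean_u) / var_u - 1)
beta   <- (1 - mean_u) * (mean_u * (1 - mean_u) / var_u - 1)
# Sanity check: the implied Beta distribution reproduces the input mean
stopifnot(abs(alpha / (alpha + beta) - mean_u) < 1e-9)
# rbeta(n, alpha, beta) now draws PSA samples on the correct scale;
# rbeta(n, mean_u, se_u) would be a silent parameterization bug.
c(alpha = alpha, beta = beta)
```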
AI-generated Markov models often omit half-cycle correction.
If you don’t see this in the code, ask:
“Add half-cycle correction to this Markov model.”
Accounts for the fact that patients spend time both in their current and next state during each cycle.
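The mechanics are simple enough to check by hand: average state membership at the start and end of each cycle before applying costs and QALYs, rather than counting end-of-cycle membership only. A minimal base-R sketch with an illustrative survival trace:

```r
# Half-cycle (life-table) correction on a toy trace of the proportion
# alive, including cycle 0. Illustrative numbers only.
trace <- c(1.00, 0.90, 0.78, 0.65, 0.50)
uncorrected <- sum(trace[-1])                          # end-of-cycle only
corrected   <- sum((head(trace, -1) + tail(trace, -1)) / 2)
c(uncorrected = uncorrected, corrected = corrected)
# The corrected total is larger here because membership declines
# within each cycle; uncorrected sums systematically understate it.
```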
A real bug we found in this workshop:
- S(t) = exp(−(λ × HR) × t^γ) — scales the hazard (proportional hazards)
- S(t) = exp(−λ × (HR × t)^γ) — scales time (accelerated failure time)

Both are valid parameterizations, but they give different results when γ ≠ 1. Always specify "proportional hazards, not AFT."
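The divergence is easy to demonstrate numerically. With illustrative Weibull parameters (λ, γ, HR chosen only to show the effect), the two formulas agree when γ = 1 and drift apart otherwise:

```r
# PH vs AFT application of a hazard ratio to a Weibull survival curve.
# Illustrative parameters; the two forms coincide only when gamma == 1.
lambda <- 0.05; gamma <- 1.5; HR <- 0.7; t <- 5
s_ph  <- exp(-(lambda * HR) * t^gamma)   # hazard scaled (PH)
s_aft <- exp(-lambda * (HR * t)^gamma)   # time scaled (AFT)
c(PH = s_ph, AFT = s_aft)                # materially different here
# With gamma = 1 both collapse to the same exponential curve:
stopifnot(abs(exp(-(lambda * HR) * t) - exp(-lambda * (HR * t))) < 1e-12)
```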
AI-generated PSA code often reports “mean ICER” across iterations.
Problem: A few iterations with near-zero ΔQALYs produce extreme ICERs that distort the average.
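A small simulation shows the problem. The draws below are synthetic, purely to illustrate: a few iterations with incremental QALYs near zero make the mean ICER erratic, while the mean NMB stays stable.

```r
# Mean ICER vs mean NMB across PSA iterations (synthetic draws only).
set.seed(1)
d_cost <- rnorm(1000, mean = 500,  sd = 100)   # incremental costs
d_qaly <- rnorm(1000, mean = 0.05, sd = 0.03)  # some draws near zero
wtp    <- 30000                                # willingness-to-pay per QALY
mean_icer <- mean(d_cost / d_qaly)             # distorted by tiny denominators
mean_nmb  <- mean(wtp * d_qaly - d_cost)       # well-behaved summary
p_ce      <- mean(wtp * d_qaly - d_cost > 0)   # P(cost-effective)
c(mean_nmb = mean_nmb, p_ce = p_ce)
# mean_icer can even change sign between seeds; report NMB and P(CE) instead.
```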
AI sometimes invents R functions or packages that don’t exist.
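A cheap defence is to verify that every package and function in AI-generated code actually exists before running it. A base-R sketch (the helper names `pkg_exists`/`fn_exists` are our own, and `auto_markov` is a deliberately invented example):

```r
# Guard against hallucinated packages/functions in AI-generated code.
pkg_exists <- function(pkg) requireNamespace(pkg, quietly = TRUE)
fn_exists  <- function(fn, pkg) {
  pkg_exists(pkg) && exists(fn, where = asNamespace(pkg), mode = "function")
}
pkg_exists("stats")                 # TRUE: ships with base R
fn_exists("rbeta", "stats")         # TRUE: a real function
fn_exists("auto_markov", "stats")   # FALSE: plausible-sounding but invented
```

Running this check before sourcing AI output catches invented functions immediately instead of mid-analysis.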
| Tool | Strengths | Limitations |
|---|---|---|
| Claude | Good at R, explains reasoning well | Limited free tier |
| ChatGPT | Widely available, fast | Sometimes generic |
| Copilot | Integrated with VS Code | May suggest outdated packages |
| Gemini | Free, multimodal | R knowledge variable |
Recommendation: Use the tool you have access to. The key is prompt quality, not which tool.
Your parameter file + validated package + AI coding = production-ready model
Your expertise determines: model structure, parameter choices, clinical interpretation.
AI helps translate that into working R code — faster, not better.
Think of AI as a very fast but literal-minded colleague: it will do exactly what you ask, so ask precisely.
When you use AI for HTA coding, you are not replacing expertise. You are amplifying it:
The HTA work is still yours. The code is now just faster to write.
You have completed Day 2 of the workshop:
Next: Day 3 — Advanced topics and practice exercises
R for HTA (Basics) — RRC-HTA, AIIMS Bhopal