# Integration Guide
This guide walks through adding auto-pr to any repository so that pushes to ai/** branches automatically create or update pull requests.
Typical setup is GitHub Actions only: run `auto-pr-init`, create a GitHub App, add secrets, then push to an `ai/**` branch. Your repo needs no `package.json` and no install of auto-pr; the reusable workflows pull everything from `knirski/auto-pr`. Optionally, install or `npx -p github:knirski/auto-pr …` to run the CLIs locally (see Step 1, optional).
## Getting started

- Run `npx -p github:knirski/auto-pr auto-pr-init` in your repo — creates the workflow, PR template, `.nvmrc`, and `.github/llama-server/Dockerfile` (llama-server image pin when using local Docker llama)
- Create a GitHub App with Contents and Pull requests (Read and write)
- Generate a private key in the app settings and save the `.pem` file
- Install the app on your repository
- Add `APP_ID` and `APP_PRIVATE_KEY` to Settings → Secrets and variables → Actions
- Test — push to an `ai/**` branch: `git checkout -b ai/test && git commit --allow-empty -m "chore: test" && git push -u origin HEAD`
No package.json required. Works with any project (Node, Python, Rust, etc.). No Nix required.
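To confirm `auto-pr-init` did its job, a quick shell check can list the expected files (the paths are the ones named above; the loop itself is illustrative, not part of auto-pr):

```shell
# Report which of the files auto-pr-init creates are present in the repo root.
for f in .github/workflows/auto-pr.yml \
         .github/PULL_REQUEST_TEMPLATE.md \
         .nvmrc \
         .github/llama-server/Dockerfile; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
```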
## Repository setup checklist

| Requirement | How to set up |
|---|---|
| Workflow + template | Run npx -p github:knirski/auto-pr auto-pr-init in your repo. Creates .github/workflows/auto-pr.yml, .github/PULL_REQUEST_TEMPLATE.md, .nvmrc, and .github/llama-server/Dockerfile. Step 6 |
| GitHub App | Create at github.com/settings/apps/new. Permissions: Contents, Pull requests (Read and write). Step 2 |
| Private key | Generate in the app settings → Private keys. Save the .pem file. Step 3 |
| App installed | Install the app on your repository (Install App → select repo). Step 4 |
| Secrets | Add APP_ID and APP_PRIVATE_KEY to Settings → Secrets and variables → Actions (optional: GH_TOKEN to override the default token for GitHub Models). Step 5 |
| Branch protection | (Optional) Require Auto-PR generate (reusable) / generate and Auto-PR create (reusable) / create before merging. Step 8 |
Quick setup: npx -p github:knirski/auto-pr auto-pr-init → GitHub App + secrets (Steps 2–5) → push to ai/**.
## Overview

- AI agent (or developer) pushes a branch (e.g. `ai/feature-x` or `ai/fix-y`)
- Workflow runs on push to `ai/**` branches (title from the first commit subject; with 2+ commits, AI generates the description)
- GitHub App creates or updates the PR using its token
- PR is opened by `your-app-name[bot]` → you approve it
## Step 1 (optional): Install the package for local CLI

Skip this step unless you run auto-pr CLIs on your machine. The default reusable workflow fetches auto-pr from `knirski/auto-pr` and needs no dependency on auto-pr in your `package.json`.
When you do install from git (e.g. `npx -p github:knirski/auto-pr` or `bun add github:knirski/auto-pr`), the package works under plain Node because `dist/` is pre-built and committed by CI. With Bun, the `prepare` script additionally builds it on install.
JS/TS projects: The generate and create jobs auto-detect your runtime (npm, yarn, pnpm, bun) from packageManager or lockfile. No config needed.
## Step 2: Create the GitHub App

1. Go to github.com/settings/apps/new
2. Fill in:
   - GitHub App name: e.g. `my-repo-auto-pr-bot` (must be unique)
   - Homepage URL: your repo URL
   - Webhook: uncheck Active (not needed)
3. Under Repository permissions:
   - Contents: Read and write
   - Pull requests: Read and write
   - Actions: Read and write (if you use workflows that push)
4. Under “Where can this GitHub App be installed?”: choose Only on this account
5. Click Create GitHub App
## Step 3: Generate and save the private key

1. On the app’s settings page, scroll to Private keys
2. Click Generate a private key
3. Save the `.pem` file securely. You’ll need its contents for a secret.
## Step 4: Install the app on your repo

1. On the app settings page, click Install App
2. Choose Only select repositories and select your repo
3. Click Install
## Step 5: Add repository secrets

1. Go to your repo → Settings → Secrets and variables → Actions
2. Add these repository secrets:

| Secret name | Value |
|---|---|
| APP_ID | Your app’s App ID (from app settings, “About”) |
| APP_PRIVATE_KEY | Full contents of the .pem file |
Optional: GH_TOKEN — use only if you want a specific token for GitHub Models instead of the default GITHUB_TOKEN injected into Actions. The stock auto-pr.yml passes secrets.GH_TOKEN || github.token into the generate workflow (entry workflow must keep models: read).
APP_* are used by the create job (and release-please if you use it).
## Step 6: Add the workflow file

Recommended: Run `npx -p github:knirski/auto-pr auto-pr-init` — creates the workflow, PR template, `.nvmrc`, and `.github/llama-server/Dockerfile` in one command. The reusable generate job runs its shell via composite actions pinned in `knirski/auto-pr`; you do not need `scripts/` in your repository.
Manual: Copy auto-pr.yml to .github/workflows/auto-pr.yml in your repo. The workflow calls two reusable workflows (generate + create) and pins to a commit SHA for reproducible runs; do not change the ref unless you intend to upgrade.
No action copying required. The reusable workflows fetch those composite actions from knirski/auto-pr. A relative ./ path would resolve to your repo; we use full paths so you do not need anything under .github/actions/ in your project.
All inputs use sensible defaults for the AI model. The PR template path is always .github/PULL_REQUEST_TEMPLATE.md at the repo root. Edit the How to test section in that file directly for project-specific steps (for example npm run check or pytest). Override other options via with: when needed.
Run checks first: See Running checks before PR creation to add a check job before generate/create.
## Step 7: Add the PR template

`npx -p github:knirski/auto-pr auto-pr-init` creates this automatically. Otherwise, copy `.github/PULL_REQUEST_TEMPLATE.md` to your repo. Customize placeholders if needed.
## Step 8: Configure branch protection (optional)

To require the auto-pr workflow and your CI to pass before merging PRs into main:

1. Go to Settings → Branches → Add rule (or edit the rule for `main`)
2. Set Branch name pattern to `main` (or your default branch)
3. Enable Require status checks to pass before merging
4. Search for and add:
   - `Auto-PR generate (reusable) / generate` — content generation (checkout + template fill)
   - `Auto-PR create (reusable) / create` — PR creation/update
   - Your CI job(s), e.g. `check / check` or `test` — if you have workflows that run on `pull_request`
5. Optionally enable Require branches to be up to date before merging (strict mode)
6. Save the rule
Note: Status checks must have run successfully in the past 7 days to appear in the list. Push an ai/** branch and open a PR first if Auto-PR generate (reusable) / generate is missing.
See Managing a branch protection rule and Troubleshooting required status checks.
## Step 9: Use the right branch names

When creating changes, use branch names that match the workflow:

- `ai/feature-name`
- `ai/fix-bug-description`

Or adjust the `branches` filter in the workflow.
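If your agents push under a different prefix, the trigger filter in `.github/workflows/auto-pr.yml` can be widened. A sketch (the `bot/**` pattern is an invented example; match it to your own conventions):

```yaml
on:
  push:
    branches:
      - "ai/**"
      - "bot/**" # hypothetical extra prefix; adjust to your agents' branch naming
```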
## Running checks before PR creation

To run your tests or checks before PR creation, add a `check` job and make `generate` depend on it. Edit the `check` job for your stack.

Pattern: Add a job before `generate` and set `needs: check` on the `generate` job:
```yaml
jobs:
  check:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          ref: ${{ github.ref_name }}
          fetch-depth: 0
      # Add your stack's setup and run command below
      - name: Check
        run: echo "Add your check command (npm run check, pytest, cargo test, etc.)" && exit 1

  generate:
    needs: check
    uses: knirski/auto-pr/.github/workflows/auto-pr-generate-reusable.yml@<SHA>

  create:
    needs: generate
    uses: knirski/auto-pr/.github/workflows/auto-pr-create-reusable.yml@<SHA>
    secrets: inherit
```

Node/npm example:
```yaml
check:
  runs-on: ubuntu-24.04
  steps:
    - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      with:
        ref: ${{ github.ref_name }}
        fetch-depth: 0
    - uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
      with:
        node-version-file: ".nvmrc"
        cache: "npm"
    - run: npm ci
    - run: npm run check
```

Bun/pnpm/yarn: Use `oven-sh/setup-bun`, `pnpm/action-setup` + `actions/setup-node`, or `actions/setup-node` with `cache: "yarn"` respectively. The generate and create jobs auto-detect your runtime; your check job should match.
Python example:
```yaml
check:
  runs-on: ubuntu-24.04
  steps:
    - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      with:
        ref: ${{ github.ref_name }}
        fetch-depth: 0
    - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
      with:
        python-version: "3.12"
    - run: pip install -e ".[dev]"
    - run: pytest
```

Adjust the install step for your project (e.g. `pip install -r requirements.txt`, `uv sync`).
Rust example:
```yaml
check:
  runs-on: ubuntu-24.04
  steps:
    - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      with:
        ref: ${{ github.ref_name }}
        fetch-depth: 0
    - run: cargo test
```

Replace `<SHA>` with the SHA from the `uses:` lines in auto-pr.yml.
## Common customizations

| I want to… | Set |
|---|---|
| Use my project’s check command in “How to test” | Edit the How to test section in .github/PULL_REQUEST_TEMPLATE.md |
| Use a different GitHub Models id | ai_openai_compat_model (e.g. openai/gpt-4.1) |
| Point local at another host or gateway | ai_openai_compat_url, ai_openai_compat_model, and optionally ai_openai_compat_api_key |
| Run local on GitHub-hosted runners with llama.cpp | ai_provider: local, leave ai_openai_compat_url empty, set ai_llamacpp_model_url (HTTPS link to a .gguf file). Optional: ai_llamacpp_release_tag (Docker image override), ai_llamacpp_port. The workflow uses .github/llama-server/Dockerfile for the image pin, caches the GGUF and Docker image tar, and runs llama-server in Docker. |
| Run checks before PR creation | Add a check job; set needs: check on generate (see Running checks before PR creation) |
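Combining the rows above, a `with:` block for running local llama on a GitHub-hosted runner might look like the sketch below. The model URL is a placeholder (point it at a real `.gguf`), and `<SHA>` stands for the pinned commit from your auto-pr.yml:

```yaml
generate:
  uses: knirski/auto-pr/.github/workflows/auto-pr-generate-reusable.yml@<SHA>
  with:
    ai_provider: local
    # Leave ai_openai_compat_url unset so the workflow starts llama-server in Docker.
    ai_llamacpp_model_url: "https://example.com/path/to/model.gguf" # placeholder URL
    ai_llamacpp_port: 8080 # optional; 8080 is the documented default
```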
## Local llama Dockerfile pin

- Purpose: Pins the `llama-server` Docker image when the reusable workflow runs llama in Docker (`ai_llamacpp_model_url` set, `ai_openai_compat_url` empty).
- Tag vs digest: The template uses a tag on `FROM` (e.g. `ghcr.io/ggml-org/llama.cpp:server`) so Dependabot can propose image updates. For a stricter immutable pin, use `image@sha256:…` on the same line (Dependabot behavior may differ from tag-only pins).
- Parser limits (workflow + tests): Only the first `FROM` is used, after optional `--` flags (e.g. `--platform=…`). There is no support for backslash line continuation or for picking a later stage in a multi-stage file — keep this file single-stage, or ensure the image you need appears on the first `FROM`.
- Docker llama composite actions: Start/stop are `llama-server-docker-start` and `llama-server-docker-stop` (pinned in the reusable workflow). The start script uses `docker cp` to place the GGUF in the container (not a bind mount), so nested Docker (e.g. act) does not depend on host path alignment for `-v`. The default container name is `auto-pr-llama`; pass `container_name` when several jobs share one Docker host (nektos/act runs parallel jobs on a single machine). Assume one `llama-server` container per job unless you use distinct `container_name` and host port values. If you copy these composite actions into a custom workflow and need two local servers in the same job, use different `container_name`/`llama_port` inputs or run them in separate jobs.
- Runner cache layout: The start action (`llama-server-docker-start`) takes `llama_server_root`. Under that directory it stores `model/model.gguf` and `docker/llama-server-image.tar` for `actions/cache`. This repo’s integration workflow uses `${{ github.workspace }}/.cache/auto-pr-llama-stub/…-model` so paths stay under the checkout (nested `docker -v` from act matches the host). Each integration job picks an ephemeral TCP port on the runner via an inline `python3` one-liner in integration.yml (`bind(127.0.0.1, 0)`; Python is preinstalled on GitHub-hosted Ubuntu, but not inside nested containers). The generate reusable workflow still uses `${{ runner.temp }}/auto-pr-llama` for hosted runs.
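The parser rule above (first `FROM`, leading `--` flags skipped) can be illustrated with a small shell sketch. This is an illustration of the documented behavior, not the workflow's actual parsing code:

```shell
# Write a sample single-stage Dockerfile with a --platform flag before the image.
cat > /tmp/llama-dockerfile <<'EOF'
FROM --platform=linux/amd64 ghcr.io/ggml-org/llama.cpp:server
EOF
# Take the first FROM line; skip any tokens starting with "--"; print the image ref.
image=$(awk 'toupper($1)=="FROM" { for (i=2; i<=NF; i++) if ($i !~ /^--/) { print $i; exit } }' /tmp/llama-dockerfile)
echo "$image" # → ghcr.io/ggml-org/llama.cpp:server
```

Note what this sketch shares with the documented limits: a backslash continuation or a second build stage would not be seen, because only the first `FROM` line is consulted.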
## AI providers (local, github-models)

For branches with 2+ commits, auto-pr generates the PR description via an AI backend. Choose a provider with `ai_provider` on the generate reusable workflow (it maps to `AUTO_PR_AI_PROVIDER`), or set env when running locally.
How it calls the model: The generate step uses LanguageModel.generateText with a prompt that asks for JSON (title, motivation, benefits, risks, notesForReviewers). It parses the assistant reply and validates with Effect Schema — not OpenAI generateObject / json_schema, because GitHub Models does not support that response format and other OpenAI-compatible servers are inconsistent with it. On repeated parse or transient HTTP failures (network, rate limit, 5xx), auto-pr falls back to commit-derived title and description. Authentication errors (HTTP 401/403) surface directly as a configuration error rather than silently falling back — check your GH_TOKEN or AUTO_PR_AI_OPENAI_COMPAT_API_KEY if you see an auth error in the generate step.
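For intuition, a toy snippet can check a reply for the fields named above. The field list comes from this guide's prose; the real validation in auto-pr is Effect Schema, not this grep:

```shell
# Toy check: does a sample model reply contain every field the prompt asks for?
reply='{"title":"feat: add X","motivation":"why","benefits":"what","risks":"low","notesForReviewers":"none"}'
ok=1
for key in title motivation benefits risks notesForReviewers; do
  printf '%s' "$reply" | grep -q "\"$key\"" || ok=0
done
if [ "$ok" -eq 1 ]; then
  echo "reply has all fields: AI description used"
else
  echo "missing fields: fall back to commit-derived description"
fi
```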
## local (OpenAI-compatible HTTP)

Any OpenAI-compatible endpoint (llama.cpp `llama-server`, remote gateways, etc.) using the same env names as in `src/auto-pr/config.ts`: `AUTO_PR_AI_OPENAI_COMPAT_URL`, optional `AUTO_PR_AI_OPENAI_COMPAT_API_KEY`, and `AUTO_PR_AI_OPENAI_COMPAT_MODEL`.

- Workflow: `ai_provider: local` and set `ai_openai_compat_url`, `ai_openai_compat_model`, and optionally `ai_openai_compat_api_key` if your server requires a key — or omit `ai_openai_compat_url` and set `ai_llamacpp_model_url` to an HTTPS `.gguf` URL so the reusable workflow uses `.github/llama-server/Dockerfile` for the image pin, caches the GGUF and image tar, and starts `llama-server` in Docker on `127.0.0.1` (port from `ai_llamacpp_port`, default `8080`).
- CI: Prefer `github-models` when you do not want to host a model on the runner. For local on GitHub-hosted runners, either use `ai_llamacpp_model_url` (Docker + Dockerfile pin + cache), run inference on a self-hosted runner, or expose your server via a tunnel and set `ai_openai_compat_url` accordingly.
- Local dev: Defaults target `http://127.0.0.1:8080/v1` and model `gpt-oss` (override via env).
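For local dev, the env names above can be exported before running the CLIs. A sketch (the URL and model shown are the documented defaults, exported explicitly for clarity):

```shell
# Point auto-pr at a local OpenAI-compatible server.
export AUTO_PR_AI_PROVIDER=local
export AUTO_PR_AI_OPENAI_COMPAT_URL="http://127.0.0.1:8080/v1"
export AUTO_PR_AI_OPENAI_COMPAT_MODEL="gpt-oss"
# Confirm what auto-pr will see:
env | grep '^AUTO_PR_AI_' | sort
```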
## github-models

Uses the GitHub Models inference API (https://models.github.ai/inference) with an OpenAI-compatible client.

- Token: Optional repository secret `GH_TOKEN`; if unset, the entry workflow passes the default Actions `github.token` (`secrets.GH_TOKEN || github.token`). See Step 5. The reusable workflow forwards it to generate when `ai_provider` is `github-models`.
- Workflow: Default is `ai_provider: github-models` with `ai_openai_compat_model` (e.g. `openai/gpt-4.1`).
- Env (local / scripts): `AUTO_PR_AI_PROVIDER=github-models`, `AUTO_PR_AI_OPENAI_COMPAT_MODEL=...`, `GH_TOKEN=...`.
- Legal model ids: The catalog is published as JSON — see REST: List all models. Fetch and read each entry’s `id` (format `publisher/model`):

  ```shell
  curl -sL https://models.github.ai/catalog/models
  ```

  To list ids only:

  ```shell
  curl -sL https://models.github.ai/catalog/models | jq -r '.[].id' | sort
  ```

  The catalog includes embedding-only models; for PR text generation, pick an entry whose `supported_output_modalities` includes `text` (or use a known chat model id such as `openai/gpt-4.1`).
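As a local illustration of that filter, here is a sketch against a tiny invented catalog sample. The two entries below are made up for demonstration; fetch the real list with the curl command above:

```shell
# Invented two-entry sample mimicking the catalog's id/supported_output_modalities fields.
cat > /tmp/models-sample.json <<'EOF'
[
  {"id": "openai/gpt-4.1", "supported_output_modalities": ["text"]},
  {"id": "example/embedding-only-model", "supported_output_modalities": ["embeddings"]}
]
EOF
# Keep only entries that can emit text (python3 is preinstalled on GitHub-hosted Ubuntu).
python3 - <<'EOF'
import json
for m in json.load(open("/tmp/models-sample.json")):
    if "text" in m.get("supported_output_modalities", []):
        print(m["id"])
EOF
```

Against this sample, only `openai/gpt-4.1` survives the filter; run the same filter on the real catalog to list every chat-capable model.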
See TROUBLESHOOTING.md for common failures.
## Verification

1. Create and push a branch:

   ```shell
   git checkout -b ai/test-setup
   git commit --allow-empty -m "chore: test auto-PR workflow"
   git push origin ai/test-setup
   ```

2. Check Actions in your repo — the workflow should run
3. A new PR should appear, opened by `your-app-name[bot]`
## Environment variables reference

| Command | Required | Optional |
|---|---|---|
| auto-pr-generate-content | DEFAULT_BRANCH, BRANCH, GITHUB_WORKSPACE | AUTO_PR_AI_PROVIDER (optional; default local), AUTO_PR_AI_OPENAI_COMPAT_* (model for both providers; URL/key for local), GH_TOKEN (github-models). Fetches commits, files, and diff stat directly from git via GitContext. Writes pr-title.txt and pr-body.md. PR template: {GITHUB_WORKSPACE}/.github/PULL_REQUEST_TEMPLATE.md — edit How to test in that file for project-specific copy. |
| auto-pr-create-or-update-pr | GH_TOKEN, BRANCH, DEFAULT_BRANCH, GITHUB_WORKSPACE | — (reads {GITHUB_WORKSPACE}/pr-title.txt and pr-body.md) |
Override AI-related defaults via workflow with: inputs when needed.
## Contributors: knirski/auto-pr integration tests

If you work on this repository (not only consuming the workflow), `bun run test:integration` uses the committed `.env.ci` and an optional gitignored `.env.local` for local overrides. Variable names and behavior are documented in CI.md and CONTRIBUTING.md.
## Troubleshooting

| Issue | Fix |
|---|---|
| Workflow doesn’t run | Ensure branch name matches ai/**; workflow runs on forks too (add secrets to enable) |
| “workflow was not found” / “failed to fetch workflow” | The pinned SHA may not exist. Run npx -p github:knirski/auto-pr auto-pr-init to get the latest workflow, or copy auto-pr.yml from main. Contributors: when testing on a branch, update all @SHA refs to the current commit (git rev-parse HEAD). See TROUBLESHOOTING.md. |
| “Missing [path]” (PR template) | Run npx -p github:knirski/auto-pr auto-pr-init or copy the template to the path shown. See TROUBLESHOOTING.md |
| “node-version-file” error | Ensure .nvmrc exists (run npx -p github:knirski/auto-pr auto-pr-init). Use node-version-file: ".nvmrc" for single source of truth. |
| Check job fails | Ensure your check command exists (e.g. npm run check, pytest, cargo test). See Running checks before PR creation |
| “Resource not accessible” | Check app permissions (Contents, Pull requests, Actions: Read and write) |
| “Secret not found” | Verify APP_ID and APP_PRIVATE_KEY in repo secrets |
| PR already exists | Workflow updates the PR title and body from the latest commits |
| AI provider returns invalid description | Retries 3×; description override may be empty on failure |