
IntelligenceX Reviewer Overview

Learn how the IntelligenceX reviewer runs in GitHub Actions, supports ChatGPT or Copilot, posts structured PR feedback, and keeps setup under your control.

Reviewer Overview

The reviewer runs in GitHub Actions and posts a structured review comment on PRs. Azure DevOps is supported in summary-only mode, which uses the PR-level changes endpoint (cumulative diff). It can use:

  • ChatGPT (native transport) with a ChatGPT login bundle.
  • Copilot (via the Copilot CLI) for teams already using GitHub Copilot.
  • Copilot direct HTTP transport (experimental) for custom gateways.

Setup options:

  • CLI wizard: intelligencex setup wizard
  • Local web UI (preview): intelligencex setup web

Runtime Flow

flowchart LR
  classDef trigger fill:#BAE6FD,stroke:#0369A1,color:#082F49,stroke-width:2px;
  classDef engine fill:#DDD6FE,stroke:#5B21B6,color:#2E1065,stroke-width:2px;
  classDef provider fill:#FDE68A,stroke:#B45309,color:#451A03,stroke-width:2px;
  classDef output fill:#A7F3D0,stroke:#047857,color:#052E2B,stroke-width:2px;

  A["Pull request event"] --> B["GitHub Actions job"]
  B --> C["IntelligenceX review pipeline"]
  C --> D["Context builder<br/>diff selection, chunking, redaction"]
  D --> E["Provider call<br/>OpenAI or Copilot"]
  E --> F["Findings parser and formatter"]
  F --> G["PR summary comment"]
  F --> H["Inline comments (if enabled)"]
  C --> I["Thread triage and optional auto-resolve"]

  class A,B trigger;
  class C,D,F,I engine;
  class E provider;
  class G,H output;

Trust model (short version)

  • BYO GitHub App is supported for branded bot identity.
  • Secrets are stored in GitHub Actions (you control access).
  • Web UI binds to localhost only; tokens never leave your machine.

Engine Scope

  • Review pipeline: resolve inputs, build context, assemble prompt, call provider, parse inline comments, post summary/inline output.
  • Providers and transports: OpenAI (native/appserver), OpenAI-compatible HTTP endpoints (Ollama/OpenRouter/etc.), and Copilot (CLI/direct).
  • Context builder: diff-range selection, file filtering, chunking, redaction, language hints, related PRs.
  • Formatter/output: summary templates, inline comment formatting, structured findings block.
  • Thread triage/auto-resolve: load threads, require evidence, summarize/append optional replies.

Success Metrics

  • Review latency (p50/p95) from job start to posted comment.
  • Failure rate for review runs (auth, preflight, provider errors).
  • Reviewer usefulness score (maintainer feedback or “kept” findings rate).
  • Inline quality (false-positive rate based on fixes/confirmations).

Default Mode + Model Policy

  • Default review mode: hybrid (summary + inline when supported; falls back to summary-only).
  • Default provider/model: OpenAI with gpt-5.3-codex unless configured otherwise; Copilot is opt-in.
  • Safe defaults: skip drafts; skip workflow changes unless allowed; no secrets/writes on untrusted PRs; fail-open only for transient errors; budget summary enabled; auto-resolve limited to bot threads with evidence; secrets audit on.
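These defaults map onto the same reviewer.json keys used in the minimal-config example later on this page. A sketch of explicitly pinning the defaults (values shown are the documented defaults, not new behavior):

```json
{
  "review": {
    "provider": "openai",
    "openaiTransport": "native",
    "model": "gpt-5.3-codex",
    "mode": "hybrid",
    "length": "medium"
  }
}
```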

Reusable workflow (quick start)

jobs:
  review:
    uses: EvotecIT/IntelligenceX/.github/workflows/review-intelligencex-reusable.yml@<pinned-sha>
    with:
      reviewer_source: source
      openai_transport: native
      review_config_path: .intelligencex/reviewer.json
      mode: hybrid
      length: medium
    secrets:
      INTELLIGENCEX_AUTH_B64: ${{ secrets.INTELLIGENCEX_AUTH_B64 }}
      INTELLIGENCEX_GITHUB_APP_ID: ${{ secrets.INTELLIGENCEX_GITHUB_APP_ID }}
      INTELLIGENCEX_GITHUB_APP_PRIVATE_KEY: ${{ secrets.INTELLIGENCEX_GITHUB_APP_PRIVATE_KEY }}

The `<pinned-sha>` placeholder after @ stands for a pinned workflow commit SHA. Keep the reference pinned for security, and update it deliberately when you upgrade.

reviewer_source: source is best when you want the latest workflow/reviewer behavior from source. Use reviewer_source: release when you prefer a packaged release artifact for tighter version control. Use style (review tone/style profile) and output_style (rendering preset) as optional inputs when needed.
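A release-pinned variant might look like the sketch below. The input names come from the mapping table in this document; the tag, style, and output_style values are illustrative placeholders, not documented defaults:

```yaml
jobs:
  review:
    uses: EvotecIT/IntelligenceX/.github/workflows/review-intelligencex-reusable.yml@<pinned-sha>
    with:
      reviewer_source: release
      reviewer_release_tag: v1.2.3   # placeholder tag; pin to a real release
      style: concise                 # optional tone/style profile (example value)
      output_style: default          # optional rendering preset (example value)
    secrets:
      INTELLIGENCEX_AUTH_B64: ${{ secrets.INTELLIGENCEX_AUTH_B64 }}
```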

Inputs → environment mapping (short)

The reusable workflow maps with: inputs to environment variables the reviewer reads.

Workflow input          → Environment variable
repo                    → INPUT_REPO
pr_number               → INPUT_PR_NUMBER
reviewer_token          → INTELLIGENCEX_GITHUB_TOKEN
reviewer_source         → REVIEWER_SOURCE
reviewer_release_repo   → REVIEWER_RELEASE_REPO
reviewer_release_tag    → REVIEWER_RELEASE_TAG
reviewer_release_asset  → REVIEWER_RELEASE_ASSET
reviewer_release_url    → REVIEWER_RELEASE_URL

Minimal config (native ChatGPT)

{
  "review": {
    "provider": "openai",
    "openaiTransport": "native",
    "model": "gpt-5.3-codex",
    "mode": "inline",
    "length": "long",
    "reviewUsageSummary": true
  }
}
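The engine also supports OpenAI-compatible HTTP endpoints (Ollama, OpenRouter, and similar). A sketch of what that configuration could look like; the `openaiTransport` value and the base-URL key name here are hypothetical and should be checked against the Configuration reference:

```json
{
  "review": {
    "provider": "openai",
    "openaiTransport": "http",
    "openaiBaseUrl": "http://localhost:11434/v1",
    "model": "llama3.1",
    "mode": "hybrid"
  }
}
```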

Quick flow (end-to-end)

# 1) Auth login (stores tokens locally)
intelligencex auth login

# 2) Setup reviewer (creates PR)
intelligencex setup wizard
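For the GitHub Actions secret, the login bundle must be stored as single-line base64 (the INTELLIGENCEX_AUTH_B64 secret shown in the quick start). A sketch of the encoding step; the bundle path and content below are stand-ins for wherever `intelligencex auth login` stored your real bundle:

```shell
# Stand-in bundle file; replace with your real auth bundle path.
AUTH_FILE="auth-bundle.json"
printf '%s' '{"token":"example"}' > "$AUTH_FILE"

# Encode without line wrapping so the value is safe to store as a secret.
AUTH_B64=$(base64 < "$AUTH_FILE" | tr -d '\n')
echo "$AUTH_B64"

# Then store it, e.g.: gh secret set INTELLIGENCEX_AUTH_B64 --body "$AUTH_B64"
```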

What to configure next

  • Model/provider + output style
  • Review length and strictness
  • Auto-resolve/triage behavior for bot threads
  • Triage-only mode (skip full review, only triage threads)
  • Usage summary line (optional)
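Most of these settings live in the same reviewer.json file shown earlier. A sketch combining the documented `mode` and `length` keys with triage options; the triage key names here are illustrative, not confirmed:

```json
{
  "review": {
    "mode": "hybrid",
    "length": "medium",
    "autoResolveBotThreads": true,
    "triageOnly": false
  }
}
```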

How to interpret the review comment

By default:

  • Todo List ✅ and Critical Issues ⚠️ are treated as merge blockers.
  • Other Issues 🧯 are suggestions and should not block merges by default.
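Because the blocker sections have stable names, a follow-up CI step could gate merges on them. A minimal sketch; COMMENT is an inline example, whereas in CI you would fetch the actual PR comment body first:

```shell
# Example review comment body (in practice, fetched from the PR).
COMMENT='## Summary
Todo List ✅
- [ ] Add null check in parser
Other Issues 🧯
- Consider renaming helper'

# Treat the presence of either blocker section as merge-blocking.
if printf '%s' "$COMMENT" | grep -qE 'Todo List|Critical Issues'; then
  BLOCKED=yes
else
  BLOCKED=no
fi
echo "$BLOCKED"
```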

Usage and credits line

Enable reviewUsageSummary to append a limits/credits line (ChatGPT native transport only); see Configuration. When a code-review rate-limit window is present, its label is explicitly prefixed with "code review" (for example, "code review weekly limit") so it is distinct from general limits.