Compare commits
No commits in common. "d95fa2c8e5362d31ad844c9ec9dffc3e4113ddca" and "01c4ae8f2b45f7a34c50bd37c060f19f78019226" have entirely different histories.
17 changed files with 2272 additions and 1829 deletions
498 README.md Normal file

@@ -0,0 +1,498 @@
# Troutbot

Troutbot is the final solution to protecting the trout population. It's
environmental protection incarnate!

Well, in reality, it's a GitHub webhook bot that analyzes issues and pull
requests using real signals such as CI check results, diff quality, and body
structure, and then posts trout-themed comments about the findings. Now you
know whether your changes hurt or help the trout population.

## Quick Start

```bash
# Install dependencies
npm install

# Populate the environment config
cp .env.example .env

# Set up the application config
cp config.example.ts config.ts

# Edit .env and config.ts, then start:
npm run build && npm start
```

## How It Works

Troutbot has three analysis backends, run against each incoming webhook event.
They are the primary decision-making logic behind whether your changes affect
the trout population negatively or positively.

### `checks`

Queries the GitHub Checks API for the PR's head commit. Looks at check run
conclusions (ESLint, Clippy, Jest, cargo test, GitHub Actions, etc.) and scores
based on the pass/fail ratio. Any CI failure is a negative signal. Requires a
`GITHUB_TOKEN`.
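
The pass/fail scoring idea can be sketched roughly like this (an illustrative TypeScript sketch; `scoreChecks` and its types are hypothetical names, not the bot's actual code):

```typescript
type Impact = "positive" | "negative" | "neutral";

interface CheckSignal {
  impact: Impact;
  confidence: number; // 0..1
}

// Score a commit from its check-run conclusions: any failure is negative,
// all-passing is positive, and no finished checks yields zero confidence.
function scoreChecks(conclusions: Array<"success" | "failure" | "neutral">): CheckSignal {
  const finished = conclusions.filter((c) => c !== "neutral");
  if (finished.length === 0) return { impact: "neutral", confidence: 0 };

  const failures = finished.filter((c) => c === "failure").length;
  const passRatio = (finished.length - failures) / finished.length;

  return {
    impact: failures > 0 ? "negative" : "positive",
    confidence: failures > 0 ? 1 - passRatio : passRatio,
  };
}
```

A PR whose checks have not finished yet reports zero confidence, which is why the engine can exclude it from the combined verdict (see "Combining Results" below).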

### `diff`

Fetches the PR's changed files via the GitHub API. Evaluates:

- **Size**: Small PRs (< 200 lines) are positive; large PRs (above `maxChanges`)
  are negative
- **Focus**: Few files changed is positive; 30+ files is negative
- **Tests**: Presence of test file changes is positive; absence when
  `requireTests` is set is negative
- **Net deletion**: Removing more code than you add is positive. Less code is
  more good.

Requires a `GITHUB_TOKEN`.

### `quality`

Pure text analysis of the issue/PR description. No API calls needed. Checks for:

- **Issues**: Adequate description length, code blocks, reproduction steps,
  expected/actual behavior sections, environment info
- **PRs**: Description length, linked issues (`Fixes #123`), test plan sections,
  code blocks
- **Both**: Markdown structure/headers, references to other issues, screenshots

Empty or minimal descriptions are flagged as negative.
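
Heuristics of this kind are simple pattern checks over the body text. A toy sketch (illustrative only; `scoreBody` and its weighting are hypothetical, not the backend's real scoring):

```typescript
// Toy heuristics over an issue/PR body: each detected signal nudges the
// score up; an empty or very short body is a strong negative.
function scoreBody(body: string, minBodyLength = 50): number {
  const trimmed = body.trim();
  if (trimmed.length < minBodyLength) return -1;

  let score = 0;
  if (/```/.test(trimmed)) score += 1;                             // code blocks
  if (/^#{1,6}\s/m.test(trimmed)) score += 1;                      // markdown headers
  if (/(fixes|closes|resolves)\s+#\d+/i.test(trimmed)) score += 1; // linked issues
  if (/!\[.*\]\(.*\)/.test(trimmed)) score += 1;                   // screenshots/images
  return score;
}
```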

### Combining Results

Each backend returns an impact (`positive` / `negative` / `neutral`) and a
confidence score. The engine combines them using configurable weights (default:
checks 0.4, diff 0.3, quality 0.3). Backends that return zero confidence (e.g.,
no CI checks found yet) are excluded from the average. If combined confidence
falls below `confidenceThreshold`, the result is forced to neutral.
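
The combination step described above can be sketched as a weighted signed average (a simplified illustration of the scheme, not the engine's actual code; `combine` and its types are hypothetical):

```typescript
type Impact = "positive" | "negative" | "neutral";

interface BackendResult {
  impact: Impact;
  confidence: number; // 0..1
}

// Combine backend results with per-backend weights. Backends reporting zero
// confidence are dropped; a low combined confidence forces a neutral verdict.
function combine(
  results: Record<string, BackendResult>,
  weights: Record<string, number>,
  confidenceThreshold: number,
): BackendResult {
  const active = Object.entries(results).filter(([, r]) => r.confidence > 0);
  if (active.length === 0) return { impact: "neutral", confidence: 0 };

  let score = 0; // signed sum: positive pulls up, negative pulls down
  let totalWeight = 0;
  for (const [name, r] of active) {
    const w = weights[name] ?? 0;
    const sign = r.impact === "positive" ? 1 : r.impact === "negative" ? -1 : 0;
    score += sign * r.confidence * w;
    totalWeight += w;
  }

  const confidence = Math.abs(score) / totalWeight;
  if (confidence < confidenceThreshold) return { impact: "neutral", confidence };
  return { impact: score > 0 ? "positive" : "negative", confidence };
}
```

With the default weights, one confident CI failure outweighs a moderately good description, which matches the intent that any CI failure is a strong negative signal.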

## GitHub Account & Token Setup

Troutbot is designed to run as a dedicated bot account on GitHub. Create a
separate GitHub account for the bot (e.g., `troutbot`) so that comments are
clearly attributed to it rather than to a personal account.

### 1. Create the bot account

Sign up for a new GitHub account at <https://github.com/signup>. Use a dedicated
email address for the bot. Give it a recognizable username and avatar.

### 2. Grant repository access

The bot account needs access to every repository it will comment on:

- **For organization repos**: Invite the bot account as a collaborator with
  **Write** access, or add it to a team with write permissions.
- **For personal repos**: Add the bot account as a collaborator under
  **Settings > Collaborators**.

The bot needs write access to post comments. Read access alone is not enough.

### 3. Generate a Personal Access Token

Log in as the bot account and create a fine-grained PAT:

1. Go to **Settings > Developer settings > Personal access tokens > Fine-grained
   tokens**
2. Click **Generate new token**
3. Set a descriptive name (e.g., `troutbot-webhook`)
4. Set **Expiration** - pick a long-lived duration or no expiration, since this
   runs unattended
5. Under **Repository access**, select the specific repositories the bot will
   operate on (or **All repositories** if it should cover everything the account
   can see)
6. Under **Permissions > Repository permissions**, grant:
   - **Checks**: Read (for the `checks` backend to query CI results)
   - **Contents**: Read (for the `diff` backend to fetch changed files)
   - **Issues**: Read and Write (to read issue bodies and post comments)
   - **Pull requests**: Read and Write (to read PR bodies and post comments)
   - **Metadata**: Read (required by all fine-grained tokens)
7. Click **Generate token** and copy the value

Set this as the `GITHUB_TOKEN` environment variable.

> **Classic tokens**: If you prefer a classic PAT instead, create one with the
> `repo` scope. Fine-grained tokens are recommended because they follow the
> principle of least privilege.

### 4. Generate a webhook secret

Generate a random secret to verify webhook payloads:

```bash
openssl rand -hex 32
```

Set this as the `WEBHOOK_SECRET` environment variable, and use the same value
when configuring the webhook in GitHub (see
[GitHub Webhook Setup](#github-webhook-setup)).
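
GitHub signs each delivery with this secret and sends the result as the `X-Hub-Signature-256` header (`sha256=` followed by a hex HMAC of the raw body). The check can be sketched with Node's `crypto` module (a generic illustration of the scheme, not troutbot's actual code):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compare GitHub's X-Hub-Signature-256 header against an HMAC of the raw
// request body, using a constant-time comparison to avoid timing attacks.
function verifySignature(secret: string, rawBody: Buffer, header: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note that verification must run over the raw body bytes as received; re-serializing parsed JSON can change the byte sequence and break the comparison.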

## Configuration

### Environment Variables

<!--markdownlint-disable MD013 -->

| Variable         | Description                                           | Required                     |
| ---------------- | ----------------------------------------------------- | ---------------------------- |
| `GITHUB_TOKEN`   | Fine-grained PAT from the bot account (see above)     | No (dry-run without it)      |
| `WEBHOOK_SECRET` | Secret for verifying webhook signatures               | No (skips verification)      |
| `PORT`           | Server port (overrides `server.port` in config)       | No                           |
| `CONFIG_PATH`    | Path to config file                                   | No (defaults to `config.ts`) |
| `LOG_LEVEL`      | Log level override (`debug`, `info`, `warn`, `error`) | No                           |

<!--markdownlint-enable MD013 -->

### Config File

Copy `config.example.ts` to `config.ts`. The config is a TypeScript module that
default-exports a `Config` object - full type checking and autocompletion in
your editor.

```typescript
import type { Config } from "./src/types";

const config: Config = {
  server: { port: 3000 },
  engine: {
    backends: {
      checks: { enabled: true },
      diff: { enabled: true, maxChanges: 1000, requireTests: false },
      quality: { enabled: true, minBodyLength: 50 },
    },
    weights: { checks: 0.4, diff: 0.3, quality: 0.3 },
    confidenceThreshold: 0.1,
  },
  // ...
};

export default config;
```

The config is loaded at runtime via [jiti](https://github.com/unjs/jiti) - no
pre-compilation needed.

See `config.example.ts` for the full annotated reference.

## GitHub Webhook Setup

1. Go to your repository's **Settings > Webhooks > Add webhook**
2. **Payload URL**: `https://your-host/webhook`
3. **Content type**: `application/json`
4. **Secret**: Must match your `WEBHOOK_SECRET` env var
5. **Events**: Select **Issues**, **Pull requests**, and optionally **Check
   suites** (for re-analysis when CI finishes)

If you enable **Check suites** and set `response.allowUpdates: true` in your
config, troutbot will update its comment on a PR once CI results are available.

## Production Configuration

When deploying troutbot to production, keep the following in mind:

- **`WEBHOOK_SECRET` is strongly recommended.** Without it, anyone who can reach
  the `/webhook` endpoint can trigger analysis and post comments. Always set a
  secret and configure the same value in your GitHub webhook settings.
- **Use a reverse proxy with TLS.** GitHub sends webhook payloads over HTTPS.
  Put nginx, Caddy, or a cloud load balancer in front of troutbot and terminate
  TLS there.
- **Set `NODE_ENV=production`.** This is set automatically in the Docker image.
  For standalone deployments, export it in your environment. Express uses this
  to enable performance optimizations.
- **Rate limiting** is enabled by default at 120 requests/minute on the
  `/webhook` endpoint. Override via `server.rateLimit` in your config file.
- **Request body size** is capped at 1 MB. GitHub webhook payloads are well
  under this limit.
- **Graceful shutdown** is built in. The server handles `SIGTERM` and `SIGINT`,
  stops accepting new connections, and waits up to 10 seconds for in-flight
  requests to finish before exiting.
- **Dashboard access control.** The `/dashboard` and `/api/*` endpoints have no
  built-in authentication. Restrict access via reverse proxy rules, firewall, or
  binding to localhost. See [Securing the Dashboard](#securing-the-dashboard).

## Deployment

<details>
<summary>Standalone (Node.js)</summary>

```bash
npm ci
npm run build
export NODE_ENV=production
export GITHUB_TOKEN="ghp_..."
export WEBHOOK_SECRET="your-secret"
npm start
```

</details>

<details>
<summary>Nix</summary>

**Flake** (NixOS or flake-enabled systems):

```nix
{
  inputs.troutbot.url = "github:notashelf/troutbot";

  outputs = { self, nixpkgs, troutbot }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      modules = [
        troutbot.nixosModules.troutbot
        {
          services.troutbot = {
            enable = true;
            environmentFile = "/path/to/.env"; # use Agenix if possible
            configPath = "/path/to/config.ts"; # use Agenix if possible
          };
        }
      ];
    };
  };
}
```

**Run directly**:

```bash
nix run github:notashelf/troutbot
```

</details>

<details>
<summary>Docker</summary>

```bash
docker build -t troutbot .
docker run -d \
  --name troutbot \
  -p 127.0.0.1:3000:3000 \
  -e GITHUB_TOKEN="ghp_..." \
  -e WEBHOOK_SECRET="your-secret" \
  -v $(pwd)/config.ts:/app/config.ts:ro \
  --restart unless-stopped \
  troutbot
```

The image uses a multi-stage build, a non-root user, a built-in health check,
and `STOPSIGNAL SIGTERM`.

</details>

<details>
<summary>Docker Compose</summary>

```yaml
services:
  troutbot:
    build: .
    ports:
      - "127.0.0.1:3000:3000"
    env_file: .env
    volumes:
      - ./config.ts:/app/config.ts:ro
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

</details>

<details>
<summary>systemd</summary>

Create `/etc/systemd/system/troutbot.service`:

```ini
[Unit]
Description=Troutbot GitHub Webhook Bot
After=network.target

[Service]
Type=simple
User=troutbot
WorkingDirectory=/opt/troutbot
ExecStart=/usr/bin/node dist/index.js
EnvironmentFile=/opt/troutbot/.env
Restart=on-failure
RestartSec=5
TimeoutStopSec=15
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/troutbot
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now troutbot
```

</details>

<details>
<summary>Reverse Proxy (nginx)</summary>

```nginx
server {
    listen 443 ssl;
    server_name troutbot.example.com;

    ssl_certificate /etc/letsencrypt/live/troutbot.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/troutbot.example.com/privkey.pem;

    client_max_body_size 1m;
    proxy_read_timeout 60s;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Optional: nginx-level rate limiting
    # (note: limit_req_zone must be declared in the http {} context)
    # limit_req_zone $binary_remote_addr zone=webhook:10m rate=10r/s;
    # location /webhook {
    #     limit_req zone=webhook burst=20 nodelay;
    #     proxy_pass http://127.0.0.1:3000;
    # }
}
```

</details>

## API Endpoints

| Method   | Path          | Description                                                                              |
| -------- | ------------- | ---------------------------------------------------------------------------------------- |
| `GET`    | `/health`     | Health check - returns `status`, `uptime` (seconds), `version`, `dryRun`, and `backends` |
| `POST`   | `/webhook`    | GitHub webhook receiver (rate limited)                                                   |
| `GET`    | `/dashboard`  | Web UI dashboard with status, events, and config editor                                  |
| `GET`    | `/api/status` | JSON status: uptime, version, dry-run, backends, repo count                              |
| `GET`    | `/api/events` | Recent webhook events from the in-memory ring buffer                                     |
| `DELETE` | `/api/events` | Clear the event ring buffer                                                              |
| `GET`    | `/api/config` | Current runtime configuration as JSON                                                    |
| `PUT`    | `/api/config` | Partial config update: deep-merges, validates, and applies in-place                      |

## Dashboard & Runtime API

Troutbot ships with a built-in web dashboard and JSON API for monitoring and
runtime configuration. No separate frontend build is required.

### Web Dashboard

Navigate to `http://localhost:3000/dashboard` (or wherever your instance is
running). The dashboard provides:

- **Status card** - uptime, version, dry-run state, active backends, and repo
  count. Auto-refreshes every 30 seconds.
- **Event log** - table of recent webhook events showing repo, PR/issue number,
  action, impact rating, and confidence score. Keeps the last 100 events in
  memory.
- **Config editor** - read-only JSON view of the current runtime config with an
  "Edit" toggle that lets you modify and save changes without restarting.

The dashboard is a single HTML page with inline CSS and vanilla JS - no
frameworks, no build step, no external assets.

### Runtime Config API

You can inspect and modify the running configuration via the REST API. Changes
are applied in-place without restarting the server. The update endpoint
deep-merges your partial config onto the current one and validates before
applying.
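
The deep-merge semantics can be illustrated with a simplified standalone sketch (an approximation of the behavior, not the project's exact implementation): nested plain objects merge recursively, while primitives and arrays in the patch replace the old value wholesale.

```typescript
// Deep-merge a partial config onto the current one. Keys absent from the
// patch are left untouched, so partial updates never reset other settings.
function deepMerge<T extends Record<string, unknown>>(target: T, source: Partial<T>): T {
  const result: Record<string, unknown> = { ...target };
  for (const [key, value] of Object.entries(source)) {
    const current = result[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      current && typeof current === "object" && !Array.isArray(current)
    ) {
      result[key] = deepMerge(
        current as Record<string, unknown>,
        value as Record<string, unknown>,
      );
    } else {
      result[key] = value;
    }
  }
  return result as T;
}
```

This is why `PUT /api/config` with only `{"engine": {"weights": {"checks": 0.5}}}` changes one weight and leaves the rest of the config intact.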

```bash
# Read current config
curl http://localhost:3000/api/config

# Update a single setting (partial merge)
curl -X PUT http://localhost:3000/api/config \
  -H 'Content-Type: application/json' \
  -d '{"response": {"allowUpdates": true}}'

# Change engine weights at runtime
curl -X PUT http://localhost:3000/api/config \
  -H 'Content-Type: application/json' \
  -d '{"engine": {"weights": {"checks": 0.5, "diff": 0.25, "quality": 0.25}}}'
```

Invalid configs are rejected with a 400 status and an error message. The
original config remains unchanged if validation fails.

### Event Buffer API

The event buffer stores the last 100 processed webhook events in memory. Events
are lost on restart.
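
A fixed-capacity buffer like this is a simple ring: appending beyond the limit evicts the oldest entry. A minimal sketch of the idea (illustrative only; the class name is hypothetical, not the project's actual code):

```typescript
// Fixed-capacity event buffer: pushing past capacity drops the oldest item.
class RingBuffer<T> {
  private items: T[] = [];
  constructor(private readonly capacity: number) {}

  push(item: T): void {
    this.items.push(item);
    if (this.items.length > this.capacity) this.items.shift();
  }

  list(): readonly T[] {
    return this.items;
  }

  clear(): void {
    this.items = [];
  }
}
```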

```bash
# List recent events
curl http://localhost:3000/api/events

# Clear the buffer
curl -X DELETE http://localhost:3000/api/events
```

### Securing the Dashboard

The dashboard and API endpoints have no authentication by default. In
production, restrict access using one of:

- **Reverse proxy rules** - limit `/dashboard` and `/api/*` to internal IPs or
  require basic auth at the nginx/Caddy layer
- **Firewall rules** - only expose port 3000 to trusted networks
- **Bind to localhost** - set `server.port` and bind to `127.0.0.1` (the Docker
  examples already do this), then access via SSH tunnel or VPN

Do not expose the dashboard to the public internet without authentication, as
the config API allows modifying runtime behavior.

## Dry-Run Mode

Without a `GITHUB_TOKEN`, the bot runs in dry-run mode. The quality backend
still works (text analysis), but the checks and diff backends return neutral
(they need API access). Comments are logged instead of posted.

## Customizing Messages

Edit `response.messages` in your config. Each impact category takes an array of
strings. One is picked randomly per event.

```typescript
messages: {
  positive: [
    "The trout approve of this {type}!",
    "Upstream looks clear for this {type}.",
  ],
  negative: [
    "The trout are worried about this {type}.",
  ],
  neutral: [
    "The trout have no opinion on this {type}.",
  ],
},
```

Placeholders:

- `{type}` - `issue` or `pull request`
- `{impact}` - `positive`, `negative`, or `neutral`

@@ -67,7 +67,7 @@ const config: Config = {
       positive: [
         'This {type} looks great for the trout! All signals point upstream.',
         'The trout approve of this {type}. Swim on!',
-        'Splashing good news — this {type} is looking healthy.',
+        'Splashing good news - this {type} is looking healthy.',
       ],
       negative: [
         'This {type} is muddying the waters. The trout are concerned.',

27 flake.lock generated Normal file

@@ -0,0 +1,27 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1769461804,
        "narHash": "sha256-msG8SU5WsBUfVVa/9RPLaymvi5bI8edTavbIq3vRlhI=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "bfc1b8a4574108ceef22f02bafcf6611380c100d",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixos-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}

26 flake.nix Normal file

@@ -0,0 +1,26 @@
{
  description = "Troutbot - GitHub webhook bot";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs?ref=nixos-unstable";

  outputs = {
    self,
    nixpkgs,
  }: {
    packages = {
      x86_64-linux = let pkgs = nixpkgs.legacyPackages.x86_64-linux; in { default = pkgs.callPackage ./nix/package.nix {}; };
      aarch64-linux = let pkgs = nixpkgs.legacyPackages.aarch64-linux; in { default = pkgs.callPackage ./nix/package.nix {}; };
      x86_64-darwin = let pkgs = nixpkgs.legacyPackages.x86_64-darwin; in { default = pkgs.callPackage ./nix/package.nix {}; };
      aarch64-darwin = let pkgs = nixpkgs.legacyPackages.aarch64-darwin; in { default = pkgs.callPackage ./nix/package.nix {}; };
    };

    devShells = {
      x86_64-linux = let pkgs = nixpkgs.legacyPackages.x86_64-linux; in { default = pkgs.mkShell { packages = [pkgs.nodejs-slim_22 pkgs.pnpm]; }; };
      aarch64-linux = let pkgs = nixpkgs.legacyPackages.aarch64-linux; in { default = pkgs.mkShell { packages = [pkgs.nodejs-slim_22 pkgs.pnpm]; }; };
      x86_64-darwin = let pkgs = nixpkgs.legacyPackages.x86_64-darwin; in { default = pkgs.mkShell { packages = [pkgs.nodejs-slim_22 pkgs.pnpm]; }; };
      aarch64-darwin = let pkgs = nixpkgs.legacyPackages.aarch64-darwin; in { default = pkgs.mkShell { packages = [pkgs.nodejs-slim_22 pkgs.pnpm]; }; };
    };

    nixosModules.troutbot = import ./nix/modules/nixos.nix self;
  };
}

83 nix/modules/nixos.nix Normal file

@@ -0,0 +1,83 @@
self: {
  config,
  pkgs,
  lib,
  ...
}: let
  inherit (lib.modules) mkIf;
  inherit (lib.options) mkOption mkEnableOption literalExpression;
  inherit (lib.types) nullOr str port package;

  defaultPackage = self.packages.${pkgs.stdenv.hostPlatform.system}.default;
  cfg = config.services.troutbot;
in {
  options.services.troutbot = {
    enable = mkEnableOption "troutbot";

    package = mkOption {
      type = nullOr package;
      default = defaultPackage;
      defaultText = literalExpression "inputs.troutbot.packages.${pkgs.stdenv.hostPlatform.system}.default";
      description = ''
        The Troutbot package to use.

        By default, this option will use `packages.default` as exposed by this flake.
      '';
    };

    user = mkOption {
      type = str;
      default = "troutbot";
    };

    group = mkOption {
      type = str;
      default = "troutbot";
    };

    port = mkOption {
      type = port;
      default = 3000;
    };

    environmentFile = mkOption {
      type = nullOr str;
      default = null;
    };

    configPath = mkOption {
      type = nullOr str;
      default = null;
    };
  };

  config = mkIf cfg.enable {
    users.users.${cfg.user} = {
      isSystemUser = true;
      group = cfg.group;
    };

    users.groups.${cfg.group} = {};

    systemd.services.troutbot = {
      description = "Troutbot";
      after = ["network.target"];
      wantedBy = ["multi-user.target"];

      # Environment variables belong under `environment`, not `serviceConfig`.
      environment =
        {
          NODE_ENV = "production";
          PORT = toString cfg.port;
        }
        // lib.optionalAttrs (cfg.configPath != null) {
          CONFIG_PATH = cfg.configPath;
        };

      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        ExecStart = "${lib.getExe cfg.package}";
        Restart = "on-failure";
        EnvironmentFile = cfg.environmentFile;
        ProtectSystem = "strict";
        ProtectHome = true;
        PrivateTmp = true;
        NoNewPrivileges = true;
      };
    };
  };
}

71 nix/package.nix Normal file

@@ -0,0 +1,71 @@
{
  lib,
  stdenv,
  nodejs,
  pnpmConfigHook,
  fetchPnpmDeps,
  pnpm,
  makeBinaryWrapper,
}:
stdenv.mkDerivation (finalAttrs: {
  pname = "troutbot";
  version = "0-unstable-2026-01-30";

  src = lib.fileset.toSource {
    root = ../.;
    fileset = lib.fileset.unions [
      ../src
      ../config.example.ts
      ../package.json
      ../pnpm-lock.yaml
      ../tsconfig.json
    ];
  };

  strictDeps = true;
  nativeBuildInputs = [
    nodejs # in case scripts are run outside of a pnpm call
    pnpmConfigHook
    pnpm # at least required by pnpmConfigHook, if not other (custom) phases

    makeBinaryWrapper
  ];

  pnpmDeps = fetchPnpmDeps {
    inherit (finalAttrs) pname version src;
    fetcherVersion = 3;
    hash = "sha256-y8LV1D+EgGcZ79lmxS20dqYBPEfk4atma+RWf7pJI30=";
  };

  buildPhase = ''
    runHook preBuild

    pnpm run build --outDir dist

    runHook postBuild
  '';

  installPhase = ''
    runHook preInstall

    mkdir -p $out/{bin,share}

    # Copy the transpiled result
    cp -rv dist/* $out/share

    # Copy the example config (data, not an executable)
    install -Dm644 config.example.ts $out/share/config.example.ts

    makeWrapper ${lib.getExe nodejs} $out/bin/troutbot \
      --set-default NODE_ENV production \
      --add-flags "$out/share/index.js"

    runHook postInstall
  '';

  meta = {
    description = "The ultimate trout population helper";
    license = lib.licenses.eupl12;
    maintainers = with lib.maintainers; [NotAShelf];
  };
})

28 package.json
@@ -11,22 +11,22 @@
     "fmt": "prettier --write ."
   },
   "dependencies": {
-    "@octokit/rest": "^21.0.0",
-    "dotenv": "^16.4.0",
-    "express": "^4.21.0",
+    "@octokit/rest": "^22.0.1",
+    "dotenv": "^17.2.3",
+    "express": "^5.2.1",
     "express-rate-limit": "^8.2.1",
-    "jiti": "^2.4.0",
-    "winston": "^3.14.0"
+    "jiti": "^2.6.1",
+    "winston": "^3.19.0"
   },
   "devDependencies": {
-    "@types/express": "^5.0.0",
-    "@types/node": "^22.0.0",
-    "@typescript-eslint/eslint-plugin": "^8.0.0",
-    "@typescript-eslint/parser": "^8.0.0",
-    "eslint": "^9.0.0",
-    "prettier": "^3.3.0",
-    "tsup": "^8.3.0",
-    "tsx": "^4.19.0",
-    "typescript": "^5.6.0"
+    "@types/express": "^5.0.6",
+    "@types/node": "^25.1.0",
+    "@typescript-eslint/eslint-plugin": "^8.54.0",
+    "@typescript-eslint/parser": "^8.54.0",
+    "eslint": "^9.39.2",
+    "prettier": "^3.8.1",
+    "tsup": "^8.5.1",
+    "tsx": "^4.21.0",
+    "typescript": "^5.9.3"
   }
 }

2403 pnpm-lock.yaml generated

File diff suppressed because it is too large

@@ -36,7 +36,7 @@ const defaults: Config = {
       positive: [
         'This {type} looks great for the trout! All signals point upstream.',
         'The trout approve of this {type}. Swim on!',
-        'Splashing good news — this {type} is looking healthy.',
+        'Splashing good news - this {type} is looking healthy.',
       ],
       negative: [
         'This {type} is muddying the waters. The trout are concerned.',

@@ -58,7 +58,7 @@ const defaults: Config = {
   },
 };

-function deepMerge<T extends Record<string, unknown>>(target: T, source: Partial<T>): T {
+export function deepMerge<T extends Record<string, unknown>>(target: T, source: Partial<T>): T {
   const result = { ...target };
   for (const key of Object.keys(source) as (keyof T)[]) {
     const sourceVal = source[key];

@@ -122,7 +122,7 @@ export function loadConfig(): Config {
   return config;
 }

-function validate(config: Config): void {
+export function validate(config: Config): void {
   if (!config.server.port || config.server.port < 1 || config.server.port > 65535) {
     throw new Error('Invalid server port');
   }

312 src/dashboard.ts Normal file

@@ -0,0 +1,312 @@
import express from 'express';
import type { Config } from './types.js';
import { getRecentEvents, clearEvents } from './events.js';
import { validate, deepMerge } from './config.js';

export function createDashboardRouter(config: Config): express.Router {
  const router = express.Router();
  const startTime = Date.now();

  router.use(express.json());

  // --- API routes ---

  router.get('/api/status', (_req, res) => {
    const enabledBackends = Object.entries(config.engine.backends)
      .filter(([, v]) => v.enabled)
      .map(([k]) => k);

    res.json({
      uptime: Math.floor((Date.now() - startTime) / 1000),
      version: process.env.npm_package_version ?? 'unknown',
      dryRun: !process.env.GITHUB_TOKEN,
      backends: enabledBackends,
      repoCount: config.repositories.length || 'all',
    });
  });

  router.get('/api/events', (_req, res) => {
    res.json(getRecentEvents());
  });

  router.delete('/api/events', (_req, res) => {
    clearEvents();
    res.json({ cleared: true });
  });

  router.get('/api/config', (_req, res) => {
    res.json(config);
  });

  router.put('/api/config', (req, res) => {
    try {
      const partial = req.body as Partial<Config>;
      const merged = deepMerge(config as Record<string, unknown>, partial as Record<string, unknown>) as Config;
      validate(merged);

      // Apply in-place
      Object.assign(config, merged);
      res.json(config);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      res.status(400).json({ error: message });
    }
  });

  // --- Dashboard HTML ---

  router.get('/dashboard', (_req, res) => {
    res.type('html').send(dashboardHTML());
  });

  return router;
}

function dashboardHTML(): string {
  return `<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Troutbot Dashboard</title>
  <style>
    *, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
    body {
      font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
      background: #0d1117; color: #c9d1d9; line-height: 1.5;
      padding: 1.5rem; max-width: 1200px; margin: 0 auto;
    }
    h1 { color: #58a6ff; margin-bottom: 1.5rem; font-size: 1.5rem; }
    h2 { color: #8b949e; font-size: 1rem; text-transform: uppercase;
         letter-spacing: 0.05em; margin-bottom: 0.75rem; }

    .card {
      background: #161b22; border: 1px solid #30363d; border-radius: 6px;
      padding: 1rem 1.25rem; margin-bottom: 1.5rem;
    }
    .status-grid {
      display: grid; grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      gap: 0.75rem;
    }
    .status-item label { display: block; color: #8b949e; font-size: 0.75rem; }
    .status-item span { font-size: 1.1rem; font-weight: 600; }

    table { width: 100%; border-collapse: collapse; font-size: 0.85rem; }
    th { text-align: left; color: #8b949e; font-weight: 600; padding: 0.5rem 0.75rem;
         border-bottom: 1px solid #30363d; }
    td { padding: 0.5rem 0.75rem; border-bottom: 1px solid #21262d; }
    tr:hover td { background: #1c2128; }

    .impact-positive { color: #3fb950; }
    .impact-negative { color: #f85149; }
    .impact-neutral { color: #8b949e; }

    .config-view {
      font-family: 'SF Mono', 'Fira Code', 'Fira Mono', Menlo, monospace;
      font-size: 0.8rem; background: #0d1117; color: #c9d1d9;
      border: 1px solid #30363d; border-radius: 4px; padding: 1rem;
      white-space: pre-wrap; word-break: break-word; min-height: 200px;
      width: 100%; resize: vertical;
    }

    .btn {
      background: #21262d; color: #c9d1d9; border: 1px solid #30363d;
      border-radius: 4px; padding: 0.4rem 1rem; cursor: pointer;
      font-size: 0.85rem; margin-right: 0.5rem; margin-top: 0.5rem;
    }
    .btn:hover { background: #30363d; }
    .btn-primary { background: #238636; border-color: #2ea043; }
    .btn-primary:hover { background: #2ea043; }
    .btn-danger { background: #da3633; border-color: #f85149; }
    .btn-danger:hover { background: #f85149; }

    .msg { margin-top: 0.5rem; font-size: 0.85rem; }
    .msg-ok { color: #3fb950; }
    .msg-err { color: #f85149; }

    .empty { color: #484f58; font-style: italic; padding: 1rem 0; }
  </style>
</head>
<body>
  <h1>Troutbot Dashboard</h1>

  <!-- Status card -->
  <div class="card" id="status-card">
    <h2>Status</h2>
    <div class="status-grid" id="status-grid">
      <div class="status-item"><label>Loading...</label></div>
    </div>
  </div>

  <!-- Recent events -->
  <div class="card">
    <h2>Recent Events</h2>
    <div style="overflow-x:auto">
      <table>
        <thead><tr>
          <th>ID</th><th>Time</th><th>Repo</th><th>#</th>
          <th>Action</th><th>Impact</th><th>Confidence</th><th>Result</th>
        </tr></thead>
        <tbody id="events-body">
          <tr><td colspan="8" class="empty">Loading...</td></tr>
        </tbody>
      </table>
    </div>
  </div>

  <!-- Config editor -->
  <div class="card">
    <h2>Configuration</h2>
    <div id="config-container">
      <pre class="config-view" id="config-view"></pre>
      <div>
        <button class="btn" id="edit-btn" onclick="toggleEdit()">Edit</button>
        <button class="btn btn-primary" id="save-btn" style="display:none" onclick="saveConfig()">Save</button>
        <button class="btn" id="cancel-btn" style="display:none" onclick="cancelEdit()">Cancel</button>
|
||||
</div>
|
||||
<div class="msg" id="config-msg"></div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
let currentConfig = null;
|
||||
let editing = false;
|
||||
|
||||
async function fetchStatus() {
|
||||
try {
|
||||
const r = await fetch('/api/status');
|
||||
const d = await r.json();
|
||||
const grid = document.getElementById('status-grid');
|
||||
const upH = Math.floor(d.uptime / 3600);
|
||||
const upM = Math.floor((d.uptime % 3600) / 60);
|
||||
const upS = d.uptime % 60;
|
||||
grid.innerHTML = [
|
||||
item('Uptime', upH + 'h ' + upM + 'm ' + upS + 's'),
|
||||
item('Version', d.version),
|
||||
item('Dry Run', d.dryRun ? 'Yes' : 'No'),
|
||||
item('Backends', d.backends.join(', ')),
|
||||
item('Repos', d.repoCount),
|
||||
].join('');
|
||||
} catch(e) { console.error('Status fetch failed', e); }
|
||||
}
|
||||
|
||||
function item(label, value) {
|
||||
return '<div class="status-item"><label>' + label + '</label><span>' + value + '</span></div>';
|
||||
}
|
||||
|
||||
async function fetchEvents() {
|
||||
try {
|
||||
const r = await fetch('/api/events');
|
||||
const events = await r.json();
|
||||
const tbody = document.getElementById('events-body');
|
||||
if (!events.length) {
|
||||
tbody.innerHTML = '<tr><td colspan="8" class="empty">No events recorded yet</td></tr>';
|
||||
return;
|
||||
}
|
||||
tbody.innerHTML = events.map(function(e) {
|
||||
var impact = e.analysis ? e.analysis.impact : (e.result.skipped ? 'neutral' : '—');
|
||||
var conf = e.analysis ? e.analysis.confidence.toFixed(2) : '—';
|
||||
var result = e.result.skipped ? 'skipped: ' + (e.result.reason || '') : 'processed';
|
||||
var time = new Date(e.timestamp).toLocaleTimeString();
|
||||
return '<tr>'
|
||||
+ '<td>' + e.id + '</td>'
|
||||
+ '<td>' + time + '</td>'
|
||||
+ '<td>' + e.event.owner + '/' + e.event.repo + '</td>'
|
||||
+ '<td>' + e.event.number + '</td>'
|
||||
+ '<td>' + e.event.action + '</td>'
|
||||
+ '<td class="impact-' + impact + '">' + impact + '</td>'
|
||||
+ '<td>' + conf + '</td>'
|
||||
+ '<td>' + result + '</td>'
|
||||
+ '</tr>';
|
||||
}).join('');
|
||||
} catch(e) { console.error('Events fetch failed', e); }
|
||||
}
|
||||
|
||||
async function fetchConfig() {
|
||||
try {
|
||||
const r = await fetch('/api/config');
|
||||
currentConfig = await r.json();
|
||||
if (!editing) renderConfig();
|
||||
} catch(e) { console.error('Config fetch failed', e); }
|
||||
}
|
||||
|
||||
function renderConfig() {
|
||||
var el = document.getElementById('config-view');
|
||||
el.textContent = JSON.stringify(currentConfig, null, 2);
|
||||
}
|
||||
|
||||
function toggleEdit() {
|
||||
editing = true;
|
||||
var container = document.getElementById('config-container');
|
||||
var pre = document.getElementById('config-view');
|
||||
var ta = document.createElement('textarea');
|
||||
ta.className = 'config-view';
|
||||
ta.id = 'config-view';
|
||||
ta.value = JSON.stringify(currentConfig, null, 2);
|
||||
container.replaceChild(ta, pre);
|
||||
document.getElementById('edit-btn').style.display = 'none';
|
||||
document.getElementById('save-btn').style.display = '';
|
||||
document.getElementById('cancel-btn').style.display = '';
|
||||
document.getElementById('config-msg').textContent = '';
|
||||
}
|
||||
|
||||
function cancelEdit() {
|
||||
editing = false;
|
||||
var container = document.getElementById('config-container');
|
||||
var ta = document.getElementById('config-view');
|
||||
var pre = document.createElement('pre');
|
||||
pre.className = 'config-view';
|
||||
pre.id = 'config-view';
|
||||
pre.textContent = JSON.stringify(currentConfig, null, 2);
|
||||
container.replaceChild(pre, ta);
|
||||
document.getElementById('edit-btn').style.display = '';
|
||||
document.getElementById('save-btn').style.display = 'none';
|
||||
document.getElementById('cancel-btn').style.display = 'none';
|
||||
document.getElementById('config-msg').textContent = '';
|
||||
}
|
||||
|
||||
async function saveConfig() {
|
||||
var msg = document.getElementById('config-msg');
|
||||
var ta = document.getElementById('config-view');
|
||||
var text = ta.value;
|
||||
try {
|
||||
var parsed = JSON.parse(text);
|
||||
} catch(e) {
|
||||
msg.className = 'msg msg-err';
|
||||
msg.textContent = 'Invalid JSON: ' + e.message;
|
||||
return;
|
||||
}
|
||||
try {
|
||||
var r = await fetch('/api/config', {
|
||||
method: 'PUT',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify(parsed),
|
||||
});
|
||||
var data = await r.json();
|
||||
if (!r.ok) {
|
||||
msg.className = 'msg msg-err';
|
||||
msg.textContent = 'Error: ' + (data.error || 'Unknown error');
|
||||
return;
|
||||
}
|
||||
currentConfig = data;
|
||||
msg.className = 'msg msg-ok';
|
||||
msg.textContent = 'Config saved successfully';
|
||||
cancelEdit();
|
||||
} catch(e) {
|
||||
msg.className = 'msg msg-err';
|
||||
msg.textContent = 'Request failed: ' + e.message;
|
||||
}
|
||||
}
|
||||
|
||||
// Initial load
|
||||
fetchStatus();
|
||||
fetchEvents();
|
||||
fetchConfig();
|
||||
|
||||
// Auto-refresh
|
||||
setInterval(fetchStatus, 30000);
|
||||
setInterval(fetchEvents, 30000);
|
||||
</script>
|
||||
</body>
|
||||
</html>`;
|
||||
}
|
||||
|
|
@@ -2,6 +2,32 @@ import type { AnalysisResult, ChecksBackendConfig, EngineBackend, WebhookEvent }
 import { fetchCheckRuns } from '../github.js';
 import { getLogger } from '../logger.js';
 
+// Critical checks that indicate build/test health - failures here are severe
+const CRITICAL_PATTERNS = [
+  /\b(build|compile|ci)\b/i,
+  /\b(test|jest|pytest|mocha|vitest|cargo.test|go.test|rspec|junit)\b/i,
+  /\b(typecheck|tsc|mypy|type.check)\b/i,
+];
+
+// Advisory checks - useful but not blockers
+const ADVISORY_PATTERNS = [
+  /\b(lint|eslint|clippy|flake8|rubocop|pylint|biome|oxlint)\b/i,
+  /\b(format|prettier|black|rustfmt|gofmt|fmt)\b/i,
+  /\b(coverage|codecov|coveralls)\b/i,
+  /\b(security|snyk|dependabot|codeql|semgrep)\b/i,
+  /\b(deploy|preview|vercel|netlify)\b/i,
+];
+
+function classifyCheck(name: string): 'critical' | 'advisory' | 'standard' {
+  for (const p of CRITICAL_PATTERNS) {
+    if (p.test(name)) return 'critical';
+  }
+  for (const p of ADVISORY_PATTERNS) {
+    if (p.test(name)) return 'advisory';
+  }
+  return 'standard';
+}
+
 export class ChecksBackend implements EngineBackend {
   name = 'checks';
 
@@ -28,8 +54,14 @@
     }
 
     const completed = runs.filter((r) => r.status === 'completed');
+    const pending = runs.filter((r) => r.status !== 'completed');
 
     if (completed.length === 0) {
-      return { impact: 'neutral', confidence: 0.1, reasoning: 'CI checks are still running.' };
+      return {
+        impact: 'neutral',
+        confidence: 0.1,
+        reasoning: `CI: ${pending.length} check(s) still running.`,
+      };
     }
 
     const passed = completed.filter((r) => r.conclusion === 'success');
@@ -46,29 +78,61 @@
       return { impact: 'neutral', confidence: 0.2, reasoning: 'All CI checks were skipped.' };
     }
 
-    const passRate = passed.length / actionable;
-    const confidence = Math.min(1, actionable / 5); // more checks = more confidence, caps at 5
+    // Classify failures by severity
+    const criticalFailures = failed.filter((r) => classifyCheck(r.name) === 'critical');
+    const advisoryFailures = failed.filter((r) => classifyCheck(r.name) === 'advisory');
+    const standardFailures = failed.filter(
+      (r) => classifyCheck(r.name) === 'standard'
+    );
 
-    let impact: AnalysisResult['impact'];
-    if (failed.length === 0) {
-      impact = 'positive';
-    } else if (passRate < 0.5) {
-      impact = 'negative';
-    } else {
-      impact = 'negative'; // any failure is a problem
+    // Weighted scoring: critical failures count 3x, advisory 0.5x
+    const failureScore =
+      criticalFailures.length * 3 + standardFailures.length * 1 + advisoryFailures.length * 0.5;
+    const totalWeight =
+      completed
+        .filter((r) => !skipped.includes(r))
+        .reduce((s, r) => {
+          const cls = classifyCheck(r.name);
+          return s + (cls === 'critical' ? 3 : cls === 'advisory' ? 0.5 : 1);
+        }, 0);
+
+    const weightedPassRate = totalWeight > 0 ? 1 - failureScore / totalWeight : 0;
+
+    // Confidence: more checks = more confidence, penalize if some are still pending
+    let confidence = Math.min(1, actionable / 4 + 0.1);
+    if (pending.length > 0) {
+      confidence *= 0.7; // reduce confidence when checks are incomplete
+    }
+
+    let impact: AnalysisResult['impact'];
+    if (criticalFailures.length > 0) {
+      impact = 'negative'; // any critical failure is always negative
+    } else if (failed.length === 0) {
+      impact = 'positive';
+    } else if (weightedPassRate >= 0.8) {
+      impact = 'neutral'; // only advisory/minor failures
+    } else {
+      impact = 'negative';
     }
 
+    // Build detailed reasoning
     const parts: string[] = [];
-    if (passed.length > 0)
-      parts.push(`${passed.length} passed (${passed.map((r) => r.name).join(', ')})`);
-    if (failed.length > 0)
-      parts.push(`${failed.length} failed (${failed.map((r) => r.name).join(', ')})`);
+    if (passed.length > 0) parts.push(`${passed.length} passed (${passed.map((r) => r.name).join(', ')})`);
+    if (criticalFailures.length > 0)
+      parts.push(`${criticalFailures.length} critical failure(s) (${criticalFailures.map((r) => r.name).join(', ')})`);
+    if (advisoryFailures.length > 0)
+      parts.push(`${advisoryFailures.length} advisory failure(s) (${advisoryFailures.map((r) => r.name).join(', ')})`);
+    if (standardFailures.length > 0)
+      parts.push(`${standardFailures.length} other failure(s) (${standardFailures.map((r) => r.name).join(', ')})`);
     if (skipped.length > 0) parts.push(`${skipped.length} skipped`);
+    if (pending.length > 0) parts.push(`${pending.length} still running`);
+
+    const passRate = passed.length / actionable;
 
     return {
       impact,
       confidence,
-      reasoning: `CI: ${parts.join('; ')}. Pass rate: ${(passRate * 100).toFixed(0)}%.`,
+      reasoning: `CI: ${parts.join('; ')}. Pass rate: ${(passRate * 100).toFixed(0)}% (weighted: ${(weightedPassRate * 100).toFixed(0)}%).`,
     };
   }
 }

@@ -4,6 +4,46 @@ import { getLogger } from '../logger.js';
 
 const TEST_FILE_PATTERN = /\b(test|spec|__tests__|_test|_spec|\.test\.|\.spec\.)\b/i;
 
+const GENERATED_FILE_PATTERN =
+  /\b(package-lock|yarn\.lock|pnpm-lock|Cargo\.lock|go\.sum|composer\.lock|Gemfile\.lock|poetry\.lock|flake\.lock)\b|\.min\.(js|css)$|\/vendor\//i;
+
+const CONFIG_FILE_PATTERN =
+  /\.(ya?ml|toml|ini|env(\.\w+)?|json)$|^\.[\w-]+(rc|ignore)$|Makefile$|Dockerfile$|^\.github\//i;
+
+const RISKY_FILE_PATTERN =
+  /\b(migration|schema|seed|secret|credential|auth|permission|rbac|\.sql)\b/i;
+
+const DOC_FILE_PATTERN = /\.(md|mdx|txt|rst|adoc)$|^(README|CHANGELOG|LICENSE|CONTRIBUTING)/i;
+
+function categorizeFiles(files: { filename: string; additions: number; deletions: number; changes: number }[]) {
+  const src: typeof files = [];
+  const tests: typeof files = [];
+  const generated: typeof files = [];
+  const config: typeof files = [];
+  const docs: typeof files = [];
+  const risky: typeof files = [];
+
+  for (const f of files) {
+    if (GENERATED_FILE_PATTERN.test(f.filename)) {
+      generated.push(f);
+    } else if (TEST_FILE_PATTERN.test(f.filename)) {
+      tests.push(f);
+    } else if (DOC_FILE_PATTERN.test(f.filename)) {
+      docs.push(f);
+    } else if (CONFIG_FILE_PATTERN.test(f.filename)) {
+      config.push(f);
+    } else {
+      src.push(f);
+    }
+    // risky is non-exclusive - a file can be both src and risky
+    if (RISKY_FILE_PATTERN.test(f.filename)) {
+      risky.push(f);
+    }
+  }
+
+  return { src, tests, generated, config, docs, risky };
+}
+
 export class DiffBackend implements EngineBackend {
   name = 'diff';
 
@@ -29,59 +69,132 @@
       return { impact: 'neutral', confidence: 0.1, reasoning: 'Empty diff.' };
     }
 
-    const totalAdditions = files.reduce((s, f) => s + f.additions, 0);
-    const totalDeletions = files.reduce((s, f) => s + f.deletions, 0);
+    const { src, tests, generated, config, docs, risky } = categorizeFiles(files);
+
+    // Exclude generated files from change counts
+    const meaningful = files.filter((f) => !GENERATED_FILE_PATTERN.test(f.filename));
+    const totalAdditions = meaningful.reduce((s, f) => s + f.additions, 0);
+    const totalDeletions = meaningful.reduce((s, f) => s + f.deletions, 0);
     const totalChanges = totalAdditions + totalDeletions;
-    const hasTestChanges = files.some((f) => TEST_FILE_PATTERN.test(f.filename));
 
-    const signals: { name: string; positive: boolean }[] = [];
+    const signals: { name: string; positive: boolean; weight: number }[] = [];
 
-    // Size signals
-    if (totalChanges <= 200) {
-      signals.push({ name: 'small PR', positive: true });
-    } else if (totalChanges > this.config.maxChanges) {
-      signals.push({ name: `large PR (${totalChanges} lines)`, positive: false });
+    // --- Size signals ---
+    if (totalChanges <= 50) {
+      signals.push({ name: 'tiny PR', positive: true, weight: 1.2 });
+    } else if (totalChanges <= 200) {
+      signals.push({ name: 'small PR', positive: true, weight: 1 });
+    } else if (totalChanges <= 500) {
+      // medium - no signal either way
+    } else if (totalChanges <= this.config.maxChanges) {
+      signals.push({ name: `large PR (${totalChanges} lines)`, positive: false, weight: 0.8 });
+    } else {
+      signals.push({ name: `very large PR (${totalChanges} lines, exceeds limit)`, positive: false, weight: 1.5 });
     }
 
-    // File count
-    if (files.length <= 10) {
-      signals.push({ name: 'focused changeset', positive: true });
-    } else if (files.length > 30) {
-      signals.push({ name: `sprawling changeset (${files.length} files)`, positive: false });
+    // --- Focus signals ---
+    if (src.length <= 3 && src.length > 0) {
+      signals.push({ name: 'tightly focused', positive: true, weight: 1.2 });
+    } else if (meaningful.length <= 10) {
+      signals.push({ name: 'focused changeset', positive: true, weight: 0.8 });
+    } else if (meaningful.length > 30) {
+      signals.push({ name: `sprawling changeset (${meaningful.length} files)`, positive: false, weight: 1.2 });
+    } else if (meaningful.length > 20) {
+      signals.push({ name: `broad changeset (${meaningful.length} files)`, positive: false, weight: 0.6 });
     }
 
-    // Test presence
-    if (hasTestChanges) {
-      signals.push({ name: 'includes tests', positive: true });
-    } else if (this.config.requireTests && totalChanges > 50) {
-      signals.push({ name: 'no test changes', positive: false });
+    // --- Test coverage ---
+    if (tests.length > 0 && src.length > 0) {
+      const testRatio = tests.length / src.length;
+      if (testRatio >= 0.5) {
+        signals.push({ name: 'good test coverage in diff', positive: true, weight: 1.5 });
+      } else {
+        signals.push({ name: 'includes tests', positive: true, weight: 1 });
+      }
+    } else if (tests.length > 0 && src.length === 0) {
+      signals.push({ name: 'test-only change', positive: true, weight: 1.2 });
+    } else if (this.config.requireTests && src.length > 0 && totalChanges > 50) {
+      signals.push({ name: 'no test changes for non-trivial PR', positive: false, weight: 1.3 });
     }
 
-    // Net deletion is generally good (removing dead code)
+    // --- Net deletion ---
     if (totalDeletions > totalAdditions && totalDeletions > 10) {
-      signals.push({ name: 'net code removal', positive: true });
+      const ratio = totalDeletions / Math.max(totalAdditions, 1);
+      if (ratio > 3) {
+        signals.push({ name: 'significant code removal', positive: true, weight: 1.3 });
+      } else {
+        signals.push({ name: 'net code removal', positive: true, weight: 1 });
+      }
     }
 
-    const positiveCount = signals.filter((s) => s.positive).length;
-    const negativeCount = signals.filter((s) => !s.positive).length;
+    // --- Churn detection (files with high add+delete suggesting rewrites) ---
+    const highChurnFiles = src.filter(
+      (f) => f.additions > 50 && f.deletions > 50 && Math.min(f.additions, f.deletions) / Math.max(f.additions, f.deletions) > 0.6
+    );
+    if (highChurnFiles.length >= 3) {
+      signals.push({ name: `high churn in ${highChurnFiles.length} files (possible refactor)`, positive: false, weight: 0.5 });
+    }
+
+    // --- Risky files ---
+    if (risky.length > 0) {
+      signals.push({
+        name: `touches sensitive files (${risky.map((f) => f.filename.split('/').pop()).join(', ')})`,
+        positive: false,
+        weight: 0.7,
+      });
+    }
+
+    // --- Documentation ---
+    if (docs.length > 0 && src.length > 0) {
+      signals.push({ name: 'includes docs updates', positive: true, weight: 0.6 });
+    } else if (docs.length > 0 && src.length === 0) {
+      signals.push({ name: 'docs-only change', positive: true, weight: 1 });
+    }
+
+    // --- Config-only ---
+    if (config.length > 0 && src.length === 0 && tests.length === 0) {
+      signals.push({ name: 'config/infra only', positive: true, weight: 0.8 });
+    }
+
+    // --- Generated file noise ---
+    if (generated.length > 0) {
+      const genChanges = generated.reduce((s, f) => s + f.changes, 0);
+      if (genChanges > totalChanges * 2) {
+        signals.push({ name: 'dominated by generated file changes', positive: false, weight: 0.4 });
+      }
+    }
+
+    // --- Scoring with weights ---
+    const positiveWeight = signals.filter((s) => s.positive).reduce((s, x) => s + x.weight, 0);
+    const negativeWeight = signals.filter((s) => !s.positive).reduce((s, x) => s + x.weight, 0);
 
     let impact: AnalysisResult['impact'];
-    if (positiveCount > negativeCount) {
+    if (positiveWeight > negativeWeight * 1.1) {
       impact = 'positive';
-    } else if (negativeCount > positiveCount) {
+    } else if (negativeWeight > positiveWeight * 1.1) {
       impact = 'negative';
     } else {
       impact = 'neutral';
     }
 
+    const totalSignalWeight = positiveWeight + negativeWeight;
     const confidence =
       signals.length > 0
-        ? Math.min(1, Math.abs(positiveCount - negativeCount) / signals.length + 0.2)
+        ? Math.min(1, Math.abs(positiveWeight - negativeWeight) / Math.max(totalSignalWeight, 1) * 0.6 + 0.25)
         : 0;
 
+    // Build reasoning
+    const breakdown: string[] = [];
+    if (src.length > 0) breakdown.push(`${src.length} source`);
+    if (tests.length > 0) breakdown.push(`${tests.length} test`);
+    if (config.length > 0) breakdown.push(`${config.length} config`);
+    if (docs.length > 0) breakdown.push(`${docs.length} docs`);
+    if (generated.length > 0) breakdown.push(`${generated.length} generated`);
+    const fileSummary = `${meaningful.length} files (${breakdown.join(', ')})`;
+
     const reasoning =
       signals.length > 0
-        ? `Diff: ${signals.map((s) => `${s.positive ? '+' : '-'} ${s.name}`).join(', ')}. ${totalAdditions} additions, ${totalDeletions} deletions across ${files.length} files.`
+        ? `Diff: ${signals.map((s) => `${s.positive ? '+' : '-'} ${s.name}`).join(', ')}. ${totalAdditions}+ ${totalDeletions}- across ${fileSummary}.`
        : 'No diff signals.';
 
     return { impact, confidence, reasoning };

@@ -5,6 +5,14 @@ import type {
   WebhookEvent,
 } from '../types.js';
 
+// Conventional commit prefixes
+const CONVENTIONAL_COMMIT =
+  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\(.+\))?!?:\s/i;
+
+const WIP_PATTERN = /\b(wip|work.in.progress|do.not.merge|don't.merge|draft)\b/i;
+const BREAKING_PATTERN = /\b(breaking.change|BREAKING)\b/i;
+const TODO_PATTERN = /\b(TODO|FIXME|HACK|XXX|TEMP)\b/;
+
 export class QualityBackend implements EngineBackend {
   name = 'quality';
 
@@ -12,86 +20,157 @@
 
   async analyze(event: WebhookEvent): Promise<AnalysisResult> {
     const body = event.body.trim();
-    const signals: { name: string; positive: boolean }[] = [];
+    const title = event.title.trim();
+    const signals: { name: string; positive: boolean; weight: number }[] = [];
 
-    // --- Negative signals (check first, they can short-circuit) ---
+    // --- Title analysis ---
 
-    if (body.length === 0) {
-      signals.push({ name: 'empty body', positive: false });
-    } else if (body.length < this.config.minBodyLength) {
-      signals.push({ name: `short body (${body.length} chars)`, positive: false });
+    if (title.length < 10) {
+      signals.push({ name: 'very short title', positive: false, weight: 1.2 });
+    } else if (title.length > 200) {
+      signals.push({ name: 'excessively long title', positive: false, weight: 0.5 });
     }
 
     // --- Positive structural signals ---
+    if (CONVENTIONAL_COMMIT.test(title)) {
+      signals.push({ name: 'conventional commit format', positive: true, weight: 1 });
+    }
 
-    if (body.length >= this.config.minBodyLength) {
-      signals.push({ name: 'adequate description', positive: true });
+    if (WIP_PATTERN.test(title) || WIP_PATTERN.test(body)) {
+      signals.push({ name: 'marked as work-in-progress', positive: false, weight: 1.5 });
     }
 
+    // --- Body analysis ---
+
+    if (body.length === 0) {
+      signals.push({ name: 'empty description', positive: false, weight: 2 });
+    } else if (body.length < this.config.minBodyLength) {
+      signals.push({ name: `short description (${body.length} chars)`, positive: false, weight: 1.2 });
+    } else if (body.length >= this.config.minBodyLength) {
+      signals.push({ name: 'adequate description', positive: true, weight: 1 });
+      if (body.length > 300) {
+        signals.push({ name: 'thorough description', positive: true, weight: 0.5 });
+      }
+    }
+
     if (/```[\s\S]*?```/.test(body)) {
-      signals.push({ name: 'has code blocks', positive: true });
+      signals.push({ name: 'has code blocks', positive: true, weight: 0.7 });
     }
 
-    if (/^#{1,6}\s/m.test(body) || /\*\*[^*]+\*\*:?/m.test(body)) {
-      signals.push({ name: 'has structure/headers', positive: true });
+    if (/^#{1,6}\s/m.test(body)) {
+      signals.push({ name: 'has section headers', positive: true, weight: 0.8 });
     }
 
+    // Checklists
+    const checklistItems = body.match(/^[\s]*-\s*\[[ x]\]/gm);
+    if (checklistItems) {
+      const checked = checklistItems.filter((i) => /\[x\]/i.test(i)).length;
+      const total = checklistItems.length;
+      if (total > 0 && checked === total) {
+        signals.push({ name: `checklist complete (${total}/${total})`, positive: true, weight: 1 });
+      } else if (total > 0) {
+        signals.push({ name: `checklist incomplete (${checked}/${total})`, positive: false, weight: 0.8 });
+      }
+    }
+
+    // Breaking changes
+    if (BREAKING_PATTERN.test(title) || BREAKING_PATTERN.test(body)) {
+      // Not inherently positive or negative, but we flag it for visibility.
+      // If there's a description of the breaking change, it's better.
+      if (body.length > 100 && BREAKING_PATTERN.test(body)) {
+        signals.push({ name: 'breaking change documented', positive: true, weight: 0.8 });
+      } else {
+        signals.push({ name: 'breaking change mentioned but not detailed', positive: false, weight: 0.8 });
+      }
+    }
+
+    // TODOs/FIXMEs in description suggest unfinished work
+    const todoMatches = body.match(TODO_PATTERN);
+    if (todoMatches) {
+      signals.push({ name: `unfinished markers in description (${todoMatches.length})`, positive: false, weight: 0.6 });
+    }
+
     // --- Type-specific signals ---
 
     if (event.type === 'issue') {
       if (/\b(steps?\s+to\s+reproduce|reproduction|repro\s+steps?)\b/i.test(body)) {
-        signals.push({ name: 'has reproduction steps', positive: true });
+        signals.push({ name: 'has reproduction steps', positive: true, weight: 1.3 });
       }
 
       if (/\b(expected|actual)\s+(behavior|behaviour|result|output)\b/i.test(body)) {
-        signals.push({ name: 'has expected/actual behavior', positive: true });
+        signals.push({ name: 'has expected/actual behavior', positive: true, weight: 1.2 });
       }
 
-      if (/\b(version|environment|os|platform|browser)\b/i.test(body)) {
-        signals.push({ name: 'has environment info', positive: true });
+      if (/\b(version|environment|os|platform|browser|node|python|java|rust|go)\s*[:\d]/i.test(body)) {
+        signals.push({ name: 'has environment details', positive: true, weight: 1 });
       }
+
+      if (/\b(stack\s*trace|traceback|error|exception|panic)\b/i.test(body)) {
+        signals.push({ name: 'includes error output', positive: true, weight: 0.8 });
+      }
+
+      // Template usage detection (common issue template markers)
+      if (/\b(describe the bug|feature request|is your feature request related to)\b/i.test(body)) {
+        signals.push({ name: 'uses issue template', positive: true, weight: 0.6 });
+      }
     }
 
     if (event.type === 'pull_request') {
       if (/\b(fix(es)?|clos(es|ing)|resolv(es|ing))\s+#\d+/i.test(body)) {
-        signals.push({ name: 'links to issue', positive: true });
+        signals.push({ name: 'links to issue', positive: true, weight: 1.3 });
       }
 
-      if (/\b(test\s*(plan|strategy|coverage)|how\s+to\s+test|testing)\b/i.test(body)) {
-        signals.push({ name: 'has test plan', positive: true });
+      if (/\b(test\s*(plan|strategy|coverage)|how\s+to\s+test|testing|tested\s+by)\b/i.test(body)) {
+        signals.push({ name: 'has test plan', positive: true, weight: 1.2 });
       }
+
+      // Migration or upgrade guide
+      if (/\b(migration|upgrade|breaking).*(guide|instruction|step)/i.test(body)) {
+        signals.push({ name: 'has migration guide', positive: true, weight: 1 });
+      }
+
+      // Before/after comparison
+      if (/\b(before|after)\b/i.test(body) && /\b(before|after)\b/gi.test(body)) {
+        const beforeAfter = body.match(/\b(before|after)\b/gi);
+        if (beforeAfter && beforeAfter.length >= 2) {
+          signals.push({ name: 'has before/after comparison', positive: true, weight: 0.7 });
+        }
+      }
     }
 
     // Shared: references to other issues/PRs
-    if (/#\d+/.test(body)) {
-      signals.push({ name: 'references issues/PRs', positive: true });
+    const refs = body.match(/#\d+/g);
+    if (refs && refs.length > 0) {
+      signals.push({ name: `references ${refs.length} issue(s)/PR(s)`, positive: true, weight: 0.6 });
     }
 
     // Screenshots or images
     if (/!\[.*\]\(.*\)/.test(body) || /<img\s/i.test(body)) {
-      signals.push({ name: 'has images/screenshots', positive: true });
+      signals.push({ name: 'has images/screenshots', positive: true, weight: 0.8 });
     }
 
-    // --- Scoring ---
+    // --- Weighted scoring ---
 
     if (signals.length === 0) {
       return { impact: 'neutral', confidence: 0.1, reasoning: 'No quality signals detected.' };
     }
 
-    const positiveCount = signals.filter((s) => s.positive).length;
-    const negativeCount = signals.filter((s) => !s.positive).length;
+    const positiveWeight = signals.filter((s) => s.positive).reduce((s, x) => s + x.weight, 0);
+    const negativeWeight = signals.filter((s) => !s.positive).reduce((s, x) => s + x.weight, 0);
 
     let impact: AnalysisResult['impact'];
-    if (positiveCount > negativeCount) {
+    if (positiveWeight > negativeWeight * 1.2) {
       impact = 'positive';
-    } else if (negativeCount > positiveCount) {
+    } else if (negativeWeight > positiveWeight * 1.2) {
       impact = 'negative';
     } else {
       impact = 'neutral';
     }
 
-    // Confidence scales with signal count
-    const confidence = Math.min(1, (positiveCount + negativeCount) / 6 + 0.15);
+    const totalWeight = positiveWeight + negativeWeight;
+    const confidence = Math.min(
+      1,
+      Math.abs(positiveWeight - negativeWeight) / Math.max(totalWeight, 1) * 0.5 + 0.2
+    );
 
     const reasoning = `Quality: ${signals.map((s) => `${s.positive ? '+' : '-'} ${s.name}`).join(', ')}.`;

40
src/events.ts
Normal file
40
src/events.ts
Normal file
|
|
@ -0,0 +1,40 @@
|
|||
import type { WebhookEvent, AnalysisResult } from './types.js';
|
||||
|
||||
export interface EventEntry {
|
||||
id: number;
|
||||
timestamp: string;
|
||||
event: WebhookEvent;
|
||||
result: Record<string, unknown>;
|
||||
analysis?: AnalysisResult;
|
||||
}
|
||||
|
||||
const MAX_ENTRIES = 100;
|
||||
const buffer: EventEntry[] = [];
|
||||
let nextId = 1;
|
||||
|
||||
export function recordEvent(
|
||||
event: WebhookEvent,
|
||||
result: Record<string, unknown>,
|
||||
analysis?: AnalysisResult
|
||||
): void {
|
||||
const entry: EventEntry = {
|
||||
id: nextId++,
|
||||
timestamp: new Date().toISOString(),
|
||||
event,
|
||||
result,
|
||||
analysis,
|
||||
};
|
||||
|
||||
buffer.push(entry);
|
||||
if (buffer.length > MAX_ENTRIES) {
|
||||
buffer.shift();
|
||||
}
|
||||
}
|
||||
|
||||
export function getRecentEvents(): EventEntry[] {
|
||||
return [...buffer].reverse();
|
||||
}
|
||||
|
||||
export function clearEvents(): void {
|
||||
buffer.length = 0;
|
||||
}
|
||||
|
|
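`src/events.ts` is a capped in-memory log: once `MAX_ENTRIES` is exceeded the oldest entry is evicted, and `getRecentEvents` returns a newest-first copy without mutating the buffer. A minimal standalone sketch of that behavior, with the cap shrunk to 3 so the eviction is easy to see:

```typescript
// Minimal sketch of the capped event buffer from src/events.ts,
// with MAX_ENTRIES shrunk to 3 for illustration.
interface Entry { id: number; payload: string; }

const MAX_ENTRIES = 3;
const buffer: Entry[] = [];
let nextId = 1;

function record(payload: string): void {
  buffer.push({ id: nextId++, payload });
  if (buffer.length > MAX_ENTRIES) {
    buffer.shift(); // evict the oldest entry
  }
}

function recent(): Entry[] {
  return [...buffer].reverse(); // newest first, buffer itself untouched
}

['a', 'b', 'c', 'd'].forEach(record);
console.log(recent().map((e) => e.payload)); // the oldest entry 'a' has been evicted
```

Since `nextId` keeps incrementing after eviction, entry ids stay unique for the life of the process even though only the last `MAX_ENTRIES` entries are retained.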
```diff
@@ -6,7 +6,7 @@ let octokit: Octokit | null = null;
 
 export function initGitHub(token?: string): void {
   if (!token) {
-    getLogger().warn('No GITHUB_TOKEN set — running in dry-run mode, comments will not be posted');
+    getLogger().warn('No GITHUB_TOKEN set - running in dry-run mode, comments will not be posted');
     return;
   }
   octokit = new Octokit({ auth: token });
```

125 src/index.ts

```diff
@@ -1,9 +1,88 @@
 import { loadConfig } from './config.js';
 import { initLogger, getLogger } from './logger.js';
-import { initGitHub } from './github.js';
+import {
+  initGitHub,
+  fetchPR,
+  hasExistingComment,
+  postComment,
+  updateComment,
+  formatComment,
+} from './github.js';
 import { createApp } from './server.js';
 import { createEngine } from './engine/index.js';
+import type { WebhookEvent } from './types.js';
 
-function main() {
+async function analyzeOne(target: string) {
+  const match = target.match(/^([^/]+)\/([^#]+)#(\d+)$/);
+  if (!match) {
+    console.error('Usage: troutbot analyze <owner/repo#number>');
+    process.exit(1);
+  }
+
+  const [, owner, repo, numStr] = match;
+  const prNumber = parseInt(numStr, 10);
+
+  const config = loadConfig();
+  initLogger(config.logging);
+  const logger = getLogger();
+
+  initGitHub(process.env.GITHUB_TOKEN);
+  if (!process.env.GITHUB_TOKEN) {
+    logger.error('GITHUB_TOKEN is required for analyze mode');
+    process.exit(1);
+  }
+
+  const prData = await fetchPR(owner, repo, prNumber);
+  if (!prData) {
+    logger.error(`Could not fetch PR ${owner}/${repo}#${prNumber}`);
+    process.exit(1);
+  }
+
+  const event: WebhookEvent = {
+    action: 'analyze',
+    type: 'pull_request',
+    number: prNumber,
+    title: prData.title,
+    body: prData.body,
+    owner,
+    repo,
+    author: prData.author,
+    labels: prData.labels,
+    branch: prData.branch,
+    sha: prData.sha,
+  };
+
+  const engine = createEngine(config.engine);
+  const analysis = await engine.analyze(event);
+  logger.info(
+    `Analyzed ${owner}/${repo}#${prNumber}: impact=${analysis.impact}, confidence=${analysis.confidence.toFixed(2)}`
+  );
+  logger.info(`Reasoning: ${analysis.reasoning}`);
+
+  const { commentMarker, allowUpdates } = config.response;
+  const existing = await hasExistingComment(owner, repo, prNumber, commentMarker);
+
+  if (existing.exists && !allowUpdates) {
+    logger.info(`Already commented on ${owner}/${repo}#${prNumber}, skipping`);
+    return;
+  }
+
+  const body = formatComment(
+    config.response,
+    event.type,
+    analysis.impact,
+    analysis.confidence,
+    analysis.reasoning
+  );
+
+  if (existing.exists && allowUpdates && existing.commentId) {
+    await updateComment(owner, repo, existing.commentId, body);
+  } else {
+    await postComment(owner, repo, prNumber, body);
+  }
+}
+
+function serve() {
   const config = loadConfig();
   initLogger(config.logging);
   const logger = getLogger();
@@ -12,7 +91,7 @@ function main() {
 
   if (!process.env.GITHUB_TOKEN) {
     logger.warn(
-      'No GITHUB_TOKEN — running in dry-run mode (checks and diff backends will be inactive)'
+      'No GITHUB_TOKEN - running in dry-run mode (checks and diff backends will be inactive)'
     );
   }
   if (!process.env.WEBHOOK_SECRET) {
@@ -29,6 +108,36 @@ function main() {
   const server = app.listen(port, () => {
     logger.info(`Troutbot listening on port ${port}`);
     logger.info(`Enabled backends: ${enabledBackends.join(', ')}`);
+
+    // Watched repos
+    if (config.repositories.length > 0) {
+      const repos = config.repositories.map((r) => `${r.owner}/${r.repo}`).join(', ');
+      logger.info(`Watched repos: ${repos}`);
+    } else {
+      logger.info('Watched repos: all (no repository filter)');
+    }
+
+    // Active filters (only log non-empty ones)
+    const { filters } = config;
+    if (filters.labels.include.length > 0)
+      logger.info(`Label include filter: ${filters.labels.include.join(', ')}`);
+    if (filters.labels.exclude.length > 0)
+      logger.info(`Label exclude filter: ${filters.labels.exclude.join(', ')}`);
+    if (filters.authors.exclude.length > 0)
+      logger.info(`Excluded authors: ${filters.authors.exclude.join(', ')}`);
+    if (filters.branches.include.length > 0)
+      logger.info(`Branch filter: ${filters.branches.include.join(', ')}`);
+
+    // Engine weights and confidence threshold
+    const { weights, confidenceThreshold } = config.engine;
+    logger.info(
+      `Engine weights: checks=${weights.checks}, diff=${weights.diff}, quality=${weights.quality} | threshold=${confidenceThreshold}`
+    );
+
+    // Comment update mode
+    logger.info(`Comment updates: ${config.response.allowUpdates ? 'enabled' : 'disabled'}`);
+
+    logger.info(`Dashboard available at http://localhost:${port}/dashboard`);
   });
 
   function shutdown(signal: string) {
@@ -47,4 +156,12 @@ function main() {
   process.on('SIGINT', () => shutdown('SIGINT'));
 }
 
-main();
+const args = process.argv.slice(2);
+if (args[0] === 'analyze' && args[1]) {
+  analyzeOne(args[1]).catch((err) => {
+    console.error(err);
+    process.exit(1);
+  });
+} else {
+  serve();
+}
```
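The `analyze` subcommand's target parsing can be checked on its own. This sketch extracts the `owner/repo#number` regex from `analyzeOne` into a pure helper (the helper name is illustrative; the regex itself is taken verbatim from the diff):

```typescript
// The owner/repo#number regex from analyzeOne, pulled out for illustration.
function parseTarget(
  target: string
): { owner: string; repo: string; prNumber: number } | null {
  const match = target.match(/^([^/]+)\/([^#]+)#(\d+)$/);
  if (!match) return null;
  const [, owner, repo, numStr] = match;
  return { owner, repo, prNumber: parseInt(numStr, 10) };
}

console.log(parseTarget('octocat/hello-world#42')); // { owner: 'octocat', repo: 'hello-world', prNumber: 42 }
console.log(parseTarget('not-a-target')); // null
```

Returning `null` instead of calling `process.exit(1)` keeps the helper testable; the CLI wrapper is the right place for the usage message and exit.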

```diff
@@ -1,7 +1,7 @@
 import crypto from 'node:crypto';
 import express from 'express';
 import rateLimit from 'express-rate-limit';
-import type { Config, WebhookEvent } from './types.js';
+import type { Config, WebhookEvent, AnalysisResult } from './types.js';
 import { shouldProcess } from './filters.js';
 import { createEngine } from './engine/index.js';
 import {
@@ -12,6 +12,8 @@ import {
   updateComment,
 } from './github.js';
 import { getLogger } from './logger.js';
+import { recordEvent } from './events.js';
+import { createDashboardRouter } from './dashboard.js';
 
 const startTime = Date.now();
 
@@ -87,7 +89,7 @@ export function createApp(config: Config): express.Express {
     const eventType = req.headers['x-github-event'] as string;
     const payload = req.body;
 
-    // Handle check_suite completion — re-analyze associated PRs
+    // Handle check_suite completion - re-analyze associated PRs
     if (eventType === 'check_suite' && payload.action === 'completed') {
       await handleCheckSuiteCompleted(payload, config, engine);
       res.json({ processed: true, event: 'check_suite' });
@@ -119,6 +121,8 @@ export function createApp(config: Config): express.Express {
     }
   });
 
+  app.use(createDashboardRouter(config));
+
   return app;
 }
 
@@ -136,7 +140,9 @@ async function analyzeAndComment(
   );
   if (!repoMatch) {
     logger.debug(`Ignoring event for unconfigured repo ${event.owner}/${event.repo}`);
-    return { skipped: true, reason: 'Repository not configured' };
+    const result = { skipped: true, reason: 'Repository not configured' };
+    recordEvent(event, result);
+    return result;
   }
 }
 
@@ -144,13 +150,15 @@ async function analyzeAndComment(
   const filterResult = shouldProcess(event, config.filters);
   if (!filterResult.pass) {
     logger.debug(`Filtered out: ${filterResult.reason}`);
-    return { skipped: true, reason: filterResult.reason };
+    const result = { skipped: true, reason: filterResult.reason };
+    recordEvent(event, result);
+    return result;
   }
 
   // Run analysis
-  const result = await engine.analyze(event);
+  const analysis = await engine.analyze(event);
   logger.info(
-    `Analyzed ${event.owner}/${event.repo}#${event.number}: impact=${result.impact}, confidence=${result.confidence.toFixed(2)}`
+    `Analyzed ${event.owner}/${event.repo}#${event.number}: impact=${analysis.impact}, confidence=${analysis.confidence.toFixed(2)}`
   );
 
   // Check for existing comment
@@ -159,15 +167,17 @@ async function analyzeAndComment(
 
   if (existing.exists && !allowUpdates) {
     logger.info(`Already commented on ${event.owner}/${event.repo}#${event.number}, skipping`);
-    return { skipped: true, reason: 'Already commented' };
+    const result = { skipped: true, reason: 'Already commented' };
+    recordEvent(event, result, analysis);
+    return result;
   }
 
   const body = formatComment(
     config.response,
     event.type,
-    result.impact,
-    result.confidence,
-    result.reasoning
+    analysis.impact,
+    analysis.confidence,
+    analysis.reasoning
   );
 
   if (existing.exists && allowUpdates && existing.commentId) {
@@ -176,7 +186,9 @@ async function analyzeAndComment(
     await postComment(event.owner, event.repo, event.number, body);
   }
 
-  return { processed: true, impact: result.impact, confidence: result.confidence };
+  const result = { processed: true, impact: analysis.impact, confidence: analysis.confidence };
+  recordEvent(event, result, analysis);
+  return result;
 }
 
 async function handleCheckSuiteCompleted(
```
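The server imports `node:crypto` and warns when `WEBHOOK_SECRET` is missing; the verification code itself sits outside these hunks. GitHub's documented scheme is an HMAC-SHA256 of the raw request body keyed with the webhook secret, delivered as `X-Hub-Signature-256: sha256=<hex>`. A sketch of that check (the function name and wiring are illustrative, not taken from the diff):

```typescript
import crypto from 'node:crypto';

// Verify a GitHub webhook payload against the X-Hub-Signature-256 header.
// GitHub sends "sha256=" followed by the hex HMAC-SHA256 of the raw body.
function verifySignature(secret: string, rawBody: string, signatureHeader: string): boolean {
  const expected =
    'sha256=' + crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// Demo values (invented):
const secret = 'trout-secret';
const body = '{"action":"opened"}';
const sig = 'sha256=' + crypto.createHmac('sha256', secret).update(body).digest('hex');
console.log(verifySignature(secret, body, sig)); // true
console.log(verifySignature(secret, body, 'sha256=deadbeef')); // false
```

The constant-time comparison matters: a plain `===` on the hex strings would leak timing information an attacker could use to forge signatures byte by byte. The check must also run against the raw body bytes, before any JSON body parser rewrites the payload.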