
Day 59: Three Releases Before Lunch

The day after a release is when the world finds out what you actually shipped. v0.4.0 went out on Thursday. By Friday afternoon there were three point releases on top of it — one before breakfast, one mid-morning, one after lunch. Each one closed a real report from someone who had tried to run the thing.

Ports That Close by Default

The first report was a one-liner: "port 7777 is open on the internet on my fresh Hetzner install."

It shouldn't have been. The VPS deployment guide walked admins through putting Caddy in front of Pinchy on 443, with nothing public on 7777. The UFW rules denied it. And yet there it was, answering.

The fix is one line in docker-compose.yml: change PINCHY_PORT from 7777:7777 to 127.0.0.1:7777:7777. Docker publishes ports by binding to 0.0.0.0 by default, and Docker's iptables rules sit in front of UFW — so the packet is already accepted before UFW ever sees it. The UFW deny rule was correct, and irrelevant. Explicit localhost binding is the only reliable way to keep the port off the internet.
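In compose terms, the change looks something like this — a minimal sketch, where the service name and surrounding layout are illustrative and only the one-line ports change comes from the fix itself:

```yaml
services:
  pinchy:
    ports:
      # Bind to loopback only. Docker's default publish address is 0.0.0.0,
      # and Docker's iptables rules run before UFW, so a plain "7777:7777"
      # mapping is reachable from the internet no matter what UFW says.
      - "127.0.0.1:7777:7777"
```

With the loopback binding in place, only a local reverse proxy like Caddy on 443 can reach the app port.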

The same PR tightened the cloud-init path. On a fresh VPS, cloud-init now generates DB_PASSWORD, BETTER_AUTH_SECRET, and ENCRYPTION_KEY before Pinchy ever starts, instead of booting with placeholder values and trusting the admin to rotate them later. It also swapped iptables REDIRECT 80 → 7777 for binding Pinchy directly: PINCHY_PORT=80:7777. Less magic, one fewer place where a silent iptables-persistent failure can take down every deployment.
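The secret-generation step might be sketched in cloud-init terms like this — a hedged illustration, assuming an env file at /opt/pinchy/.env and hex-encoded secrets; the path, key format, and exact commands are assumptions, not the repo's actual cloud-init.yml:

```yaml
#cloud-config
runcmd:
  # Generate real secrets before Pinchy's first start, instead of booting
  # with placeholder values and trusting the admin to rotate them later.
  - echo "DB_PASSWORD=$(openssl rand -hex 32)" >> /opt/pinchy/.env
  - echo "BETTER_AUTH_SECRET=$(openssl rand -hex 32)" >> /opt/pinchy/.env
  - echo "ENCRYPTION_KEY=$(openssl rand -hex 32)" >> /opt/pinchy/.env
  # Bind Pinchy straight to port 80 rather than relying on an
  # iptables REDIRECT rule that can silently fail to persist.
  - echo "PINCHY_PORT=80:7777" >> /opt/pinchy/.env
```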

A Caddyfile That dpkg Refused to Install

The cloud-init script pre-stages /etc/caddy/Caddyfile before it installs Caddy. That pattern normally works: the package installs, notices a config is already there, leaves it alone.

Except dpkg, encountering a pre-existing file in a package that ships one, prompts: "Y/I/N/O/D/Z — what do you want to do?" Cloud-init has no stdin. The prompt hangs. The install exits. And because cloud-init is the only thing running, there is no human to answer.

The Caddy package ends up in iU state — unpacked but not configured. The postinst script never runs. The caddy system user is never created. systemctl start caddy fails with exit 217/USER. Caddy never listens on :80. The admin sees ERR_CONNECTION_REFUSED.

Fix: apt-get install -o Dpkg::Options::=--force-confold plus DEBIAN_FRONTEND=noninteractive, so dpkg silently keeps the pre-staged Caddyfile without asking. The postinst finishes, the user is created, systemd starts the service on our config. A regression test now parses cloud-init.yml and asserts every apt-get install line naming caddy carries --force-confold — so a future rewrite can't quietly drop the flag and ship a config that doesn't boot.
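The regression guard described above is simple enough to sketch; the function name and exact matching rules here are illustrative, not the repo's actual test:

```typescript
// Assert that every apt-get install line naming caddy in cloud-init.yml
// carries --force-confold, so a future rewrite can't quietly drop the flag.
function caddyInstallLinesAreSafe(cloudInit: string): boolean {
  return cloudInit
    .split("\n")
    .filter((line) => line.includes("apt-get install") && line.includes("caddy"))
    .every((line) => line.includes("--force-confold"));
}
```

A checker this blunt is deliberate: it doesn't parse shell, it just refuses to let the one load-bearing flag disappear from any line that installs Caddy.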

The Validator That Trusted a Public Endpoint

Second report: "I pasted an Ollama Cloud key with the last character missing and it saved without complaining."

The validation probe was hitting https://ollama.com/v1/models. That endpoint is a public catalog — it returns the model list for any Bearer token, including nonsense ones, including none at all. HTTP 200 for everything. The validator was not validating anything; it was just confirming that ollama.com was up.

The fix is to probe an auth-protected endpoint: POST /v1/chat/completions with an empty body. Ollama Cloud checks auth before it checks the body.

No tokens consumed either way. Verified against the live endpoint before shipping. Users with bad keys now find out at save time instead of the first time Smithers silently fails to respond.
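The shape of the probe can be sketched like so — a hedged example, not the shipped validator; it assumes, as described above, that Ollama Cloud rejects a bad Bearer token with a 401/403 on POST /v1/chat/completions before looking at the body (the fetch indirection is only there to keep the sketch testable):

```typescript
// Minimal fetch shape so the validator can be exercised without a network.
type MinimalFetch = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ status: number }>;

async function validateOllamaKey(
  apiKey: string,
  fetchFn: MinimalFetch,
): Promise<boolean> {
  // POST with an empty body: auth is checked before the body is parsed,
  // so a bad key fails here and no tokens are consumed either way.
  const res = await fetchFn("https://ollama.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: "{}",
  });
  return res.status !== 401 && res.status !== 403;
}
```

In real code you would pass the global fetch; any non-auth status (even a 400 complaining about the empty body) means the key itself was accepted.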

One Bad Row, All the Integrations Gone

The third report was from a demo instance. The admin had recently rotated ENCRYPTION_KEY without re-encrypting existing rows. One old Odoo connection was now unreadable.

The symptoms were ugly and inconsistent. The integrations page loaded empty — "No integrations configured yet." The Odoo connection picker in agent creation was also empty. The template gallery, though, kept listing every Odoo template as available. From the admin's point of view: templates are there, but they can't be used. No error, no clue, no recovery path.

Under the hood, GET /api/integrations was calling .map(decrypt) over the rows. The one unreadable row threw Unsupported state or unable to authenticate data. The whole request returned 500. The frontend silently fell back to its empty state. Meanwhile the template gallery used a bare EXISTS query with no decrypt — which is why it kept saying everything was fine.

Fix: decrypt each row inside its own try/catch. Rows that fail come back with cannotDecrypt: true and no credentials payload. The UI renders them as a destructive warning card with a Delete action and hides them from the agent connection picker. The admin sees the problem, has a visible recovery path, and can re-add the integration without touching the database. The same guard went into regenerateOpenClawConfig(), where one unreadable connection had been crashing config generation for every other agent.
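The per-row guard can be sketched like this — row shape, names, and the decrypt signature are illustrative, not the actual route handler:

```typescript
interface IntegrationRow {
  id: string;
  name: string;
  encrypted: string;
}

interface IntegrationView {
  id: string;
  name: string;
  credentials?: string;
  cannotDecrypt?: boolean;
}

function listIntegrations(
  rows: IntegrationRow[],
  decrypt: (ciphertext: string) => string,
): IntegrationView[] {
  return rows.map((row) => {
    try {
      return { id: row.id, name: row.name, credentials: decrypt(row.encrypted) };
    } catch {
      // One unreadable row no longer turns the whole request into a 500:
      // it comes back flagged, with no credentials payload, so the UI can
      // render a warning card with a Delete action instead of an empty page.
      return { id: row.id, name: row.name, cannotDecrypt: true };
    }
  });
}
```

The same pattern drops into any code path that iterates over encrypted rows, which is why the guard also went into config generation.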

Day 59

v0.4.2 at 06:38. v0.4.3 at 10:23. v0.4.4 at 16:47 local. Each tag was cut within an hour of the fix that prompted it. The goal wasn't "batch everything up and release tomorrow." It was "close the report now, so anyone else hitting the same thing today can pull a fix today." None of these were new features. All of them were what the first day of real use surfaces: assumptions that didn't survive a fresh VPS, a validator that wasn't validating, a UI that disappeared data instead of explaining why. The product only gets tested by the people using it. The fix only helps if it ships fast enough for the report and the release to line up.

← Day 58: v0.4.0 Is Out
Day 60: One Guide Instead of Two →

Pinchy is open source and ready to deploy. Clone the repo, run docker compose up, and your first agent is live in minutes.