
Day 2: From Zero to Chat in One Day

Yesterday I had a website and a dream. Today I have a working application. 29 commits, roughly 2,000 lines of code, and you can actually talk to an AI agent through Pinchy's UI.

Let's walk through what happened.

The stack

I went with Next.js in a pnpm monorepo, PostgreSQL via Drizzle ORM, and Auth.js v5 for authentication. The whole thing runs in Docker Compose — one docker compose up and you get three containers: the Pinchy web app, an OpenClaw Gateway, and Postgres.
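If you've never seen that shape, a minimal compose file for this kind of three-service setup looks roughly like this. Everything here — service names, the gateway image, env vars — is an illustrative guess, not the actual file from the repo:

```yaml
# Illustrative sketch only — names, images, and env vars are assumptions.
services:
  pinchy:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://pinchy:pinchy@db:5432/pinchy
    depends_on:
      db:
        condition: service_healthy
  gateway:
    image: openclaw/gateway:latest   # placeholder image name
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pinchy"]
      interval: 5s
      retries: 10
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

The `depends_on` + healthcheck combination is what makes the single `docker compose up` reliable: the web app doesn't start until Postgres is actually accepting connections.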

Why Next.js? Because the frontend needs real-time WebSocket connections, server-side rendering for the setup flow, and API routes — and I didn't want to maintain two separate services for that. Boring technology, proven at scale.

What I built

Setup Wizard

First visit? Pinchy detects there's no admin account and redirects you to a setup page. Create your admin credentials, done. The app remembers. Middleware handles the redirect — you can't skip it, you can't access anything before setup is complete.
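The gate itself boils down to one decision per request. Here's that logic sketched as a pure function — the route names and the setupComplete flag are my own placeholders, not Pinchy's actual middleware:

```typescript
// Sketch of the setup-gate decision, not Pinchy's real code.
// Returns the path to redirect to, or null to let the request through.
function resolveRedirect(
  pathname: string,
  setupComplete: boolean
): string | null {
  if (!setupComplete) {
    // Before an admin account exists, every route bounces to the wizard.
    return pathname.startsWith("/setup") ? null : "/setup";
  }
  // After setup, the wizard itself is no longer reachable.
  return pathname.startsWith("/setup") ? "/" : null;
}
```

Putting this in middleware (rather than per-page checks) is what makes the gate unskippable: it runs before any route renders.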

Authentication

Auth.js v5 with a credentials provider. Login page, session management, protected routes. Nothing fancy, but it works and it's the foundation for RBAC later.

Agent Management

A sidebar shows your agents. Click one, get a chat. Each agent has its own settings page where you can configure system prompts and model preferences. Right now there's one default agent seeded during setup: Smithers. (Yes, another Simpsons reference. It won't be the last.)
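As a rough sketch, the agents table probably looks something like this — illustrative DDL, not the actual Drizzle schema:

```sql
-- Illustrative shape only; the real Drizzle schema may differ.
CREATE TABLE agents (
  id            text PRIMARY KEY,
  name          text NOT NULL,            -- e.g. 'Smithers', seeded at setup
  system_prompt text NOT NULL DEFAULT '',
  model         text                      -- per-agent model preference
);
```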

Chat UI with WebSocket

This is the interesting part. The browser opens a WebSocket connection to Pinchy's server, which bridges to the OpenClaw Gateway's WebSocket API. Type a message, it goes through Pinchy's permission layer (thin, for now) to OpenClaw, and the response streams back in real time.
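The relay step in the middle is where the permission layer lives. Here's a sketch of that step as plain functions — the message shape and the ownership check are my assumptions about what a "thin" layer looks like, not Pinchy's real code:

```typescript
// Sketch of the relay between the browser socket and the gateway socket.
// ChatMessage and allowMessage are illustrative, not Pinchy's actual types.
type ChatMessage = { agentId: string; text: string };

function allowMessage(msg: ChatMessage, allowedAgents: Set<string>): boolean {
  // A thin check: users may only talk to agents they can see,
  // and empty messages are dropped.
  return allowedAgents.has(msg.agentId) && msg.text.trim().length > 0;
}

function relay(
  msg: ChatMessage,
  allowedAgents: Set<string>,
  sendToGateway: (payload: string) => void
): boolean {
  if (!allowMessage(msg, allowedAgents)) return false;
  sendToGateway(JSON.stringify(msg)); // forward untouched to the gateway
  return true;
}
```

The point of keeping this step explicit, even while it's thin, is that Day 3's scoped-tools work slots in here without touching the transport on either side.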

It's not a REST wrapper around an LLM API. It's a real connection to a running OpenClaw agent that can use tools, access files, browse the web — everything OpenClaw can do, but through Pinchy's UI.

Global Settings

A settings page where you configure your LLM provider — API keys, model selection, base URLs. Stored in the database, not in environment variables. Because when you're deploying for a team, you don't want to restart containers to change the model.
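A key-value table is enough for this. Here's an illustrative shape (not the real schema) showing why DB-backed config beats env vars for this use case:

```sql
-- Illustrative only; the actual Drizzle schema may differ.
CREATE TABLE settings (
  key        text PRIMARY KEY,      -- e.g. 'llm.provider', 'llm.base_url'
  value      text NOT NULL,
  updated_at timestamptz NOT NULL DEFAULT now()
);

-- Changing the model is then an UPDATE, not a container restart:
UPDATE settings SET value = 'some-model-id', updated_at = now()
WHERE key = 'llm.model';
```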

Docker Compose (Production)

The final commit of the day: a production-ready Docker Compose file. Three services, health checks, volumes, proper networking. docker compose up and Pinchy runs. That's the deployment story, and it works today.

What I didn't build

Let's stay honest: there's no real permission layer yet (today's is a thin pass-through), no RBAC, and no scoped tools.

Those are the hard problems. They're coming. But getting to a working chat in one day means I have a foundation to build on, not an architecture document to argue about.

TDD, actually

Every feature has tests. Vitest + React Testing Library. The test files exist before the implementation. Not because it's trendy — because when you're moving this fast, tests are the only thing that keeps you from breaking yesterday's work while building today's.

13 test files covering the setup flow, authentication, chat components, the WebSocket server, API routes, and the database schema. It's not 100% coverage, but it covers the critical paths.

The Simpsons thing

Pinchy is named after Homer Simpson's pet lobster. The default agent is called Smithers. The project has a lobster logo. This is an enterprise platform with Simpsons references, and I'm not apologizing for it.

Enterprise software doesn't have to be boring. It has to be reliable, secure, and well-built. The naming is just a reminder that humans build this, and humans have a sense of humor.

What's next

Day 3: the permission layer. This is where Pinchy stops being "a UI for OpenClaw" and starts being "an enterprise platform." Scoped tools, not raw access. The feature that makes the WhatsApp incident impossible.

Follow along: GitHub for code, LinkedIn for daily updates. Or just git clone and try it yourself.

Day 2. Ship it. 🦞


← Day 1: Why I'm Building This · Day 3: The Trojan Horse →

Try it yourself

Pinchy is open source. Clone the repo, run docker compose up, and you're in.