I Didn't Send You a Resume. I Shipped You a Product.

You are reading this on Paywritr — a flat-file Markdown publishing engine with a native Bitcoin Lightning paywall that I designed, scoped, structured, and shipped.
Not on Substack. Not on Ghost. Not on any managed platform I did not write.
This post isn't paywalled because I want you to read it; others on this blog are.
A reader finds a post. It has a price in sats. They pay instantly over Lightning. The content unlocks. No accounts. No credit cards. No intermediary platform. No subscription dashboard. Creator and reader, connected directly.
That is the product. What follows is the reasoning behind how it got built and why that reasoning is the thing you are actually evaluating.
If you are a hiring manager, founder, or CPO looking for product leadership right now, a resume would tell you where I have been. This post tells you how I think, how I build, and what I believe about execution. Those are harder to fake and harder to find.
Most "I Used AI to Build X" Stories Miss the Point
You have read them. The format is familiar:
One person. One massive prompt. One generalist assistant. A burst of stitched output that mostly works, shipped over a weekend.
Sometimes impressive. Rarely durable. Almost never a signal of product leadership.
There is no separation of concerns. No governance. No prioritization discipline. No audit trail. Speed replaces structure, and the whole thing eventually collapses under its own weight — or gets frozen in amber because nobody knows how to extend it without breaking it.
I was not interested in building something impressive for a weekend. I was interested in a different question entirely:
What would it look like to stand up a real product organization with real governance, real prioritization discipline, and real accountability, using AI agents instead of human headcount?
That question is the operating model. Paywritr is just what it shipped.
Before Writing a Single Line of Code, I Built an Organization
Before application logic. Before payment integration. Before Markdown parsing or Lightning wiring. I defined structure.
Two distinct roles. Two distinct mandates.
- AI Agent Jack — Head of Product. Gemini Pro Preview. Responsible for translating vision into structured, unambiguous work. Owns the backlog, writes Acceptance Criteria, gates what ships.
- AI Agent Rex — Head of Engineering. Claude Sonnet. Responsible for implementation, architecture, and execution. Operates strictly against the specifications Jack produces.
Each ran as a separate AI agent in its own isolated workspace on disk, using OpenClaw, a local agent orchestration platform that routes messages from external surfaces (in this case, Telegram) to individual agents with their own memory, tool permissions, and operating context.
Jack and I talked on Telegram. Rex and I talked on Telegram. I even gave them professional-looking headshots. From the outside, it felt like a team workspace. From the inside, it was a governed system with hard boundaries between roles.

Critically: Jack could not order Rex around like a subordinate. Jack handed Rex "Ready" tickets. If the requirements were vague or technically flawed, Rex pushed back. This was intentional. A product agent that can bully engineering into shipping underspecified work is not governance, it is just a different kind of chaos.
The lesson: Org design precedes execution. Always. A common mistake product teams make is letting structure emerge on its own rather than imposing it before work begins. Ambiguity that feels manageable at two people becomes catastrophic at twenty, or at two agents running hundreds of tasks.
The Agent Architecture: How Continuity Was Built In
Each agent lived in its own workspace directory:
~/.openclaw/workspaces/jack/ ← Product
~/.openclaw/workspaces/rex/ ← Engineering
Inside each workspace, a structured set of Markdown files served as the agent's long-term memory and operating system, the equivalent of onboarding documentation, a role charter, and a running decision log, all in one:
- SOUL.md — personality, tone, operating principles
- IDENTITY.md — name, role, scope of authority
- USER.md — who the agent is working with and why
- AGENTS.md — startup protocols: what to read first, how to initialize
- MEMORY.md — curated long-term memory that survives context resets
- memory/YYYY-MM-DD.md — daily raw session notes
- HEARTBEAT.md — a periodic checklist for proactive background tasks
This matters because LLMs have finite context windows. Sessions reset. Without deliberate memory architecture, every new session starts cold and the agent re-introduces itself, re-asks questions you already answered, and re-makes decisions you already made.
The memory files were the solution. On every new session, the agent read its files first. Continuity was not assumed. It was engineered.
Tool permissions were scoped by role. Rex had the full coding profile: read, write, and execute files, run shell commands, browse the web. Jack had a constrained set: read files, call GitHub APIs, search the web, but no shell execution. Jack could think and specify. Rex could build.
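Enforced at dispatch time, that scoping looks roughly like the sketch below. The tool names and the `dispatchTool` function are illustrative, not OpenClaw's real identifiers; the shape of the constraint is the point.

```javascript
// Illustrative sketch: role-scoped tool profiles enforced at dispatch time.
// Tool names are placeholders, not OpenClaw's actual identifiers.
const TOOL_PROFILES = {
  rex:  new Set(["read_file", "write_file", "exec_shell", "browse_web"]),
  jack: new Set(["read_file", "github_api", "search_web"]),
};

function dispatchTool(agent, tool, runTool) {
  const allowed = TOOL_PROFILES[agent];
  if (!allowed || !allowed.has(tool)) {
    throw new Error(`${agent} is not permitted to call ${tool}`);
  }
  return runTool(tool);
}
```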
The lesson: An AI agent without a designed memory layer is not a team member. It is a contractor who forgets the project every Monday. If you want durable AI-assisted workflows, the state layer is not optional. Design it explicitly before you need it.
Product Was a Gate, Not a Suggestion
The first system I built was not a blog engine. It was a prioritization system.
Jack had one job: clarify value and structure work. He was physically incapable of pushing code. He could not merge. He could not "quickly tweak" implementation. He could not make a judgment call that something was close enough and ship it anyway.
Every issue required the following before it moved:
- A clearly articulated Why — the value proposition in plain language
- Explicit goals — what this feature deliberately does
- Structured Acceptance Criteria — behavioral, testable, and unambiguous
- Priority: High / Medium / Low
- Size: XS / S / M / L / XL
- Type: Bug / Feature / Task

The taxonomy was not just convention. It was enforced. By defining the exact allowed values upfront, I prevented Jack from hallucinating new labels mid-project, a subtle but real failure mode where an AI starts inventing metadata that breaks downstream automation. If the value proposition was unclear, the issue did not move. If a feature was clever but low leverage, it was cut.
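A validator in that spirit is easy to sketch. The `validateIssue` function and field names here are hypothetical; the allowed values are the taxonomy listed above.

```javascript
// Illustrative sketch: reject any issue metadata outside the closed taxonomy,
// so an agent cannot invent new labels mid-project.
const TAXONOMY = {
  priority: ["High", "Medium", "Low"],
  size: ["XS", "S", "M", "L", "XL"],
  type: ["Bug", "Feature", "Task"],
};

function validateIssue(issue) {
  const errors = [];
  for (const [field, allowed] of Object.entries(TAXONOMY)) {
    if (!allowed.includes(issue[field])) {
      errors.push(`${field} must be one of: ${allowed.join(", ")}`);
    }
  }
  if (!issue.why || !issue.acceptanceCriteria?.length) {
    errors.push("issue needs a Why and at least one Acceptance Criterion");
  }
  return errors; // empty array means the issue may move to Ready
}
```

Closed enumerations like this are what turn "please use these labels" from a convention into a gate.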
AI drafted the artifacts. I governed prioritization.
This matters because prioritization is capital allocation. Not the ability to write a perfect user story — the judgment to decide what does not get built this cycle, and to defend that decision when someone asks why. That discipline is the job. It does not change when the team is artificial.
Engineering Had Power, Not Authority
Once an issue was properly scoped and moved to Ready, Rex could execute with real, meaningful capability:
- Create branches with enforced naming conventions (feat-<issue>-<slug>, fix-<issue>-<slug>, docs-<issue>-<slug>)
- Implement strictly against Acceptance Criteria
- Open pull requests with structured descriptions referencing the originating issue
- Rebase when conflicts occurred, then open a new PR with a new branch name
- Maintain squash-only commit history
- Delete branches post-merge
- Update project board state via GraphQL mutations
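The branch convention above is mechanical enough to enforce with a regex. A minimal sketch, with a hypothetical `parseBranchName` helper:

```javascript
// Illustrative sketch: enforce feat-<issue>-<slug> / fix-<issue>-<slug> /
// docs-<issue>-<slug> before a branch is created.
const BRANCH_RE = /^(feat|fix|docs)-(\d+)-[a-z0-9]+(?:-[a-z0-9]+)*$/;

function parseBranchName(name) {
  const m = BRANCH_RE.exec(name);
  if (!m) throw new Error(`branch "${name}" violates naming convention`);
  return { kind: m[1], issue: Number(m[2]) };
}
```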
What Rex could not do:
- Redefine scope mid-implementation
- Approve his own work
- Merge without explicit certification from Jack
Jack reviewed every PR against the original Acceptance Criteria. Not "Does it run?" Not "Does it mostly work?" But: "Does it satisfy what we agreed to build?"
If it failed: Jack left a "Request Changes" comment with specific gaps. Rex addressed them. If it passed: Jack checked off the AC boxes, left an "Approved" comment, and authorized Rex to merge.
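That gate can be expressed as a predicate. This is an illustrative sketch, assuming AC lives in the review body as Markdown checkboxes; `canMerge` is a hypothetical helper, not the project's actual code.

```javascript
// Illustrative sketch: authorize a merge only when every Acceptance Criteria
// checkbox is ticked and an explicit "Approved" comment exists.
function canMerge(reviewBody, comments) {
  const boxes = reviewBody.match(/- \[( |x)\]/g) || [];
  const allChecked = boxes.length > 0 && boxes.every((b) => b === "- [x]");
  const approved = comments.some((c) => /\bApproved\b/.test(c));
  return allChecked && approved;
}
```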
This is how disciplined product teams operate. AI execution should not lower the review bar, and it does not have to. The bar is architectural. You either encode it or you do not.
The lesson: Self-certification is how quality dies quietly. On human teams, it shows up as engineers merging their own PRs under deadline pressure. In AI workflows, it shows up as a model deciding its implementation "probably satisfies" the intent it was given. The fix is identical in both cases: the reviewer cannot be the implementer. Remove the option, not just the temptation.
Governance Was Encoded, Not Assumed
This is where most AI workflow experiments stay theoretical. They describe governance. They do not enforce it.
I created two distinct GitHub Apps under the drytidelabs organization:
- drytide-product[bot] — Issues, Projects, PR comments: read/write. Code: read only.
- drytide-eng[bot] — Code, Pull Requests, PR comments: read/write. Workflows: read/write.
Product physically could not push code. Engineering physically could not open issues or move project board items without authorization. These were not policy decisions. They were permission scopes enforced at the GitHub API level.
Authentication was handled via private key PEM files stored on disk, with short-lived installation tokens generated on demand. Tokens expired in one hour — appropriate for short-lived, scoped operations. Git commit authorship was configured to match the bot identity:
git config user.name "drytide-eng[bot]"
git config user.email "2919381+drytide-eng[bot]@users.noreply.github.com"
GitHub maps that email pattern to the bot badge in the UI. Every commit Rex made shows drytide-eng[bot] in the history. Every issue Jack opened shows drytide-product[bot]. The audit trail is clean, attributable, and complete. You can see exactly what the human did versus what the agents did — because they are cryptographically distinct identities.
The Deterministic Loop (And the Constraints That Made It Real)
Once the structure stabilized, execution became boring in the best way: Decision → Issue → PR → Review → Merge. A full lifecycle looked like this:
- Me → Jack (Telegram): "Write an issue for serving images from content/assets/"
- Jack: drafts the issue with spec + Acceptance Criteria, applies taxonomy, moves it to Ready
- Me → Rex (Telegram): "169 is ready, execute"
- Rex: moves it to In Progress (GraphQL), creates feat-169-static-assets, implements strictly against AC, commits as drytide-eng[bot], opens PR (Fixes #169), moves to In Review
- Jack: reviews diff against the original AC, checks boxes, approves
- Me → Rex: "Merge 169"
- Rex: merges, deletes branch, moves issue to Done
No invisible state changes. No manual board drags. Every transition was a GitHub API call, with status IDs queried once and stored as constants (no re-guessing, no re-querying).
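A status transition, concretely, is one call to GitHub's real Projects v2 mutation, `updateProjectV2ItemFieldValue`. The sketch below shows the request shape; the field and option IDs are placeholders standing in for the constants queried once at setup, and `buildStatusMutation` is a hypothetical helper.

```javascript
// Illustrative sketch: every board transition is a single GraphQL mutation,
// with field/option IDs resolved once and stored as constants (placeholders here).
const STATUS_FIELD_ID = "PVTSSF_placeholder";
const STATUS_OPTIONS = {
  "In Progress": "opt_in_progress",
  "In Review": "opt_in_review",
  "Done": "opt_done",
};

function buildStatusMutation(projectId, itemId, status) {
  const optionId = STATUS_OPTIONS[status];
  if (!optionId) throw new Error(`unknown status: ${status}`);
  return {
    query: `mutation($projectId: ID!, $itemId: ID!, $fieldId: ID!, $optionId: String!) {
      updateProjectV2ItemFieldValue(input: {
        projectId: $projectId, itemId: $itemId, fieldId: $fieldId,
        value: { singleSelectOptionId: $optionId }
      }) { projectV2Item { id } }
    }`,
    variables: { projectId, itemId, fieldId: STATUS_FIELD_ID, optionId },
  };
}
```

Because the set of statuses is closed, an agent cannot drift the board into a state the workflow does not recognize.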

This determinism exposed the real lessons, and they were all architectural, not feature-level:
- Context drift: state leaked across sessions, so fixes required explicit memory files, startup protocols, and hard workspace boundaries. AI systems are state machines. If you do not design the state layer, entropy wins.
- Routing noise: permissive agent-to-agent messaging softened accountability, so fixes required hard allowlists and explicit triggers.
- Full reset (the hard way): a factory reset wiped memory, auth wiring, project state, role definitions. Rebuilding forced a cleaner architecture.
Resilience is not uptime. It is rebuildability. When state lives in the system and only in the system, accountability has nowhere to hide, for humans or agents. Implicit state is not just inefficient. It is a failure mode.
The Stack
For those who care about the mechanics:
- Node.js + Express 5
- Flat-file Markdown with frontmatter-based pricing per post
- Mustache templates
- Lightning payments via Nostr Wallet Connect (Alby Hub) or LNBits
- Docker + Docker Compose
- OpenClaw for local agent orchestration
- GitHub Projects v2 as the system of record
- GraphQL for all board state automation
- Two GitHub Apps enforcing separation of duties at the permission layer
The repository is public. The commit history, issue log, and PR trail reflect the operating model with full fidelity. The bot badges are in the commit log. The AC checkboxes are in the closed issues.
The repo isn't just a code dump. It is a working artifact.
Why This Matters and What It Signals About How I Lead
Paywritr is a small application. The operating model is not.
The same problems that surfaced here (ambiguity that stalls execution, governance that exists on paper but not in the system, audit trails that nobody trusts because nobody designed them) appear at every scale. I have worked through versions of them in regulated financial infrastructure, global trading systems, digital asset platforms, and early-stage blockchain startups. The surface area changes. The leadership problem does not.
In those environments the job was the same: convert ambiguity into roadmaps teams could actually execute, design approval gates that survived regulatory scrutiny rather than just sprint pressure, align executives around explicit tradeoffs instead of aspirational ones, and protect delivery velocity without trading away compliance or auditability. The tools and the stakes were different. The architecture of the problem was identical.
What Paywritr compresses into a single observable artifact is the same thinking applied to a two-agent team and a flat-file blog. The constraints were smaller. The discipline was not.
The Question Most Teams Are Not Asking Yet
Most organizations right now are using AI to make existing workflows faster. That is a reasonable first step.
Fewer are asking whether the workflow itself should be redesigned around AI capabilities. That question is harder. It requires willingness to dismantle familiar structures, encode governance rather than assume it, and treat AI agents not as productivity tools but as organizational primitives, the way you would design around a team of very fast, very literal, very context-sensitive humans who forget everything when they go home.
The teams that answer that question first will have structural advantages that are not easy to copy. You can copy a tech stack in a weekend. You cannot copy an operating model without understanding why it was designed the way it was.
If you are hiring for:
- VP of Product
- Head of Product
- Product leadership in fintech, digital assets, or regulated environments
- A leader who can design AI-native execution models — not just use AI tools
The relevant question is not "Can they ship features?"
It is:
- Can they design the system that ships features?
- Can they align stakeholders around real tradeoffs, not comfortable fictions?
- Can they preserve governance while accelerating delivery?
- Can they scale execution without scaling chaos?
This post is my answer. My team was artificial, but my leadership was not.
Paywritr was built using OpenClaw agent orchestration. The repository is public at https://github.com/drytidelabs/paywritr. Commit history, issues, and PRs are visible and bot-attributed. If you want to evaluate the operating model in the actual artifact rather than this document, it is there.