Jeremy Le-Tran
April 20, 2026

Releases > Architecture

Recalibrating our engineering instincts

Stop obsessing over architecture. Obsess over release.

That runs counter to how we're trained to think as engineers. Good architecture is the foundation. Get it wrong early and you pay for it forever. The instinct to design for scale, for maintainability, for the problem you'll have in six months - it's not a bad instinct. It's just expensive. And for most of software history, code was expensive enough that the instinct made sense.

LLMs changed this. Code is cheap now. Iteration is cheap. The thing that used to justify over-engineering — the high cost of getting it wrong and rebuilding — is dissolving. Which means the tradeoff is shifting: the cost of not releasing, of not getting real signal from real users, is starting to outweigh the cost of imperfect architecture.

To be clear: this isn't true for everything. Long-term products with complex data infrastructure still need to get the foundation right. The cost of rebuilding a data pipeline or untangling a bad schema at scale is real, and no LLM changes that.

But for growth hacks, internal tools, anything where you have a hypothesis and not a spec — releasing fast is now the right call in a way it simply wasn't before.

Diagram 01
What we didn't build
Typical content-automation project — weeks
1. Spec
2. DB schema
3. Admin panel
4. Auth
5. Moderation queue
6. Scheduler
7. Monitoring
8. First tweet
Most of this infrastructure exists to manage content a human still has to write.
What we shipped — days
1. Skill + guidelines
2. Post script
3. First tweet
The terminal is the UI. The reviewer is the queue. The goal was tweets, not an app.
“If the end goal is a recurring action, not a product, stop building a product.”

Distill to the output first.

A few weeks ago, we started pursuing a semi-automated tweeting solution to increase impressions and open up our funnel. The natural engineering response: build a proper review interface, a queue, an MCP to share outputs across the team sustainably. All reasonable. Also weeks from shipping a single tweet.

With LLMs in the picture, we could distill instead. It took a bit of re-thinking, but we realized a skill in Cowork with a system prompt, a guidelines file, and a few scripts to pull live OddsJam data and publish content to X would get us there. The marketing lead could use Claude as the interface: review in chat, ask for modifications, then post.
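To make the "few scripts" concrete: a posting script in this setup can be very small. The sketch below is hypothetical (the post doesn't show `post-tweets.py`), and the auth setup is an assumption; it validates a draft against X's character limit and sends it to the v2 create-tweet endpoint.

```python
# Hypothetical sketch of a minimal post script. The real post-tweets.py
# isn't shown in the post; the bearer-token auth here is an assumption
# (X's create-tweet endpoint requires user-context credentials).
import json
import urllib.request

MAX_LEN = 280  # X's per-tweet character limit


def validate_tweet(text: str) -> str:
    """Reject over-length drafts instead of truncating, so the reviewer
    fixes them in chat rather than shipping a cut-off post."""
    if len(text) > MAX_LEN:
        raise ValueError(f"tweet is {len(text)} chars, limit is {MAX_LEN}")
    return text


def post_tweet(text: str, token: str) -> dict:
    """POST one validated draft to the X v2 create-tweet endpoint."""
    req = urllib.request.Request(
        "https://api.twitter.com/2/tweets",
        data=json.dumps({"text": validate_tweet(text)}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

That's the whole "publishing pipeline": no queue table, no admin UI, just a guard and an HTTP call the reviewer triggers from chat.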

Diagram 02
Claude as the interface
The Cowork project
System prompt
voice + goals
oddsjam-wins.md
hard rules
references/
748 tweets
fetch-bets.py
tracker adapter
post-tweets.py
scripted post
The runtime
Cowork
“draft 3 threads from today's winning bets”
Pulls the bets, composes against reference patterns, applies the guidelines. Returns drafts in the chat.
✓ drafts look good · post-tweets.py queued them
No database. No admin panel. No login the team doesn't already have.
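The "tracker adapter" half of the diagram can be equally small. This is a hypothetical sketch (field names like `event`, `market`, `odds`, and `profit` are assumptions): it filters the tracker's rows down to settled winners and renders one compact line per bet for the model to compose against.

```python
# Hypothetical sketch of the fetch-bets.py adapter idea: normalize
# whatever the bet tracker returns into compact lines the model can
# draft from. All field names are assumptions, not OddsJam's schema.
def format_wins(bets: list[dict]) -> str:
    """Keep only settled winners; one line per bet."""
    lines = []
    for bet in bets:
        if bet.get("result") != "win":
            continue
        lines.append(
            f"{bet['event']} | {bet['market']} @ {bet['odds']} | +{bet['profit']}u"
        )
    return "\n".join(lines)
```

The adapter's only job is to hand Cowork clean input; the voice, hooks, and formatting all live in the guidelines file, where they can be edited without a deploy.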

The output was identical to what the full architecture would have produced, and the timeline was days, not weeks. Every question we would have spent weeks debating in a spec — what tone works, what hooks land, how often to post — answered itself once real tweets were going out. Tweaks to the guidelines were just edits to the Cowork reference files. No deployment necessary.

In short, the product is the output. Architecture is a lever we pull to deliver that output more efficiently. But in this new world of LLMs, we need to be wary of the "platform" slowing us down.