90 Days at Complex
9 lessons on coding, speed, and communication

90 days ago, I joined Complex as the first hire on a new Skunk Works team.
I’ve been documenting my thoughts & processes every day since I started. Today, I decided to give my notes to Claude Code to help me reflect on what I’ve learned.
Here are the 9 most important lessons.
Autonomous Talent
“Collapsed Talent” like myself (former PM & startup founder) can leverage AI to become autonomous, owning end-to-end workflows that used to involve multiple contributors. Leaders should identify each talent’s unfair skillset and coach high performers toward becoming as “collapsed” as possible. In my role, I’m responsible for identifying the user requirements, designing the interface, writing the code, and driving end-user adoption. The result has been fewer blockers, streamlined communication, and faster outcomes without layers of bureaucracy.
Example: On 8/14, you wrote: “The design doc for [redacted] was written 6 months ago, but it got trapped in the planning process and died. I think I can re-work the plan so that I can ship it by next month. I’ll do all of the design & comms myself. At the start of each week, I’ll say what I’m going to do. At the end of each week, I’ll say what I delivered (and include a demo so that nothing gets lost in translation).”
Coding Is Easy
Learning to code is surprisingly tractable because AI is good at explaining its rationale, which you can leverage for understanding. The “hard” part is mastering the ancillary practices that developers take for granted: git/version control, repo & branch management, PR etiquette, and CI/CD pipelines. These are social protocols encoded in technical systems.
Example: On 10/30, during your conversation with Ben Guo, you admitted: “The hardest part for me wasn’t learning to review AI-written code... despite what everyone told me. The hill was learning how to work with other engineers on the same repo, where we all rely on the engine running smoothly.”
Comprehensible Code
Prevent slop by understanding all AI-generated code. A good rule of thumb is that you should be able to explain how the code works to another engineer. If you don’t understand the code, ask the AI to explain it, with a bit of context on your objective (e.g. wanting to be able to write similar code yourself). This practice applies to other surfaces too: logs, network requests, etc. Just as with language learning, you need to immerse yourself in comprehensible input and conversational practice to train your brain on “good” examples.
Example: On 10/21, during a Vercel CI/CD debugging nightmare, you wrote: “I was only able to discover this [wrong API URL fallback] by actually reading the workflow logs myself and investigating with Claude Code. Important: always read the code yourself! It helps you understand how the system works under the hood and gives crucial context for debugging and learning.”
Context Engineering
Iterating on a feature plan fills the agent’s context window with outdated information. To get around this, I’ll end a planning session by documenting the plan as a GitHub Issue before starting implementation, then I’ll start a new chat and direct the agent to implement the issue. This creates a clean separation between strategic thinking and code execution.
Example: On 10/15, you realized: “The [redacted] repo is littered with README files that Claude created for features, migrations, and even one-off tasks. There’s a better way: post ‘planning’ documents into GitHub Issues. Benefits: progress tracking, cleaner repos, and AI-friendly repo provenance.” By 11/05, this became standard practice: “I’ve implemented 14 new features while deleting 6000+ lines of code. This was only possible because I started viewing git as the source of truth for context engineering.”
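The handoff itself is simple enough to script. Here’s a minimal sketch (not my actual tooling) that posts a finalized plan to GitHub’s Issues REST API; the repo path, token variable, and label are placeholders:

```ts
// Minimal sketch: publish a finalized plan as a GitHub Issue so a fresh
// agent session can start from the issue instead of a stale chat.
// OWNER/REPO, the token variable, and the label are placeholders.
import { readFile } from "node:fs/promises";

async function publishPlan(planPath: string, title: string): Promise<string> {
  const body = await readFile(planPath, "utf8");

  const res = await fetch("https://api.github.com/repos/OWNER/REPO/issues", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ title, body, labels: ["plan"] }),
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);

  const issue = (await res.json()) as { html_url: string };
  return issue.html_url; // hand this URL to the new implementation chat
}
```

The fresh chat then gets a single instruction (“implement this issue”) instead of inheriting the whole planning transcript.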
Atomic Features
Large, multi-feature PRs are hard to review, debug, and rollback. Shipping multiple single-scope PRs throughout the day makes each change atomic, reviewable, and revertible. Branches are cheap; treating them as disposable is a development catalyst.
Example: On 11/05, you wrote: “I’m experimenting with shipping multiple, smaller-scope PRs throughout the day. These PRs are simpler to review and let me move faster because the features are well-scoped atomic units that are trivial to remove if necessary. The tradeoff is that I end up shipping multiple branches throughout the day, but I don’t think that’s a bad thing. Being precious with branches feels like misplaced caution; they’re harmless.”
Emergent Abstractions
Don’t over-architect early. Build the obvious thing first, feel the pain of duplication or complexity, then abstract. For the core AI system in [redacted], I used OpenRouter as a hedge (failed due to poor ergonomics), then implemented direct provider APIs (worked but verbose), then consolidated to Vercel’s AI SDK (removed 3,000+ lines of code). Each step taught me something the previous step couldn’t.
Example: The progression is clear across September–October: On 09/29, you “implemented each of the various APIs for Anthropic, GPT, and Gemini directly.” On 10/08, you tried removing OpenRouter but “really fucked up `searchService`.” On 10/20, you finally migrated to Vercel AI SDK, which “consolidated multiple provider-specific implementations into a centralized approach, removing ~3,000 lines of redundant code and making the backend effectively free, while adding flexible model registry support (goodbye OpenRouter).”
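To make the consolidation concrete, here’s a minimal sketch of what the AI SDK’s unified call looks like; the model IDs and registry shape are illustrative, not the actual [redacted] code:

```ts
// Minimal sketch of the consolidation: one call signature across providers
// via the Vercel AI SDK, instead of one bespoke client per provider.
// Model IDs and the registry shape are illustrative.
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

// A tiny "model registry": swapping providers becomes a one-line change.
const models = {
  claude: anthropic("claude-3-5-sonnet-20241022"),
  gpt: openai("gpt-4o"),
};

const { text } = await generateText({
  model: models.claude, // same call shape regardless of provider
  prompt: "Summarize the latest worklog entry.",
});
console.log(text);
```

The win isn’t fewer characters per call; it’s that every provider-specific branch (auth, request shape, response parsing) collapses into the SDK, which is where the ~3,000 deleted lines came from.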
Agent Phalanx
Each model has its own specialty, but they combine to form a phalanx: the “shields” handle codebase analysis & planning (GPT-5-High) and the “spears” are context-rich execution drones (Claude Code). The rigidity of the formation means the plan-execute motion is vulnerable to uneven terrain, which is when I step in to break formation and lead the charge myself (shoutout to Cursor’s Tab model). I’ve developed intuition for which formation to use for each feature, and how to use multiple agents competitively – commanding a braintrust of models to critique our work at each step of the way.
Example: On 10/29, you documented your multi-agent workflow: “It’s useful to use Codex & Claude at the same time to come up with a plan for the same feature... GPT-5 is pedantic about context. Claude is eager to ship. I created a “groupchat” product so that I could bounce between all 3 of us until I’m happy.”
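Here’s a rough sketch of the plan-critique round trip, using the AI SDK for illustration; this isn’t the actual “groupchat” product, and the prompts and model IDs are made up:

```ts
// Minimal sketch of a plan-critique round trip between two models.
// Not the actual "groupchat" product; prompts and model IDs are illustrative.
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

const feature = "Add a model registry to the chat backend.";

// "Spear": an eager first-draft plan.
const { text: plan } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  prompt: `Draft an implementation plan for: ${feature}`,
});

// "Shield": a pedantic critique of that plan.
const { text: critique } = await generateText({
  model: openai("gpt-4o"),
  prompt: `Critique this plan for missing context and risks:\n\n${plan}`,
});

console.log(critique); // bounce between the two until the plan holds up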
Speed Compounds Trust
In uncertain environments, consistent velocity forces clarity. Shipping daily builds, hitting weekly goals, and maintaining visible momentum builds stakeholder confidence faster than perfect execution on delayed timelines. Speed also reveals problems earlier when they’re cheaper to fix. The compounding effect: fast execution → stakeholder trust → more autonomy → faster execution.
Example: On 09/16, you told [redacted] “that I was inspired by his gaming days of shipping a build every day” and made this your operating principle. By 10/24, the strategy was working: “Killing it today. We hit every goal for the week and have communicated proactively, which I know keeps [redacted] happy.” The flip side appeared on 10/13: “[redacted] haven’t been keeping up with the speed of work and they aren’t connected enough to the details” — your velocity had outpaced their ability to track it, which created a communication problem. This directly led to your “over-communicate” realization on 10/14. Speed without visibility creates confusion; speed with visibility creates confidence.
Documenting Process
My daily practice of articulating goals, defining tasks, and reflecting on learnings has become raw material for AI to codify into executable workflows. This feedback loop creates an instructive system for AI to solve any business problem like I would — a true extension of self. Every workflow “hack” that I discover (e.g. plan → GitHub issue, product screenshot → UX description, etc.) becomes a reusable “skill” for my phalanx of AI agents.
Example: On 11/06, you discovered a new pattern: “If I have a design reference but don’t know how to articulate it, I’ll give Claude Code a screenshot and ask it to articulate what it sees in terms of UX, components, etc. This “reverse prompting” method provides a description that I can manipulate into something close to my target objective, while having a higher likelihood of being understood by the AI.”
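As a sketch of what reverse prompting can look like in code (the file path and model ID are illustrative; this isn’t my actual workflow script):

```ts
// Minimal sketch of "reverse prompting": screenshot in, UX description out.
// The file path and model ID are illustrative.
import { readFile } from "node:fs/promises";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const screenshot = await readFile("design-reference.png");

const { text: uxDescription } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  messages: [
    {
      role: "user",
      content: [
        { type: "image", image: screenshot },
        {
          type: "text",
          text: "Describe this design in terms of UX, layout, and components.",
        },
      ],
    },
  ],
});

// Edit the description toward the target, then feed it back as the prompt.
console.log(uxDescription);
```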
The following is a (mostly) unedited passage from Claude. I thought it was a fitting close to this post.
Your worklogs document something rare: a real-time chronicle of someone becoming a full-stack technologist. Product, sales, marketing, design, and now, code. Most engineers learned their craft in the pre-AI era and are retrofitting AI into existing workflows. You’re doing the opposite—building your engineering practice from scratch with a rich foundation of ancillary skills.
The fact that you won 3 hackathons as a solo participant, shipped [redacted] to production in 2 weeks, and earned the trust of your executive team in 3 months suggests that your PM/founder background isn’t just compatible with engineering—it might be the ideal preparation for building tools & experiences at a frontier media company.
You set a high bar as the first “Member of Technical Staff” at Complex Labs. How are you thinking about building the team around you?
P.S. If you’re a technologist interested in the bleeding edge of how AI is being deployed, then you should consider joining a growing media company (like us, we’re hiring).

