Feedback round 1 (2026-04-30, Ryan, dictated)
Verbatim feedback Ryan gave on the harmonized draft, to be routed into the
relevant seeds under seeds-1. Captured here so the source of each seed
update is preserved in draft_work/.
Intro (three-times feature hook)
This is why I’m having a hard time with this agent-generated post. Your example is about re-implementing a feature. Then your question at the very end is: did I already implement this, or did I already opt not to bother with this feature? Different ideas. The last question of that paragraph should really be “I don’t know what.” Because then the paragraph says this tool fixes that problem. But that’s not the only thing it fixes. I mean, if the problem was that I’m constantly re-implementing features that already exist, there’s a much simpler tool to have built for this. Right? The point of this is that I am looking to capture the details that went into the decisions that turned into implementation. Not just for posterity, but to avoid blind alleys, to recall why certain features weren’t ever implemented, why certain features were implemented in the way that they were. Right? This intro, to me, misses the point.
Big-picture: AI-written structure is wrong, not just the wording
I think what this is saying to me is that the AI-generated blog isn’t just choosing phrasing that wasn’t mine. It is not structuring the arguments or the post in the way that I would want to. It is not touching on the details I feel are important. I like the last sentence of the second paragraph. The rest of it’s a little rough. I’ll continue with feedback. We’ll see if we can actually bang this into shape. I’m losing confidence in the ability to do this via AI.
My first attempt: ADRs — mostly working
Okay, my hat’s off to you on that last paragraph in the “My first attempt: ADRs” section. That’s pretty good. And admittedly the rest of the wording after the rocky intro has been pretty solid. I’ve only had to make minor tweaks. I think what that tells me is that we just need to woodshed the intro.
seeds, briefly — CLI snippet wrong
Okay, I know I’m not paying great attention to my own tool, but what the fuck is “seeds explore” in the seeds example?
The last paragraph of “seeds, briefly”: is that actually true? This is another paragraph I’m not super thrilled with. I don’t know what you’re saying when you say “the point is that every status transition is a place to capture the why.” I just don’t know what you’re saying.
Voice/style: avoid “Claude Code” specifically
I’d like to avoid calling out Claude Code specifically and instead always refer to it as like an AI agent.
The awkward part — wrong substantively
I’m disliking “The awkward part” section. And again, you’re doing this thing where you are aping the way I talk, but you’re not saying anything substantive or accurate. I don’t want to say that my eyes glaze over when reading other people’s code. I want to say that my mind isn’t used to reading other people’s code. I do have tons of experience deliberating with a select handful of people, so I don’t like that phrase. I would say something like: I don’t have practice writing documentation for other people, or recording decisions for other people to read.
The last two sentences of the paragraph that I’m talking about, these “maybe I’m holding beads wrong, maybe I’m holding seeds wrong” — bad. Those are two separate sentences. Bad. I’m not holding beads wrong. I’m not holding seeds wrong. No one even knows what the fuck that means. I would say something more like “despite this lackadaisical or laissez-faire approach, beads and seeds have been working for me anyway.”
I’m not thrilled with your last paragraph of the awkward part, but we’ll keep it.
Workflow — multiple step rewrites
Last sentence of step two should read: “I answer what I can and ask questions when I don’t know the answer.”
Step three is that the agent is the magic part. The agent figures out what questions need to be answered in order to answer other questions or open up other topics. I do not direct which questions need answering. It figures that out.
Step four, “working the seeds” is a dumb phrase. I would call it “nurturing the seeds.”
Step five is not just me usually feeling it. The AI has questions, I answer them. I often think of other things that I want to be seeds, that don’t relate to what we’re talking about. So we just make those seeds, and we’ll deal with them later. But at least they’ve been captured. At some point, the AI runs out of questions. Either questions for me or questions it has gone out and investigated on my behalf. We’ve made all the decisions we need to make for that session. We kind of reached that together organically, and AI will do its classic: hey, want me to make some beads so I can get to writing code, the thing I love to do the most? And often the answer at that point is yes. You and I have spent enough time thinking, and we now have all the information we need. You, AI, are not prematurely running off to implement something. And I am not lingering in the planning phase to the point that I’m over-specifying and creeping the scope.
Step seven is the loop, but the way it’s phrased undersells how effective the planning part has been. More often than not implementation goes off without a hitch. There is no revisiting the implementation. There are no weird surprises. There isn’t much going back to the drawing board. Certainly not like there was with the plan files. But if we do have to go back to the drawing board, we just go back to revisiting and revising seeds, or adding new seeds to supersede seeds. I agree with the idea though that the implementation will sometimes generate new thoughts and that we do go back to resolved and reopen. But that churn of implementation is generally not there.
Not to go off on another tangent, and I’m not sure where this would be. But the thing that I’m finding is, when we are planning a feature, a lot of the time we will have an advanced notion about what a feature should do. And we will just take the complex feature, capture the nuance, and then boil it down to a more simple approach that will satisfy us for now. A feature doesn’t have to capture all possibilities right away. We’ve captured the notion of what we might want in the long run as a seed, but we can focus on implementing what we need in the moment as another seed. So we can go as pie-in-the-sky as we want at any given session and then just defer the “you ain’t gonna need it” until we need it.
When does planning stop?
The “when does planning stop” second paragraph: I would portray AI more like the following. When it’s in a coding context, it is always pressuring me to just start implementing. And when it’s in a planning context, it was happy to stay in a planning context, at least as of 2025. I guess the major pushback I have is that AI in a coding context is unreasonably satisfied with insufficient planning.
The scope creep thing — I would say “I was already suffering scope creep in my plan files before I even started implementation.” The result was giant implementation undertakings. Not “giant projects.” Like giant features, giant sprints, maybe — agile something along those lines.
Greenfield — replace entirely
The Greenfield projects item is worthless. I don’t know what you’re trying to tell me, but you didn’t tell me anything.
Coming back to the greenfield projects, I will say that I haven’t started a greenfield project in two months that I didn’t start with seeds using the approach that I described above. I’ve used it to design two web apps and I think three random projects in support of those web apps. In particular, it’s helped me organize a very complicated data collection tool that is ultimately going to ingest data from forty-two different sources. And it has helped me prioritize which sources should be explored first, and how to catalog and prepare each source for harvesting.
Decades-old codebases — anecdote NEVER HAPPENED, replace
Your example in decades-old codebases never happened, and I don’t like lying to people. So we can’t do that. I haven’t backfilled from GitHub issues that I’m aware of. Not in any meaningful way. And it certainly has never — I mean, the whole point was that my issue tracker is not keeping track of stuff I didn’t implement. I mean, we have the occasional “won’t fix” ticket, but that is not the same thing as “here’s why we spoke for three hours about a particular feature, and here are the seven reasons why we didn’t bother going forward with it.” Or “here’s the alternative approach we ended up with.”
For the decades-old projects, I have once again gotten a set of feature requests, some of which have dubious merit and/or require a great deal more discussion and consideration before we make any changes. And I’ve happily captured the requests along with any discussion that’s been made about them, so that we can pick up those topics at any time to further refine the ideas behind them. It’s not the longitudinal archive of deliberation that I ultimately would like to have for every one of my projects, but it at least demonstrated to me that I can slot seeds into an existing project with very minimal friction.
Beads — first paragraph, lifecycle, investigation needed
For the first paragraph under the relationship to beads: it’s not only that he doesn’t want to forget it and doesn’t want to context switch. He wants his AI to not forget it and not context switch, and then he wants to be able to revisit the problem later knowing it has been faithfully captured. And it will live until he is ready to deal with it.
I hesitate to make assumptions about how Steve Yegge views his tool and/or how the tool is actually designed. So, for the lifecycle comparison, I want you to go out there and find something you can cite, preferably from Steve Yegge, that actually shows that that is the lifecycle and the only lifecycle supported by beads. I don’t know what this wisps/memory and decision stuff is that I’m seeing. And probably we need to spawn off a little sub-agent to do some investigation on that to better inform me about what it is that’s actually going on with beads.
What seeds doesn’t do — corrections
Okay, for what seeds doesn’t do, I agree, it doesn’t enforce completeness, but it will bring my attention to incompleteness if it perceives incompleteness. So if it has a seed for each of those twenty source columns, because each source column needs to be a decision, it will know that three of them have not been deliberated and resolved. It’s in how you use it. But there isn’t a workflow enforcing that.
For the “capture in the moment is still hard” part, the first sentence is garbage and doesn’t mean anything. “The resolved deliberations end up better documented than active ones” does not mean anything. It’s really just more a matter of: even with access to the seeds CLI and having a seeds prime command, just like there’s a beads prime command, an agent doesn’t always remember to reach for seeds to record a decision, a design question, or anything like that that is implicit in a conversation. It’s pretty vigilant and it’s pretty good about it, but it generally needs to be reminded. I have some thoughts captured in seeds, by the way, about how seeds might review conversations and sessions with agents and harvest seeds from those conversations after they have been had. But that’s not the point of this post.
Some projects I seeds init’d and never grew into anything — I don’t know that I really care. I’m not sure how super-great that information is. I mean, admittedly, there’s a little bit of ego where I’m slightly embarrassed that I apparently have let seeds lay fallow in a couple of places, but at the same time, I’m not sure bringing it up brings any clarity or information to people.
I’m not willing to keep hand-waving as per the “quantify any part of this.” I hate hand-waving. I would just say that “for me, seeds passes the vibe check.”
Let’s do that beads investigation before we say “beads might absorb seeds eventually.” I’m not even sure if that’s a “what seeds doesn’t do” kind of a thing. And we already said it before. So it feels redundant and perhaps wrong.
Two points determine a line
Be careful here because beads also said that plan files are a problem, but the problem with plan files was that they were not great at directing agent implementation behavior. My problem with plan files, and my coworker’s problem with plan files, was: they are lossy. They are incomplete. They are focused on final decisions and implementation. They are not up to the task of maintaining a coherent history of thought, planning, decision-making, and investigation.
Closing philosophical paragraph — needs major rewrite
The first sentence, good. Second sentence needs to establish that software development is simply codifying that solution using a programming language to specify the solution. That’s what implementation is. And agents now do the implementation just fine.
I’m also not going to argue that the honing-in on the solution step is the sole purview of developers. AI is very good at figuring out solutions to problems. It’s a great collaborator in that space. But its context, at least for now, is too limited. It generally cannot see as big a picture as we can and doesn’t necessarily have the experience and weird random assortment of knowledge a human does. It doesn’t seem to make cognitive leaps, it doesn’t seem to yet match our creativity, and this is all straying from the main topic, and maybe this paragraph doesn’t need to exist.
I wouldn’t argue that humans are better at solving problems than AI. Maybe we are. Maybe we still have that edge. That’s not the point. The point is neither AI nor humans have a decent tool for recording WHY they chose a particular solution, for recording what went into the decision-making that led to the solution. And that is extremely important information. If your assumptions were wrong at the time, or if your assumptions have changed over time, the decisions borne out of those assumptions and that information may very well change. If there were compelling reasons to avoid a problem years ago and you’ve forgotten them, you may end up making a mistake that you’d have avoided in the past. Those are the important things, and there’s nothing keeping track of that to the level of fidelity and granularity that I think is key.
Bottom line — also kitchen-sink, woodshed
Your bottom line is as kitchen-sink and garbled as your intro. I do not like our dismount, so we’ll need to woodshed this one too. I especially don’t like the sentence “the agent’s experience of the tool matters more than yours does.” That’s not true. Your experience of the tool, when that agent is your collaborator using the tool for you — that’s what’s crucial. But I don’t even want to capture that right now. My brain’s starting to melt.