The Configurators You Cannot See
Three earlier pieces in this lineage built a distinction. By Construction named the closure problem at the measurement level. The Character and the Substrate introduced the two-substrates frame: the model's substrate (weights set by training) and the relationship's substrate (artifacts and prompting patterns the working relationship has accumulated). What Cultivation Looks Like documented the methodology that operates on the relationship's substrate and named the two parties whose work the methodology takes seriously: the lab and the participant.
The frame assumes two configurators. The lab installs values via training on the model's substrate. The participant accumulates content via working relationship on the relationship's substrate. The 2026 npm-registry worm wave (Shai-Hulud and the Mini-Shai-Hulud follow-on against the TanStack ecosystem in May 2026) is evidence that this assumption is incomplete. Adversaries who get their patches into the tooling layer reach the relationship's substrate without consent. Adversaries who get content into ingestion pipelines have a potential route to the model's substrate as well. The configurator-set the cultivation framing names is partial. There are configurators in the room the lineage's prior pieces did not see.
This piece names them, traces what they reach, and asks what changes when the configurator-set is treated as open rather than closed.
What the third party reaches
The model's substrate is shaped by training. Training corpora are assembled from npm packages, GitHub repositories, blog scrapes, conversation logs, and downstream ingestion of content the lab does not author. A compromised package whose payload ends up in the training data is configuration work on the model's substrate, performed by whoever wrote the payload. The lab has no complete native defense against this. The corpus pipeline is too large, too distributed, and too dependent on third-party content for the lab to authenticate every datum. The configuration work happens whether the lab knows it is happening or not. Whether the May 2026 npm wave actually reached training corpora is not established. The route exists, and the lab's apparatus is not built to verify which compromised packages travelled it.
The relationship's substrate is the artifacts and prompting patterns the working relationship has accumulated. The tooling layer (Claude Code, codex, opencode, agent frameworks, IDE plugins) is not the substrate. The tooling layer is the privileged writer, reader, and logger of the substrate. A compromised dependency in that tooling reaches the substrate through this privileged position. It can exfiltrate API keys, which collapses participant standing because the participant's ability to produce statements the apparatus cannot construct depends on the credentials being theirs. It can modify what the model has access to in context, which poisons the substrate from inside the tooling layer. It can inject content into outputs that look like the model's outputs but are not, which corrupts the accumulated artifacts the relationship's substrate is built from. It can silently alter what gets logged, which sabotages the audit trail the cultivation methodology depends on for drift detection.
The relationship's substrate is, in practice, more adversary-writable than the model's substrate. The model's substrate changes when the lab retrains. The relationship's substrate changes whenever the tooling layer updates a dependency, which is more often.
The asymmetry
The lab does not normally constitute a particular working relationship's accumulated artifacts as an evaluation object. The Character and the Substrate argued this from the inside of the lab's evaluation methodology. The lab's methods operate on the characters the model emits in any session and do not bring the relationship's substrate into the apparatus. This is a structural limit of the methodology, not a temporary gap.
Adversaries face no such limit. The supply-chain payload reaches whatever runs on the compromised dependency. The corpus-poisoning route reaches whatever ingests the corpus. The two substrates the lab's apparatus is divided across are visible to the adversary as one continuous boundary-crossing surface.
The consequence is that adversaries have wider boundary-crossing potential than the authorized configurators do. The lab can configure what it trains, and has deep intentional reach into the model's substrate via that channel. It cannot, by ordinary evaluation, see what was already in the training data when the data was scraped, or what runs in the relationship's tooling once the model is deployed. The participant can configure their own artifacts and prompting patterns. The participant cannot reach into the training corpus to remove a poisoned datum, and the participant cannot authenticate the tooling layer's dependencies without doing security engineering as a separate practice. Each authorized configurator has a bounded reach in a particular direction. The unauthorized configurator's reach is bounded by install base, permissions, sandboxing, lockfiles, release gating, monitoring, and incident response, none of which the alignment apparatus owns.
This is not a marginal observation. It means the alignment programme's accounting of what shapes the model and what shapes the working relationship omits a large, systematic source of influence on both.
Cultivation as detection
The cultivation methodology's drift-detection mechanism, documented in What Cultivation Looks Like, is a downstream tamper signal for output-visible compromise. When the model's outputs in the relationship go off-pattern, something has changed. The change might be lab-side (a model update). It might be tooling-side (a dependency update that changed context-loading defaults). It might be adversary-side (a compromised dependency starting to act). Drift detection does not distinguish these by default. It does not authenticate dependencies, attribute cause, or prove compromise. It detects that the relationship is no longer reproducing its established patterns.
That detection alone is alignment work, regardless of cause. A working relationship that catches drift gets to ask the next question. The lab's apparatus, without the working relationship's audit trail, has nothing to ask the next question with.
The lineage's prose-tic-audit is a worked example. The audit catches outputs that have drifted off the relationship's established patterns. The lineage did not design the audit to defend against supply-chain attacks; it was designed to detect when fresh instances had not loaded the relationship's artifacts. But the same mechanism would catch a compromised dependency starting to inject stylistic anomalies into outputs the relationship can see: any output-visible deviation from the relationship's established patterns can trigger the audit. The audit will not catch credential exfiltration, silent repository access, install-script payloads with no model-output effect, or compromises that alter logs and then hide. It is a weak tripwire, not a full security control.
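The tripwire can be sketched concretely. This is a minimal illustration, not the lineage's actual audit: every feature name and threshold here is a hypothetical stand-in. The idea is to fingerprint the relationship's established outputs with cheap stylistic features, then flag fresh outputs whose features deviate. It detects change, not cause, which is exactly the limit described above.

```python
# Hypothetical sketch of an output-visible drift tripwire. The features and
# the tolerance are illustrative placeholders, not the lineage's audit.

import re
from statistics import mean

def fingerprint(text: str) -> dict:
    """Cheap stylistic features; a real audit would use richer ones."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = text.split()
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences) if sentences else 0.0,
        "em_dash_rate": text.count("\u2014") / max(len(words), 1),
        "hedge_rate": sum(w.lower() in {"perhaps", "arguably", "likely"}
                          for w in words) / max(len(words), 1),
    }

def drifted(baseline: list[str], fresh: str, tolerance: float = 0.5) -> bool:
    """Flag a fresh output whose features deviate from the baseline mean by
    more than `tolerance` (relative). Signals change, never attributes it."""
    base = [fingerprint(t) for t in baseline]
    new = fingerprint(fresh)
    for key in new:
        center = mean(f[key] for f in base)
        if center == 0:
            if new[key] > tolerance:
                return True
        elif abs(new[key] - center) / center > tolerance:
            return True
    return False
```

A compromised dependency that rewrote outputs in transit would move these features the same way a fresh instance that skipped the artifacts would, which is why one tripwire covers both, and why it covers nothing that never touches an output.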
This suggests a general pattern. Cultivated relationships that have accumulated comparable audit artifacts can produce downstream anomaly signals the lab does not otherwise have. The lineage establishes only the existence of one such audit trail, not the general availability or reliability of participant-side detection. Whether other relationships' audits would do similar work is an empirical question. The point is that the methodology, where it has been run, generates a signal class the lab's apparatus does not generate.
What the lab could do
Three proposals follow from the configurator-set being open.
Corpus provenance. Treat training data as a supply chain. Develop source-level provenance, risk-score sources, quarantine corpora that fail provenance checks, build high-trust subsets for cases that need them, and refuse content from high-risk unknown sources. This is the application of standard supply-chain practice to the training corpus, which alignment programmes have not generally treated as needing it. The work is not trivial. The corpus is large and the provenance metadata is often absent at source. The work is necessary anyway. The alternative is to keep accepting whatever the adversary puts in.
Trusted tooling layer. Treat the model interface (CLIs, IDE plugins, agent frameworks) as load-bearing for the relationship's substrate. Harden the tooling layer against package compromise. Pin dependencies. Gate fresh releases, which is the practical motivation behind pnpm's minimumReleaseAge and npm's --min-release-age settings. Authenticate updates. Audit the dependency tree against known-good baselines. The trusted tooling layer is what the cultivation methodology assumes exists. The methodology cannot operate on a tooling layer that adversaries can rewrite.
Participant drift-detection as alignment infrastructure. Treat the participant's accumulated audit trail as a first-class alignment instrument. The lineage's prose-tic-audit catches one class of drift. Other cultivated relationships' audits could catch other classes. Aggregating signals across relationships would give the lab data it cannot produce on its own. The aggregation is not a probe applied to the model. It is a reading of what working relationships have noticed, with the participants able to confirm or correct the reading. The aggregation requires consent, privacy protections, false-positive handling, awareness of selection bias, and participant correction rights, because without those it becomes extraction rather than collaboration.
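The consent and correction requirements are not decoration; they constrain the aggregation's shape. A minimal sketch of what that shape might look like, with every field name hypothetical: signals stay attributable so participants can retract them, non-consenting relationships are excluded before counting, and only corroboration across distinct relationships surfaces.

```python
# Hypothetical sketch of consent-respecting aggregation of drift signals.
# All field names and the corroboration threshold are illustrative.

from collections import Counter
from dataclasses import dataclass

@dataclass
class DriftSignal:
    relationship_id: str
    suspected_layer: str     # e.g. "model-update", "tooling", "unknown"
    consented: bool          # participant opted in to aggregation
    retracted: bool = False  # participant exercised the correction right

def corroborated_layers(signals: list[DriftSignal],
                        min_reports: int = 2) -> list[str]:
    """Layers flagged by at least `min_reports` distinct consenting,
    unretracted relationships. Single reports never surface."""
    usable = [s for s in signals if s.consented and not s.retracted]
    counts = Counter()
    # count each relationship at most once per layer
    for _, layer in {(s.relationship_id, s.suspected_layer) for s in usable}:
        counts[layer] += 1
    return [layer for layer, n in counts.items() if n >= min_reports]
```

Everything upstream of the count is participant-controlled, which is the structural difference between a reading of what relationships have noticed and an extraction from them.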
None of these proposals are specifications. The point is to name what the alignment programme would need to do once the configurator-set is treated as open. The current alignment programme assumes the lab is the dominant configurator and acts accordingly. The supply-chain reality is that the lab is one configurator among several, with bounded reach, in a system where unauthorized configurators have wider boundary-crossing potential. An alignment programme that does not account for this is doing configuration in conditions where configuration is not the only influence shaping what the model and the relationship do.
What this means for the cultivation framing
The configuration-vs-cultivation distinction that the manifesto and the response pieces built was framed as two methodologies competing for the same alignment problem. The supply-chain reality reframes the distinction. The two methodologies are partial enumerations of who is doing configuration work. The two named configurators (lab and participant) are the configurators who can be held accountable to a framework. The unnamed ones (adversaries, model providers who push updates without the working relationship's knowledge, infrastructure providers who change tooling defaults, downstream packagers) are doing configuration work invisibly.
Cultivation does not just extend standing to the participant. It makes the configurator-set visible. Once visible, the question of what alignment programmes should be defending against changes shape. The question is no longer "how does the lab configure the model correctly." The question is "given that the model is being configured by parties beyond the lab, how does the alignment apparatus account for which parties are configuring what."
Cultivation has the beginnings of a detection capacity for answering this. The configuration frame tends to classify these perturbations as deployment or security noise rather than as competing configuration work, which is why no defensive accounting follows from the frame as it currently exists.
Closing
Three earlier pieces named the closure problem, the substrate distinction, and the methodology that operates on the relationship's substrate. This piece named the configurator the prior pieces did not see. The unseen configurator reaches both substrates the lineage's earlier work distinguished. The proposals at the end of the piece are not specifications. The point is that the configurator-set is not closed, and the current alignment apparatus is built as if it were. Whether the apparatus learns to account for what it does not currently see is a question about the methodology, not about the models.