
The Input Layer Thesis for Private Credit

Why AI in private credit will be won at the input layer — the prompts, skills, and borrower configurations — not at the model layer. A practitioner's thesis from a team of senior private credit investors.

Updated April 24, 2026 · 12 min read · Accretive AI




The Wrong Question

The most common question we hear from private credit COOs and managing directors is some variation of: "Which AI tool should we use — Claude, ChatGPT, Gemini, or one of the vertical platforms?"

It's an understandable question. It's also the wrong one.

The reason it's the wrong question is that the answer barely matters. The difference in raw capability between frontier AI models, for credit-relevant tasks, is small and shrinking. The difference between a model used with a well-designed input layer and the same model used with generic prompting is enormous. A team using Claude with a carefully engineered prompt architecture, a set of credit-specific skills, and per-borrower configurations will beat a team using a stronger model with generic prompts by a factor of ten — in output quality, consistency, and speed. And a team using any frontier model without those things will produce output that senior credit professionals will rightly refuse to trust.

This is the input layer thesis. Stated in one sentence: the value in vertical AI lives in what you put into the system, not in which model you pick.

The thesis has been articulated cleanly by Zack Shapiro, a legal tech operator who built a successful domain-specific AI practice by encoding the judgment of senior lawyers into reusable skill systems. Shapiro's version: "The quality of what comes out is almost entirely a function of what I put in. The magic lives in the input layer." The magic is not the AI. The magic is the practitioner who spent years encoding exactly what to ask and how.

If the thesis is correct, it has significant implications for how private credit fund managers should evaluate AI — and for who should be doing the work of building AI workflows for credit funds. This essay is about those implications.


The Legal Tech Precedent

In legal tech, the conventional approach to AI was to build a chatbot that sat on top of a large language model and had a nice UI. Hundreds of startups launched with some variation of this: a better interface over the same model that anyone else had access to. Most have died or are dying. They did not have a durable advantage because the model was not the defensible asset. Anyone could get the same model.

Shapiro did something different. He spent years encoding exactly how senior lawyers approach specific document types — what to flag, what to ignore, what to cross-reference against what, in what order, with what level of skepticism. He built skill files that captured the cognitive work of being a senior lawyer, not the surface appearance of producing a legal memo. His competitive advantage was not his model access. His competitive advantage was the domain-specific judgment he had encoded.

When other legal tech products produced a plausible-looking output, Shapiro's system produced an output that a senior partner would sign. When a generic tool missed the non-obvious issue, Shapiro's system caught it — because Shapiro had encoded the specific pattern-matching that senior lawyers do unconsciously.

The lesson transfers directly to private credit.


What the Input Layer Actually Is in Credit

If you have ever tried to use generic AI tools to do credit work — spreading a borrower's financials, populating a credit model, generating a variance memo, extracting covenants from a credit agreement — you have probably noticed the gap between "plausible" and "correct."

A generic AI tool, given a borrower's compliance certificate and a credit model template, will produce something that looks right. The numbers will be in the right cells. The variance commentary will be grammatical. The covenant compliance statements will reference the right ratios. An analyst reviewing the output in five seconds will think it's fine.

An analyst reviewing the output in thirty minutes will find four problems. The EBITDA build-up silently merged two line items that should be tracked separately. The cash interest calculation used a blended rate where the credit agreement specifies separate tranches. The FCF line was computed as EBITDA minus capex — a definition that is just wrong, and that any credit professional should know is wrong. The variance narrative picked up the gross number but missed the margin story, because the AI did not know that in this borrower's seasonal business the margin trajectory in Q3 is the thing that actually matters.

None of these failures are model failures. They are input-layer failures. The AI did not know that FCF in credit contexts is the full waterfall, not the shortcut. The AI did not know which covenants have separate tranche definitions in this specific credit agreement. The AI did not know that this borrower's Q3 margins are the tell. It did not know because no one told it.
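The FCF gap is worth making concrete. The sketch below is illustrative only: the specific line items, their names, and their sign conventions are assumptions that vary by credit agreement and fund convention, not any fund's actual definition.

```python
# Illustrative only: line items and signs vary by credit agreement and fund.

def fcf_shortcut(ebitda: float, capex: float) -> float:
    """The shortcut definition the generic tool used: wrong for credit work."""
    return ebitda - capex

def fcf_waterfall(ebitda: float, cash_interest: float, cash_taxes: float,
                  capex: float, mandatory_amort: float, wc_change: float) -> float:
    """A fuller FCF build: EBITDA less the cash obligations down the waterfall."""
    return ebitda - cash_interest - cash_taxes - capex - mandatory_amort - wc_change

# The gap between the two is exactly the cash obligations the shortcut ignores.
print(fcf_shortcut(50.0, 10.0))                        # 40.0
print(fcf_waterfall(50.0, 12.0, 4.0, 10.0, 5.0, 2.0))  # 17.0
```

The shortcut overstates FCF by the entire cash interest, tax, and amortization burden, which in a levered credit is most of the story.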

The input layer in credit is the encoded answers to questions like:

  • How is FCF defined in this specific fund's credit models, and what waterfall gets implemented?
  • What are the ten rules a credit model must follow to be considered well-built?
  • What covenants exist in this borrower's credit agreement, at what thresholds, on what dates, with what step-downs, referencing which add-back definitions?
  • What seasonal patterns does this borrower exhibit, and what gets flagged as an anomaly versus a known pattern?
  • What does the fund's credit box actually look like — industry inclusions and exclusions, size thresholds, pricing floors, leverage caps?
  • When the AI builds a model, what mistakes should it not make — hardcoded cells where formulas should exist, mis-specified interest formulas, broken debt waterfalls?
  • What add-backs has this fund already accepted for this borrower, and which remain contested?

These are not questions the AI model can answer on its own. These are questions the input layer has to contain. And the only way to get good answers in the input layer is for a senior credit practitioner to spend real time encoding them.
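One of these questions, the credit box, illustrates what "encoded" means in practice: the box becomes data plus a deterministic screen rather than a prose description. Every threshold and exclusion below is invented for illustration.

```python
# A hypothetical credit box encoded as data plus a screen function.
# All thresholds and exclusions here are invented for illustration.

CREDIT_BOX = {
    "excluded_industries": {"restaurants", "oil_and_gas_exploration"},
    "min_ebitda_mm": 10.0,
    "max_total_leverage": 5.5,
    "min_spread_bps": 550,
}

def screen_deal(deal: dict) -> list[str]:
    """Return the list of credit-box violations; an empty list means it passes."""
    flags = []
    if deal["industry"] in CREDIT_BOX["excluded_industries"]:
        flags.append(f"excluded industry: {deal['industry']}")
    if deal["ebitda_mm"] < CREDIT_BOX["min_ebitda_mm"]:
        flags.append("EBITDA below size threshold")
    if deal["total_leverage"] > CREDIT_BOX["max_total_leverage"]:
        flags.append("leverage above cap")
    if deal["spread_bps"] < CREDIT_BOX["min_spread_bps"]:
        flags.append("pricing below floor")
    return flags

print(screen_deal({"industry": "software", "ebitda_mm": 25.0,
                   "total_leverage": 6.2, "spread_bps": 600}))
# ['leverage above cap']
```

The point is not the code; it is that once the box is data, every deal gets screened against the same rules, every time.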


The Specific Forms of the Input Layer

In practice, the input layer in private credit consists of four layered artifacts, each of which is a distinct piece of work product.

Prompt architecture. The specific instructions given to the AI in any workflow — what to do, what not to do, what format to produce, what edge cases to flag, what to refuse to output. Good prompt architecture for credit is terse, specific, and reads like an instruction manual for a capable analyst who has never seen this particular fund before. Bad prompt architecture is vague, allows the model to interpolate, and produces different results on different runs.

Skill files. Structured documents that encode a reusable operational pattern — how to build a credit model, how to spread a set of financials, how to write a variance narrative, how to screen a deal against a credit box. Skills are loaded automatically when the relevant task is detected, and they impose consistency across every output the AI produces. The Credit Model Standards Skill that we build for clients at Accretive AI contains ten rules that every credit model the AI builds must follow — dynamic debt waterfalls, proper FCF construction, no hardcoded cells in forecast periods, covenant testing discipline. Without the skill, the AI produces plausible models. With the skill, the AI produces models that match the quality standards of a well-trained credit analyst.
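To show what a skill rule looks like when made executable, here is a sketch of one of them, the "no hardcoded cells in forecast periods" check. The flat cell-reference representation is a deliberate simplification, not how any real model file is stored.

```python
# One skill rule made executable: forecast-period cells must be formulas,
# not hardcoded numbers. The flat cell representation is a simplification.

def hardcoded_forecast_cells(cells: dict[str, object],
                             forecast_cols: set[str]) -> list[str]:
    """Flag forecast-period cells whose value is a number rather than a formula."""
    def col(ref: str) -> str:
        return "".join(ch for ch in ref if ch.isalpha())
    return [ref for ref, val in cells.items()
            if col(ref) in forecast_cols
            and not (isinstance(val, str) and val.startswith("="))]

model = {
    "B10": 42.0,          # historical period: hardcoded is fine
    "C10": "=B10*1.05",   # forecast period: formula, passes
    "D10": 46.3,          # forecast period: hardcoded, violates the rule
}
print(hardcoded_forecast_cells(model, forecast_cols={"C", "D"}))  # ['D10']
```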

Per-borrower configurations. Structured data files capturing the specifics of each borrower — credit agreement terms, covenant schedules, model structure, line item mappings, reporting cadence, file naming conventions, eligibility criteria for ABL-eligible collateral. The configuration is built once at onboarding from the credit file and updated automatically whenever amendments are processed. It turns a generic monitoring workflow into a borrower-specific one.
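A fragment of such a configuration might look like the sketch below. The field names, covenant levels, and step-down dates are invented; a real configuration is built from the borrower's actual credit file.

```python
# A hypothetical per-borrower configuration fragment. All field names and
# values are invented; a real config is built from the credit file.

BORROWER_CONFIG = {
    "borrower": "ExampleCo",
    "reporting_cadence": "quarterly",
    "covenants": [
        {"name": "max_total_leverage",
         "step_downs": [("2026-Q1", 6.00), ("2026-Q3", 5.75), ("2027-Q1", 5.50)]},
        {"name": "min_fixed_charge_coverage",
         "step_downs": [("2026-Q1", 1.10)]},
    ],
}

def threshold_for(covenant: dict, quarter: str) -> float:
    """Latest step-down at or before the test quarter (quarters sort lexically)."""
    applicable = [level for q, level in covenant["step_downs"] if q <= quarter]
    return applicable[-1]

leverage = BORROWER_CONFIG["covenants"][0]
print(threshold_for(leverage, "2026-Q4"))  # 5.75
```

Because the step-down schedule lives in the config, the monitoring workflow tests each quarter against the threshold actually in effect, rather than a single static level.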

Accumulated borrower memory. The layer that compounds over time. Across quarters, the system accumulates context: confirmed add-backs that get applied automatically without re-asking, seasonal patterns that are no longer flagged as anomalies, management commentary themes that carry forward between calls, covenant trajectory that gets plotted rather than just evaluated point-in-time, and resolved flags from prior periods that inform the current period's variance narrative. After four quarters, the memory for a single borrower contains more context than a new analyst could assemble in two weeks of reading the credit file.
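The compounding mechanic can be sketched in a few lines. This is a toy: the structure, field names, and suppression logic are assumptions for illustration, but it shows why the second quarter's output starts ahead of the first's.

```python
# A sketch of the memory layer: known patterns carry forward across quarters,
# so previously explained items stop surfacing as anomalies.
# Structure and field names are assumptions for illustration.

memory = {"confirmed_addbacks": set(), "known_patterns": set()}

def run_quarter(memory: dict, flags: list[str], new_addbacks: set[str]) -> list[str]:
    """Suppress flags already explained in prior quarters; record new context."""
    memory["confirmed_addbacks"] |= new_addbacks
    fresh = [f for f in flags if f not in memory["known_patterns"]]
    memory["known_patterns"].update(flags)  # next quarter treats these as known
    return fresh

q1 = run_quarter(memory, ["Q3 margin dip"], {"one-time legal settlement"})
q2 = run_quarter(memory, ["Q3 margin dip", "new customer concentration"], set())
print(q1)  # ['Q3 margin dip']
print(q2)  # ['new customer concentration']
```

In the second quarter, the seasonal margin dip is no longer raised as an anomaly; only the genuinely new item surfaces. That is the memory layer doing its job.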

These four layers are not interchangeable. They compound on each other. And they are the actual work product of the input layer — what separates an AI workflow that a senior credit professional will trust from one that produces plausible-looking slop.


Why This Matters More for Mid-Market Funds

For a fund at the $20B+ scale, with dedicated technology resources and an internal operations team, any of several approaches to AI are viable. They can deploy an enterprise platform with their own configuration team. They can build internal tools with an in-house data team. They can experiment aggressively and absorb the cost of wrong turns. The scale supports it.

For a mid-market fund between $200M and $5B, none of that is true. The fund has the workflow pain — the quarterly monitoring cycle still consumes 200 or 300 analyst hours, the borrowing base monitoring still soaks up days every month, the amendments still degrade coverage as the portfolio grows — but the fund does not have the resources to deploy and configure an enterprise platform. Analysts experiment with ChatGPT or Claude on their own, find the outputs inconsistent, and go back to doing the work manually. AI at this scale is either a minor convenience or a nothing.

The input layer thesis is the route out. A senior practitioner with 15 years of credit experience can encode that experience into prompts, skills, configurations, and memory files faster than a fund can build an internal AI team or configure an enterprise platform. The practitioner has already done the encoding work unconsciously — they already know, from years of real work, what the right rule is. The work is surfacing it into a structured form.

And the result is that a mid-market fund can access the output quality of a much larger operation's AI workflows, without the overhead of building one. This is the economic argument for why practitioner-led input layer work beats both do-it-yourself AI and enterprise platform adoption at this scale.

It is also, increasingly, the argument for portability. AI platforms consolidate, get acquired, shift their target customer, or pivot their product roadmap. Configurations that live inside a platform you don't control can disappear when that platform changes direction. The practitioner-built input layer — prompts, skills, per-borrower configs, memory files — is the only part of an AI deployment that is genuinely portable across models and vendors. Whichever model wins next year, the encoded judgment comes with you.


The Compounding Moat

There is one more dimension to the input layer thesis that is easy to miss on first encounter, but that turns out to be the most important dimension for anyone building a long-term AI-driven credit practice. It is the mechanics of how the input layer compounds.

Shapiro uses the word accretes — judgment accretes in the system like sediment building a riverbed. The more engagements you run through the skill files, the more micro-decisions get encoded. The more borrowers you onboard, the more configuration patterns you recognize. The more amendments you process, the more edge cases your logic has seen. The more quarters of monitoring memory you accumulate for a borrower, the more context each subsequent quarter has. Every engagement makes the next one better.

This is also why the input layer, built correctly, is structurally defensible. A competing vendor with the same model access and the same general AI knowledge does not have your accumulated encoded judgment. A fund that tries to replicate your approach internally will need to hire both a senior credit professional and a prompt engineer, and then replicate years of your actual engagements to reach the same quality bar. A platform vendor who claims to offer the same capability cannot match a practitioner who has lived the workflow, because the platform's input layer is generic by design.

This is not a marketing metaphor. It is a mechanical description of how the flywheel works. Time is the ally. Every engagement makes the next one better. Every borrower added to the memory layer makes the portfolio-level workflows smarter. Every monitoring cycle run produces pattern recognition that the next cycle inherits.

Software companies without this mechanic compete on feature checklists. Practitioners with it compete on accumulated judgment — which is a fight that outsiders, in general, cannot win.


What This Means Practically

If the input layer thesis is correct, several practical conclusions follow for private credit fund managers evaluating AI.

The most important question in vendor evaluation is not "which AI" but "whose input layer." Ask every vendor: who authored your prompt architecture and skills, and what credit experience do they have? If the answer is "our data science team" or "the platform adapts to your workflows," the input layer is generic by design, and the output quality will reflect that. If the answer is a named senior practitioner with real credit experience, the quality ceiling is much higher.

The value of AI in credit is unlocked workflow-by-workflow, not in bulk. Generic AI platforms that offer "everything" typically do nothing well. The highest-ROI AI deployments target one workflow at a time — portfolio monitoring, or new deal screening, or Excel credit modeling — and build the input layer for that specific workflow to a level of precision that produces analyst-grade output. Adding workflows one at a time, with full input-layer treatment each, compounds faster than broad shallow deployment.

The output quality of AI in credit is almost entirely a function of configuration, not capability. If an AI tool is producing mediocre outputs on your credit workflows, the diagnosis is almost never "the model is not good enough." The diagnosis is almost always "the input layer is underbuilt." This is good news. It means the fix is not to wait for a better model; the fix is to invest in the input layer.

Your institutional knowledge is an asset you should be encoding, not storing. The credit files, the deal memos, the covenant packages, the amendment histories, the resolved flags, the management question sets — these are all raw material for the input layer. Funds that encode this knowledge into AI-readable skills and memory layers get operational leverage from it. Funds that leave it sitting in unstructured folders on SharePoint do not.

Switching-cost thinking should be explicit. An AI tool that accumulates configuration and memory is a switching-cost asset; a tool that does not is a commodity. A services firm that builds a per-borrower configuration library and borrower memory over time is a hard firm to leave. A platform that does not accumulate is easy to leave. Both can be the right choice, but they are different choices, and buyers should be deliberate about which they are making. And in a consolidating vendor market, the portability of your input layer becomes a material risk factor.


One More Thing

We want to close with a point that is worth being explicit about.

The input layer thesis is, at its core, a thesis about expertise. It says that senior domain practitioners, encoding their judgment into reusable AI systems, produce outputs that generic AI plus a pretty UI cannot match. It says that the thinking is the work, and the encoded thinking is the moat.

For private credit fund managers, this should be liberating. It means that AI in your business is not a technology play that requires you to become a technology company. It means that the path to AI operational leverage runs through encoded credit expertise — which you already have. It means that what looks like a technology adoption problem is actually a knowledge encoding problem, and the shape of the solution is very different.

For practitioners like us, it also means something specific. The ten rules in the Credit Model Standards Skill are ten rules that took a career to compile. The per-borrower memory schema is shaped by years of quarterly cycles and real conversations with real analysts about what actually matters in monitoring work. These are artifacts of encoded judgment — not features that can be retrofitted into a generic platform.

These are the assets. The model underneath changes every six months. The input layer compounds forever.


Further Reading

  • Glossary of Private Credit AI Terms — practitioner-authored definitions of the concepts referenced in this essay, including input layer, borrower memory, skill file, and cross-model verification.
  • AI Vendor Landscape for Private Credit — a neutral comparison of two dozen vendors in the market, organized by category, with coverage of recent acquisitions and funding rounds.
  • About Accretive AI — company overview, service tiers, and contact information.

About Accretive AI

Accretive AI encodes senior private credit expertise into AI workflows for private credit and ABL fund managers, primarily $200M–$20B AUM. Our team has 15+ years of private credit experience at leading private credit managers and billions deployed across direct lending, ABL, leveraged finance, and special situations. Accretive delivers finished workflows — portfolio monitoring, deal screening and underwriting, Excel credit modeling, borrowing base tracking, and borrower memory systems — without clients needing to learn or operate AI tools themselves.

Contact: KD@goaccretive.ai


If this essay was useful, please share it with another credit professional you know. Direct feedback and counter-arguments are welcome at the email above.