The Core Loop
Why LLMs Haven't Revolutionized Decision-Making (Yet)
Maximilian Hofer
9 January 2026

Decision-making is the central task of knowledge work. Decisions are often based on incomplete, unstructured, and scattered information. From finance to law and insurance, the raw material for high-stakes decisions rarely arrives in a clean, organized format. Instead, it exists as a sprawling corpus of information: contracts, reports, messages, and tacit knowledge.
The status quo for completing this task is fundamentally human: a process that is cognitive, artisanal, and slow. For example, loan underwriters sift through application packages to reach an underwriting decision, and analysts perform company diligence by painstakingly reading through data rooms to reach an investment decision.
To make a decision, knowledge workers repeatedly perform three sub-tasks:
Read: Consume content
Recombine: Dissect, contextualize, and augment content
Write: Generate new content
We call the read-recombine-write sequence the core loop of decision-making. A single decision typically comprises many iterations of the core loop. While the mechanics of the core loop have remained largely unchanged for decades, AI promises to transform them.
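The read-recombine-write sequence can be sketched in code. This is a minimal illustration, not a reference to any particular system: the function names, the `Document` type, and the placeholder logic inside each step are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

def read(docs: list[Document]) -> list[str]:
    """Read: consume content from heterogeneous sources."""
    return [d.text for d in docs]

def recombine(passages: list[str], context: str) -> list[str]:
    """Recombine: dissect, contextualize, and augment content.

    Here a placeholder that tags each passage with decision context;
    a real system would extract, cross-reference, and enrich.
    """
    return [f"[{context}] {p}" for p in passages]

def write(fragments: list[str]) -> str:
    """Write: generate new content, here a simple memo."""
    return "\n".join(fragments)

def core_loop(docs: list[Document], context: str) -> str:
    """One pass of the read-recombine-write core loop."""
    return write(recombine(read(docs), context))

docs = [
    Document("contract.pdf", "Clause 7 limits liability."),
    Document("email.eml", "Counterparty requests an extension."),
]
print(core_loop(docs, "deal-review"))
```

In practice each stage hides most of the difficulty: reading means parsing messy formats, recombining means judgment and domain knowledge, and writing means producing an output someone will act on.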
Naive AI Applications
Large Language Models (LLMs) present an intriguing parallel. At their core, LLMs ingest and generate sequences of tokens; in other words, they read, recombine, and write content. While this symmetry suggests opportunity, naive applications quickly collide with practical reality. Four challenges emerge in production settings:
Scale: Processing large numbers of heterogeneous documents (often >10,000 pages per decision).
Reliability: Ensuring minimal error rates, especially false negatives (missed risks, clauses, or exceptions).
Workflow Coordination: Orchestrating system components such as document routing, extraction validation, automated cross-referencing, LLM retries, and extraction lineage.
Expert Control: Integrating expert know-how and human control, such as editable business logic.
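The reliability and workflow-coordination challenges often surface together in the same place: an LLM extraction that must parse cleanly and pass validation before it can flow downstream, with retries when it does not. The sketch below is illustrative only; `call_llm` is a hypothetical stand-in for a real model API, and the validation rule is an assumed example of editable business logic.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    # Real outputs are sometimes malformed, which is why we retry.
    return '{"risk_flags": ["liability cap"], "pages_reviewed": 12}'

def extract_with_retries(prompt: str, max_retries: int = 3) -> dict:
    """Retry an LLM extraction until it parses and passes validation.

    Validation guards against false negatives: an output that omits
    the risk field entirely is rejected rather than passed through.
    """
    for attempt in range(1, max_retries + 1):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if "risk_flags" in data:  # assumed business rule
            return data
    raise RuntimeError(f"extraction failed after {max_retries} attempts")

result = extract_with_retries("List risk flags in the attached contract.")
print(result["risk_flags"])  # → ['liability cap']
```

Multiply this pattern across routing, cross-referencing, and lineage tracking for thousands of pages, and the orchestration burden, not the model call itself, becomes the engineering problem.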