The Core Loop
Why LLMs Haven't Revolutionized Decision-Making (Yet)

Maximilian Hofer
9 January 2026

Decision-making is the central task of knowledge work. Decisions are often based on incomplete, unstructured, and scattered information. From finance to law and insurance, the raw material for high-stakes decisions rarely arrives in a clean, organized format. Instead, it exists as a sprawling corpus of information: contracts, reports, messages, and tacit knowledge.

The status quo for completing this task is fundamentally human: a process that is cognitive, artisanal, and slow. Loan underwriters, for example, sift through application packages to reach a credit decision, and analysts painstakingly read through data rooms to reach an investment decision.


To make a decision, knowledge workers repeatedly perform three sub-tasks:

  • Read: Consume content

  • Recombine: Dissect, contextualize, and augment content

  • Write: Generate new content


We call the read-recombine-write sequence the core loop of decision-making. Any single decision chains together many core loops. While the mechanics of the core loop have remained largely unchanged for decades, AI promises to change them.
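The three sub-tasks above can be sketched in code. This is a deliberately minimal illustration of the loop's shape; the function names and the toy corpus are hypothetical, not any real system's API:

```python
# Illustrative sketch of the core loop; all names are hypothetical.

def read(document: str) -> list[str]:
    """Consume content: split a document into individual statements."""
    return [line.strip() for line in document.splitlines() if line.strip()]

def recombine(statements: list[str], context: dict) -> list[str]:
    """Dissect, contextualize, and augment content."""
    return [f"[{context.get('source', 'unknown')}] {s}" for s in statements]

def write(findings: list[str]) -> str:
    """Generate new content: a bullet-point memo of the findings."""
    return "\n".join(f"- {f}" for f in findings)

# A single decision chains many core loops over many documents.
corpus = {
    "contract.txt": "Term: 5 years\nPenalty clause applies",
    "report.txt": "Revenue grew 12%",
}
memo_sections = []
for name, text in corpus.items():
    statements = read(text)
    findings = recombine(statements, {"source": name})
    memo_sections.append(write(findings))
memo = "\n".join(memo_sections)
```

The point of the sketch is structural: every pass over a document is the same read-recombine-write cycle, and a real decision runs it many times over a heterogeneous corpus.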

Naive AI Applications


Large Language Models (LLMs) present an intriguing parallel. At their core, LLMs ingest and generate sequences of tokens; in other words, they read, recombine, and write content. While this symmetry suggests opportunity, naive applications quickly collide with practical reality. Four challenges emerge in production settings:

  • Scale: Processing large numbers of heterogeneous documents (often >10,000 pages per decision).

  • Reliability: Ensuring minimal error rates, especially false negatives (missed risks, clauses, or exceptions).

  • Workflow Coordination: Orchestrating system components such as document routing, extraction validation, automated cross-referencing, LLM retries, and extraction lineage.

  • Expert Control: Integrating expert know-how and human control, such as editable business logic.


Today's most common LLM applications fall short.

First, consider chat interfaces on top of RAG/GraphRAG systems. Scale and reliability remain unsolved, as anyone who has tried to build an internal application or has used Microsoft Copilot on a real project will know. Why? Vector embeddings, next-token prediction, and human-written prompts compound uncertainty, often beyond repair. Furthermore, workflow coordination is a black box, and expert control is largely confined to the “art of prompting” – a tedious, non-scalable task. Taken together, today’s chat-based solutions have not delivered the productivity gains they promised.


Second, API-first tools are closer to transforming the core loop because they address scale and reliability (e.g., for specific tasks like table parsing). Yet, APIs leave workflow coordination to the customer and make it difficult to integrate expert control. Workflow coordination requires multiple, fail-safe API calls to extract, structure, validate, and cross-reference information. Expert control requires that a business expert – an underwriter, not an engineer – can directly encode domain know-how to steer system behavior. Building the appropriate business UI on top of APIs is non-trivial. In short, the gap between raw processing power and expert usability has held back the value of many API tools. Let’s look into workflow coordination and expert control.
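To make the coordination burden concrete, here is a minimal sketch of what customers typically end up building themselves around an extraction API: fail-safe calls with retries, extraction validation, and a simple cross-document check. The `FlakyExtractionAPI` class and every function name are assumptions for illustration, not any vendor's actual interface:

```python
# Hypothetical sketch of the coordination an API-first tool leaves to
# the customer: retries, validation, and simple cross-referencing.

class FlakyExtractionAPI:
    """Stand-in for a real extraction API; fails on its first call."""
    def __init__(self):
        self.calls = 0

    def extract(self, page: str) -> dict:
        self.calls += 1
        if self.calls == 1:
            raise TimeoutError("transient failure")
        # Pretend each page yields one structured field.
        return {"page": page, "loan_amount": 250_000}

def with_retries(fn, arg, attempts=3):
    """Fail-safe call: retry transient errors before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn(arg)
        except TimeoutError as exc:
            last_error = exc
    raise last_error

def validate(record: dict) -> dict:
    """Extraction validation: reject obviously malformed output."""
    if record.get("loan_amount", 0) <= 0:
        raise ValueError(f"invalid extraction on {record['page']}")
    return record

def cross_reference(records: list[dict]) -> bool:
    """Cross-document check: every page must agree on the amount."""
    return len({r["loan_amount"] for r in records}) == 1

api = FlakyExtractionAPI()
records = [validate(with_retries(api.extract, p)) for p in ["p1", "p2"]]
consistent = cross_reference(records)
```

Even this toy version already encodes retry policy, validation rules, and cross-referencing logic – precisely the workflow coordination that the API itself does not provide.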

Lower vs. Higher-Level Document Processing


To understand why workflow coordination and expert control are so critical to the core loop of decision-making, it is useful to sort document processing into two buckets: lower-level and higher-level.


Lower-level processing includes tasks such as invoice parsing, digitizing handwritten notes, and processing standardized forms. These workflows require limited workflow coordination, limited human subject-matter expertise, and limited expert control. API-first tools have largely solved lower-level document processing.


Higher-level processing, however, includes evaluating a company's data room against a set of investment criteria, mapping a complete mortgage application package into a specific underwriting template, or converting a complex SME loan application into a standardized loan form. This type of document processing requires human expertise: information extraction is nuanced, cross-document reasoning and validation checks are complex, and disagreement resolution requires subject matter expertise.
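One way to picture the expert control this demands is business logic expressed as editable data rather than code, so that an underwriter – not an engineer – can change it. The rule format below is a hypothetical illustration, evaluated against fields extracted from an application package:

```python
# Hypothetical sketch: underwriting rules as editable data, not code,
# evaluated against fields extracted from an application package.
import operator

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

# An underwriter could edit these thresholds directly (e.g., via a UI).
RULES = [
    {"field": "debt_service_coverage", "op": ">=", "value": 1.25,
     "message": "DSCR below minimum threshold"},
    {"field": "loan_to_value", "op": "<=", "value": 0.8,
     "message": "LTV exceeds policy limit"},
]

def evaluate(extracted: dict, rules: list[dict]) -> list[str]:
    """Return the messages of all rules the application violates."""
    violations = []
    for rule in rules:
        actual = extracted[rule["field"]]
        if not OPS[rule["op"]](actual, rule["value"]):
            violations.append(rule["message"])
    return violations

application = {"debt_service_coverage": 1.1, "loan_to_value": 0.75}
flags = evaluate(application, RULES)
```

Because the rules are data, changing a policy threshold is an edit to a table, not a code deployment – which is what "editable business logic" means in practice.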


Arguably, the majority of enterprise value lies in decisions that require higher-level information processing.

Solving for Workflow Coordination and Expert Control


At this point, the logical next step for transforming the core loop is to build on the scale and reliability of API-first tools and solve for workflow coordination and expert control. In other words, develop a new abstraction layer on top of API-first document processing tools.

A similar abstraction occurred in software deployment years ago. Before cloud computing, launching an application required knowledge of low-level details, from provisioning servers to managing databases. Cloud platforms like Amazon Web Services abstracted away the complexity and made deployment accessible to a much wider range of developers. AWS succeeded by orchestrating the entire underlying infrastructure while preserving explicit human control (e.g., applying your security rules).


If we can solve workflow coordination and integrate expert control, we can fundamentally reshape knowledge work and, with that, large parts of our economy. Transforming the core loop from a cognitive, artisanal, and slow process into a fast, AI-enabled process under human control requires moving beyond the probabilistic generation of text. It demands a system with deterministic workflow coordination, scalable infrastructure, and intuitive human-computer interaction. Such a system can unlock the true potential of language models for the decision-making that matters, resulting in sustained competitive advantage and growth for those who embrace it.

At Parsewise Labs, our applied research focuses on building this new abstraction layer, so experts can command large-scale document intelligence with governable, traceable systems.

© Parsewise Inc. 2025. All rights reserved.