Accurate Legal Reasoning at Scale: Neuro-Symbolic Offloading and Structural Auditability for Robust Legal Adjudication
Stanisław Sójka, Witold Kowalczyk
- Published: May 4, 2026, 11:13 UTC
Problem
This paper addresses the limitations of existing Large Reasoning Models (LRMs) in legal reasoning, particularly the high inference costs and reasoning errors that block the deployment of production-ready systems. The authors argue for a robust framework that can handle complex legal clauses while ensuring auditability and consistency in legal adjudication. The work is a preprint and has not yet undergone peer review.
Method
The authors introduce a neuro-symbolic approach they term Amortized Intelligence, built on a two-step process. First, a legal text is translated into the Deterministic Autonomous Contract Language (DACL), a typed-graph intermediate representation, using a single LLM invocation. Adjudication then proceeds via deterministic executions of that graph, which are visually auditable. The typed structure of DACL supports logical reasoning without the probabilistic uncertainty of raw LRM outputs. The authors do not disclose the training compute or the exact architecture of the LLM used for translation, but they benchmark their approach against runtime LRM baselines, including GPT-5.2 and Gemini 3 Pro.
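The paper does not publish the DACL specification, so the following is only a minimal sketch of what a typed-graph clause representation with deterministic evaluation could look like; all node types, field names, and the example clause are illustrative assumptions, not the authors' actual format.

```python
from dataclasses import dataclass

# Hypothetical node types for a typed clause graph (not the real DACL).

@dataclass(frozen=True)
class Fact:          # leaf node: a boolean fact looked up in the case record
    key: str

@dataclass(frozen=True)
class Threshold:     # numeric comparison, e.g. "damages exceed 10,000"
    fact: Fact
    op: str          # one of "<", "<=", ">", ">="
    value: float

@dataclass(frozen=True)
class AllOf:         # conjunction of sub-clauses
    clauses: tuple

@dataclass(frozen=True)
class AnyOf:         # disjunction of sub-clauses
    clauses: tuple

def evaluate(node, record):
    """Deterministically evaluate a clause graph against a case record.

    Same graph + same record always yields the same verdict, and the
    recursive trace can be logged node by node for auditing.
    """
    if isinstance(node, Fact):
        return bool(record[node.key])
    if isinstance(node, Threshold):
        ops = {"<": float.__lt__, "<=": float.__le__,
               ">": float.__gt__, ">=": float.__ge__}
        return ops[node.op](float(record[node.fact.key]), node.value)
    if isinstance(node, AllOf):
        return all(evaluate(c, record) for c in node.clauses)
    if isinstance(node, AnyOf):
        return any(evaluate(c, record) for c in node.clauses)
    raise TypeError(f"unknown node type: {type(node).__name__}")

# Example clause: "Payout is due if notice was given AND damages exceed 10,000."
clause = AllOf((Fact("notice_given"),
                Threshold(Fact("damages"), ">", 10_000)))

record = {"notice_given": True, "damages": 15_000.0}
print(evaluate(clause, record))  # True
```

In this sketch the LLM would be needed only once, to produce the `clause` graph from the contract text; every subsequent adjudication is a cheap, repeatable graph traversal.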
Results
The DACL-based agent demonstrates near-perfect consistency on legal reasoning tasks, significantly outperforming the LRM baselines. The system also achieves a reduction of over 90% in compute costs for high-volume workflows, which is critical for practical deployment in legal settings. The authors report that their approach mitigates the "reasoning cliff" observed in probabilistic models, where performance degrades sharply under certain conditions. The paper provides quantitative comparisons, although specific numerical results and effect sizes are not detailed in the abstract.
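The reported savings follow from amortization: one expensive LLM translation is spread over many cheap deterministic executions, whereas a runtime LRM pays per query. The paper does not publish its cost model, so the per-call prices below are purely hypothetical values chosen to make the arithmetic concrete.

```python
# Back-of-the-envelope amortization with assumed (not reported) costs.
llm_translation_cost = 0.50    # one-time: translate contract -> DACL graph
graph_exec_cost      = 0.0005  # per adjudication: deterministic graph run
lrm_query_cost       = 0.05    # per adjudication with a runtime LRM baseline

n = 1000  # adjudications against the same contract

amortized = llm_translation_cost + n * graph_exec_cost  # 0.50 + 0.50 = 1.00
baseline  = n * lrm_query_cost                          # 50.00
savings   = 1 - amortized / baseline                    # 0.98

print(f"amortized=${amortized:.2f} baseline=${baseline:.2f} savings={savings:.0%}")
```

Under these assumptions the savings grow with volume: the one-time translation cost becomes negligible as `n` increases, which is consistent with the paper's framing that the benefit appears specifically in high-volume workflows.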
Limitations
The authors acknowledge that their approach, while effective, may still be limited by the initial translation accuracy of the LLM and the complexity of legal texts. They do not address potential challenges related to the scalability of the DACL representation for diverse legal systems or the adaptability of the model to evolving legal standards. Additionally, the reliance on a single LLM invocation may introduce a bottleneck in scenarios requiring real-time processing of legal documents.
Why it matters
This work has significant implications for the field of legal technology, particularly in automating legal reasoning and adjudication processes. By providing a framework that combines the strengths of symbolic reasoning with the capabilities of LLMs, the authors pave the way for more reliable and cost-effective legal systems. The emphasis on auditability aligns with regulatory requirements in legal contexts, making this approach particularly relevant for compliance-driven industries. Future research could explore the integration of this framework with other AI systems to enhance its applicability across various domains of legal practice.
Source: arXiv:2605.02472
URL: https://arxiv.org/abs/2605.02472v1