Notable theory

AIs and Humans with Agency

David Mumford

Published
May 4, 2026 — 16:48 UTC
Summary length
455 words
Relevance score
70%

Problem
This preprint addresses the gap between agency as it develops in humans and agency as attempted in AI systems. Human agency is a complex construct that matures over years, driven largely by frontal-lobe development; existing AI models, particularly large language models (LLMs), have struggled to exhibit anything comparable. The paper critiques current methodologies that attempt to imbue LLMs with agency, highlights their significant limitations, and argues for a novel architectural approach that facilitates joint action and planning with human collaborators in real-world contexts.

Method
The core technical contribution of this work is a proposed architecture that enables AI systems to formulate actions and plans jointly with human agents. Specific architectural details are not disclosed in the abstract; the emphasis is on a collaborative framework that integrates human input into the AI's decision-making, in contrast with traditional LLMs, which typically operate in isolation from human agency. The paper argues that jointly formulating actions could yield more effective and contextually aware AI systems, though training compute and data requirements are not specified.
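Since the abstract discloses no architectural details, the following is purely an illustrative sketch of what "joint formulation of actions" might look like in the simplest case: the AI proposes candidate plans with its own quality estimates, and human feedback is blended into the final selection. All names, scores, and the weighting scheme are assumptions, not the author's method.

```python
# Hypothetical sketch of a joint human-AI planning loop. The paper does not
# disclose its architecture; every name and score here is an illustrative
# assumption, not the author's proposal.
from dataclasses import dataclass


@dataclass
class Plan:
    description: str
    ai_score: float            # the AI's own estimate of plan quality
    human_score: float = 0.0   # filled in from human feedback


def joint_select(plans, human_feedback, weight=0.5):
    """Blend AI and human scores and return the highest-ranked plan.

    human_feedback maps plan descriptions to scores in [0, 1];
    `weight` balances AI judgment against human judgment.
    """
    for p in plans:
        p.human_score = human_feedback.get(p.description, 0.0)
    return max(plans, key=lambda p: weight * p.ai_score
                                    + (1 - weight) * p.human_score)


plans = [
    Plan("reschedule meeting", ai_score=0.9),
    Plan("draft apology email", ai_score=0.6),
]
feedback = {"draft apology email": 1.0, "reschedule meeting": 0.2}
chosen = joint_select(plans, feedback)
print(chosen.description)  # human feedback overrides the AI's top-scored plan
```

The point of the toy example is only that human input enters the decision step itself, rather than being consumed as a prompt and then ignored, which is the contrast with isolated LLM operation that the paper draws.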

Results
The paper is a theoretical exploration rather than an empirical study: it presents no quantitative results, benchmark comparisons, headline numbers, or effect sizes against named baselines. This absence of empirical data may limit the immediate applicability of the proposed architecture; how it performs in practice relative to current state-of-the-art models remains an open question.

Limitations
The authors acknowledge that the proposed framework is still conceptual and lacks empirical validation, but they do not discuss specific limitations regarding scalability, computational efficiency, or the practical challenges of deploying such a collaborative architecture in real-world settings. One obvious limitation left unaddressed is the difficulty of quantifying human input: variability in human decision-making could complicate integrating human agency into AI systems. The paper also does not address the ethical implications of AI systems that act with agency in human contexts.

Why it matters
This work has significant implications for the future development of AI systems that can operate effectively alongside humans. By proposing a framework that emphasizes joint action and planning, it opens avenues for research into more sophisticated human-AI collaboration. If successful, this approach could lead to AI systems that are not only more responsive to human needs but also capable of adapting to complex, dynamic environments. The exploration of agency in AI could also inform broader discussions on the ethical and societal impacts of autonomous systems, particularly as they become more integrated into daily life.

Authors: David Mumford
Source: arXiv:2605.02810
https://arxiv.org/abs/2605.02810v1

Turing Wire
Author Turing Wire editorial staff