Infinite Regress

Recursive Inference in Two-Agent Systems: A Step Toward Conscious Dynamics

Abstract

This article introduces a simple yet powerful two-agent system inspired by variational message passing and predictive coding. Each agent updates its internal state based on predictions of the other's behavior, dynamically adjusting synaptic weights and uncertainty estimates (precisions). We show how this reciprocal inference mechanism simulates key features of active inference systems and relate it to the problem of infinite regress in models of consciousness. The resulting dynamics demonstrate emergent adaptation, attention-like mechanisms, and recursive feedback that can conceptually ground discussions about the architecture of conscious experience.

Introduction

Recursive self-modeling is considered a hallmark of conscious systems. In computational neuroscience and artificial intelligence, predictive coding and active inference offer formal frameworks where agents infer hidden causes of sensory input by minimizing variational free energy. This project implements a minimal two-agent system, each modeled as a sigmoid unit, that approximates variational inference using local prediction errors and weight updates. The agents form a simple yet nontrivial loop: each influences the other’s belief and adapts based on ongoing prediction mismatches.
We propose this system as a conceptual model of proto-consciousness in recursive systems. Its structure mimics the infinite regress problem: each agent must infer the other, who is inferring it, ad infinitum.

System Description

Architecture

The system consists of two agents, X and Y. Each agent represents a system that seeks to maintain its own structure against entropy; the agents could stand for neurons, populations, cities, countries, or an agent and its environment. Their states, State X and State Y respectively, are updated via sigmoid activation functions.
Each agent receives input from the other (feedforward) and sends feedback (recurrent). The inputs are weighted by Weight XY and Weight YX, respectively.
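
To make the update rule concrete, here is a minimal Python sketch of one joint step, assuming a logistic sigmoid and scalar coupling weights; the variable names (state_x, w_xy, and so on) and initial values are illustrative stand-ins, not taken from the original implementation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative initial values; the actual run may use different ones.
w_xy, w_yx = 0.5, 0.5        # Weight XY (input to X) and Weight YX (input to Y)
state_x, state_y = 0.1, 0.9  # State X and State Y

def step(state_x, state_y, w_xy, w_yx):
    # Each agent's new state is the sigmoid of the weighted input it
    # receives from the other, and can be read as its current estimate
    # of its partner's state.
    new_x = sigmoid(w_xy * state_y)
    new_y = sigmoid(w_yx * state_x)
    return new_x, new_y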

[Figure: architecture of the two-agent system (IR figure 1.png)]

Infinite Regress and Conscious Inference

The recursive structure of this two-agent system embodies the philosophical problem of infinite regress in consciousness. In higher-order theories of mind, a conscious agent is one that is aware of its own mental states—implying a meta-level model of itself. But if each level models the other recursively, the system becomes non-terminating.
Our model simulates this by:
Letting X infer Y, and Y infer X.
Continuously updating beliefs, states, and uncertainties.
As precision modulates attention to prediction error, the system learns which inputs to trust. This introduces meta-beliefs about reliability, resembling attentional allocation in biological systems.
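
This gating of learning by precision can be sketched as follows, assuming a running-variance estimate whose regularized inverse serves as precision; the learning rates and function names are assumptions for illustration, not the original code:

eta_w, eta_pi = 0.05, 0.01   # learning rates (assumed values)

def update_agent(w, precision, err_var, pred, observed, presyn):
    # Local prediction error between the partner's observed state and
    # the agent's prediction of it.
    error = observed - pred
    # The weight change is gated by precision: trusted (high-precision)
    # inputs drive faster learning, mimicking attentional allocation.
    dw = eta_w * precision * error * presyn
    # Track a running variance of the error; precision is its
    # regularized inverse, so reliable inputs earn higher precision.
    err_var = (1 - eta_pi) * err_var + eta_pi * error**2
    precision = 1.0 / (err_var + 1e-6)
    return w + dw, precision, err_var

Under this scheme a persistently noisy input accumulates error variance, loses precision, and is gradually ignored: precisely the meta-belief about reliability described above.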

Data Visualization

Below are plots from a typical run (T=500 iterations):
[Figure: simulation metrics from a typical run (simulation_metrics_IR_20250513_195432.png)]
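
Metrics of this kind can be reproduced with a loop along the following lines, reusing the step and update_agent sketches above; matplotlib and the initial precisions are assumptions for illustration:

import matplotlib.pyplot as plt

T = 500
precision_x = precision_y = 1.0
var_x = var_y = 1.0
log = {"state_x": [], "state_y": [], "precision_x": [], "precision_y": []}

for t in range(T):
    state_x, state_y = step(state_x, state_y, w_xy, w_yx)
    w_xy, precision_x, var_x = update_agent(
        w_xy, precision_x, var_x, pred=state_x, observed=state_y, presyn=state_y)
    w_yx, precision_y, var_y = update_agent(
        w_yx, precision_y, var_y, pred=state_y, observed=state_x, presyn=state_x)
    for key, val in zip(log, (state_x, state_y, precision_x, precision_y)):
        log[key].append(val)

fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(log["state_x"], label="State X")
axes[0].plot(log["state_y"], label="State Y")
axes[1].plot(log["precision_x"], label="Precision X")
axes[1].plot(log["precision_y"], label="Precision Y")
axes[1].set_xlabel("iteration")
for ax in axes:
    ax.legend()
plt.show()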

Discussion

This model illustrates:
How the convergence of local prediction errors to zero characterizes the role of reciprocal belief updates as a foundation for recursive self-modeling.
How such dynamics might approximate components of consciousness, understood as a process of infinite, yet stable, self-inference.
How the minimization of current Variational Free Energy (VFE), a process undergone by any natural system seeking thermodynamic equilibrium, drives such systems towards dynamic attractors, which are stable despite their non-terminating nature. A brief formal sketch of VFE follows; the figure after it demonstrates the evolution towards the attractor relating both agents' states as VFE is minimized:
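
For concreteness, under the standard Gaussian (Laplace) assumption used in predictive coding, each agent's free energy reduces to a precision-weighted squared prediction error plus a log-precision penalty; this is a textbook simplification, not an exact transcription of the simulation's objective:

F_X \approx \tfrac{1}{2}\,\pi_X\,(s_Y - \hat{s}_Y)^2 - \tfrac{1}{2}\ln\pi_X + \text{const}

where s_Y is Y's observed state, \hat{s}_Y is X's prediction of it, and \pi_X is X's precision. Gradient descent on F_X with respect to the coupling weight yields a precision-gated learning rule like the one sketched earlier, and the stationary point in \pi_X sets the precision to the inverse of the average squared error, matching the running-variance update.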

[Animation: 2D trajectory of State X vs State Y (animated_2D_trajectories_IR_scatter20250514_125721.gif)]
Since the two agents are isolated from any external environment and only infer each other's state, the plot of State X vs State Y can also be regarded as the inner world-map maintained by the system. In future work we will extend this system to incorporate an environment.
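
A minimal sketch of this phase portrait, reusing the log dictionary from the simulation loop above:

# The joint trajectory of the two states converges to a fixed point:
# the attractor that the text reads as the system's inner world-map.
plt.plot(log["state_x"], log["state_y"], alpha=0.5)
plt.scatter(log["state_x"][-1], log["state_y"][-1],
            color="red", zorder=3, label="attractor")
plt.xlabel("State X")
plt.ylabel("State Y")
plt.legend()
plt.show()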

Future Directions

Extend to multi-agent systems with hierarchical structure.
Embed agents in an environment, enabling goal-directed and novelty-seeking (curiosity-driven) behavior.
Incorporate symbolic or language-like interactions.


This article was generated and edited as part of a project on computational consciousness simulation using variational inference models.
