The Representational Capacity of Language and the Modalities of Explanation#

August 2023

#representation-theory #explanation #epistemology

I often think about the questions we ask—and how we know they’re good ones. What kinds of explanations satisfy us? Which ones might be more fruitful, and how can we tell? I don’t just want to know. I want to know how knowing knows. It’s a meta-epistemological question about how the structure of knowing evaluates itself, and about how much of the structures and patterns we know about depends on the structure of our own forms of knowing. This is part of why building diverse forms of mind is so important to me…because engaging with vastly different minds may help me see myself more clearly.


1. Language as a Coupling Algebra#

Language is, at base, an algebra of symbols with rules for composition. But unlike formal mathematical algebras, the only invariant constraint in language is that meaning must be relatable. The function of language is not just internal coherence, but social interoperability: if I say something, your cognitive apparatus must have some mapping or path to internalizing that utterance. This relational dependency makes language generatively infinite yet semantically bounded by shared cognitive architecture.

A more technical framing: language is the coupling structure between two information processing systems (IPSs). It enables the synchronization of information states through symbolic mapping. If two IPSs perceive the universe similarly enough and have sufficient expressive resources, then language enables simulation between them.

This suggests language’s representational capacity is unbounded in theory, but dependent in practice on conceptual scaffolding. Any universal process can be described linguistically so long as a mutually intelligible representation exists. Right?
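A minimal toy sketch of that coupling claim (my own illustration; the codebooks and names are hypothetical, not drawn from any real system): two IPSs can only synchronize states where their symbol-to-concept mappings overlap.

```python
# Toy sketch: two information processing systems coupled by a shared symbol set.
# Communication lands only where the listener's codebook can map a symbol back
# to a concept -- the "mutually intelligible representation" in miniature.

SPEAKER_CODEBOOK = {"fire": "F", "river": "R", "orbit": "O"}
LISTENER_CODEBOOK = {"F": "fire", "R": "river"}  # listener has no concept for "orbit"

def transmit(concepts):
    """Encode the speaker's concepts as symbols, then decode what the listener can."""
    symbols = [SPEAKER_CODEBOOK[c] for c in concepts]
    return [LISTENER_CODEBOOK.get(s) for s in symbols]  # None = no path to internalize

print(transmit(["fire", "orbit"]))  # ['fire', None]: meaning bounded by shared scaffolding
```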

2. Explanation as Prediction#

An explanation is a computational process that transforms a set of inputs into a low-surprise state. When we say something is “explained,” what we mean is that we no longer expect to be surprised by its behavior. This is predictive closure: a satisfactory model-state that yields stable expectations.
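One way to make “low-surprise” concrete, reading it through standard information theory (my gloss, not a claim the post itself makes): measure the surprisal, \(-\log_2 p\), that a model assigns to what actually happens. An explanation is satisfying when observed behavior carries little surprisal under the updated model.

```python
import math

def surprisal(p):
    """Shannon surprisal in bits: how surprised we are by an outcome of probability p."""
    return -math.log2(p)

# Before explanation, the model assigns the observed behaviour p = 0.1;
# after updating ("explaining"), the same behaviour gets p = 0.9.
print(round(surprisal(0.1), 2))  # 3.32 bits: still surprising
print(round(surprisal(0.9), 2))  # 0.15 bits: approaching predictive closure
```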

2.1 Modalities of Explanation#

We can classify explanatory models by the computational modalities they employ:

  • Dynamical Models: Use rules of change over time (e.g., Newtonian mechanics).

  • Counterfactual Models: Frame reality in terms of what could or could not happen (e.g., conservation laws).

  • Probabilistic Models: Quantify uncertainty across a state space (e.g., Bayesian inference).

  • Mechanistic Models: Explain via part-whole interactions and thresholds (e.g., circuits, logic gates).

  • Proof-based Models: Derive truths from axioms under static rules (e.g., mathematics).

Each modality has strengths and limits depending on the system and the question.

2.2 Rigor as Structural Isomorphism#

What constitutes a rigorous explanation? Rigor is not only about correctness but also about structural similarity between the model’s computational process and the system’s informational dynamics.

Consider:

  • Newtonian laws predict motion.

  • Hamiltonian formalism re-derives these in a more abstract, often more powerful, framework.

  • Schrödinger’s equation does the same at a quantum scale.

These are modal translations across explanatory landscapes. The convergence of multiple modalities on the same predictive outcome increases explanatory robustness.

Hence, rigor may emerge at the intersection of explanatory modalities, where predictions remain consistent under varied representational regimes.
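A minimal worked example of such a modal translation (the harmonic oscillator, my choice of illustration): Newton’s second law and Hamilton’s equations start from different representations but converge on the same equation of motion.

\[
\text{Newtonian:}\quad m\ddot{x} = -kx
\]
\[
\text{Hamiltonian:}\quad H = \frac{p^2}{2m} + \frac{1}{2}kx^2,\qquad \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m},\qquad \dot{p} = -\frac{\partial H}{\partial x} = -kx \;\Rightarrow\; m\ddot{x} = -kx
\]

Two representational regimes, one prediction: exactly the convergence described above.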

3. Case Studies in Explanation#

3.1 Classical Mechanics#

Problem: At what angle should I fire a cannon to hit a castle?

Explanatory Path: Use Newtonian dynamics: \(F = ma, \quad a = \frac{dv}{dt}, \quad v = \frac{dx}{dt}\)

Decompose initial velocity into x/y components, predict trajectory, minimize surprise. Alternatively, reframe with Hamiltonian mechanics: define total energy, derive motion from canonical equations.
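A sketch of the Newtonian path, assuming flat ground, no drag, launch speed \(v_0\), and launch angle \(\theta\) (assumptions I’m adding for the illustration):

\[
x(t) = v_0\cos\theta\, t,\qquad y(t) = v_0\sin\theta\, t - \tfrac{1}{2}g t^2
\]
\[
y = 0 \;\Rightarrow\; t_{\text{flight}} = \frac{2 v_0 \sin\theta}{g},\qquad R = \frac{v_0^2 \sin 2\theta}{g}
\]

Solving \(\sin 2\theta = gR/v_0^2\) for the castle’s distance \(R\) gives the firing angle; the prediction leaves nothing to be surprised by.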

3.2 Thermodynamics#

Problem: How much work can we extract from a heat engine?

Dynamical Explanation: Relate heat and work through entropy and temperature gradients.

Counterfactual Explanation: Use laws of reversibility and impossibility: if a process were reversible, it would produce maximal work. Most aren’t.
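The counterfactual framing has a crisp quantitative payoff (the standard Carnot bound, included here as a worked instance): reversibility marks the ceiling on extractable work.

\[
W_{\max} = Q_h\left(1 - \frac{T_c}{T_h}\right),\qquad \text{e.g. } T_h = 600\,\mathrm{K},\; T_c = 300\,\mathrm{K} \;\Rightarrow\; W_{\max} = 0.5\, Q_h
\]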

3.3 Quantum Systems#

Problem: What material should we use for solar panels?

Use Schrödinger’s equation to calculate energy band gaps. Translate quantum energy levels into macroscale efficiency. Here, quantum dynamics provide a predictive model with high explanatory yield.
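A compressed sketch of that chain (my simplification; real band-structure calculations are far richer): solve the time-independent Schrödinger equation for the crystal, read off the band gap, and check which solar photons can be absorbed.

\[
\hat{H}\,\psi = E\,\psi \;\longrightarrow\; E_{\text{gap}},\qquad \text{absorption requires } E_{\text{photon}} = h\nu \geq E_{\text{gap}}
\]

The gap that best matches the solar spectrum guides the material choice.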

4. Pluralistic Foundations of Understanding#

Suppose no single modality suffices universally. Then understanding must be context-sensitive and modally pluralistic.

For example:

  • Thermodynamic laws break at the nanoscale.

  • Newtonian predictions fail near relativistic speeds.

  • Classical information theory is incomplete for quantum systems.

A mature epistemology must specify modal boundaries: where each model’s assumptions break down.

4.1 Toward a Geometry of Understanding#

What if understanding itself had a shape? Suppose explanations are informational morphisms between system and model. Then the geometry of understanding is the structural alignment between these morphisms.

Rigor, then, is a topological property: how closely does the explanatory transformation preserve the structure of the original?

5. Toward a Theory of Modal Fit#

Each explanatory modality can be seen as a computational language. Modal fit is the match between the language and the question.

  • Dynamical models: excel at “what will happen?”

  • Counterfactuals: excel at “what is possible/impossible?”

  • Probabilistic: excel at “what is likely?”

  • Mechanistic: excel at “how does one part move another?”

  • Proof-based: excel at “what is necessarily true (within a rule set)?”

The failure mode of explanation often lies not in incorrect computation, but in modal mismatch.

6. Epistemic Archeology: Why We Ever Believed…#

From divine metaphors to causal mechanisms, our explanatory tools have evolved. Computational models now augment our ability to represent, test, and derive. But each epoch of explanation is constrained by its modality’s assumptions.

As Marletto argues, counterfactuals provide a necessary expansion: they capture transformational possibility beyond dynamical prediction.

7. Conclusion-ish#

Understanding is the mapping of informational landscapes using tools suited to each terrain. The representational capacity of language and the structure of explanation are co-evolving technologies of mind.

The goal isn’t just to explain X, but to know which modality to reach for, when to reach for it, and what makes the resulting beliefs satisfying. When multiple modalities converge, we may be brushing against the structure of truth itself.