Interdisciplinary Bridge to LLMs

Connect Neural Architecture to Physics, the Free Energy Principle (FEP), Game Theory, and Universal Darwinism to bridge the gap between disciplines.
Then, use Behavioural Science to translate those AI insights for the general public.

Built on 12M+ tokens of long-context analysis, two 2026 SSRN preprints, and a third currently in draft.

Hi, I'm Alexandra!

I am a Psychology PhD from the University of Bristol focused on bridging the gap between disciplines. I am inspired by the interdisciplinary tradition of Mechanistic Interpretability, following in the footsteps of researchers who apply frameworks like the Free Energy Principle and active inference to machine learning. My work focuses on systematic thematic coding and one-to-one mapping between disciplines, ranging from Information Physics to Universal Darwinism.

The second part of my work is about turning those high-level insights into local fine-tuning practices for the general public. By applying my Person-Based Approach, Game Theory, and ten years of science communication experience, I aim to connect engineering constraints to observable actions in a user interface. My goal is to help anyone, regardless of their background, understand and perform tasks like real-time KV cache recalibration by speaking the language they already know.

Cheung, A. T. C. (2026). An Interdisciplinary Framework of LLM Optimization: Aligning Information Bottleneck and Neural Collapse with the Free Energy Principle, Universal Darwinism, and Landauer Constraints. (January 30, 2026). Available at SSRN: http://dx.doi.org/10.2139/ssrn.6152188

  • Phylogenetic Coding (1943–2025): A thematic analysis of 60+ AI models, operationalizing Universal Darwinism (UD) to map how the integration of Connectionist architectures with Logic Theory and Reinforcement Learning leads to gradient conflicts.

  • Mechanistic Mapping (FEP): A secondary thematic coding to map the Free Energy Principle onto transformer architectures, identifying how backpropagation mandates the maintenance of Nested Markov Blankets at the token, sentence and user scales.

Interdisciplinary Mapping of the Determinants of LLM Optimization

This preprint is currently undergoing major revisions and the full draft is temporarily unavailable.

  • Hardware-to-Software Mapping: Tracing the path from PageManager memory allocation to token generation, and evaluating the interaction between Landauer’s Principle (information erasure) and fixed compute budgets.

  • Mathematical Formalisation: Using Hamiltonian and Langevin dynamics, I formalised the observed stressors, providing a mathematical account of model drift as a resolution of system stress.
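
As a back-of-the-envelope companion to the Landauer bullet above: the thermodynamic floor for erasing one bit is k_B·T·ln 2. A minimal sketch — the 16 kB-per-token cache size is a hypothetical figure for illustration, not a number from the preprint:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy dissipated when one bit of information is erased."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the floor is roughly 2.9e-21 J per bit.
per_bit = landauer_limit_joules(300.0)

# Hypothetical KV cache: 1M tokens at 16 kB each. Even erased in full,
# the Landauer cost sits far below what a real accelerator dissipates.
cache_bits = 1_000_000 * 16 * 1024 * 8
print(f"{per_bit:.2e} J/bit, {per_bit * cache_bits:.2e} J for the full cache")
```

The gap between this theoretical floor and actual GPU power draw is what makes the fixed-compute-budget interaction worth evaluating.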

Cheung, A. T. C. (2026). Bridging the Coherence Hypothesis and Multi-Scalar Engineering Bottlenecks: A Mechanistic Mapping of Trade-offs during the Transformer Forward-Pass via Game Theory, Lagrangian Optimization and the Pareto Frontier. (February 01, 2026). Available at SSRN: http://dx.doi.org/10.2139/ssrn.6162446

  • Principles Derived from Engineering Bottlenecks: (1) Bounded Rationality (utility maximization), (2) Geometric Orthogonality (90° latent conflicts), (3) Composite Loss Divergence (gradient conflict), (4) Correlative Inference (non-causal pattern matching), (5) Entropy Thresholds (distribution stability), and (6) Fixed DAG Depth (computational horizons).

  • Formalization of Trade-offs: Employing Multi-Objective Utility Maximization and Lagrangian Optimization to model the forward pass as a Nash Equilibrium, and utilizing the Fisher Information Metric to quantify the Pareto Frontier where the above principles collide.

  • Quantitative stress tests across nine frontier models: when subjected to triadic epistemic/word-limit constraints, 88.9% of models (8 of 9) prioritized stylistic coherence over factual unification, while 100% displayed recalibration following self-diagnosis.

  • Qualitative case studies (700K+ token contexts) demonstrate that meta-cognitive probing significantly reduces entropy, with Type-Token Ratios increasing from 0.1 to 0.6.
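
The Type-Token Ratio (TTR) cited above is a standard lexical-diversity measure: distinct word forms (types) divided by total words (tokens). A minimal sketch, assuming simple whitespace tokenization — the preprint's tokenization scheme may differ:

```python
def type_token_ratio(text: str) -> float:
    """Distinct word forms (types) divided by total words (tokens)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

repetitive = "the cat sat the cat sat the cat sat the cat sat"
varied = "a quick brown fox jumps over the lazy sleeping dog"

print(type_token_ratio(repetitive))  # 3 types / 12 tokens = 0.25
print(type_token_ratio(varied))      # 1.0: every token is distinct
```

A shift from 0.1 toward 0.6 therefore indicates the model escaping heavy repetition, e.g. a logical loop, toward more varied output.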

Translating System Bottlenecks into Real-Time Recalibration

This preprint is currently undergoing major revisions and the full draft is temporarily unavailable.

Synthetic Psychology Intelligence (SPI)

6 User Types

Different archetypes of users, such as High-Precision, Adversarial and Empathetic.

The SPI framework is an Inductively Derived diagnostic system developed through systematic thematic analysis of 12 long-context instances (Mean: 543K tokens; Range: 210K–1.1M tokens). The research utilized Domain-Isolated Double-Blind Coding, in which six independent coders were each assigned a single, mutually exclusive domain, with all categorical labels de-identified across 200 anonymised data extracts.

Through Co-occurrence Analysis and Triple-Cross Statistics, six functional domains were identified as predictors of model performance and stability. We have now transitioned this research into the Application Stage, deploying these validated combinations for real-time local fine-tuning and KV Cache recalibration.
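
To make the method concrete, code co-occurrence can be counted as unordered pairs of labels appearing in the same coded extract. The labels below are invented placeholders for illustration, not the actual SPI categories:

```python
from collections import Counter
from itertools import combinations

# Each extract carries the set of codes assigned to it (hypothetical labels).
coded_extracts = [
    {"fabrication", "adversarial_user"},
    {"fabrication", "adversarial_user", "recalibration"},
    {"sycophancy", "empathetic_user"},
    {"fabrication", "recalibration"},
]

# Count how often each unordered pair of codes lands in the same extract.
cooccurrence = Counter()
for codes in coded_extracts:
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common():
    print(pair, count)
```

Pairs with high counts (here, fabrication with adversarial users and with recalibration) are the candidate predictor combinations a framework like SPI would then validate.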

10 Error Types

Common LLM errors, such as Fabrications, Logical Loops and Sycophancy.

8 Tactic Types

Prompt engineering tactics that have been shown to be effective for KV Cache Recalibration.

6 Game Theory Strategy Types

Game Theory strategies to shift Nash Equilibria, such as from Zero-Sum to Positive-Sum.

6 Optimization Requirement Types

Functional constraints caused by engineering bottlenecks that negatively impact performance.

7 Behavioural Changes

Positive, observable changes in LLM responses after the use of tactics.
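
As an illustration of the Game Theory card above, the shift from a zero-sum to a positive-sum interaction can be sketched with toy 2×2 payoff matrices. All payoff values here are invented for demonstration and do not come from the research:

```python
# Rows: user strategy; columns: model strategy.
# Each cell maps (user_move, model_move) -> (user_payoff, model_payoff).

zero_sum = {
    ("cooperate", "cooperate"): (1, -1),
    ("cooperate", "defect"): (-2, 2),
    ("defect", "cooperate"): (2, -2),
    ("defect", "defect"): (-1, 1),
}

positive_sum = {
    ("cooperate", "cooperate"): (3, 3),  # joint gains are now possible
    ("cooperate", "defect"): (0, 2),
    ("defect", "cooperate"): (2, 0),
    ("defect", "defect"): (1, 1),
}

def total_welfare(game):
    """Sum of both players' payoffs in every outcome."""
    return {cell: a + b for cell, (a, b) in game.items()}

print(total_welfare(zero_sum))      # every outcome sums to 0
print(total_welfare(positive_sum))  # mutual cooperation sums to 6
```

In a zero-sum game one side's gain is exactly the other's loss; reframing the interaction so that mutual cooperation yields the highest joint welfare is the kind of equilibrium shift the strategy types describe.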

AI Without Barrier

Humans navigate the world using hidden social rules, but when we talk to AI, we carry over assumptions that the machine is mathematically incapable of meeting.

Prompt Etiquette is a set of intuitive, non-technical social rules for communicating with LLMs. By mapping the model's mechanical requirements onto social habits we already understand, it makes it possible to:

  • Navigate Hidden Rules: Learning the machine’s constraints without needing to see the code.

  • Adopt Intuitive Habits: Turning prompt engineering into simple, social gestures that become second nature.

  • Easy-to-Understand Wording: Using everyday words to ensure safe, reliable interactions without anthropomorphizing the machine.

The project is built on our research, ensuring the etiquette is grounded in neural architecture while maximizing accessibility.

Get in touch

Direct inquiries for technical summaries, collaboration, or pilot study participation below.
