Autocatalytic Gradient Concentration
A Universal Framework for Hierarchy Formation
From River Formation to Market Monopolies: One Physical Process
What if winner-take-all markets, the 80/20 rule, Zipf’s law, runaway sexual selection, river formation, monopoly emergence, wealth concentration, and citation cascades are all the same phenomenon—identical physics operating in different substrates?
They are.
I’ve derived a universal framework for understanding why hierarchies form. It’s not domain-specific theory—it’s thermodynamics. And it unifies dozens of phenomena across economics, biology, network science, geology, and sociology that have been treated as separate.
This is Autocatalytic Gradient Concentration: the physical mechanism by which dominant structures emerge whenever positive feedback operates on shared gradients. The process is deterministic, mathematically precise, and testable. When multiple entities compete for the same energy gradient with positive feedback, concentration occurs through deterministic physical process. This isn’t a collection of similar patterns—it’s one process, derivable from foundational physical principles, with universal predictive power.
What follows is a complete theoretical framework: the derivation, the mathematics, the testable predictions, and the unification of dozens of phenomena previously treated as distinct. This framework is one component of a larger thermodynamic theory of organization that underpins two books currently in development: On the Origin of Physics by Means of Immanent Causation and World Destroyer’s Handbook: The Thermodynamics of Human Coordination.
What This Framework Unifies
Economics & Markets: Increasing returns to scale, network effects, winner-take-all markets, monopoly formation, platform dominance, lock-in and standard dominance, compounding financial returns, wealth concentration, path dependence
Network Science: Preferential attachment, scale-free networks, hub formation, citation cascades, algorithmic amplification, attention economy dynamics
Biology & Evolution: Runaway sexual selection, competitive exclusion, founder effects, cumulative cultural evolution, dominance hierarchies, hierarchical organization, metabolic specialization
Sociology: Matthew effects / accumulated advantage, social stratification, prestige hierarchies, rich-get-richer dynamics
Urban & Geographic Systems: Agglomeration economies, urban scaling laws, Zipf’s law (city sizes), traffic network formation, infrastructure hub emergence, supply-chain centralization
Physics & Geology: Crystal nucleation and growth, channel formation in hydrology, drainage network emergence, avalanche and sandpile dynamics, nucleation-driven phase transitions
Complex Systems: Autocatalysis in chemical networks, self-reinforcing feedback loops, Pareto and power-law distributions, technological lock-in, standard-setting processes
Same equation. Different parameters. Identical dynamics.
Animal Taggart, 1/16/2026
Autocatalytic Gradient Concentration: Overview
Autocatalytic Gradient Concentration reveals the thermodynamic mechanism generating hierarchical dominance across all scales. Derived from foundational physical principles, this framework explains why concentration emerges necessarily whenever positive feedback operates on shared gradients, provides quantitative predictions through measurable parameters, and unifies phenomena previously thought distinct. What appeared as separate dynamics—network effects, runaway selection, preferential attachment, increasing returns, winner-take-all markets—are shown to be identical physics operating in different substrates.
Short Form
Autocatalytic Gradient Concentration: Systems processing energy gradients spontaneously concentrate flow into dominant pathways through positive feedback. Concentrated structures persist because they capture sufficient gradient flow to exceed their maintenance costs—they maximize capturable dissipation, not system-level efficiency.
Extended Definition
Autocatalytic Gradient Concentration describes the physical process whereby:
Symmetry breaking: Small random variations in advantage among competing nodes become amplified through positive feedback mechanisms.
Preferential flow allocation: Resources, energy, or information route to nodes offering lower resistance paths, following differential persistence.
Compound amplification: Captured flow increases capacity to capture future flow (γ > 1), accelerating divergence from uniformity.
Path-dependent concentration: Early advantages compound into dominant positions that resist reversal because disruption requires more energy than maintenance.
Emergent hierarchy: Systems self-organize toward configurations where few nodes control majority flow through differential persistence of efficient dissipation structures.
Physical Basis and Derivation from Foundational Laws
Autocatalytic Gradient Concentration emerges necessarily from the interaction of four Physical Laws:
From Structural Expedience: Gradients are followed according to physics. Once a channel forms, it creates steeper gradients → more flow follows those gradients → channel deepens. This is the α term: gradient capture efficiency - how readily energy flow routes to a node based on the gradients that node’s structure creates.
From Energy Priority: Only structures providing energy return exceeding cost persist. This is the β term: maintenance cost per unit advantage - the energy required to sustain structure. Persistence requires captured flow to exceed maintenance costs. In the normalized competitive model, symmetry-breaking concentration occurs whenever γ > 1, regardless of absolute throughput levels.
From Obligate Dependency: As concentrated pathways capture flow, distributed alternatives lose capacity to maintain themselves. Redundancy becomes thermodynamically untenable. This makes concentration irreversible - returning to distributed states would require rebuilding eliminated capacity.
From Scale-Antagonistic Selection: Optimization at one scale (efficient extraction by dominant node) necessarily degrades fitness at other scales (reduced system resilience, market competition, innovation). This creates the tension that eventually triggers phase transitions.
Thermodynamic Foundation: Concentrated structures maximize capturable dissipation—they position themselves to harvest maximum gradient flow through their own structure. Rivers capture more elevation gradient than sheet flow. Monopolies capture more market gradient than fragmented competition. This isn’t system-level optimization—structures persist because they capture sufficient gradient flow relative to their maintenance requirements, not because they serve total system dissipation.
Critical insight: Scale-Antagonistic Selection means what dissipates efficiently at one scale may create instability at another. Concentration that efficiently extracts at the firm level may destabilize the economic system. Individual optimization ≠ system optimization, and true system-level optimization is impossible.
Mathematical Form
Variable definitions: Let Aᵢ denote the capacity of node i to capture throughput—market share, channel depth, wealth stock, citation count, or any metric of gradient-capture capability. Φ represents total available throughput (energy flow, capital, attention, resources) that nodes compete to capture. A “gradient” is any structured differential in potential that enables directional flow—energy differentials, profit opportunities, attention allocation, mating access, or citation probability.
The process follows:
dAᵢ/dt = α·Φ_total·(Aᵢ^γ / Σⱼ Aⱼ^γ) - βAᵢ
Where the parameters derive from foundational constraints:
α (gradient capture efficiency): How readily flow routes to a node based on the gradients its structure creates (Structural Expedience). Higher α means steeper gradients capture flow more effectively.
β (maintenance cost coefficient): Energy required to sustain structure per unit advantage (Energy Priority). Must satisfy β < α·Φ·γ for concentration to occur.
γ (feedback amplification factor): How much captured flow increases future capture capacity. γ > 1 creates positive feedback; γ ≤ 1 produces stability or negative feedback.
Φ_total: Total gradient available for dissipation in the system.
Critical thresholds:
For the normalized allocation model, the symmetric fixed point A* = αΦ/(βN) loses stability exactly when γ > 1. The symmetry-breaking growth rate is λ = β(γ-1), so higher γ produces faster instability.
Near the symmetric state, symmetry-breaking grows exponentially at rate β(γ-1). For unnormalized superlinear growth (dA/dt ∝ A^γ), the time to reach scale A scales as t ∝ A^(1-γ), showing how higher γ produces faster approach to dominance.
Open systems and transient regimes exhibit heavy-tailed distributions whose exponent decreases with γ. Closed systems with γ > 1 undergo condensation to dominance rather than stationary power law. Both regimes emerge from the same autocatalytic mechanism.
The mathematics shows that once γ exceeds the critical threshold, concentration is deterministic—small perturbations grow exponentially until dominance emerges.
The γ Parameter: Determining Concentration Dynamics
The γ parameter does the heaviest lifting—it determines whether systems concentrate or stabilize:
γ < 1: Negative feedback, stability, no concentration
Frequency-dependent selection in biology (rare variants favored)
Saturating returns (doubling effort doesn’t double output)
Resource depletion effects (fishing grounds, grazing commons)
γ = 1: Linear dynamics, potential stability
Constant returns to scale
Many commodity markets
Simple interest without compounding
γ > 1: Positive feedback, inevitable concentration
This is where autocatalytic concentration occurs
Domain-specific γ values:
Network effects: γ ≈ 1.5-2.0
Facebook, LinkedIn: value scales superlinearly with users
Telephone networks: metcalfe’s law suggests γ ≈ 2
Payment systems (Visa, PayPal): merchant and consumer sides amplify
Compound financial returns: γ ≈ 1.05-1.10
Stock market returns compound annually
Real estate appreciation plus rental income
Venture capital: successful investments fund more investments
Winner-take-all attention markets: γ > 2.0
Podcast attention: Joe Rogan captures 10%+ of market
YouTube creators: top 1% captures majority of views
Social media influencers: algorithmic amplification creates extreme γ
Platform markets: γ ≈ 1.8-2.5
Amazon: more sellers attract buyers attract more sellers
App stores: more apps attract users attract developers
Uber/Lyft: more drivers reduce wait times attract riders
Academic citation networks: γ ≈ 1.5-2.0
Cited papers receive more citations
Foundational works accumulate exponentially
Matthew effect in scientific prestige
Geographic concentration: γ ≈ 1.3-1.5
Urban agglomeration: city growth attracts more businesses
Silicon Valley effects: talent density attracts more talent
Industry clusters (finance in NYC, entertainment in LA)
The specific γ value determines:
How fast concentration occurs (higher γ = faster exponential growth)
What concentration regime emerges (higher γ → heavier transient tails or faster condensation)
Whether intervention can prevent it (γ closer to 1 = more preventable)
Multiple Gradient Competition
The basic formulation assumes entities compete for the same gradient. Real systems exhibit more complex dynamics:
1. Orthogonal Gradients (Different Niches) → Prevents Concentration
When entities exploit different gradients, concentration doesn’t occur:
Biological species: Different food sources, habitats, or reproductive strategies
Market segments: Luxury vs. budget vs. mid-market products
Academic disciplines: Physics, biology, sociology compete for different prestige/funding sources
Geographic regions: Local businesses serving distinct populations
Example: Restaurants can coexist because different gradients exist—fine dining, fast food, ethnic cuisine, family-friendly, bars. Each captures a distinct gradient rather than competing for identical customers.
Prediction: Diversity persists when niches remain orthogonal. Homogenization of gradients (e.g., delivery apps collapsing all restaurants into single interface) triggers concentration.
2. Overlapping Gradients (Partial Competition) → Partial Concentration
When gradients partially overlap:
Some concentration within overlapping region
Diversity persists in non-overlapping portions
Boundary dynamics determine final structure
Example: Streaming services compete for entertainment time (shared gradient) but also serve different preferences (Netflix vs. Disney+ vs. Crunchyroll). Results in oligopoly rather than pure monopoly.
Example: Academic journals compete for citations (shared gradient) but also serve disciplinary specializations (orthogonal gradients). Major generalist journals (Nature, Science) concentrate heavily; specialized journals remain distributed.
3. Sequential Gradients (Concentration Enables Access) → Cascading Dominance
Concentration in one gradient provides access to adjacent gradients:
Amazon: Book dominance → marketplace dominance → cloud computing dominance
Google: Search dominance → advertising dominance → email/maps/docs dominance
Standard Oil: Refining dominance → distribution dominance → retail dominance
Mechanism: Success in primary gradient generates resources/position to exploit secondary gradients. Each captured gradient becomes launching point for adjacent capture.
Prediction: Once an entity achieves dominance in one gradient, expect expansion into adjacent gradients. Multi-domain monopolies emerge from sequential gradient capture.
Analytical Application:
To predict concentration in any domain, identify:
How many distinct gradients exist? (Orthogonal = diversity; shared = concentration)
What is γ for the primary gradient? (γ > 1 = concentration inevitable)
Are gradients sequential? (If yes, expect cascading dominance)
Example Analysis - Podcast Market:
Primary gradient: Listener attention (shared across all podcasts)
γ ≈ 1.8 (algorithmic amplification + social proof)
Sequential gradients: Attention → sponsorships → celebrity guests → more attention
Prediction: Extreme concentration inevitable (observed: top 1% captures >50% of listening)
Example Analysis - Craft Beer:
Multiple orthogonal gradients: Local/regional preferences, style preferences (IPA vs. stout vs. lager)
γ ≈ 1.2 within each niche (modest economies of scale)
Gradients remain distinct (local breweries serve local tastes)
Prediction: Concentration within styles and regions, but diversity persists across niches (observed: thousands of breweries coexist despite macro beer concentration)
Deterministic Within Scope
Autocatalytic gradient concentration will occur when:
Multiple entities compete for the same gradient
Positive feedback exists (γ > 1)
Sufficient time passes for compound effects
No artificial constraints prevent it
Concentration does not occur when:
Negative feedback dominates (γ ≤ 1, frequency-dependent selection)
Entities occupy different gradients (niche separation)
Active prevention mechanisms operate (regulation, social enforcement)
Maintenance costs scale faster than advantages (β > α·Φ·γ)
System undergoes phase transition before completion
Observable Signatures
Systems undergoing autocatalytic gradient concentration exhibit:
Increasing Gini coefficient over time
Power-law rank-size distributions
Winner-take-all or winner-take-most outcomes
Accelerating divergence between leaders and followers
Resistance to reversal until phase transition
Unification: Revealing the Common Mechanism
Autocatalytic Gradient Concentration reveals that phenomena described separately across disciplines—preferential attachment in networks, increasing returns in economics, runaway selection in biology, winner-take-all dynamics in markets, Matthew effects in sociology, and channel formation in hydrology—are all the same thermodynamic process. When multiple entities compete for the same energy gradient with positive feedback (γ > 1), concentration occurs through identical physics regardless of domain.
Previously Fragmented Understanding
Each field independently discovered the pattern:
Economists studied “network effects” and “increasing returns to scale”
Biologists studied “runaway sexual selection” and “competitive exclusion”
Physicists studied “nucleation and growth” in phase transitions
Sociologists studied “Matthew effects” and “accumulated advantage”
Network scientists studied “preferential attachment” and “scale-free networks”
Geologists studied “channel formation” and “drainage networks”
Each field developed domain-specific vocabulary for the same underlying process. The observations were accurate, but the common mechanism remained hidden.
What Gets Unified
From Economics:
Increasing returns to scale (Arthur, 1996) → α increasing with captured flow
Network effects (Metcalfe’s Law) → γ > 1 through connectivity value
Winner-take-all markets (Frank & Cook, 1995) → extreme γ in attention/platform economies
From Network Science:
Preferential attachment (Barabási-Albert) → P(new link) ∝ degree^γ
Scale-free networks → power-law distributions when γ ≈ 2
Hub formation → dominant nodes from autocatalytic dynamics
From Biology:
Runaway sexual selection (Fisher, 1930) → γ > 1 from mate preference feedback
Competitive exclusion (Gause, 1934) → single species dominates shared niche
Founder effects → early random advantage compounds over generations
From Sociology:
Matthew effects (Merton, 1968) → “accumulated advantage” when γ > 1
Social stratification → wealth/status concentration through inheritance
Prestige hierarchies → citation/reputation cascades
From Physics/Geology:
River network formation → channel erosion creates gradients, captures flow
Crystal nucleation → stable nuclei grow at expense of unstable regions
Avalanche dynamics → threshold events reorganizing distributed stress
From Technology:
Platform dominance → Facebook, Amazon, Google via network effects (γ > 1.5)
Standard emergence → VHS vs. Betamax, QWERTY keyboard
Open source → Linux kernel, popular repositories
Same equation. Different parameters. Identical dynamics.
Why Unification Matters
1. Insights Transfer Immediately Across Domains
Once you recognize the common mechanism, lessons from one domain apply to all others:
From river formation:
Concentrated flow is more efficient than distributed flow
Early channel formation determines final network structure
Reversal requires massive energy input
Applied to markets:
Monopolies dissipate gradients more efficiently (this is why they emerge)
First-mover advantage compounds into dominance
Breaking monopolies requires external energy (regulation)
2. Quantitative Predictions Become Universal
The mathematical framework lets you:
Identify γ in any domain by measuring growth dynamics
Calculate concentration threshold from system parameters
Predict timeline for dominance emergence
Forecast equilibrium distribution from γ value
Design interventions by targeting α, β, or γ
Example: Measure γ ≈ 1.8 in podcast market → predict power-law exponent ≈ -1.56 → predict top 1% captures >50% attention → design intervention to reduce γ (change recommendation algorithms, subsidize discovery).
3. False Dichotomies Dissolve
Traditional analysis treats these as separate categories:
Market failure vs. natural monopoly
Social inequality vs. meritocracy
Network effects vs. economies of scale
Random drift vs. natural selection
Unified view reveals these as false dichotomies—different aspects of the same process:
Market “failure” = thermodynamic success — Markets concentrate because concentration efficiently dissipates gradients. From the gradient’s perspective, monopoly isn’t failure—it’s optimal dissipation.
Inequality = natural outcome — Not a bug requiring explanation but the equilibrium state when γ > 1 with shared gradients. The question isn’t “why inequality?” but “why would equality persist?”
Network effects ARE economies of scale — Both are γ > 1. Network effects: value scales superlinearly with users. Economies of scale: costs scale sublinearly with production. Same thermodynamic structure.
Drift vs. selection = false binary — Both operate through differential persistence. “Drift” is selection with weak fitness differences. “Selection” is drift with strong fitness differences. Same process, different parameter regime.
The Power of Mechanism
Previous understanding: Observations that concentration occurs, domain-specific explanations, limited transferability.
This framework: Reveals why concentration must emerge from thermodynamic necessity, provides the mechanism (positive feedback on shared gradients), enables quantitative prediction across all domains.
It’s analogous to:
Kepler’s laws (accurate orbital descriptions) → Newton’s gravity (explains why orbits follow those patterns)
Mendelian genetics (inheritance patterns) → DNA/molecular genetics (reveals mechanism)
Thermodynamic observations (heat flows hot to cold) → Statistical mechanics (explains why from particle dynamics)
The observations were accurate. The mechanism was missing. Now it’s explicit.
Canonical Examples
River formation: Distributed rainfall → small rills form → erosion deepens channels → steeper gradients capture more flow → dominant river emerges
α: erosion rate creates gradient
β: bank stability costs
γ ≈ 1.3-1.5: erosion amplifies through flow capture
Wealth distribution: Initial equality → investment returns compound → wealth enables better opportunities → extreme concentration
α: investment access/quality
β: lifestyle maintenance costs
γ ≈ 1.05-1.1: compound returns over decades
Urban hierarchy: Scattered settlements → agglomeration economies attract businesses → talent follows → dominant city emerges
α: economic opportunity density
β: infrastructure/housing costs
γ ≈ 1.3-1.5: superlinear productivity scaling
Platform dominance: Multiple competitors → network effects favor larger platform → users and suppliers concentrate → monopoly emerges
α: network value to users
β: platform maintenance costs
γ ≈ 1.5-2.5: extreme feedback from two-sided markets
Academic citation: Equal papers → quality/relevance attracts citations → visibility drives more citations → dominant works emerge
α: paper quality/accessibility
β: negligible (citations are free)
γ ≈ 2: preferential citation creates power law
Each persists because the concentrated configuration dissipates efficiently enough to survive current selection pressures.
Scale-Antagonistic Tensions
Concentration at one scale creates predictable instability at others:
Firm-level: Monopoly efficiently extracts surplus
Market-level: Reduced competition decreases innovation
System-level: Extreme concentration triggers regulatory response or collapse
The framework predicts concentration continues until cross-scale tensions force phase transition.
Relationship to Other Concepts
Derives from:
Structural Expedience (gradients followed according to physics)
Energy Priority (only viable structures persist)
Obligate Dependency (redundancy elimination)
Scale-Antagonistic Selection (cross-scale tensions)
Thermodynamic basis:
Maximum Entropy Production Principle (concentrated structures often dissipate gradients faster)
Dissipative Coherence (structure maintained through continuous gradient processing)
More specific than:
Positive feedback, differential persistence
More general than:
Preferential attachment, Matthew effect, increasing returns, runaway selection
Physical basis for:
Power laws, winner-take-all dynamics, hierarchy formation, Pareto distributions
Mathematical formalism:
Phase transition from distributed to condensed state at critical γ
As Framework for Analysis
Autocatalytic Gradient Concentration provides the analytical framework for understanding hierarchy formation across all domains:
Questions it answers:
Why does wealth concentrate? (Compound returns with γ ≈ 1.05-1.1)
Why do cities form? (Agglomeration economies create γ ≈ 1.3-1.5)
Why do monopolies emerge? (Network effects and economies of scale with γ > 1.5)
Why are power laws ubiquitous? (Natural outcome when γ ≈ 2)
When will concentration reverse? (When cross-scale tensions trigger phase transition)
Analytical power: By identifying α (capture efficiency), β (maintenance cost), and γ (feedback strength) in any domain, you can predict:
Whether concentration will occur (γ > 1 for instability)
How fast it will proceed (exponential rate β(γ-1) near equilibrium)
What distributional regime will result (transient heavy tails vs. condensation to dominance)
Which interventions might prevent or reverse it (change α, β, or γ)
Key Insight
Autocatalytic gradient concentration reveals that hierarchical dominance emerges from differential persistence under selection pressure, not optimization or teleological drives.
Concentrated structures persist because:
They dissipate gradients efficiently
Alternatives face higher energy costs
Path dependence locks them in
Cross-scale destabilization hasn’t occurred yet
Not because:
They’re “optimal” (Scale-Antagonistic Selection makes optimization impossible)
Systems “try” to maximize anything (no teleology)
Some planner designed them
This is thermodynamic necessity, not social choice or market failure. When energy flows through competing pathways, lower resistance paths capture more flow, captured flow reduces resistance further, and the physics is identical across scales: water forms rivers, traffic creates highways, wealth accumulates, firms monopolize, cities dominate, podcasts concentrate attention.
Concentration isn’t a bug—it’s what happens when positive feedback operates on shared gradients.
Attempts to prevent concentration face continuous thermodynamic pressure toward reconcentration. This pressure can be overcome through sufficient energy input or regulatory constraints, but requires continuous expenditure. The natural tendency is toward concentration.
Predictive Power
The framework predicts:
New markets: Will concentrate unless actively prevented (when γ > 1)
Deregulation: Triggers rapid concentration in previously constrained systems
Technology platforms: Winner-take-all dynamics from network effects (γ > 1.5)
Wealth: Continuous concentration absent redistribution mechanisms (γ ≈ 1.05-1.1)
Information: Authority consolidation through citation/attention networks (γ ≈ 1.5-2)
Resistance futility: Distributed systems reconcentrate unless feedback structure changes
Framework Status
Autocatalytic Gradient Concentration is:
A derived mechanism emerging from foundational physical laws
Both a description of physical process and an analytical framework
Testable through observation of α, β, γ parameters across domains
Predictive of hierarchy formation wherever positive feedback operates on shared gradients
A unification of previously fragmented observations across every domain
In essence: Autocatalytic gradient concentration is the fundamental thermodynamic process generating hierarchy across all domains. When multiple entities compete for the same gradient with positive feedback (γ > 1), concentration occurs. The only question is which specific nodes will dominate—determined by initial conditions and path dependence. The process itself is as deterministic as water flowing downhill, crystal growth, or any other gradient dissipation phenomenon.
Technical Notes (Optional/Advanced)
Technical Note 1: Phase Transition Dynamics
The framework predicts concentration continues “until cross-scale tensions trigger phase transition.” This can be formalized:
Phase transition occurs when: β·N scales faster than α·Φ·γ due to:
Maintenance costs accelerating (system complexity, overhead)
Available gradient depleting (resource exhaustion, market saturation)
Cross-scale instability (regulatory intervention, systemic collapse, revolutionary redistribution)
Formal condition: System transitions when dβ/dt > d(α·Φ·γ)/dt
Phase Transition Condition: Linear stability analysis of the normalized competitive allocation model shows that linearization around the symmetric equilibrium A* = αΦ/(βN) with symmetry-breaking perturbations (Σεᵢ = 0) yields eigenvalue λ = β(γ-1). Thus γ < 1 produces a stable uniform state; γ > 1 produces an unstable uniform state with exponential symmetry-breaking. Concentration becomes thermodynamically inevitable for γ > 1 regardless of system size or throughput levels—the transition is determined purely by the feedback exponent.
This captures:
Monopolies collapsing under their own complexity costs
Resource depletion triggering reorganization
External intervention when extraction becomes politically untenable
Technical Note 2: Power Laws, Condensation Regimes, and Testability
Systems with γ > 1 exhibit different concentration patterns depending on system openness and noise:
Open/Quasi-Stationary Systems (continuous entry, exits, noise, heterogeneity):
Exhibit heavy-tailed distributions, often power-law-like
Exponent decreases monotonically with γ
Power laws appear as signatures of ongoing concentration, not static equilibrium
Examples: citation networks (continuous new papers), podcast attention (constant new shows), urban hierarchies (ongoing migration)
Closed/Deterministic Systems (fixed participants, low noise):
Undergo condensation to dominance (winner-take-all)
One or few nodes capture O(Φ/β) of total flow
Remaining nodes decay toward zero
Examples: monopoly formation in fixed markets, extreme wealth concentration, dominant river channels
Both regimes emerge from the identical autocatalytic mechanism. The difference is whether new nodes continuously enter (maintaining distributed tail) or the system is closed (driving toward complete dominance).
Empirical Testing Strategy:
Rather than inferring γ from a single power-law exponent, measure γ directly from growth dynamics:
Track growth rates: Measure dAᵢ/dt vs. Aᵢ for multiple nodes
Fit to reinforcement structure: dAᵢ/dt ∝ Aᵢ^γ identifies γ
Predict regime:
Open system → expect power-law-like distribution during growth
Closed system → expect condensation to dominance
Test interventions: Changes to α, β, or γ should shift concentration dynamics predictably
Why observed power laws persist:
Many real systems maintain power-law-like distributions because:
Continuous entry (startups, new papers, new creators)
Heterogeneity (varying β across nodes prevents full collapse)
Noise/shocks (disruptions prevent complete condensation)
Regulatory intervention (antitrust preventing monopoly completion)
These prevent the system from reaching full condensation equilibrium, keeping it in the transient power-law regime.
Concentration Timescale:
In the competitive allocation model, symmetry-breaking grows exponentially near the uniform state:
ε(t) ≈ ε(0)·e^(β(γ-1)t)
Time to visible dominance: t_dom ∼ [β(γ-1)]^(-1)·ln(ε_target/ε₀)
Higher γ produces faster exponential growth toward dominance
For unnormalized superlinear growth (dA/dt = kA^γ with γ > 1), integration gives t ∝ A^(1-γ), showing how time-to-scale decreases rapidly with increasing γ.
Meaning:
Higher γ → faster concentration (exponentially in the competitive regime)
γ = 2 → λ = β, e-folding time ∼ 1/β
γ = 1.1 → λ = 0.1β, slower but still exponential growth
Framework Remains Fully Testable:
The refined understanding makes predictions more precise, not less:
Systems with measured γ > 1 will concentrate (✓)
Open systems show power-law transients (✓)
Closed systems condense to dominance (✓)
Timescale depends on γ as predicted (✓)
Interventions changing γ alter concentration dynamics (✓)
The distinction between transient power laws and equilibrium condensation strengthens the framework by explaining why some concentrated systems maintain distributed tails while others achieve near-total dominance.

