The Metabolic Basis of Ecology and Evolutionary Dynamics#
The struggle for existence of living beings is not for the fundamental constituents of food, but for the possession of the free energy obtained, chiefly by means of the green plant, from the transfer of radiant energy from the hot sun to the cold earth.
—Ludwig Boltzmann, 1886, “The Second Law of Thermodynamics”
Introduction#
Metabolism is fundamental to the understanding of ecological and evolutionary processes.
The laws of thermodynamics are fundamental principles describing how energy and heat behave in physical, chemical, and living systems. As Boltzmann noted [Boltzmann, 1886], the Second Law of Thermodynamics is fundamental for understanding how biological cells and systems work as well as their ecological and evolutionary dynamics:
In any spontaneous process, the total entropy of an isolated system can never decrease. Equivalently, it can stay the same (in a reversible process) or increase (in an irreversible process).
Two implications of this Law are directly relevant to biological systems:
The first implication is the “arrow of time” — natural processes tend to move toward greater entropy (disorder). What makes life and biological processes unique (in contrast to purely physical or chemical processes) is that, in essence, the purpose of living organisms is to counteract this law by producing order (living biomass) from disorder (photons and molecules). At the most fundamental level, this is possible because living organisms are not isolated — they are open systems that continuously exchange energy and matter with their surroundings.
Note
While organisms maintain or increase their internal order (i.e., decrease entropy locally) by building and organizing complex molecules, they do so by taking in energy from external sources (e.g., sunlight or food) and releasing waste heat and byproducts back into their environment. This flow of energy and matter ultimately does increase the total entropy of the universe. Thus, life’s local decrease in entropy comes at the expense of a larger increase in entropy elsewhere, in accordance with the second law.
The second implication is that it is impossible to convert heat completely into work without waste (there is no perfect engine) — living organisms, like any engine, are subject to this limitation, but are typically shaped by natural selection to maximise or optimise their energy conversion efficiency. Indeed, Lotka [Lotka, 1922] made this observation over 100 years ago:
It has been pointed out by Boltzmann that the fundamental object of contention in the life-struggle, in the evolution of the organic world, is available energy. In accord with this observation is the principle that, in the struggle for existence, the advantage must go to those organisms whose energy-capturing devices are most efficient in directing available energy into channels favorable to the preservation of the species.
Energy and Metabolic rate#
Energy is a measurable property that quantifies an object’s capacity to perform work (in a physical sense). In living organisms, a key function is to harness and utilize energy to support growth, maintenance, and reproduction. Plants and other autotrophs capture photons through photosynthesis, transforming light energy into chemical energy stored in glucose. Heterotrophs, such as animals, obtain energy by breaking chemical bonds in organic molecules. A crucial molecule in this process is ATP (adenosine triphosphate), which serves as a primary energy carrier within cells and powers numerous biochemical reactions.
Metabolism, the set of chemical reactions within an organism, determines metabolic rate, which is the rate of energy use, often measured in J/s, kcal/day, or Watts. Organisms must balance energy consumption and expenditure.
Metabolic rates influence key life-history traits, including movement rates, development speed, lifespan, and reproductive output [Dell et al., 2011].
The metabolic theory of ecology (MTE) [Brown et al., 2004] provides a framework for understanding how metabolic processes scale across levels of biological organization and influence ecological dynamics. It combines the biomechanical constraints of cell or body size of organisms and the thermodynamic effects of temperature on biological reaction kinetics into one equation for organism-level (that of single, integral organisms, either unicells or multi-cells) metabolic rate.
We will now separately consider the biophysical effects of size and temperature and then combine them into a unified understanding (and equations).
Importance of Size#
Metabolic rate (\(B\)) increases with body size (\(M\)) according to the allometric scaling law [Kleiber, 1947]:

$$B = B_0 M^b$$

or equivalently, on taking logarithms:

$$\ln B = \ln B_0 + b \ln M,$$

where \(B_0\) is a normalization constant and \(b\) is typically close to 3/4 for multicellular eukaryotic organisms. This implies that larger organisms have higher absolute metabolic rates but lower mass-specific rates (\(B/M\)):

$$\frac{B}{M} = B_0 M^{b-1}.$$
For example, a mouse (0.1 kg) needs approximately 12.3 kcal/day, while its mass-specific rate is 123 kcal/(kg\(\cdot\)day).
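The mouse calculation can be reproduced in a few lines of Python. This is a minimal sketch: \(B_0 \approx 70\) kcal/(day·kg\(^{3/4}\)) is Kleiber’s classic, approximate normalization for mammals, so the printed values come out close to (but not exactly equal to) the figures quoted above.

```python
# Kleiber's law B = B0 * M**b with b = 3/4.
# B0 ~ 70 kcal/day per kg^(3/4) is an approximate, illustrative value.

def metabolic_rate(M, B0=70.0, b=0.75):
    """Whole-organism metabolic rate (kcal/day) for mass M (kg)."""
    return B0 * M**b

M_mouse = 0.1                    # kg
B = metabolic_rate(M_mouse)      # absolute rate, kcal/day
B_specific = B / M_mouse         # mass-specific rate, kcal/(kg*day)
print(f"B = {B:.1f} kcal/day, B/M = {B_specific:.0f} kcal/(kg*day)")
```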
This reduction in per-mass energy use with size arises from a combination of constraints: geometric, the risk of overheating, and within-organism resource distribution [West et al., 2005].
Geometric foundations of the metabolic scaling law#
Historically, a geometric argument was used to suggest that metabolic rate scales like the surface area, which in turn scales as the \(\tfrac{2}{3}\) power of mass.
Let us assume an organism (or cell) is approximately spherical of radius \(l\).
Note
We will use \(l\) instead of the traditional \(r\) to denote the radius of a sphere to avoid confusion, because the latter is a commonly used symbol for population growth rate (which we will encounter in the next chapter).
Let’s start with the basic equations:
Surface area of sphere \(A = 4\pi l^2\)
Volume of a sphere \(V = \tfrac{4}{3}\pi l^3\)
If the organism/cell has a constant density \(\rho\), then its mass \(m\) is proportional to its volume:

$$m = \rho V = \tfrac{4}{3}\pi \rho\, l^3 \propto l^3.$$

Then, the surface-to-volume ratio is

$$\frac{A}{V} = \frac{4\pi l^2}{\tfrac{4}{3}\pi l^3} = \frac{3}{l}.$$

That is, as \(l\) increases, \(\tfrac{A}{V}\) decreases (\(\propto 1/l\)).
Now, in biological cells, the surface area is where exchange of nutrients, oxygen, and waste takes place, while the volume represents total metabolic demand (more “living material” inside).
So, larger cell radius \(\rightarrow\) lower surface-to-volume ratio \(\rightarrow\) harder to meet the metabolic needs of the cell or organism.
This leads to the classic geometric argument that metabolic rate \(B\) (e.g., oxygen consumption per unit time) is limited by the total surface available for exchange:

$$B \propto A.$$

Using \(A = 4\pi l^2\), we get

$$B \propto l^2.$$

Since \(m \propto l^3\), we have \(l \propto m^{1/3}\). Therefore,

$$B \propto \left(m^{1/3}\right)^2.$$

Hence,

$$B \propto m^{2/3}.$$

Including a scaling constant up front,

$$B = B_0\, m^{2/3},$$

which is the same as the first scaling equation above with \(b = \frac{2}{3}\).
That is, this scaling arises from the fact that an organism’s metabolic rate scales with its ability to exchange materials through a surface (which goes as length squared), while its mass (and thus total metabolic demand) goes as length cubed.
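A quick numerical check of this geometric argument, for spherical cells of a few radii (arbitrary units):

```python
import math

# Surface-to-volume ratio of a sphere of radius l; analytically A/V = 3/l.
def surface_to_volume(l):
    A = 4 * math.pi * l**2           # surface area
    V = (4 / 3) * math.pi * l**3     # volume
    return A / V

for l in [1.0, 2.0, 10.0]:
    print(f"l = {l:5.1f}  ->  A/V = {surface_to_volume(l):.3f}")
# A/V halves every time the radius doubles, exactly as 3/l predicts.
```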
Biological implications of the geometric scaling constraint#
Thus, small cells have a relatively large surface area for their volume. They can exchange nutrients and wastes quickly, supporting higher metabolic rates per unit volume. In contrast, larger cells (or organisms) have a relatively smaller surface area per unit volume. Consequently, they may be limited in how fast they can exchange vital substances, placing a cap on metabolic rate per unit mass or per unit volume. This is why cells are relatively small: They need large surface area relative to volume to ensure adequate exchange of nutrients/oxygen and removal of wastes for the metabolic demands of their volume.
Going beyond geometric constraints#
The above surface-to-volume geometric constraint does hold at the level of cells, but it is not the full story.
Empirically, the scaling of metabolic rate in most multicellular organisms in fact follows a somewhat different exponent (\(\approx \frac{3}{4}\)), known as Kleiber’s law. This arises from two additional constraints: the requirement of heat dissipation and energy distribution (circulation) throughout the body. We will not cover these here, but if you are interested, see [Kleiber, 1947] and [Brown et al., 2004] for classic and modern treatments.
Importance of Temperature (and Thermodynamic constraints)#
Temperature profoundly impacts metabolism, as biochemical reaction rates in cells increase with temperature following the Boltzmann-Arrhenius equation:

$$k = k_0 \exp\left(-\frac{E_a}{k_B T}\right),$$

where \(k\) is the reaction rate, \(k_0\) is a constant, \(E_a\) is the activation energy, \(k_B\) is the Boltzmann constant, and \(T\) is temperature in Kelvin. This relationship explains the characteristic thermal performance curve observed in biological processes, where rates increase with temperature up to an optimum before declining due to enzyme denaturation or other stress factors [Johnson et al., 1974] (which we will consider later below).
The Arrhenius equation and its variants (often collectively referred to as the “Boltzmann-Arrhenius” equation in certain contexts) are cornerstones of chemical kinetics. These formulations describe how reaction rates vary with temperature. For enzyme kinetics, the temperature dependence can also be described via modified forms of the Arrhenius equation, sometimes combined with effects such as protein denaturation at higher temperatures.
Historically, this work can be traced back to Jacobus Henricus van ’t Hoff (1852–1911), whose studies first quantified the temperature dependence of reaction rates [van 't Hoff, 1884]. Building on van ’t Hoff’s work, Svante Arrhenius (1859–1927) proposed a more explicit relationship that connected the rate constant \(k\) of a chemical reaction to an exponential function of temperature [Arrhenius, 1889]:

$$k = A \exp\left( -\frac{E_a}{rT} \right),$$
where:
\( A \) is the pre-exponential factor (or frequency factor),
\( E_a \) is the activation energy,
\( r \) is the universal gas constant, and
\( T \) is the absolute temperature in Kelvin.
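As a quick illustration of the temperature sensitivity this implies, the sketch below (with an assumed, illustrative activation energy of 50 kJ/mol, not fitted to any particular system) recovers the familiar rule of thumb that reaction rates roughly double for a 10 K rise near room temperature:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(T, A=1.0, Ea=50_000.0):
    """Arrhenius rate constant; Ea in J/mol, T in Kelvin.
    A and Ea are illustrative values only."""
    return A * math.exp(-Ea / (R * T))

ratio = arrhenius(310.0) / arrhenius(300.0)
print(f"k(310 K) / k(300 K) = {ratio:.2f}")  # roughly a doubling per 10 K
```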
For enzyme-catalyzed reactions, temperature significantly affects the reaction velocity \(v\) or the turnover number (catalytic rate constant \(k_{\mathrm{cat}}\)). At “moderate” temperatures, higher temperature generally increases reaction rates, consistent with Arrhenius behavior. However, at higher temperatures, thermal denaturation of the enzyme reduces or abolishes its catalytic activity. Hence, more realistic models for enzyme kinetics must account for both the Arrhenius increase in rate and the loss of enzyme activity due to denaturation, which we will consider later below.
Note
All organisms on earth are dependent on temperature directly or indirectly: poikilotherms, such as unicells and tiny insects, cannot thermoregulate much; ectotherms, such as plants, insects and reptiles, rely on external temperature to regulate their metabolic processes (thermoregulation); endotherms, such as mammals and birds, generate internal heat to maintain a stable body temperature, but this comes at a significant energetic cost, and they ultimately rely on the energy harnessed by poikilotherms and ectotherms (starting with green plants).
The rate of an enzyme-catalyzed reaction can be modelled as:

$$v = k_\text{cat}(T)\, [\text{C}]\, f([\text{S}]),$$
where,
\(k_\text{cat}(T)\) is strongly dependent on \(T\),
\([\text{C}]\) is the enzyme concentration,
\(f([\text{S}])\) is a function of substrate concentration (e.g. the Michaelis-Menten form \(\tfrac{[\text{S}]}{K_M + [\text{S}] }\); see [Johnson and Goody, 2011]).
We can model \(k_\text{cat}(T)\) using either the Arrhenius or the Eyring Equations. However, because real enzymes denature at higher temperatures, we typically observe a bell-shaped temperature activity profile.
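One simple way to capture this bell shape is to multiply an Arrhenius term by the fraction of enzyme that remains folded under a two-state denaturation model. This is a sketch only, with made-up parameter values; more careful treatments (e.g. Sharpe–Schoolfield-type models) refine this idea.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def enzyme_rate(T, A=1.0, Ea=50_000.0, Hd=300_000.0, Th=320.0):
    """Arrhenius rise times the active (non-denatured) enzyme fraction.
    Ea: activation energy; Hd: denaturation enthalpy; Th: temperature
    at which half the enzyme is denatured. Values are illustrative."""
    arrhenius = A * math.exp(-Ea / (R * T))
    active_fraction = 1.0 / (1.0 + math.exp((Hd / R) * (1.0 / Th - 1.0 / T)))
    return arrhenius * active_fraction

# The resulting temperature-activity profile is bell-shaped:
rates = {T: enzyme_rate(T) for T in (300.0, 320.0, 330.0)}
for T, v in rates.items():
    print(f"T = {T:.0f} K  ->  relative rate = {v:.2e}")
```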
Classical Transition State Theory (TST)#
We will now look at classical transition state theory (TST), which provides a microscopic rationale for the temperature dependence of reaction rates, explaining both how energy barriers (enthalpy) slow down reactions and how changes in molecular order (entropy) can further modulate the speed at which reactants cross from the well of the reactants to the well of reaction products.
TST was developed in the 1930s by Henry Eyring (1901–1981) and others (in particular Meredith Evans and Michael Polanyi) [Eyring, 1935, Polanyi, 1935].
The key idea underlying the TST is that a chemical reaction proceeds through a high-energy configuration of the reactants called the transition state (or activated complex). Once formed, the transition state either proceeds forward to products or reverts back to reactants, adhering to two key assumptions:
Reagents must first form an activated complex, which is in a “quasi-equilibrium” with the reactants.
The rate of the reaction then depends on how frequently and efficiently this activated complex “crosses over” from the reactant side of the potential energy surface to the product side.
Using statistical mechanics arguments, TST yields the famous Eyring (or Eyring–Polanyi) equation, sometimes called the “absolute rate equation” [Eyring, 1935, Polanyi, 1935]:

$$k = \kappa\, \frac{k_B T}{h} \exp\left(\frac{\Delta S^\ddagger}{r}\right) \exp\left(-\frac{\Delta H^\ddagger}{rT}\right),$$
where
\(\kappa\) is the transmission coefficient (often assumed to be ~1 for many simple reactions),
\(k_B\) is the Boltzmann constant,
\(h\) is Planck’s constant,
\(\Delta S^\ddagger\) is the entropy of activation,
\(\Delta H^\ddagger\) is the enthalpy of activation, and
\(r\) and \(T\) are as before.
The Eyring equation offers an explicitly thermodynamics-based perspective; it reduces to an Arrhenius-like form when the constants are grouped appropriately.
Through this equation, TST provides a molecular-level interpretation for the pre-exponential factor \(\frac{k_B T}{h} \exp(\Delta S^\ddagger / r)\) and the exponential barrier term \(\exp(-\Delta H^\ddagger / (rT))\) in the rate expression.
\(\Delta H^\ddagger\) is the difference in enthalpy (heat content) between the transition state (activated complex) and the reactants. It represents the energy barrier that must be surmounted for the reaction to occur. A larger \(\Delta H^\ddagger\) means a higher barrier, which tends to slow down the reaction, especially at lower temperatures.
In the factor \(\exp(-\Delta H^\ddagger / (rT))\), \(\Delta H^\ddagger\) appears in the exponential term with a \(1/T\) dependence. Consequently, the higher the activation enthalpy, the more strongly the rate constant depends on temperature (i.e., higher sensitivity in an Arrhenius-type plot).
Next, \(\Delta S^\ddagger\) is the difference in entropy between the transition state and the reactants. It reflects the change in molecular disorder when going from reactants to the activated complex. A positive \(\Delta S^\ddagger\) implies that forming the transition state is relatively more disordered than the reactants, favoring the reaction (larger rate constant). A negative \(\Delta S^\ddagger\) implies a more ordered transition state, which lowers the rate constant by decreasing the pre-exponential factor.
Finally, \(\exp(\Delta S^\ddagger / r)\) multiplies the basic frequency factor \(\frac{k_B T}{h}\). Therefore, even if \(\Delta H^\ddagger\) is moderate, a large positive \(\Delta S^\ddagger\) can significantly enhance the reaction rate, whereas a negative \(\Delta S^\ddagger\) can suppress it.
Overall Temperature Dependence of the rate#
By combining \(\Delta H^\ddagger\) and \(\Delta S^\ddagger\), TST clarifies how and why reaction rates depend on temperature:
The \(\exp(-\Delta H^\ddagger / (rT))\) term governs the primary exponential sensitivity of \(k\) to changes in temperature. The \(\exp(\Delta S^\ddagger / r)\) term influences the intrinsic magnitude of the rate constant at any given temperature by encoding the “positional” or “configurational” freedom of the transition state.
Experimentalists often plot \(\ln(k/T)\) versus \(1/T\), known as an Eyring plot:

$$\ln\frac{k}{T} = -\frac{\Delta H^\ddagger}{r}\cdot\frac{1}{T} + \ln\frac{\kappa\, k_B}{h} + \frac{\Delta S^\ddagger}{r}.$$

The slope gives \(-\Delta H^\ddagger / r\) and the intercept gives \(\ln(\kappa k_B / h) + \Delta S^\ddagger / r\), so both activation parameters can be estimated from rate measurements at several temperatures.
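To make this concrete, the sketch below generates rate constants from the Eyring equation with assumed activation parameters (\(\Delta H^\ddagger = 60\) kJ/mol, \(\Delta S^\ddagger = -50\) J/(mol·K), both illustrative) and recovers \(\Delta H^\ddagger\) from the slope of the Eyring plot:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
R = 8.314           # gas constant, J/(mol*K)

def eyring(T, dH=60_000.0, dS=-50.0, kappa=1.0):
    """Eyring rate constant; dH in J/mol, dS in J/(mol*K).
    Parameter values are illustrative only."""
    return kappa * (kB * T / h) * math.exp(dS / R) * math.exp(-dH / (R * T))

# ln(k/T) is exactly linear in 1/T, so two temperatures suffice here:
T1, T2 = 290.0, 310.0
slope = (math.log(eyring(T1) / T1) - math.log(eyring(T2) / T2)) / (1 / T1 - 1 / T2)
dH_recovered = -slope * R
print(f"Recovered dH = {dH_recovered:.0f} J/mol")  # matches the 60000 J/mol input
```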
Putting it together: temperature, size scaling, and metabolic rate#
Thus, metabolism drives growth rates and influences population dynamics through its scaling with size and temperature [Brown et al., 2004]. Larger organisms exhibit slower per-mass metabolic rates but greater absolute energy use [Savage et al., 2004]. Temperature affects metabolic rates exponentially, shaping life-history strategies and population dynamics [Dell et al., 2011].
Combining these gives MTE’s fundamental equation:

$$B = B'_0\, M^{b}\, e^{-E/(k_B T)},$$

where \(B'_0\) is a size- and temperature-independent (normalization) constant.
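The fundamental equation can be sketched in code. The values \(b = 3/4\) and \(E = 0.65\) eV are illustrative (commonly quoted averages in the MTE literature), and the normalization is arbitrary:

```python
import math

kB_eV = 8.617e-5  # Boltzmann constant in eV/K

def mte_rate(M, T, B0=1.0, b=0.75, E=0.65):
    """MTE whole-organism metabolic rate B = B0 * M^b * exp(-E/(kB*T)).
    B0 is an arbitrary normalization; E in eV; T in Kelvin.
    b and E are illustrative values, not fitted."""
    return B0 * M**b * math.exp(-E / (kB_eV * T))

# A 100-fold increase in mass at fixed temperature:
B1 = mte_rate(M=1.0, T=293.0)
B100 = mte_rate(M=100.0, T=293.0)
print(f"absolute rate ratio:      {B100 / B1:.1f}")        # 100^(3/4) ~ 31.6
print(f"mass-specific rate ratio: {(B100 / 100) / B1:.2f}") # 100^(-1/4) ~ 0.32
```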
Ecological implications#
We can now consider how the temperature dependence of enzyme kinetics and basic cellular physiology leads to a temperature‐dependent rate of biomass production and (for microbes) a temperature‐dependent rate of division. We will use microbes as a model because their growth process is relatively simple and closely tied to cell division.
From intracellular reaction rates to cell growth#
A living cell’s growth and division require a multitude of coupled enzyme reactions (nutrient uptake, biosynthesis, energy generation, etc.). MTE assumes that one key rate-limiting step (or a small set of them) dominates the overall pace of cell growth and division. If those rate-limiting reactions follow Arrhenius-type kinetics, then the cellular physiological rate (e.g., biomass production rate, cell cycle progression rate) will inherit a similar temperature dependence:

$$\text{rate}(T) \propto e^{-E/(k_B T)}.$$
Hence, as \(T\) increases (within a tolerable range), enzyme‐catalyzed processes speed up, and so does the cell’s overall metabolism and ability to replicate.
From single‐cell growth to division#
Cell division is a stochastic process#
To move from enzyme kinetics to cell‐level dynamics, a cell must replicate all essential components (DNA, proteins, membranes, etc.), reach a completion threshold in biomass or chromosome replication, and divide into two daughter cells.
Because cellular biochemical events occur stochastically, the exact division time of any one cell can vary. However, in an exponential (log) growth phase with abundant nutrients, the average cell-division rate \(\lambda\) (divisions per unit time) is fairly constant [Jafarpour et al., 2020]. This average division rate is tied to the cumulative rate of intracellular biochemical and biosynthetic reactions, and thus inherits their temperature dependence.
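A tiny simulation illustrates the point. This sketch assumes exponentially distributed division times (the simplest stochastic model of division, with an arbitrary rate \(\lambda\)): individual division times vary widely, but the mean over many cells settles near \(1/\lambda\).

```python
import random

random.seed(42)

lam = 2.0  # assumed average division rate (divisions per hour); illustrative

# Draw one division time per cell for many independent cells.
times = [random.expovariate(lam) for _ in range(100_000)]
mean_time = sum(times) / len(times)

print(f"mean division time = {mean_time:.3f} h (expected {1 / lam:.3f} h)")
```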
(Average) doubling time and specific growth rate#
Let \(t_d\) be the average doubling time (the mean time for one cell to become two). Then

$$t_d = \frac{\ln 2}{\lambda}.$$

From an Arrhenius perspective, we can write

$$t_d(T) \propto e^{E/(k_B T)},$$

so that doubling time decreases with increasing temperature (again, within the physiological range). Equivalently, the division rate increases with temperature:

$$\lambda(T) \propto e^{-E/(k_B T)}.$$
In population data and growth curves, microbiologists typically work with the specific growth rate \(\mu\) (units time\(^{-1}\)). In balanced exponential-phase microbial growth, it is essentially the same object as the average division rate (up to conventions and measurement details):

$$\mu \approx \lambda.$$

With this notation, the doubling time is often written as

$$t_d = \frac{\ln 2}{\mu}.$$
Note
This chapter stops at the level of per-cell / per-individual energetics and doubling. In the next chapter (Populations), we treat \(\mu(T,M)\) as an input parameter and build explicit population models in discrete and continuous time (exponential growth, density dependence / logistic growth, and beyond).
Incorporating Cell Size Scaling#
MTE’s core relationship for an organism’s metabolic rate can be written as:

$$B = B_0\, M^{b}\, e^{-E/(k_B T)},$$
where:
\(M\) is organism mass (for microbes, think cell mass),
\(E\) is an effective activation energy for metabolic reactions,
\(k_B\) is Boltzmann’s constant.
A common (simple) assumption is that an organism’s growth rate is roughly proportional to its mass-specific metabolic rate, i.e. energy flux per unit mass:

$$\mu \propto \frac{B}{M}.$$

This motivates an MTE-style form for the per-capita (per-individual) growth rate / division rate:

$$\mu(T, M) = \mu_0\, M^{b-1}\, e^{-E/(k_B T)},$$

where \(\mu_0\) absorbs constants such as \(B_0\) and conversion factors.
For the commonly used case \(b=3/4\), this becomes \(\mu(T,M)\propto M^{-1/4}\): larger organisms (or larger cells) tend to have lower mass-specific metabolic rates and therefore lower per-capita growth rates, all else equal.
Note
For microbes, empirical size scaling can differ from the canonical quarter-power law (it may be closer to isometric or even super-linear in some datasets; see [DeLong et al., 2010]). The main goal here is to show how size could enter a growth-rate parameter, not to claim one universal exponent.
This scaling immediately implies a size- and temperature-dependence of doubling time:

$$t_d(T, M) = \frac{\ln 2}{\mu(T, M)} \propto M^{1-b}\, e^{E/(k_B T)}.$$
That is: increasing \(T\) typically decreases \(t_d\) (faster doubling), while increasing \(M\) typically increases \(t_d\) (slower doubling), under this simplified picture.
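These qualitative predictions can be checked with a short sketch (illustrative parameters only: \(b = 3/4\), \(E = 0.65\) eV, and an arbitrary \(\mu_0\); time units are therefore arbitrary too):

```python
import math

kB_eV = 8.617e-5  # Boltzmann constant, eV/K

def doubling_time(T, M, mu0=1.0e10, b=0.75, E=0.65):
    """t_d = ln2 / mu(T, M), with mu = mu0 * M^(b-1) * exp(-E/(kB*T)).
    mu0, b, and E are illustrative, not fitted values."""
    mu = mu0 * M**(b - 1) * math.exp(-E / (kB_eV * T))
    return math.log(2) / mu

t_cold = doubling_time(T=293.0, M=1.0)
t_warm = doubling_time(T=303.0, M=1.0)   # warmer -> faster doubling
t_big = doubling_time(T=293.0, M=10.0)   # bigger -> slower doubling
print(f"t_d: cold={t_cold:.2f}, warm={t_warm:.2f}, big={t_big:.2f} (arb. units)")
```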
In the next chapter (Populations), we take \(\mu(T,M)\) (and later \(K\)) as parameters and study the resulting population dynamics.
Note
Population-level allometries (e.g., \(r_{\max}\propto M^{-1/4}\), Damuth’s law, and how carrying capacities scale with size and trophic level) build on the individual-level scalings above, but belong in the population and community context. We return to these ideas later in the modelling sequence.
Transition: from individuals to populations#
At this point we have a mechanistic story for how temperature (and, in simple MTE arguments, size) can shape a per-capita growth parameter \(\mu(T,M)\) and therefore the doubling time \(t_d\).
What we have not yet done is model how population size changes through time when individuals reproduce, when resources become limiting, or when the environment varies.
The next chapter (Populations) starts from \(\mu\) and builds population models in:
Discrete time (difference equations; e.g. synchronous division as a toy model)
Continuous time (differential equations; exponential growth as a limit)
Density dependence (logistic growth, carrying capacity \(K\))
Connections back to metabolism (how \(T\) and \(M\) shift parameters and affect stability)