Fluctuation spectra of large random dynamical systems reveal hidden structure in ecological networks
Power spectral density for a general Ornstein–Uhlenbeck process

In the following we develop a method to compute the power spectral density of N-dimensional Ornstein–Uhlenbeck processes,
$$\frac{d\boldsymbol{\xi}}{dt}=\boldsymbol{A}\boldsymbol{\xi}+\boldsymbol{\zeta}(t),$$
(15)
where ζ(t) is an N-vector of Gaussian white noise with correlations $\mathbb{E}[\boldsymbol{\zeta}(t)\boldsymbol{\zeta}(t')^{T}]=\delta(t-t')\boldsymbol{B}$. The matrix A determines the mean behaviour of ξ and is assumed to be locally stable, i.e. all eigenvalues of A have negative real part. Together, the matrices A and B fully determine the power spectral density of fluctuations of the Ornstein–Uhlenbeck process.

We are interested in the case where the coefficients Aij and Bij are derived from a complex network of interactions with weights drawn at random, possibly with correlations. This framework encompasses a very general class of models with a wealth of real-world applications, including but not limited to the ecological focus we have here. The method we describe exploits the underlying network structure of A and B to deduce a self-consistent scheme of equations whose solution contains information on the power spectral density.

We start with the definition of the power spectral density Φ(ω) as the Fourier transform of the stationary covariance $\mathbb{E}[\boldsymbol{\xi}(t)\boldsymbol{\xi}(t+\tau)^{T}]$,
$$\mathbf{\Phi}(\omega)=\int_{-\infty}^{\infty}{\rm e}^{-{\rm i}\omega\tau}\,\mathbb{E}[\boldsymbol{\xi}(t)\boldsymbol{\xi}(t+\tau)^{T}]\,d\tau.$$
(16)
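As an illustration of this definition, the spectrum can be estimated directly from simulated trajectories. The following is a minimal sketch (not part of the original derivation), assuming an Euler–Maruyama discretisation and the standard Welch estimator; function and variable names are our own, and the stated relation to Φ(ω) holds only up to the one-/two-sided and angular-frequency conventions.

```python
# Sketch: estimate the power spectral density of Eq. (15) from a simulated
# trajectory, to be compared against the analytic result of Eq. (17) below.
# Assumes an Euler-Maruyama discretisation and positive-definite B.
import numpy as np
from scipy.signal import welch

def simulate_ou(A, B, dt=1e-2, n_steps=200_000, seed=0):
    """Simulate d(xi)/dt = A xi + zeta with E[zeta(t) zeta(t')^T] = delta(t-t') B."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    L = np.linalg.cholesky(B)            # L L^T = B, so L dW has covariance B dt
    xi = np.zeros(N)
    traj = np.empty((n_steps, N))
    for t in range(n_steps):
        xi = xi + A @ xi * dt + L @ rng.normal(size=N) * np.sqrt(dt)
        traj[t] = xi
    return traj

# One-sided Welch estimate for component i; up to convention factors,
# P_i(f) is approximately 2 * Phi_ii(omega = 2*pi*f).
# f, P = welch(traj[:, i], fs=1.0/dt, nperseg=8192)
```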
From ref. 33 on multivariate Ornstein–Uhlenbeck processes, we know that the power spectral density can also be written in the form of the matrix equation,
$$\mathbf{\Phi}(\omega)=(\boldsymbol{A}-{\rm i}\omega\boldsymbol{I})^{-1}\boldsymbol{B}\,(\boldsymbol{A}^{T}+{\rm i}\omega\boldsymbol{I})^{-1}.$$
(17)
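For moderate N, Eq. (17) can be evaluated directly by numerical matrix inversion, which provides a useful benchmark for the approximations derived below. A minimal sketch (illustrative names, not the authors' code):

```python
# Direct evaluation of Eq. (17): Phi(omega) = (A - i w I)^{-1} B (A^T + i w I)^{-1}.
import numpy as np

def psd_direct(A, B, omega):
    N = A.shape[0]
    I = np.eye(N)
    return np.linalg.inv(A - 1j * omega * I) @ B @ np.linalg.inv(A.T + 1j * omega * I)

# Example: a randomly drawn, locally stable A (the diagonal shift keeps the
# eigenvalues in the left half-plane with high probability) and white noise B = I.
rng = np.random.default_rng(1)
N = 100
A = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N)) - 2.0 * np.eye(N)
B = np.eye(N)
omegas = np.linspace(0.0, 5.0, 200)
mean_psd = [np.real(np.trace(psd_direct(A, B, w))) / N for w in omegas]
```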
In practice, this equation is difficult to use for large systems as large matrix inversion is analytically intractable and numerical schemes are slow and sometimes unstable. We take an alternative route by recasting Eq. (17) as a complex Gaussian integral reminiscent of problems appearing in the statistical physics of disordered systems. Our approach in the following is to treat ω as a fixed parameter and drop the explicit dependence from our notation. We begin by writing
$$\mathbf{\Phi}(\omega)=\frac{|\boldsymbol{A}-{\rm i}\omega\boldsymbol{I}|^{2}}{\pi^{N}|\boldsymbol{B}|}\int_{\mathbb{C}}e^{-\boldsymbol{u}^{\dagger}\mathbf{\Phi}^{-1}\boldsymbol{u}}\,\boldsymbol{u}\boldsymbol{u}^{\dagger}\prod_{i=1}^{N}du_{i}\,.$$
(18)
Simplification of the integrand is achieved by unpicking the matrix inversion in the exponent via a Hubbard–Stratonovich transformation46,47. To this end we recast the system in the language of statistical mechanics by introducing N complex-valued ‘spins’ ui and N auxiliary variables vi, with the ‘Hamiltonian’
$$\mathcal{H}(\boldsymbol{u},\boldsymbol{v})=-\boldsymbol{u}^{\dagger}(\boldsymbol{A}-{\rm i}\omega)\boldsymbol{v}+\boldsymbol{v}^{\dagger}(\boldsymbol{A}-{\rm i}\omega)^{\dagger}\boldsymbol{u}+\boldsymbol{v}^{\dagger}\boldsymbol{B}\boldsymbol{v}\,.$$
(19)
Introducing a bracket operator
$$\langle\cdots\rangle:=\frac{\int_{\mathbb{C}}e^{-\mathcal{H}(\boldsymbol{u},\boldsymbol{v})}(\cdots)\,d\boldsymbol{u}\,d\boldsymbol{v}}{\int_{\mathbb{C}}e^{-\mathcal{H}(\boldsymbol{u},\boldsymbol{v})}\,d\boldsymbol{u}\,d\boldsymbol{v}}\,,$$
(20)
we can obtain succinct expressions for the power spectral density Φ = 〈uu†〉 as well as the resolvent matrix $\boldsymbol{\mathcal{R}}=({\rm i}\omega-\boldsymbol{A})^{-1}=\langle\boldsymbol{u}\boldsymbol{v}^{\dagger}\rangle$. Thus we may write,
$$\mathbf{\Phi}=\frac{1}{\mathcal{Z}}\int_{\mathbb{C}}e^{-\mathcal{H}(\boldsymbol{u},\boldsymbol{v})}\,\boldsymbol{u}\boldsymbol{u}^{\dagger}\prod_{i=1}^{N}du_{i}\,dv_{i}\,,$$
(21)
where $\mathcal{Z}=|\boldsymbol{A}-{\rm i}\omega\boldsymbol{I}|^{2}/\pi^{2N}$.

This construction may seem laborious at first, but it unlocks a powerful collection of statistical mechanics tools, including the ‘cavity method’. The cavity method was originally introduced to analyse a model of spin glasses48,49; further applications include the analysis of the eigenvalue distribution of sparse matrices50,51,52. We exploit the network structure in a similar fashion in order to compute the power spectral density.

In our analysis, we find it convenient to split the Hamiltonian in Eq. (19) into the sum of its local contributions at sites i, $\mathcal{H}_{i}$, and contributions from interactions between i and j, $\mathcal{H}_{ij}$,
$$\mathcal{H}=\sum_{i}\mathcal{H}_{i}+\sum_{i\sim j}\mathcal{H}_{ij}\,.$$
(22)
These terms can be decomposed as $\mathcal{H}_{i}=\boldsymbol{w}_{i}^{\dagger}\boldsymbol{\chi}_{i}\boldsymbol{w}_{i}$ and $\mathcal{H}_{ij}=\boldsymbol{w}_{i}^{\dagger}\boldsymbol{\chi}_{ij}\boldsymbol{w}_{j}$, where we introduce the compound spins $\boldsymbol{w}_{i}=(u_{i},v_{i})^{T}$ and transfer matrices,
$$\boldsymbol{\chi}_{i}=\left(\begin{array}{cc}0&A_{ii}+{\rm i}\omega\\ -A_{ii}+{\rm i}\omega&B_{ii}\end{array}\right),\qquad \boldsymbol{\chi}_{ij}=\left(\begin{array}{cc}0&A_{ji}\\ -A_{ij}&B_{ij}\end{array}\right).$$
(23)
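For concreteness, these 2 × 2 transfer matrices are straightforward to construct numerically. The helpers below (our own illustrative names) are reused in the cavity iteration sketched after Eq. (33).

```python
# Transfer matrices of Eq. (23), built from the interaction matrix A and the
# noise covariance B at a fixed angular frequency omega.
import numpy as np

def chi_node(A, B, i, omega):
    return np.array([[0.0, A[i, i] + 1j * omega],
                     [-A[i, i] + 1j * omega, B[i, i]]])

def chi_edge(A, B, i, j):
    return np.array([[0.0, A[j, i]],
                     [-A[i, j], B[i, j]]])
```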
Let us focus on the power spectral density of a particular variable ξi, obtained from the diagonal element ϕi = Φii. For this we compute the single-site marginal fi by integrating over all other variables,
$$f_{i}(\boldsymbol{w}_{i})=\frac{1}{\mathcal{Z}}\int_{\mathbb{C}}e^{-\mathcal{H}}\prod_{j\ne i}d\boldsymbol{w}_{j}.$$
(24)
Alternatively, ϕi can be obtained as the top left entry of the covariance matrix $\mathbf{\Psi}_{i}=\langle\boldsymbol{w}_{i}\boldsymbol{w}_{i}^{\dagger}\rangle$. We write the covariance matrix as the integral,
$$\mathbf{\Psi}_{i}=\int_{\mathbb{C}}f_{i}(\boldsymbol{w}_{i})\,\boldsymbol{w}_{i}\boldsymbol{w}_{i}^{\dagger}\,d\boldsymbol{w}_{i}\,,$$
(25)
which can also be expressed in terms of a Gaussian integral,
$$\mathbf{\Psi}_{i}=\frac{1}{\pi^{2}|\mathbf{\Psi}_{i}|}\int_{\mathbb{C}}e^{-\boldsymbol{w}_{i}^{\dagger}\mathbf{\Psi}_{i}^{-1}\boldsymbol{w}_{i}}\,\boldsymbol{w}_{i}\boldsymbol{w}_{i}^{\dagger}\,d\boldsymbol{w}_{i}\,.$$
(26)
By comparing Eqs. (25) and (26) we find that
$$f_{i}(\boldsymbol{w}_{i})=\frac{1}{\pi^{2}|\mathbf{\Psi}_{i}|}\,e^{-\boldsymbol{w}_{i}^{\dagger}\mathbf{\Psi}_{i}^{-1}\boldsymbol{w}_{i}}\,.$$
(27)
We now insert Eq. (22) into Eq. (24) and obtain,
$$f_{i}(\boldsymbol{w}_{i})=\frac{1}{\pi^{2}|\mathbf{\Psi}_{i}|}\,e^{-\mathcal{H}_{i}}\int_{\mathbb{C}}\prod_{i\sim j}\left(e^{-\mathcal{H}_{ij}-\mathcal{H}_{ji}}f_{j}^{(i)}\,d\boldsymbol{w}_{j}\right),$$
(28)
where we write $f_{j}^{(i)}$ for the ‘cavity marginals’,
$$f_{j}^{(i)}(\boldsymbol{w}_{j})=\frac{1}{\mathcal{Z}^{(i)}}\int_{\mathbb{C}}e^{-\mathcal{H}^{(i)}}\prod_{k\ne i,j}d\boldsymbol{w}_{k}\,.$$
(29)
In essence, the above discussion amounts to organising the 2N integrals in Eq. (21) in a convenient way, with the advantage of providing a simple intuition for the role of the underlying network. The superscript (i) indicates that a quantity refers to the cavity network in which node i has been removed. We also use this notation for the ‘cavity covariance matrices’ $\mathbf{\Psi}_{jl}^{(i)}$ introduced in the following.

Next we perform the integration in Eq. (28) and compare to the form in Eq. (27). We thus obtain a recursion formula relating the covariance matrix Ψi to the cavity covariance matrices $\mathbf{\Psi}_{jl}^{(i)}$,
$$\mathbf{\Psi}_{i}=\left(\boldsymbol{\chi}_{i}-\sum_{\substack{i\sim j\\ i\sim l}}\boldsymbol{\chi}_{ij}\mathbf{\Psi}_{jl}^{(i)}\boldsymbol{\chi}_{li}\right)^{-1},$$
(30)
where the notation i ∼ j indicates that we sum over nodes j connected to node i. Unless the network has some specific additional structure, most real-world cases are ‘tree-like’ from the local viewpoint of a single node i. It is then highly unlikely that two neighbours j and l remain close to one another in the cavity network with node i removed, and thus $\mathbf{\Psi}_{jl}^{(i)}$ only gives non-zero contributions if j = l. We therefore reduce Eq. (30) and obtain for the covariance matrix,
$$\mathbf{\Psi}_{i}=\left(\boldsymbol{\chi}_{i}-\sum_{i\sim j}\boldsymbol{\chi}_{ij}\mathbf{\Psi}_{j}^{(i)}\boldsymbol{\chi}_{ji}\right)^{-1}.$$
(31)
Similarly, the cavity covariance matrix obeys the equation,
$$\mathbf{\Psi}_{j}^{(i)}=\left(\boldsymbol{\chi}_{j}-\sum_{j\sim k,\,k\ne i}\boldsymbol{\chi}_{jk}\mathbf{\Psi}_{k}^{(j)}\boldsymbol{\chi}_{kj}\right)^{-1}.$$
(32)
Here we use that $\mathbf{\Psi}_{k}^{(i,j)}=\mathbf{\Psi}_{k}^{(j)}$ when the nodes i and k are not connected. In other words, removing node j from the cavity network where node i is missing has the same effect as removing it from the full network. The system of Eqs. (31) and (32) describes a collection of nonlinear matrix equations that must be solved self-consistently.

For networks with sufficiently high connectivity (and, to a good approximation, even with modest connectivity), the removal of a single node does not affect the rest of the network, as its contribution is negligible compared to the full system. Hence the system in Eqs. (31) and (32) can be reduced to a smaller set of equations approximately satisfied by the matrices Ψi:
$$\mathbf{\Psi}_{i}\approx\left(\boldsymbol{\chi}_{i}-\sum_{i\sim j}\boldsymbol{\chi}_{ij}\mathbf{\Psi}_{j}\boldsymbol{\chi}_{ji}\right)^{-1}.$$
(33)
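Equation (33) lends itself to a simple damped fixed-point iteration. A minimal sketch (our own construction, not the authors' code), assuming the chi_node and chi_edge helpers from the sketch after Eq. (23):

```python
# Damped fixed-point iteration of Eq. (33); neighbours are read off from the
# non-zero off-diagonal entries of A.  Assumes chi_node and chi_edge as above.
import numpy as np

def cavity_psd(A, B, omega, n_iter=300, damping=0.5):
    N = A.shape[0]
    nbrs = [np.nonzero((A[i] != 0) & (np.arange(N) != i))[0] for i in range(N)]
    Psi = [np.linalg.inv(chi_node(A, B, i, omega)) for i in range(N)]
    for _ in range(n_iter):
        new = []
        for i in range(N):
            S = chi_node(A, B, i, omega)
            for j in nbrs[i]:
                S = S - chi_edge(A, B, i, j) @ Psi[j] @ chi_edge(A, B, j, i)
            new.append(np.linalg.inv(S))
        Psi = [damping * n + (1.0 - damping) * p for n, p in zip(new, Psi)]
    # phi_i is the top left entry of Psi_i (real up to numerical error)
    return np.array([P[0, 0] for P in Psi]).real
```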
The power spectral density ϕi can be obtained as the top left entry of Ψi.

In order to progress further, we now consider specific approximations that help us compute the power spectral density. First, we take a mean-field approach to obtain the mean power spectral density over all nodes in the network; we then use the mean-field result to compute a close approximation to the local power spectral density of a single node. Later, we adapt the method to partitioned networks in which nodes belong to different types of connected groups.

Mean field

In the following, we assume that all agents in the system behave the same on average. In practice, the self-interaction terms Aii are drawn from the same distribution for all agents, and likewise the terms Bii are governed by a single distribution. Interaction strengths and connections with other nodes in the network are also sampled identically for all agents (we have explored a large Lotka-Volterra ecosystem as an example of such a network). In the mean-field (MF) formulation we assume that the mean degree and the mean excess degree are approximately equal, and replace all quantities in Eqs. (31) and (32) with their average, Ψi = ΨMF ∀ i. We then obtain the following recursion equation,
$$\mathbf{\Psi}^{\rm MF}=\left[\mathbb{E}[\boldsymbol{\chi}_{i}]-\mathbb{E}\left(\sum_{i\sim j}\boldsymbol{\chi}_{ij}\mathbf{\Psi}^{\rm MF}\boldsymbol{\chi}_{ji}\right)\right]^{-1}.$$
(34)
In order to solve this equation, we parameterise,
$$\mathbf{\Psi}^{\rm MF}=\left(\begin{array}{cc}\phi&r\\ -\bar{r}&0\end{array}\right),$$
(35)
where the top left entry ϕ corresponds to the mean power spectral density, and we introduce r as the mean diagonal element of the resolvent matrix $\boldsymbol{\mathcal{R}}$. Finally, by inserting the ansatz of Eq. (35) into Eq. (34) we obtain,
$$\begin{aligned}\left(\begin{array}{cc}\phi&r\\ -\bar{r}&0\end{array}\right)^{-1}=&\left(\begin{array}{cc}0&\mathbb{E}[A_{ii}]+{\rm i}\omega\\ -\mathbb{E}[A_{ii}]+{\rm i}\omega&\mathbb{E}[B_{ii}]\end{array}\right)\\ &+c\left(\begin{array}{cc}0&\bar{r}\,\mathbb{E}[A_{ij}A_{ji}]\\ -r\,\mathbb{E}[A_{ij}A_{ji}]&\phi\,\mathbb{E}[A_{ij}^{2}]+(r+\bar{r})\,\mathbb{E}[A_{ij}B_{ij}]\end{array}\right),\end{aligned}$$
(36)
where c is the average degree (i.e. the number of connections) per node. Moreover, the expectations in the second term are to be taken over connected pairs i ∼ j (i.e. non-zero matrix entries). Noting that the inverse of the parameterisation in Eq. (35) is $\frac{1}{|r|^{2}}\bigl(\begin{smallmatrix}0&-r\\ \bar{r}&\phi\end{smallmatrix}\bigr)$, matching entries in Eq. (36) yields the equations,
$$\begin{aligned}\frac{\phi}{|r|^{2}}&=\mathbb{E}[B_{ii}]+c\left(\phi\,\mathbb{E}[A_{ij}^{2}]+2{\rm Re}(r)\,\mathbb{E}[A_{ij}B_{ij}]\right),\\ \frac{\bar{r}}{|r|^{2}}&=-\mathbb{E}[A_{ii}]+{\rm i}\omega-c\,r\,\mathbb{E}[A_{ij}A_{ji}].\end{aligned}$$
(37)
We solve the second equation in Eq. (37) for r and write the mean power spectral density in terms of r,
$$\begin{aligned}\phi&=|r|^{2}\,\frac{\mathbb{E}[B_{ii}]+2c\,{\rm Re}(r)\,\mathbb{E}[A_{ij}B_{ij}]}{1-c\,|r|^{2}\,\mathbb{E}[A_{ij}^{2}]},\\ r&=\frac{1}{2c\,\mathbb{E}[A_{ij}A_{ji}]}\left[-\mathbb{E}[A_{ii}]+{\rm i}\omega-\sqrt{(-\mathbb{E}[A_{ii}]+{\rm i}\omega)^{2}-4c\,\mathbb{E}[A_{ij}A_{ji}]}\right].\end{aligned}$$
(38)
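A direct transcription of Eq. (38) into code is given below (a sketch with our own parameter names; the branch of the complex square root is the one written in Eq. (38)):

```python
# Mean-field spectrum of Eq. (38).  Moments: EAii = E[A_ii], EBii = E[B_ii],
# EA2 = E[A_ij^2], EAA = E[A_ij A_ji], EAB = E[A_ij B_ij]; c = mean degree.
# Assumes EAA != 0; otherwise solve the second line of Eq. (37) for r directly.
import numpy as np

def mean_field_psd(omega, c, EAii, EBii, EA2, EAA, EAB):
    z = -EAii + 1j * omega
    r = (z - np.sqrt(z * z - 4.0 * c * EAA)) / (2.0 * c * EAA)     # second line of Eq. (38)
    phi = np.abs(r) ** 2 * (EBii + 2.0 * c * np.real(r) * EAB) \
          / (1.0 - c * np.abs(r) ** 2 * EA2)                        # first line of Eq. (38)
    return phi, r
```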
Equation (38) informs the first part of the results presented in the main text.

Single defect approximation

The single defect approximation (SDA) makes use of the mean-field approximation for the cavity fields, but retains local information about individual nodes. We parameterise as in Eq. (35) for a single node, and replace all other quantities with their mean-field approximations. Specifically, we obtain
$$\begin{aligned}\left(\begin{array}{cc}\phi_{i}^{\rm SDA}&r_{i}^{\rm SDA}\\ -\bar{r}_{i}^{\rm SDA}&0\end{array}\right)^{-1}=&\left(\begin{array}{cc}0&A_{ii}+{\rm i}\omega\\ -A_{ii}+{\rm i}\omega&B_{ii}\end{array}\right)\\ &+\sum_{i\sim j}\left(\begin{array}{cc}0&\bar{r}^{\rm MF}A_{ij}A_{ji}\\ -r^{\rm MF}A_{ij}A_{ji}&\phi^{\rm MF}A_{ij}^{2}+(r^{\rm MF}+\bar{r}^{\rm MF})A_{ij}B_{ij}\end{array}\right).\end{aligned}$$
(39)
We solve this equation for $\phi_{i}^{\rm SDA}$ and $r_{i}^{\rm SDA}$, which delivers
$$\begin{aligned}\frac{\phi_{i}^{\rm SDA}}{|r_{i}^{\rm SDA}|^{2}}&=\phi^{\rm MF}\sum_{i\sim j}A_{ij}^{2}+2{\rm Re}(r^{\rm MF})\sum_{i\sim j}A_{ij}B_{ij}+B_{ii}\,,\\ r_{i}^{\rm SDA}&=\left(A_{ii}+{\rm i}\omega+\bar{r}^{\rm MF}\sum_{i\sim j}A_{ij}A_{ji}\right)^{-1}.\end{aligned}$$
(40)
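Given the mean-field values ϕMF and rMF (for instance from the mean_field_psd sketch above), Eq. (40) yields a node-by-node estimate. A minimal sketch with illustrative names:

```python
# Single defect approximation, Eq. (40): per-node spectra from local couplings
# and the mean-field quantities phi_mf, r_mf.
import numpy as np

def sda_psd(A, B, omega, phi_mf, r_mf):
    N = A.shape[0]
    phi = np.empty(N)
    for i in range(N):
        nbrs = np.nonzero((A[i] != 0) & (np.arange(N) != i))[0]
        r_i = 1.0 / (A[i, i] + 1j * omega
                     + np.conj(r_mf) * np.sum(A[i, nbrs] * A[nbrs, i]))
        phi[i] = np.abs(r_i) ** 2 * (phi_mf * np.sum(A[i, nbrs] ** 2)
                                     + 2.0 * np.real(r_mf) * np.sum(A[i, nbrs] * B[i, nbrs])
                                     + B[i, i])
    return phi
```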
Partitioned network

Previously we assumed that all nodes in the network are interchangeable in distribution. However, many real-world applications feature agents with different properties, imposing a higher-level structure on the network. We capture this by partitioning nodes into distinct groups that interact with each other (see the section Trophic structure model for a simple example).

In order to handle different connected groups we make use of the cavity method as in Eqs. (31) and (32). In particular, we split the sum in the second term on the right-hand side of these equations into contributions from each group in the partitioned network. Let M denote the number of subgroups Vm in the partitioned network; then we write,
$$\begin{aligned}\mathbf{\Psi}_{i}&=\left(\boldsymbol{\chi}_{i}-\sum_{m=1}^{M}\sum_{\substack{i\sim j\\ j\in V_{m}}}\boldsymbol{\chi}_{ij}\mathbf{\Psi}_{j}^{(i)}\boldsymbol{\chi}_{ji}\right)^{-1},\\ \mathbf{\Psi}_{j}^{(i)}&=\left(\boldsymbol{\chi}_{j}-\sum_{m=1}^{M}\sum_{\substack{j\sim k\\ k\in V_{m}}}\boldsymbol{\chi}_{jk}\mathbf{\Psi}_{k}^{(j)}\boldsymbol{\chi}_{kj}\right)^{-1}.\end{aligned}$$
(41)
Similar to the previous sections, we replace all quantities with a mean-field average $\mathbf{\Psi}_{m}^{\rm MF}$, but for each group separately. Hence we obtain M equations of the form
$$\mathbf{\Psi}_{i}^{\rm MF}=\left[\mathbb{E}[\boldsymbol{\chi}_{i}]-\mathbb{E}\left(\sum_{m=1}^{M}\sum_{\substack{i\sim j\\ j\in V_{m}}}\boldsymbol{\chi}_{ij}\mathbf{\Psi}_{m}^{\rm MF}\boldsymbol{\chi}_{ji}\right)\right]^{-1}.$$
(42)
In order to compute the mean power spectral density for different groups separately, we use a parameterisation as in Eq. (35) for each group. Therefore we have,
$$\mathbf{\Psi}_{m}^{\rm MF}=\left(\begin{array}{cc}\phi_{m}&r_{m}\\ -\bar{r}_{m}&0\end{array}\right),$$
(43)
for all m = 1, …, M. This delivers 2M equations to solve for all rm and ϕm. Numerically this is straightforward, although algebraically long-winded in the general case. The equations simplify in special cases, however; in the section Trophic structure model we demonstrate this method for a bipartite network, where the lack of intra-group interactions simplifies the analysis.

Large Lotka-Volterra ecosystem

Model description

First, we define the framework for a general Lotka-Volterra ecosystem with N species and a large but finite system size V ≫ 1. This parameter can be interpreted as a scaling factor for the fluctuation amplitude; larger systems therefore exhibit higher stability and greater quantitative reliability of our analytic results. Let Xi denote the number of individuals and xi = Xi/V the density of species i = 1, …, N. We start from the following set of reactions that define the underlying stochastic dynamics of the system:
$$\begin{aligned}&X_{i}\,\xrightarrow{b_{i}}\,2X_{i}&&({\rm birth})\\ &2X_{i}\,\xrightarrow{R_{ii}}\,X_{i}&&({\rm death})\\ &X_{i}+X_{j}\,\xrightarrow{R_{ij}}\,\left\{\begin{array}{ll}2X_{i}+X_{j}&({\rm mutualism}),\\ X_{i}&({\rm competition}),\\ 2X_{i}&({\rm predation}).\end{array}\right.\end{aligned}$$
(44)
The self-interactions are governed by the birth rate bi > 0 and the density-dependent mortality rate Rii > 0. Furthermore, we define three interaction types between species i and j, namely mutualism, competition and predation. In the case of mutualistic interactions, both species benefit from each other, whereas under competition each species has a higher mortality rate that depends on the density of the other species. For predator-prey pairs, the predator species benefits from the death of the prey species. Predator and prey are assigned randomly, such that species i is equally likely to be a predator or prey of species j.

With probability Pc we assign an interaction rate Rij > 0 to the species pair (i, j), and with probability 1 − Pc there is no interaction between species i and j (i.e. Rij = 0). In other words, each species has on average c = NPc interaction partners. The reaction rates are i.i.d. random variables drawn from a half-normal distribution $|\mathcal{N}(0,\sigma^{2})|$, with mean reaction rate $\mu=\mathbb{E}[R_{ij}]=\sigma\sqrt{2/\pi}$ and raw second moment $\sigma^{2}=\mathbb{E}[R_{ij}^{2}]$. For each interacting pair, the interaction type is chosen such that the proportion of predator-prey pairs is p ∈ [0, 1], and all non-predator-prey interactions are split equally between mutualistic and competitive interactions (i.e. the overall proportion of mutualistic and of competitive interactions is each (1 − p)/2). Lastly, we define the symmetry parameter γ = 1 − 2p, such that γ = −1 if all interactions are of predator-prey type (p = 1), and γ = +1 if there are no predator-prey interactions (p = 0). In the mixed case where predator-prey and mutualistic/competitive interactions occur in equal proportion (p = 1/2), we have γ = 0. Later we will see that γ is equivalent to the correlation of signed interaction strengths.

In the limit V → ∞, the dynamics of the species densities xi obey the ordinary differential equations,
$$\frac{dx_{i}}{dt}=x_{i}\left(b_{i}+\sum_{j=1}^{N}\alpha_{ij}x_{j}\right),$$
(45)
where αij are the interaction coefficients with ∣αij∣ = ∣αji∣ = Rij. The signs of the interaction coefficients are determined by the type of interaction between species i and j: for mutualistic interactions we have αij = αji > 0, for competitive interactions αij = αji < 0, and for predator-prey pairs the coefficients have opposite signs, the predator gaining (αij > 0) from the prey (αji < 0).
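To make the construction concrete, the following sketch draws a random interaction matrix α consistent with the model description above (connectance Pc = c/N, half-normal rates, a fraction p of predator-prey pairs, the remainder split equally between mutualism and competition). The function name and the handling of the diagonal are our own illustrative choices.

```python
# Random Lotka-Volterra interaction matrix for Eq. (45).  Off-diagonal signs
# encode the interaction type; the diagonal holds the density-dependent
# mortality, alpha_ii = -R_ii (an illustrative choice for the self-interaction).
import numpy as np

def draw_interactions(N, c, sigma, p, seed=0):
    rng = np.random.default_rng(seed)
    alpha = np.zeros((N, N))
    Pc = c / N                                       # connection probability
    for i in range(N):
        alpha[i, i] = -abs(rng.normal(0.0, sigma))   # R_ii from the half-normal
        for j in range(i + 1, N):
            if rng.random() >= Pc:
                continue                             # no interaction: alpha_ij = 0
            R = abs(rng.normal(0.0, sigma))          # R_ij ~ |N(0, sigma^2)|
            u = rng.random()
            if u < p:                                # predator-prey: opposite signs
                s = 1.0 if rng.random() < 0.5 else -1.0
                alpha[i, j], alpha[j, i] = s * R, -s * R
            elif u < p + 0.5 * (1.0 - p):            # mutualism: both positive
                alpha[i, j] = alpha[j, i] = R
            else:                                    # competition: both negative
                alpha[i, j] = alpha[j, i] = -R
    return alpha
```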