Time: Thursdays at 13:00
Location: MB-503
If requested, talks may be streamed on Zoom (https://qmul-ac-uk.zoom.us/j/81810915169) - please email the organisers.
Organisers: Lennart Dabelow and Oliver Jenkinson
We are given a graph; now pick any involution (an automorphism of order two) and delete all of the vertices which are moved by this involution. Repeat with the new graph until the current graph is involution-free. This involution-free graph is uniquely determined (up to isomorphism) by the original, i.e., it is independent of the choice of involution at each stage. This is proved using a lemma of Newman on the confluence of reduction systems.
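As a concrete illustration, here is a brute-force sketch of this reduction in Python (my own illustration, not from the talk); it searches all vertex permutations for a nontrivial involutive automorphism, so it is only feasible for very small simple graphs.

```python
# Brute-force sketch: an involution is a nontrivial graph automorphism
# sigma with sigma(sigma(v)) = v for all v; we delete the vertices it moves
# and repeat. Exponential in |V|: for tiny simple graphs only.
from itertools import permutations

def find_involution(vertices, edges):
    """Return a nontrivial involutive automorphism as a dict, or None."""
    vs = sorted(vertices)
    es = {frozenset(e) for e in edges}
    for perm in permutations(vs):
        sigma = dict(zip(vs, perm))
        if (all(sigma[sigma[v]] == v for v in vs)
                and any(sigma[v] != v for v in vs)
                and all(frozenset(map(sigma.get, e)) in es for e in es)):
            return sigma
    return None

def involution_free_core(vertices, edges):
    """Repeatedly delete all vertices moved by some involution."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}
    while (sigma := find_involution(vertices, edges)) is not None:
        vertices -= {v for v in vertices if sigma[v] != v}
        edges = {e for e in edges if e <= vertices}   # induced subgraph
    return vertices, edges

# Path 1-2-3: swapping the endpoints is an involution, so only the middle
# vertex survives; by the theorem the result is independent of the choices.
print(involution_free_core({1, 2, 3}, {(1, 2), (2, 3)}))
```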
We compute the distribution of the partition functions for a class of one-dimensional Random Energy Models (REM) with logarithmically correlated random potential, above and at the glass transition temperature. The random potential sequences represent various versions of 1/f noise generated by sampling the two-dimensional Gaussian Free Field (2dGFF) along various planar curves. The method is based on an analytical continuation of the Selberg integral from positive integers to the complex plane. In particular, we unveil a duality relation satisfied by a suitable generating function of free energy cumulants in the high-temperature phase. It reinforces the freezing scenario hypothesis for that generating function, from which we derive the distribution of extrema for the 2dGFF on the [0,1] interval and on the unit circle. If time permits, the relation to velocity statistics in decaying Burgers turbulence and to the distribution of the length of curves in Liouville quantum gravity will be discussed briefly. The results reported were obtained in collaboration with J.-P. Bouchaud, P. Le Doussal, and A. Rosso.
Many-body systems involving long-range interactions, such as self-gravitating particles or unscreened plasmas, give rise to equilibrium and nonequilibrium properties that are not seen in short-range systems. One such property is that long-range systems can have negative heat capacity, which implies that these systems cool down by absorbing energy. This talk will discuss the origin of this unusual property, as well as some of its connections with phase transitions, metastability, and the nonequivalence of statistical ensembles. It will be seen that the essential difference between long- and short-range systems is that the entropy can be non-concave as a function of the energy for long-range systems. For short-range systems, the entropy is always concave.
We present a study of a delay differential equation (DDE) model for the glacial cycles of the Pleistocene climate. The model is derived from the Saltzman and Maasch 1988 model, which is an ODE system containing a chain of first-order reactions. Feedback chains of this type limit to a discrete delay for long chains. We approximate the chain by a delay, resulting in a scalar DDE for ice mass with fewer parameters than the original ODE model. Through bifurcation analysis under varying the delay, we discover a previously unexplored bistable region and consider solutions in this parameter region when subjected to periodic and astronomical forcing. The astronomical forcing is highly quasiperiodic, containing many overlapping frequencies from variations in the Earth's orbit. We find that under the astronomical forcing, the model exhibits a transition in time that resembles what is seen in paleoclimate records, known as the Mid-Pleistocene Transition. This transition is a distinct feature of the quasiperiodic forcing, as confirmed by the change in sign of the leading finite-time Lyapunov exponent. We draw connections between this transition and non-smooth saddle-node bifurcations of quasiperiodically forced 1D maps.
Here I present my ongoing work on estimating mutation rates per cell division by combining stochastic processes, Bayesian methods and genomic sequencing data.
Human cancers usually contain hundreds of billions of cells at diagnosis. During tumour growth these cells accumulate thousands of mutations, errors in the DNA, making each tumour cell unique. This heterogeneity is a major source of evolution within single tumours, of subsequent progression and of possible treatment resistance. Recent technological advances such as increasingly cheaper genome sequencing allow measuring some of this heterogeneity. However, the theoretical understanding and interpretation of the available data remain mostly unclear. For example, the most basic evolutionary properties of human tumours, such as mutation and cell survival rates or tumour ages, are mostly unknown. Here I will present some mathematical modelling of the underlying stochastic processes. In more detail, I will construct the distribution of mutational distances in a tumour that can be measured from multi-region sequencing. I show that these distributions can be understood as random sums of independent random variables. In combination with appropriate sequencing data and Bayesian inference based on our theoretical results, some of the evolutionary parameters can be recovered for tumours of single patients.
Systems with delayed interactions play a prominent role in a variety of fields, ranging from traffic and population dynamics to gene regulatory and neural networks and encrypted communications. When subjecting a semiconductor laser to reflections of its own emission, a delay results from the propagation time of the light in the external cavity. Because of their experimental accessibility and multiple applications, semiconductor lasers with delayed feedback or coupling have become one of the most studied delay systems. One of the most experimentally accessible properties to characterise these chaotic dynamics is the autocorrelation function. However, the relationship between the autocorrelation function and other nonlinear properties of the system is generally unknown. Therefore, although the autocorrelation function is often one of the key characteristics measured, it is unclear which information can be extracted from it. Here, we present a linear stochastic model with delay that allows the autocorrelation function to be derived analytically. This linear model captures fundamental properties of the experimentally obtained autocorrelation function of a laser with delayed feedback, such as the shift and asymmetric broadening of the different delay echoes. Fitting this analytical autocorrelation to its experimental counterpart, we find that the model reproduces the experimental data surprisingly well in most dynamical regimes of the laser. Moreover, it is possible to establish a relation between the set of parameters of the linear model and dynamical properties of the semiconductor laser, such as the relaxation oscillation frequency and the damping rate.
Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative, but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases beyond a threshold value, as occurs with age. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and to express the threshold value of transverse cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain micro-reentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment. Finally, the model is able to explain clinically observed patient variability with respect to the time course of AF.
The Graph Minors Project of Robertson and Seymour is one of the highlights of twentieth-century mathematics. In a long series of mostly difficult papers they prove theorems that give profound insight into the qualitative structure of members of proper minor-closed classes of graphs. This insight enables them to prove some remarkable banner theorems, one of which is that in any infinite set of graphs there is one that is a minor of another; in other words, graphs are well-quasi-ordered under the minor order.
A canonical way to obtain a matroid is from a set of columns of a matrix over a field. If each column has at most two nonzero entries there is an obvious graph associated with the matroid; thus it is not hard to see that matroids generalise graphs. Robertson and Seymour always believed that their results were special cases of more general theorems for matroids obtained from matrices over finite fields. For over a decade, Jim Geelen, Bert Gerards and I have been working towards achieving this generalisation. In this talk I will discuss our success in achieving the generalisation for binary matroids, that is, for matroids that can be obtained from matrices over the 2-element field.
In this talk I will give a very general overview of my work with Geelen and Gerards. I will not assume familiarity with matroids, nor will I assume familiarity with the results of the Graph Minors Project.
Abstract: In a seminal paper, Alon and Tarsi introduced an algebraic technique for proving upper bounds on the choice number of graphs (and thus, in particular, upper bounds on their chromatic number). The upper bound on the choice number of G obtained via their method was later coined the Alon-Tarsi number of G and was denoted by AT(G). They provided a combinatorial interpretation of this parameter in terms of the Eulerian sub-digraphs of an appropriate orientation of G. Shortly afterwards, for the special case of line graphs of d-regular d-edge-colourable graphs, Alon gave another interpretation of AT(G), this time in terms of the signed d-colourings of the line graph. In the talk I will generalize both results. I will then use these results to prove some choosability results. In the first part of the talk I will introduce the chromatic, choice, and Alon-Tarsi numbers of graphs. In the second part I will state the two generalizations as well as some applications.
The Ramsey number r_k(s,n) is the minimum N such that every red-blue colouring of the k-tuples of an N-element set contains either a red set of size s or a blue set of size n, where a set is called red (blue) if all k-tuples from this set are red (blue). Determining or estimating Ramsey numbers is one of the central problems in combinatorics. In this talk we discuss recent progress on several old and very basic hypergraph Ramsey problems.
Joint work with D. Conlon and J. Fox.
We consider discrete Schrödinger operators with bounded potentials on large finite boxes $N^d$. We show that it is possible to delocalize most eigenfunctions with a uniformly small deterministic perturbation of the potential. This result is obtained from a dynamical result about ergodic Schrödinger operators on $\mathbb{Z}^d$ via a correspondence principle in the spirit of Furstenberg. Our proof is based on an optimization technique which makes use of a “Hellmann-Feynman formula” for the integrated density of states. This is joint work with David Damanik.
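For readers unfamiliar with the object, the following small sketch (an illustration of mine, not the speakers' code) builds a one-dimensional discrete Schrödinger operator with a bounded random potential as a dense matrix and measures the localisation of its eigenfunctions via the inverse participation ratio.

```python
# 1D discrete Schrodinger operator (H psi)(n) = psi(n+1) + psi(n-1) + V(n) psi(n)
# on a box of N sites, with a bounded i.i.d. potential; dense diagonalisation.
import numpy as np

N = 500
rng = np.random.default_rng(5)
V = rng.uniform(-2.0, 2.0, N)                            # bounded potential
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) + np.diag(V)
evals, evecs = np.linalg.eigh(H)

# Inverse participation ratio of each normalised eigenvector:
# ~1/N for delocalised states, O(1) for localised ones.
ipr = (evecs ** 4).sum(axis=0)
print(f"IPR range: {ipr.min():.4f} to {ipr.max():.4f}")
```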
I will discuss the ultimate limits for secret-key generation, qubit transmission, and entanglement distribution over a lossy quantum channel. I will then consider the extensions of these limits to repeater chains and quantum networks. In the final part of the seminar, I will cover the mathematical tools used to establish these results by exploiting recently introduced proof techniques based on teleportation simulation.
The word complexity function p(n) of a subshift X measures the number of n-letter words appearing in sequences in X, and X is said to have linear complexity if p(n)/n is bounded. It has been known since the work of Ferenczi that linear word complexity highly constrains the dynamical behavior of a subshift.
In recent work with Darren Creutz, we show that if X is a transitive subshift with limsup p(n)/n < 3/2, then X is measure-theoretically isomorphic to a compact abelian group rotation. On the other hand, limsup p(n)/n = 3/2 can occur even for measurably weakly mixing X. Our proofs rely on a substitutive/S-adic decomposition for such subshifts.
I’ll give some background and history on linear complexity, discuss our results, and describe several ways in which 3/2 turns out to be a key threshold for limsup p(n)/n across different types of dynamical behavior.
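As a quick illustration of the complexity function (my own sketch, not from the talk), one can estimate p(n) by counting distinct n-letter factors of a long sample word; the Fibonacci substitution 0 -> 01, 1 -> 0 gives p(n) = n + 1, so p(n)/n tends to 1, safely below the 3/2 threshold.

```python
# Estimate p(n) by counting distinct length-n factors of a sample word.
def fibonacci_word(iterations=20):
    """Prefix of the fixed point of 0 -> 01, 1 -> 0 (a Sturmian word)."""
    w = "0"
    for _ in range(iterations):
        # apply the substitution simultaneously via a placeholder
        w = w.replace("0", "0!").replace("1", "0").replace("!", "1")
    return w

def word_complexity(w, n):
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fibonacci_word()
for n in (1, 2, 5, 10, 50):
    print(n, word_complexity(w, n))   # prints n + 1: linear complexity
```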
We study the onset of non-equilibrium phase transitions in a 3D particle system that constitutes a variant of a classical billiard. N point particles move at constant speed inside spherical urns connected by cylindrical channels. The microscopic dynamics differs from a standard 3D billiard because of a kind of Maxwell’s demon that mimics clogging in one of the two channels when the number of particles residing in it exceeds a fixed threshold. Non-equilibrium phase transitions arise and are sustained by stationary particle currents moving from the lower- to the higher-density urn through the channel haunted by the demon. The coexistence of different phases and their stability are obtained analytically within the proposed kinetic theory framework, and are confirmed with remarkable accuracy by numerical simulations. The considered dynamical system describes a kind of experimentally realizable Maxwell’s demon and may unveil new perspectives in transport theory and mass separation technologies.
Oncogene amplification on circular extrachromosomal DNA (ecDNA) has been linked to poor prognosis and higher treatment resistance in multiple types of human cancer. ecDNA are mobile genetic elements outside the chromosome that lack centromeres and, as a result, segregate randomly into daughter cells during mitotic cell division. While random segregation of ecDNA has been shown to drive intratumour ecDNA copy-number heterogeneity in cancer cell populations, its effect on phenotypic heterogeneity is less well understood. In this talk, I will present our work on modelling stochastic gene expression and show how this predicts phenotypic heterogeneity beyond copy-number heterogeneity in cancer cells.
The ability of bacteria to become resistant to previously successful antibiotic treatments is an urgent and increasing worldwide problem. Solutions can be sought via a number of methods including, for example, identifying novel antibiotics, re-engineering existing antibiotics or developing alternative treatment methods. The nonlinear interactions involved in infection and treatment render it difficult to predict the success of any of these methods without the use of computational tools in addition to more traditional experimental work. We use mathematical modelling to aid in the development of anti-virulence treatments which, unlike conventional antibiotics that directly target a bacterium's survival, may instead attenuate bacteria and prevent them from being able to cause infection or evade antibiotics. Many of these approaches, however, are only partially successful when tested in infection models. Our group are studying a variety of potential targets, including preventing bacteria from binding to host cells, inhibiting the formation of persister cells (these can tolerate the presence of antibiotics) and blocking efflux pump action (a key mechanism of antimicrobial resistance). I will present results that illustrate how mathematical modelling can suggest ways in which to improve the efficacy of these approaches.
A dynamical system is usually represented by a probabilistic model whose unknown parameters must be estimated using statistical methods. When measuring the uncertainty of such parameter estimates, the bootstrap stands out as a simple but powerful technique. In this talk, I will discuss the bootstrap for Birkhoff averages of expanding maps and establish not only its consistency but also its second-order accuracy using the continuous first-order Edgeworth expansion.
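A naive version of the idea can be sketched in a few lines (an illustration only; the map, the observable and the i.i.d. resampling are my choices, and i.i.d. resampling ignores orbit correlations, which is exactly the subtlety such results must address):

```python
# Tripling map orbit (3x mod 1 avoids the floating-point collapse that
# 2x mod 1 suffers in binary arithmetic), observable phi, and a naive
# i.i.d. bootstrap of the Birkhoff average.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.random()
orbit = np.empty(n)
for i in range(n):
    orbit[i] = x
    x = (3.0 * x) % 1.0

phi = np.cos(2 * np.pi * orbit)
birkhoff = phi.mean()                      # Birkhoff average of phi

boot = np.array([rng.choice(phi, size=n, replace=True).mean()
                 for _ in range(2_000)])   # bootstrap replicates
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"Birkhoff average {birkhoff:.4f}, naive 95% CI [{lo:.4f}, {hi:.4f}]")
```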
Cut and project sets are obtained by taking an irrational slice through a lattice and projecting it to a lower dimensional subspace. This usually results in a set which has no translational period, even though it retains a lot of the regularity of the lattice. As such, cut and project sets are one of the archetypical examples of point sets featuring aperiodic order. In this talk I will give an overview of the definition and basic properties of cut and project sets, to demonstrate how they can be naturally studied in the context of dynamical systems, discrete geometry, harmonic analysis, or Diophantine approximation, for example, depending on one's own tastes and interests. As I will explain in the talk, all of these contexts are relevant to a new result of mine on discrepancy estimates for density of points in cut and project sets (the work is joint with Jean Lagacé, with an appendix by Tobias Hartnick and Michael Bjorklund).
When subject to slow, low-amplitude, oscillatory shear at low temperatures, amorphous solids reach special memory-retaining states where plastic particle rearrangements become repetitive and the system reaches the same configuration after one or more forcing cycles. However, in realistic environments, these materials are subject to thermal and mechanical noise, and shearing is not necessarily slow. Therefore, understanding the role of noise in altering the response of a material is of prime importance in materials science. It is as yet unknown how noise affects the memory-formation ability of an amorphous system. Here, using non-equilibrium molecular dynamics simulations of two-dimensional discs, we show that under oscillatory shear, and up to a certain finite temperature, the material can still form and retain memory. At elevated temperatures, however, the system makes a probabilistic escape from the memory-encoded state into a transient state, and can reach a new memory state after some time. Importantly, we propose an interesting measure for the stability of the memory-encoded states which can easily be tested in experimental setups.
Koopman operators globally linearise nonlinear dynamical systems, and their spectral information can be a powerful tool for analysing and decomposing them. However, Koopman operators are infinite-dimensional, and computing their spectral information is a considerable challenge. We introduce measure-preserving extended dynamic mode decomposition (mpEDMD), the first Galerkin method whose eigendecomposition converges to the spectral quantities of Koopman operators for general measure-preserving dynamical systems. mpEDMD is a data-driven and structure-preserving algorithm based on an orthogonal Procrustes problem that enforces measure-preserving truncations of Koopman operators using a general dictionary of observables. It is flexible and easy to use with any pre-existing DMD-type method and with different data types. We prove the convergence of mpEDMD for projection-valued and scalar-valued spectral measures, spectra, and Koopman mode decompositions. For the case of delay embedding (Krylov subspaces), our results include convergence rates of the approximation of spectral measures as the size of the dictionary increases. We demonstrate mpEDMD on a range of challenging examples, its increased robustness to noise compared with other DMD-type methods, and its ability to capture the energy conservation and cascade of a turbulent boundary layer flow with Reynolds number > 60,000 and state-space dimension > 100,000. Finally, if time permits, we discuss how this algorithm forms part of a broader program on the foundations of infinite-dimensional spectral computations.
Ref: Colbrook, Matthew J. "The mpEDMD algorithm for data-driven computations of measure-preserving dynamical systems." SIAM Journal on Numerical Analysis 61.3 (2023): 1585-1608.
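For orientation, the core linear-algebra ingredient, the orthogonal Procrustes problem, can be sketched as follows; how mpEDMD assembles its matrices from dictionary evaluations and the Gram matrix is spelled out in the paper cited above, and this sketch is not that code.

```python
# Orthogonal Procrustes: the orthogonal U minimising ||U B - C||_F is
# U = W @ Vt, where W, S, Vt = svd(C @ B.T).
import numpy as np

def procrustes(B, C):
    """Best orthogonal U with U @ B ~ C in the Frobenius norm."""
    W, _, Vt = np.linalg.svd(C @ B.T)
    return W @ Vt

# Sanity check: recover a planted rotation from noisy snapshot pairs.
rng = np.random.default_rng(0)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = rng.standard_normal((2, 200))
C = R @ B + 0.01 * rng.standard_normal((2, 200))
print(np.linalg.norm(procrustes(B, C) - R))   # close to zero
```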
Space-time long-range correlations commonly occur in complex systems. Their origin might be purely probabilistic or dynamical in nature. In such cases, (i) the corresponding (additive) Boltzmann-Gibbs-von Neumann-Shannon entropy typically behaves anomalously, either as a function of the size of the system and/or as a function of time; and/or (ii) the standard central limit theorem and large deviation theory do not apply. The assumption of nonadditive entropies, e.g. S_q or S_delta, appears, in many if not all cases, to elegantly overcome such difficulties. We will illustrate these issues through classical, quantum, and cosmological systems. An updated bibliography is available at https://tsallis.cbpf.br/biblio.htm
Consider an intermittent map with two fixed points with the same neutrality in the regime where the absolutely continuous invariant measure is infinite. Empirical measures do not converge weak*; the limit points are the convex combinations of the Dirac deltas at the fixed points. Nevertheless, the pushforwards of Lebesgue measure (or any absolutely continuous probability measure) converge weak* to a specific convex combination of the Dirac deltas. (This is joint work with Coates, Luzzatto and Talebi.)
Note the unusual day and time!
Two competing types of interactions often play an important part in shaping system behaviour, such as activatory and inhibitory functions in biological systems. Hence, signed networks, where each connection can be either positive or negative, have become popular models in recent years. However, the primary focus of the literature has been on unweighted and structurally balanced networks, in which all cycles have an even number of negative edges. Here, we first introduce a classification of signed networks into balanced, antibalanced, or strictly unbalanced ones, and illustrate the shared spectral properties within each type. We then apply these results to characterise the dynamics of random walks on signed networks: local consensus can be achieved asymptotically when the graph is structurally balanced, while global consensus is obtained when it is strictly unbalanced. Finally, we will show that the results can be generalised to networks with complex-valued weights.
The universal and irreversible tendency of closed systems towards thermal equilibrium is a well-established empirical fact in the macroscopic world, but in spite of more than a century of theoretical efforts, it has still not been satisfactorily reconciled with the basic laws of physics, which govern the microscopic world and which are fundamentally reversible. In this talk, some interesting new developments in the context of quantum many-body systems will be highlighted.
TBA
Alan Cobham proved in 1969 that finite automata can only recognize sets within one number system. He did not think too much of this result and concluded that "insofar as the recognition of sets of numbers goes, finite automata are weak and somewhat unnatural." This has not stopped mathematicians from generalizing Cobham's theorem in many directions, including substitution minimal sets in symbolic dynamics.
Cobham's paper is self-contained and short. The proof including the necessary lemmas only takes five pages and does not require any deep notions. In spite of this, Samuel Eilenberg somewhat mysteriously deemed it "long, correct, and hard" and asked for a "more reasonable proof". In this talk I will discuss what Eilenberg may have meant by this and why, in spite of all the work on this theorem, its proper generalization has not yet been found.
Eigenvalues of transfer operators, known as Pollicott-Ruelle resonances, provide insight into the long-term behaviour of the underlying dynamical system, in particular determining its exponential mixing rates. In this talk, I will present a complete description of Pollicott-Ruelle resonances for a class of rational Anosov diffeomorphisms on the two-torus. This allows us to show that every homotopy class of two-dimensional Anosov diffeomorphisms contains (non-linear) maps with the sequence of resonances decaying stretched-exponentially, decaying exponentially, or consisting only of trivial resonances.
Recently, much progress has been made in the mathematical study of self-consistent transfer operators which describe the mean-field limit of globally coupled maps. Conditions for the existence of equilibrium measures (fixed points for the self-consistent transfer operator) have been given, and their stability under perturbations and linear response have been investigated. In this talk, I am going to describe some novel developments on dynamical systems made of N uniformly expanding coupled maps when N is finite but large. I will introduce self-consistent transfer operators that approximate the evolution of measures under the dynamics, and quantify this approximation explicitly with respect to N. Using this result, I will show that uniformly expanding coupled maps satisfy propagation of chaos when N tends to infinity, and I will characterize the absolutely continuous invariant measures for the finite dimensional system.
Reciprocity is a hallmark of thermal equilibrium, but ubiquitously broken in far-from-equilibrium systems. I will give some insights into how nonreciprocal interactions can fundamentally affect the phases and fluctuations of many-body systems. Using a two-dimensional XY model, where spins interact only with neighbours within their 'vision cones', we show how nonreciprocity can lead to true long-range order and directional propagation of defects [1]. In binary fluids, nonreciprocal coupling between fluid components can cause the emergence of travelling waves through PT symmetry-breaking phase transitions. Using a hydrodynamic model, we find that fluctuations not only inflate, as in equilibrium criticality, but also develop an asymptotically increasing time-reversal asymmetry [2-4] and associated surging entropy production. We can trace the formation of dissipative patterns and the emergence of irreversible fluctuations to the same origin, namely a mode-coupling mechanism near critical exceptional points.
[1] Loos, Klapp, Martynec, Long-Range Order and Directional Defect Propagation in the Nonreciprocal XY Model with Vision Cone Interactions, Phys. Rev. Lett. 130, 198301 (2023).
[2] Suchanek, Kroy, Loos, Irreversible mesoscale fluctuations herald the emergence of dynamical phases, Phys. Rev. Lett., in press (2023).
[3] Suchanek, Kroy, Loos, Time-reversal and parity-time symmetry breaking in non-Hermitian field theories, Phys. Rev. E, in press (2023).
[4] Suchanek, Kroy, Loos, Entropy production in the nonreciprocal Cahn-Hilliard model, Phys. Rev. E, in press (2023).
The first application of dimension groups to Cantor dynamical systems was in the work of I. Putnam in 1989, where he used dimension groups to study interval exchange transformations (IETs). In his subsequent works with T. Giordano, R. I. Herman, and C. F. Skau, these ideas, based on constructing Kakutani-Rokhlin (K-R) partitions for IETs, were developed for the general case of Cantor systems. A breakthrough of this theory was the proof that every uniquely ergodic Cantor minimal system is orbit equivalent to either a Denjoy or an odometer system: a topological analogue of the well-known Dye's theorem in ergodic theory. The existence of sequences of K-R partitions, which is the bridge to dimension groups, has been established for every zero-dimensional system (by the works of S. Bezuglyi, K. Medynets, T. Downarowicz, O. Karpel and T. Shimomura) and is a strong tool for studying the continuous and measurable spectrum of Cantor systems, as well as topological factoring between them. In this talk I will give an introduction to the notions of K-R towers and dimension groups, and then discuss some recent results about their applications to the study of spectra and topological factoring of dynamical systems on Cantor sets.
Please note that this seminar will be on Tuesday at 11:00, rather than at our standard Complex Systems seminar time.
Speaker: Arne Traulsen, Director of the Department for Theoretical Biology, Max Planck Institute for Evolutionary Biology https://www.evolbio.mpg.de/person/12087/16397
Zoom link: https://qmul-ac-uk.zoom.us/j/81250086190
Abstract:
Cells need to organize their internal structure in different compartments to function properly. Recent years have seen the discovery of many biomolecular condensates that form compartments that do not require a membrane to separate them from the cytoplasm. Their formation and dynamics can be modelled by the physics of phase separation. Single-molecule experiments allow us to follow the motion of individual molecules in these condensates and across their phase boundaries. I will discuss the stochastic trajectories of single molecules in a phase-separated liquid, showing how the physics of phase coexistence affects the statistics of molecular trajectories. Starting from these results, I will investigate the thermodynamics of these individual trajectories, discuss how they can reveal the non-equilibrium nature of condensates and present how they can be used to infer key phase separation parameters. I will close by considering chemically active condensates.
A famous result of Kesten from 1959 relates symmetric random walks on countable groups to amenability. Precisely, provided the support of the walk generates the group, the probability of return to the identity in 2n steps decays exponentially fast if and only if the group is not amenable. This led to many analogous “amenability dichotomies”, for example for the spectrum of the Laplacian of manifolds and critical exponents of discrete groups of isometries. I will discuss some of these topics and present a version of the dichotomy for non-symmetric walks. Time permitting, I will also discuss a new ratio limit theorem for amenable groups. This is joint work with Rhiannon Dougall.
Our understanding of aposematism (the conspicuous signalling of a defence for the deterrence of predators) has advanced notably since its first observation in the late nineteenth century. This work extends the scope of a well-established game-theoretical model of this process, both from the analytical standpoint (by considering regimes of varying background mortality and colony size) and from the practical standpoint (by assessing its efficacy and limitations in predicting the evolution of prey traits in finite simulated populations). In this talk I will first discuss the relationship between evolutionarily stable levels of defence and signal strength under various regimes of background mortality and colony size. Second, I will compare these predictions with simulations of finite prey populations subject to random local mutation. Absolute resident fitness, mutant fitness and stochasticity all feature in the evolution of prey traits, and their importance in populations of finite size is assessed.
Here is a link to the publication on which the talk is based:
https://www.sciencedirect.com/science/article/pii/S0040580923000114
Phase separating active systems display surprising phenomenology that is absent in passive fluid-fluid or liquid-vapor phase separation: activity can cause the Ostwald process to go into reverse, or capillary waves to become unstable. When this happens, active systems self-organize in novel types of phase separated morphologies which are either impossible in passive systems or require fine-tuning to be obtained. The universal properties of these phase separated states, such as the critical exponents associated to the roughening of the interface, also differ from those of passive fluids. I will discuss how such findings can be rationalized via field theoretical analysis, particle-based modeling, and their experimental relevance.
Under the evolution of a chaotic system, distributions which are sufficiently regular in a certain sense often converge rapidly to the system’s physical measure, a property which closely relates to the statistical behaviour of the system. In this talk we consider the behaviour of other, less regular measures, in particular slices of these physical measures along suitably generic submanifolds. We give evidence that such conditional measures also converge exponentially back to the full SRB measures, even though they lack the regularity usually considered necessary for this: for example, they may be Cantor measures. Using Fourier dimension results, we will prove that so-called conditional mixing holds in a class of generalised baker’s maps, and we will give rigorous numerical evidence in its favour for some non-Markovian piecewise hyperbolic maps. We will discuss some applications: conditional mixing naturally encodes the idea of long-term forecasting of systems using perfect partial observations, and appears key to a rigorous understanding of the emergence of linear response in high-dimensional systems.
The ability of honeybee swarms to collectively choose the best option among a set of alternatives is remarkable. In particular, previous studies of the nest-site selection behaviour of honeybees have described the mechanisms that can be employed to adaptively switch between deliberate and greedy choices, the latter being taken when the value of the available alternatives is comparable. In this talk, I will review evidence about self-organised mechanisms for collective choices, highlighting emergent properties like adherence to psychophysical laws. I will then introduce a design methodology for decentralised multi-agent systems that guarantees the attainment of desired macroscopic properties. In particular, I will present a design pattern for collective decision making that provides formal guidelines for the microscopic implementation of collective decisions to quantitatively match the macroscopic predictions. Additionally, I will provide examples of the design methodology through several case studies with multi-agent systems and robot swarms.
Sampling polymer melts remains a paradigmatically hard problem in computational physics, despite the various ingenious Monte Carlo and Molecular Dynamics strategies that have been developed so far. As a matter of fact, achieving efficient and unbiased sampling of densely packed polymers is hard even in the minimalistic case of crossable polymers on a lattice. Here we tackle the problem from a novel perspective, namely by using a quadratic unconstrained binary optimization (QUBO) model, which is amenable to being implemented on quantum machines. The QUBO model naturally lends itself to imposing various physical constraints that would otherwise be difficult to handle with conventional MC and MD schemes, such as fixing the packing density (lattice filling fraction), contact energy, and bending energy of the system. This facile handling of multiple physical constraints enables the study of properties not addressed before, as we demonstrate by computing the overall entanglement properties of self-assembling rings. Porting the model to D-Wave quantum annealers can speed up the QUBO-based sampling by orders of magnitude compared to classical simulated annealing.
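To fix ideas, here is a generic QUBO sketch (illustrative only, not the speakers' polymer model): the energy is a quadratic form in binary variables, a target filling fraction is enforced by a quadratic penalty, and sampling is done with plain simulated annealing.

```python
# Generic QUBO: minimise x^T Q x over x in {0,1}^n, with a quadratic
# penalty enforcing a target number of occupied sites, sampled via
# simulated annealing with single bit flips.
import numpy as np

rng = np.random.default_rng(7)
n, target, lam = 24, 8, 4.0
Q = rng.standard_normal((n, n)) * 0.1
Q = (Q + Q.T) / 2                              # symmetric couplings

def energy(x):
    return x @ Q @ x + lam * (x.sum() - target) ** 2

x = rng.integers(0, 2, n)
for beta in np.linspace(0.1, 5.0, 20_000):     # slowly increasing beta
    i = rng.integers(n)
    y = x.copy()
    y[i] ^= 1                                  # flip one bit
    if rng.random() < np.exp(-beta * (energy(y) - energy(x))):
        x = y
print(x, x.sum(), energy(x))
```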
I will discuss a minimal urn model that has been shown to reproduce statistical behaviour shared by systems featuring innovation. I will then discuss its connection with seminal methods for nonparametric Bayesian inference. Finally, I will show that the urn generative-model perspective allows us to formulate simple yet powerful inference methods. I will present an implementation for the authorship attribution task.
The standard formulation of thermodynamics relies on additivity. However, there are numerous examples of macroscopic systems that are non-additive, due to long-range interactions among the constituents. These systems can attain equilibrium states under completely open conditions, for which energy, volume and number of particles simultaneously fluctuate. The unconstrained ensemble is the statistical ensemble describing completely open systems; temperature, pressure and chemical potential constitute a set of independent control parameters, thus violating the Gibbs-Duhem relation. We illustrate the properties of this ensemble by considering a modified version of the Thirring model with both attractive and repulsive long-range interactions, and we compare the results with those in other ensembles. This work contributes to the understanding of long-range interacting systems exchanging heat, work and matter with the environment.
Primality and factorisation play a key role in number theory, being key aspects of the study of the multiplicative structure of the integers and motivating similar constructions in more general number systems. It is thus not surprising that sets of algebraic numbers defined by imposing restrictions on their factorisation structure have deep and interesting properties. We discuss (mostly) two-dimensional shift spaces constructed using number-theoretically defined sets as a basis, and connect the shift space with the local structure and behaviour of the generating set. Classic examples such as the shift of visible lattice points and the one-dimensional squarefree shift have natural generalisations in the family of k-free shift spaces, the latter being deeply intertwined with ideas from algebraic number theory; these also serve as first examples of the more general class of multidimensional B-free shift spaces. These shift spaces exhibit interesting (and, from certain perspectives, unusual) properties, including the combination of high complexity (positive entropy) with symmetry rigidity (the automorphism group, or centralizer, is “essentially” trivial, containing only shift maps). Our discussion will focus on a geometrical interpretation of the notion of “symmetry” in these systems, for which the most appropriate tool is the extended symmetry group (or normalizer), a natural generalisation of the automorphism group which exhibits a wide variety of interesting and non-trivial behaviours in this context, in contrast to the standard automorphism group. This is joint work with Michael Baake, Christian Huck, Mariusz Lemanczyk and Andreas Nickel.
This talk is cancelled
The stability behaviour of a linear dynamical system is easily understood: each trajectory can either converge exponentially to the origin, escape exponentially to infinity, escape to infinity at a precise polynomial rate, or remain indefinitely within a bounded region not containing the origin. No other options are possible. In particular, even if the initial state of the system is not known then a "worst case" bound for the stability behaviour can still be given in terms of the preceding four options. In this talk I will discuss worst-case stability bounds for switched linear dynamical systems. In this context, at each time step the dynamical system is evolved by applying a matrix which is chosen arbitrarily from some prescribed finite set of matrices. I will describe a proof that there exist switched linear systems for which the worst-case stability behaviour is unbounded but is not asymptotically polynomial. The methods used in this talk will mainly be elementary, but I will discuss connections with ergodic theory towards the end.
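To make the quantity concrete, the following brute-force sketch (an illustration of mine, not a method from the talk) computes the worst-case trajectory growth of a small switched system by maximising the norm over all switching sequences of a given length; note that each matrix alone gives only polynomial growth, while mixed products grow exponentially.

```python
# Worst-case growth of a switched linear system: max over all length-n
# switching sequences of the spectral norm of the matrix product.
import numpy as np
from itertools import product
from functools import reduce

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])   # a shear
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])   # its transpose
mats = [A0, A1]

def worst_case_growth(n):
    """Max spectral norm over all products of n matrices from the set."""
    return max(
        np.linalg.norm(reduce(lambda X, Y: Y @ X, (mats[i] for i in seq)), 2)
        for seq in product(range(len(mats)), repeat=n)
    )

for n in range(1, 13):                     # 2^n products: short horizons only
    print(n, worst_case_growth(n))
```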
Abstract: We study a general model of recursive trees as models for complex networks, where nodes are equipped with random weights, arrive one at a time, and connect to an existing node with probability proportional to a general function of its degree and weight. We prove a general formula for the degree distribution in this model, and show that, under certain circumstances, a 'condensation' effect can occur, which depends intimately on the initial attractiveness of a node and its reinforcement from having more neighbours. We also study the limiting infinite tree associated with this model, and show that, under a certain 'explosive' regime, the limiting tree has only a single node of infinite degree, or a single infinite path. We provide explicit criteria to determine which occurs.
Abstract: Substitution tilings arise from graph iterated function systems. Adding a contraction constant, the attractor recovers the prototiles. On the other hand, without the contraction one obtains an infinite tiling. In this talk I'll introduce substitution tilings and an associated semigroup defined by Kellendonk. I'll show that this semigroup defines a self-similar action on a topological Markov shift that's conjugate to the punctured tiling space. The limit space of the self-similar action turns out to be the Anderson-Putnam complex of the substitution tiling and the inverse limit recovers the translational hull. This was joint work with Jamie Walton.
In this talk, I will attempt to cover two topics related to my research on dynamics on networks. First, I will present a trilogy of papers [1-3] leveraging diffusion for learning specific aspects of graph structures with a multiscale flavor. Diffusion on graphs is similar to continuous diffusion on compact spaces, where boundary effects can be detected. These can be used to improve node classification results [1], design multiscale centrality measures [2] or notions of dimension [3]. In the second part, I will present [4], where we generalised frustration to the recently introduced Kuramoto model on simplicial complexes. By coupling the dynamics between Hodge subspaces, it produces a dynamical system with rich and unexpected behaviors.
[1] Peach, R. L., Arnaudon, A., & Barahona, M. (2020). Semi-supervised classification on graphs using explicit diffusion dynamics. Foundations of Data Science, 2(1), 19.
[2] Arnaudon, A., Peach, R. L., & Barahona, M. (2020). Scale-dependent measure of network centrality from diffusion dynamics. Physical Review Research, 2(3), 033104.
[3] Peach, R. L., Arnaudon, A., & Barahona, M. (2022). Relative, local and global dimension in complex networks. Nature Communications, 13(1), 1-11.
[4] Arnaudon, A., Peach, R. L., Petri, G., & Expert, P. (2022). Connecting Hodge and Sakaguchi-Kuramoto through a mathematical framework for coupled oscillators on simplicial complexes. Communications Physics, 5(1), 211.
In joint work with Zemer Kosloff, we show that a totally dissipative system has all nonsingular systems as factors, but that this is no longer true when the factor maps are required to be finitary. In particular, if a nonsingular Bernoulli shift has a limiting marginal distribution p, then it cannot have, as a finitary factor, an independent and identically distributed (iid) system of entropy larger than H(p); on the other hand, we show that iid systems with entropy strictly lower than H(p) can be obtained as finitary factors of these Bernoulli shifts, extending Keane and Smorodinsky's finitary version of Sinai's factor theorem to the nonsingular setting.
Abstract: Given an action of a group by homeomorphisms on a compact metrisable space X, the enveloping semigroup of this action is its compactification in the semigroup of functions from X to X with respect to the topology of pointwise convergence. It was introduced by Robert Ellis in the 1960s. It has very interesting algebraic and topological properties which may serve to characterise the group action. One interesting topological property goes under the name of tameness (or its contrary, non-tameness). A group action is tame if its enveloping semigroup is the sequential compactification of the group action. Minimal tame group actions are almost determined by their spectrum. Non-tame group actions have the reputation of being difficult to manage but, in joint work with Reem Yassawi, we have recently been able to determine the enveloping semigroup of all Z-actions defined by bijective substitutions. We may therefore say that bijective substitutions define “easy” non-tame Z-actions. In this talk I propose simple algebraic concepts from semigroup theory which can be used to refine the notion of non-tameness and distinguish “easy” non-tameness from a more difficult kind.
I will review research leading to what is called stochastic climate dynamics, that is, the modelling of certain aspects of the climate of the earth by the theory of stochastic processes. Such developments were honoured by the Nobel Prize, awarded in particular to Klaus Hasselmann in 2021. Starting from ordinary Langevin dynamics, I will outline simple energy balance models, as well as basic and then generalised (fractional) stochastic climate models predicting the temperature of the earth. Towards the end I may discuss cross-links to my own work on fluctuation-dissipation relations and fluctuation relations that provide generalisations of the second law of thermodynamics to nonequilibrium processes.
I will explain the current research going on at our lab with model organisms like roundworms and ants, aimed at better understanding the link between microscopic and macroscopic patterns of space use and spread in the context of exploration or foraging. Movement behaviour is context- (information-) dependent, multidimensional (it can be measured in many different ways) and unfolds at a wide range of scales (permeating from microscopic to macroscopic scales). In ecology, we measure behaviour in the field, which limits our comprehension of the determinants and mechanistic links coupling observations at different scales. I firmly believe that using statistical physics tools and concepts to model accurate experimental data is a key route to advancing the field of behavioural ecology.
A novel method is presented for stochastic interpolation of a sparsely sampled time signal based on a superstatistical random process generated from a Gaussian scale mixture. In comparison to other stochastic interpolation methods such as kriging, this method possesses strong non-Gaussian properties and is thus applicable to a broad range of real-world time series. A precise sampling algorithm is provided in terms of a mixing procedure that consists of generating a field u(x,t), where each component u_x(t) is synthesized with identical underlying noise but covariance C_x(t,s) parameterized by a log-normally distributed parameter x. Due to the Gaussianity of each component u_x(t), standard sampling algorithms and methods to constrain the process on the sparse measurement points can be exploited. The scale mixture u(t) is then obtained by assigning to each point in time t a value x(t), and therefore a specific value from u(x,t), where log x(t) is itself a realization of a Gaussian process with a correlation time large compared to the correlation time of u(x,t). Finally, a wavelet-based hierarchical representation of the interpolating paths is introduced, which is shown to provide an adequate method to locally interpolate large datasets.
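The mixing procedure can be caricatured in a few lines (a schematic of mine with arbitrary parameter choices, not the authors' algorithm): a family of Ornstein-Uhlenbeck components driven by one shared noise, subordinated by a slowly varying log-normal process.

```python
# Schematic Gaussian scale mixture: OU components u_x(t) sharing one noise,
# subordinated by a slowly varying log-normal process x(t).
import numpy as np

rng = np.random.default_rng(3)
T, dt = 2000, 0.01
noise = rng.standard_normal(T)            # one shared underlying noise
xs = np.exp(np.linspace(-1.0, 1.0, 9))    # grid of scale parameters x

def ou_path(x):
    """OU component u_x(t) with correlation time x, driven by `noise`."""
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = u[t - 1] - (dt / x) * u[t - 1] + np.sqrt(dt) * noise[t]
    return u

field = np.array([ou_path(x) for x in xs])            # the field u(x, t)

# log x(t): slowly varying Gaussian process (here: heavily smoothed noise).
logx = 10 * np.convolve(rng.standard_normal(T), np.ones(200) / 200, "same")
idx = np.clip(np.searchsorted(np.log(xs), logx), 0, len(xs) - 1)
u = field[idx, np.arange(T)]                          # the scale mixture u(t)
```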
Artificial intelligence (AI) based molecular data analysis has begun to gain momentum due to great advances in experimental data, computational power and learning models. However, a major issue that remains for all AI-based learning models is efficient molecular representation and featurization. Here we propose advanced mathematics-based molecular representations and featurization (or feature engineering). Molecular structures and their interactions are represented as various simplicial complexes (Rips complex, Neighborhood complex, Dowker complex, and Hom-complex), hypergraphs, and Tor-algebra-based models. Molecular descriptors are systematically generated from various persistent invariants, including persistent homology, persistent Ricci curvature, persistent spectral, and persistent Tor-algebra. These features are combined with machine learning and deep learning models, including random forest, CNN, RNN, Transformer, BERT, and others. They have demonstrated great advantages over traditional models in drug design and materials informatics.
We study a random dynamical system that, at each time step, samples between a contracting and an expanding map with a certain probability p. We review properties of the invariant measure and derive an explicit formula for the invariant density curve. Correlation functions are studied numerically, and we give an analytic approximation which works well in two extreme regimes. At the critical value of the parameter p, the system exhibits anomalous behaviour such as intermittency, weak ergodicity breaking and power-law decay of correlations.
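A minimal simulation sketch, assuming a concrete (purely illustrative) pair of maps on [0,1], looks as follows; the talk's specific maps and the treatment of the critical p may differ.

```python
# Orbit of the random system: expanding map with probability p, contraction
# otherwise; a histogram of the orbit approximates the invariant density.
import numpy as np

def orbit(p, n, x0=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        if rng.random() < p:
            x[t] = (3.0 * x[t - 1]) % 1.0   # expanding map on [0, 1]
        else:
            x[t] = 0.5 * x[t - 1]           # contraction towards 0
    return x

xs = orbit(p=0.6, n=100_000)
density, edges = np.histogram(xs, bins=50, range=(0, 1), density=True)
```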
The structure of complex networks can be characterized by counting and analyzing network motifs, which are small graph structures that occur repeatedly in a network, such as triangles or chains. Recent work has generalized motifs to temporal and dynamic network data. However, existing techniques do not generalize to sequential or trajectory data, which represent entities walking through the nodes of a network, such as passengers moving through transportation networks. The unit of observation in these data is fundamentally different, since we analyze observations of walks (e.g., a trip from airport A to airport C through airport B), rather than independent observations of edges or snapshots of graphs over time. In this talk, I will discuss our recent work defining sequential motifs in observed walk data, which are small, directed, sequence-ordered graphs corresponding to patterns in observed walks. We draw a connection between the counting and analysis of sequential motifs and Higher-Order Network (HON) models. We show that by mapping edges of a HON, specifically a kth-order De Bruijn graph, to sequential motifs, we can count and evaluate their importance in observed data, and we test our proposed methodology with two datasets: (1) passengers navigating an airport network and (2) people navigating the Wikipedia article network. We find that the most prevalent and important sequential motifs correspond to intuitive patterns of traversal in the real systems, and show empirically that the heterogeneity of edge weights in an observed higher-order De Bruijn graph has implications for the distributions of sequential motifs we expect to see across our null models.
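The underlying HON construction can be sketched compactly (my own illustration with toy walk data): each node of the kth-order De Bruijn graph is a length-k window of consecutively visited nodes, and edge weights count observed transitions between windows.

```python
# Build the weighted edges of a kth-order De Bruijn-style graph from walks.
from collections import Counter

def debruijn_edges(walks, k=2):
    """Count transitions between consecutive length-k windows of each walk."""
    edges = Counter()
    for walk in walks:
        windows = [tuple(walk[i:i + k]) for i in range(len(walk) - k + 1)]
        for a, b in zip(windows, windows[1:]):
            edges[(a, b)] += 1
    return edges

walks = [["A", "B", "C"], ["A", "B", "D"], ["E", "B", "C"]]
for (a, b), w in debruijn_edges(walks, k=2).items():
    print(a, "->", b, w)
# Sequential motifs are then small subgraphs of this weighted graph.
```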
Data amount, variety, and heterogeneity have been increasing drastically for several years, offering a unique opportunity to better understand complex systems. Among the different modes of data representation, networks appear particularly successful, as a wide and powerful range of tools from graph theory is available for their exploration. However, the integrated exploration of large multidimensional datasets remains a major challenge in many scientific fields. In bioinformatics, for instance, understanding biological systems would require the integrated analysis of dozens of different datasets. In this context, multilayer networks have emerged as key players in the analysis of such complex data. Recent years have witnessed the extension of network exploration approaches to capitalize on these more complex and richer network frameworks. Random walks, for instance, have been extended to explore multilayer networks, and are currently used for exploring the whole topology of large-scale networks. Random walk with restart, a special case of random walk, allows one to measure the similarity between a given node and all the other nodes of a network. This strategy is known to outperform methods based on local distance measures for the prioritization of gene-disease associations. However, current random walk approaches are limited in the combination and heterogeneity of networks they can handle. New analytical and numerical random walk methods are needed to cope with the increasing diversity and complexity of multilayer networks.
In the context of my thesis, I developed a new mathematical framework and its associated Python package, named MultiXrank, that allow the integration and exploration of any combination of networks. The proposed formalism and algorithm are general and can handle heterogeneous and multiplex networks, both directed and weighted. I also applied this new method to several biological questions, such as the prioritization of gene and drug candidates for involvement in different disorders, gene-disease association predictions, and the integration of 3D DNA conformation information with gene and disease networks. This last application offers new avenues to unveil disease comorbidity relationships.
During my Ph.D., I was also interested in the extension of several other methods to multilayer networks. In particular, I generalized the Katz similarity measure to multilayer networks, and I developed a new community detection method, based on random walks with restart, that allows the identification of clusters of multilayer network nodes. Finally, I studied network embedding, especially shallow embedding methods; in this context, I carried out a review of this quickly evolving literature and developed a network embedding method based on MultiXrank that opens embedding to more complex multilayer networks.
The percolation properties of the geometrical clusters for the ferromagnetic multi-replica Ising model in two dimensions will be discussed. The system can be considered as a collection of non-interacting copies (replicas) at the same temperature. By means of Monte Carlo simulations and a finite-size scaling analysis, we estimate the critical temperature and the critical exponents characterizing the transition. Specifically, for the one-replica case (corresponding to the standard Ising problem) the critical exponents concerning the percolation strength and average cluster size are determined by considering the influence on the estimates of the exponents when particular cluster sets are included or excluded in the definition of the observables. For two replicas a percolation transition occurs at the same temperature as for one replica, but with different exponents for the percolation strength and the average cluster size. With increasing number of replicas, stronger and stronger deviations are observed.
After two years of the Covid-19 pandemic, there is no need to emphasize the importance of the study of epidemic spreading. In the past year, there have been many studies trying to answer important questions, concerning both the mechanisms of how epidemics spread and the policies to mitigate the pandemic.
In this talk we will discuss a variety of mathematical models that provide a theoretical understanding of some major scientific questions posed by the current pandemic.
Abstract: The complex world surrounding us, including all living matter and various artificial complex systems, mostly operates far from thermal equilibrium. A major goal of current statistical physics and thermodynamics is to unravel the fundamental principles that govern the individual dynamics and collective behavior of such nonequilibrium systems, like the swarming of fish or flocking of birds. A novel key concept to describe and classify nonequilibrium systems is the stochastic entropy production, which explicitly quantifies the breaking of time-reversal symmetry. However, so far, little attention has been paid to the implications of non-conservative interactions, such as time-delayed (i.e., retarded) or non-reciprocal interactions, which cannot be represented by Hamiltonians, in contrast to all interactions traditionally considered in statistical physics. Non-conservative interactions indeed emerge commonly in biological, chemical and feedback systems, and are widespread in engineering and machine learning. In this talk, I will use simple time- and space-continuous models to discuss technical challenges and unexpected physical phenomena induced by non-reciprocity [1,2] and time delay [3,4].
[1] Loos and Klapp, NJP 22, 123051 (2020)
[2] Loos, Hermann, and Klapp, Entropy 23, 696 (2021)
[3] Loos and Klapp, Sci. Rep. 9, 2491 (2019)
[4] Holubec, Geiss, Loos, Kroy, and Cichos, PRL 127, 258001 (2021)
Polymerase II (PolII) is an enzyme that helps synthesize messenger RNA (mRNA) strands complementary to segments of DNA (the genes) in a process called transcription. From the perspective of non-equilibrium statistical mechanics, PolII is a molecular motor that walks along a one-dimensional lattice formed by DNA. Its dynamics are subject to congestion, pausing, and feedback loops.
We perform Bayesian inference over mechanistic models of transport, such as the totally asymmetric simple exclusion process (TASEP) with smoothly varying hopping rate, and simpler phenomenological models.
This allows us to quantify key aspects of transcription from high-throughput biological data.
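A minimal sketch of such a mechanistic transport model is given below (an illustration of mine: the open-boundary rates, the smooth rate profile and the random-sequential update rule are assumptions, not the authors' setup).

```python
# TASEP with smoothly varying hopping probabilities, random-sequential
# updates, entry rate alpha at the left boundary and exit rate beta at the
# right boundary. Returns the time-averaged density profile.
import numpy as np

def tasep(L=200, alpha=0.3, beta=0.5, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    rate = 0.75 + 0.25 * np.sin(2 * np.pi * np.arange(L) / L)  # in [0.5, 1]
    lattice = np.zeros(L, dtype=int)
    occupancy = np.zeros(L)
    for _ in range(steps):
        i = rng.integers(-1, L)                  # -1 encodes an entry attempt
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1                   # particle enters at site 0
        elif i == L - 1:
            if lattice[i] == 1 and rng.random() < beta:
                lattice[i] = 0                   # particle exits at site L-1
        elif lattice[i] == 1 and lattice[i + 1] == 0 and rng.random() < rate[i]:
            lattice[i], lattice[i + 1] = 0, 1    # hop one site to the right
        occupancy += lattice
    return occupancy / steps

profile = tasep()                                # mean density at each site
```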
Abstract: Lévy flights are continuous-time stochastic processes with stationary independent increments that admit large power-law distributed jump increments. As a mathematical model they are widely used to describe non-Brownian diffusion in complex systems as varied as financial markets, foraging animals, and earthquake tremors. In this talk I discuss two topics: (1) the derivation of Lévy flights as the effective coarse-grained dynamics of a tracer particle interacting with active swimmers in suspension, which provides the first validation of this model from a physical microscopic dynamics [1]; (2) the calculation of escape rates from metastable states in Lévy-noise-driven systems. Using a path-integral framework, optimal escape paths are identified as minima of a stochastic action, which induce a more efficient escape by reducing the effective potential barrier compared to the Gaussian noise case [2].
[1] K. Kanazawa, T. Sano, A. Cairoli, and A. Baule, Nature 579, 364 (2020) [2] A. Baule and P. Sollich, arXiv:1501.00374
Abstract: We consider a typical class of systems with delayed nonlinearity, which we show to exhibit chaotic diffusion. It is demonstrated that a periodic modulation of the time lag can lead to an enhancement of the diffusion constant by several orders of magnitude. This effect is the largest if the circle map defined by the modulation shows mode locking and, more specifically, fulfils the conditions for laminar chaos. Thus, we establish for the first time a connection between Arnold tongue structures in parameter space and diffusive properties of a delay system. Counterintuitively, the enhancement of diffusion is accompanied by a strong reduction of the effective dimensionality of the system.
Our aim is to obtain precise information on the asymptotic behaviour of various dynamical systems through an improved understanding of the discrete spectrum of the associated transfer operators. I'll discuss the general principle that has come to light in recent years and which often allows us to obtain substantial spectral information. I'll then describe several settings where this approach applies, including affine expanding Markov maps, monotone maps, and hyperbolic diffeomorphisms. (Joint work with: Niloofar Kiamari & Carlangelo Liverani.)
Lattice models such as self-avoiding walk and polygon models have long proved useful for understanding the equilibrium and asymptotic properties of long polymers in solution. Interest in using lattice models to study knot and link statistics grew in the late 1980s when a lattice polygon model was used (by Sumners and Whittington and by Pippenger in 1988) to prove the 1960s Frisch and Wasserman and Delbruck (FWD) conjecture that long polymers should be knotted (self-entangled) with high probability. At the same time, since DNA entanglements were known to be obstructions for normal cellular processes, understanding the entanglement statistics of DNA drew the attention of polymer modellers. Despite much progress since then, many open questions remain for lattice polygon models regarding the details of the knot distribution and the typical "size" of the knotted or linked parts. After a general overview of these topics, I will discuss a recent breakthrough about the asymptotic scaling form for the number of n-edge embeddings of a link L in a simple cubic lattice tube with dimensions 2 x 1 x infinity. We prove using a combination of new knot theory results and new lattice polygon combinatorics that, as n goes to infinity, the ratio of the number of n-edge unknot polygons to the number of n-edge link-type L polygons goes to 0 like 1/n to a power, where the power is the number of prime link factors of L. This proves a 1990's conjectured scaling form that is expected to hold for any tube size and in the limit of infinite tube dimensions. The proof also allows us to establish results about the average size of the knotted and linked parts. Monte Carlo results indicate that the same scaling form holds for larger tube sizes and we connect our results to DNA in nanochannel/nanopore experiments. This is joint work with M Atapour, N Beaton, J Eng, K Ishihara, K Shimokawa and M Vazquez.
Time permitting, I will also discuss recent results and open questions for the special case of two component links in which both components span a lattice tube (2SAPs). The latter is joint work with J Eng, P Puttipong, and R Scharein.
The intermediate dimensions are a (recently introduced) continuum of dimensions which in some sense interpolate between the well-known Hausdorff and box dimensions. The Hausdorff and box dimensions of a (finitely generated) self-conformal set necessarily coincide, rendering the intermediate dimensions constant, but may differ for infinitely generated self-conformal sets (that is, when the defining IFS has an infinite number of maps). I will review intermediate dimensions and infinitely generated self-conformal sets and go on to discuss recent joint work with Amlan Banaji which brings the two notions together.
Single-file transport, where particles diffuse in narrow channels while not overtaking each other, is a fundamental model for the tracer subdiffusion observed in confined systems, such as zeolites or carbon nanotubes. This anomalous behavior originates from strong bath-tracer correlations in 1D, which have however remained elusive, because they involve an infinite hierarchy of equations. For the Symmetric Exclusion Process, a paradigmatic model of single-file diffusion, this hierarchy of equations can in fact be broken, and the bath-tracer correlations satisfy a closed equation, which can be solved. I will suggest that this equation appears as a novel tool for interacting particle systems, since it also applies to out-of-equilibrium situations, other observables and other representative single-file systems.
I will present recent results on the statistical behaviour of a large number of weakly interacting diffusion processes evolving under the influence of a periodic interaction potential. We study the combined mean field and diffusive (homogenisation) limits. In particular, we show that these two limits do not commute if the mean field system constrained on the torus undergoes a phase transition, i.e., if it admits more than one steady state. A typical example of such a system on the torus is given by mean field plane rotator (XY, Heisenberg, O(2)) model. As a by-product of our main results, we also analyse the energetic consequences of the central limit theorem for fluctuations around the mean field limit and derive optimal rates of convergence in relative entropy of the Gibbs measure to the (unique) limit of the mean field energy below the critical temperature.
The aim of the following work is to model the maintenance of ecological networks in forest environments, built from bioreserves, patches and corridors, when these grids are subject to random processes such as extreme natural events. The management plan consists of providing both temporary and sustainable habitats to migratory species. It also aims at ensuring connectivity between the natural areas without interruption. After presenting the random graph-theoretic framework, we apply stochastic optimal control to the graph dynamics. Our results show that the preservation of the network architecture cannot be achieved, under stochastic control, over the entire duration. It can only be accomplished, at the cost of sacrificing the links between the patches, by increasing usage of the control devices. This would have a negative effect on the species migration by causing congestion among the channels left at their disposal. The optimal scenario, in which the shadow price is at its lowest and all connections are well-preserved, occurs halfway through the planning horizon, this being the only optimal stopping moment found on the stochastic optimal trajectories. The optimal forestry policy thus has to cut the duration of the practices devoted to biodiversity protection by half.
Given a dynamical system f: I -> I we study the asymptotic expected behaviour of the cover time: the rate at which orbits become dense in the state space I. We will see how this can be studied through the lens of dynamical systems with holes and the spectral theory of the transfer operators associated to these systems.
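To make the quantity concrete, here is a minimal numerical sketch (toy example mine, not from the talk): iterate a chaotic interval map and record the first time the orbit has visited every bin of width eps.

```python
# Toy illustration: empirical cover time of the logistic map T(x) = 4x(1-x),
# i.e. the first iterate at which the orbit is eps-dense in [0,1].
import numpy as np

def cover_time(x0, eps, max_iter=10**6):
    n_bins = int(np.ceil(1.0 / eps))
    visited = np.zeros(n_bins, dtype=bool)
    n_seen, x = 0, x0
    for t in range(1, max_iter + 1):
        b = min(int(x / eps), n_bins - 1)
        if not visited[b]:
            visited[b] = True
            n_seen += 1
            if n_seen == n_bins:
                return t
        x = 4.0 * x * (1.0 - x)           # logistic map step
    return np.nan                          # not covered within max_iter

rng = np.random.default_rng(0)
times = [cover_time(rng.random(), eps=0.01) for _ in range(100)]
print("mean cover time at eps = 0.01:", np.nanmean(times))
```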
The classical shrinking target problem concerns the following set-up: Given a dynamical system (T, X) and a sequence of targets (B_n) in X, we investigate the size of the set of points x of X for which T^n(x) hits the target B_n for infinitely many n. In this talk I will study shrinking target problems in the context of fractal geometry. I will first recall the symbolic and geometric dynamical systems associated with iterated function systems, fundamental constructions from fractal geometry. I will then briefly cover the Hausdorff dimension theory of generic self-affine sets; that is, sets invariant under affine iterated function systems with generic translations. Finally, I will show how to calculate the Hausdorff dimension of shrinking-target-type sets on generic self-affine sets. The target sets that I will consider shrink at a speed that depends on the path of x. Time permitting, further problems of similar flavour and refinements of the dimension result might also be explored.
Resonances of Riemannian manifolds are often studied with tools of microlocal analysis. I will discuss some recent results on upper fractal Weyl bounds for certain hyperbolic surfaces of infinite area, obtained with transfer operator techniques, which are tools complementary to microlocal analysis.
First, we study dynamically crowded solutions of stiff fibers deep in the semidilute regime, where the motion of a single constituent becomes increasingly confined to a narrow tube. We demonstrate that in such crowded environments the intermediate scattering function, characterizing the motion in space and time, can be predicted quantitatively by simulating a single freely diffusing phantom needle only, yet with very unusual diffusion coefficients. Second, we also solve for the propagator of a single anisotropic active particle and compare to differential dynamic microscopy (DDM) as well as single-particle tracking. Employing this solution we extend the tube concept to a suspension of active needles.
We study the non-equilibrium dynamics of a Brownian particle ("tracer") when first released into a fluctuating field, modelling general diffusion in a complex liquid. This microrheological model can be applied to study the rich phenomenology of transport in disordered media, as it arises in cells and tissues, but also in spin glasses. In our case, we are, however, particularly interested in how the dynamical behaviour of the tracer particle can be used to infer (critical) properties of the surrounding field. Understanding how a tracer particle can be employed in order to extract, e.g., critical exponents of the field near its critical point, is relevant to numerous experimental situations where the liquid/field cannot be observed directly.
We approach this problem by constructing a non-equilibrium field theory which perturbatively describes the joint stochastic dynamics of the colloid and the field. This allows us not only to reproduce previously found results for the long time limit, but also to understand the dynamical non-equilibrium response to the sudden, quench-like, release of the tracer into the field.
In many natural phenomena, deviations from Brownian diffusion, known as anomalous diffusion, can often be observed. Examples of these deviations can be found in cellular signalling, in animal foraging, in the spread of diseases, and even in trends in financial markets and climate records. The characterisation of anomalous diffusion remains challenging to date. In this talk, I will discuss the results of the Anomalous Diffusion (AnDi) Challenge, which was launched in 2020 to evaluate and compare new and existing methods for the characterisation of anomalous diffusion. Within the context of the AnDi Challenge, I will also discuss a new method that we introduced based on combining classical statistics and deep learning to characterise anomalous diffusion.
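As a baseline against which such methods are compared (example and parameters mine, not the challenge code): the anomalous exponent can be read off a log-log fit of the ensemble-averaged mean-squared displacement.

```python
# Simulate scaled Brownian motion with Var[x(t)] = t^alpha and recover
# alpha from the ensemble-averaged MSD; a classical-statistics baseline.
import numpy as np

rng = np.random.default_rng(1)
alpha_true, n_steps, n_traj = 0.6, 2000, 500
t = np.arange(1, n_steps + 1, dtype=float)
incr_var = np.diff(np.concatenate(([0.0], t**alpha_true)))  # per-step variances
x = np.cumsum(rng.normal(0.0, np.sqrt(incr_var), size=(n_traj, n_steps)), axis=1)

emsd = np.mean(x**2, axis=0)                 # ensemble MSD ~ t^alpha
alpha_fit = np.polyfit(np.log(t), np.log(emsd), 1)[0]
print(f"true alpha = {alpha_true}, fitted alpha = {alpha_fit:.2f}")
```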
We describe a general approach to the theory of self consistent transfer operators. These operators have been introduced as tools for the study of the statistical properties of a large number of all-to-all interacting dynamical systems subjected to a mean field coupling. We consider a large class of self consistent transfer operators and prove general statements about existence and uniqueness of invariant measures, speed of convergence to equilibrium, statistical stability and linear response, mostly in a "weak coupling" or weak nonlinearity regime. We apply the general statements to examples of different nature: coupled expanding maps, coupled systems with additive noise, systems made of different maps coupled by a mean field interaction and other examples of self consistent transfer operators not coming from coupled maps.
When trying to understand the dynamics of complex chaotic systems, a common assumption is the chaotic hypothesis of Gallavotti and Cohen, which states that the large-scale dynamics of high-dimensional systems are effectively hyperbolic, and thus have many felicitous statistical properties. We demonstrate, contrary to the chaotic hypothesis, the existence of non-hyperbolic large-scale dynamics in the thermodynamic limit of a mean-field coupled system. This thermodynamic limit has dynamics described by self-consistent transfer operators, which we approximate numerically with a Chebyshev discretisation. This enables us to obtain a high precision estimate of a homoclinic tangency, implying a failure of hyperbolicity. Robust non-hyperbolic behaviour is expected under perturbation, giving a class of systems for which the chaotic hypothesis does not hold. On the other hand, at finite ensemble size we show that the coupled system has an emergent stochastic behaviour at large scale, inducing the nice statistical properties hoped for by the Gallavotti-Cohen hypothesis.
In theoretical physics, the behaviour of a strongly disordered system cannot be inferred from its clean, homogeneous counterpart. In fact, disordered systems are prototypical examples of complex entities in many aspects, most notably in their rough free-energy landscape. In the current talk, I will present new results that settle some of the most ambiguous yet fundamental questions in the theory of critical phenomena of disordered systems. The platform will be the random-field Ising model, which is unique among other models due to the existence of very fast algorithms that make the study of these questions numerically feasible and whose applications in hard and soft condensed matter physics are numerous. A small part of the talk will be devoted to the ideas stemming from the pools of theoretical computer science and the phenomenological renormalisation group that led to the development of novel computational and finite-size scaling schemes, allowing us to account for and finally tame the notoriously difficult role of scaling corrections.
A few decades ago Baxter conjectured that the “standard” q-state (color) Potts model, where a ferromagnetic interaction takes place between nearest neighboring spins on the square lattice, undergoes a second order transition for q ≤ qc and a first order transition for q > qc with qc = 4 being the changeover integer. Renormalization group arguments suggest that Baxter’s conjecture should hold for other lattices or interaction content, provided that the interaction is local.
There are, however, counterexamples. An interesting one is the so-called Potts model with “invisible” colors (PMIC), where the standard model is equipped with additional r “invisible” colors that control the entropy of the system but do not affect the energy. It has been shown that for r sufficiently large, the PMIC undergoes a first order transition. Thus, it may occur that the changeover integer is smaller than four or even does not exist.
We introduce a hybrid Potts model (HPM) where qc can be manipulated in a different way. Consider a system where a random concentration p of the spins assume q0 colors and a random concentration 1 − p of the spins assume q > q0 colors. It is known that when the system is homogeneous, with an integer spin number q0 or q, it undergoes a second or a first order transition, respectively. It is argued that there is a concentration p* such that the transition behavior changes at p*. This idea is demonstrated analytically and by simulations on the standard model.
Independently, a mean field type all-to-all interaction HPM is studied. It is shown analytically that p* exists for this model. Exact expressions for the second order critical line in concentration-temperature parameter space, together with some other related critical properties, are derived.
We estimate the entropy of self-avoiding trails on the square lattice in the dense limit, where a single trail passes through all edges of the lattice, as a function of the density of crossings. For this, the largest eigenvalues of transfer matrices of size up to 6.547×10^8 were obtained, utilising 76 GB of memory.
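The computational core of such a calculation is a power iteration for the leading transfer-matrix eigenvalue; the following is a hedged sketch with a random sparse stand-in matrix (the actual matrices are model-specific, and the entropy normalisation depends on the lattice geometry).

```python
# Power iteration for the leading eigenvalue of a large sparse
# non-negative matrix; for a transfer matrix, the entropy per site is
# (schematically) log(lambda_max) per column of the strip.
import numpy as np
import scipy.sparse as sp

d = 100_000                                    # stand-in dimension
T = sp.random(d, d, density=5.0 / d, format="csr", random_state=42)

v = np.ones(d) / np.sqrt(d)
lam = 0.0
for _ in range(1000):
    w = T @ v                                  # sparse matrix-vector product
    lam_new = np.linalg.norm(w)
    v = w / lam_new
    if abs(lam_new - lam) < 1e-13 * lam_new:   # converged
        break
    lam = lam_new
print("leading eigenvalue:", lam)
```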
Deep neural networks are usually trained in the space of the nodes, by adjusting the weights of existing links via suitable optimization protocols. We will see a radically new approach which anchors the learning process to reciprocal space. Specifically, the training acts on the spectral domain and seeks to modify the eigenvalues and eigenvectors of transfer operators in direct space. We will also discuss some applications to existing problems in machine learning and possible new directions.
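A minimal caricature of such spectral learning (construction mine, not the authors' scheme): keep a random orthonormal eigenbasis fixed and run gradient descent on the eigenvalues alone, here for a linear least-squares task whose target shares that eigenbasis.

```python
# Train only the spectrum: W = U diag(lam) U^T with U fixed, lam learned.
import numpy as np

rng = np.random.default_rng(3)
d, n = 20, 200
U, _ = np.linalg.qr(rng.normal(size=(d, d)))    # fixed random eigenvectors
lam = 0.1 * rng.normal(size=d)                  # trainable eigenvalues
mu = np.linspace(-1.0, 1.0, d)                  # target spectrum
X = rng.normal(size=(n, d))
Y = X @ (U @ np.diag(mu) @ U.T)                 # targets (symmetric true W)

eta = 0.05
for _ in range(2000):                           # gradient descent on lam only
    R = (X @ U) * lam @ U.T - Y                 # residuals of X U diag(lam) U^T
    grad = 2.0 * np.einsum("ij,ik,jk->k", R, X @ U, U) / n
    lam -= eta * grad
print("max eigenvalue error:", np.max(np.abs(lam - mu)))
```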
Transient spatiotemporal chaos is a generic pattern in extended non-equilibrium systems across disciplines. In the absence of external perturbations, the spatiotemporal complexity of a system changes spontaneously from chaotic to regular behavior. Transients may be long lived; their average lifetime increases exponentially with medium size. In the context of neurological disease, spontaneous transitions are associated with disruptions of neurological rhythms. The role of chaotic saddles for such transitions is not known. The talk will focus on spatiotemporal chaos and its collapse to either a rest state or a propagating pulse solution in a single layer of diffusively coupled, excitable Morris-Lecar neurons. Weak coupling of two such layers reveals switching of activity patterns within and between layers at irregular times. For example, a transition from irregular to regular neuron activity in one layer can initiate spatiotemporal chaos in another layer.
While studying how mtDNA mutations might spread along muscle fibres we discovered a curious effect: a species that is at a replicative disadvantage can nonetheless outcompete a faster replicating rival. We found that the effect requires the three conditions of stochasticity, spatial structure and one species having a higher carrying capacity. I'll discuss how this connects to existing data and resolves a decades-long debate in the mitochondrial literature. I'll then discuss therapeutic implications and connections to altruism. If I make good time, I will also discuss how, given time series data for individuals but an unknown model, a particular giant time-series feature library allows us to nonetheless identify relevant parameters accounting for inter-individual variation.
Given a compact convex set X of linear maps on R^d we consider the family of non-autonomous differential equations of the form v'(t)=A(t)v(t), where A is allowed to be any measurable function taking values in X. We construct examples of sets X where the fastest-growing trajectory of this form diverges at a rate which is slower than linear, but faster than any previously prescribed sub-linear function. The proof involves the discrepancy of rectangles with respect to linear flows on the 2-torus and an ergodic optimisation argument on the space of switching laws.
Many systems in nature can be modelled as coupled oscillators. Inspired by the classical equations of motion of the axion dark matter and Josephson junction arrays, we study complex dynamics of their interactions under variations of different parameters, through phase space trajectories and Poincaré sections. We also show analytic results in the limit of small oscillatory amplitudes, for both non-dissipative and dissipative cases. In addition, the system can be extended to a large number of oscillators with nearest-neighbour or mean-field coupling. I will illustrate rich dynamics with figures and animations. Everyone is welcome.
Virtually all the emergent properties of a complex system are rooted in the non-homogeneous nature of the behaviours of its elements and of the interactions among them. However, the fact that heterogeneity and correlations can appear simultaneously at local, mesoscopic, and global scales, is a concrete challenge for any systematic approach to quantify them in systems of different types. We develop here a scalable and non-parametric framework to characterise the presence of heterogeneity and correlations in a complex system, based on the statistics of random walks over the underlying network of interactions among its units. In particular, we focus on normalised mean first passage times between meaningful pre-assigned classes of nodes, and we showcase a variety of their potential applications. We found that the proposed framework is able to characterise polarisation in voting systems such as the roll-call votes in the US Congress. Moreover, the distributions of class mean first passage times can help identifying the key players responsible for the spread of a disease in a social system, and also allow us to introduce the concept of dynamic segregation, that is, the extent to which a given group of people, characterized by a given income or ethnicity, is internally clustered or exposed to other groups as a result of mobility. By analysing census and mobility data on more than 120 major US cities, we found that the dynamic segregation of African American communities is significantly associated with the weekly excess COVID-19 incidence and mortality in those communities.
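The basic ingredient can be sketched in a few lines (toy network mine): mean first passage times from every node to a pre-assigned class C solve the absorbing-chain linear system (I - Q)t = 1, where Q is the random-walk transition matrix restricted to nodes outside C; class-to-class quantities are then weighted averages of these.

```python
# MFPT of a random walk from each node to a target class C on a toy graph.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],        # adjacency of a small undirected graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix

C = [4]                                # target class (absorbing nodes)
rest = [i for i in range(len(A)) if i not in C]
Q = P[np.ix_(rest, rest)]             # walk restricted to nodes outside C
t = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
print(dict(zip(rest, np.round(t, 2))))  # MFPT from each node to class C
```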
Since its inception in the 19th century through the efforts of Poincaré and Lyapunov, the theory of dynamical systems addresses the qualitative behaviour of dynamical systems as understood from models. From this perspective, the modeling of dynamical processes in applications requires a detailed understanding of the processes to be analyzed. This deep understanding leads to a model, which is an approximation of the observed reality and is often expressed by a system of Ordinary/Partial, Underdetermined (Control), Deterministic/Stochastic differential or difference equations. While models are very precise for many processes, for some of the most challenging applications of dynamical systems (such as climate dynamics, brain dynamics, biological systems or the financial markets), the development of such models is notably difficult. On the other hand, the field of machine learning is concerned with algorithms designed to accomplish a certain task, whose performance improves with the input of more data. Applications for machine learning methods include computer vision, stock market analysis, speech recognition, recommender systems and sentiment analysis in social media. The machine learning approach is invaluable in settings where no explicit model is formulated, but measurement data is available. This is frequently the case in many systems of interest, and the development of data-driven technologies is becoming increasingly important in many applications.
The intersection of the fields of dynamical systems and machine learning is largely unexplored, and the objective of this talk is to show that working in reproducing kernel Hilbert spaces offers tools for a data-based theory of nonlinear dynamical systems. In this talk, we introduce a data-based approach to estimating key quantities which arise in the study of nonlinear autonomous, control and random dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems - with a reasonable expectation of success - once the nonlinear system has been mapped into a high or infinite dimensional Reproducing Kernel Hilbert Space. In particular, we develop computable, non-parametric estimators approximating controllability and observability energies for nonlinear systems. We apply this approach to the problem of model reduction of nonlinear control systems. It is also shown that the controllability energy estimator provides a key means for approximating the invariant measure of an ergodic, stochastically forced nonlinear system. We also show how kernel methods can be used to detect critical transitions for some multiscale dynamical systems. We also use the method of kernel flows to predict some chaotic dynamical systems. Finally, we show how kernel methods can be used to approximate center manifolds, propose a data-based version of the centre manifold theorem and construct Lyapunov functions for nonlinear ODEs. This is joint work with Jake Bouvrie (MIT, USA), Peter Giesl (University of Sussex, UK), Christian Kuehn (TUM, Munich/Germany), Romit Malik (ANNL), Sameh Mohamed (SUTD, Singapore), Houman Owhadi (Caltech), Martin Rasmussen (Imperial College London), Kevin Webster (Imperial College London), Bernard Hasasdonk, Gabriele Santin and Dominik Wittwar (University of Stuttgart).
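As a minimal illustration of the data-based RKHS viewpoint (toy system and parameters mine): learn the one-step map of a chaotic system by Gaussian-kernel ridge regression and iterate the learned map to predict.

```python
# Kernel ridge regression of the logistic map's one-step dynamics.
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(4)
X = rng.random(300)                     # observed states
Y = logistic(X)                         # observed next states

gamma, ridge = 50.0, 1e-8
K = np.exp(-gamma * (X[:, None] - X[None, :])**2)   # Gaussian Gram matrix
alpha = np.linalg.solve(K + ridge * np.eye(len(X)), Y)

def f_hat(x):                           # learned one-step map
    return np.exp(-gamma * (x - X)**2) @ alpha

x_pred = x_true = 0.3
for _ in range(5):                      # multi-step prediction by iteration
    x_pred, x_true = f_hat(x_pred), logistic(x_true)
print(f"5-step prediction {x_pred:.4f} vs truth {x_true:.4f}")
```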
The progress in dynamical systems theory has led to time series analysis methods that go well beyond linear assumptions. Today nonlinear time series analysis allows us to determine the characteristics and coupling relationships within dynamical systems. But our methods are designed for data that is regularly sampled in time, simply because the measurement devices in our labs have a fixed time resolution. Recently I have been working with several collaborators on different approaches to analyse irregularly sampled data sets. Such data is nowadays produced in all the incomplete records that business likes to call "big data", but also occurs as a consequence of the measurement process in paleo-climate records. The effectiveness of the methods is verified using experiments with the standard toy models (logistic map, Lorenz or Roessler flow) and as an application we focus on the monsoon dynamics during the Holocene around Australia and South-East Asia.
Over the last years, numerical methods for the analysis of large data sets have gained a lot of attention. Recently, different purely data-driven methods have been proposed which enable the user to extract relevant information about the global behavior of the underlying dynamical system, to identify low-order dynamics, and to compute finite-dimensional approximations of transfer operators associated with the system. However, due to the curse of dimensionality, analyzing high-dimensional systems is often infeasible using conventional methods since the amount of memory required to compute and store the results grows exponentially with the size of the system. We extend transfer operator theory to reproducing kernel Hilbert spaces and show that these operators are related to Hilbert space representations of conditional distributions, known as conditional mean embeddings in the machine learning community. One main benefit of the presented kernel-based approaches is that these methods can be applied to any domain where a similarity measure given by a kernel is available. We illustrate the results with the aid of guiding examples and highlight potential applications in molecular dynamics, fluid dynamics, and quantum mechanics.
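One common estimator of this type, kernel EDMD (conventions vary across papers; toy data mine), approximates the Koopman operator by the matrix (G + n·eps·I)^{-1} A built from two Gram matrices on snapshot pairs (x_i, y_i = x_{i+1}):

```python
# Kernel-based Koopman approximation on an AR(1) toy time series,
# whose true Koopman eigenvalues are 0.9^k.
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = np.zeros(n + 1)
for i in range(n):                       # x_{i+1} = 0.9 x_i + noise
    x[i + 1] = 0.9 * x[i] + 0.05 * rng.normal()
X, Y = x[:-1], x[1:]

gamma, eps = 30.0, 1e-6
G = np.exp(-gamma * (X[:, None] - X[None, :])**2)   # k(x_i, x_j)
A = np.exp(-gamma * (X[:, None] - Y[None, :])**2)   # k(x_i, y_j)
Kop = np.linalg.solve(G + n * eps * np.eye(n), A)   # regularised estimator
ev = np.sort(np.abs(np.linalg.eigvals(Kop)))[::-1]
print("leading eigenvalues:", np.round(ev[:4], 3))  # compare with 1, 0.9, 0.81, ...
```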
Social scientists and biologists have studied several kinds of participation games where each player choses whether or not to participate in a given activity and payoffs depend on the own decision and the number of players who participate. The symmetric mixed strategy equilibria of such games are given by equations involving expectations of functions of binomial variables that give rise to polynomials in Bernstein form. Such polynomials are endowed with interesting shape preserving properties, well known in the field of computer aided geometric design but often ignored in game theory. Here, I review previous work demonstrating how the use of these properties allows us to easily identify the number of symmetric mixed equilibria and to sign their group size effect for a fairly large class of participation games. I illustrate this framework with applications from the economic and political science literature. Our results, based on Bernstein polynomials, provide formal proofs for previously conjectured results in a straightforward way.
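A small worked example (the volunteer's dilemma, my choice of game): the indifference condition of a symmetric mixed equilibrium is E[g(K)] = 0 with K ~ Binomial(n-1, p), i.e. a polynomial in Bernstein form whose coefficients are the payoff differences g(0), ..., g(n-1).

```python
# Count and locate symmetric mixed equilibria from the Bernstein form.
import numpy as np
from math import comb

def bernstein_expectation(g, p):
    """E[g(K)] for K ~ Binomial(len(g)-1, p): a polynomial in Bernstein form."""
    n1 = len(g) - 1
    k = np.arange(n1 + 1)
    weights = np.array([comb(n1, i) for i in k]) * p**k * (1 - p)**(n1 - k)
    return float(g @ weights)

n, c = 5, 0.2
# Volunteer's dilemma: participating costs c, one volunteer suffices, so the
# payoff difference is g(k) = 1{k = 0} - c given k other volunteers.
g = np.array([1.0 - c] + [-c] * (n - 1))

ps = np.linspace(1e-6, 1 - 1e-6, 10_001)
vals = np.array([bernstein_expectation(g, p) for p in ps])
roots = ps[np.nonzero(np.diff(np.sign(vals)))[0]]
print("equilibria found near:", roots, "| closed form:", 1 - c**(1 / (n - 1)))
```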
We study Brownian motion in a confining potential under a constant-rate resetting to a preset position. The relaxation of this system to the steady-state exhibits a dynamic phase transition, and is achieved in a light cone region which grows linearly with time. When an absorbing boundary is introduced, effecting a symmetry breaking of the system, we find that resetting aids the barrier escape only when the particle starts on the same side as the barrier with respect to the origin. We find that the optimal resetting rate exhibits a continuous phase transition with critical exponent of unity. Exact expressions are derived for the mean escape time, the second moment, and the coefficient of variation.
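A Monte Carlo sketch of the setting (free-particle simplification with parameters mine; the talk also includes the confining potential): estimate the mean first-passage time to an absorbing boundary as a function of the resetting rate.

```python
# Brownian motion with constant-rate resetting to x0; mean first-passage
# time to an absorbing boundary at the origin, estimated by Monte Carlo.
import numpy as np

def mfpt(r, x0=1.0, D=0.5, dt=1e-3, n_traj=5000, seed=6):
    rng = np.random.default_rng(seed)
    x = np.full(n_traj, x0)
    t = np.zeros(n_traj)
    alive = np.ones(n_traj, dtype=bool)
    while alive.any():
        idx = np.flatnonzero(alive)
        reset = rng.random(idx.size) < r * dt          # resetting events
        x[idx[reset]] = x0
        x[idx] += np.sqrt(2 * D * dt) * rng.normal(size=idx.size)
        t[idx] += dt
        alive[idx] = x[idx] > 0.0                      # absorb at x = 0
    return t.mean()

for r in (0.5, 1.0, 2.0, 4.0):   # exact free-particle result: (exp(x0*sqrt(r/D))-1)/r
    print(f"r = {r}: MFPT ~ {mfpt(r):.2f}")
```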
The dynamics of network social contagion processes such as opinion formation and epidemic spreading are often mediated by interactions between multiple nodes. Previous results have shown that these higher-order interactions can profoundly modify the dynamics of contagion processes, resulting in bistability, hysteresis, and explosive transitions. In this paper, we present and analyze a hyperdegree-based mean-field description of the dynamics of the SIS model on hypergraphs, i.e. networks with higher-order interactions, and illustrate its applicability with the example of a hypergraph where contagion is mediated by both links (pairwise interactions) and triangles (three-way interactions). We consider various models for the organization of link and triangle structure, and different mechanisms of higher-order contagion and healing. We find that explosive transitions can be suppressed by heterogeneity in the link degree distribution, when links and triangles are chosen independently, or when link and triangle connections are positively correlated when compared to the uncorrelated case. We verify these results with microscopic simulations of the contagion process and with analytic predictions derived from the mean-field model. Our results show that the structure of higher-order interactions can have important effects on contagion processes on hypergraphs.
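The simplest degree-homogeneous version of such a mean-field description (parameter names mine) already shows the bistability behind explosive transitions: with the pairwise rate bl below threshold and a strong three-body rate bt, the epidemic survives only if it starts above a critical density.

```python
# Homogeneous mean-field SIS with pairwise (bl) and triangle (bt) contagion:
#   d rho/dt = -rho + bl*rho*(1 - rho) + bt*rho^2*(1 - rho)
import numpy as np

def steady_state(rho0, bl=0.8, bt=4.0, dt=0.01, steps=20_000):
    rho = rho0
    for _ in range(steps):                 # forward-Euler integration
        rho += dt * (-rho + bl * rho * (1 - rho) + bt * rho**2 * (1 - rho))
    return rho

for rho0 in (0.01, 0.30):                  # below / above the critical density
    print(f"rho(0) = {rho0:.2f}  ->  rho(infinity) ~ {steady_state(rho0):.3f}")
```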
The dispersal of individuals within an animal population will depend upon local properties intrinsic to the environment that differentiate superior from inferior regions as well as properties of the population. Competing concerns can either draw conspecifics together in aggregation, such as collective defence against predators, or promote dispersal that minimizes local densities, for instance to reduce competition for food. In this talk we consider a range of models of non-independent movement. These include established models, such as the ideal free distribution, but also novel models which we introduce, such as the wheel. We will also discuss several ways to combine different models to create a flexible model to address a variety of dispersal mechanisms. We further discuss novel measures of movement coordination and show how to generate a population movement that achieves appropriate values of the measure specified. The movement framework that we have developed is both of interest as a stand-alone process to explore movement, but also able to generate a variety of movement patterns that can be embedded into wider evolutionary models where movement is not the only consideration.
Inspired by patterning of the vertebrate neural tube, we study how boundaries between regions of gene expression can form in epithelia. We show that the presence of noise in the morphogen-controlled bistable switch profoundly alters patterning. We also show how differentiation, which causes cells to delaminate (an apparently isotropic process), can change the shape of clones in growing epithelia. Finally, we study evolution of cooperation in epithelia. This extends work in evolutionary graph theory, since the graphs linking cells to their neighbours evolve in time.
The duration of interaction events in a society is a fundamental measure of its collective nature and potentially reflects variability in individual behavior. Using automated monitoring of social interactions of individual honeybees in 5 honeybee colonies, we performed a high-throughput measurement of trophallaxis and face-to-face event durations experienced by honeybees over their entire lifetimes. We acquired a rich and detailed dataset consisting of more than 1.2 million interactions in five honeybee colonies. We find that bees, like humans, also interact in bursts but that spreading is significantly faster than in a randomized reference network and remains so even after an experimental demographic perturbation. Thus, while burstiness may be an intrinsic property of social interactions, it does not always inhibit spreading in real-world communication networks. The interaction time distribution is heavy-tailed, as previously reported for human face-to-face interactions. We developed a theory of pair interactions that takes into account individual variability and predicts the scaling behavior for both bee and extant human datasets. The individual variability of worker honeybees was non-zero, but less than that of humans, possibly reflecting their greater genetic relatedness. Our work shows how individual differences can lead to universal patterns of behavior that transcend species and specific mechanisms for social interactions.
The theory of wave turbulence provides an analytical connection of the dynamics of weakly interacting dispersive waves and the statistical properties of turbulence, in particular the Kolmogorov-Zakharov spectrum.
Numerical simulations of a simple wave system, the one-dimensional Majda-McLaughlin-Tabak equation, produced spectra that are steeper than the Kolmogorov-Zakharov spectra of wave turbulence. In my talk I show that some exotic behavior in one dimension is responsible for this: The state of statistical spatial homogeneity of weak wave turbulence can be spontaneously broken. Wave turbulence is then superseded by radiating pulses that transfer energy in wavenumber space and that lead to a steeper spectrum. On the other hand, wave turbulence is stable in two and three spatial dimensions and in some situations in one dimension. Simulations of large ensembles of systems verify the predictions of wave turbulence theory.
Global transport and communication networks enable information, ideas, and infectious diseases to now spread at speeds far beyond what has historically been possible. To effectively monitor, design, or intervene in such epidemic-like processes, there is a need to predict the speed of a particular contagion in a particular network, and to distinguish between nodes that are more likely to become infected sooner or later during an outbreak. In this talk I will show how these quantities can be investigated using a message-passing approach to derive simple and effective predictions that are validated against epidemic simulations on a variety of real-world networks with good agreement. In addition to individualised predictions for different nodes, our results predict an overall sudden transition from low density to almost full network saturation as the contagion progresses in time.
In cancer, but also evolution in general, great effort is expended to find "driver-mutations", which are specific mutations in genes that significantly increase the fitness of an individual or a cell - and, in the case of cancer, cause the growth of a tumour in the first place. But how can we distinguish them if we don't know what baseline to compare them to? This is where research into the dynamics of neutral random mutations becomes relevant. We find certain signals such as the measured frequency and burden distributions of mutations in a sample that can give us information about core population characteristics like the population size of stem cells, the mutation rate, or the percentage of symmetric cell divisions. Analysing the patterns of random mutations thus provides a theoretical tool to interpret genomic data of healthy tissues, for the purpose of both improving detection of true driver mutations as well as learning more about the underlying dynamics of the population which are often hard to measure directly.
Hyperbolic dynamical systems, by definition, have two complementary directions, one with uniform (stable) contraction and the other with uniform (unstable) expansion. A famous and important example of these systems is the so-called Smale horseshoe. However, there are many systems where hyperbolicity is not satisfied. We are interested in dynamics where there is a (central) direction where the effects of contraction and expansion superpose and the resulting action of the dynamics is neutral. In this context, I will present dynamical systems that have a simple description as a skew product of a horseshoe and two diffeomorphisms of the circle. Important examples of these diffeomorphisms are the projective actions of SL(2, R) matrices on the circle. The most interesting case occurs when one matrix is hyperbolic (eigenvalues different from one) and the other one is elliptic (eigenvalues of modulus one). In this model, two related systems come together: a non-hyperbolic system (with stable, unstable and central directions) and a matrix product (called a cocycle). The Lyapunov exponent (expansion rate) associated to the central direction corresponds to the exponential growth of the norms of the product of these matrices. We want to describe the “spectrum” of such “Lyapunov exponents”. For this, it is necessary to understand the underlying “thermodynamic formalism” and the “structure of the space of ergodic measures”, where the appearance of so-called non-hyperbolic measures is a key difficulty. Our goal is to discuss this scenario, presenting key concepts and ideas and some results.
Vincent A.A. Jansen, Timothy W. Russell, Matthew J. Russell
Selfish genetic elements (SGEs) are genes that enhance their own frequency, at no benefit or a cost to the individual. SGEs which involve separate driver and target loci can show evolutionarily complex, non-equilibrium behaviour, for instance, selective sweeps followed by stasis. Using a mathematical model, I will show that the specificity between two pairs of selfish driver-target loci can lead to sweeps and stasis in the form of heteroclinic cycles. For systems with more than two target and driver loci these heteroclinic cycles can link into a network. The dynamics in the vicinity of these heteroclinic networks can be understood as symbolic dynamics and suggest the existence of a horseshoe map and positive Lyapunov exponents. The resulting dependence on initial conditions means that nearby populations are driven apart. Populations can diverge quickly, showing how chaotic dynamics can genetically isolate populations. This provides a plausible explanation for some empirically observed genetic patterns, for example, chaotic genetic patchiness.
The progressive concentration of population in urban areas brings challenges such as congestion, pollution and the health of citizens to the forefront. Here we discuss recent advances in the study of urban systems, investigating the properties of human mobility, their impact on the environment and the health of citizens, and the role of transportation infrastructures. We first develop a field theory to unveil hidden patterns of mobility, showing that the gravity model outperforms the radiation model when reproducing key features of urban displacements. We then connect the hierarchical structure of mobility with metrics related to city livability. Finally, we inspect the role of transportation infrastructures by studying the recovery of the public transportation network from massive gatherings such as concerts or sports events.
I will survey how statistical properties of dynamical systems can be studied by analyzing the spectral properties of suitable transfer operators. Next, I will present how it is possible to study ergodic averages and cohomological equations, related to parabolic dynamics, by means of hyperbolic renormalizations, exploiting spectral properties of transfer operators on anisotropic Banach spaces.
Power-grid frequency control is a demanding task requiring expensive idle power plants to adapt the supply to the fluctuating demand. An alternative approach is controlling the demand side in such a way that certain appliances modify their operation to adapt to power availability. This is especially important to enable a high penetration of renewable energy sources. A number of methods to manage the demand side have been proposed. In this work, we focus on dynamic demand control (DDC), where smart appliances can delay their switchings depending on the frequency of the system. We introduce DDC into a simple model to study its effects on the frequency of the power grid. We find that DDC can reduce small and medium-size fluctuations but it can also increase the probability of observing large frequency peaks due to the necessity of recovering pending tasks.
We consider a reaction-diffusion system modelling the growth, dispersal and mutation of two phenotypes. This model was proposed by Elliott and Cornell (2012), who presented evidence that for a class of dispersal and growth coefficients and a small mutation rate, the two phenotypes spread into the unstable extinction state at a single speed that is faster than either phenotype would spread in the absence of mutation. Using the fact that, under reasonable conditions on the mutation and competition parameters, the spreading speed of the two phenotypes is indeed determined by the linearisation about the extinction state, we prove that the spreading speed is a non-increasing function of the mutation rate (implying that greater mixing between phenotypes leads to slower propagation), determine the ratio at which the phenotypes occur in the leading edge in the limit of vanishing mutation, and discuss the effect of trade-offs between dispersal and growth on the spreading speed of the phenotypes. This talk is based on joint work with Luca Börger and Aled Morris (Swansea).
Network models may be applied to describe many complex systems, and in the era of online social networks the study of dynamics on networks is an important branch of computational social science. Cascade dynamics can occur when the state of a node is affected by the states of its neighbours in the network, for example when a Twitter user is inspired to retweet a message that she received from a user she follows, with one event (the retweet) potentially causing further events (retweets by followers of followers) in a chain reaction. In this talk I will review some mathematical models that can help us understand how social contagion (the spread of cultural fads and the viral diffusion of information) depends upon the structure of the social network and on the dynamics of human behaviour. Although the models are simple enough to allow for mathematical analysis, I will show examples where they can also provide good matches to empirical observations of cascades on social networks.
Events in mesoscopic processes often take place at random times. Take for instance the example of a colloidal particle escaping from a metastable state. An interesting question is how much work an external agent has exerted on the particle when it escapes the metastable state. In order to address this question, we develop a thermodynamic theory for events that take place at random times. To this aim, we apply the theory of stochastic thermodynamics, which is a thermodynamic theory for mesoscopic systems, to ensembles of trajectories terminating at random times. Using results from martingale theory, we obtain a thermodynamic bound, reminiscent of the second law of thermodynamics, for the work exerted by an external protocol on a mesoscopic system at random times.
Power-grid frequency is a key indicator of stability in power grids. The trajectory of power-grid frequency embodies several processes of different natures: the control systems enforcing stability, the trade markets, production and demand, and the correlations between these. We study power-grid frequency from Central Europe, Great Britain, and the Nordic Grid (Finland, Sweden, Norway) under the umbrella of classical and fractional stochastic processes. We first introduce a data-driven model to extract fundamental parameters of the power-grid system's control, combining stochastic and deterministic approaches. Secondly, we extend the analysis to fractional stochastic processes. We devise an estimator of the Hurst index for fractional Ornstein-Uhlenbeck processes. We show that power-grid frequency exhibits time-dependent volatility, driven by daily human activity and yearly seasonal cycles. Seasonality is consistently observable in smaller power grids, affecting the correlations in the stochastic noise. Great Britain displays daily rhythms of varying volatility, where the noise amplitude consistently doubles its intensity, and displays bi- and tri-modal distributions. Both the Nordic Grid and Great Britain power grids exhibit varying Hurst indices over yearly scales. All the power grids display highly persistent noise, with Hurst indices H > 0.5.
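To illustrate the kind of estimator involved (a simplified aggregated-variance method, not the fractional Ornstein-Uhlenbeck estimator of the talk): generate fractional Gaussian noise of known Hurst index and re-estimate it from the scaling of block-mean variances.

```python
# Generate fractional Gaussian noise via a Cholesky factorisation of its
# autocovariance, then estimate H from Var[block means of size m] ~ m^(2H-2).
import numpy as np

H, n = 0.7, 2000
k = np.arange(n)
gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
C = gamma[np.abs(k[:, None] - k[None, :])]          # Toeplitz covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
fgn = L @ np.random.default_rng(7).normal(size=n)   # correlated sample

ms = np.array([4, 8, 16, 32, 64, 128])
v = [np.var(fgn[: n - n % m].reshape(-1, m).mean(axis=1)) for m in ms]
slope = np.polyfit(np.log(ms), np.log(v), 1)[0]     # slope = 2H - 2
print(f"true H = {H}, estimated H = {1 + slope / 2:.2f}")
```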
Natural populations consist of many different individuals that interact with each other. Some individuals compete with each other to exploit a shared resource, while others help each other to coexist. Such interactions affect the death or reproduction of individuals and thus shape the composition and characteristics of the population. However, those interactions are not static: they change dynamically due to the emergence of a new type, which can occur through mutation, recombination, or immigration. We use a new evolutionary game characterized by an interaction matrix to investigate the effect of the emergence of new mutants on the population composition. First, we show that the population size is an emergent property of such an evolving system. We also examine the interaction structure, and the results suggest that the backbone of interactions depends strongly on how interactions are inherited.
We give a review of some recent results on extreme value theory applied to dynamical systems, using the spectral approach to transfer operators. In particular, this allows us to treat high-dimensional cases and open systems with holes, and to give a precise computation of the extremal index.
The meeting will start at 2:15pm and the schedule is as follows.
2:15pm Chris Good (Birmingham): Shifts of finite type as fundamental objects in the theory of shadowing
3:30pm Polina Vytnova (Warwick): Dimension of Bernoulli convolutions: computer assisted estimates
5:00pm Mike Todd (St Andrews): Escape of entropy
Abstracts are available at the workshop webpage.
Temporal graphs (in which edges are active only at specified timesteps) are an increasingly important and popular model for a wide variety of natural and social phenomena. I'll talk a bit about what's been going on in the world of temporal graphs, and then go on to the idea of graph modification in a temporal setting. Motivated by a particular agricultural example, I'll talk about the temporal nature of livestock networks, with a quick diversion into recognising the periodic nature of some cattle trading systems. With bovine epidemiology in mind, I'll talk about a particular modification problem in which we assign times to edges so as to maximise or minimise reachability sets within a temporal graph. I'll mention an assortment of complexity results on these problems, showing that they are hard under a disappointingly large variety of restrictions. In particular, if edges can be grouped into classes that must be assigned the same time, then the problem is hard even on directed acyclic graphs when both the reachability target and the classes of edges are of constant size, as well as on an extremely restrictive class of trees. The situation is slightly better if each edge is active at a unique timestep - in some very restricted cases the problem is solvable in polynomial time. (Joint work with Kitty Meeks.)
Using extensive Monte Carlo simulations, we investigate the surface adsorption of self-avoiding trails on the triangular lattice with two- and three-body on-site monomer-monomer interactions. In the parameter space of two-body, three-body, and surface interaction strengths, the phase diagram displays four phases: swollen (coil), globule, crystal, and adsorbed. For small values of the surface interaction, we confirm the presence of swollen, globule, and crystal bulk phases. For sufficiently large values of the surface interaction, the system is in an adsorbed state, and the adsorption transition can be continuous or discontinuous, depending on the bulk phase. As such, the phase diagram contains a rich phase structure with transition surfaces that meet in multicritical lines joining in a single special multicritical point. The adsorbed phase displays two distinct regions with different characteristics, dominated by either single- or double-layer adsorbed ground states. Interestingly, we find that there is no finite-temperature phase transition between these two regions, but rather a smooth crossover.
The classical Lorenz flow, and any flow which is close to it in the C^2-topology, satisfies a Central Limit Theorem (CLT). We first prove statistical stability and then prove that the variance in the CLT varies continuously for this family of flows and for general geometric Lorenz flows, including extended Lorenz models where certain stable foliations have weaker regularity properties.
This is joint work with I. Melbourne and Marks Ruziboev.
The affinity dimension, introduced by Falconer in the 1980s, is the `typical' value of the Hausdorff dimension of a self-affine set. In 2014, Feng and Shmerkin proved that the affinity dimension is continuous as a function of the maps defining the self-affine set, thus resolving a long-standing open problem in the fractal geometry community. In this talk we will discuss stronger regularity properties of the affinity dimension in some special cases. This is based on recent work with Ian Morris.
In this talk, which should be accessible to a general audience, I will discuss the notion of epsilon-entropy introduced by Kolmogorov in the 1950s, as a measure of the complexity of compact sets in a metric space. I will then discuss a new proof for a problem originally raised by Kolmogorov on the precise asymptotics of the epsilon-entropy of compact sets of holomorphic functions, which relies on ideas from operator theory and potential theory. This is joint work with Stephanie Nivoche (Nice).
A central problem in uncertainty quantification is how to characterize the impact that our incomplete knowledge about models has on the predictions we make from them. This question naturally lends itself to a probabilistic formulation, by making the unknown model parameters random with given statistics. Here this approach is used in concert with tools from large deviation theory (LDT) and optimal control to estimate the probability that some observables in a dynamical system go above a large threshold after some time, given the prior statistical information about the system's parameters and its initial conditions. We use this approach to quantify the likelihood of extreme surface elevation events for deep sea waves, so-called rogue waves, and compare the results to experimental measurements. We show that our approach offers a unified description of rogue wave events in the one-dimensional case, covering a vast range of parameters. In particular, this includes both the predominantly linear regime as well as the highly nonlinear regime as limiting cases, and is able to predict the experimental data regardless of the strength of the nonlinearity.
The Paris conference 2015 set a path to limit climate change to "well below 2°C". To reach this goal, integrating renewable energy sources into the electrical power grid is essential but poses an enormous challenge to the existing system, demanding new conceptional approaches. In this talk, I will introduce basics of power grid operation and outline some pressing challenges to the power grid. In particular, I present our latest research on power grid fluctuations and how they threaten robust grid operation. For our analysis, we collected frequency recordings from power grids in North America, Europe and Japan, noticing significant deviations from Gaussianity. We developed a coarse framework to analytically characterize the impact of arbitrary noise distributions as well as a superstatistical approach. This already gives an opportunity to plan future grids. Finally, I will outline my recently started Marie-Curie project DAMOSET, which focusses on building up an open data base of measurements to deepen our understanding.
Complex dynamical systems driven by the unravelling of information can be modelled effectively by treating the underlying flow of information as the model input. Complicated dynamical behaviour of the system is then derived as an output. Such an information-based approach is in sharp contrast to the conventional mathematical modelling of information-driven systems whereby one attempts to come up with essentially ad hoc models for the outputs. In this talk, dynamics of electoral competition is modelled by the specification of the flow of information relevant to election. The seemingly random evolution of the election poll statistics are then derived as model outputs, which in turn are used to study election prediction, impact of disinformation, and the optimal strategy for information management in an election campaign.
Certain classes of higher-order networks can be interpreted as discrete geometries. This creates a relation with approaches to non-perturbative quantum gravity, where one also studies ensembles of geometries of this type. In the framework of Causal Dynamical Triangulations (CDT) the regularised Feynman path integral over curved space-times takes the form of a sum over simplicial geometries (triangulated spaces) of fixed dimension and topology. One key challenge of quantum gravity is to characterise the geometric properties of the resulting ``quantum geometry" in terms of a set of suitable observables. Well-known examples of observables are the Hausdorff and spectral dimension. After a short introduction of central concepts in CDT, I will describe recent attempts to study the possible emergence of global symmetries in quantum geometries. This involves the analysis of the spectra of an operator related to the discrete 1-Laplacian, whose eigenvectors are the discrete analogues of Killing vector fields in the continuum.
In this talk, we will present our ongoing activities in learning better models for inverse problems in imaging. We consider classical variational models used for inverse problems but generalise these models by introducing a large number of free model parameters. We learn the free model parameters by minimising a loss function comparing the reconstructed images obtained from the variational models with ground truth solutions from a training data base. We will also show recent results on learning "deeper" regularisers that are allowed to change their parameters in each iteration of the algorithm. We show applications to different inverse problems in imaging where we put a particular focus on joint image demosaicing and denoising.
Here we discuss some exact mathematical results in percolation theory, including the triangle-triangle duality transformation, results for 4-hypergraphs, and the application of Euler's formula to study the number of clusters on a lattice and its dual lattice. The latter leads to procedures to approximate the threshold to high precision efficiently, as carried out by J. Jacobsen for a variety of Archimedean lattices. The ideas of crossing probabilities on open systems, going back to the work of J. Cardy and of G. M. T. Watts, and wrapping probabilities on a torus, going back to Pinson, will also be discussed. These results are limited to two-dimensional systems.
The modern world is best described as interlinked networks of individuals, computing devices and social networks, where information and opinions propagate through their edges in a probabilistic or deterministic manner via interactions between individual constituents. These interactions can take the form of political discussions between friends, gossiping about movies, or the transmission of computer viruses. Winners are those who maximise the impact of scarce resources, such as political activists or advertisements, by applying them to the most influential available nodes at the right time. We developed an analytical framework, motivated by and based on statistical physics tools, for impact maximisation in probabilistic information propagation on networks, to better understand the optimisation process macroscopically, its limitations and potential, and to devise computationally efficient methods to maximise impact (an objective function) in specific instances.
The research questions we have addressed relate to the manner in which one could maximise the impact of information propagation by providing inputs at the right time to the most effective nodes in the particular network examined, where the impact is observed at some later time. Our approach is based on a statistical-physics-inspired analysis, Dynamical Message Passing, which calculates the probability of propagation to a node at a given time, combined with a variational optimisation process. We address the following questions: 1) Given a graph, a budget and a propagation/infection process, which nodes are best to infect to maximise the spreading? 2) How can one maximise the impact on a subset of particular nodes at given times, by accessing a limited number of given nodes? 3) Which vaccination targets are most appropriate for isolating a spreading disease through containment of the epidemic? 4) How should resources be deployed optimally in the presence of competitive/collaborative processes? We also point to potential applications. (A brute-force baseline for question 1 is sketched after the reference below.)
Lokhov A.Y. and Saad D., Optimal Deployment of Resources for Maximizing Impact in Spreading Processes, PNAS 114 (39), E8138 (2017)
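As a point of comparison for question 1, the following is a minimal brute-force Monte Carlo baseline, not the Dynamical Message Passing machinery of the talk; the graph, infection probability and run counts are illustrative assumptions, and the sketch assumes the networkx package.

    # Brute-force estimate of the expected spread from a seed set under an
    # independent-cascade (SIR-like) process; a baseline, not DMP.
    import random
    import networkx as nx

    def expected_spread(G, seeds, beta=0.1, runs=200):
        total = 0
        for _ in range(runs):
            infected = set(seeds)
            frontier = list(seeds)
            while frontier:
                new = []
                for u in frontier:
                    for v in G.neighbors(u):
                        # each contact transmits once, with probability beta
                        if v not in infected and random.random() < beta:
                            infected.add(v)
                            new.append(v)
                frontier = new
            total += len(infected)
        return total / runs

    G = nx.erdos_renyi_graph(200, 0.05, seed=1)
    best = max(G.nodes, key=lambda s: expected_spread(G, [s]))
    print("best single seed:", best)

Unlike Dynamical Message Passing, which propagates marginal probabilities along edges analytically, this estimator needs many stochastic runs per candidate seed; avoiding that cost is exactly what makes message-passing approaches attractive.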
Many biological problems, such as tumor-induced angiogenesis (the growth of blood vessels to provide nutrients to a tumor), or signaling pathways involved in the dysfunction of cancer (sets of molecules that interact that turn genes on/off and ultimately determine whether a cell lives or dies), can be modeled using differential equations. There are many challenges with analyzing these types of mathematical models, for example, rate constants, often referred to as parameter values, are difficult to measure or estimate from available data. I will present mathematical methods we have developed to enable us to compare mathematical models with experimental data. Depending on the type of data available, and the type of model constructed, we have combined techniques from computational algebraic geometry and topology, with statistics, networks and optimization to compare and classify models without necessarily estimating parameters. Specifically, I will introduce our methods that use computational algebraic geometry (e.g., Gröbner bases) and computational algebraic topology (e.g., persistent homology). I will present applications of our methodology on datasets involving cancer. Time permitting, I will conclude with our current work for analyzing spatio-temporal datasets with multiple parameters using computational algebraic topology.
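As a small illustration of the persistent-homology side of this toolbox, the sketch below computes a persistence diagram for a noisy circle; it assumes the third-party ripser package (not mentioned in the talk), and the data are synthetic.

    import numpy as np
    from ripser import ripser

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 100)
    # noisy circle: its degree-1 diagram should show one long-lived loop
    points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

    dgms = ripser(points, maxdim=1)['dgms']
    h1 = dgms[1]
    print("most persistent H1 feature (birth, death):",
          h1[np.argmax(h1[:, 1] - h1[:, 0])])

In applications like those described above, the point cloud would instead come from experimental measurements, and the long-lived features are the candidates for biologically meaningful structure.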
For every random process, all measurable quantities are described comprehensively through their probability distributions. In the ideal but rare case, they can be obtained analytically, i.e., completely. Most physical models are not accessible analytically, so one has to perform numerical simulations. Usually this means one does many independent runs and obtains estimates of the probability distributions from the measured histograms. Since the number of repetitions is limited, maybe 10 million, the distributions can correspondingly be estimated in a range down to probabilities like 10^-10. But what if one wants to obtain the full distribution, in the spirit of obtaining all information? This means one desires to get the distribution down to the rare events, but without waiting forever by performing an almost infinite number of simulation runs.
Here, we study rare events numerically using a very general black-box method. It is based on sampling vectors of random numbers within an artificial finite-temperature (Boltzmann) ensemble to access rare events and large deviations for almost arbitrary equilibrium and non-equilibrium processes. In this way, we obtain probabilities as small as 10^-500 and smaller; hence (almost) the full distribution can be obtained in a reasonable amount of time.
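A minimal sketch of this idea for a toy observable E(x) = sum(x) of uniform random numbers is given below; the temperature, step count and binning are illustrative assumptions, and in practice histograms from several temperatures are matched in their overlap regions to fix the unknown normalisation constants.

    import math
    import random

    N, T, steps = 100, -0.05, 200_000   # negative T biases towards large sums
    x = [random.random() for _ in range(N)]
    E = sum(x)
    hist = {}
    for _ in range(steps):
        i = random.randrange(N)
        new = random.random()           # redraw one entry from its natural density
        dE = new - x[i]
        # Metropolis rule for the biased ensemble ~ exp(-E/T)
        if random.random() < math.exp(-dE / T):
            x[i], E = new, E + dE
        b = round(E, 1)
        hist[b] = hist.get(b, 0) + 1
    # unbias: P(E) is proportional to hist[E] * exp(E/T), up to normalisation
    for b in sorted(hist):
        print(b, hist[b] * math.exp(b / T))

Because the bias exp(-E/T) is known exactly, the measured histogram can be reweighted back to the true distribution, which is how probabilities far below the reach of direct sampling become accessible.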
Here, some applications are presented: the distribution of work performed for a critical (T = 2.269) two-dimensional Ising system of size L x L = 128 x 128 upon rapidly changing the external magnetic field (only by obtaining the distribution over hundreds of decades is it possible to check the Jarzynski and Crooks theorems, which exactly relate the non-equilibrium work to the equilibrium free energy); the distribution of perimeters and areas of convex hulls of finite-dimensional single and multiple random walks; and the distribution of the height fluctuations of the Kardar-Parisi-Zhang (KPZ) equation via a model of directed polymers in random media.
We show that rank-ordered properties of a wide range of instances encountered in the arts (visual art, music, architecture), natural sciences (biology, ecology, physics, geophysics) and social sciences (social networks, archeology, demographics) follow a two-parameter Discrete Generalized Beta Distribution (DGBD) [1]. We present several models that produce outcomes which, under rank-ordering, follow DGBDs: i) Expansion-modification algorithms [2], ii) Death-birth master equations that lead to Langevin and Fokker-Planck equations [3], iii) Symbolic dynamics of unimodal nonlinear map families and their associated thermodynamic formalism [4]. A common feature of the models is the presence of an order-disorder conflicting dynamics. In all cases “a” is associated with long-range correlations and “b” with the presence of unusual phenomena. Furthermore, the difference D = a - b determines transitions between different dynamical regimes such as chaos/intermittency. (A minimal sketch of the DGBD and its fitting follows the references below.)
[1] Universality in rank-ordered distributions in the arts and sciences, G. Martínez-Mekler, R. Alvarez Martínez, M. Beltran del Rio, R. Mansilla, P. Miramontes, G. Cocho, PLoS ONE 4(3): e4791 (2009). doi:10.1371/journal.pone.0004791
[2] Order-disorder transition in conflicting dynamics leading to rank-frequency generalized beta distributions, R. Álvarez-Martínez, G. Martínez-Mekler, G. Cocho, Physica A 390 (2011) 120-130.
[3] Birth and death master equation for the evolution of complex networks, R. Álvarez-Martínez, G. Cocho, R. F. Rodríguez, G. Martínez-Mekler, Physica A (2014) 198-208.
[4] Rank ordered beta distributions of nonlinear map symbolic dynamics families with a first-order transition between dynamical regimes, R. Álvarez-Martínez, G. Cocho, G. Martínez-Mekler, Chaos 28, 075515 (2018).
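The following is a minimal sketch of the DGBD of [1], f(r) = A (N + 1 - r)^b / r^a, fitted by least squares in log space; the synthetic data and noise level are illustrative assumptions.

    import numpy as np

    def dgbd(r, N, A, a, b):
        # two-parameter discrete generalized beta distribution of [1]
        return A * (N + 1 - r) ** b / r ** a

    N = 1000
    r = np.arange(1, N + 1)
    rng = np.random.default_rng(0)
    data = dgbd(r, N, 1.0, 0.8, 0.3) * np.exp(0.05 * rng.normal(size=N))

    # regress log f on (1, -log r, log(N + 1 - r)) to recover a and b
    X = np.c_[np.ones(N), -np.log(r), np.log(N + 1 - r)]
    (logA, a_hat, b_hat), *_ = np.linalg.lstsq(X, np.log(data), rcond=None)
    print("fitted a, b:", a_hat, b_hat)

In the terminology of the abstract, a_hat and b_hat are the exponents whose difference D = a - b separates the dynamical regimes.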
The explosion in digital music information has spurred the development of mathematical models and computational algorithms for accurate, efficient, and scalable processing of music information. Total global recorded music revenue was US$17.3b in 2017, 41% of which was digital (2018 IFPI Report). Industrial-scale applications like Shazam have over 150 million active users monthly, and Spotify over 140 million. With such widespread access to large digital music collections, there is substantial interest in scalable models for music processing. Optimisation concepts and methods thus play an important role in machine models of music engagement, music experience, music analysis, and music generation. In the first part of the talk, I shall show how optimisation ideas and techniques have been integrated into computer models of music representation and expressivity, and into computational solutions to music generation and structure analysis.
Advances in medical and consumer devices for measuring and recording physiological data have given rise to parallel developments in computing in cardiology. While the information sources (music and cardiac signals) share many rhythmic and other temporal similarities, the techniques of mathematical representation and computational analysis have developed independently, as have the tools for data visualization and annotation. In the second part of the talk, I shall describe recent work applying music representation and analysis techniques to electrocardiographic sequences, with applications to personalised diagnostics, cardiac-brain interactions, and disease and risk stratification. These applications represent ongoing collaborations with Professors Pier Lambiase and Peter Taggart (UCL), and Dr. Ross Hunter at the Barts Heart Centre.
About the speaker: Elaine Chew is Professor of Digital Media in the School of Electronic Engineering and Computer Science at Queen Mary University of London. Before joining QMUL in Fall 2011, she was a tenured Associate Professor in the Viterbi School of Engineering and Thornton School of Music (joint) at the University of Southern California, where she founded the Music Computation and Cognition Laboratory and was the inaugural honoree of the Viterbi Early Career Chair. She has also held visiting appointments at Harvard (2008-2009) and Lehigh University (2000-2001), and was Affiliated Artist of Music and Theater Arts at MIT (1998-2000). She received PhD and SM degrees in Operations Research at MIT (in 2000 and 1998, respectively), a BAS in Mathematical and Computational Sciences (honors) and in Music (distinction) at Stanford (1992), and FTCL and LTCL diplomas in Piano Performance from Trinity College, London (in 1987 and 1985, respectively). She was awarded an ERC ADG in 2018 for the project COSMOS: Computational Shaping and Modeling of Musical Structures, and is a past recipient of a 2005 Presidential Early Career Award in Science and Engineering (the highest honor conferred on young scientists/engineers by the US Government at the White House), a Faculty Early Career Development (CAREER) Award by the US National Science Foundation, and 2007/2017 Fellowships at Harvard's Radcliffe Institute for Advanced Studies. She is an alum (Fellow) of the (US) National Academy of Science's Kavli Frontiers of Science Symposia and of the (US) National Academy of Engineering's Frontiers of Engineering Symposia for outstanding young scientists and engineers. Her research, centering on computational analysis of music structures in performed music, performed speech, and cardiac arrhythmias, has been supported by the ERC, EPSRC, AHRC, and NSF, and featured on BBC World Service/Radio 3, Smithsonian Magazine, Philadelphia Inquirer, Wired Blog, MIT Technology Review, The Telegraph, etc.
Many physical, biological and engineering processes can be represented mathematically by models of coupled systems with time delays. Time delays in such systems are often either hard to measure accurately or change over time, so it is more realistic to take time delays from a particular distribution rather than to assume them to be constant. In this talk, I will show how distributed time delays affect the stability of solutions in systems of coupled oscillators. Furthermore, I will present a system with distributed delays and Gaussian noise, and illustrate how to calculate the optimal path to escape from the basin of attraction of the stable steady state, as well as how the distribution of time delays influences the rate of escape away from the stable steady state. Throughout the talk, analytical calculations will be supported by numerical simulations to illustrate possible dynamical regimes and processes.
Modelling the dynamics of finite populations involves intrinsic demographic noise. This is particularly important when the population is small, as is frequently the case in biological applications; an example of this is gene circuits. At the same time, populations can be subject to switching or changing environments; for example, promoters may bind or unbind, or bacteria can be exposed to changing concentrations of antibiotics. How does one integrate intrinsic and extrinsic noise into models of population dynamics, and how does one derive coarse-grained descriptions? How can simulations be performed efficiently? In this talk I will address some of these questions. Theoretical aspects include systematic expansions in the strength of each type of noise to derive reduced models such as stochastic differential equations, or piecewise deterministic Markov processes. I will show how this can lead to peculiar features including master equations with negative “rates”. I will also discuss a number of applications, in particular in game theory, and phenotype switching.
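One standard way to simulate such systems exactly is the Gillespie algorithm; below is a minimal sketch for a birth-death population whose birth rate depends on a randomly switching environment (all rates are illustrative assumptions, not taken from the talk).

    import random

    t, n, env = 0.0, 10, 1          # time, population size, environment state
    birth = {0: 0.5, 1: 2.0}        # per-capita birth rate depends on environment
    death, switch = 1.0, 0.2        # per-capita death rate; env switching rate
    while t < 100.0 and n > 0:
        rates = [birth[env] * n, death * n, switch]
        total = sum(rates)
        t += random.expovariate(total)      # exponentially distributed waiting time
        u = random.uniform(0, total)
        if u < rates[0]:
            n += 1                          # birth
        elif u < rates[0] + rates[1]:
            n -= 1                          # death
        else:
            env = 1 - env                   # environment switches
    print("final population:", n)

The coarse-grained descriptions mentioned in the abstract (stochastic differential equations, piecewise deterministic Markov processes) arise as limits of exactly this kind of process when the population is large or the environmental switching is fast.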
Systems with delayed interactions play a prominent role in a variety of fields, ranging from traffic and population dynamics to gene regulatory and neural networks and encrypted communications. When subjecting a semiconductor laser to reflections of its own emission, a delay results from the propagation time of the light in the external cavity. Because of their experimental accessibility and multiple applications, semiconductor lasers with delayed feedback or coupling have become one of the most studied delay systems. One of the most experimentally accessible properties to characterise these chaotic dynamics is the autocorrelation function. However, the relationship between the autocorrelation function and other nonlinear properties of the system is generally unknown. Therefore, although the autocorrelation function is often one of the key characteristics measured, it is unclear which information can be extracted from it. Here, we present a linear stochastic model with delay that allows us to derive the autocorrelation function analytically. This linear model captures fundamental properties of the experimentally obtained autocorrelation function of a laser with delayed feedback, such as the shift and asymmetric broadening of the different delay echoes. Fitting this analytical autocorrelation to its experimental counterpart, we find that the model reproduces the experimental data surprisingly well in most dynamical regimes of the laser. Moreover, it is possible to establish a relation between the parameters of the linear model and dynamical properties of the semiconductor laser, such as the relaxation oscillation frequency and damping rate.
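To see the delay echoes concretely, here is a minimal Euler-Maruyama simulation of a generic linear delay Langevin equation, dx/dt = -a x(t) + b x(t - tau) + white noise; this is a stand-in with illustrative parameters, not the specific model of the talk.

    import numpy as np

    a, b, tau, dt, T = 1.0, 0.6, 10.0, 0.01, 2000.0
    n, d = int(T / dt), int(tau / dt)
    rng = np.random.default_rng(0)
    x = np.zeros(n)                 # zero history on the first delay interval
    for i in range(d, n - 1):
        x[i + 1] = x[i] + dt * (-a * x[i] + b * x[i - d]) + np.sqrt(dt) * rng.normal()
    x -= x.mean()
    acf = np.correlate(x, x, mode='full')[n - 1:] / (x.var() * n)
    print("autocorrelation at lag tau:", acf[d])   # the first delay echo

Since |b| < a, the fixed point is stable for all delays, and the autocorrelation develops peaks near integer multiples of tau whose shift and broadening can be compared with the analytical expressions discussed in the talk.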
Elements composing complex systems usually interact in several different ways, and as such the interaction architecture is well modelled by a network with multiple layers - a multiplex network. However, only in a few cases can such a multi-layered architecture be empirically observed, as one usually only has experimental access to an aggregated projection of the structure. A fundamental challenge is thus to determine whether the hidden underlying architecture of complex systems is better modelled as a single interaction layer or results from the aggregation and interplay of multiple layers. Assuming a prior of intralayer Markovian diffusion, in this talk I will present a method [1] that, using local information provided by a random walker navigating the aggregated network, makes it possible to determine in a robust manner whether these dynamics can be more accurately represented by a single layer or whether they are better explained by a (hidden) multiplex structure. In the latter case, I will also provide a Bayesian method to estimate the most probable number of hidden layers and the model parameters, thereby fully reconstructing the hidden architecture. The whole methodology enables us to decipher the underlying multiplex architecture of complex systems by exploiting the non-Markovian signatures in the statistics of a single random walk on the aggregated network. In fact, the mathematical formalism presented here extends above and beyond detection of physical layers in networked complex systems, as it provides a principled solution for the optimal decomposition and projection of complex, non-Markovian dynamics into a Markov switching combination of diffusive modes. I will validate the proposed methodology with numerical simulations of both (i) random walks navigating hidden multiplex networks (thereby reconstructing the true hidden architecture) and (ii) Markovian and non-Markovian continuous stochastic processes (thereby reconstructing an effective multiplex decomposition where each layer accounts for a different diffusive mode). I will also state two existence theorems guaranteeing that an exact reconstruction of the dynamics in terms of these hidden jump-Markov models is always possible for arbitrary finite-order Markovian and fully non-Markovian processes. Finally, using experiments, I will apply the methodology to understand the dynamics of RNA polymerases at the single-molecule level. [1] L. Lacasa, I. P. Mariño, J. Miguez, V. Nicosia, E. Roldán, A. Lisica, S. W. Grill, J. Gómez-Gardeñes, Multiplex decomposition of non-Markovian dynamics and the hidden layer reconstruction problem, Physical Review X 8, 031038 (2018).
Consider equations of motion that generate dispersion of an ensemble of particles. For a given dynamical system, an interesting problem is not only what type of diffusion is generated by its equations of motion, but also whether the resulting diffusive dynamics can be reproduced by some known stochastic model. I will discuss three examples of dynamical systems generating different types of diffusive transport: The first model is fully deterministic but non-chaotic, displaying a whole range of normal and anomalous diffusion under variation of a single control parameter [1]. The second model is a dissipative version of the paradigmatic standard map; weakly perturbing it by noise generates subdiffusion due to particles hopping between multiple attractors [2]. The third model randomly mixes in time chaotic dynamics generating normal diffusive spreading with non-chaotic motion where all particles localize; varying a control parameter, the mixed system exhibits a transition characterised by subdiffusion [3]. In all three cases I will show successes, failures and pitfalls when one tries to reproduce the resulting diffusive dynamics using simple stochastic models. Joint work with all authors of the references cited below.
[1] L. Salari, L. Rondoni, C. Giberti, R. Klages, Chaos 25, 073113 (2015)
[2] C. S. Rodrigues, A. V. Chechkin, A. P. S. de Moura, C. Grebogi and R. Klages, Europhys. Lett. 108, 40002 (2014)
[3] Y. Sato, R. Klages, to be published.
It is widely believed that to perform cognition, it is essential for a system to have an architecture in the form of a neural network, i.e. to represent a collection of relatively simple units coupled to each other with adjustable couplings. The main, if not the only, reason for this conviction is that the single natural cognitive system known to us, the brain, has this property. With this, understanding how the brain works is one of the greatest challenges of modern science. The traditional way to study the brain is to explore its separate parts and to search for correlations and emergent patterns in their behavior. This approach does not satisfactorily answer some fundamental questions, such as how memories are stored, or how the data from detailed neural measurements could be arranged into a single picture explaining what the brain does. It is well appreciated that the mind is an emergent property of the brain, and it is important to find the right level for its description. There has been much research devoted to describing and understanding the brain from the viewpoint of dynamical systems (DS) theory. However, the focus of this research has been on the behavior of the system and was largely limited to modelling of the brain, or of the phenomena occurring in the brain. We propose to shift the focus from the brain's behavior to the ruling force behind the behavior, which in a DS is the velocity vector field. We point out that this field is a mathematical representation of the device's architecture, the result of interaction between all of the device's components, and as such represents an emergent property of the device. With this, the brain's unique feature is its architectural plasticity, i.e. a continual formation, severance, strengthening and weakening of its inter-neuron connections, which is firmly linked to its cognitive abilities. We propose that the self-organising architectural plasticity of the brain creates a plastic self-organising velocity field, which evolves spontaneously according to some deterministic laws under the influence of sensory stimuli. Velocity fields of this type have not been known in the theory of dynamical systems, and we needed to introduce them specially to describe cognition [1]. We hypothesize that the ability to perform cognition is linked to the ability to create a self-organising velocity field evolving according to some appropriate laws, rather than to the neural-network architecture per se. With this, the plastic neural network is the means to create the required velocity field, which might not be unique. To verify our hypothesis, we construct a very simple dynamical system with a plastic velocity field, which is architecturally not a neural network, and show how it is able to perform basic cognition expected of neural networks, such as memorisation, classification and pattern recognition. Looking at the brain through the prism of its velocity vector field offers answers to a range of questions about memory storage and pattern recognition in the brain, and delivers the sought-after link between the brain substance and bodily behavior. At the same time, constructing various rules of self-organisation of a velocity vector field and implementing them in man-made devices could lead to artificial intelligent machines of novel types. [1] Janson, N. B. & Marsden, C. J. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system. Scientific Reports 7, 17007 (2017).
Various interacting lattice path models of polymer collapse in two dimensions demonstrate different critical behaviours, and this difference has been without a clear explanation. The collapse transition has been variously seen to be in the Duplantier-Saleur θ-point universality class (specific heat cusp), the interacting trail class (specific heat divergence), or even first-order. This talk will describe new studies that elucidate the role of three-body interactions in the phase diagram of polymer collapse in two dimensions.
In this talk, we will present our latest results on the modelling of rumour and disease spreading in single and multilayer networks. We will introduce a general epidemic model that encompasses the rumour and disease dynamics in a single framework. The susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models will be discussed in multilayer networks. Moreover, we will introduce a model of epidemic spreading with awareness, where the disease and information are propagated in different layers with different time scales. We will show that the time scale determines whether the information awareness is beneficial or not to the disease spreading. Finally, we will show how machine learning can be used to understand the structure and dynamics of complex networks.
Much of the progress that has been made in the field of complex networks is attributed to adopting dynamical processes as the means for studying these networks, as well as their structure and response to external factors. In this talk, by taking a different lens, I view complex networks as combinatorial structures and show that this somewhat alternative approach brings new opportunities for exploration. Namely, the focus is on the sparse regime of the configuration model, which is the maximum entropy network constrained by an arbitrary degree distribution, and on the generalisations of this model to the cases of directed and coloured edges (also known as the configuration multiplex model). We study how the (multivariate) degree distribution in these networks defines global emergent properties, such as the sizes and structure of connected components. By applying Joyal's theory of combinatorial species, the questions of connectivity and structure are formalised in terms of formal power series, and an unexpected link is made to stochastic processes. Then, by studying the limiting behaviour of these processes, we derive an asymptotic theory that is rich in analytical expressions for various generalisations of the configuration model. Furthermore, interesting connections are made between the configuration model and physical processes of different nature.
The mean-median map [4, 2, 1, 3] was originally introduced as a map over the space of finite multisets of real numbers. It extends such a multiset by adjoining to it a new number uniquely determined by the stipulation that the mean of the extended multiset be equal to the median of the original multiset. An open conjecture states that the new numbers produced by iterating this map form a sequence which stabilises, i.e., reaches a finite limit in finitely many iterations. We study the mean-median map as a dynamical system on the space of finite multisets of univariate piecewise-affine continuous functions with rational coefficients. We determine the structure of the limit function in the neighbourhood of a distinctive family of rational points. Moreover, we construct a reduced version of the map which simplifies the dynamics in such neighbourhoods and allows us to extend the results of [1] by over an order of magnitude. (A one-line implementation of the map's defining step is sketched after the references below.)
References
[1] F. Cellarosi, S. Munday, On two conjectures for M&m sequences, J. Difference Equations and Applications 2 (2017), 428-440.
[2] M. Chamberland, M. Martelli, The mean-median map, J. Difference Equations and Applications, 13 (2007), 577-583.
[3] J. Hoseana, The mean-median map, MSc thesis, Queen Mary, University of London, 2015.
[4] H. Shultz, R. Shiflett, M&m sequences, The College Mathematics Journal, 36 (2005), 191-198.
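As a minimal illustration, the defining step of the map is one line: the adjoined number x must satisfy mean(S + {x}) = median(S), i.e. x = (n + 1) * median(S) - sum(S). The starting multiset below is an arbitrary illustrative choice.

    from statistics import median

    def mean_median_step(s):
        # adjoin the unique x with mean(s + [x]) == median(s)
        return s + [(len(s) + 1) * median(s) - sum(s)]

    s = [0.0, 1.0, 4.0]
    for _ in range(10):
        s = mean_median_step(s)
    print(sorted(s))   # the conjecture: the adjoined numbers eventually stabilise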
For fluctuating thermodynamic currents in non-equilibrium steady states, the thermodynamic uncertainty relation expresses a fundamental trade-off between precision, i.e. small fluctuations, and dissipation. Using large deviation theory, we show that this relation follows from a universal bound on current fluctuations that is valid beyond the Gaussian regime and in which only the total rate of entropy production enters. Variants and refinements of this bound hold for fluctuations on finite time scales and for Markovian networks with known topology and cycle affinities. Applied to molecular motors and heat engines, the bound on current fluctuations imposes constraints on the efficiency and power. For cyclically driven systems, a generalisation of the uncertainty relation leads to an effective rate of entropy production that can be larger than the actual one, allowing for a higher precision of currents.
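In the notation most commonly used in this literature, the relation referred to here is usually written as follows (this is the standard form, stated for orientation rather than quoted from the talk):

\[
  \frac{\operatorname{Var}(J_t)}{\langle J_t \rangle^2} \;\ge\; \frac{2 k_B}{\sigma\, t},
\]

where $J_t$ is a time-integrated current, $\sigma$ is the total rate of entropy production and $t$ is the observation time: higher precision (smaller relative fluctuations) requires more dissipation.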
Kinetic theory is a landmark of statistical physics and can reveal physical Brownian motion from first principles. In this framework, the Boltzmann and Langevin equations are systematically derived from the Newtonian dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [1,2]. In light of this success, it is natural to apply this methodology to social science beyond physics, such as finance. In this presentation, we apply kinetic theory to financial Brownian motion [3,4], with empirical support from detailed high-frequency data analysis of a foreign exchange (FX) market.
We first show our data analysis to identify the microscopic dynamics of high-frequency traders (HFTs). By tracking trajectories of all traders individually, we characterize the dynamics of HFTs from the viewpoint of trend-following. We then introduce a microscopic model of FX traders incorporating the trend-following law. We apply the mathematical formulation of kinetic theory to the microscopic model for coarse-graining; Boltzmann-like and Langevin-like equations are derived via a generalized BBGKY hierarchy. We perturbatively solve these equations to show the consistency between our microscopic model and real data. Our work highlights the potential power of statistical physics in understanding financial market dynamics from their microscopic origins.
References
[1] S. Chapman, T. G. Cowling, The Mathematical Theory of Non-Uniform Gases, (Cambridge University Press, Cambridge, England, 1970).
[2] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 3rd ed. (Elsevier, Amsterdam, 2007).
[3] K. Kanazawa, T. Sueshige, H. Takayasu, M. Takayasu, Phys. Rev. Lett. 120, 138301 (2018).
[4] K. Kanazawa, T. Sueshige, H. Takayasu, M. Takayasu, Phys. Rev. E (in press, arXiv:1802.05993).
One of the key aims in network science is to extract information from the structure of networks. In this talk, I will report on recent work which uses the cycles (closed walks) of a network to probe its structure and provide useful information about what is going on in a particular dataset. I explore methods to count different types of cycles efficiently, and how they relate to a more general algebraic theory of cycles in a network. I will also show how counting simple cycles allows us to evaluate concepts like social balance in a network. I will then explore the concept of centrality more closely and show how it is related to the cycle structure of a network. I will present a new centrality measure for extended parts of a network (i.e. beyond simple vertices) derived from cycle theory, and show how it can be applied to real problems.
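A basic instance of the algebra behind such counts: powers of the adjacency matrix count closed walks, with classical corrections needed to pass from closed walks to simple cycles. The sketch below assumes the networkx package and uses its built-in karate club graph as illustrative data.

    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()
    A = nx.to_numpy_array(G)
    A3 = np.linalg.matrix_power(A, 3)
    # trace(A^k) counts closed walks of length k; each triangle is counted
    # 6 times (3 starting points x 2 orientations)
    print("closed walks of length 3:", int(np.trace(A3)))
    print("triangles:", int(np.trace(A3)) // 6)

For longer cycles the corrections grow quickly, which is precisely why efficient simple-cycle counting, as discussed in the talk, is non-trivial.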
Reproduction is a defining feature of living systems. A fascinating wealth of reproductive modes is observed in nature, from unicellularity to the concerted fragmentation of multi-cellular units. However, the understanding of factors driving the evolution of these life cycles is still limited. Here, I present a model in which groups arise from the division of single cells that do not separate but stay together until the moment of group fragmentation. The model allows for all possible fragmentation modes and calculates the population growth rate of each associated life cycle. This study focuses on fragmentation modes that maximise the growth rate, since these are promoted by natural selection. The knowledge of which life cycles emerge and under which conditions gives us insights into the early stages of the evolution of life on Earth.
This will be a joint seminar of Complex Systems with the Institute of Applied Data Sciences.
Topology, one of the oldest branches of mathematics, provides an expressive and affordable language which is progressively pervading many areas of biology, computer science and physics. In this context, topological data analysis (TDA) tools have emerged as capable of providing insights into high-dimensional, noisy and non-linear datasets coming from very different subjects. Here I will introduce two TDA tools, persistent homology and Mapper, and illustrate the novel insights they are yielding, with particular attention to the study of the functional, structural and genetic connectomes. I will show how topological observables capture and distinguish variations in the mesoscopic functional organization in two case studies: i) between drug-induced altered brain states, and ii) between perceptual states and the corresponding mental images. Moving to the structural level, I will compare the homological features of structural and functional brain networks across a large age span and highlight the presence of dynamically coordinated compensation mechanisms, suggesting that functional topology is conserved over the depleting structural substrate. Finally, using brain gene expression data, I will briefly describe recent work on the construction of a topological genetic skeleton highlighting differences in structure and function of different genetic pathways within the brain.
We all need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. In this seminar I will present some empirical evidence from human experiments carried out in a controlled laboratory setting which focus on the impact of reputation in dynamic networked interactions. People are engaged in playing pairwise repeated Prisoner's Dilemma games with their neighbours, or partners, and they are paid with real money according to their performance during the experiment. We will see whether and how the ability to make or break links in social networks fosters cooperation, paying particular attention to whether information on an individual’s actions is freely available to potential partners. Studying the role of information is relevant as complete knowledge on other people’s actions is often not available for free. We will also focus on the role of individual reputation, an indispensable tool to guide decisions about social and economic interactions with individuals otherwise unknown, and in the way this reputation is obtained in a hierarchical structure. We will show how the presence of reputation can be fundamental for achieving higher levels of cooperation in human societies. These findings point to the importance of ensuring the truthfulness of reputation for a more cooperative and fair society.
Parkinson's disease is a neurodegenerative condition characterised by loss of neurons producing dopamine in the brain. It affects 7 million people worldwide, making it the second most common neurodegenerative disease, and it currently has no cure. The difficulty of developing treatments and therapies lies in the limited understanding of the mechanisms that induce neurodegeneration in the disease. Experimental evidence suggests that the aggregation of alpha synuclein monomers into toxic oligomeric forms can be the cause of dopaminergic cell death and that their detection in cerebrospinal fluid could be a potential biomarker for the disease. In addition, the study of these alpha synuclein aggregates and their aggregation pathways could potentially lead to early diagnosis of the disease. However, the small size of alpha synuclein monomers and the heterogeneity of the oligomers make their detection under conventional bulk approaches extremely challenging, often requiring sample concentrations orders of magnitude higher than clinically relevant. Nanopore sensing techniques offer a powerful platform to perform such analysis, thanks to their ability to read the information of a single molecule at a time while requiring very low sample volume (µl). This project presents a novel nanopore configuration capable of addressing these limitations: two nanopores separated by a 20 nm gap joined together by a zeptolitre nanobridge. The confinement slows molecules translocating through the nanobridge by up to two orders of magnitude compared to standard nanopore configurations, improving significantly the limits of detection. Furthermore, this new nanopore setting is size-adaptable and can be used to detect a variety of analytes.
In this seminar, we will motivate and introduce the concept of network communicability. We will give a few examples of applications of this concept to biological, social, infrastructural and engineering networked systems. Building on this concept, we will show how a Euclidean geometry emerges naturally from the communicability patterns in networked complex systems. This communicability geometry characterises the spatial efficiency of networks. We will show how the communicability function allows a natural characterization of network stability and robustness to external perturbations of the system. We will also show how the communicability shortest paths define routes of highest congestion in cities at rush hour. Finally, we will show that theoretical parameters derived from the communicability function determine the robustness of dynamical processes taking place on the networks, such as diffusion and synchronization. References: Estrada, E., Hatano, N. SIAM Review 58, 2016, 692-715 (Research Spotlight). Estrada, E., Hatano, N., Benzi, M. Physics Reports, 514, 2012, 89-119. Estrada, E., Higham, D.J. SIAM Review, 52, 2010, 696-714.
In this talk I will present a new modeling framework to describe co-existing physical and socio-economic components in interconnected smart grids. The modeling paradigm builds on the theory of evolutionary game dynamics and bio-inspired collective decision making. In particular, for a large population of players we consider a collective decision making process with three possible options: option A or B, or no option. The more popular option is more likely to be chosen by uncommitted players, and cross-inhibitory signals can be sent to attract players committed to a different option. This model originates in the context of honeybee swarms, and we generalise it to accommodate other applications such as duopolistic competition and opinion dynamics, as well as consumers' behavior in the grid. During the talk I will introduce a new game dynamics, called expected gain pairwise comparison dynamics, which explains the ways in which the strategic behaviour of the players may lead to deadlocks or consensus. I will then discuss equilibrium points and stability in the case of symmetric or asymmetric cross-inhibitory signals. I will discuss the extension of the results to the case of a structured environment in which the players are modelled via a complex network with heterogeneous connectivity. Finally, I will illustrate the ways in which such a modeling framework can be extended to energy systems.
Reaction-diffusion processes have been widely used to study dynamical processes in epidemics and ecology in networked metapopulations. In the context of epidemics, reaction processes are understood as contagions within each subpopulation (patch), while diffusion represents the mobility of individuals between patches. Recently, the characteristics of human mobility, such as its recurrent nature, have been proven crucial to understand the phase transition to endemic epidemic states. Here, by developing a framework able to cope with the elementary epidemic processes, the spatial distribution of populations and the commuting mobility patterns, we discover three different critical regimes of the epidemic incidence as a function of these parameters. Interestingly, we reveal a regime of the reaction-diffusion process in which, counter-intuitively, mobility is detrimental to the spread of disease. We analytically determine the precise conditions for the emergence of any of the three possible critical regimes in real and synthetic networks. Joint work with J. Gómez-Gardeñes and D. Soriano-Paños.
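For orientation, the sketch below integrates a minimal mean-field reaction-diffusion SIS model on a network of patches with simple (non-recurrent) random-walk mobility; the recurrent commuting patterns central to the talk are a more elaborate ingredient, and all parameters here are illustrative assumptions (networkx supplies the substrate).

    import numpy as np
    import networkx as nx

    G = nx.barabasi_albert_graph(50, 3, seed=2)
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)
    beta, mu, D, dt = 0.3, 0.2, 0.1, 0.01    # infection, recovery, mobility, step
    rho = np.zeros(len(G)); rho[0] = 0.1     # infected fraction in each patch
    for _ in range(20000):
        reaction = beta * rho * (1 - rho) - mu * rho      # contagion within a patch
        diffusion = D * (A @ (rho / k) - rho)             # random-walk exchange
        rho += dt * (reaction + diffusion)
    print("endemic prevalence:", rho.mean())

Varying D in such a model probes how mobility shifts the epidemic threshold, which is the question the talk answers analytically for the commuting case.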
Biological systems, including cancer, are composed of interactions among individuals carrying different traits. I build stochastic models to capture those interactions and analyse the diversity patterns arising at the population level from these individual interactions. I would like to use this seminar to introduce different topics I work on in mathematical biology, including evolutionary game theoretical models of random mutations (errors introduced during individual reproduction), the application of random mutation models to predator-prey systems, as well as the evolution of resistance in ovarian cancer.
Cancers have been shown to be genetically diverse populations of cells. This diversity can affect treatment and the prognosis of patients. Furthermore, the composition of the population may change over time; it is therefore instructive to think of cancers as diverse, dynamic populations of cells subject to the rules of evolution. Population genetics, a quantitative description of the rules of evolution in terms of mutation, selection (outgrowth of fitter sub-populations) and drift (stochastic effects), can be adapted and applied to the study of cancer as an evolutionary system. Using this mathematical description together with genomic sequencing data and Bayesian inference, we measure evolutionary dynamics in human cancers on a patient-by-patient basis from data at single time points. This allows us to infer interesting properties that govern the evolution of cancers, including the mutation rate and the fitness advantage of sub-populations, and to distinguish diversity generated by neutral (stochastic) processes from diversity due to natural selection (outgrowth of fitter subpopulations).
Random matrices play a crucial role in various fields of mathematics and physics. In particular in the field of quantum chaos Hermitian random matrix ensembles represent universality classes for spectral features of Hamiltonians with classically chaotic counterparts. In recent years the study of non-Hermitian but PT-symmetric quantum systems has attracted a lot of attention. These are non-Hermitian systems that have an anti-unitary symmetry, which is often interpreted as a balance of loss and gain in a system. In this talk the question of whether and how the standard ensembles of Hermitian quantum mechanics can be modified to yield PT-symmetric counterparts is addressed. In particular it is argued that using split-complex and split-quaternionic numbers two new PT-symmetric random matrix ensembles can be constructed. These matrices have either real or complex conjugate eigenvalues, the statistical features of which are analysed for 2 × 2 matrices.
Large dynamical fluctuations - atypical realizations of the dynamics sustained over long periods of time - can play a fundamental role in determining the properties of collective behavior of both classical and quantum non-equilibrium systems. Rare dynamical fluctuations, however, occur with a probability that often decays exponentially in their time extent, making them difficult to observe directly and exploit in experiments. In this talk I will explain, using methods from dynamical large deviations, how the rare dynamics of a given (Markovian) open quantum system can always be obtained from the typical realizations of an alternative (also Markovian) system. The correspondence between these two sets of realizations can be used to engineer and control open quantum systems with a desired statistics “on demand”. I will illustrate these ideas by studying the photon-emission behaviour of a three-qubit system which displays a sharp dynamical crossover between active and inactive dynamical phases.
Gibbs measures are a useful class of invariant measures for hyperbolic systems, of which the best known is the natural Sinai-Ruelle-Bowen measure. It is a standard fact that the volume measure on a small piece of unstable manifold can be pushed forward under the map (or flow) and in the limit converges to the Sinai-Ruelle-Bowen measure. Pesin asked the question: How can this construction be adapted to give other Gibbs measures? In this talk we will describe one solution.
The hydrodynamic approximation is an extremely powerful tool to describe the behavior of many-body systems such as gases. At the Euler scale (that is, when variations of densities and currents occur only on large space-time scales), the approximation is based on the idea of local thermodynamic equilibrium: locally, within fluid cells, the system is in a Galilean or relativistic boost of a Gibbs equilibrium state. This is expected to arise in conventional gases thanks to ergodicity and Gibbs thermalization, which in the quantum case is embodied by the eigenstate thermalization hypothesis. However, integrable systems are well known not to thermalize in the standard fashion. The presence of infinitely many conservation laws precludes Gibbs thermalization, and instead generalized Gibbs ensembles emerge. In this talk I will introduce the associated theory of generalized hydrodynamics (GHD), which applies the hydrodynamic ideas to systems with infinitely many conservation laws. It describes the dynamics from inhomogeneous states and in inhomogeneous force fields, and is valid both for quantum systems such as experimentally realized one-dimensional interacting Bose gases and quantum Heisenberg chains, and classical ones such as soliton gases and classical field theory. I will give an overview of what GHD is, how its main equations are derived and its relation to quantum and classical integrable systems. If time permits I will touch on the geometry that lies at its core, how it reproduces the effects seen in the famous quantum Newton's cradle experiment, and how it leads to exact results in transport problems such as Drude weights and non-equilibrium currents. This is based on various collaborations with Alvise Bastianello, Olalla Castro Alvaredo, Jean-Sébastien Caux, Jérôme Dubail, Robert Konik, Herbert Spohn, Gerard Watts and my student Takato Yoshimura, and strongly inspired by previous collaborations with Denis Bernard, M. Joe Bhaseen, Andrew Lucas and Koenraad Schalm.
In [1] Émile Le Page established the Hölder continuity of the top Lyapunov exponent for irreducible random linear cocycles with a gap between their first and second Lyapunov exponents. An example of B. Halperin (see Appendix 3 in [2]) suggests that in general, uniformly hyperbolic cocycles apart, this is the best regularity one can hope for. We will survey recent results and limitations on the regularity of the Lyapunov exponents for random GL(2)-cocycles.
[1] Émile Le Page, Régularité du plus grand exposant caractéristique des produits de matrices aléatoires indépendantes et applications. Ann. Inst. H. Poincaré Probab. Statist. 25 (1989), no. 2, 109-142.
[2] Barry Simon and Michael Taylor, Harmonic analysis on SL(2,R) and smoothness of the density of states in the one-dimensional Anderson model. Comm. Math. Phys. 101 (1985), no. 1, 1-19.
Given a compact surface, we consider the set of area-preserving flows with isolated fixed points. The study of these flows dates back to Novikov in the 80s and since then many properties have been investigated. Starting from an overview of the known results, we show that typical such flows admitting several minimal components are mixing when restricted to each minimal component and we provide an estimate on the decay of correlations for smooth observables.
We investigate the impact of noise on a simple paradigmatic two-dimensional piecewise-smooth dynamical system. For that purpose, we consider the motion of a particle subjected to dry friction and coloured noise. The finite correlation time of the noise provides an additional dimension in phase space, causes a nontrivial probability current, and establishes a proper nonequilibrium regime. Furthermore, the setup allows for the study of stick-slip phenomena, which show up as a singular component in the stationary probability density. Analytic insight can be provided by application of the unified coloured noise approximation developed by P. Jung and P. Hänggi. The analysis of probability currents and of power spectral densities underpins the observed stick-slip transition, which is related to a critical value of the noise correlation time.
This is part of a series of collaborative meetings between Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, St Andrews, Surrey and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
1:00pm - 2:00pm: Dmitry Dolgopyat (Maryland), joint with the QMUL Probability and Applications Seminar: Local Limit Theorem for Nonstationary Markov chains
2:30pm - 3:30pm: Dalia Terhesiu (Exeter): The Pressure Function for Infinite Equilibrium Measures
4:00pm - 5:00pm: Sebastian van Strien (Imperial College): Heterogeneously Coupled Maps. Coherent behaviour and reconstructing network from data
For more information, visit the website:http://www.maths.qmul.ac.uk/~ob/oneday_meeting/oneday17/onedaydynamics_q...
The topology of any complex system is key to understanding its structure and function. Fundamentally, algebraic topology guarantees that any system represented by a network can be understood through its closed paths. The length of each path provides a notion of scale, which is vitally important in characterizing dominant modes of system behavior. Here, by combining topology with scale, we prove the existence of universal features which reveal the dominant scales of any network. We use these features to compare several canonical network types in the context of a social media discussion which evolves through the sharing of rumors, leaks and other news. Our analysis enables for the first time a universal understanding of the balance between loops and tree-like structure across network scales, and an assessment of how this balance interacts with the spreading of information online. Crucially, our results allow networks to be quantified and compared in a purely model-free way that is theoretically sound, fully automated, and inherently scalable. This work is joint with Pierre-Andre Maugis and Patrick Wolfe.
Is there a fundamental minimum to the thermodynamic cost of precision in non-equilibrium processes? Here, we investigate this question, which has recently triggered notable research efforts [1,2], for ballistic transport in a multi-terminal geometry. For classical systems, we derive a universal trade-off relation between total dissipation and the precision, at which particles are extracted from individual reservoirs [3]. Remarkably, this bound becomes significantly weaker in presence of a magnetic field breaking time-reversal symmetry. By working out an explicit model for chiral transport enforced by a strong magnetic field, we show that our bounds are tight. Beyond the classical regime, we find that, in quantum systems far from equilibrium, correlated exchange of particles makes it possible to exponentially reduce the thermodynamic cost of precision [3]. Uniting aspects from statistical and mesoscopic physics, our work paves the way for the design of precise and efficient transport devices.
[1] A. C Barato, U. Seifert; Phys. Rev. Lett. 114, 158101 (2015).
[2] T. R. Gingrich, J. M. Horowitz, N. Perunov, J. L. England; Phys. Rev. Lett. 116, 120601 (2016).
[3] K. Brandner, T. Hanazato, K. Saito; arXiv:1710.04928 (2017).
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has highlighted the gap between success and intrinsic quality. As a result, high quality content that receives low attention remains invisible and relegated to the long tail of the popularity distribution. Moreover, the production and consumption of content is influenced by the underlying social network connecting users by means of friendship or follower-followee relations. This talk will present a large scale study on the complex intertwinement between quality, popularity and social ties in an online photo sharing platform, proposing a methodology to democratize exposure and foster long term users engagement.
The motion of a tracer particle in a complex medium typically exhibits anomalous diffusive patterns, characterised, e.g., by a non-linear mean-squared displacement and/or non-Gaussian statistics. Modelling such fluctuating dynamics is in general a challenging task, which nevertheless provides a fundamental tool to probe the rheological properties of the environment. A prominent example is the dynamics of a tracer in a suspension of swimming microorganisms, like bacteria, which is driven by the hydrodynamic fields generated by the active swimmers. For dilute systems, several experiments confirmed the existence of non-Gaussian fat tails in the displacement distribution of the probe particle, which has recently been shown to fit well a truncated Lévy distribution. This result was obtained by applying an argument first proposed by Holtsmark in the context of gravitation: the force acting on the tracer is the superposition of the hydrodynamic fields of spatially randomly distributed swimmers. This theory, however, does not clarify the stochastic dynamics of the tracer, nor does it predict the non-monotonic behaviour of the non-Gaussian parameter of the displacement distribution. Here we derive the Langevin description of the stochastic motion of the tracer from microscopic dynamics using tools from kinetic theory. The random driving force in the equation of motion is a coloured Lévy Poisson process, which induces power-law distributed position displacements. This theory predicts a novel transition of their characteristic exponents at different timescales. For short ones, the Holtsmark-type scaling exponent is recovered; for intermediate ones, it is larger. Consistently with previous works, for even longer ones the truncation appears and the distribution converges to a Gaussian. Our approach allows us to employ well-established functional methods to characterize the displacement statistics and correlations of the tracer. In particular, it qualitatively reproduces the non-monotonic behaviour of the non-Gaussian parameter measured in recent experiments.
Let f be a smooth volume preserving diffeomorphism of a compact manifold and φ a known smooth function of zero integral with respect to the volume. The linear cohomological equation over f is

ψ ∘ f − ψ = φ,

where the solution ψ is required to be smooth.
Diffeomorphisms f for which a smooth solution ψ exists for every such smooth function φ are called Cohomologically Rigid. Herman and Katok have conjectured that the only such examples up to conjugation are Diophantine rotations in tori.
We study the relation between the solvability of this equation and the fast approximation method of Anosov-Katok and prove that fast approximation cannot construct counter-examples to the conjecture.
The study of complex human systems has become more important than ever, as the risks facing human societies from human and social factors are clearly increasing. However, disciplines such as psychology and sociology haven't made significant scientific progress and remain immersed in theoretical approaches and empirical methodologies developed more than 100 years ago. In this talk, I would like to point to the promise of applying ideas from complex systems and developing new computational tools for big data reservoirs in order to address the above-mentioned challenge. I will provide several case studies illustrating the benefits of the proposed approach and several open challenges that need to be addressed.
References (for illustration)
1. Neuman, Y. (2014). Introduction to computational cultural psychology. Cambridge University Press.
2. Neuman, Y., & Cohen, Y. (2014). A vectorial semantics approach to personality assessment. Scientific Reports, 4.
3. Neuman, Y., Assaf, D., Cohen, Y., & Knoll, J. L. (2015). Profiling school shooters: automatic text-based analysis. Frontiers in Psychiatry.
A polynomial-like mapping is a proper holomorphic map f : U′ → U, where U′, U ≈ D, and U′ ⊂⊂ U. This definition captures the behaviour of a polynomial in a neighbourhood of its filled Julia set. A polynomial-like map of degree d is determined up to holomorphic conjugacy by its internal and external classes, that is, the (conjugacy classes of) its restrictions to the filled Julia set and its complement. In particular, the external class is a degree d real-analytic orientation preserving and strictly expanding self-covering of the unit circle: the expansivity of such a circle map implies that all the periodic points are repelling, and in particular not parabolic. We extended the polynomial-like theory to a class of parabolic mappings which we called parabolic-like mappings. In this talk we present the parabolic-like mapping theory, and its uses in the family of degree 2 holomorphic correspondences in which matings between the quadratic family and the modular group lie.
Characterizing how we explore abstract spaces is key to understanding our (ir)rational behaviour and decision making. While some light has been shed on the navigation of semantic networks, little is known about the mental exploration of metric spaces, such as the one-dimensional line of numbers, prices, etc. Here we address this issue by investigating the behaviour of users exploring the “bid space” in online auctions. We find that they systematically perform Lévy flights, i.e., random walks whose step lengths follow a power-law distribution. Interestingly, this is the best strategy that can be adopted by a random searcher looking for a target in an unknown environment, and has been observed in the foraging patterns of many species. In the case of online auctions, we measure the power-law scaling over several decades, providing the neatest observation of Lévy flights reported so far. We also show that the histogram describing single individual exponents is well peaked, pointing out the existence of an almost universal behaviour. Furthermore, a simple model reveals that the observed exponents are nearly optimal, and represent a Nash equilibrium. We rationalize these findings through a simple evolutionary process, showing that the observed behaviour is robust against invasion of alternative strategies. Our results show that humans share with other animals universal patterns in general searching processes, and raise fundamental issues in cognitive, behavioural and evolutionary sciences.
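For concreteness, power-law step lengths of the kind measured here can be generated by inverse-transform sampling and the exponent recovered with the standard maximum-likelihood (Hill-type) estimator; the exponent and sample size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    mu, xmin, n = 2.3, 1.0, 100_000
    # p(x) ~ x^(-mu) for x >= xmin, via inversion of the cumulative distribution
    steps = xmin * rng.uniform(size=n) ** (-1.0 / (mu - 1.0))

    mu_hat = 1.0 + n / np.log(steps / xmin).sum()   # maximum-likelihood estimate
    print("true mu:", mu, "estimated:", mu_hat)

Fitting empirical bid increments in this way, decade by decade, is what makes the power-law scaling reported in the talk quantitative.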
Using inducing schemes (generalised first return maps) to obtain uniform expansion is a standard tool for (smooth) interval maps, in order to prove, among other things, the existence of invariant measures, their mixing rates and stochastic laws. In this talk I would like to present joint work with Mike Todd (St Andrews) on how this can be applied to maps on the brink of being dissipative. We discuss a family fλ of Fibonacci maps for which Lebesgue-a.e. point is recurrent or transient depending on the parameter λ. The main tool is a specific induced Markov map Fλ with countably many branches whose lengths converge to zero. Avoiding the difficulties of distortion control by starting with a countably piecewise linear unimodal map, we can identify the transition from conservative to dissipative exactly, and also describe in great detail the impact of this transition on the thermodynamic formalism of the system (existence and uniqueness of equilibrium states, (non)analyticity of the pressure function and phase transitions).
1:00pm - 2:00pm: Dmitry Dolgopyat (Maryland), joint with the QMUL Probability and Applications Seminar
2:30pm - 3:30pm: Dalia Terhesiu (Exeter)
4:00pm - 5:00pm: Tuomas Sahlsten (Manchester)
The Paris conference 2015 set a path to limit climate change to "well below 2°C". To reach this goal, integrating renewable energy sources into the electrical power grid is essential, but it poses an enormous challenge to the existing system and demands new conceptual approaches. In this talk, I outline some pressing challenges to the power grid, highlighting how methods from mathematics and physics can support the energy transition. In particular, I present our latest research on power grid fluctuations and how they threaten robust grid operation. For our analysis, we collected frequency recordings from power grids in North America, Europe and Japan, observing significant deviations from Gaussianity. We develop a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach. Overall, we identify energy trading as a significant contribution to today's frequency fluctuations, and the effective damping of the grid as a controlling factor for reducing fluctuation risks.
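As a toy illustration of how deviations from Gaussianity in such recordings can be quantified, the following sketch computes the excess kurtosis of frequency increments; the data here are a synthetic Gaussian placeholder, since the actual measurement campaign's recordings and methodology are not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis

# Placeholder trace standing in for a measured grid-frequency recording (Hz),
# one sample per second for a day.
f = np.random.default_rng(1).normal(50.0, 0.02, 86_400)

df = np.diff(f)                      # frequency increments
excess = kurtosis(df, fisher=True)   # 0 for Gaussian data; > 0 indicates heavy tails
print(f"excess kurtosis of increments: {excess:.3f}")
```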
We will present in this talk a 1-parameter family of affine interval exchange transformations (AIET) which displays various dynamical behaviours. We will see that a fruitful viewpoint to study such a family is to associate to it what we call a dilation surface, which should be thought of as the analogue of a translation surface in this setting.
The study of this example motivates several conjectures on the dynamics of AIETs, which we will try to present.
The function of many real-world systems that consist of interacting oscillatory units depends on their collective dynamics, such as synchronization. The Kuramoto model, which has been widely used to study collective dynamics in oscillator networks, assumes that the interaction between oscillators is determined by the sine of the differences between pairs of oscillator phases. We show that more general interactions between identical phase oscillators allow for collective effects ranging from chaotic fluctuations to localized frequency synchrony patterns.
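For reference, the classical Kuramoto model that the talk generalizes reads

$\dot{\theta}_k = \omega_k + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_k), \qquad k = 1, \dots, N,$

where $\theta_k$ are the oscillator phases, $\omega_k$ the natural frequencies and $K$ the coupling strength; the more general interactions discussed in the talk replace the sine by other $2\pi$-periodic coupling functions.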
Kuramoto-Sakaguchi-type models are probably the simplest and most generic approach to investigating phase-coupled oscillators. Particular partially synchronised solutions, so-called chimera states, have recently received a great deal of attention. Dynamical behaviour of this type will be discussed in the context of time-delay dynamics caused by a finite propagation speed of signals.
For a family of rational maps, results by Lyubich, Mañé-Sad-Sullivan and DeMarco provide a fairly complete understanding of dynamical stability. I will review this one-dimensional theory and present a recent generalisation to several complex variables. I will focus on the arguments that do not readily generalise to this setting, and introduce the tools and ideas that allow one to overcome these problems.
The nodal surplus of the $n$-th eigenfunction of a graph is defined as the number of its zeros minus $(n-1)$. When the graph is composed of two or more blocks separated by bridges, we propose a way to define a "local nodal surplus" of a given block. Since the eigenfunction index $n$ has no local meaning, the local nodal surplus has to be defined in an indirect way via the nodal-magnetic theorem of Berkolaiko and Weyand.
We will discuss the properties of the local nodal surplus and their consequences. In particular, it also has a dynamical interpretation as the number of zeros created inside the block (as opposed to those that entered it from outside), and its symmetry properties allow us to prove the long-standing conjecture that the nodal surplus distribution for graphs with $\beta$ disjoint loops is binomial with parameters $(\beta, 1/2)$. The talk is based on work in progress with Lior Alon and Ram Band.
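In formula form: writing $\sigma$ for the nodal surplus of a graph with $\beta$ disjoint loops, the statement is that $P(\sigma = k) = \binom{\beta}{k}\, 2^{-\beta}$ for $k = 0, 1, \dots, \beta$.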
One often aims to describe the collective behaviour of an infinite number of particles by the differential equation governing the evolution of their density. The theory of hydrodynamic limits addresses this problem. In this talk, the focus will be on linking the particles with the geometry of the macroscopic evolution. Zero-range processes will be used as a guiding example. The geometry of the associated hydrodynamic limit, a nonlinear diffusion equation, will be derived. Large deviations serve as a scale-bridging tool to describe the many-particle dynamics by partial differential equations (PDEs), revealing the geometry as well. Finally, time permitting, we will discuss the near-minimum structure, studying the fluctuations around the minimum state described by the deterministic PDE.
Graphs can encode information from datasets that have a natural representation in terms of a network (for example datasets describing collaborations or social relations among individuals), as well as from data that can be mapped into graphs due to their intrinsic correlations, such as time series or images. Characterising the structure of complex networks at the micro- and mesoscale can thus be of fundamental importance for extracting relevant information from our data. We will present some algorithms useful for characterising the structure of particular classes of networks:
i) multiplex networks, describing systems where interactions of different nature are involved,
and ii) visibility graphs, that can be extracted from time series.
We start by giving a short introduction about quasiperiodically forced interval maps. To distinguish smooth and non-smooth saddle-node bifurcations by means of a topological invariant, we introduce two new notions in the low-complexity regime, namely, asymptotic separation numbers and amorphic complexity. We present recent results with respect to these two novel concepts for additive and multiplicative forcing. This is joint work with G. Fuhrmann and T. Jäger.
Internal gravity waves play a primary role in geophysical fluids: they contribute significantly to mixing in the ocean and they redistribute energy and momentum in the middle atmosphere. Until recently, most of the studies were focused on plane-wave solutions. However, these solutions are not a satisfactory description of most geophysical manifestations of internal gravity waves, and it is now recognized that internal wave beams with a locally confined profile are ubiquitous in the geophysical context.
We will discuss the reason for their ubiquity in stratified fluids: they are solutions of the nonlinear governing equations. Moreover, in the light of recent experimental and analytical studies of these internal gravity beams, it is timely to discuss their two main mechanisms of instability: the triadic resonant instability and the streaming instability.
In a seminal paper Ruelle showed that the long time asymptotic behaviour of analytic hyperbolic systems can be understood in terms of the eigenvalues, also known as Pollicott-Ruelle resonances, of the so-called Ruelle transfer operator, a compact operator acting on a suitable Banach space of holomorphic functions.
Until recently, there were no examples of Ruelle transfer operators arising from analytic hyperbolic circle or toral maps, with non-trivial spectra, that is, spectra different from {0,1}.
In this talk I will survey recent work with Wolfram Just and Julia Slipantschuk on how to construct analytic expanding circle maps or analytic Anosov diffeomorphisms on the torus with explicitly computable non-trivial Pollicott-Ruelle resonances. I will also discuss applications of these results.
Epidemic processes on temporally varying networks are complicated by the complexity of both the network structure and the temporal dimension. It is still under debate what factors make some temporal networks promote infection at a population level whereas other temporal networks suppress it. We develop a theory to understand the susceptible-infected-susceptible epidemic model on arbitrary temporal networks, where each contact is used for a finite duration. We show that, under certain conditions, temporality of networks lessens the epidemic threshold, such that infections persist more easily in temporal networks than in their static counterparts. We further show that the Lie commutator bracket of the adjacency matrices at different times (more precisely, the commutator's norm) is a useful index to assess the impact of temporal networks on the epidemic threshold value.
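The commutator index mentioned above is simple to compute. Below is a minimal sketch, with random Erdos-Renyi snapshots standing in for real temporal-network data.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_adjacency(n, p):
    """Symmetric 0/1 adjacency matrix of an Erdos-Renyi snapshot G(n, p)."""
    upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return upper + upper.T

def commutator_norm(A1, A2):
    """Frobenius norm of the commutator [A1, A2] = A1 A2 - A2 A1;
    it vanishes exactly when the two snapshots commute."""
    return np.linalg.norm(A1 @ A2 - A2 @ A1)

A_t1, A_t2 = random_adjacency(50, 0.1), random_adjacency(50, 0.1)
print(commutator_norm(A_t1, A_t2))
```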
One reason for the success of one-particle quantum graph models is that their spectra are determined by secular equations involving finite-dimensional determinants. In general, one cannot expect this to extend to interacting many-particle models. In this talk I will introduce some specific two-particle quantum graph models with interactions that allow one to express eigenfunctions in terms of a Bethe ansatz. From this a secular equation will be determined, and eigenvalues can be calculated numerically. The talk is based on joint work with George Garforth.
Topology is one of the oldest and most relevant branches of mathematics, and it has provided an expressive and affordable language which is progressively pervading many areas of mathematics, computer science and physics. Using examples taken from work on drug-altered brain functional networks, I will illustrate the type of novel insights that algebraic topological tools are providing in the context of neuroimaging.
I will then show how the comparison of homological features of structural and functional brain networks across a large age span highlights the presence of a globally conserved topological skeleton and of a compensation mechanism modulating the localization of functional homological features. Finally, with an eye to altered cognitive control in disease and early ageing, I will introduce preliminary theoretical results on the modelling of multitasking capacities from a statistical mechanical perspective, and show that even a small overlap between tasks strongly limits overall parallel capacity, to a degree that substantially outpaces gains from increasing network size.
Networks form the substrate of a wide variety of complex systems, ranging from food webs, gene regulation, social networks and transportation to the internet. Because of this, general network abstractions allow for the characterization of these different systems under a unified mathematical framework. However, due to the sheer size and complexity of many of these systems, it remains an open challenge to formulate general descriptions of their structures, and to extract such information from data. In this talk, I will describe a principled approach to this task, based on the elaboration of probabilistic generative models, and their statistical inference from data. In particular, I will present a general class of generative models that describe the multilevel modular structure of network systems, as well as efficient algorithms to infer their parameters. I will highlight the common pitfalls present in more heuristic methods of capturing this type of structure, and demonstrate the efficacy of more principled methods based on Bayesian statistics.
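As a concrete illustration of the generative viewpoint, here is a minimal sketch of sampling from the simplest model of this family, the stochastic block model; the block sizes and edge probabilities are arbitrary illustrative values, and the talk's multilevel models and inference algorithms are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_sbm(sizes, P):
    """Sample an undirected stochastic block model.
    sizes: block sizes; P: symmetric matrix of block-to-block edge probabilities."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    probs = P[np.ix_(labels, labels)]                    # per-pair edge probability
    upper = np.triu((rng.random(probs.shape) < probs).astype(int), 1)
    return upper + upper.T, labels

# Two assortative blocks: dense within, sparse between.
A, labels = sample_sbm([50, 50], np.array([[0.20, 0.01], [0.01, 0.20]]))
```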
Two-dimensional lattice paths and polygons, such as self-avoiding walks and polygons and subclasses thereof, are often used as models for biological vesicles and cell membranes. When tuning the pressure acting on the wall of the vesicle or the strength of the interactions between different parts of the chain, one often observes a phase transition from a deflated or crumpled state to an inflated or globule-like state. For models including self-avoiding polygons, staircase polygons, Dyck and Schröder paths, and Bernoulli meanders and bridges, the phase transition between the different regimes is (conjectured to be) characterised by two critical exponents and a one-variable scaling function involving the Airy function. John Cardy conjectured that by turning on further interactions, one should be able to generate multicritical points of higher order, described by multivariate scaling functions involving generalised Airy integrals.
The field of random matrix theory (RMT) was born out of experimental observations of the scattering amplitudes of large atomic nuclei in the late 1950s. It led Wigner, Dyson and others to develop a theory comprising three standard random matrix ensembles, termed the Gaussian Orthogonal, Unitary and Symplectic Ensembles, which predicted the distribution of such resonances in various situations. Until recently it was the standard consensus that observing the third type of statistics (the GSE) required a quantum spin; however, together with S. Mueller and M. Sieber, we proposed a quantum graph that would have such statistics without the spin requirement. Recently, this quantum graph has been realised in a laboratory setting, leading to the first experimental observation of GSE statistics, some 60 years after the conception of RMT. I will present the mathematical framework behind the construction of this system and the ideas which led to its conception.
The cell cytoskeleton can be successfully modelled as an 'active gel': a gel that is driven out of equilibrium by the consumption of biochemical energy. In particular, myosin molecular motors exert forces on actin filaments, resulting in contraction. Theoretical studies of active matter over the past two decades have shown it to have rich dynamics and behaviour. Here I will discuss finite droplets of active matter in which interactions with the boundaries play an important role. Displacement of the whole droplet is generated by flows of the contractile active gel inside. I will show how this depends on the average direction of the cytoskeleton filaments and on the boundary conditions at the edge of the model cell, which are set by interactions with the external environment. I will consider the shape deformation and movement of such droplets. Inspired by applications to cell movement and deformation, I will discuss the behaviour of a layer of active gel surrounding a passive solid object as a model for the cell nucleus.
This is part of a series of collaborative meetings between Bath, Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, St Andrews, Surrey and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
For speakers, abstracts and schedule, see the meeting web page.
Biological invasion can be generically defined as the uncontrolled spread and proliferation of species to areas outside of their native range (hence called alien), usually following their unintentional introduction by humans. A conventional view of alien species' spatial spread is that it occurs via the propagation of a travelling population front. In a realistic 2D system, such a front normally separates the invaded area behind the front from the uninvaded areas ahead of it. I will show that there is an alternative scenario, called "patchy invasion", where the spread takes place via the spatial dynamics of separate patches of high population density with a very low density between them, and a continuous population front does not exist at any time. Patchy invasion has been studied theoretically in much detail using diffusion-reaction models, e.g. see Chapter 12 in [1]. However, diffusion-reaction models have many limitations; in particular, they almost completely ignore so-called long-distance dispersal (usually associated with stochastic processes known as Lévy flights). Correspondingly, I will then present some recent results showing that patchy invasion can occur as well when long-distance dispersal is taken into account [2]. In this case, the system is described by integro-difference equations with fat-tailed dispersal kernels. I will also show that apparently minor details of kernel parametrization may have a relatively strong effect on the rate of species spread.
[1] Malchow H, Petrovskii SV, Venturino E (2008) Spatiotemporal Patterns in Ecology and Epidemiology: Theory, Models, Simulations. Chapman & Hall / CRC Press, 443p.
[2] Rodrigues LAD, Mistro DC, Cara ER, Petrovskaya N, Petrovskii SV (2015) Bull. Math. Biol. 77, 1583-1619.
Networks, virtually in any domain, are dynamical entities. Think for example about social networks: new nodes join the system, others leave it, and the links describing their interactions are constantly changing. However, due to the absence of time-resolved data and to mathematical challenges, the large majority of research in the field neglects these features in favor of static representations. While such an approximation is useful and appropriate in some systems and processes, it fails in many others. Indeed, in the case of sexually transmitted diseases, ideas, and meme spreading, the co-occurrence, duration and order of contacts are crucial ingredients. During my talk, I will present a novel mathematical framework for the modeling of highly time-varying networks and of processes evolving on their fabric. In particular, I will focus on epidemic spreading, random walks, and social contagion processes on temporal networks.
Time-dependency adds an extra dimension to network science computations, potentially causing a dramatic increase in both storage requirements and computation time. In the case of Katz-style centrality measures, which are based on the solution of linear algebraic systems, allowing for the arrow of time leads naturally to full matrices that keep track of all possible routes for the flow of information. Such a build-up of intermediate data can make large-scale computations infeasible. In this talk, we describe a sparsification technique that delivers accurate approximations to the full-matrix centrality rankings, while retaining the level of sparsity present in the network time-slices. With the new algorithm, as we move forward in time the storage cost remains fixed and the computational cost scales linearly, so the overall task is equivalent to solving a single Katz-style problem at each new time point.
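For context, a static Katz-style centrality amounts to a single linear solve, as in the minimal sketch below; broadly speaking, the temporal versions discussed in the talk chain such resolvents across time-slices, which is what produces the fill-in that the sparsification technique controls. The damping parameter alpha is an illustrative choice.

```python
import numpy as np

def katz_centrality(A, alpha=0.05):
    """Katz centrality of a static network: solve (I - alpha A) x = 1.
    Requires alpha < 1 / rho(A), the reciprocal of the spectral radius of A."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
```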
Recently, there has been a surge of interest in an old result discussed by Mainardi et al. [1] that relates pseudo-differential relaxation equations and semi-Markov processes. Meerschaert and Toaldo presented a rigorous theory [2] and I recently applied these ideas to semi-Markov graph dynamics [3]. In this talk, I will present several examples and argue that further work is needed to study the solutions of pseudo-differential relaxation equations and their properties.
References
[1] Mainardi, Francesco, Raberto, Marco, Gorenflo, Rudolf and Scalas, Enrico (2000) Fractional calculus and continuous-time finance II: the waiting-time distribution. Physica A: Statistical Mechanics and its Applications, 287 (3-4), pp. 468-481.
[2] Meerschaert, Mark M. and Toaldo, Bruno (2015) Relaxation patterns and semi-Markov dynamics. arXiv:1506.02951 [math.PR].
[3] Raberto, Marco, Rapallo, Fabio and Scalas, Enrico (2011) Semi-Markov graph dynamics. PLoS ONE, 6 (8), e23370. Georgiou, Nicos, Kiss, Istvan and Scalas, Enrico (2015) Solvable non-Markovian dynamic network. Physical Review E, 92 (4), 042801.
Life originated as single-celled organisms, and multicellularity arose multiple times across evolutionary history. Increasingly complex cellular arrangements were selected for, conferring an adaptive advantage on organisms. Uncovering the properties of these synergistic cellular configurations is central to identifying these optimized organizational principles and to establishing structure-function relationships. We have developed methods to capture all cellular associations within plant organs using a combination of high-resolution 3D microscopy and computational image analysis. These multicellular organs are abstracted into cellular connectivity networks and analysed using a complex systems approach. This discretization of cellular organization enables the topological properties of global 3D cellular complexity in organs to be examined for the first time. We find that the organizing properties of global cellular interactions are tightly conserved both within and across species in diverse plant organs. Seemingly stochastic gene expression patterns can also be predicted based on the context of cells within organs. Finally, evidence for optimization in cellular configurations and transport processes has emerged as a result of natural selection. This provides a framework and insight to investigate the structure-function relationship at the level of cell organization within complex multicellular organs.
Self-propelled particles are able to extract energy from their environment to perform a directed motion. Such dynamics lead to a rich phenomenology that cannot be accounted for by equilibrium physics arguments. A striking example is the possibility for repulsive particles to undergo a phase separation, as reported in both experimental and numerical realizations. For a specific model of self-propulsion, we explore how far from equilibrium the dynamics operates. We quantify the breakdown of time reversal symmetry, and we delineate a bona fide effective equilibrium regime. Our insight into this regime is based on the analysis of fluctuations and response of the particles. Finally, we discuss how the nonequilibrium properties of the dynamics can also be captured at a coarse-grained level, thus allowing a detailed examination of the spatial structure that underlies departures from equilibrium.
I will discuss defining networks from observations of tree species, showing how to quantify co-associations between multiple and inhomogeneous point-process patterns, and how to identify communities, or groups, in such observations. The work is motivated by the distribution of tree and shrub species from a 50 ha forest plot on Barro Colorado Island. We show that our method can be used to construct biologically meaningful subcommunities that are linked to the spatial structure of the plant community.
This is joint work with David Murrell and Anton Flugge.
We study the spectra of random geometric graphs using random matrix theory. We look at short-range correlations in the level spacings via the nearest-neighbour spacing distribution, and at long-range correlations via the spectral rigidity. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure, and the degree of randomness within the networks. We find that the spectral statistics of random geometric graphs fit the universality of random matrix theory found in other random graph models.
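A sketch of one such spectral diagnostic is given below; it uses consecutive-spacing ratios, a standard statistic that avoids spectral unfolding, rather than necessarily the exact measures used in this work, and the graph parameters are illustrative.

```python
import numpy as np
import networkx as nx

G = nx.random_geometric_graph(500, radius=0.1, seed=4)
ev = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))

s = np.diff(ev)                      # consecutive level spacings
s = s[s > 1e-10]                     # drop (near-)degenerate spacings
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
print(np.mean(r))  # roughly 0.536 for GOE statistics, 0.386 for Poisson
```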
In 1972 Robert May argued that (generic) complex systems become unstable to small displacements from equilibria as the system complexity increases. In search of a global signature of this instability transition, we consider a class of nonlinear dynamical systems whereby N degrees of freedom are coupled via a smooth homogeneous Gaussian vector field. Our analysis shows that with the increase in complexity, as measured by the number of degrees of freedom and the strength of interactions relative to the relaxation strength, such systems undergo an abrupt change from a simple set of equilibria (a single stable equilibrium for large N) to a complex set of equilibria. Typically, none of these equilibria are stable, and their number grows exponentially with N. This suggests that the loss of stability manifests itself on the global scale in an exponential explosion of the number of equilibria. [My talk is based on a joint paper with Yan Fyodorov and on unpublished work with Gerard Ben Arous and Yan Fyodorov.]
The title of my talk was the topic of an Advanced Study Group for which I was convenor last year [1]. In my talk I will give a brief outline of our respective research activities; it should be understandable to a rather general audience. A question that has attracted a lot of attention in the past two decades is whether biologically relevant search strategies can be identified by statistical data analysis and mathematical modeling. A famous paradigm in this field is the Lévy flight hypothesis. It states that under certain mathematical conditions Lévy dynamics, which defines a key concept in the theory of anomalous stochastic processes, leads to an optimal search strategy for foraging organisms. This hypothesis is discussed very controversially in the current literature [2]. After briefly introducing the stochastic processes of Lévy flights and Lévy walks, I will review examples and counterexamples of experimental data and their analyses confirming and refuting the Lévy flight hypothesis. This debate motivated our own work on deriving a fractional diffusion equation for an n-dimensional correlated Lévy walk [3], studying the search reliability and search efficiency of combined Lévy-Brownian motion [4], and investigating stochastic first passage and first arrival problems [5].
[1] www.mpipks-dresden.mpg.de/~asg_2015
[2] R. Klages, Search for food of birds, fish and insects, invited book chapter in: A. Bunde, J. Caro, J. Kaerger, G. Vogl (Eds.), Diffusive Spreading in Nature, Technology and Society (Springer, Berlin, 2017).
[3] J. P. Taylor-King, R. Klages, S. Fedotov, R. A. Van Gorder, Phys. Rev. E 94, 012104 (2016).
[4] V. V. Palyulin, A. Chechkin, R. Klages, R. Metzler, J. Phys. A: Math. Theor. 49, 394002 (2016).
[5] G. Blackburn, A. V. Chechkin, V. V. Palyulin, N. W. Watkins, R. Klages, to be published.
Recently there has been emerging interest in the study of the social networks found in cultural works such as novels and films. Such character networks exhibit many of the properties of complex networks, such as skewed degree distributions and community structure, but may be of relatively small order with a high multiplicity of edges. We present graph extraction, visualization, and network statistics for three novels: Twilight by Stephenie Meyer, Stephen King's The Stand, and J.K. Rowling's Harry Potter and the Goblet of Fire. Coupling these with 800 character networks from films found in the Moviegalaxies database, we compare the data sets to simulations from various stochastic complex network models, including the Chung-Lu model, the configuration model, and the preferential attachment model. We describe our model selection experiments using machine learning techniques based on motif (or small subgraph) counts. The Chung-Lu model best fits character networks, and we will discuss why this is the case.
Consider a continuously evolving stochastic process that gets interrupted at random times with big changes. Examples are financial crashes due to a sudden fall in stock prices, a sudden decrease in population due to a natural catastrophe, etc. Question: How do these sudden interruptions affect the observable properties at long times?
As a first answer, we consider simple diffusion interrupted at random times by long jumps associated with resets to the initial state. We will discuss recent advances in characterizing the long-time properties of such dynamics, thereby unveiling a host of rich observable properties. Time permitting, I will discuss the extension of these studies to many-body interacting systems.
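A minimal simulation sketch of the simplest such process, Brownian diffusion with stochastic resetting to the origin, follows; the rate and diffusivity values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def diffusion_with_resetting(r=1.0, D=1.0, dt=1e-3, n_steps=100_000):
    """Brownian motion on the line, reset to the origin at rate r.
    The long-time position histogram approaches a Laplace-shaped
    non-equilibrium steady state instead of spreading forever."""
    x = np.empty(n_steps)
    x[0] = 0.0
    for i in range(1, n_steps):
        if rng.random() < r * dt:                    # reset event
            x[i] = 0.0
        else:                                        # free diffusion step
            x[i] = x[i - 1] + np.sqrt(2 * D * dt) * rng.normal()
    return x

traj = diffusion_with_resetting()
```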
We all need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. In this seminar I will present two laboratory experiments which focus on the impact of information and reputation on human behavior when people engage in cooperative interactions on dynamic networks. In the first study, we investigate whether and how the ability to make or break links in social networks fosters cooperation, paying particular attention to whether information on an individual’s actions is freely available to potential partners. Studying the role of information is relevant, as complete knowledge of other people’s actions is often not available for free. In the second work, we focus our attention on the role of individual reputation, an indispensable tool to guide decisions about social and economic interactions with individuals otherwise unknown. Usually, information about prospective counterparts is incomplete, often being limited to an average success rate. Uncertainty about reputation is further increased by fraud, which is increasingly becoming a cause of concern. To address these issues, we have designed an experiment where participants could spend money to have their observable cooperativeness increased. Our findings point to the importance of ensuring the truthfulness of reputation for a more cooperative and fair society.
Zeros of vibrational modes have been fascinating physicists for several centuries. The mathematical study of zeros of eigenfunctions goes back at least to Sturm, who showed that, in dimension d=1, the n-th eigenfunction has n-1 zeros. Courant showed that in higher dimensions only half of this is true, namely the zero curves of the n-th eigenfunction of the Laplace operator on a compact domain partition the domain into at most n parts (which are called "nodal domains").
It recently transpired that the difference between this upper bound and the actual value can be interpreted as an index of instability of a certain energy functional with respect to suitably chosen perturbations. We will discuss two examples of this phenomenon: (1) stability of the nodal partitions of a domain in R^d with respect to a perturbation of the partition boundaries, and (2) stability of a graph eigenvalue with respect to a perturbation by magnetic field. In both cases, the "nodal defect" of the eigenfunction coincides with the Morse index of the energy functional at the corresponding critical point. We will also discuss some applications of the above results.
Based on:
arXiv:1103.1423, CMP '12 (with R. Band, H. Raz, U. Smilansky);
arXiv:1107.3489, GAFA '12 (with P. Kuchment, U. Smilansky);
arXiv:1110.5373, APDE '13;
arXiv:1212.4475, PTRSA '13, to appear (with T. Weyand);
arXiv:1503.07245, JMP '15, to appear (with R. Band and T. Weyand).
Granular matter is the prototypical example of systems that jam when subject to an external loading. Its athermal character, i.e. the fact that the motion of individual grains is insensitive to thermal fluctuations, makes its statistical properties a priori dependent on the protocol used to reach the jammed state. In this talk we will look at two distinct examples from different classes of such protocols: single-step protocols and sequential protocols. Depending on the context, we will see how one can try to extend the definition of concepts borrowed from statistical thermodynamics such as entropy, ensembles and ergodicity so that they remain meaningful for jammed granular matter.
Many physical systems can be described by particle models. The interaction between these particles is often modeled by forces, which typically depend on the inter-particle distance, e.g., gravitational attraction in celestial mechanics, Coulomb forces between charged particles, or swarming models of self-propelled particles. In most physical systems Newton's third law of actio-reactio is valid. However, when considering a larger class of interacting particle models, it might be crucial to introduce an asymmetry into the interaction terms, such that the forces not only depend on the distance but also on direction. Examples are found in pedestrian models, where pedestrians typically pay more attention to people in front than behind, or in traffic dynamics, where drivers on highways are assumed to adjust their speed according to the distance to the preceding car. Motivated by traffic and pedestrian models, it seems valuable to study particle systems with asymmetric interaction where Newton's third law is invalid. Here, general particle models with symmetric and asymmetric repulsion are studied and investigated for finite-range and exponential interactions in straight corridors and in an annulus. In the symmetric case, transitions from one- to multi-lane (zig-zag) behavior, including multi-stability, are observed for varying particle density and for varying curvature at fixed density. When the asymmetry of the interaction is taken into account, a new “bubble”-like pattern arises when the distance between lanes becomes spatially modulated and changes periodically in time, i.e. peristaltic motion emerges. We find the transition from the zig-zag state to the peristaltic state to be characterized by a Hopf bifurcation.
Evolutionary Game Theory (EGT) represents the attempt to describe the evolution of populations within the formal framework of Game Theory, combined with principles and ideas from the Darwinian theory of evolution. Nowadays, a long list of EGT applications spans from biology to socio-economic systems, where the emergence of cooperation constitutes one of the topics of major interest. Here, statistical physics allows one to investigate EGT dynamics, in order to understand the relations between microscopic and macroscopic behaviors in these systems. Following this approach, a new application of EGT will be shown during this talk. In particular, a new heuristic for solving optimization tasks, like the Traveling Salesman Problem (TSP), will be introduced. The results of this work show that EGT can be a powerful framework for studying a wide range of problems.
We find a correspondence between certain difference algebras and subshifts of finite type (SFTs) as studied in symbolic dynamics. The known theory of SFTs from symbolic dynamics allows us to make significant advances in difference algebra. Conversely, a `Galois theory' point of view from difference algebra allows us to obtain new structure results for SFTs.
Biological systems operate in the far-from-equilibrium regime, and one defining feature of living organisms is their motility. In the hydrodynamic limit, a system of motile organisms may be viewed as a form of active matter, which has been shown to exhibit behaviour analogous to that found in equilibrium systems, such as phase separation in the case of motility-induced aggregation, and critical phase transitions in incompressible active fluids. In this talk, I will use the concept of universality to categorise some of the emergent behaviour observed in active matter. Specifically, I will show that i) the coarsening kinetics of motility-induced phase separation belongs to the Lifshitz-Slyozov-Wagner universality class [1]; ii) the order-disorder phase transition in incompressible polar active fluids (IPAF) constitutes a novel universality class [2]; and iii) the behaviour of IPAF in the ordered phase in 2D belongs to the Kardar-Parisi-Zhang universality class [3].
References:
[1] C. F. Lee, “Interface stability, interface fluctuations, and the Gibbs-Thomson relation in motility-induced phase separations,” arXiv:1503.08674, 2015.
[2] L. Chen, J. Toner, and C. F. Lee, “Critical phenomenon of the order-disorder transition in incompressible active fluids,” New Journal of Physics, 17, 042002, 2015.
[3] L. Chen, C. F. Lee, and J. Toner, “Birds, magnets, soap, and sandblasting: surprising connections to incompressible polar active fluids in 2D,” arXiv:1601.01924, 2016.
Assessing systemic risk in financial markets and identifying systemically important financial institutions and assets is of great importance. In this talk I will consider two channels of propagation of financial systemic risk: (i) common exposure to similar portfolios and fire-sale spillovers, and (ii) liquidity cascades in interbank networks. For each of them I will show how the use of statistical models of networks can be useful in systemic risk studies. In the first case, by applying the Maximum Entropy principle to the bipartite network of banks and assets, we propose a method to assess aggregate and single-bank systemicness and vulnerability, and to statistically test for a change in these variables when only the information on the size of each bank and the capitalization of the investment assets is available. In the second case, by inferring a stochastic block model from the e-MID interbank network, we show that the extraordinary ECB intervention during the sovereign debt crisis completely changed the large-scale organization of this market, and we identify the banks that, by changing their strategy in response to the intervention, contributed most to the architectural mutation of the network.
There is a recognized need to build tools capable of anticipating tipping points in complex systems. Most commonly this is done by describing a tipping point as a bifurcation and using the formalism coming from phase transitions. Here we try a different approach, applicable to systems with high dimensions. A metastable state is described as a high-dimensional tipping point; a transition, in this optic, is the escape of the system from such a configuration, caused by a rare perturbation parallel to an unstable direction. We will illustrate our procedure by applying it to two models: the Tangled Nature Model, introduced by H. Jensen et al. to mathematically explain the macroscopic intermittent dynamics of ecological systems, a phenomenon known under the name of punctuated equilibrium; and high-dimensional replicator systems with a stochastic element, first developed by J. Grujic. By describing the models' stochastic dynamics through a mean field approximation, we are able to gather information on the stability of the metastable configuration and predict the arrival of transitions.
Discrete Flow Mapping (DFM) was recently introduced as a mesh-based high frequency method for modelling structure-borne sound in complex structures comprised of two-dimensional shell and plate subsystems. In DFM, the transport of vibrational energy between substructures is typically described via a local interface treatment where wave theory is employed to generate reflection/transmission and mode coupling coefficients. The method has now been extended to model three-dimensional meshed structures, giving a wider range of applicability and also naturally leading to the question of how to couple the two- and three-dimensional substructures. In my talk I will present a brief overview of DFM, discuss numerical approaches and sketch ideas behind Discrete Flow Mapping in coupled two and three dimensional domains.
We study a standard model for stochastic resonance from the point of view of dynamical systems. We present a framework for random dynamical systems with nonautonomous deterministic forcing, and we prove the existence of an attracting random periodic orbit for a class of one-dimensional systems with a time-periodic component. In the case of stochastic resonance, we use properties of the attractor to derive an indicator for the resonant regime.
Mathematical modelling of cancer has a long history, but all cancer models can be categorized into two classes. "Non-spatial" models treat cancerous tumours as well-stirred bags of cells. This approach leads to nice, often exactly solvable models. However, real tumours are not well mixed and different subpopulations of cancer cells reside in different spatial locations in the tumour. "Spatial" models that aim at reproducing this heterogeneity are often very complicated and can only be studied through computer simulations.
In this talk I will present spatial models of cancer that are analytically soluble. These models demonstrate how the growth and genetic composition of tumours are affected by three processes: replication, death, and migration of cancer cells. I will show what predictions these models make regarding experimentally accessible quantities, such as the growth rate or genetic heterogeneity of a tumour, and discuss how they compare to clinical data.
In a good solvent, a polymer chain assumes an extended configuration. As the solvent quality (or the temperature) is lowered, the configuration changes to a globular, more compact one. This collapse transition is also called the coil-globule transition in the literature. Since the pioneering work of de Gennes, it has been known that it corresponds to a tricritical point in a grand-canonical parameter space. In the lattice model most used to study it, the chain is represented by a self-avoiding walk on a lattice, and the solvent is effectively taken into account by including attractive interactions between monomers on first-neighbor sites which are not consecutive along the chain (SASAW's: self-attracting self-avoiding walks). We will review the model and show that small changes to it may lead to different phase diagrams, where the collapse transition is no longer a tricritical point. In particular, if the polymer is represented by a trail, which allows multiple visits of sites but maintains the constraint of single visits of edges, we find two distinct polymerized phases besides the non-polymerized phase, and the collapse transition becomes a bicritical point.
Structural and dynamical similarities between different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications for network science and cosmology. However, this simple framework is unable to explain the emergence of community structure, a property that, along with scale-free degree distributions and strong clustering, is commonly found in real complex networks. Here we show how latent network geometry coupled with preferential attachment of the nodes to this geometry fills this gap. We call this mechanism geometric preferential attachment (GPA) and validate it against the Internet. GPA gives rise to soft communities that provide a different perspective on the community structure in networks. The connections between GPA and cosmological models, including inflation, are also discussed.
Understanding the relation between functional anatomy and structural substrates is a major challenge in neuroscience. To study, at the aggregate level, the interplay between structural brain networks and functional brain networks, a new method will be described; it provides an optimal brain partition (emerging out of a hierarchical clustering analysis) and maximizes the “cross-modularity” index, leading to large modularity for both networks as well as a large within-module similarity between them. The brain modules found by this approach will be compared with the classical Resting State Networks, as well as with anatomical parcellations in the Automated Anatomical Labeling atlas and with the Brodmann partition.
Network growth models with attachment rules governed by intrinsic node fitness are considered. Both direct and inverse problems of matching the growth rules to node degree distribution and correlation functions are given analytical solutions. It is found that the node degree distribution is generically broader than the distribution of fitness, saturating at power laws. The saturation mechanism is analysed using a feedback model with dynamically updated fitness distribution. The latter is shown to possess a nontrivial fixed point with a unique power-law degree distribution. Applications of field-theoretic methods to network growth models are also discussed.
We study the phenomenon of migration of the small molecular weight component of a binary polymer mixture to the free surface using mean field and self-consistent field theories. By proposing a free energy functional that incorporates polymer-matrix elasticity explicitly, we compute the migrant volume fraction and show that it decreases significantly as the sample rigidity is increased. Estimated values of the bulk modulus suggest that the effect should be observable experimentally for rubber-like materials. This provides a simple way of controlling surface migration in polymer mixtures and can play an important role in industrial formulations, where surface migration often leads to decreased product functionality.
The binary-state voter model describes a system of agents who adopt the opinions of their neighbours. The coevolving voter model (CVM, [1]) extends its scope by giving the agents the option to sever the link instead of adopting a contrarian opinion. The resulting simultaneous evolution of the network and the configuration leads to a fragmentation transition typical of such adaptive systems. The CVM was our starting point for investigating coevolution in the context of multilayer networks, work that IFISC was tasked with under the scope of the LASAGNE Initiative. In this talk I will briefly review some of the outcomes and follow-up works. First we will see how coupling together of two CVM networks modifies the transitions and results in a new type of fragmentation [2]. I will then identify the latter with the behaviour of the single-network CVM with select nodes constantly under the stress of noise [3]. Finally, I will relate our attempts to reproduce the effect of multiplexing on the voter model by studying behaviour of the standard aggregates; the negative outcome of which gives validity to considering the multiplex as a fundamentally novel, non-reducible structure [4].
[1] F. Vazquez, M. San Miguel and V. M. Eguiluz, Generic Absorbing Transition in Coevolution Dynamics, Physical Review Letters 100, 108702 (2008)
[2] MD, M. San Miguel and V. M. Eguiluz, Absorbing and Shattered Fragmentation Transitions in Multilayer Coevolution, Physical Review E 89, 062818 (2014)
[3] MD, V. M. Eguiluz and M. San Miguel, Noise in Coevolving Networks, Physical Review E 92, 032803 (2015)
[4] MD, V. Nicosia, V. Latora and M. San Miguel, Irreducibility of Multilayer Network Dynamics: the Case of the Voter Model, arXiv:1507.08940 (2015)
Quantum Hall states are characterised by the precise quantization of Hall conductance, the phenomenon whose geometric origin was understood early on. One of the main goals of the theory is computing adiabatic phases corresponding to various geometric deformations (associated with the line bundle, metric and complex structure moduli), in the limit of a large number of particles. We consider QH states on Riemann surfaces, and give a complete characterisation of the problem for the integer QH states and for the Laughlin states in the fractional QHE, by computing the generating functional for these states. In the integer QH our method relies on the Bergman kernel expansion for high powers of holomorphic line bundle, and the answer is expressed in terms of energy functionals in Kahler geometry. We explain the relation of geometric phases to Quillen theory of determinant line bundles, using Bismut-Gillet-Soule anomaly formulas. On the sphere the generating functional is also related to the partition function for normal random matrix ensembles for a large class of potentials. For the Laughlin states we compute the generating functional using path integral in a 2d scalar field theory.
In the last years, ideas and methods from network science have been applied to study the structure of time series, thereby building a bridge between dynamical systems, time series analysis and graph theory. In this talk I will focus on a particular approach, namely the family of visibility algorithms, and will give a friendly overview of the main results that we have obtained recently. In particular, I will focus on several canonical problems arising in different fields such as nonlinear dynamics, stochastic processes, statistical physics and machine learning, as well as in applied fields such as finance, and will show how these can be mapped, via visibility algorithms, to the study of certain topological properties of visibility graphs. If time permits, I will also present a diagrammatic theory that allows one to find some exact results on the properties of these graphs for general classes of Markovian dynamics.
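For concreteness, here is a minimal sketch of the natural visibility algorithm: two samples of the series are linked whenever the straight line joining them passes above every intermediate sample. This is the textbook quadratic-time construction, not the diagrammatic theory of the talk.

```python
import numpy as np

def natural_visibility_graph(y):
    """Natural visibility graph of a time series: samples i and j are linked
    when the straight line joining (i, y[i]) and (j, y[j]) stays strictly
    above every intermediate sample. Plain O(n^2)-pairs double loop."""
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = y[i] + (y[j] - y[i]) * (k - i) / (j - i)
            if np.all(y[i + 1:j] < line):
                edges.append((i, j))
    return edges

edges = natural_visibility_graph(np.random.default_rng(6).random(200))
```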
The partially directed path is a classical model in lattice path combinatorics. In this talk I will review briefly the model and explain why it is a good model for quantifying polymer entropy. If the path is confined to the space between vertical walls in a half-lattice, then it loses entropy. This loss of entropy induces an entropic force on the walls. I will show how to determine the generating and partition function of the model using the kernel method, and then compute entropic forces and pressures. In some cases the asymptotic behaviour of the entropic forces will be shown. This work was done in collaboration with Thomas Prellberg. See http://arxiv.org/abs/1509.07165
The talk provides an overview of recent work on the analysis of von Neumann entropy, which leads to new methods for network algorithms in both the machine learning and complex network domains. We commence by presenting simple approximations for the von Neumann entropy of both directed and undirected networks in terms of edge degree statistics. In the machine learning domain, this leads to new description-length methods for learning generative models of network structure, and new ways of computing information-theoretic graph kernels. In the complex network domain, it provides a means of analysing the time evolution of networks, and of making links with the thermodynamics of network evolution.
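A minimal sketch of one standard definition of graph von Neumann entropy follows (the Laplacian rescaled to unit trace, then the Shannon entropy of its spectrum); the degree-statistics approximations that the talk develops are not reproduced here.

```python
import numpy as np

def von_neumann_entropy(A):
    """Von Neumann entropy of an undirected graph: rescale the combinatorial
    Laplacian to unit trace and take the Shannon entropy of its spectrum."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L / np.trace(L))
    lam = lam[lam > 1e-12]                  # discard zeros: 0 log 0 := 0
    return float(-np.sum(lam * np.log2(lam)))
```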
This is part of a series of collaborative meetings between Bath, Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, Surrey, and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
For speakers, schedule, titles, and abstracts see the meeting webpage.
I will give a gentle introduction to some recent work on the effects of long-range temporal correlations in stochastic particle models, focusing particularly on fluctuations about the typical behaviour. Specifically, in the first part of the talk, I will discuss how long-range memory dependence can modify the large deviation principle describing the probability of rare currents and lead, for example, to superdiffusive behaviour. In the second part of the talk, I will describe a more interdisciplinary project incorporating the psychological "peak-end" heuristic for human memory into a simple discrete choice model from economics.
[Sun, sea and sand(pit): This is mainly work completed during my sabbatical and partially funded/inspired by the "sandpit" grant EP/J004715/1. There may be a few pictures!]
Systems driven out of equilibrium experience large fluctuations of the dissipated work. The same is true for wavefunction amplitudes in disordered systems close to the Anderson localization transition. In both cases, the probability distribution function is given by the large-deviation ansatz. Here we exploit the analogy between the statistics of work dissipated in a driven single-electron box and that of random multifractal wavefunction amplitudes, and uncover new relations that generalize the Jarzynski equality. We checked the new relations theoretically using the rate equations for sequential tunnelling of electrons and experimentally by measuring the dissipated work in a driven single-electron box and found a remarkable correspondence. The results represent an important universal feature of the work statistics in systems out of equilibrium and help to understand the nature of the symmetry of multifractal exponents in the theory of Anderson localization.
In this talk, a generalisation of pairwise models to non-Markovian epidemics on networks is presented. For the case of infectious periods of fixed length, the resulting pairwise model is a system of delay differential equations, which shows excellent agreement with results based on stochastic simulations. Furthermore, we analytically compute a new R_0-like threshold quantity and an analytical relation between this and the final epidemic size. Additionally, we show that the pairwise model and the analytic results can be generalized to an arbitrary distribution of the infectious times, using integro-differential equations, and this leads to a general expression for the final epidemic size. By showing the rigorous link between non-Markovian dynamics and pairwise delay differential equations, we provide the framework for a more systematic understanding of non-Markovian dynamics.
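As a toy illustration of what non-Markovian means here, the sketch below simulates a discrete-time SIS epidemic with a fixed, rather than exponentially distributed, infectious period on a random graph. It is a stochastic counterpart for intuition only, not the pairwise delay differential model of the talk, and all parameter values are arbitrary.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)

def sis_fixed_period(G, beta=0.05, tau=10, t_max=500, seed_node=0):
    """Discrete-time SIS epidemic with a fixed infectious period tau
    (non-Markovian: recovery happens exactly tau steps after infection).
    Returns the number of infected nodes at each time step."""
    clock = {seed_node: tau}                 # infected node -> steps remaining
    prevalence = []
    for _ in range(t_max):
        new_clock = {}
        for u in clock:                      # transmissions along edges
            for v in G.neighbors(u):
                if v not in clock and rng.random() < beta:
                    new_clock[v] = tau
        for u, t_left in clock.items():      # deterministic recovery
            if t_left > 1:
                new_clock[u] = t_left - 1
        clock = new_clock
        prevalence.append(len(clock))
    return prevalence

prev = sis_fixed_period(nx.erdos_renyi_graph(500, 0.02, seed=7))
```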
Rectification of work under non-equilibrium conditions has been one of the important topics of non-equilibrium statistical mechanics. Within the framework of equilibrium thermodynamics, it is well known that work can be rectified from two thermal equilibrium baths. We address the question of how work can be rectified from a Brownian object (piston) attached to multiple environments, including non-equilibrium baths. We focus on the adiabatic piston problem under nonlinear friction, where a piston with sliding friction separates two gases of the same pressure but different temperatures. Without sliding friction, the direction of piston motion is known to be determined by the difference in temperature of the two gases [1,2]. However, if sliding friction exists, we report that the direction of motion depends on the amplitude of the friction and on the nonlinearity of the friction [3]. If time allows, we also report on a possible application to the problem of a fluctuating heat engine, where the temperature of the gas is changed in a cyclic manner [4].
[1] E. H. Lieb, Physica A 263, 491 (1999).
[2] Ch. Gruber and J. Piasecki, Physica A 268, 412 (1999); A. Fruleux, R. Kawai and K. Sekimoto, Phys. Rev. Lett. 108, 160601 (2012).
[3] T. G. Sano and H. Hayakawa, Phys. Rev. E 89, 032104 (2014).
[4] T. G. Sano and H. Hayakawa, arXiv:1412.4468 (2014).
Fluctuation in small systems has attracted wide interest because of recent experimental developments in biological, colloidal, and electrical systems. As accurate data on fluctuation have become accessible, the importance of mathematical modeling of the dynamics of fluctuations has been increasing. One of the minimal models for such systems is the Langevin equation, a simple model composed of viscous friction and white Gaussian noise. The validity of the Langevin model has been shown in terms of some microscopic theories [1], and this model has been used not only theoretically but also experimentally in describing thermal fluctuation. On the other hand, non-Gaussian properties of fluctuation are reported to emerge in athermal systems, such as biological, granular, and electrical systems. A natural question then arises: when and how does non-Gaussian fluctuation emerge in athermal systems? In this seminar, we present a systematic method to derive a Langevin-like equation driven by non-Gaussian noise for a wide class of stochastic athermal systems, starting from master equations and developing an asymptotic expansion [2, 3]. We find an explicit condition whereby the non-Gaussian properties of the athermal noise become dominant for tracer particles associated with both thermal and athermal environments. We also derive an inverse formula to infer microscopic properties of the athermal bath from the statistics of the tracer particle. Furthermore, we obtain the full-order asymptotic formula of the steady distribution function for arbitrarily strong nonlinear friction, and show that the first-order approximation corresponds to the independent-kick model [4]. We apply our formulation to a granular motor under viscous and Coulombic friction, and analytically obtain the angular velocity distribution functions. Our theory demonstrates that the non-Gaussian Langevin equation is a minimal model of athermal systems.
[1] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland (2007).
[2] K. Kanazawa, T. G. Sano, T. Sagawa, and H. Hayakawa, Phys. Rev. Lett. 114, 090601 (2015).
[3] K. Kanazawa, T. G. Sano, T. Sagawa, and H. Hayakawa, J. Stat. Phys. 160, 1294 (2015).
[4] J. Talbot, R. D. Wildman, and P. Viot, Phys. Rev. Lett. 107, 138001 (2011).
In this talk we study the impact that urban mobility patterns have on the onset of epidemics. We focus on two particular datasets from the cities of Medellín and Bogotá, both in Colombia. Although mobility patterns in these two cities are similar to those typically found for large cities, these datasets provide additional information about the socioeconomic status of the individuals. This information is particularly important when the level of inequality in a society is large, as is the case in Colombia. Thus, taking advantage of this additional information, we characterise the differences between the mobility patterns of these social strata, and finally unveil the social hierarchy by analyzing the contagion patterns occurring during an epidemic outbreak.
The synaptic inputs arriving in the cortex are under many circumstances highly variable. As a consequence, the spiking activity of cortical neurons is strongly irregular, with the coefficient of variation of the inter-spike interval distribution of individual neurons being approximately that of a Poisson process. To model this activity, balanced networks have been put forward, in which a coordination between excitatory and strong inhibitory input currents, which nearly cancel in individual neurons, gives rise to this irregular spiking activity. However, balanced networks of excitatory and inhibitory neurons are characterized by a strictly linear relation between stimulus strength and network firing rate. This linearity makes it hard to perform more complex computational tasks, like the generation of receptive fields, multiple stable activity states or normalization, which have been measured in many sensory cortices. Synapses displaying activity-dependent short-term plasticity (STP) have previously been reported to give rise to a non-linear network response with potentially multiple stable states for a given stimulus. In this seminar, I will discuss our recent analytical and numerical analysis of the computational properties of balanced networks which incorporate short-term plasticity. We demonstrate that stimuli are normalized by the network and that increasing the stimulus to one sub-network suppresses the activity in the neighboring population. Normalization and suppression are linear in stimulus strength when STP is disabled, and become non-linear with activity-dependent synapses.
Many state-of-the-art music generation/improvisation systems generate music that sounds good on a note-to-note level. However, these compositions often lack long-term structure or coherence. This problem is addressed in this research by generating music that adheres to a structural template. A powerful variable neighbourhood search (VNS) algorithm was developed, which is able to generate a range of musical styles based on its objective function, whilst constraining the music to a structural template. In the first stage of the project, an objective function based on rules from music theory was used to generate counterpoint. In this research, a machine learning approach is combined with the VNS in order to generate structured music for the bagana, an Ethiopian lyre. Different ways are explored in which a Markov model can be used to construct quality metrics that represent how well a fragment fits the chosen style (e.g. music for bagana). This approach allows us to combine the power of machine learning methods with optimization algorithms.
Links: http://dorienherremans.com/biography
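As a toy illustration of the quality-metric idea (not the system described in the talk), one can train a first-order Markov model on a small corpus and score a candidate fragment by its average log-likelihood; such a score could then serve as part of the objective function of a search heuristic like a VNS. The corpus and note names below are invented.

```python
from collections import defaultdict
import math

def train_markov(corpus):
    """Estimate first-order transition probabilities from note sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def log_likelihood(model, fragment, floor=1e-6):
    """Quality metric: average log-probability of a fragment under the model;
    unseen transitions are given a small floor probability."""
    lp = 0.0
    for a, b in zip(fragment, fragment[1:]):
        lp += math.log(model.get(a, {}).get(b, floor))
    return lp / max(len(fragment) - 1, 1)

corpus = [["C", "D", "E", "C"], ["C", "E", "D", "C"]]   # toy 'style' corpus
model = train_markov(corpus)
print(log_likelihood(model, ["C", "D", "E", "C"]))
```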
Transfer operators are global descriptors of ensemble evolution under nonlinear dynamics and form the basis of efficient methods of computing a variety of statistical quantities and geometric objects associated with the dynamics. I will discuss two related methods of identifying and tracking coherent structures in time-dependent fluid flow; one based on probabilistic ideas and the other on geometric ideas. Applications to geophysical fluid flow will be presented.
The densest way to pack objects in space, also known as the packing problem, has intrigued scientists and philosophers for millennia. Today, packing comes up in various systems over many length scales, from batteries and catalysts to the self-assembly of nanoparticles, colloids and biomolecules. Despite the fact that so many systems' properties depend on the packing of differently-shaped components, we still have no general understanding of how packing varies as a function of particle shape. Here, we carry out an exhaustive study of how packing depends on shape by investigating the packings of over 55,000 polyhedra. By combining simulations and analytic calculations, we study families of polyhedra interpolating between Platonic and Archimedean solids such as the tetrahedron, the cube, and the octahedron. Our resulting density surface plots can be used to guide experiments that utilize shape and packing, in the same way that phase diagrams are essential to chemistry. The properties of particle shape are indeed revealing why we can assemble certain crystals, transition between different ones, or get stuck in kinetic traps.
Links: http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011024, http://www.newscientist.com/article/dn25163-angry-alien-in-packing-puzzl..., http://physicsworld.com/cws/article/news/2014/mar/03/finding-better-ways..., http://physics.aps.org/synopsis-for/10.1103/PhysRevX.4.011024
My website: http://www-personal.umich.edu/~dklotsa/Daphne_Klotsas_Homepage/Home.html
Teaching mathematical writing gives you a vivid portrait of the students' struggle with exactness and abstraction, and new tools for dealing with it. This seminar intends to stimulate a discussion on how we introduce our students to abstract mathematics; I also hope to give a positive twist to the soul-searching that normally accompanies exam-marking.
In this talk, making use of statistical physics tools, we address the specific role of randomness in financial markets, both at the micro and the macro level. In particular, we review some recent results on the effectiveness of random strategies of investment, compared with some of the most used trading strategies for forecasting the behavior of real financial indexes. We then push our analysis further by means of a Self-Organized Criticality model, able to simulate financial avalanches in trading communities with different network topologies, where a Pareto-like power-law behavior of wealth spontaneously emerges. In this context we present new findings and suggestions for policies based on the effects that random strategies can have in terms of reducing dangerous financial extreme events, i.e. bubbles and crashes.
A. E. Biondo, A. Pluchino, A. Rapisarda, Contemporary Physics 55 (2014) 318.
A. E. Biondo, A. Pluchino, A. Rapisarda, D. Helbing, Phys. Rev. E 88 (2013) 062814.
A. E. Biondo, A. Pluchino, A. Rapisarda, D. Helbing, PLOS ONE 8(7) (2013) e68344.
A. E. Biondo, A. Pluchino, A. Rapisarda, Journal of Statistical Physics 151 (2013) 607.
A linear fractional equation involving a Riemann-Liouville derivative is the standard model for the description of anomalous subdiffusive transport of particles. The question arises as to how to extend this fractional equation to the nonlinear case involving particle interactions. The talk will be concerned with the structural instability of the fractional Fokker-Planck equation, nonlinear fractional PDEs, and the aggregation phenomenon.
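For orientation, the linear model referred to here is, in one common convention, the fractional Fokker-Planck equation written with a Riemann-Liouville fractional derivative:

```latex
\frac{\partial P(x,t)}{\partial t}
  = {}_{0}D_{t}^{1-\alpha}\, K_{\alpha}\,
    \frac{\partial^{2} P(x,t)}{\partial x^{2}},
  \qquad 0 < \alpha < 1,
```

where the operator on the right is the Riemann-Liouville derivative of order 1-alpha and the mean-square displacement grows subdiffusively, proportional to t^alpha. The talk concerns what happens when a nonlinearity in P, modelling particle interactions, is added to such an equation.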
A famous problem of fluid dynamics is the flow around a cylindrical or spherical obstacle. At small flow velocity, a steady axisymmetric wake forms behind the obstacle; upon increasing the velocity the wake becomes longer, then asymmetric and time dependent (vortices of alternating signs are shed in the von Karman vortex street pattern), then turbulent. The question which we address is what happens if the fluid is a superfluid, such as liquid He, or an atomic Bose-Einstein condensate: in the absence of viscosity, is there a quantum analog to the classical wake?
I will discuss methods for spatio-temporal modelling in molecular, cell and population biology. Three classes of models will be considered:
(i) microscopic (individual-based) models (molecular dynamics, Brownian dynamics), which are based on the simulation of trajectories of molecules (or individuals) and their localized interactions (for example, reactions);
(ii) mesoscopic (lattice-based) models, which divide the computational domain into a finite number of compartments and simulate the time evolution of the numbers of molecules (numbers of individuals) in each compartment; and
(iii) macroscopic (deterministic) models, which are written in terms of mean-field reaction-diffusion-advection partial differential equations (PDEs) for spatially varying concentrations.
In the first part of my talk, I will discuss connections between the modelling frameworks (i)-(iii). I will consider chemical reactions both at a surface and in the bulk. In the second part of my talk, I will present hybrid (multiscale) algorithms which use models with a different level of detail in different parts of the computational domain. The main goal of this multiscale methodology is to use a detailed modelling approach in localized regions of particular interest (in which accuracy and microscopic detail are important) and a less detailed model in other regions, in which accuracy may be traded for simulation efficiency. I will also discuss hybrid modelling of chemotaxis, where an individual-based model of cells is coupled with PDEs for extracellular chemical signals.
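As a concrete illustration of class (ii), here is a minimal Gillespie-type simulation of molecules diffusing by jumps between neighbouring compartments; all parameters are invented, and reactions would enter simply as additional event types.

```python
import numpy as np

rng = np.random.default_rng(1)

def compartment_diffusion(n0, d=1.0, t_end=10.0):
    """Mesoscopic (class (ii)) simulation: molecules hop between neighbouring
    compartments with rate d per molecule per direction, sampled exactly with
    the Gillespie algorithm. Illustrative sketch only."""
    n = np.array(n0, dtype=float)
    t = 0.0
    K = len(n)
    while t < t_end:
        # propensities: left-jumps out of compartments 1..K-1,
        # then right-jumps out of compartments 0..K-2
        rates = np.concatenate([d * n[1:], d * n[:-1]])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # time to next event
        j = rng.choice(len(rates), p=rates / total)
        if j < K - 1:                               # jump left: j+1 -> j
            n[j + 1] -= 1; n[j] += 1
        else:                                       # jump right: k -> k+1
            k = j - (K - 1)
            n[k] -= 1; n[k + 1] += 1
    return n

print(compartment_diffusion([100, 0, 0, 0, 0]))     # mass spreads to the right
```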
Ripening in systems where the overall aggregate volume increases due to chemical reactions or the drift of thermodynamic parameters is a problem of pivotal importance in the material and environmental sciences. In the former, its better understanding provides insight into controlling nanoparticle synthesis, annealing, and aging processes. In the latter, it is of fundamental importance for improving the parametrization of mist and clouds in weather and climate models.
I present the results of comprehensive laboratory experiments and numerical studies addressing droplet growth and droplet size distributions in systems where droplets grow due to sustained supersaturation of their environment. Both for classical theories addressing droplets condensing on a substrate (as in dew and cooling devices) and for droplets entrained in an external flow (as in clouds and nanoparticle synthesis), we identify severe shortcomings. I will show that the quantitative modelling of rain formation in clouds on the one hand, and of the ageing and synthesis of nanoparticles on the other, face the same theoretical challenges, and that these challenges can be addressed by adapting modern methods of non-equilibrium statistical physics.
The use of the so-called Coulomb gas technique in Random Matrix Theory goes back to the seminal works of Wigner and Dyson. I review some modern (and not so modern!) applications of this technique, which are linked via a quite intriguing unifying thread: the appearance of extremely weak (third-order) phase transitions separating the equilibrium phases of the fluid of "eigenvalues". A particularly interesting example concerns the statistics of the largest eigenvalue of random matrices, and the probability of atypical fluctuations not described by the celebrated Tracy-Widom law. Recent occurrences of this type of phase transition in condensed matter and statistical physics problems - which apparently have very little to do with each other - are also addressed, as well as some "exceptions" or "counter-examples".
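A quick numerical illustration of the object in question (a sketch, not the Coulomb-gas calculation itself): sampling the largest eigenvalue of GOE matrices, whose typical fluctuations follow the Tracy-Widom law, while the atypical large deviations are governed by the third-order transitions discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def goe_lmax(n, samples):
    """Sample the largest eigenvalue of GOE matrices, normalised so that
    the spectrum fills [-2, 2]; typical edge fluctuations are Tracy-Widom
    (beta = 1), atypical ones are given by Coulomb-gas rate functions."""
    out = np.empty(samples)
    for s in range(samples):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / np.sqrt(2.0 * n)      # symmetrise and rescale
        out[s] = np.linalg.eigvalsh(h).max()
    return out

lmax = goe_lmax(n=200, samples=500)
print("mean largest eigenvalue:", lmax.mean())   # close to 2 for large n
```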
We show that the mixed phase space dynamics of a typical smooth Hamiltonian system universally leads to a sustained exponential growth of energy under a slow periodic variation of parameters. We build a model for this process in terms of geometric Brownian motion with a positive drift, and relate it to the steady entropy increase after each period of the parameter variation.
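Schematically, the model for the energy is a geometric Brownian motion,

```latex
dE_t = E_t\,(\mu\,dt + \sigma\,dW_t), \qquad \mu > 0,
\quad\Longrightarrow\quad
E_t = E_0 \exp\!\big[(\mu - \sigma^2/2)\,t + \sigma W_t\big],
```

so that a positive drift produces sustained exponential growth of the energy on average.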
The constituents of a wide variety of real-world complex systems interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other complications. Recently, interest in such systems has grown within the research community, because accounting for their "multilayer" features is a challenge. In this lecture, we will discuss several real-world examples, highlight their multilayer information, and review the most recent advances in this new field.
In this talk we explore different ways to construct city boundaries and their relevance to current efforts towards a science of cities. We use percolation theory to understand the hierarchical organisation of the urban system, and look at the morphological characteristics of urban clusters for traces of optimization or universality.
In this special lecture, organized within our MSc Mathematics of Networks/Network Science, Dr. Jim Webber, chief scientist at Neo Technology, will talk about how network science is used in industry on a daily basis, within their software Neo4j. Jim will introduce the notion of graph databases for storing and querying connected data structures. He will also look under the covers at Neo4j's design, and consider how the requirements for correctness and performance of connected data drive the architecture. Moving up the stack, he will explore Neo4j's Cypher query language and show how it can be used to tackle complex scenarios like recommendations in minutes (with live programming, naturally!). Finally, he will discuss what it means to be a very large graph database and review the dependability requirements that make such a system viable. Everybody is welcome, and we especially invite all our MSc and PhD students to attend, as it can be an excellent forum for discussion between academia and industry.
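For readers unfamiliar with Cypher, below is a sketch of the kind of recommendation query such a demo might use, wrapped in the official Neo4j Python driver; the connection details, labels and property names are invented for illustration.

```python
from neo4j import GraphDatabase

# Invented connection details; a real demo would use its own instance.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# Friend-of-friend recommendation: suggest people I am not yet connected to,
# ranked by the number of mutual friends.
QUERY = """
MATCH (me:Person {name: $name})-[:FRIEND]-(f)-[:FRIEND]-(fof)
WHERE me <> fof AND NOT (me)-[:FRIEND]-(fof)
RETURN fof.name AS suggestion, count(f) AS mutual
ORDER BY mutual DESC LIMIT 5
"""

with driver.session() as session:
    for record in session.run(QUERY, name="Alice"):
        print(record["suggestion"], record["mutual"])
```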
We consider random quantum walks on a homogeneous tree of degree 3, describing the discrete time evolution of a quantum particle with internal degree of freedom in C^3 hopping on the neighboring sites of the tree in the presence of static disorder. The one-time-step random unitary evolution operator of the particle depends on a unitary matrix C in U(3) which monitors the strength of the disorder. We show the existence of open sets of matrices in U(3) for which the random evolution has either pure point spectrum almost surely or purely absolutely continuous spectrum. We also establish properties of the spectral diagram which provide a description of the spectral transition driven by C in U(3). This is joint work with Eman Hamza.
This is part of a series of collaborative meetings between Bristol, Leicester, Liverpool, Loughborough, Manchester, Queen Mary, Surrey, and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
The use of ac fields allows one to precisely control the motion of particles in periodic potentials. We demonstrate such precise control with cold atoms in driven optical lattices, using two very different mechanisms: the ratchet effect and vibrational mechanics. In the first, ac fields drive the system away from equilibrium and break the relevant symmetries; in the second, ac fields lead to a renormalisation of the potential.
In the talk I will demonstrate on specific examples the emergence of a new field, "statistical topology", which unifies topology, noncommutative geometry, probability theory and random walks. In particular, I plan to discuss the following interlinked questions: (i) how the ballistic growth ("Tetris" game) is related to random walks in symmetric spaces and quantum Toda chain, (ii) what is the optimal structure of the salad leaf in 3D and how it is related to modular functions and hyperbolic geometry, (iii) what is the fractal structure of unknotted long polymer chain confined in a bounding box and how this is related to Brownian bridges in spaces of constant negative curvature.
Empirical evidence suggesting that living systems might operate in the vicinity of critical points, at the borderline between order and disorder, has proliferated in recent years, with examples ranging from spontaneous brain activity to the dynamics of gene expression and flock dynamics. However, a well-founded theory for understanding how and why living systems tune themselves to be poised in the vicinity of a critical point is lacking. In this talk I will review the concept of criticality with its associated scale invariance and power-law distributions. I will discuss mechanisms by which inanimate systems may self-tune to critical points, and compare such phenomenology with what is observed in living systems. I will also introduce the concept of a Griffiths phase -- an old acquaintance from the physics of disordered systems -- and show how it can be very naturally related to criticality in living structures such as the brain. In particular, taking into account the complex hierarchical-modular architecture of cortical networks, the usual singular critical point in the dynamics of neural activity propagation is replaced by an extended critical-like region with a fascinating dynamics, which might justify the trade-off between segregation and integration needed to achieve complex cognitive functions.
Arnold’s cat map is a prototypical dynamical system on the torus with uniformly hyperbolic dynamics. Since the famous picture of a scrambled cat in the 1968 book by Arnold and Avez, it has become one of the icons of chaos. In 2010, Lev Lerman studied a family of maps homotopic to the cat map that has, in addition to a saddle, a parabolic fixed point. Lerman conjectured that this map could be a prototype for dynamics with a mixed phase space, having positive measure sets of nonuniformly hyperbolic and of elliptic orbits. We present some numerical evidence that supports Lerman’s conjecture. The elliptic orbits appear to be confined to a pair of channels bounded by invariant manifolds of the two fixed points. The complement of the channels appears to be a positive measure Cantor set. Computations show that orbits in the complement have positive Lyapunov exponents.
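For orientation, the cat map itself is the torus automorphism

```latex
\begin{pmatrix} x \\ y \end{pmatrix} \mapsto
\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} \pmod 1,
```

whose matrix has determinant 1 and eigenvalues (3 +/- sqrt(5))/2, so every orbit is uniformly hyperbolic; Lerman's family deforms this picture by introducing a parabolic fixed point alongside the saddle.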
Financial markets are complex systems with a large number of different factors contributing in an interrelated way. Complexity manifests mainly in two aspects: 1) changes in the statistical properties of financial signals when analyzed at different time-scales; 2) dependency and causality structure dynamically evolving in time. These (non-stationary) changes are more significant during periods of market stress and crises.
In this talk I’ll discuss methods to study financial market complexity from a statistical perspective. Specifically, I’ll introduce two approaches: 1) multi-scaling studies by means of novel scaling exponents and complexity measures; 2) network filtering techniques to make sense of big data.
I will discuss practical applications showing how a better understanding of market complexity can be used, in practice, to hedge risk and discover market inefficiencies.
In this talk - which will be accessible to a general audience - we show how the asymptotic behavior of random networks gives rise to universal statistical summaries. These summaries are related to concepts that are well understood in other contexts - such as stationarity and ergodicity - but whose extension to networks requires recent developments from the theory of graph limits and the corresponding analog of de Finetti's theorem. We introduce a new tool based on these summaries, which we call a network histogram, obtained by fitting a statistical model called a blockmodel to a large network. Blocks of edges play the role of histogram bins, and so-called network community sizes that of histogram bandwidths or bin sizes. For more details, see recent work in the Proceedings of the National Academy of Sciences (doi:10.1073/pnas.1400374111, with Sofia Olhede) and the Annals of Statistics (doi:10.1214/13-AOS1173, with David Choi).
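The core idea can be sketched in a few lines (an illustration of the blockmodel-as-histogram analogy, not the estimator of the cited papers): given a partition of the nodes into blocks, the "histogram" is the matrix of empirical edge densities between blocks, with the block sizes playing the role of bandwidths. The planted two-community example below is invented.

```python
import numpy as np

def network_histogram(adj, labels):
    """Matrix of empirical edge densities between blocks of a partition;
    blocks of edges act as histogram bins."""
    groups = np.unique(labels)
    k = len(groups)
    dens = np.zeros((k, k))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            block = adj[np.ix_(labels == ga, labels == gb)]
            if a == b:
                m = block.shape[0]
                dens[a, b] = block.sum() / max(m * (m - 1), 1)  # no self-loops
            else:
                dens[a, b] = block.mean()
    return dens

rng = np.random.default_rng(5)
labels = np.repeat([0, 1], 50)                       # two planted communities
p = np.where(labels[:, None] == labels[None, :], 0.3, 0.05)
adj = (rng.random((100, 100)) < p).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T             # symmetric, no self-loops
print(network_histogram(adj, labels).round(2))       # ~ [[0.3, 0.05], [0.05, 0.3]]
```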
Contemporary finance is characterized by a complex pattern of relations between financial institutions that can be conveniently modeled in terms of networks. In stable market conditions, connections allow banks to diversify their investments and reduce their individual risk. The same networked structure may, however, become a source of contagion and stress amplification when some banks go bankrupt. We consider a network model of financial contagion due to the combination of overlapping portfolios and market impact, and we show how it can be understood in terms of a generalized branching process. We estimate the circumstances under which systemic instabilities are likely to occur as a function of parameters such as leverage, market crowding and diversification. The analysis shows that the probability of observing global cascades of bankruptcies is a non-monotonic function of the average diversification of financial institutions, and that there is a critical threshold for leverage below which the system is stable. Moreover, the system exhibits "robust yet fragile" behavior, with regions of the parameter space where contagion is rare but catastrophic whenever it occurs.
I will discuss the mean field kinetics of irreversible coagulation in the presence of a source of monomers and a sink at large cluster sizes which removes large particles from the system. These kinetics are described by the Smoluchowski coagulation equation supplemented with source and sink terms. In common with many driven dissipative systems with conservative interactions, one expects this system to reach a stationary state at large times, characterised by a constant flux of mass in the space of cluster sizes from the small-scale source to the large-scale sink. While this is indeed the case for many systems, I will present here a class of systems in which this stationary state is dynamically unstable. The consequence of this instability is that the long-time kinetics are oscillatory in time. This oscillatory behaviour is caused by the fact that mass is transferred through the system in pulses rather than via a stationary current, in such a way that the mass flux is constant on average. The implications of this unusual behaviour for the non-equilibrium kinetics of other systems will be discussed.
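Schematically, the kinetic equation is the Smoluchowski equation with a monomer source of strength J and a sink removing clusters above some large size M:

```latex
\frac{\partial n_k}{\partial t}
 = \frac{1}{2}\sum_{i+j=k} K(i,j)\, n_i n_j
 - n_k \sum_{j \ge 1} K(k,j)\, n_j
 + J\,\delta_{k,1},
 \qquad n_k \equiv 0 \ \text{for } k > M,
```

where K(i,j) is the coagulation kernel; the talk concerns kernels for which the constant-flux stationary solution of this equation is dynamically unstable.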
(Work in collaboration with F. Font-Clos, G. Pruessner, A. Deluca)
When analysing time series it is common to apply thresholds. For example, this could be to eliminate noise coming from the resolution limitations of measuring devices, or to focus on extreme events in the case of high thresholds. We analyse the effect of applying a threshold to the duration time of a birth-death process. This toy model allows us to work out the form of the duration time density in full detail. We find that duration times decay with the random walk exponent -3/2 for 'short' times, and the birth-death exponent -2 for 'long' times, where short and long are characterised by a threshold-imposed timescale. For sparse data the ultimate -2 exponent of the underlying (multiplicative) process may never be observed. This may have implications for the interpretation of threshold-specific decay exponents in real-world data.
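The setup is easy to reproduce numerically. Below is a minimal sketch (parameters invented): a critical birth-death process observed at event times behaves as an unbiased random walk, and one records the durations of its excursions above a threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

def excursion_durations(n_runs=5_000, threshold=5, t_max=10_000):
    """Durations of excursions above a threshold for a critical birth-death
    process observed at event times (equal birth/death probabilities give an
    unbiased random walk absorbed at 0). Parameters are illustrative only."""
    durations = []
    for _ in range(n_runs):
        n, t, start = 1, 0, None
        while 0 < n and t < t_max:
            n += rng.choice([-1, 1])          # one birth or death event
            t += 1
            if start is None and n >= threshold:
                start = t                      # excursion begins
            elif start is not None and n < threshold:
                durations.append(t - start)    # excursion ends
                start = None
        # unfinished excursions at absorption/cutoff are discarded
    return np.array(durations)

d = excursion_durations()
print("excursions:", d.size, " median duration:", np.median(d))
```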
Many complex systems are characterised by distinct types of interactions among a set of elementary units, and their structure can thus be better modelled by means of multi-layer networks. A fundamental open question is then how many layers are really necessary to accurately represent a multi-layered complex system. Drawing on the formal analogy between quantum density operators and the normalised Laplacian of a graph, we develop a simple framework to reduce the dimensionality of a multiplex network while minimizing information loss. We will show that the number of informative layers in some natural, social and collaboration systems can be substantially reduced, while multi-layer engineered and transportation systems, for which redundancy is purposely avoided in order to maximise their efficiency, are essentially irreducible.
The topological entropy is a measure of the complexity of a map. In this talk I will explain this notion in some detail and report on a recent result with H. H. Rugh on the regularity of the topological entropy of interval maps with holes, as a function of the hole position and size.
When driven out of equilibrium by a temperature gradient, fluids respond by developing a nontrivial, inhomogeneous structure according to the governing macroscopic laws. Here we show that such structure obeys strikingly simple universal scaling laws arbitrarily far from equilibrium, provided that both macroscopic local equilibrium (LE) and Fourier’s law hold. These results, which we prove for hard sphere fluids and more generally for systems with homogeneous potentials in arbitrary dimension, are likely to remain valid in the much broader family of strongly correlating fluids where excluded volume interactions are dominant. Extensive simulations of hard disk fluids confirm the universal scaling laws even under strong temperature gradients, suggesting that Fourier’s law remains valid in this highly nonlinear regime, with the expected corrections absorbed into a non-linear conductivity functional. Our results also show that macroscopic LE is a very strong property, allowing us to measure the hard-disk equation of state in simulations far from equilibrium with a surprising accuracy, comparable to the best equilibrium simulations. Subtle corrections to LE are found in the fluctuations of the total energy, which strongly point to the non-locality of the nonequilibrium potential governing the fluid’s macroscopic behavior out of equilibrium. Finally, our simulations show that both LE and the universal scaling laws are robust in the presence of strong finite-size effects, via a bulk-boundary decoupling mechanism by which all sorts of spurious finite-size and boundary corrections sum up to renormalize the effective boundary conditions imposed on the bulk fluid, which behaves macroscopically.
Membranes or membrane-like materials play an important role in many fields, ranging from biology to physics. These systems form a very rich domain in statistical physics. The interplay between geometry and thermal fluctuations leads to exciting phases, such as flat, tubular and disordered flat phases. Membranes can be divided into two groups: fluid membranes, in which the molecules are free to diffuse and which therefore have no shear modulus, and polymerized membranes, in which the connectivity is fixed, leading to elastic forces. This difference between fluid and polymerized membranes leads to a difference in their critical behaviour. For instance, fluid membranes are always crumpled, whereas polymerized membranes exhibit a phase transition between a crumpled phase and a flat phase. In this talk, I will focus only on polymerized phantom, i.e. non-self-avoiding, membranes. The critical behaviour of both isotropic and anisotropic polymerized membranes is studied using a nonperturbative renormalization group (NPRG) approach. This allows for the investigation of the phase transitions and the low-temperature flat phase in any internal dimension D and embedding dimension d. Interestingly, from the point of view of its mechanical properties, graphene is identified with the flat phase.
Nonlinear dynamics of neuron-neuron interaction via complex networks lie at the base of all brain activity. How such inter-cellular communication gives rise to behavior of the organism has been a long-standing question. In this talk, we first explore the evidence for the occurrence of such mesoscopic structures in the nervous system of the nematode C. elegans and in the macaque cortex. Next, we look at their possible functional role in the brain. We also consider the attractor network models of nervous system activity and investigate how modular structures affect the dynamics of convergence to attractors. We conclude with a discussion of the general implications of our results for basin size of dynamical attractors in modular networks whose nodes have threshold-activated dynamics. As such networks also appear in the context of intra-cellular signaling, our results may provide a glimpse of a universal (i.e., scale-invariant) theory for information processing dynamics in biology.
The Brauer loop model is an integrable lattice model based on the Brauer algebra, with crossings of loops allowed. The ground state of the transfer matrix is calculable (with some caveats) via the quantum Knizhnik--Zamolodchikov (qKZ) equation, a technique that expresses the ground state components in terms of each other. This method has been used frequently for lattice models of this type.
In 2005 de Gier and Nienhuis noticed a connection between the ground state of the periodic Brauer loop model and the degrees of the irreducible components of a certain algebraic scheme as calculated by Knutson in 2003. This connection was explored further by Di Francesco and Zinn-Justin in 2006, and proved shortly thereafter by Knutson and Zinn-Justin. The irreducible components can be labelled by the basis elements of the ground state, and the final proof involves showing that the multidegrees (an extension of the concept of polynomial degree) of these irreducible components also satisfy the qKZ equation. This connection seems similar in spirit to the connection between integrable models and combinatorics, but is much less explored.
Organisers: Leon Danon and Rosemary J. Harris
Complex systems theory has played an increasingly important role in infectious disease epidemiology. From the fundamental basis of transmission between two interacting individuals, complexity can emerge at all scales, from small outbreaks to global pandemics. Traditional ODE models rely on simplistic characterisations of interactions and transmission, but as more and more data become available these are no longer necessary. The descriptive and predictive power of transmission models can be improved by statistical descriptions of behaviour and movement of individuals, and tools from complex systems contribute greatly to the discussion.
This workshop will cover advances in mathematical epidemiology that have been shaped by complex systems approaches. The workshop is intended to cover a broad spectrum of topics, from theoretical aspects of transmission on networks to current work shaping public policy on diseases of livestock and honey bees.
Attendance at this workshop is free and open to everyone. However, for catering purposes, please register your attendance via email to l.danon@qmul.ac.uk or rosemary.harris@qmul.ac.uk by 21st March.
The meeting is part of the CoSyDy series, a London Mathematical Society Scheme 3 network bringing together UK mathematicians investigating Complex Systems Dynamics. Travel support is available for participants from the member nodes.
Schedule:
All talks will now be in the Maths Lecture Theatre of the Mathematics Building. The full programme is also available as a pdf attachment below.
In the past few years, multilayer, interdependent and multiplex networks have quickly become a big avenue in mathematical modelling of networked complex systems, with applications in social sciences, large-scale infrastructures, information and communications technology, neuroscience, etc. In particular, it has been shown that such networks can describe the resilience of large coupled infrastructures (power grids, Internet, water systems, ...) to failures, by studying percolation properties under random damage.

Percolation is perhaps the simplest model of network resilience and can be defined or extended to multiplex networks (defined as networks with multiple edge types) in many different ways. In some cases, new analytical approaches must be introduced to include features that are intrinsic to multiplex networks. In other cases, extensions of classical models give origin to new critical phenomena and complex behaviours.

Regarding the first case, I will illustrate a new theoretical approach to include edge overlap in a simple percolation model. Edge overlap, i.e. node pairs connected on different layers, is a feature common to many empirical cases, such as transportation networks, social networks and epidemiology. Our findings illustrate properties of multiplex resilience to random damage and may give assistance in the design of large-scale infrastructure.

Regarding the second aspect, I will present models of pruning and bootstrap percolation in multiplex networks. Bootstrap percolation may be seen as a simple activation process and has applications in many areas of science. Our extension to multiplex networks can be solved analytically, has potential applications in network security, and provides a step in dealing with dynamical processes occurring on the network.
Interacting self-avoiding walks as models for polymer collapse in dilute solution have been studied for many years. The canonical model, also known as the Theta model, is rather well understood, and it was expected that all models with short-range attractive interactions between “monomers” would give the same behaviour as the Theta model. In recent years a variety of models have been studied which do not conform to this expectation, and the observed behaviour depends on the specifics of the interaction and lattice.
In this talk I will review some of the known or conjectured results for these models, with particular attention to the self-avoiding trails and vertex-interacting self-avoiding walk models, and show how these models may be studied using extended transfer matrix methods (transfer matrices, DMRG and CTMRG methods). I will also present some results for the complex zeroes of the partition function as a method for finding critical points and estimates of the cross-over exponents for walk models.
In April 2010 I gave a seminar at the Santa Fe Institute where I demonstrated that certain classic problems in economics can be resolved by re-visiting basic tenets of the formalism of decision theory. Specifically, I noted that simple mathematical models of economic processes, such as the random walk or geometric Brownian motion, are non-ergodic. Because of the non-stationarity of the processes, observables cannot be assumed to be ergodic, and this leads to a difference in important cases between time averages and ensemble averages. In the context of decision theory, the former tend to indicate how an individual will fare over time, while the latter may apply to collectives but are a priori meaningless for individuals. The effects of replacing expectation values by time averages are staggering -- realistic predictions for risk aversion, market stability, and economic inequality follow directly. This observation led to a discourse with Murray Gell-Mann and Kenneth Arrow about the history and development of decision theory, where the first studies of stochastic systems were carried out in the 17th century, and its relation to the development of statistical mechanics where refined concepts were introduced in the 19th century. I will summarize this discourse and present my current understanding of the problems.
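The distinction is easy to see in a simulation. The following sketch (an illustrative gamble, not taken from the talk) multiplies wealth by 1.5 or 0.6 with equal probability each round: the expectation value grows by 5% per round, while the time-average growth rate (ln 1.5 + ln 0.6)/2 is about -0.05 per round, so the median player is ruined.

```python
import numpy as np

rng = np.random.default_rng(4)

up, down, n_rounds, n_players = 1.5, 0.6, 50, 100_000
factors = rng.choice([up, down], size=(n_players, n_rounds))
wealth = factors.prod(axis=1)                        # final wealth per player

print("expected factor per round:", 0.5 * (up + down))            # 1.05 > 1
print("ensemble average (noisy):", wealth.mean())                 # ~ 1.05**50 ~ 11.5
print("time-average growth rate:", 0.5 * (np.log(up) + np.log(down)))  # < 0
print("median final wealth:", np.median(wealth))                  # ~ 0.07
```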
Cultural change is often quantified by changes in frequency of cultural traits over time. Based on those (observable) frequency patterns, researchers aim to infer the nature of the underlying evolutionary processes and therefore to identify the (unobservable) causes of cultural change. Especially in archaeological and anthropological applications this inverse problem gains particular importance, as occurrence or usage frequencies are often the only available information about past cultural traits or traditions and the forces affecting them. In this talk we start analyzing the described inference problem and discuss it in the context of the question of which learning strategies human populations should deploy to be well-adapted to changing environmental conditions. To do so we develop a mathematical framework which establishes a causal relationship between changes in frequency of different cultural traits and the considered underlying evolutionary processes (in our case learning strategies). Besides gaining theoretical insights into the question of which learning strategies lead to efficient adaptation processes in changing environments, we focus on 'reverse engineering' conclusions about the learning strategies deployed in current or past populations, given knowledge of the frequency change dynamics over space and time. Using appropriate statistical techniques we investigate under which conditions population-level characteristics such as frequency distributions of cultural variants carry a signature of the underlying evolutionary processes and, if this is the case, how much information can be inferred from them. Importantly, we do not expect the existence of a unique relationship between observed frequency data and underlying evolutionary processes; to the contrary, we suspect that different processes can produce similar frequency patterns. However, our approach might help narrow down the range of possible processes that could have produced those observed frequency patterns, and thus still be instructive in the face of uncertainty. Rather than identifying a single evolutionary process that explains the data, we focus on excluding processes that cannot have produced the observed changes in frequencies. In the last part of the talk, we demonstrate the applicability of the developed framework to anthropological case studies.
Dangerous damage to mitochondrial DNA (mtDNA) between generations is ameliorated through a stochastic developmental process called the mtDNA bottleneck. The mechanism by which this process occurs is debated and lacks quantitative understanding, limiting our ability to prevent the inheritance of mtDNA disease. We address this problem by producing a new, physically motivated, generalisable theoretical model for cellular mtDNA populations during development. This model facilitates, for the first time, a rigorous statistical treatment of experimental data on mtDNA during development, allowing us to resolve, with quantifiable confidence, the mechanistic question of the bottleneck. The mechanism with most statistical support involves random turnover of mtDNA with binomial partitioning at cell divisions and increased turnover during folliculogenesis. We analytically solve the equations describing this mechanism, obtaining closed-form results for all mtDNA and heteroplasmy statistics throughout development, allowing us to explore the effects of potential sampling strategies and dynamic interventions for the bottleneck. We find that increasing mtDNA degradation during the bottleneck may provide a general therapeutic target to address mtDNA disease. Our theoretical advances thus allow the first rigorous statistical analysis of data on the bottleneck, resolving and obtaining analytic results for its debated mechanism and suggesting clinical strategies to assess and prevent the possibility of inherited mtDNA disease.
An analytical solution for a network growth model of intrinsic vertex fitness is presented, along with a proposal for a new paradigm in fitness-based network growth models. This class of models is classically characterised by a fitness linking mechanism that governs the attachment rate of new links to existing nodes, and a distribution of node fitness that measures the attractiveness of a node. It is argued in the present paper that this distinction is unnecessary; instead, the linking propensity of nodes can be expressed in terms of a ranking among existing nodes, which reduces the complexity of the problem. At each time-step of this dynamical model, either a new node joins the network and is attached to one of the existing nodes, or a new edge is added between two existing nodes with probability proportional to the nodes' attractiveness. The full analytic theory connecting the fitness distribution, the linking function, and the degree distribution is constructed. Given any two of these characteristics, the third one can be determined in closed form. Furthermore, additional statistics are computed to fully describe every aspect of this network model. One particularly interesting finding is that for a factorisable, and not necessarily symmetric, linking function, very restrictive assumptions on the exact form of the linking function need to be imposed to find a power-law degree distribution within this class of models.
Who are the most influential players in a social network? What's the origin of an epidemic outbreak? The answers to simple questions like these can hide incredibly difficult computational problems that require powerful methods for the inference, optimization, and control of dynamical processes on large networks. I will present a statistical mechanics approach to inverse dynamical problems in the idealized framework provided by simple models of irreversible contagion and diffusion on networks (the linear threshold model and the susceptible-infected-removed epidemic model). Using the cavity method (belief propagation), it is possible to explore the large-deviation properties of these dynamical processes and develop efficient message-passing algorithms to solve optimization and inference problems even on large networks.
A polymer grafted to a surface exerts pressure on the substrate. Similarly, a surface-attached vesicle exerts pressure on the substrate. By using directed walk models, we compute the pressure exerted on the surface for grafted polymers and vesicles, and the effect of surface binding strength and osmotic pressure on this pressure.
First we discuss general fractal and critical aspects of the brain as indicated by recent fMRI analysis. We then turn to the analysis of EEG signals from the brains of musicians and listeners during performances of improvised and non-improvised classical music. We are interested in differences between the responses to the two different ways of playing music. We use measures of information flow to try to pinpoint differences in the structure of the network constituted by all the EEG electrodes of all musicians and listeners.
The surface drawn by a potential energy function, which is usually a multivariate nonlinear function, is called the potential energy landscape (PEL) of the given physical/chemical system. The stationary points of the PEL, where the gradient of the potential vanishes, are used to explore many important physical and chemical properties of the system. Recently, we have employed the numerical algebraic geometry (NAG) method to study the stationary points of the PELs of various models arising from physics and chemistry, and have discovered many of their interesting characteristics. In this talk, I will mention some of these results after giving a very brief introduction to the NAG method. I will then go on to discuss our latest adventure: exploring the PELs of random potentials with NAG, which will address not only one of the classic problems in algebraic geometry but will also find numerous applications in different areas such as string theory, statistical physics, neural networks, etc.
Recently models of evolution have begun to incorporate structured populations, including spatial structure, through the modelling of evolutionary processes on graphs (evolutionary graph theory). We shall start by looking at some work on quite simple graphs. One limitation of this otherwise quite general framework, however, is that interactions are restricted to pairwise ones, through the edges connecting pairs of individuals. Yet many animal interactions can involve many players, and theoretical models also describe such multi-player interactions. We shall discuss a more general modelling framework of interactions of structured populations with the focus on competition between territorial animals, where each animal or animal group has a "home range" which overlaps with a number of others, and interactions between various group sizes are possible. Depending upon the behaviour concerned we can embed the results of different evolutionary games within our structure, as occurs for pairwise games such as the prisoner’s dilemma or the Hawk-Dove game on graphs. We discuss some examples together with some important differences between this approach and evolutionary graph theory.
Why are large, complex ecosystems stable? For decades it has been conjectured that they have some unidentified structural property. We show that trophic coherence -- a hitherto ignored feature of food webs which current structural models fail to reproduce -- is significantly correlated with stability, whereas size and complexity are not. Together with cannibalism, trophic coherence accounts for over 80% of the variance in stability observed in a 16-food-web dataset. We propose a simple model which, by correctly capturing the trophic coherence of food webs, accurately reproduces their stability and other basic structural features. Most remarkably, our model shows that stability can increase with size and complexity. This suggests a key to May’s Paradox, and a range of opportunities and concerns for biodiversity conservation.
The inclusion process is a driven diffusive system which exhibits a condensation transition in certain scaling limits, where a fraction of all particles condenses on a single lattice site. We study the dynamics of this phenomenon, and identify all relevant dynamical regimes and corresponding time scales as a function of the system size. This includes a coarsening regime where clusters move on the lattice and exchange particles, leading to a growing average cluster size. Suitable observables exhibit a power law scaling in this regime before they saturate to stationarity, following an exponential decay depending on the system size. For symmetric dynamics we have rigorous results on finite lattices in the limit of infinitely many particles (joint work with Frank Redig and Kiamars Vafayi). We have further heuristic results on one-dimensional periodic lattices in the thermodynamic limit, covering totally asymmetric and symmetric dynamics (joint work with Jiarui Cao and Paul Chleboun), and preliminary results for a generalized version of the symmetric process that exhibits finite-time blow-up (joint work with Yu-Xi Chau).
Adaptive networks are models of complex systems in which the structure of the interaction network changes on the same time-scale as the status of the nodes. For instance, consider the spread of a disease over a social network that is changing as people try to avoid the infection. In this talk I will try to persuade you that demographic noise (random fluctuations arising from the discrete nature of the components of the network) plays a major role in determining the behaviour of these models. These effects can be studied analytically by employing a reduced-dimension Markov jump process as a proxy.
The immune system can recall and execute a large number of memorized defense strategies in parallel. The explanation for this ability turns out to lie in the topology of immune networks. We study a statistical mechanical immune network model with 'coordinator branches' (T-cells) and 'effector branches' (B-cells), and show how the finite connectivity enables the system to manage an extensive number of immune clones simultaneously, even above the percolation threshold. The model is solvable using replica techniques, in spite of the fact that the network has an extensive number of short loops.
In this seminar I will discuss two distinct approaches to the structure of the world around us. In the first, I'll discuss our implementation of a battery of thousands of signal processing tools as part of an attempt to organize our methods and to perform a sky-survey of types of dynamics. In the second, I'll cover our work connecting topics in network analysis to parameterized complexity, and outline how the complexity of some routing tasks on graphs scales with the number of communities rather than the number of nodes.
Abstract (Short): I will review ideas to approach the Graph Isomorphism Problem with tools linked to Quantum Information.
Many networks have cohesive groups of nodes called "communities". The study of community structure borrows ideas from many areas, and there exist myriad methods to detect communities algorithmically. Community structure has also been insightful in many applications, as it can reveal social organization in friendship networks, groups of simultaneously active brain regions in functional brain networks, and more. My collaborators and I have been very active in studying community structure, and I will discuss some of our work on both methodological development and applications. I'll include examples from subjects like social networks, brain networks, granular materials, and more.
Over the past decade, complex networks have come to be recognized as powerful tools for the analysis of complex systems. The defining feature of complexity is emergence; complex systems exhibit phenomena that do not originate in the parts of the system, but rather in their interactions. The underlying structural and dynamical properties behind these phenomena are therefore, almost by definition, delocalized across the network. But a major driving force of network theory is the hope that we can nevertheless trace these properties back to localized structures in the network. In other words, we study global network-wide phenomena but often search for the magical red arrow that points at a certain part of the network and says 'This causes it!'. In this talk I focus on the analytical investigation of network dynamics, where the network is considered as a large dynamical system. By combining approaches from dynamical systems theory and statistical physics with insights from network research, analytical progress in the investigation of these systems can be made. I show that network dynamics is generally inherently nonlocal, but also point out a fundamental reason why many important real-world phenomena can nevertheless be understood by a local dynamical analysis.
We introduce a framework for compressing complex networks into powergraphs with overlapping powernodes. The most compressible components of a given network provide a highly informative sketch of its overall architecture. In addition this procedure also gives rise to a novel, link-based definition of overlapping node communities in which nodes are defined by their relationships with sets of other nodes, rather than through connections within the community. We show that this approach yields valuable insights into the large-scale structure of transcription networks, food webs, and social networks, and allows for novel ways in which network architecture can be studied, defined and classified. Furthermore, when paired with enrichment analysis of node classification terms, this method can provide a concise overview of the dominant conceptual relationships that define the network.
(Joint work with Matthew Urry)
We consider the problem of learning a function defined on the nodes of a graph, in a Bayesian framework with a Gaussian process prior. We show that the relevant covariance kernels have some surprising properties on large graphs, in particular as regards their approach to the limit of full correlation of the function values across all nodes.
Our main interest is in predicting the learning curves, i.e. the typical generalization error given a certain number of examples. We describe an approach for deriving these predictions that becomes exact in the limit of large random graphs. The validity of the method is broad and covers random graphs specified by arbitrary degree distributions, including the power-law distributions typical of social and other networks. We also discuss the effects of normalization of the covariance kernels. These are more intricate than for functions of real input variables, because of the variation in local connectivity structure on a graph. Time permitting, recent extensions to the case of learning with a mismatched prior will be covered.
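A minimal sketch of the setting (one common kernel choice, not necessarily the one analysed in this work): build a covariance kernel on the nodes of a graph from its Laplacian and normalise it so that the average prior variance is one, the step whose subtleties on graphs are discussed above. The graph and parameters below are invented.

```python
import numpy as np
import networkx as nx

def graph_kernel(g, scale=1.0, p=2):
    """Covariance kernel for a GP on graph nodes, from the 'random-walk'
    family K = (I + scale * L)^(-p), globally normalised so that the
    average prior variance over nodes is 1."""
    L = nx.normalized_laplacian_matrix(g).toarray()
    K = np.linalg.inv(np.eye(len(L)) + scale * L)  # positive definite
    K = np.linalg.matrix_power(K, p)
    return K / K.diagonal().mean()                 # global normalisation

g = nx.erdos_renyi_graph(200, 0.03, seed=6)
K = graph_kernel(g)
print("mean prior variance:", K.diagonal().mean())  # = 1 by construction
print("spread of local variances:", K.diagonal().std())  # varies with degree
```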
We will present some recent results on the energetic cost of information processing in the framework of stochastic thermodynamics. This theory provides a consistent description of non-equilibrium processes governed by Markovian dynamics. We shall discuss the physical role of the information exchanged during measurement, feedback and erasure for systems driven by an external controller. We will also address the issue of quantifying the thermodynamic cost of sensing for autonomous two-component systems, and discuss the connection between dissipation and information-theoretic correlation.
A complex system in science and technology can often be represented as a network of interacting subsystems or subnetworks. If we follow a reductionist approach, it is natural (though not always wise!) to attempt to describe the dynamics of the network in terms of the dynamics of the subsystems of the network. Put another way, we often have a reasonable understanding of the "pieces", but how do they fit together, and what do they do collectively? In the simplest, and most studied, cases, the subnetworks all run on the same clock (are updated simultaneously), and dynamics is governed by a fixed set of (usually analytic) dynamical equations: we say the network is synchronous (this is classical dynamics). In biology, especially neuroscience, and technology, for example large distributed systems, these assumptions may not hold: components may run on different clocks, there may be switching between different dynamical equations and, most significantly and quite unlike what happens in a classical synchronous network, component parts of the network may run independently of the rest of the network, and even stop, for periods of time. We say networks of this type are asynchronous.
It is a major challenge to develop the mathematical theory of dynamics on asynchronous networks. In this talk, we describe examples of dynamics on synchronous and asynchronous networks and point out how properties such as switching are forced by an asynchronous structure. We also indicate relationships with random dynamical systems and problems related to "qualitative computing".
Motivated by the classification of nonequilibrium steady states suggested by R. K. P. Zia and B. Schmittmann (J. Stat. Mech. 2007 P07012), I propose to measure the violation of the detailed balance criterion by the p-norm of the matrix formed by the probability currents. Its asymptotic analysis for the totally asymmetric simple exclusion process motivates the definition of a 'distance' from equilibrium. In addition, I show that the latter quantity and the average activity are both related to the probability distribution of the entropy production. Finally, considering the open asymmetric simple exclusion process and the open zero-range process, I show that the current of particles gives an exact measure of the violation of detailed balance.
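Schematically, for a Markov jump process with transition rates w_ij and stationary distribution pi, the stationary probability currents form the matrix

```latex
J_{ij} = \pi_i\, w_{ij} - \pi_j\, w_{ji},
```

which vanishes identically exactly when detailed balance holds, so a matrix norm ||J||_p of it provides the proposed measure of the violation.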
In this talk I will present a dynamical system called Fictitious Play Dynamics. This is a basic learning algorithm from Game Theory, modelling learning behaviour of players repeatedly playing a game. Dynamically, it can be described as a non-smooth (continuous and piecewise linear) flow on the three-sphere, with global sections whose first return maps are continuous, piecewise affine and area-preserving. I will show how these systems give rise to very intricate behaviour and how they can be studied via a family of rather simple planar piecewise affine maps.
The talk will be about 'contractive Markov systems' - a generalisation of an iterated function system. Under a 'contraction-on-average' condition, such systems have a unique invariant measure. By studying how the spectral properties of a certain linear operator acting on an appropriate function space perturb, we will discuss the stochastic stability of this invariant measure and other probabilistic results.
It is well-known from Crauel and Flandoli (Additive noise destroys a pitchfork bifurcation, J. Dyn. & Diff. Eqs 10 (1998), 259-274) that adding noise to a system with a deterministic pitchfork bifurcation yields a unique random attracting fixed point with negative Lyapunov exponent for all parameters. Based on this observation, they conclude that the deterministic bifurcation is destroyed by the additive noise. However, we show that there is a qualitative change in the random dynamics at the bifurcation point, in the sense that, after the bifurcation, the Lyapunov exponent cannot be observed almost surely in finite time. We associate this bifurcation with a breakdown of both uniform attraction and equivalence under uniformly continuous topological conjugacies, and with non-hyperbolicity of the dichotomy spectrum at the bifurcation point. This is joint work with Mark Callaway, Jeroen Lamb and Doan Thai Son (all at Imperial College London).
The interactions between the components of complex networks are often directed. Proper modeling of such systems frequently requires the construction of ensembles of directed graphs with a given sequence of in- and out-degrees. Previous algorithms used to generate such samples have either unknown mixing times or often lead to unacceptably many rejections due to self-loops and multiple edges. I will present a method that can directly construct all possible directed realizations of a given degree sequence. This method is rejection-free, guarantees the independence of the constructed samples, and allows the calculation of statistical averages of network observables according to a uniform or otherwise chosen distribution.
The quantification of the complexity of networks is, today, a fundamental problem in the physics of complex systems. A possible roadmap to solve the problem is via extending key concepts of statistical mechanics and information theory to networks. In this talk we discuss recent works defining the Shannon entropy of a network ensemble and evaluating how it relates to the Gibbs and von Neumann entropies of network ensembles. The quantities we introduce here play a crucial role for the formulation of null models of networks through maximum-entropy arguments and contribute to inference problems emerging in the field of complex networks.
Research in the field of relativistic quantum information aims at finding ways to process information using quantum systems taking into account the relativistic nature of spacetime. Cutting edge experiments in quantum information are already reaching regimes where relativistic effects can no longer be neglected. Ultimately, we would like to be able to exploit relativistic effects to improve quantum information tasks. In this talk, we propose the use of moving cavities for relativistic quantum information processing. Using these systems, we will show that non-uniform motion can change entanglement, affecting quantum information protocols such as teleportation between moving parties. Via the equivalence principle, our results also provide a model of entanglement generation by gravitational effects.
The problem of convergence to equilibrium for diffusion processes is of theoretical as well as applied interest, for example in nonequilibrium statistical mechanics and in statistics, in particular in the study of Markov Chain Monte Carlo (MCMC) algorithms. Powerful techniques from analysis and PDEs, such as spectral theory and functional inequalities (e.g. logarithmic Sobolev inequalities), can be used in order to study convergence to equilibrium. Quite often, the diffusion processes that appear in applications are degenerate (in the sense that noise acts directly on only some of the degrees of freedom of the system) and/or nonreversible. The study of convergence to equilibrium for such systems requires the study of non-selfadjoint, possibly non-uniformly elliptic, second order differential operators. In this talk we will prove exponentially fast convergence to equilibrium for such diffusion processes using the recently developed theory of hypocoercivity. Furthermore, we will show how the addition of a nonreversible perturbation to a reversible diffusion can speed up convergence to equilibrium. This is joint work with M. Ottobre, K. Pravda-Starov, T. Lelievre and F. Nier.
The relation between quantum systems and their classical analogues is a subtle matter that has been investigated since the early days of quantum mechanics. Today, we have at our disposal powerful tools to formulate in a precise way the semi-classical limit. The understanding of quantum classical correspondence is of importance (a) for the interpretation and practical understanding of quantum effects, and (b) as a basis of a variety of simulation methods for quantum spectra and dynamics. Recently, there has been a growing interest in so-called non-Hermitian or "complexified" quantum theories. Applications include (i) decay, (ii) transport and scattering phenomena, (iii) dissipative systems, and (iv) PT-symmetric theories. In this talk I will present an overview of some of the issues and novelties arising in the investigation of the classical analogues of such "complexified" quantum theories, with applications ranging from optics to cold atoms and Bose-Einstein condensates.
Energy landscape methods make use of the stationary points of the energy function of a system to infer some of its collective properties. Recently this approach has been applied to equilibrium phase transitions, showing that a connection between some properties of the energy landscape and the occurrence of a phase transition exists, at least for certain simple models.
I will discuss the study of the energy landscape of classical O(n) models defined on regular lattices and with ferromagnetic interactions. This study suggests an approximate expression for the microcanonical density of states of the O(n) models in terms of the energy density of the Ising model. If correct, this would imply the equivalence of the critical values of the energy densities of a generic O(n) model and the n=1 case, i.e., a system of Ising spins with the same interactions. Numerical and analytical results are in good agreement with this prediction.
The sociological notion of F-formations denotes the spatial configurations that people assume in social interactions, and an F-formation system denotes all the behavioural aspects that go into establishing and sustaining an F-formation between people. Kendon (1990) identified some of the geometrical aspects of such F-formations that have to do with the spatial positions and orientations of interlocutors. In this talk, I will be presenting some of our two-dimensional and three-dimensional simulations that are based on Kendon's geometrical aspects of F-formations. Discussions will also extend to the evaluations of the simulations carried out by participants during a pilot study, their outcomes and implications.
Bibliography:
Kendon A. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press, 1990.
The theory of large deviations is at the heart of recent progress in the field of statistical physics. I will discuss in this talk some developments that are interesting for non-equilibrium physics. In particular, I will focus on symmetries of large deviations and on analytical large-deviation results.
Direct numerical continuation in physical experiments is made possible by the combination of ideas from control theory and nonlinear dynamics, resulting in a family of methods known as control-based continuation. This family of methods allows both stable and unstable periodic orbits to be tracked through bifurcations such as a fold by varying suitable system parameters. As such, the intricate details of the bifurcation structure of a physical experiment can be investigated. In its original form, control-based continuation was based on Pyragas' time-delayed feedback control strategy, suitably modified to overcome the stability issues that occur in the vicinity of a saddle-node bifurcation (fold). It has since become a much more general methodology.
There are a wide range of possible applications for such investigations across engineering and the applied sciences. Specifically, there is a great deal of promise in combining such methods with ideas such as numerical substructuring, whereby a numerical model is coupled to a physical experiment in real-time via actuators and sensors.
The basic scheme (known as control-based continuation) works with standard numerical methods; however, the results are sub-optimal due to the comparative expense of making an experimental observation and the inherent noise in the measurement. This talk will present the current state-of-the-art and possibilities for future research in this area, from the development of numerical methods and control strategies to more fundamental dynamical systems research.
Strong thermodynamical arguments exist in the literature which show that the entropy S of, say, a many-body Hamiltonian system should be extensive (i.e., S(N)~N) independently of the range of the interactions between its elements. If the system has short-range interactions, an additive entropy, namely the Boltzmann-Gibbs one, does the job. For long-range interactions, nonergodicity and strong correlations are generically present, and nonadditive entropies become necessary to preserve the desired entropic extensivity. These and related recent points (q-Fourier transform, large-deviation theory, nonlinear quantum mechanics) will be briefly presented. BIBLIOGRAPHY: (i) J.S. Andrade Jr., G.F.T. da Silva, A.A. Moreira, F.D. Nobre and E.M.F. Curado, Phys. Rev. Lett. 105, 260601 (2010); (ii) F.D. Nobre, M.A. Rego-Monteiro and C. Tsallis, Phys. Rev. Lett. 106, 140601 (2011); (iii) http://tsallis.cat.cbpf.br/biblio.htm
We study the steady state of a finite XX chain coupled at its boundaries to quantum reservoirs made of free spins that interact one after the other with the chain. The two-point correlations are calculated exactly and it is shown that the steady state is completely characterized by the magnetization profile and the associated current. Except at the boundary sites, the magnetization is given by the average of the reservoirs' magnetizations. The steady state current, proportional to the difference in the reservoirs' magnetizations, shows a non-monotonic behavior with respect to the system-reservoir coupling strength, with an optimal current state for a finite value of the coupling. Moreover, we show that the steady state can be described by a generalized Gibbs state.
The metric theory of Diophantine approximation on fractal sets is developed in which the denominators of the rational approximants are restricted to lacunary sequences. The case of the standard middle third Cantor set and the sequence {3^n : n \in N} is the starting point of our investigation. Our metric results for this simple setup answer a problem raised by Mahler. As with all 'good' problems, its solution opens up a can of worms.
It turns out that the one-dimensional probability distributions of annihilating Brownian motions on the real line form a Pfaffian point process. It also turns out that this Pfaffian point process describes the one-dimensional statistics of real eigenvalues in the Ginibre ensemble of random matrices. Is the real sector of the Ginibre ensemble equivalent to annihilating Brownian motions as a stochastic process?
Many turbulent flows undergo sporadic random transitions after long periods of apparent statistical stationarity. A straightforward study of these transitions, through direct numerical simulation of the governing equations, is nearly always impracticable. In this talk, we consider two-dimensional and geostrophic turbulence models with stochastic forces in regimes where two or more attractors coexist. We propose a non-equilibrium statistical mechanics approach to the computation of rare transitions between two attractors. Our strategy is based on the large deviation theory for stochastic dynamical systems (Freidlin-Wentzell theory) derived from a path integral representation of the stochastic process.
"Music exists in an infinity of sound. I think of all music as existing in the substance of the air itself. It is the composer's task to order and make sense of sound, in time and space, to communicate something about being alive through music." ~ Libby LarsenIt is the performer's task then to intuit this order, to make sense of the music -- in ways that may augment or be different from the composer's own understanding -- and to communicate the interpreted structure through prosodic cues to the listener. Just as physicists develop mathematical models to make sense of the world in which we live, music science researchers seek mathematical models to represent and manipulate music structures, both frozen in time (e.g. as mapped out in a score), or communicated in performance. Mathematics is also the glue that binds music to digital representations, allowing for large-scale computations carried out by machines.I shall begin by introducing some of my own work originating in music structure representation and analysis, then move on to more recent investigations into aspects of music prosody. A key element of this talk will be the posing of some open problems in the scientific study of music structure and expressive performance, in which I hope to solicit interest, and to which I shall invite responses.
I will explain how one can track unstable periodic orbits in experiments using non-invasive feedback control in the spirit of Pyragas' time-delayed feedback. In some (experimentally very common) situations one can achieve non-invasiveness of the control without subtracting a delayed term in the control and without having to apply Newton iterations. I will show some recent experimental results of David Barton, who was able to trace out a resonance surface of a mechanical nonlinear oscillator around a cusp in two parameters with high accuracy.
Evolutionary dynamics have been traditionally studied in infinitely large homogeneous populations where each individual is equally likely to interact with every other individual. However, real populations are finite and characterised by complex interactions among individuals. Over the last few years there has been a growing interest in studying evolutionary dynamics in finite structured populations represented by graphs. An analytic approach to the evolutionary process is possible when the contact structure of the population can be represented by simple graphs with a lot of symmetry and little complexity, such as the complete graph, the circle and the star graph. On complex graphs such an approach is usually infeasible, and various assumptions and approximations are necessary for the exploration of the process. We propose a powerful method for the approximation of the evolutionary process in populations with a complex structure. Comparisons of the predictions of the constructed model with the results of computer simulations reveal the effectiveness of the method and the improved accuracy that it provides when compared to well-known pair approximation methods.
I will discuss recent results concerning topological invariants for Hénon-like maps in dimension two, using the renormalisation apparatus constructed by de Carvalho, Lyubich, Martens and myself.
The new faces of the Feigenbaum point: Dynamical hierarchy, self-similar network, theoretical game and stationary distribution. In this talk we first show that the recently revealed features of the dynamics toward the Feigenbaum attractor form a hierarchical construction with modular organization that leads to a clear-cut emergent property. Then we transcribe the well-known Feigenbaum scenario into families of networks via the horizontal visibility algorithm, derive exact results for their degree distributions, recast them in the context of the renormalization group and find that its fixed points coincide with those of network entropy optimization. Next we study a discrete-time version of the replicator equation for two-strategy theoretical games. Their stationary properties differ from those of continuous time for sufficiently large values of the parameters, where periodic and chaotic behavior replaces the usual fixed-point population solutions. We observe the familiar period-doubling and chaotic-band-splitting attractor cascades of unimodal maps. Finally, we look at the limit distributions of sums of deterministic chaotic variables in unimodal maps and find a remarkable renormalization group structure associated with the operation of increment of summands and rescaling. In this structure—where the only relevant variable is the difference in control parameter from its value at the transition to chaos—the trivial fixed point is the Gaussian distribution and a novel nontrivial fixed point is a multifractal distribution that emulates the Feigenbaum attractor.
In this talk we will discuss the application of the Fluctuation Theorem (FT) to systems where the heat bath is out of equilibrium. We first recall the main properties of the FT starting from experimental results. We then discuss the results of an experiment where we measure the energy fluctuations of a Brownian particle confined by an optical trap in an aging gelatin after a very fast quench (less than 1 ms). The strong non-equilibrium fluctuations due to the assembly of the gel are interpreted, within the framework of the FT, as a heat flux from the particle towards the bath. We derive an analytical expression for the heat probability distribution, which fits the experimental data and satisfies a fluctuation relation similar to that of a system in contact with two baths at different temperatures. We finally show that the measured heat flux is related to the violation of the equilibrium Fluctuation-Dissipation Theorem for the system.
Heterogeneity is a ubiquitous aspect of many social and economic complex systems. The analysis and modeling of heterogeneous systems is quite difficult because each economic and social actor is characterized by different attributes and it is usually acting on a multiplicity of time scales. We use statistically validated networks [1], a recently introduced method to validate links in a bipartite system, to investigate heterogeneous social and economic systems. Specifically, we investigate the classic movie-actor system [1] and the trading activity of individual investors of Nokia stock [2]. The method is unsupervised and allows constructing networks of social actors where the links indicate co-occurrence of events or decisions. Each link is statistically validated against a null hypothesis taking into account system heterogeneity. Community detection is performed on the statistically validated networks and the communities (partitions) obtained are investigated with respect to the over-expression or under-expression of the attributes characterizing the social actors and/or their activities [3].
[1] Michele Tumminello, Salvatore Miccichè, Fabrizio Lillo, Jyrki Piilo and Rosario N. Mantegna, Statistically validated networks in bipartite complex systems (2011) PLoS ONE 6(3): e17994.
[2] Michele Tumminello, Fabrizio Lillo, Jyrki Piilo and Rosario N. Mantegna, Identification of clusters of investors from their real trading activity in a financial market (2012) New J. Phys. 14 013041.
[3] Michele Tumminello, Salvatore Miccichè, Fabrizio Lillo, Jan Varho, Jyrki Piilo and Rosario N. Mantegna, Community characterization of heterogeneous complex systems (2011) J. Stat. Mech. P01019.
This talk will discuss a stability index that characterises the local geometry of the basin of attraction for a dynamical system. The index is of particular interest for attractors that are not asymptotically stable - such attractors are known to arise robustly, for example, as heteroclinic cycles in systems with symmetries.
In this talk I will introduce the issue of the emergence of cooperation, identified by Science as one of the 25 most important problems for the 21st century. I will discuss the puzzle that cooperative behavior poses in the light of evolutionary theory and the importance of cooperation in its major steps. Then I will present the main tool with which one can study this problem, namely game theory. I will review games played by two players and their classical and evolutionary versions. Finally, I will devote some time to recent experiments addressing the relevance of the structure of the evolving population for the emergence of cooperation.
The decay of classical temporal correlations represents a fundamental issue in dynamical systems theory, and, in the generic setting of systems with a mixed phase space, it still presents a remarkable amount of open problems. We will describe prototype systems where the main questions arise, and discuss some recent progress where polynomial mixing rates are linked to large deviations estimates.
The joint spectral radius of a finite set of square matrices is defined to be the maximum possible exponential growth rate of products of matrices drawn from that set. In joint work with Nikita Sidorov, Kevin Hare and Jacques Theys, we examine a certain one-parameter family of pairs of matrices in detail, showing that the matrix products which realise this optimal growth rate correspond to Sturmian sequences with a particular characteristic ratio. We investigate the dependence of this characteristic ratio on the parameter, and show that it takes the form of a Devil's staircase. We establish some fine properties of this Devil's staircase, answering a question posed by T. Bousch.
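For readers unfamiliar with the joint spectral radius, the following sketch (with made-up matrices, not the one-parameter family of the talk) brackets it by brute force, using the standard bounds spectral_radius(P)^(1/k) <= JSR <= max ||P||^(1/k) over all products P of length k:

```python
import itertools
import numpy as np

def jsr_bounds(mats, k):
    """Brute-force lower/upper bounds on the joint spectral radius from
    all products of length k drawn from the given set of matrices."""
    lower, upper = 0.0, 0.0
    for word in itertools.product(mats, repeat=k):
        P = word[0]
        for M in word[1:]:
            P = P @ M
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / k))
    return lower, upper

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.6, 1.0]])   # 0.6 plays the role of a parameter
print(jsr_bounds([A, B], k=8))           # the two values squeeze the JSR
```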
I discuss the synchronization of cows, first using an agent-based model and then formulating a mechanistic model for the daily activities of a cow (eating, lying down, and standing) in terms of a piecewise smooth dynamical system. I analyze the properties of this bovine dynamical system and develop an exact integrative form as a discrete-time mapping. I then couple multiple cow "oscillators" together to study synchrony and cooperation in cattle herds. With this abstract approach, I not only investigate equations with interesting dynamics but also develop interesting biological predictions. In particular, the model illustrates that it is possible for cows to synchronize less when the coupling is increased.
The macroscopic behaviour of microscopically defined particle models is investigated by equation-free techniques, where no explicitly given equations are available for the macroscopic quantities of interest. We investigate situations with an intermediate number of particles, where the number of particles is too large for microscopic investigations of all particles and too small for analytical investigations using many-particle limits and density approximations. By developing and combining very robust numerical algorithms, it was possible to perform an equation-free numerical bifurcation analysis of macroscopic quantities describing the structure and pattern formation in particle models. The approach will be demonstrated for two examples from traffic and pedestrian flow. The traffic model, describing flow on a single-lane highway, exhibits not only uniform flow solutions but also travelling waves of high-density regions. Bifurcations and co-existence of these two solution types are investigated. The pedestrian flow shows the emergence of an oscillatory pattern of two crowds passing a narrow door in opposite directions. The oscillatory solutions appear due to a Hopf bifurcation, which is detected numerically by an equation-free continuation of a stationary state of the system. Furthermore, an equation-free two-parameter continuation of the Hopf point is performed to investigate the oscillatory behaviour in detail, using the door width and the relative velocity of the pedestrians in the two crowds as parameters.
A recent study into the geometry underlying discontinuities in dynamics revealed some surprises. The problems of interest are fundamental, things like: frictional sticking, electronic switching, protein activation and neuron spiking. When a discontinuity occurs at some threshold value in a system of differential equations, the solutions that result might not be unique. Besides the myriad cute models from applications, we want to know what discontinuities really tell us about dynamics in the real world. Non-unique solutions are easily dismissed as unphysical, yet they tell us something about the extreme behaviour made possible in the limit as a sudden change becomes almost discontinuous. Initially unique solutions may become multi-valued, revealing extreme sensitivity to initial conditions, a breakdown of determinism, yet the possible outcomes lie in a well-defined set: an "explosion". An intriguing connection between discontinuities and singular perturbations is revealed by studying the so-called two-fold singularities and canards, borrowing ideas from nonstandard analysis along the way. The outcomes have been seen in superconductor experiments, are possible in control circuits, they are hidden in plain sight in the dynamics of friction, impacts, and neuron spiking, and they lead to non-deterministic forms of chaos.
We study the effect of external forcing on the saddle-node bifurcation pattern of interval maps. Replacing fixed points of unperturbed maps by invariant graphs, we obtain direct analogues to the classical result both in a measure-theoretic and a topological setting. As an interesting new phenomenon, a dichotomy appears for the behaviour at the bifurcation point, which allows the bifurcation to be either "smooth" (as in the classical case) or "non-smooth".
This talk investigates the effect of network topology on the fair allocation of network resources among a set of agents, an all-important issue for the efficiency of transportation networks all around us. We analyse a generic mechanism that distributes network capacity fairly among existing flow demands, and describe some conditions under which the problem can be solved by semi-analytical methods. We find that, for some regions of the parameter space, a fair allocation implies a decrease of at least 50% from maximum throughput. We also find that the histogram of the flow allocations assigned to the agents decays as a power-law with exponent -1. Our semi-analytical framework suggests possible explanations for the well-known reduction of throughput in fair allocations. It also suggests that the network topology can lead to highly uneven (but fair) distributions of resources, a remark of caution to network designers.
The joint spectral radius (JSR) of a finite set of real d × d matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth.
The purpose of this talk is to present the first completely explicit family of 2 × 2 matrices which do not possess the finiteness property. Time permitting, I will also mention recent advances concerning maximizing sequences (those which realize the JSR) of polynomial complexity.
We investigate the problem of Diophantine approximation on rational surfaces using ergodic-theoretic techniques. It turns out that this problem is closely related to the asymptotic distribution of orbits for a suitably constructed dynamical system. Using this connection we establish analogues of Khinchin's and Jarnik's theorems in our setting.
This talk is about recurrence time statistics for chaotic maps with strange attractors - focusing on the probability distributions that describe the typical recurrence statistics to certain subsets of the phase space. The limiting probability distributions depend on the geometry of the (chaotic) attractor, the dimension of the SRB measure on the attractor, and the observables on the system.
Gonzalez-Tokman, Hunt and Wright studied a metastable expanding system which is described by a piecewise smooth and expanding interval map. It is assumed that the metastable map has two invariant sub-intervals and exactly two ergodic invariant densities. Due to small perturbations, the system starts to allow for infrequent leakage through subsets (called holes) of the initially invariant sub-intervals, forcing the two invariant sub-systems to merge into one perturbed system which has exactly one invariant density. It is proved that the unique invariant density of the perturbed interval map can be approximated by a particular convex combination of the two invariant densities of the original interval map, with the weights in the combination depending on the sizes of the holes.
In this talk we will present analogous results in two cases: 1. intermittent interval maps; 2. Randomly perturbed expanding maps.
Organisers: Rosemary J. Harris and Hugo Touchette
Much effort has focused recently on developing models of stochastic systems that are non-Markovian or show long-range correlations in space or time, or both. The need for such models has come from many different fields, ranging from mathematical finance to biophysics, and from engineering to statistical mechanics.
This workshop will bring together a number of mathematicians and engineers interested in stochastic processes having long-range correlations, with a view to share ideas as to how we can define such correlations mathematically, as well as to how we can devise stochastic processes that go beyond the Markov model.
The meeting is part of the CoSyDy series, a London Mathematical Society Scheme 3 network bringing together UK mathematicians investigating Complex Systems Dynamics.
See the full programme with abstracts in the attachment.
All are welcome. Registration is not required, but for catering purposes we would appreciate if you could confirm your attendance to the organisers.
In equilibrium statistical mechanics macroscopic observables are calculated as averages over statistical ensembles, which represent probability distributions of the microstates of the system under given constraints. Away from equilibrium, ensemble theory breaks down due to the strongly dissipative nature of non-equilibrium steady states, where, for example, energy conservation no longer holds in general. Nevertheless, ensemble approaches can be useful in describing the statistical mechanics of non-equilibrium systems, as I discuss in this talk. Two different approaches are presented: (i) a theory of microscopic transition rates in sheared steady states of complex fluids and (ii) a statistical theory for jammed packings of non-spherical objects. In both cases the ensemble approach relies crucially on an assumption of ergodicity in the absence of equilibrium thermalization.
Nature is rich with many different examples of the cohesive motion of animals. Individual-based models are a popular and promising approach to explain features of moving animal groups such as flocks of birds or shoals of fish. Previous models for collective motion have primarily focused on group behaviours of identical individuals, often moving at a constant speed. In contrast we put our emphasis on modelling the contributions of different individual-level characteristics within such groups by using stochastic asynchronous updating of individual positions and orientations. Recent work has highlighted the importance of speed distributions, anisotropic interactions and noise in collective motion. We test and justify our modelling approach by comparing simulations to empirical data for fish, birds and insects. The techniques we use range from motion tracking to "equation-free" coarse-grained modelling. With the maturation of the field new exciting applications are possible for models such as ours.
We consider ensembles of trajectories associated with large deviations of time-integrated quantities in stochastic models. Motivated by proposals that these ensembles are relevant for physical processes such as shearing and glassy relaxation, we show how they can be generated directly using auxiliary stochastic processes. We illustrate our results using the Glauber-Ising chain, for which energy-biased ensembles of trajectories can exhibit ferromagnetic ordering, and briefly discuss the relation between such biased ensembles and quantum phase transitions. The talk will conclude with a wish list of things we'd like to work out but so far haven't been able to.
Since the seminal work by Ott et al., the concept of controlling chaos has gathered much attention and several techniques have been proposed. Among those control methods, delayed feedback control is of interest for its applicability and tractability for analysis. In this talk, we propose a parametric delayed feedback control where the delay time is adaptively changed by the state of the system. Unlike conventional chaos control, we are able to obtain super-stable periodic orbits. From the viewpoint of dynamical systems, the whole controlled system becomes a particular two-dimensional system with multiple attractors in the sense of Milnor. Finally, I would like to mention a possible application of this control technique to a coding scheme.
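As background, here is a minimal sketch of classical (non-adaptive) time-delayed feedback for a map, not the parametric scheme proposed in the talk: the control term K(x_{n-1} - x_n) vanishes on the target orbit, so the stabilised state is an orbit of the uncontrolled system.

```python
# Stabilising the fixed point x* = 1 - 1/r of the chaotic logistic map
# x -> r x (1 - x) with a Pyragas-type term K (x_{n-1} - x_n).
# K = -0.7 lies inside the linear stability window for r = 3.8.
r, K = 3.8, -0.7
x_prev, x = 0.72, 0.75          # start near the orbit to be stabilised
for n in range(60):
    x_prev, x = x, r * x * (1 - x) + K * (x_prev - x)
print(x, 1 - 1 / r)             # both ~ 0.7368: the control is non-invasive
```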
Hidden Markov Models (HMMs) are a commonly used tool for inference of transcription factor (TF) binding sites from DNA sequence data. We exploit the mathematical equivalence between HMMs for TF binding and the "inverse" statistical mechanics of hard rods in a one-dimensional disordered potential to investigate learning in HMMs. We derive analytic expressions for the Fisher information, a commonly employed measure of confidence in learned parameters, in the biologically relevant limit where the density of binding sites is low. This allows us to formulate a simple criterion for when it is possible to distinguish between binding sites of closely related TFs, and to derive a scaling relation connecting the quantity of training data to the minimum energy (statistical) difference between TFs that one can resolve. We apply our formalism to the NF-κB TF family and find that it is composed of two related but statistically distinct sub-families.
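To unpack the role of the Fisher information here, consider a generic Monte Carlo estimate for a toy two-component emission model (an illustration only, not the speakers' analytic expressions): the score has mean zero, and its variance is the Fisher information.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(x, t):
    # Toy mixture: with probability t emit from N(1,1) ("binding site"),
    # otherwise from N(0,1) ("background"); t-independent constants omitted.
    return np.log(t * np.exp(-0.5 * (x - 1) ** 2) +
                  (1 - t) * np.exp(-0.5 * x ** 2))

t, eps, n = 0.1, 1e-5, 200_000
site = rng.random(n) < t                     # component used by each sample
x = rng.normal(site.astype(float), 1.0)
score = (log_lik(x, t + eps) - log_lik(x, t - eps)) / (2 * eps)
print(score.mean(), (score ** 2).mean())     # ~0 and ~Fisher information I(t)
```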
The law of elastic reflection by a smooth mirror surface is well known: the angle of incidence is equal to the angle of reflection. In contrast, the law of elastic scattering by a rough surface is not unique, but depends on the shape of the microscopic pits and grooves forming the roughness. In the talk we will give the definition of a rough surface and provide a characterisation for laws of scattering by rough surfaces. We will also consider several problems of optimal resistance for rough bodies and discuss their relationship with Monge-Kantorovich optimal mass transfer. These problems can be naturally interpreted in terms of optimal roughening of the surface for artificial satellites on low Earth orbits.
I will present a result which gives a characterization of the law of the partition function of a Brownian directed polymer model in terms of the eigenfunctions of the quantum Toda lattice, and has close connections to random matrix theory.
In 1998 Hastings and Levitov proposed a model for planar random growth such as diffusion-limited aggregation (DLA) and the Eden model, in which clusters are represented as compositions of conformal mappings. I shall introduce an anisotropic version of this model, and discuss some of the natural scaling limits that arise. I shall show that very different behaviour can be seen in the isotropic case, and that here the model gives rise to a limit object known as the Brownian web.
One can (for the most part) formulate a model of a classical system in either the Lagrangian or the Hamiltonian framework. Though it is often thought that those two formulations are equivalent in all important ways, this is not true: the underlying geometrical structures one uses to formulate each theory are not isomorphic. This raises the question whether one of the two is a more natural framework for the representation of classical systems. In the event, the answer is yes: I state and prove two technical results, inspired by simple physical arguments about the generic properties of classical systems, to the effect that, in a precise sense, classical systems evince exactly the geometric structure Lagrangian mechanics provides for the representation of systems, and none that Hamiltonian mechanics does. The argument clarifies the conceptual structure of the two systems of mechanics, their relations to each other, and their respective mechanisms for representing physical systems.
Software testing makes use of combinatorial designs called covering arrays. These arrays are a generalization of Latin squares and orthogonal arrays. Ideally we look to use the smallest possible array for the given parameters, but this is a hard problem. We define a family of graphs, partition graphs, which give a full characterization of optimal covering arrays using homomorphisms. We investigate these graphs and are able to determine the diameter and, for some subfamilies, the clique number, chromatic number and homomorphic core of these graphs. There are many open problems involving these graphs.
An edge-regular graph with parameters (v,k,t) is a regular graph of order v and valency k, such that every edge is in exactly t triangles, and a clique in a graph is a set of pairwise adjacent vertices. I will apply a certain quadratic "block intersection" polynomial to obtain information about cliques in an edge-regular graph with given parameters.
Acyclic orientations of a graph arise in various applications, including heuristics for colouring. The number of acyclic orientations is an evaluation of the chromatic polynomial. Stanley gave a formula for the average number of acyclic orientations of graphs with n vertices and m edges. Recently we have found the graphs with the minimum number of acyclic orientations, but the more interesting question about the maximum number is still open.
The regular complete bipartite graph (on an even number of vertices) is thought to maximise the number of acyclic orientations. Unexpectedly, the number turns out to be a poly-Bernoulli number, one of a family of numbers connected with polylogarithms. We will try to explain these connections.
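The connection to the chromatic polynomial mentioned above is Stanley's theorem: a graph G on n vertices has a(G) = (-1)^n P(G, -1) acyclic orientations, where P is the chromatic polynomial. A small sketch (my own illustration) computing this by deletion-contraction:

```python
from itertools import combinations

def chromatic_poly(vertices, edges, k):
    """Evaluate the chromatic polynomial P(G, k) by deletion-contraction."""
    if not edges:
        return k ** len(vertices)
    u, v = next(iter(edges))
    deleted = edges - {frozenset((u, v))}
    # Contract v into u, discarding loops and parallel edges.
    contracted = {frozenset(u if w == v else w for w in e) for e in deleted}
    contracted = {e for e in contracted if len(e) == 2}
    return (chromatic_poly(vertices, deleted, k)
            - chromatic_poly(vertices - {v}, contracted, k))

def acyclic_orientations(vertices, edges):
    return (-1) ** len(vertices) * chromatic_poly(vertices, edges, -1)

V = frozenset(range(4))
E = {frozenset(e) for e in combinations(V, 2)}   # complete graph K4
print(acyclic_orientations(V, E))                # 24 = 4!
```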
The guessing game is a variant of the "guess your own hat" game and can be played on any simple undirected graph. The aim of the game is to maximise the probability of the event that all players correctly guess their own values without any communication. The fractional clique cover strategy for playing the guessing game was developed by Christofides and Markstrom and was conjectured to be the optimal strategy. In this talk, we will construct some counterexamples to this conjecture.
Consider two strict weak orders (that is, irreflexive, transitive, non-total relations) on the same finite set. How similar are the two? This question is motivated by the statistical question of association between two rankings which contain ties. In order to assess the similarity of the orders I will present an approach where the lack of agreement is assessed by counting the number of certain operations that are needed to transform one weak order into the other. The resulting measure is a symmetric and positive definite function but does not satisfy the triangle inequality. Hence, technically, it is a distance but not a metric. So far the proposed distance can only be computed recursively. Input from the audience which would help me to derive a closed-form solution, and pointers to related "pure" literature I am not aware of, will be greatly appreciated.
The Brouwer fixed point theorem and the Borsuk-Ulam theorem are beautiful and well-known theorems of topology that admit combinatorial analogues: Sperner's lemma and Tucker's lemma. In this talk, I will trace recent connections and generalizations of these combinatorial theorems, including applications to the social sciences.
We examine the structure of 1-extendable graphs G which have no even F-orientation, where F is a fixed 1-factor of G. In the case of regular graphs, graphs of connectivity at least four, and graphs of maximum degree three, a characterization is given.
Terminology: a graph G is 1-extendable if every edge belongs to at least one 1-factor. An orientation of a graph G is an assignment of a "direction" to each edge of G. Now suppose that G has a 1-factor F. Then an even F-orientation of G is an orientation in which each F-alternating cycle has an even number of edges directed in the same fixed direction around the cycle.
Combinatorial species of structure is a subject which has had a great impact on Statistical Mechanics, especially through the use of generating functions. It has been described as a Rosetta stone for the key models of Statistical Mechanics (Faris 08) through the way in which it has the capacity to abstract and generalise many of the key features of Statistical Mechanical models. The talk will focus on developing the main notions of these species of structure and the algebraic identity called Lagrange-Good inversion, a method of finding the coefficients of an inverse power series. I will introduce some of the key concepts of Statistical Mechanics and indicate how they can be understood in the context of the combinatorial tools we have. These interpretations also indicate some interesting combinatorial identities. The final emphasis is on how the Lagrange-Good inversion can help us to obtain a virial expansion for a gas comprising many types of particle, as was used in a recent paper (Jansen, T. Tsagkarogiannis, Ueltschi).
In 1983, Allan Schwenk posed a problem in the American Mathematical Monthly asking whether the edge set of the complete graph on ten vertices can be decomposed into three copies of the Petersen graph. He and O. P. Lossers (the problem-solving group at Eindhoven University run by Jack van Lint – "oplossers" is Dutch for "solvers") gave a negative solution in 1987. This year, Sebastian Cioaba and I considered the question: for which m is it possible to find 3m copies of the Petersen graph which cover the complete graph m times? We were able to show that this is possible for all natural numbers m except for m = 1. I will discuss the proof, which involves three parts: one uses linear algebra, one uses group theory, and one is bare-hands.
Of course this problem can be generalised to an arbitrary graph G: given a graph G on n vertices, for which integers m can one cover the edges of K_n m times by copies of G? I will say a bit about what we can do, and pose some very specific problems.
Suppose various processors in a network wish to reach agreement on a particular decision. Unfortunately, some unknown subset of these may be under the control of a malicious adversary who desires to prevent such an agreement being possible.
To this end, the adversary will instruct his "faulty" processors to provide inaccurate information to the non-faulty processors in an attempt to mislead them. The aim is to construct an "agreement protocol" that will always foil the adversary and enable the non-faulty processors to reach agreement successfully (perhaps after several rounds of communication).
In traditional agreement problems, it is usually assumed that the set of faulty processors is "static", in the sense that it is chosen by the adversary at the start of the process and then remains fixed throughout all communication rounds. In this talk, we shall instead focus on a "mobile" version of the problem, providing results both for the case when the communications network forms a complete graph and also for the general case when the network is not complete.
Meta-algorithms for deciding properties of combinatorial structures have recently attracted a significant amount of attention. For example, the famous theorem of Courcelle asserts that every property definable in monadic second order logic can be decided in linear time for graphs with bounded tree-width.
We focus on deciding simpler properties, those definable in first order (FO) logic. In the case of graphs, FO properties include the existence of a subgraph or a dominating set of a fixed size. Classical results include the almost linear time algorithm of Frick and Grohe which applies to graphs with locally bounded tree-width. In this talk, we first survey commonly applied techniques to design FPT algorithms for FO properties. We then focus on one class of graphs, intersection graphs of intervals with finitely many lengths, where these techniques do not seem to apply in a straightforward way, and we design an FPT algorithm for deciding FO properties for this class of graphs.
The talk contains results obtained during joint work with Ganian, Hlineny, Obdrzalek, Schwartz and Teska.
To what extent is the spectrum of a matrix determined by its "structure"? For example, what claims can be made simultaneously about all matrices in some qualitative class (i.e. with some fixed sign pattern)? Qualitative classes are naturally associated with signed digraphs or signed bipartite graphs, and some nice theory relates matrix spectra to structures in these graphs. But there are more exotic ways of associating matrix-sets, not necessarily qualitative classes, with graphs (perhaps directed, signed, etc), and extracting information from the graphs. In applications, a quick graph-computation may then suffice to make surprising claims about a family of systems. I'll talk about some recent results and open problems in this area, focussing in particular on the use of compound matrices.
A derangement is a permutation with no fixed points.
An elementary theorem of Jordan asserts that a transitive permutation group of degree n>1 contains a derangement. Arjeh Cohen and I showed that in fact at least a fraction 1/n of the elements of the group are derangements. So there is a simple and efficient randomised algorithm to find one: just keep picking random elements until you succeed.
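A sketch of that randomised algorithm, illustrated for the symmetric group (where uniform random elements are easy to generate; for a general transitive group one would sample group elements using its generators):

```python
import random

def random_derangement(n):
    """Keep drawing uniform random permutations of {0, ..., n-1} until one
    has no fixed points. Since at least a 1/n fraction of a transitive
    group consists of derangements, few draws are needed on average
    (about e of them for the symmetric group itself)."""
    while True:
        perm = list(range(n))
        random.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            return perm

print(random_derangement(8))
```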
Bill Kantor improved Jordan's theorem to the statement that a transitive group contains a derangement of prime power order. The theorem is constructive but requires the classification of finite simple groups. Emil Vaughan showed that Kantor's theorem yields a polynomial-time (but not at all straightforward) algorithm for finding one.
This month, Vikraman Arvind from Chennai posted a paper on the arXiv giving a very simple deterministic polynomial-time algorithm to find a derangement in a transitive group. The proof is elementary and combinatorial.
Fix a prime p. Starting with any finite undirected graph G, pick an automorphism of G of order p and delete all the vertices that are moved by this automorphism. Apply the same procedure to the new graph, and repeat until a graph G* is reached that has no automorphisms of order p. Is the reduced graph G* uniquely defined (up to isomorphism) by G? I.e., is G* independent of the sequence of automorphisms chosen?
In a CSG talk in 2010, John Faben showed that the answer is "yes" in the special case p = 2 (i.e., reduction by involutions) using Newman's Lemma on confluence of reduction systems. Later, he noticed that the general case can be handled using the so-called Lovász vector of a graph. I'll prove the general result and sketch some consequences to the extent that time allows.
This talk will continue the discussion from previous talks in the series.
Since the foundational results of Thomason and Chung-Graham-Wilson on quasirandom graphs over 20 years ago, there has been a lot of effort by many researchers to extend the theory to hypergraphs. I will present some of this history, and then describe our recent results that provide such a generalization and unify much of the previous work. One key new aspect in the theory is a systematic study of hypergraph eigenvalues. If time permits I will show some applications to Sidorenko's conjecture and the certification problem for random k-SAT. This is joint work with John Lenz.
This is the second in a short series inspired by the talks by Terence Chan at our recent workshop on "Information flows and information bottlenecks". No familiarity with the talks will be assumed.
A partition is uniform if all its parts have the same size. I will define orthogonality of partitions, and interpret orthogonality in terms of the entropy of the associated random variables. I will explain how a sublattice of the partition lattice consisting of mutually orthogonal uniform partitions gives rise to an association scheme.
This is the first in a short series inspired by the talks by Terence Chan at our recent workshop on "Information flows and information bottlenecks". No familiarity with the talks will be assumed.
I will define the entropy function of a family of random variables on a finite probability space. I will prove Chan's theorem that it can be approximated (up to a scalar multiple) by the entropy function obtained when G is a finite group (carrying the uniform distribution) and the random variables are associated with a family of subgroups of G: the random variable associated with H takes a group element to the coset of H containing it.
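A small numerical illustration of this construction (a hypothetical example, G = Z_12 with two cyclic subgroups, not taken from Chan's talks): the coset-valued random variable of a subgroup H has entropy log2(|G|/|H|), and joint entropies are governed by intersections of subgroups.

```python
from math import log2

G = list(range(12))
H1 = {0, 3, 6, 9}   # subgroup generated by 3
H2 = {0, 4, 8}      # subgroup generated by 4

def entropy(labels):
    n = len(labels)
    counts = {}
    for x in labels:
        counts[x] = counts.get(x, 0) + 1
    return -sum(c / n * log2(c / n) for c in counts.values())

# The random variable for H sends g to the coset g + H (labelled by its
# smallest element); G carries the uniform distribution.
X1 = [min((g + h) % 12 for h in H1) for g in G]
X2 = [min((g + h) % 12 for h in H2) for g in G]
print(entropy(X1))                  # log2(12/4) = log2 3
print(entropy(X2))                  # log2(12/3) = 2
print(entropy(list(zip(X1, X2))))   # log2 12, since H1 and H2 meet in {0}
```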
Ecological occurrence matrices, such as Darwin's finches tables, are 0,1-matrices whose rows are species of animals and columns are islands, and the (i,j) entry is 1 if animal i lives on island j, and is 0 otherwise. Moreover, the row sums and column sums are fixed by field observation of these islands. These occurrence matrices are thus just bipartite graphs G with a fixed degree sequence, where V1(G) is the set of animals and V2(G) is the set of islands. The problem is, given an occurrence matrix, how to tell whether the distribution of animals is due to competition or to chance. Thus, researchers in Ecology are highly interested in sampling ecological occurrence tables easily and uniformly so that, by using Monte Carlo methods, they can approximate test statistics allowing them to prove or disprove some null hypothesis about competition amongst animals.
Several algorithms are known to construct realizations on n vertices and m edges of a given degree sequence, and each one of them has its strengths and limitations. Most of these algorithms fall into two categories: Markov chain Monte Carlo methods based on edge swaps, and sequential sampling methods that start from an empty graph on n vertices and add edges sequentially according to some probability scheme. We present a new algorithm that samples uniformly from all simple bipartite realizations of a degree sequence and whose basic ideas may be seen as implementing a dual sequential method, as it sequentially inserts vertices instead of edges.
The running time of our algorithm is O(m), where m is the number of edges in any realization. The best algorithms that we know of are the one implicit in [1], which has a running time of O(m a_max), where a_max is the maximum degree, but is not uniform; similarly, the algorithm presented by Chen et al. [3] does not sample uniformly, but nearly uniformly. Moreover, the edge-swapping Markov chains pioneered by Brualdi [2] and Kannan et al. [5], and much used by researchers in Ecology, have only recently been proven in [4] to be fast mixing, and then only for semi-regular degree sequences.
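For orientation, here is a sketch of a single step of the edge-swapping (checkerboard) chain referred to above, the classic method used in ecology (not the speakers' new sequential algorithm): a 2x2 swap preserves all row and column sums.

```python
import random

def swap_step(M):
    """One step of the checkerboard chain on a 0,1 occurrence matrix."""
    i, j = random.sample(range(len(M)), 2)
    a, b = random.sample(range(len(M[0])), 2)
    # Swap only if rows i, j and columns a, b form a checkerboard pattern.
    if M[i][a] == M[j][b] == 1 - M[i][b] == 1 - M[j][a]:
        M[i][a], M[i][b] = M[i][b], M[i][a]
        M[j][a], M[j][b] = M[j][b], M[j][a]

M = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
for _ in range(1000):
    swap_step(M)
print(M, [sum(row) for row in M])   # row and column sums are unchanged
```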
A 2-dimensional framework is a straight line realisation of a graph in the Euclidean plane. It is radically solvable if the set of vertex coordinates is contained in a radical extension of the field of rationals extended by the squared edge lengths. We show that the radical solvability of a generic framework depends only on its underlying graph and characterise which planar graphs give rise to radically solvable generic frameworks. We conjecture that our characterisation extends to all graphs. This is joint work with J. C. Owen (Siemens).
This talk is on the combinatorics of partitions. Given a positive integer s, the set of s-cores is a highly structured subset of the set of all partitions, which is important in representation theory. I'll take two positive integers s,t, and define a set of partitions which includes both the set of s-cores and the set of t-cores, and is somehow supposed to be the appropriate analogue of the union of these two sets.
This work is somewhat unfinished, and needs a new impetus. So I'll be hoping for some good questions!
Graphs and digraphs behave quite differently, and many classical results for graphs are trivially false when extended to general digraphs. Therefore it is usually necessary to restrict to a smaller family of digraphs to obtain meaningful results. One such very natural family is that of Eulerian digraphs, in which the in-degree equals the out-degree at every vertex.
In this talk, we discuss several natural parameters for Eulerian digraphs and study their connections. In particular, we show that for any Eulerian digraph G with n vertices and m arcs, the minimum feedback arc set (the smallest set of arcs whose removal makes G acyclic) has size at least m^2/(2n^2) + m/(2n), and this bound is tight. Using this result, we show how to find subgraphs of high minimum degree, and also long cycles in Eulerian digraphs. These results were motivated by a conjecture of Bollobas and Scott.
Joint work with Ma, Shapira, Sudakov and Yuster.
A typical result in graph theory reads as follows: under certain conditions, a given graph G has some property P. For example, a classical theorem of Dirac asserts that every n-vertex graph G of minimum degree at least n/2 is Hamiltonian, where a graph is called Hamiltonian if it contains a cycle that passes through every vertex of the graph.
Recently, there has been a trend in extremal graph theory where one revisits such classical results, and attempts to see how strongly G possesses the property P. In other words, the goal is to measure the robustness of G with respect to P. In this talk, we discuss several measures that can be used to study robustness of graphs with respect to various properties. To illustrate these measures, we present three extensions of Dirac's theorem.
There are only a few methods for analysing the rate of convergence of an ergodic Markov chain to its stationary distribution. One is the canonical path method of Jerrum and Sinclair. This method applies to Markov chains which have no negative eigenvalues. Hence it has become standard practice for theoreticians to work with lazy Markov chains, which do absolutely nothing with probability 1/2 at each step. This must be frustrating for practitioners, who want to use the most efficient Markov chain possible.
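The point about laziness in one line: replacing P by (I + P)/2 maps every eigenvalue lambda to (1 + lambda)/2, so negative eigenvalues disappear, at the cost of roughly doubling the mixing time. A two-state illustration:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # periodic walk, eigenvalues 1 and -1
lazy = (np.eye(2) + P) / 2        # lazy version of the same chain
print(np.linalg.eigvals(P))       # [ 1. -1.]  -- never converges
print(np.linalg.eigvals(lazy))    # [ 1.  0.]  -- converges in one step
```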
I will discuss how laziness can be avoided by the use of a twenty-year-old lemma of Diaconis and Stroock, or my recent modification of that lemma. As an illustration, I will apply the new lemma to Jerrum and Sinclair's well-known chain for sampling perfect matchings in a bipartite graph.
Let H be a graph. The function ex(n,H) is the maximum number of edges that a graph with n vertices can have if it contains no subgraph isomorphic to H.
If H is not bipartite then the asymptotic behaviour of ex(n,H) is known, but if H is bipartite then in general this is not the case. This talk will focus on the case where H is a complete bipartite graph. I will review the previous constructions from a geometrical point of view and explain how this enables us to improve the lower bound on ex(n, K_{5,5}).
We shall use a theorem of probability to prove a geometrical result, which when applied in an analytical context yields an interesting and surprisingly strong result in combinatorics on the existence of long arithmetic progressions in sums of two sets of integers. For the sake of exposition, we might focus on a version of the final result for vector spaces over finite fields: if A is a subset of F_q^n of some fixed size, then how large a subspace must A+A contain?
Joint work with Ernie Croot and Izabella Laba.
I will talk about some work of Ian Wanless and his student Joshua Browning, and some further work that Ian and I did last month.
We are interested in the maximum number of subsquares of order m which a Latin square of order n can have, where we regard m as being fixed and n as varying and large. In many cases this maximum is (up to a constant) a power n^r, for some exponent r depending on m. However, we cannot prove that this always holds; the smallest value of m for which it is not known is m = 7.
A related problem concerns the maximum number of Latin squares isotopic to a fixed square of order m.
An elementary problem when writing a computer program is how to swap the contents of two variables. Although the typical approach consists of using a buffer, this operation can actually be performed using XOR without extra memory. In this talk, we aim to generalise this approach to compute any function without memory.
We introduce a novel combinatorial framework for procedural programming languages, where programs are allowed to update only one variable at a time without the use of any additional memory. We first prove that any function of all the variables can be computed in this fashion. Furthermore, we prove that any bijection can be computed in a linear number of updates. We conclude the talk by going back to our opening example and deriving the exact number of updates required to compute any manipulation of variables.
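The opening example in code: swapping two integers by updating one variable at a time, with no buffer.

```python
def xor_swap(x, y):
    """Swap two integers, updating one variable at a time, no buffer."""
    x ^= y   # x = x XOR y
    y ^= x   # y = y XOR (x XOR y) = old x
    x ^= y   # x = (x XOR y) XOR old x = old y
    return x, y

print(xor_swap(5, 9))   # (9, 5)
```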
Ge and Stefankovic recently introduced a novel two-variable graph polynomial. When specialised to a bipartite graph G and evaluated at the point (1/2,1), the polynomial gives the number of independent sets in the graph. Inspired by this polynomial, they also introduced a Markov chain which, if rapidly mixing, would provide an efficient sampling procedure for independent sets in G. The proposed Markov chain is promising, in the sense that it overcomes the most obvious barrier to mixing. Unfortunately, by exhibiting a sequence of counterexamples, we show that the mixing time of their Markov chain may be exponential in the size of the instance G.
I'll play down the complexity-theoretic motivation for this investigation, and concentrate on the combinatorial aspects, namely the graph polynomial and the construction of the counterexamples.
This is joint work with Leslie Ann Goldberg (Liverpool). A preprint is available as arXiv:1109.5242.
A family of graphs F on a fixed set of n vertices is said to be triangle-intersecting if for any two graphs G,H in F, the intersection of G and H contains a triangle. Simonovits and Sos conjectured that such a family has size at most (1/8)·2^{n choose 2}, and that equality holds only if F consists of all graphs containing some fixed triangle. Recently, the author, Yuval Filmus and Ehud Friedgut proved this conjecture, using discrete Fourier analysis, combined with an analysis of the properties of random cuts in graphs. We will give a sketch of our proof, and then discuss some related open questions.
All will be based on joint work with Yuval Filmus (University of Toronto) and Ehud Friedgut (Hebrew University of Jerusalem).
A conference matrix is an n×n matrix C with zeros on the diagonal and entries ±1 elsewhere which satisfies CC^T = (n-1)I. Such a matrix has the maximum possible determinant given that its diagonal entries are zero and the other entries have modulus at most 1.
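A quick sanity check of the definition (using the standard Paley construction for q = 5; an illustration, not material from the talk):

```python
import numpy as np

def paley_conference(q):
    """Symmetric conference matrix of order q + 1 for a prime q = 1 (mod 4)."""
    def leg(a):                      # Legendre symbol modulo q
        if a % q == 0:
            return 0
        return 1 if pow(a, (q - 1) // 2, q) == 1 else -1
    n = q + 1
    C = np.ones((n, n), dtype=int)   # border of +1s
    C[0, 0] = 0
    for i in range(1, n):
        for j in range(1, n):
            C[i, j] = leg(j - i)     # core: quadratic-residue pattern
    return C

C = paley_conference(5)              # order 6
print((C @ C.T == 5 * np.eye(6, dtype=int)).all())   # True: CC^T = (n-1)I
```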
Conference matrices first arose in the 1950s in connection with conference telephony, and more recently have had applications in design of experiments in statistics. They have close connections with other kinds of combinatorial structure such as strongly regular graphs and Hadamard matrices.
It is known that the order of a conference matrix must be even, and that it is equivalent to a symmetric matrix if n is congruent to 2 (mod 4) or to a skew-symmetric matrix if n is congruent to 0 (mod 4). In the skew case, conference matrices are conjectured to exist for all admissible n, but there are some restrictions in the symmetric case (for example, there are no conference matrices of order 22 or 34). Statisticians would like to know the maximum possible determinant in cases where a conference matrix does not exist.
I will give a gentle introduction to the subject, and discuss a recent open question raised by Dennis Lin.
Combinatorial representations are generalisations of linear representations of matroids based on functions over an alphabet. In this talk, we define representations of a family of bases (r-sets of an n-set). We first show that any family is representable over some finite alphabet. We then link this topic with design theory, and especially Wilson's theory of PBD-closed sets, which allows us to show that all graphs (r=2) can be represented over all large enough alphabets. If time permits, we give a characterisation of families representable over a given alphabet as subgraphs of a certain hypergraph.
We prove that almost surely a random graph process becomes Maker's win in the Maker-Breaker games "k-vertex-connectivity", "perfect matching" and "Hamiltonicity" exactly when its minimum degree first becomes 2k, 2 and 4 respectively.
The perfect matching polytope of a graph G is the convex hull of the incidence vectors of all perfect matchings in G. We characterise bipartite graphs and near-bipartite graphs whose perfect matching polytopes have diameter 1.
A finite set X in some Euclidean space R^n is called Ramsey if for any k there is a d such that whenever R^d is k-coloured it contains a monochromatic set congruent to X. A long-standing open problem is to characterise the Ramsey sets.
In this talk I will discuss the background to this problem, a new conjecture, and some group theoretic questions this new conjecture raises.
Classically, the Ising model in statistical physics is defined on a graph. But through the random cluster formulation we can make sense of the Ising partition function in the wider context of an arbitrary matroid. I expect most of the talk will be spent setting the scene. But eventually I'll come round to discussing the computational complexity of evaluating the partition function on various classes of matroids (graphic, regular and binary). I'm neither a physicist nor a card-carrying matroid theorist, so the talk should be pretty accessible.
This is joint work with Leslie Goldberg (Liverpool).
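For a feel for the object being evaluated, here is a brute-force Python sketch (illustrative only; the talk concerns far more general matroid settings) of the graphical Ising partition function:

    import itertools, math

    # Z(G; beta) = sum over spin assignments sigma in {-1,+1}^V of
    # exp(beta * sum over edges uv of sigma_u * sigma_v).
    def ising_Z(n, edges, beta):
        return sum(math.exp(beta * sum(s[u] * s[v] for u, v in edges))
                   for s in itertools.product((-1, 1), repeat=n))

    # A single edge: two aligned and two anti-aligned spin states.
    assert abs(ising_Z(2, [(0, 1)], 0.3)
               - (2 * math.exp(0.3) + 2 * math.exp(-0.3))) < 1e-12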
The family of intervals of a binary structure on a set S satisfies certain well-known closure properties. A family of subsets of S with these properties is called weakly partitive.
An interval X is called strong provided that for each interval Y, if the intersection of X and Y is non-empty then Y is a subset of X or Y contains X. Using the notion of strong interval, and a study of the characteristics of elements of a weakly partitive family, Pierre Ille and I gave a proof in [1] of the result that, given a weakly partitive family I on a set S, there is a binary structure on S whose intervals are exactly the elements of I.
[1] P. Ille and R. E. Woodrow, Weakly partitive families on infinite sets, Contributions to Discrete Mathematics, Vol. 4, No. 1 (2009), pp. 54–79.
Vivek Jain asked whether, when G is a finite group and H is a core-free subgroup of G, it is possible to generate G by a set of coset representatives of H in G. The answer is yes: the proof uses a result of Julius Whiston about the maximal size of an independent set in the symmetric group.
I will discuss the proof and some slight extensions, and will also talk about a parameter of a group conjecturally related to the maximum size of an independent set; this involves an open question about the subgroup lattices of finite groups.
For a family of subsets of {1,...,n}, ordered by inclusion, and a partially ordered set P, we say that the family is P-free if it does not contain a subposet isomorphic to P. We are interested in finding ex(n,P), the largest size of a P-free family of subsets of [n]. It is conjectured that, for any fixed P, this quantity is (k+o(1)){n choose ⌊n/2⌋} for some fixed integer k, depending only on P.
Recently, Boris Bukh has verified the conjecture for P which are in a "tree shape". There are some other small posets P for which the conjecture has been verified. The smallest for which it is unknown is Q2, the Boolean lattice on two elements. We will discuss the best-known upper bound for ex(n,Q2) and an interesting open problem in graph theory that, if solved, would improve this bound. This is joint work with Maria Axenovich (Iowa State University) and Jacob Manske (Texas State University).
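For very small n, ex(n, Q2) can be computed exhaustively; the sketch below (an illustration of mine, using the weak-subposet notion of containment) finds the largest diamond-free family for n = 2:

    from itertools import combinations

    def subsets(n):
        xs = range(n)
        return [frozenset(c) for r in range(n + 1) for c in combinations(xs, r)]

    # Q2 (the diamond) as a weak subposet: distinct sets A < B < D, A < C < D,
    # i.e. some pair A < D with at least two sets strictly between them.
    def has_diamond(F):
        return any(sum(1 for X in F if A < X < D) >= 2
                   for A in F for D in F if A < D)

    def ex_Q2(n):
        S = subsets(n)
        return max(k for k in range(1, len(S) + 1)
                   if any(not has_diamond(f) for f in combinations(S, k)))

    assert ex_Q2(2) == 3   # only the full Boolean lattice on [2] contains Q2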
A transversal of a latin square is a selection of entries that hits each row, column and symbol exactly once. We can construct latin squares whose transversals are constrained in various ways. For orders that are not twice a prime, these constructions yield 2-maxMOLS, that is, pairs of orthogonal latin squares that cannot be extended to a triple of MOLS. If only Euclid's theorem were false, we'd have nearly solved the 2-maxMOLS problem.
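Transversals of small squares can be enumerated directly; a brute-force Python sketch (mine, not from the talk):

    from itertools import permutations

    # A transversal picks cell (i, s(i)) in row i, one cell per column,
    # with all n symbols distinct.
    def transversals(L):
        n = len(L)
        return sum(1 for s in permutations(range(n))
                   if len({L[i][s[i]] for i in range(n)}) == n)

    Z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
    Z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
    assert transversals(Z3) == 3
    assert transversals(Z4) == 0   # cyclic squares of even order have none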
We study the problem of minimising the total number of zeros in the gaps between blocks of consecutive ones in the columns of a binary matrix by permuting its rows. The problem is known to be NP-hard. An analysis of the structure of an optimal solution allows us to focus on a restricted solution space and to use an implicit representation for searching that space. We develop an exact solution algorithm, which is polynomial if the number of columns is fixed, and two constructive heuristics to tackle instances with an arbitrary number of columns. The heuristics use a novel solution representation based upon column sequencing. In our computational study, all heuristic solutions are either optimal or close to an optimum. One of the heuristics is particularly effective, especially for problems with a large number of rows.
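To make the objective concrete, the following illustrative sketch (not the algorithms of the talk) evaluates the number of gap zeros under a given row ordering and minimises it by exhaustive search:

    from itertools import permutations

    # Zeros lying strictly between the first and last one in each column.
    def gap_zeros(rows):
        total = 0
        for c in range(len(rows[0])):
            ones = [i for i, row in enumerate(rows) if row[c] == 1]
            if ones:
                total += (ones[-1] - ones[0] + 1) - len(ones)
        return total

    # Exhaustive minimisation over row orders (tiny instances only).
    def minimise(matrix):
        return min(gap_zeros(list(p)) for p in permutations(matrix))

    M = [(1, 0), (0, 1), (1, 1)]
    assert gap_zeros(M) == 1    # column 0 has one zero inside its gap
    assert minimise(M) == 0     # ordering (1,0),(1,1),(0,1) closes every gap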
Crystals are certain labelled graphs which give a combinatorial understanding for certain representations of simple Lie algebras. Although crystals are known to exist for certain important representations, understanding what they look like is tricky, and an important theme in combinatorial representation theory is constructing models of crystals, where the vertices are given by simple combinatorial objects, with combinatorial rules for determining the edges.
I'll try to give a brief but comprehensible overview to motivate, and then concentrate on one particular crystal, for which there is a family of models based on partitions.
I will introduce the problem of reconstructing population pedigrees from their subpedigrees (pedigrees of sub-populations) and present a construction of pairs of non-isomorphic pedigrees that have the same collection of subpedigrees. I will then show that reconstructing pedigrees is equivalent to reconstructing hypergraphs with isomorphisms from a suitably chosen group acting on the ground set. Finally, I will discuss some ideas for characterizing non-reconstructible pedigrees.
Fiala has shown with computer aid that there are 35 laws of length at most six, involving the product operation only, which have the property of the title (discounting renaming, cancelling, mirroring and symmetry). However, he has not provided humanly comprehensible proofs of these facts.
We show that it is possible to give short understandable proofs of Fiala's results and to separate the loops and groups into classes.
A tournament is an orientation of a complete graph. Sumner conjectured in 1971 that any tournament G on 2n-2 vertices contains every directed tree T on n vertices. Taking G to be a regular tournament on 2n-3 vertices and T to be an outstar shows that this conjecture, if true, is best possible. Many partial results have been obtained towards this conjecture.
In this talk I shall outline how a randomised embedding algorithm can be used to prove an approximate version of Sumner's conjecture, by first proving a stronger result for the case when T has bounded maximum degree. Furthermore, I will briefly sketch how by considering the extremal cases of this proof we may deduce that Sumner's conjecture holds for all sufficiently large n.
This is joint work with Daniela Kühn and Deryk Osthus.
The notion of residual and derived design of a symmetric design was introduced in a classic paper by R. C. Bose (1939). A quasi-residual (quasi-derived) design is a 2-design which has the parameters of a residual (derived) design. The embedding problem of a quasi-residual design into a symmetric design is an old and natural question. A Menon design of order h² is a symmetric (4h², 2h²-h, h²-h) design. Quasi-residual and quasi-derived designs of a Menon design have parameters 2-(2h²+h, h², h²-h) and 2-(2h²-h, h²-h, h²-h-1), respectively.
We use regular Hadamard matrices to construct non-embeddable quasi-residual and quasi-derived Menon designs. As applications, the first two new infinite families of non-embeddable quasi-residual and quasi-derived Menon designs are constructed. This is joint work with T. A. Alraqad.
Abdullahi Umar has discovered that many celebrated sequences of combinatorial numbers, including the factorials, binomial coefficients, Bell, Catalan, Schröder, Stirling and Lah numbers solve counting problems in certain naturally defined inverse semigroups of partial bijections on a finite set. I will give an account of some of these results, together with the beginning of a study of q-analogues where we consider linear bijections between subspaces of a finite vector space (and some very interesting open problems arise).
We review the Ising model with random-site or random-bond disorder, which has been controversial in both two and four dimensions. In the two-dimensional case, the controversy is between the strong universality hypothesis which maintains that the leading critical exponents are the same as in the pure case and the weak universality hypothesis, which favours dilution-dependent leading critical exponents. Here the random-site version of the model is subject to a finite-size scaling analysis, paying special attention to the implications for multiplicative logarithmic corrections. The analysis is supportive of the scaling relations for logarithmic corrections and of the strong scaling hypothesis in the 2D case. In the four-dimensional case unusual corrections to scaling characterize the model, and the precise nature of these corrections has been debated. Progress made in determining the correct 4D scenario is outlined.
It is a rather common belief that the only probability distribution occurring in the statistical physics of many-particle systems is that of Boltzmann and Gibbs (BG). This point of view is too limited. The BG distribution, when seen as a function of parameters such as the inverse temperature and the chemical potential, is a member of the exponential family. This observation is important for understanding the structure of statistical mechanics and its connection with thermodynamics. It is also the starting point of the generalizations discussed below. Recently, the notion of a generalized exponential family has been introduced, both in the mathematics and in the physics literature. A sub-class of this generalized family is the q-exponential family, where q is a real parameter describing the deformation of the exponential function. It is the intention of this talk to show the relevance for statistical physics of these generalizations of the BG distribution. Particular attention will be paid to the configurational density of classical mono-atomic gases in the microcanonical ensemble. These belong to the q-exponential family, where q tends to 1 as the number of particles tends to infinity. Hence, in this limit the density converges to the BG distribution.
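For concreteness, one standard form of the deformed exponential (the Tsallis q-exponential; an illustration, not notation fixed by the talk) and its q → 1 limit:

    import math

    # q-exponential: exp_q(x) = [1 + (1-q) x]^(1/(1-q)) where the bracket
    # is positive (0 otherwise); as q -> 1 it recovers the ordinary exp.
    def exp_q(x, q):
        if q == 1:
            return math.exp(x)
        base = 1 + (1 - q) * x
        return base ** (1 / (1 - q)) if base > 0 else 0.0

    assert abs(exp_q(0.5, 1.0001) - math.exp(0.5)) < 1e-3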
Correlation functions, or factorial moments, are important characteristics of spatial point processes. The question under consideration is to what extent the first two correlation functions identify the point process. This is a non-linear, infinite-dimensional version of the classical truncated moment problem. In collaboration with J. Lebowitz and E. Speer, we derive general conditions, which also give rise to a new approach to moment problems, and we obtain more concrete results in particular situations.
I will centre my talk on the general theme of control and synchronization, and on how the problem of multiple current reversals in ratchets can be translated into that of achieving asymptotic stability and tracking of the ratchet's dynamical and transport properties. Current reversal is an intriguing phenomenon that has been central to recent experimental and theoretical investigations of transport based on the ratchet mechanism. Research in this domain is largely motivated by applications to a variety of systems such as asymmetric crystals, semiconductor surfaces under light radiation, vortices in Josephson junction arrays, micro-fluidic channels, transport of ion channels and muscle contraction. Here, by considering a system of two interacting ratchets, we will demonstrate how the interaction can be used to control the reversals. In particular, we will show that the current reversal that exists in a single driven ratchet can be eliminated in the presence of a second ratchet, and we will then establish a connection between the underlying dynamics and the reversal-free regime. The conditions for current-reversal-free transport will be given. Furthermore, we will briefly discuss some applications of our results, recent challenges and possible directions for future work.
Nonlinear media host a wide variety of localized coherent structures (bright and dark solitons, vortices, aggregates, spirals, etc.) with complex intrinsic properties and interactions. In many settings, such as optical communications, condensed-matter waves and biochemical aggregates, it is crucial to study the interaction dynamics of coherent structures arranged in periodic lattices. In this talk I will present results concerning chains and lattices of coherent structures and their dynamical reductions from PDEs to ODEs, and all the way down to discrete maps. Particular attention will be given to (a) spatially localized vibrations (breathers) in 1D chains of coupled bright solitons and (b) the dynamics of vortex lattices and their crystalline configurations.
The question of deciding whether a given function is injective is important in a number of applications. For example, where the function defines a vector field, injectivity is sufficient to rule out multiple fixed points of the associated flow. One useful approach is to associate sets of matrices/generalised graphs with a function, and make claims about injectivity based on (finite) computations on these matrices or graphs. For a large class of functions, a novel way of doing this will be presented. Well-known results on functions with signed Jacobian, and more recent results in chemical reaction network theory, are both special cases of the approach presented. However, the technique does not provide a unique way of associating matrices/graphs with functions, leading to some interesting open questions.
Nonequilibrium steady states for two classes of Hamiltonian models with different local dynamics are discussed. Models in the first class have chaotic dynamics. An easy-to-compute algorithm that goes from micro-dynamics to macro-profiles such as energy is proposed, and issues such as memory, finite-size effects and their relation to geometry are discussed. Models in the second class have integrable dynamics. They become ergodic when driven at the boundary, but continue to exhibit anomalous behavior such as non-Gibbsian local distributions. The results follow from a mixture of numerical and theoretical considerations, some of which are rigorous. This is joint work with J.-P. Eckmann, P. Balint and K. Lin.
Quantum technologies represent a groundbreaking paradigm that leverages the principles of quantum mechanics to develop novel computational technologies, sensing, and communication. Their full potential can be harnessed when networked together, creating what is known as the quantum internet. Unlike a classical internet that transmits bits, a quantum internet will distribute entangled qubit pairs. Motivated by recent technological developments in constructing a quantum internet, the study of quantum networks and networking has seen a significant increase in interest, with several models being proposed. In this talk, I will explain the general workings of a quantum network, the mechanics necessary for the distribution of entanglement, and our recently developed entanglement distribution protocol based on PGRAND [2], which achieves the Hamming bound [1] and is arguably realistic. I will discuss the advantages and disadvantages of our model, assess the realism of our assumptions for different levels of technological development, and compare it with other recent models in quantum networking.
References:
[1] D. P. DiVincenzo, P. W. Shor and J. A. Smolin (1998). Quantum-channel capacity of very noisy channels. Physical Review A 57(2), 830.
[2] A. Roque, D. Cruz, F. A. Monteiro and B. C. Coutinho (2023). Efficient entanglement purification based on noise guessing decoding. arXiv preprint arXiv:2310.19914.
In this seminar, following a brief review of thermodynamic affinities, we will explore the concept of an effective affinity. This quantity, defined through large deviation theory, encapsulates several properties of current fluctuations and dissipation in a single number (see Ref. [1]). After discussing these properties, I will present a promising, though not yet fully understood, result, viz., that for thermodynamically consistent models the effective affinity is closely related to the current's stalling force.
[1] https://arxiv.org/abs/2406.08926
Active matter transforms fuel into mechanical action at the local, microscopic scale. Living organisms, such as swimming bacteria and growing cell tissue, provide plenty of examples of active non-equilibrium systems. In this talk, I will introduce the framework of Doi-Peliti field theory, and show how it can be used to characterise the emergence of effective attraction between soft repulsive run-and-tumble particles. I will further present recent work on exactly solvable optimal transport protocols of self-propelling particles and how these can be extended to construct a minimal active information engine.
The Feynman ratchet-and-pawl is a celebrated illustration of an apparent Maxwell demon, capable of extracting useful work from thermal fluctuations. After discussing the original context of the ratchet-and-pawl construction, a microscopic conceptualisation is introduced, which allows for an analysis based on first principles. A general formalism is derived describing both dynamical and energetic properties of this microscopic Feynman ratchet. Specifically, work and heat flows are given as a series expansion in the thermodynamic forces, providing analytical expressions for the (non)linear response coefficients.
Many famous examples from Aperiodic Order, such as the Penrose tilings, the Ammann-Beenker tilings or tilings by the recently discovered hat monotile, turn out to be constructible by the cut and project method. Roughly speaking, a cut and project scheme takes an 'irrational slice' of a periodic pattern (a lattice) in a higher-dimensional space, producing a structure which is no longer periodic but is still 'ordered'. In this talk I will introduce central concepts such as this from the field of Aperiodic Order, including how these patterns can be studied from the perspective of Dynamical Systems. I will then explain how one may determine properties of cut and project sets which have polytopal acceptance windows: the growth rate of their patch counting functions (or 'complexity'), whether or not they have linear repetitivity, and whether or not they are 'self-similar', that is, generated from a substitution rule.
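As a toy illustration of the method (my own sketch; the slope and window are choices, not taken from the talk), slicing Z^2 along the golden ratio yields the Fibonacci chain, whose gaps take exactly two values with ratio phi:

    import math

    # Cut and project from Z^2: keep (m, n) whose internal coordinate
    # m - n/phi lies in the window, and record the physical point m + n*phi.
    phi = (1 + math.sqrt(5)) / 2
    lo, hi = -1 / phi, 1.0                     # window: unit-cell projection
    pts = sorted(m + n * phi
                 for m in range(-40, 41) for n in range(-40, 41)
                 if lo <= m - n / phi < hi)

    # Away from the boundary the gaps take just two values, 1 and phi.
    patch = [p for p in pts if abs(p) < 20]
    gaps = {round(b - a, 6) for a, b in zip(patch, patch[1:])}
    print(sorted(gaps))                        # [1.0, 1.618034]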
Social conventions are the foundation of social and economic life. As legions of AI agents increasingly interact with each other and with humans, their ability to form shared conventions will determine how effectively they coordinate behaviours, integrate into society, and influence it. In this talk, I will first provide an overview of theoretical and experimental results that demonstrate the spontaneous creation of universally adopted social norms in human groups, as well as the existence of tipping points in social convention. Then, I will discuss the case of populations of LLMs. I will explore analogies and differences between what we observe in groups of humans and machines, and highlight how strong collective biases can emerge in AI groups, even when individual agents appear to be unbiased. These results clarify how AI systems can autonomously develop norms without explicit programming and have implications for designing AI systems that align with human values and societal goals.
Reference:
Ashery, A. F., Aiello, L. M., & Baronchelli, A. (2024). The Dynamics of Social Conventions in LLM populations: Spontaneous Emergence, Collective Biases and Tipping Points. arXiv preprint arXiv:2410.08948.
We introduce the notion of conditioned Lyapunov exponents for random dynamical systems, where we condition on trajectories that stay within a bounded domain for asymptotically long times. This is motivated by the desire to characterise local dynamical properties in the presence of unbounded noise (when almost all trajectories are unbounded). We present two different approaches to prove the existence of such conditioned Lyapunov exponents, and both approaches make use of recent developments on quasi-stationary and quasi-ergodic measures. Joint work with Matheus de Castro, Dennis Chemnitz, Hugo Chu, Maximilian Engel, and Jeroen Lamb.
Whereas quantum physics arose from the need to explain the strange microscopic world of atoms, electrons and photons, network theory has been immensely successful in taming the complexity of large interacting systems such as social, transportation and biological networks. The field of quantum complex networks lies at their intersection and consists of research where the two are used together. It includes, for example, network models exhibiting emergent quantum statistics, networks of interacting quantum systems, network generalizations of quantum correlations, and the quantum Internet. My goal in this seminar talk is to give a gentle introduction to both the big picture and the main research lines. Technical details, minutiae and jargon are avoided in favour of clearly communicating the central ideas and recent trends using the unifying language of networks.
Stationary measures are measures which are equal to a convex combination of the pushforward images of themselves under finitely many maps. We will discuss some techniques for studying such measures with particular focus on when they are absolutely continuous with respect to the Lebesgue measure.
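A minimal numerical illustration (mine, not from the talk) of the definition: for f0(x) = x/2 and f1(x) = (x+1)/2 chosen with equal probability, Lebesgue measure on [0,1] is stationary, and a long random orbit samples it:

    import random

    # mu = (1/2) f0_* mu + (1/2) f1_* mu holds for Lebesgue measure on [0,1];
    # iterating a randomly chosen map approximates mu empirically.
    random.seed(0)
    x, total, N = 0.5, 0.0, 100_000
    for _ in range(N):
        x = x / 2 if random.random() < 0.5 else (x + 1) / 2
        total += x
    print(total / N)   # close to 1/2, the mean of the stationary measure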
The aim of this talk is to characterize all dynamical systems on the Cantor set that can be embedded into the interval with vanishing derivative on it. The starting motivation for this study is an old question of whether an invariant subset C ⊂ [0, 1] on which the derivative of an interval map f vanishes must contain a periodic point; this was recently answered in the negative by Ciesielski and Jasinski.
(joint work with Silvere Gangloff)