Introduction
The mathematics that underpins today's search engines, nuclear engineering and even large language models began with a bitter ideological dispute in Tsarist Russia. Andrei Markov's 1906 proof that dependent events can still obey the law of large numbers created the "Markov chain", a tool now woven into everything from Google's PageRank to Monte-Carlo option pricing. For investors, the same logic that models neutron paths or web-page authority can illuminate price dynamics, portfolio risk and the behaviour of complex systems.
Markov chains in a nutshell
A Markov chain is a sequence where the probability of the next state depends only on the current state, not on the path taken to arrive there. This “memoryless” property allows enormous simplification of otherwise intractable problems.
- States: discrete possibilities (e.g., “bull”, “bear”, “flat” market regimes).
- Transitions: probabilities of moving from one state to another.
- Stationary distribution: long-run proportion of time spent in each state, giving insight into equilibrium behaviour.
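The three ingredients above fit together in a few lines of code. Here is a minimal sketch of a bull/bear/flat regime chain; the transition probabilities are illustrative assumptions, not estimated from data:

```python
import numpy as np

# Hypothetical three-state market regime chain. Each row gives the
# probabilities of moving from that state to each state (rows sum to 1).
states = ["bull", "bear", "flat"]
P = np.array([
    [0.85, 0.10, 0.05],   # from bull
    [0.15, 0.75, 0.10],   # from bear
    [0.30, 0.20, 0.50],   # from flat
])

# Stationary distribution: the long-run share of time in each state,
# found here by iterating the chain until the distribution settles.
pi = np.ones(3) / 3
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()

print({s: round(float(p), 3) for s, p in zip(states, pi)})
```

Because the chain is "memoryless", the full long-run behaviour falls out of the 3x3 matrix alone; no price history is needed beyond the current state.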
From neutrons to Nasdaq: the Monte-Carlo revolution
Stanislaw Ulam and John von Neumann adapted Markov’s insight to simulate neutron diffusion for the Manhattan Project. Their “Monte-Carlo method” generates thousands of random paths, counts the outcomes and builds a probability distribution. The same technique is now standard for:
- Pricing path-dependent options (Asian, barrier, look-back).
- Estimating tail risk (VaR, expected shortfall).
- Stress-testing portfolios under non-normal shocks.
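The tail-risk use case can be sketched in a few lines. The portfolio parameters below (6% expected annual return, 10% annual volatility, normal daily shocks) are placeholder assumptions, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: 6% expected annual return, 10% annual
# volatility, 252 trading days, independent normal daily log-returns.
n_paths, n_days = 10_000, 252
mu, sigma = 0.06 / 252, 0.10 / np.sqrt(252)

# Simulate daily log-return paths and compound to 12-month returns.
daily = rng.normal(mu, sigma, size=(n_paths, n_days))
terminal = np.exp(daily.sum(axis=1)) - 1

var_95 = np.percentile(terminal, 5)            # 5% Value-at-Risk
es_95 = terminal[terminal <= var_95].mean()    # expected shortfall

print(f"95% VaR: {var_95:.2%}, expected shortfall: {es_95:.2%}")
```

Swapping the normal draws for regime-dependent or fat-tailed shocks is what turns this toy into a genuine stress test.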
The chart above shows how implied volatility (VIX) tends to cluster—exactly the kind of persistence a Markov regime-switching model can capture.
PageRank and the wisdom of link analysis
Larry Page and Sergey Brin modelled the web as a Markov chain where each page is a state and hyperlinks are transition probabilities. The stationary distribution yields a “PageRank” score that measures authority. Investors can borrow the same logic:
- Credit networks: rank counterparty risk by how many other high-quality institutions trade with a given firm.
- Factor crowding: identify which factors (momentum, quality, low-vol) are becoming “central” in the market graph.
- ESG scoring: propagate sustainability ratings through supply-chain links.
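The factor-crowding idea can be sketched with PageRank's power iteration on a toy graph. The factor names and links below are illustrative assumptions, not market data:

```python
import numpy as np

# Toy directed graph: an edge from one factor to another stands in for
# a dependence between them. Purely illustrative.
links = {
    "momentum": ["quality", "low_vol"],
    "quality": ["low_vol"],
    "low_vol": ["momentum"],
    "value": ["momentum", "quality"],
}
nodes = list(links)
n = len(nodes)
idx = {name: i for i, name in enumerate(nodes)}

# Column-stochastic transition matrix: each node spreads its score
# evenly over its outgoing links.
M = np.zeros((n, n))
for src, dsts in links.items():
    for dst in dsts:
        M[idx[dst], idx[src]] = 1 / len(dsts)

# PageRank with damping d: r = d*M@r + (1-d)/n, iterated to a fixed point.
d = 0.85
r = np.ones(n) / n
for _ in range(100):
    r = d * (M @ r) + (1 - d) / n

ranking = sorted(zip(nodes, r), key=lambda t: -t[1])
print(ranking)
```

Here "value" receives no inbound links, so it ends up with the minimum score; a factor that many others point to accumulates "centrality" the same way an authoritative web page accumulates PageRank.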
Regime-switching models for markets
Financial time-series exhibit regime changes—low-vol bull phases morph into high-vol bear markets. A two-state Markov chain can estimate:
- Probability of remaining in the current regime.
- Expected duration before the next switch.
- Impact on expected returns and volatility.
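A two-state chain makes those three quantities concrete. In the sketch below the persistence probabilities, drifts and volatilities are assumptions chosen for illustration; a real application would fit them to data. Expected regime duration follows from the geometric distribution: 1 / (1 - p_stay).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative regimes: calm (low vol) and stressed (high vol).
p_stay = np.array([0.97, 0.90])    # P(remain in regime 0), P(remain in 1)
sigma = np.array([0.008, 0.025])   # daily return volatility per regime
mu = np.array([0.0005, -0.0010])   # daily drift per regime

# Expected duration of each regime is geometric: 1 / (1 - p_stay).
durations = 1 / (1 - p_stay)       # roughly 33 and 10 trading days

# Simulate a regime path, then returns driven by the active regime.
n = 2520
regime = np.empty(n, dtype=int)
regime[0] = 0
for t in range(1, n):
    stay = rng.random() < p_stay[regime[t - 1]]
    regime[t] = regime[t - 1] if stay else 1 - regime[t - 1]
returns = rng.normal(mu[regime], sigma[regime])

print(f"expected durations (days): {np.round(durations, 1)}")
print(f"realised vol by regime: {returns[regime == 0].std():.4f} / "
      f"{returns[regime == 1].std():.4f}")
```

The simulated series shows exactly the volatility clustering described above: quiet stretches punctuated by persistent high-vol episodes, rather than shocks scattered uniformly through time.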
Overlaying regime probabilities on the S&P 500 and ASX 200 illustrates how transitions often coincide with turning points in gold and the Australian dollar.
Putting into practice: actionable steps for investors
- Model regime risk: Fit a Markov regime-switching model to monthly returns of your core equity ETF. Use the smoothed probabilities to scale position sizes—reduce exposure when the high-vol regime probability exceeds 60 %.
- Monte-Carlo your drawdowns: Simulate 10,000 price paths for a 60/40 portfolio, allowing volatility and correlations to switch regimes. Report the 5 % tail loss over a 12-month horizon.
- Back-test with memoryless assumptions: When building algorithmic strategies, test whether performance persists if you randomise the order of trades while preserving transition probabilities—this guards against over-fitting to path dependence.
- Monitor factor centrality: Construct a directed graph where nodes are factors and edges are rolling correlations. Run a PageRank-style algorithm to detect crowding before it unwinds.
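The first step above can be sketched as a simple scaling rule. The regime probabilities here are a made-up placeholder for the smoothed output of a fitted regime-switching model:

```python
import numpy as np

# Placeholder smoothed probabilities of being in the high-vol regime,
# one per month. In practice these come from a fitted model.
p_highvol = np.array([0.10, 0.35, 0.62, 0.81, 0.55, 0.20])

# Full exposure while the high-vol probability is at or below 60%;
# above the threshold, cut exposure in proportion to the excess.
threshold = 0.60
exposure = np.where(
    p_highvol <= threshold,
    1.0,
    1.0 - (p_highvol - threshold) / (1 - threshold),
)

print(np.round(exposure, 2))
```

The linear taper is one of many possible choices; the point is that position size becomes a function of the current regime estimate rather than a fixed constant.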
Limitations and forward look
Markov chains assume the future depends only on the present. In reality, feedback loops—such as volatility-induced deleveraging—can violate this assumption. Newer techniques (hidden Markov models, LSTM networks) extend the framework, yet the core insight remains: complex systems can often be tamed by focusing on the current state and its immediate transitions.