How did central banks find themselves holding the torch that keeps the shadow banks in existence? Leon Wansleben sheds some light.
You are likely to read this piece at a time when numerous other articles return to the collapse of Lehman Brothers as the most iconic, single most consequential event of “the worst banking crisis the world has ever witnessed”. But in autumn 2008, more or less coinciding with Lehman’s fall, another, less publicised but similarly consequential event took place: the Bank of England decided to turn its ad-hoc support for distressed financial institutions and failing markets into permanent policy.
After a year of reluctance and internal conflict following the failure of Northern Rock, the governor of the Bank of England, Mervyn King, and his colleagues had come around to endorsing a new version of the bank’s role as lender of last resort – that is, as the ultimate provider of liquidity for the banking sector in times of market turmoil.
I return to this decision because it marks an important step in redefining the relationship between central banking and financial markets, not just for a situation like Lehman’s fall and all that followed, but for decades to come.
Through its commitment to permanent liquidity support, the central bank would become ever more closely intertwined with finance. Federal Reserve rate cuts in the 1990s under its then chairman, Alan Greenspan, were made to shore up liquidity in times of crisis. Those interventions spawned the infamous “Greenspan puts” – implicit options that shielded investors from the consequences of their risk-taking strategies – which have now become an acknowledged, permanent feature of public policy.