Sunday, March 15, 2009

Sunday UnFunnies

Here's your Sunday supplement-- excerpts from a Financial Times article that offers an interesting analysis of how changes in banking over the last several decades helped create the current crisis.

From "Lost through destructive creation" by Gillian Tett

Six years ago, Ron den Braber was working at Royal Bank of Scotland in London when he became worried that the bank’s models were underestimating the risk of credit products. But when the Dutch statistical expert alerted his bosses to the problem, he faced so much disapproval that he eventually left.

“I started off saying things gently ... but no one wanted to listen,” Mr den Braber recalls. The reason, he believes, lay in “groupthink ... and pressure to get business done” – as well as a sheer lack of understanding about how the models worked.

Tales of that nature go some way to explaining how the west’s big banks brought themselves to their present plight and tipped the world into recession. Their writedowns are running at $1,000bn (€795bn, £725bn), according to the Institute of International Finance, the banking industry’s Washington lobby group. The Bank of England says losses arising from banks having to mark their investments down to market prices stand at $3,000bn, equivalent to about a year’s worth of British economic production. On Monday, the Asian Development Bank estimated that financial assets worldwide could by now have fallen by more than $50,000bn – a figure of the same order as annual global output.

It is imperative for policymakers, bankers, investors and voters to understand more clearly what went so badly wrong with 21st-century finance. Certainly, there is no shortage of potential culprits: naked greed, lax regulation, excessively loose monetary policy, fraudulent borrowing and managerial failure all played a role (as in earlier periods of boom and bust).

Another problem was at play: the extraordinary complexity and opacity of modern finance. During the past two decades, a wave of innovation has reshaped the way markets work, in a manner that once seemed able to deliver huge benefits for all concerned. But this innovation became so intense that it outran the comprehension of most ordinary bankers – not to mention regulators.

The current crisis stems from changes that have been quietly taking root in the west for many years. Half a century ago, banking appeared to be a relatively simple craft. When commercial banks extended loans, they typically kept those on their own books – and they used rudimentary calculations (combined with knowledge of their customers) when deciding whether to lend or not.

From the 1970s onwards, however, two revolutions occurred: banks started to sell their credit risk on to third-party investors in the blossoming capital markets; and they adopted complex computer-based systems for measuring credit risk that were often imported from the hard sciences – and designed by statistical “geeks” such as Mr den Braber at RBS.
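To make the model-risk point a bit more concrete, here is a deliberately simplified Python sketch of a one-factor portfolio default simulation. It is not the machinery RBS or any other bank actually used, and the loan count, default probability and correlation figures are invented; the sketch simply shows that the assumed default correlation, far more than the average default rate, determines how bad the modelled "worst case" loss looks.

```python
# Toy one-factor Gaussian model of portfolio defaults (illustrative only;
# not the proprietary models the article describes). It shows how the
# assumed default correlation drives the tail of the loss distribution.
import random
from statistics import NormalDist

N_LOANS = 100        # loans in the portfolio (assumed)
P_DEFAULT = 0.02     # assumed 2% chance each loan defaults over the horizon
N_SIMS = 20_000

threshold = NormalDist().inv_cdf(P_DEFAULT)

def simulate_losses(correlation: float, rng: random.Random) -> list[int]:
    """Count defaults per scenario under a one-factor Gaussian model."""
    a = correlation ** 0.5
    b = (1 - correlation) ** 0.5
    losses = []
    for _ in range(N_SIMS):
        market = rng.gauss(0, 1)   # shared factor (think: the housing market)
        defaults = sum(
            1 for _ in range(N_LOANS)
            if a * market + b * rng.gauss(0, 1) < threshold
        )
        losses.append(defaults)
    return losses

def tail_loss(losses: list[int], quantile: float = 0.999) -> int:
    """Loss level exceeded only (1 - quantile) of the time."""
    return sorted(losses)[int(quantile * len(losses)) - 1]

rng = random.Random(42)
for rho in (0.05, 0.30):
    losses = simulate_losses(rho, rng)
    print(f"correlation={rho:.2f}  mean defaults={sum(losses)/len(losses):.1f}  "
          f"99.9% tail={tail_loss(losses)} of {N_LOANS}")
```

Run as written, the average number of defaults is about the same in both cases, but the 99.9 per cent tail loss is several times larger once the correlation assumption rises. That is the kind of sensitivity a model calibrated on benign data can quietly hide.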

Until the summer of 2007, most investors, bankers and policymakers assumed that those revolutions represented real “progress” that was beneficial for the economy as a whole. Regulators were delighted that banks were shedding credit exposures, since crises such as the 1980s US savings and loan debacle had demonstrated the dangers of banks being exposed to a concentrated type of lending. The dispersion of credit risk “has helped to make the banking and overall financial system more resilient”, the International Monetary Fund proclaimed in April 2006, expressing a widespread western belief.

Bankers were even more thrilled, because when they repackaged loans for sale to outside investors, they garnered fees at almost every stage of the “slicing and dicing” chain. Moreover, when banks shed credit risk, regulators permitted them to make more loans – enabling more credit to be pumped into the economy, creating even more bank fees. By early 2007, financial officers at Britain’s Northern Rock gleefully estimated that they could extend three times more loans, per unit of capital, than five years earlier. That was because they were turning their mortgages into bonds and were thus able to meet regulatory guidelines in a more “efficient” manner.
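The Northern Rock arithmetic is easy to sketch. The capital charges below are assumptions chosen for illustration (an 8 per cent charge on loans kept on the books versus an assumed 2.5 per cent effective charge once they are repackaged as bonds), not the bank's actual regulatory figures, but they show how the same capital base can support a multiple of the lending.

```python
# Illustrative arithmetic only: the capital charges below are assumptions,
# not Northern Rock's actual regulatory numbers. The point is simply that
# if securitisation cuts the capital a bank must hold per pound of loans,
# the same capital base supports a multiple of the lending.
capital = 1_000                        # units of shareholder capital available
charge_on_balance_sheet = 0.08         # assumed charge per unit of retained loans
charge_after_securitisation = 0.025    # assumed effective charge once loans are
                                       # repackaged as bonds and sold on

loans_retained = capital / charge_on_balance_sheet
loans_securitised = capital / charge_after_securitisation

print(f"Lending if loans stay on the books : {loans_retained:,.0f}")
print(f"Lending if loans are securitised   : {loans_securitised:,.0f}")
print(f"Multiple                           : {loans_securitised / loans_retained:.1f}x")
```

The output is roughly the threefold increase in lending per unit of capital that the article mentions.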

But as innovation grew more intense, it also became plagued with a terrible irony. In public, the financiers at the forefront of the revolution depicted the shifts as steps that would promote a superior form of free-market capitalism. When a team at JPMorgan developed credit derivatives in the late 1990s, a favourite buzzword in their market literature was that these derivatives would promote “market completion” – or more perfect free markets.

In reality, many of the new products were so specialised that they were never traded in “free” markets at all. An instrument known as “collateralised debt obligations of asset-backed securities” was a case in point. This gizmo turned up in the middle of this decade when bankers created bundles of mortgage-linked bonds, often intermingled with other credit derivatives. The alphabet soup of abbreviations this generated was often as baffling as the products that the acronyms represented. In 2006 and early 2007, no less than $450bn worth of these “CDO of ABS” securities were produced. Instead of being traded, most were sold to banks’ off-balance-sheet entities such as SIVs – “structured investment vehicles” – or simply left on the books.
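The mechanics behind that "slicing and dicing" can be caricatured in a few lines of Python. The tranche boundaries and pool losses below are invented, and real CDO-of-ABS structures were far more complicated, but the sketch shows the basic waterfall: losses on the underlying pool eat through the junior slices first, and the "triple-A" senior slice suffers only once losses exceed its cushion.

```python
# A deliberately simplified sketch of a tranche loss waterfall: pool losses
# are allocated to tranches from the bottom up. Boundaries and loss figures
# are invented for illustration; real CDO-of-ABS deals were far more complex.
from dataclasses import dataclass

@dataclass
class Tranche:
    name: str
    attach: float   # pool loss (as a fraction) at which this tranche starts losing
    detach: float   # pool loss at which this tranche is wiped out

    def loss_fraction(self, pool_loss: float) -> float:
        """Fraction of this tranche's principal written off for a given pool loss."""
        hit = min(max(pool_loss - self.attach, 0.0), self.detach - self.attach)
        return hit / (self.detach - self.attach)

capital_structure = [
    Tranche("equity",     0.00, 0.03),
    Tranche("mezzanine",  0.03, 0.07),
    Tranche("senior AAA", 0.07, 1.00),
]

for pool_loss in (0.02, 0.06, 0.15):   # 2%, 6%, 15% losses on the mortgage pool
    print(f"pool loss {pool_loss:.0%}: " + ", ".join(
        f"{t.name} {t.loss_fraction(pool_loss):.0%}" for t in capital_structure))
```

The fragility the article goes on to describe comes from the cushion itself being calibrated with the same optimistic models: once pool losses blew through the assumed range, the senior slices were no longer remotely risk-free.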

That made a mockery of the idea that innovation had helped to disperse credit risk. It also undermined any notion that banks were using “mark to market” accounting systems: since most banks had no market price for these CDOs (or much else), they typically valued them by using theoretical calculations from models. The result was that a set of innovations that were supposed to create freer markets actually produced an opaque world in which risk was being concentrated – and in ways almost nobody understood. By 2006, it could “take a whole weekend” for computers to perform the calculations needed to assess the risks of complex CDOs, admit officials at Standard & Poor’s rating agency.
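What valuing a position "by using theoretical calculations from models" can look like is sketched below. This is a toy discounted-cashflow calculation with invented inputs (coupon, maturity, discount rate and an assumed annual loss rate), not any bank's actual methodology; the point is how far the reported value swings when the single most uncertain input, the loss assumption, changes.

```python
# A minimal sketch of "marking to model" when no market price exists: the
# holder discounts expected cashflows under an assumed loss rate. All the
# inputs below (loss rates, coupon, discount rate, maturity) are invented;
# the point is how sensitive the reported value is to the model assumptions.
def model_value(face: float, coupon: float, years: int,
                annual_loss_rate: float, discount_rate: float) -> float:
    """Present value of expected cashflows on a toy credit note."""
    value = 0.0
    surviving = 1.0                       # fraction of principal still performing
    for t in range(1, years + 1):
        surviving *= (1.0 - annual_loss_rate)
        cashflow = face * coupon * surviving
        if t == years:
            cashflow += face * surviving  # return of surviving principal at maturity
        value += cashflow / (1.0 + discount_rate) ** t
    return value

for loss_rate in (0.005, 0.03, 0.08):     # benign versus stressed assumptions
    v = model_value(face=100, coupon=0.06, years=5,
                    annual_loss_rate=loss_rate, discount_rate=0.05)
    print(f"assumed annual loss rate {loss_rate:.1%}: model value {v:.1f} per 100 face")
```

With these inputs, moving the assumed loss rate from a benign 0.5 per cent to a stressed 8 per cent cuts the model value from roughly par to around 70 per cent of face, without any market transaction taking place. That, in essence, is the dynamic behind the writedowns described below.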

Most investors were happy to buy products such as CDOs because they trusted the value of credit ratings. Meanwhile, the banks were making such fat profits they had little incentive to question their models – even when specialists such as Mr den Braber tried to point out the flaws.

In July 2007, this blind faith started to crack. Defaults had started to rise on US subprime mortgages. Agencies such as S&P cut ratings for mortgage-linked products and admitted that their models were malfunctioning. That caused such shock that investors such as money market funds stopped purchasing notes issued by shadowy entities such as SIVs. The gangrene of fear began to infect “real” banks, which investors realised were exposed to SIVs in unexpected ways. “In spite of more than 30 years in the business, I was unaware of the extent of banks’ off-balance-sheet vehicles such as SIVs,” Anthony Bolton, president of investments at Fidelity International, recently observed.

From 2005, banks such as Merrill Lynch, Citigroup and UBS had been stockpiling instruments such as CDOs. “We never paid much attention ... because our risk managers said those instruments were triple-A,” recalls Peter Kurer, UBS chairman. But when subprime delinquencies rose, accountants demanded that banks revalue these instruments.

By the spring of 2008, Citi, Merrill and UBS had collectively written down $53bn. Shockingly, two-thirds of that stemmed from supposedly triple-A CDOs, which by then were deemed to be worth only half of their face value. In financial services, this “was the era when models failed”, as Joshua Rosner, an American economist, has put it.

Banks tried to plug the gap by raising more than $200bn in new capital. But the hole kept deepening. As a result, trust in the ability of regulators to monitor the banks crumbled. So did faith in banks. Then, as models lost credibility, investors shunned all forms of complex finance.

Last September, the final pillar of faith collapsed. Most investors had assumed the US government would never let a large financial group fail. But when Lehman Brothers went bankrupt, distrust and disorientation spiralled. Most funding markets seized up. Prices went haywire; banks and asset managers discovered that all their trading and hedging models had broken down. “Nothing in the capital markets worked any more,” says the chief risk officer at a large western bank. The system, as Mervyn King, governor of the Bank of England, noted a few weeks later, was “on the precipice”.

Today, as they seek new pillars of trust for finance, governments are stepping in to replace many market functions. The US Treasury is conducting “stress tests” of banks, to boost investor confidence. In Britain the state is insuring banks against losses on their toxic assets. Banks and rating agencies are – belatedly – revamping their models. Financiers and regulators have also pledged to make the industry more transparent and standardised.

But the brutal truth is that until financial markets live up to their name – becoming places where assets are traded and priced in a credible manner – it will be difficult to rebuild investor trust. Not for nothing does the root of the word “credit” come from the Latin credere, meaning “to believe”.

The past year has shown that without faith, finance is worth naught. Rebuilding that sense of trust could take rather longer than a year.

Entire article at Financial Times:

http://www.ft.com/cms/s/0/0d55351a-0ce4-11de-a555-0000779fd2ac,dwp_uuid=ae1104cc-f82e-11dd-aae8-000077b07658.html
