Monday, October 28, 2013

SEC Examining Mutual Funds Re: Puerto Rico Bonds

The SEC is conducting nationwide, “limited scope examinations” of certain mutual funds that invest in Puerto Rico securities, according to a document obtained by The Bond Buyer and knowledgeable sources. On-site interviews are being conducted Thursday at an investment management firm with exposure to Puerto Rico through its mutual funds, according to a letter from the SEC’s San Francisco Regional Office. The letter was obtained by The Bond Buyer with the recipient’s name redacted.

The exams, which are being carried out by the SEC’s Office of Compliance Inspections and Examinations and its regional offices, are intended to make sure the commission is up to speed on how much of Puerto Rico’s $70 billion of outstanding debt is held by funds in the U.S. and what those funds are telling their shareholders about the risks involved.

Our firm has been reviewing claims for investors regarding losses in bonds issued by Puerto Rico, and in particular the sales practices of large broker-dealers, including UBS, that may have inappropriately marketed the funds. For more information on our work, visit our web site or call us at 212-509-6544.

For more information - SEC Probing Funds Holding Puerto Rico Debt 


---
The attorneys at Sallah Astarita & Cox include veteran securities litigators and former SEC Enforcement Attorneys. We have decades of experience in securities litigation matters, including the defense of enforcement actions. We represent investors, financial professionals and investment firms nationwide. For more information contact Mark Astarita at 212-509-6544 or email us.

The Next Obamacare Fiasco

Thousands Of Consumers Get Insurance Cancellation Notices Due To Health Law Change Kaiser Health News

Some health insurance gets pricier as Obamacare rolls out Los Angeles Times

Kaiser:
Health plans are sending hundreds of thousands of cancellation letters to people who buy their own coverage. ... The main reason insurers offer is that the policies fall short of what the Affordable Care Act requires starting Jan. 1.

Florida Blue, for example, is terminating about 300,000 policies, about 80 percent of its individual policies in the state. Kaiser Permanente in California has sent notices to 160,000 people – about half of its individual business in the state. Insurer Highmark in Pittsburgh is dropping about 20 percent of its individual market customers, while Independence Blue Cross, the major insurer in Philadelphia, is dropping about 45 percent.

LA Times:
Blue Shield of California sent roughly 119,000 cancellation notices out in mid-September, about 60 percent of its individual business. About two-thirds of those policyholders will see rate increases in their new policies....
Middle-income consumers face an estimated 30% rate increase, on average, in California due to several factors tied to the healthcare law. Some may elect to go without coverage if they feel prices are too high. Penalties for opting out are very small initially. Defections could cause rates to skyrocket if a diverse mix of people don't sign up for health insurance.
This is interesting. Obamacare could actually increase the number of people without insurance, because you are not allowed to keep (consumer) or sell (insurance company) simple, cheap insurance.


If you're healthy and have been paying for individual insurance all along -- largely because you know people with preexisting conditions can't get insurance, and you want to lock in your right to continue your policy should you get sick -- there is now a strong incentive to drop out.

The government has just wiped out the value of those premiums you paid all these years -- you don't need the right to buy health insurance anymore, as you can always get it later. You're seeing a large increase in premiums for benefits you don't want and to cross-subsidize other people. The mandate penalties are almost certainly going to be pushed back, the penalties are a good deal less than the cost of health insurance (which you can always get later if you get sick), and the IRS has already said it's not going after people who don't pay them. Dropping out of individual health insurance starts to make a lot of sense.

This was bad enough on its own. But if insurance companies cancel these people's policies, all at once,  it's dramatically worse. It would be hard to design a more effective "nudge" to get such people to think about it and conclude that dropping health insurance is a good idea.

The overall numbers may not change. Other reports suggest that poor and sick people have been signing up in droves, mostly to get onto expanded Medicaid. But it's an obvious fiscal disaster if Obamacare only attracts the poor and sick, does not attract the young and healthy -- and now drives away the healthy people who were provident enough to buy individual health insurance!

Why is this happening? A curious tidbit:
All these cancellations were prompted by a requirement from Covered California, the state's new insurance exchange. The state didn't want to give insurance companies the opportunity to hold on to the healthiest patients for up to a year, keeping them out of the larger risk pool that will influence future rates.
The destruction of the off-exchange individual insurance market is deliberate.

The best quote of the bunch, from the LA Times:
Pam Kehaly, president of Anthem Blue Cross in California, said she received a recent letter from a young woman complaining about a 50% rate hike related to the healthcare law.

"She said, 'I was all for Obamacare until I found out I was paying for it,'" Kehaly said.
This realization will come soon to millions more.

Wednesday, October 23, 2013

SEC Releases Crowd Funding Proposal

The Crowd Funding proposal has been released by the SEC today. If adopted, entrepreneurs and start-up companies looking for investors will be able to solicit over the Internet from the general public, a historic change in the regulatory structure regarding fund raising. Current regulations effectively limit solicitation to accredited investors - those with a net worth of at least $1 million, excluding the value of their homes, or annual income of more than $200,000. The crowdfunding rule would let small businesses raise up to $1 million a year by tapping unaccredited investors.

We will have more once we review the proposal, but for those who can't wait, the proposal is here - SEC Crowd Funding Proposal. The SEC's Press Release - SEC Issues Proposal on CrowdFunding is at their site, as is their site to Submit Comments on CrowdFunding.


Ex-Merrill Broker Sentenced to Three Years for Sales Practice Violations

A federal judge in the U.S. District Court for the Eastern District of Missouri sentenced Greg J. Campbell to more than three years in prison on charges that he had siphoned off almost $2 million from client accounts, including that of an 85-year-old client with dementia, for personal use.

According to press reports, Campbell was ordered to repay $1.8 million in restitution and forfeit property purchased with client funds. Campbell’s scheme began in September 2007 while he was a broker working for Merrill Lynch, according to a court filing by the U.S. Attorney’s office in St. Louis, Mo.

The reports continue, saying that he opened and oversaw Loan Management Accounts, which are credit lines collateralized by securities held in customer accounts, under his clients’ names, and then used those funds for down payments on a personal residence, mortgage payments, vehicle lease payments and living expenses, the filing said. He hid the activities by falsifying signatures on letters of authorization, replacing balances in one account with those of another client, and sending account statements to unrelated addresses to which only Campbell had access, according to the complaint.

More details are available at Ex-Merrill Broker Sentenced to More Than 3 Years 

Wall Street Profit May Drop 37% Bitten by Laws, Congress

Wall Street’s profit may fall 37 percent this year, hurt during the second half by rising interest rates, legal costs and budget turmoil in Washington, New York State Comptroller Thomas DiNapoli said. DiNapoli forecast securities industry earnings at $15 billion in 2013 compared with $23.9 billion the year before, while employment has fallen near a post-recession low.

A drop in profit may crimp bonuses, which reached an estimated $20 billion for 2012, he said. “The political gridlock in Washington may take a bite out of the securities industry’s profits for the fourth quarter,” DiNapoli, 59, said in a statement. “Washington’s inability to resolve budget and fiscal issues is bad for business.”

An impasse over spending and raising the nation’s borrowing limit led to a partial shutdown of U.S. government operations this month, as Republicans in Congress fought with Democrats over paring back Obamacare. The resulting turmoil rocked equities and pushed prices higher in the $4.1 trillion market for federal debt. That may lower earnings in the securities industry, which helps drive the city’s economy, DiNapoli said. “Failure to resolve the federal budget and debt ceiling impasse could disrupt the economy and hurt New York City and New York state,” said DiNapoli, a Democrat. Congress put off both issues with short-term fixes setting new deadlines next year.

For more information - Wall Street Profit May Drop 37% Bitten by Laws, Congress - Bloomberg

 

Is the JPMorgan Settlement a Template for Other Bank Settlements?

J.P. Morgan Chase’s settlement may be just a template for more settlements on Wall Street. According to an article at MarketWatch, the Justice Department is planning to use the reported $13 billion settlement with J.P. Morgan as a blueprint for other similar settlements.

The deal to settle investigations by prosecutors into the firm’s issuance of bad mortgage investments to investors before the financial crisis could just be the start of many large settlements in the banking industry. The Justice Department plans to use a 1980s law which carries a lower burden of proof and gives prosecutors 10 years, instead of the standard 5 years, to pursue these cases. Some of the settlement money would have to go directly to struggling consumers, under the new model.

J.P. Morgan has been accused of selling troubled mortgage securities, many of which originated from its acquisitions of Bear Stearns and Washington Mutual. Almost every major Wall Street firm issued similar mortgage securities before the crisis and could become targets under this new approach by prosecutors. 

When the housing boom crashed five years ago, investors lost billions on their investments and the banks were accused of intentionally selling bad mortgages. Now five years later, banks, including Bank of America Corp. and Citigroup, have been saddled with ongoing litigation from the regulators, prosecutors, states and investors.

This switch in prosecution theories, which effectively extends the statute of limitations, will undoubtedly result in longer investigations and more uncertainty for targets and potential targets of those investigations.

Tuesday, October 22, 2013

How the JPMorgan deal could curtail credit

The Justice Department's potential $13 billion settlement with JPMorgan may go a long way toward appeasing consumers' anger at big banks for the financial crisis, but it probably won't help those same consumers get a mortgage. In fact, it may make it harder.

For more information - How the JPMorgan deal could curtail credit

 

BofA Said to Face Three More U.S. Probes of Mortgage-Bond Sales

Bloomberg is reporting that Bank of America Corp., sued by U.S. attorneys in August over an $850 million mortgage bond, faces three additional Justice Department civil probes over mortgage-backed securities, according to two people with direct knowledge of the situation.

U.S. attorneys offices in Georgia and California are examining potential violations tied to Countrywide Financial Corp., the subprime lender Bank of America bought in 2008, said the people, who asked not to be identified because the inquiries aren’t public. U.S. attorneys in New Jersey are looking into deals involving Merrill Lynch & Co., purchased by the firm in 2009, the people said.

If claims are brought, Bank of America would join JPMorgan Chase & Co. (JPM) in facing Justice Department demands that it resolve liabilities inherited while buying weakened rivals at the government’s urging during the credit crisis. JPMorgan, the biggest U.S. bank, reached a tentative $13 billion agreement last week to end civil claims over mortgage-bond sales, including those handled by Bear Stearns Cos. and Washington Mutual Inc. operations purchased in 2008. Bank of America, led by Chief Executive Officer Brian T. Moynihan, 54, is being examined for violations of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, a relic of the savings-and-loan crisis known as FIRREA, according to the people. The Justice Department cited that statute in its August lawsuit against the firm, which is the nation’s second-largest lender after JPMorgan.

For more information - BofA Said to Face Three More U.S. Probes of Mortgage-Bond Sales - Bloomberg

---

The attorneys at Sallah Astarita & Cox include veteran securities litigators and former SEC Enforcement Attorneys. We have decades of experience in securities litigation matters, including the defense of enforcement actions. We represent investors, financial professionals, investment firms and brokers nationwide. For more information contact Mark Astarita at 212-509-6544 or email us.

Thursday, October 17, 2013

SEC Loses Mark Cuban Suit

Years ago, in what we viewed as a far too convenient allegation, the SEC accused billionaire Mark Cuban of insider trading. The allegations were odd - the SEC alleged that the CEO in question told Mr. Cuban that he had confidential information to provide to him, and that Mr. Cuban agreed to keep it confidential. That allegation raises the question: can the CEO of a public company voluntarily provide material, non-public information to someone, and thereby prevent that person from trading? If so, it is a great way to keep your largest shareholder from selling his stock - call him up and give him some inside information.

I have a number of blog posts on the case. All are collected in Mark Cuban SEC. All predicted a loss for the SEC, given the sheer lack of legal weight to the claims.
But being right is often not enough. It is an unfortunate part of our society that the government often wins cases simply because the target of its ire does not have the ability to fight back. There are countless examples of small brokerage firms, investors and individual brokers who settle SEC or FINRA cases simply because they cannot afford to fight, even though they are right.

I had the pleasure to represent a broker who did not back down from a fight with FINRA, who refused to settle with them when he was right and FINRA was wrong. It was a time-consuming and expensive fight, but we won, and FINRA lost.

It was therefore a pleasure to watch Mark Cuban fight back. He certainly has the financial ability, but he also had the nerve to do so. And, after only a few hours of deliberation, much of which was probably spent discussing football so as not to embarrass the SEC, the jury in federal district court in Dallas said that the Securities and Exchange Commission failed to prove the key elements of its case, including the claim that Cuban agreed to keep certain information confidential and not trade on it.

During an impromptu news conference outside the courthouse, Mark Cuban angrily denounced the SEC and its lead trial attorney, Jan Folena, saying that they lied about the evidence and targeted him because of his fame.

Mr. Cuban acknowledged that defendants of lesser wealth could have been bullied.
''Hopefully people will start paying attention to how the SEC does business,'' Cuban said. ''I'm the luckiest guy in the world. I'm glad this happened to me. I'm glad I'm able to be the person who can afford to stand up to them.''
For more information - Jury says Cuban did not commit insider trading 

Wednesday, October 16, 2013

Puerto Rico Seeks To Calm Rattled Muni Bond Investors

Puerto Rico officials sought to soothe the fears of edgy investors Tuesday as the market continues to price risk into the tropical island’s widely held municipal bonds. “These are not just constitutional obligations, but also moral obligations,” said Governor Alejandro Garcia Padilla in an investor webinar, reaffirming the island’s intent to make good on its debt obligations.

Puerto Rico has roughly $70 billion in outstanding municipal bonds, which hold the lowest investment grade rating from major rating agencies. Its debt has long been held in municipal bond funds because its bonds are exempt from local, state, and federal taxes, but investor concerns have grown.

Another aspect of this bond debacle is the marketing of the bonds by some brokerage firms, notably UBS. We are investigating and prosecuting claims against UBS for those practices, as well as against other broker-dealers that may not have properly disclosed the risks in the Puerto Rico bonds.

For more information - Puerto Rico seeks to calm rattled muni bond investors


Tuesday, October 15, 2013

Bob Shiller's Nobel

As with Lars Hansen and Gene Fama, Bob Shiller has also produced a span of interesting, innovative work that I can't possibly cover here. Again, don't let a Nobel Prize for one contribution overshadow the rest. In addition to volatility, Bob did (with Grossman and Melino) some of the best and earliest work on the consumption model, and his work on real estate and innovative markets is justly famous. But space is limited, so again I'll just focus on volatility and predictability of returns, which is at the core of the Nobel.

[Figure: the actual stock price p and the ex-post rational price p*. Source: American Economic Review]
The graph on the left comes from Bob's June 1981  American Economic Review paper. Here Bob contrasts the actual stock price p with the "ex-post rational" price p*, which is the discounted sum of actual dividends. If price is the expected discounted value of dividends, then price should vary less than the actual discounted value of ex-post dividends.  Yet the actual price varies tremendously more than this ex-post discounted value.
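The logic of the bound, in standard notation (a restatement, not a quote from the paper): if the price is the rational forecast of the ex-post rational price, $p_t = E_t(p^*_t)$, then $p^*_t = p_t + u_t$ where the forecast error $u_t$ is uncorrelated with $p_t$, so

$$ \mathrm{var}(p^*_t) \;=\; \mathrm{var}(p_t) + \mathrm{var}(u_t) \;\ge\; \mathrm{var}(p_t). $$

Shiller's plot shows that inequality violated, dramatically.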

This was a bombshell. It said to those of us watching at the time (I was just starting graduate school) that you Chicago guys are missing the boat. Sure, you can't forecast stock returns. But look at the wild fluctuations in prices! That can't possibly be efficient. It looks like a whole new category of test, an elephant in the room that the Fama crew somehow overlooked while running little regressions. It looks like prices are incorporating information -- and then a whole lot more! Shiller interpreted it as psychological and social dynamics, waves of optimism and pessimism.


(Interestingly, Steve LeRoy and Richard Porter also wrote an essentially contemporary paper on volatility bounds in the May 1981 Econometrica, The Present Value Relation: Tests Based on Implied Variance Bounds, which has been pretty much forgotten. I think Shiller got a lot more attention because of the snazzy graph, and the seductive behavioral interpretation. This is not a criticism. As I've said of the equity premium, knowing what you have and marketing it well matters. Deirdre McCloskey tells us that effective rhetoric is important, and she's right. Most great work emerges as the star among a lot of similar efforts. Young scholars take note.)

But wait, you say. "Detrended by an exponential growth factor?" You're not allowed to detrend a series with a unit root. And what exactly is the extra content, overlooked by Fama's return forecasting regressions? Aha, a 15-year investigation took off, as a generation of young scholars dissected the puzzle. Including me. Well, you get famous in economics for inducing lots of people to follow you, and Shiller (like Fama and Hansen) is justly famous here by that measure.

My best attempt at summarizing the whole thing is in the first few pages of "Discount Rates," and the theory section of that paper. For a better explanation, look there. Here is the digested version.

Along the way I wrote "Volatility Tests and Efficient Markets" (1991) establishing the equivalence of volatility tests and return regressions, "Explaining the Variance of Price-Dividend Ratios" (1992), an up-to-date volatility decomposition, "Permanent and Transitory Components of GNP and Stock Prices" (1994), "The Dog That Did Not Bark" (2008), three review papers, an extended chapter in my textbook "Asset Pricing" covering volatility, bubbles and return regressions, and last but not least an economic model that tries to explain it all, "By Force of Habit" (1999) with John Campbell. And that's just me. Read the citations in the Nobel Committee's "Understanding Asset Prices." John Campbell's list is three times as long and distinguished.

So, in the end, what do we know? A modern volatility test starts with the Campbell-Shiller linearized present value relation
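Written out in its standard form (approximately, and up to a constant), the identity is

$$ p_t - d_t \;=\; \sum_{j=1}^{\infty}\rho^{\,j-1}\,\Delta d_{t+j} \;-\; \sum_{j=1}^{\infty}\rho^{\,j-1}\, r_{t+j} \;+\; \lim_{j\to\infty}\rho^{\,j}\left(p_{t+j}-d_{t+j}\right). $$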
Here p=log price, d=log dividend, r=log return and rho is a constant about 0.96. This is just a clever linearization of the rate of return -- you can rearrange it to read that the long run return equals final price less initial price plus intermediate dividends. Conceptually, it is no different than reorganizing the definition of return to
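The definition of return is $R_{t+1}=(P_{t+1}+D_{t+1})/P_t$; rearranged (presumably the form intended here),

$$ P_t \;=\; \frac{D_{t+1}}{R_{t+1}} \;+\; \frac{P_{t+1}}{R_{t+1}}, $$

which, iterated forward, says that today's price equals intermediate dividends plus the final price, each discounted by the intervening returns.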
You can also read the first equation as a present value formula. The first term says prices are higher if dividends are higher. The second term says prices are higher if returns are lower -- the discount rate effect. The third term represents "rational bubbles."  A price can be high with no dividends if people expect the price to grow forever.

Since it holds ex-post, it also holds ex-ante -- the price must equal the expected value of the right hand side. And now we can talk about volatility: the price-dividend ratio can only vary if expected dividend growth, expected returns, or the expected bubble vary over time.

Likewise, multiply both sides of the present value identity by p-d and take expectations. On the left, you have the variance of p-d. On the right, you have the amount by which p-d forecasts dividend growth, returns, or future p-d. The price-dividend ratio can only vary if it forecasts future dividend growth, future returns, or its own long-run future.
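In symbols, the decomposition just described is

$$ \mathrm{var}(p_t-d_t) \;\approx\; \sum_{j=1}^{\infty}\rho^{\,j-1}\,\mathrm{cov}\!\left(p_t-d_t,\ \Delta d_{t+j}\right) \;-\; \sum_{j=1}^{\infty}\rho^{\,j-1}\,\mathrm{cov}\!\left(p_t-d_t,\ r_{t+j}\right) \;+\; \lim_{j\to\infty}\rho^{\,j}\,\mathrm{cov}\!\left(p_t-d_t,\ p_{t+j}-d_{t+j}\right), $$

and each covariance is just the corresponding long-run regression coefficient times $\mathrm{var}(p_t-d_t)$, which is why "forecasts" is exactly the right word.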

The question for empirical work is, which is it? The surprising answer: it's all returns. You might think that high prices relative to current dividends mean that markets expect dividends to be higher in the future. Sometimes, you'd be right. But on average, times of high prices relative to current dividends (earnings, book value, etc.) are not followed by higher future dividends. On average, such times are followed by lower subsequent long-run returns.

Shiller's graph we now understand as such a regression: price-dividend ratios do not forecast dividend growth. Fortunately, they do not forecast the third term, long-term price-dividend ratios, either -- there is no evidence for "rational bubbles." They do forecast long-run returns. And the return forecasts are enough to exactly account for price-dividend ratio volatility!

Starting in 1975 and continuing through the late 1980s, Fama and coauthors, especially Ken French, were running regressions of long-run returns on price-dividend ratios, and finding that returns were forecastable and dividend growth (or the other "complementary" variables) were not. So, volatility tests are not something new and different from regressions. They are exactly the same thing as long-run return forecasting regressions. Return forecastability is exactly enough to account for price-dividend volatility. Price-dividend volatility is another implication of return forecastability -- and an interesting one at that! (Lots of empirical work in finance is about seeing the same phenomenon through different lenses that shows its economic importance.)
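To make the equivalence concrete, here is a tiny sketch of a long-horizon forecasting regression on simulated data (an illustration of the mechanics only; the series, parameters, and persistence are made up, not Fama and French's data):

```python
import numpy as np

# Simulated "expected returns move, dividend growth doesn't" world, then the
# Fama-French style regression: cumulative future returns on today's dividend yield.
np.random.seed(0)
T, rho = 10_000, 0.94            # sample length (years), persistence of the dividend yield
dp = np.zeros(T)                 # log dividend-price ratio, demeaned
ret = np.zeros(T)                # log returns
for t in range(T - 1):
    dp[t + 1] = rho * dp[t] + 0.15 * np.random.randn()
    ret[t + 1] = 0.1 * dp[t] + 0.15 * np.random.randn()   # expected return rises with d-p

def long_horizon_slope(y, x, k):
    """OLS slope of the k-period-ahead cumulative y on current x."""
    y_long = np.array([y[t + 1:t + 1 + k].sum() for t in range(len(y) - k)])
    return np.polyfit(x[:len(y_long)], y_long, 1)[0]

for k in (1, 5, 10):
    print(f"horizon {k:>2}: slope of future returns on d-p = {long_horizon_slope(ret, dp, k):.2f}")
# Slopes grow with horizon: high prices (low d-p) forecast low long-run returns.
```

The slopes grow with the horizon because the dividend yield is persistent -- small one-year forecastability compounds into big long-horizon forecastability, which is the volatility-test result seen through the regression lens.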

And the pattern is pervasive across markets. No matter where you look -- stocks, bonds, foreign exchange, and real estate -- high prices mean low subsequent returns, and low prices (relative to "fundamentals" like earnings, dividends, rents, etc.) mean high subsequent returns.

These are the facts, which are not in debate. And they are a stunning reversal of how people thought the world worked in the 1970s. Constant discount rate models are flat out wrong.

So, does this mean markets are "inefficient?" Not by itself. One of the best parts of Fama's 1970 essay was to prove a theorem: any test of efficiency is a joint hypothesis test with a "model of market equilibrium." It is entirely possible that the risk premium varies through time. In the 1970s, constant expected returns were a working hypothesis, but the theory long anticipated time-varying risk premiums -- it was at the core of Merton's 1973 ICAPM -- and it surely makes sense that the risk premium might vary through time.

So here is where we are: we know the expected return on stocks varies a great deal through time. And we know that time-variation in expected returns varies exactly enough to account for all the puzzling price volatility. So what is there to argue about? Answer: where that time-varying expected return comes from.

To Fama, it is a business-cycle-related risk premium. He (with Ken French again) notices that low prices and high expected returns come in bad macroeconomic times and vice-versa. December 2008 was a recent time of low price/dividend ratios. Is it not plausible that the average investor, like our endowments, said, "sure, I know stocks are cheap, and the long-run return is a bit higher now than it was. But they are about to foreclose on the house, repossess the car, take away the dog, and I might lose my job. I can't take any more risk right now." Conversely, in the boom, when people "reach for yield", is it not plausible that people say "yeah, stocks aren't paying a lot more than bonds. But what else can I do with the money? My business is going well. I can take the risk now."

To Shiller, no. The variation in expected returns is too big, in his view, to be explained by rational variation in risk premiums across the business cycle. He sees irrational optimism and pessimism in investors' heads. Shiller's followers somehow think the government is more rational than investors and can and should stabilize these bubbles. Noblesse oblige.

Finally, the debate over "bubbles" can start to make some sense. When Shiller says "bubble," in light of the facts, he can only mean "time-variation in the expected return on stocks, less bonds, which he believes is disconnected from rational variation in the risk premium needed to attract investors." When Fama says no "bubble," he means that the case has not been proven, and it seems pretty likely the variation in stock expected returns does correspond to rational, business-cycle related risk premiums. Defining a "bubble," clarifying what the debate is about, and settling the facts, is great progress.

How are we to resolve this debate? At this level, we can't. That's the whole point of Fama's joint hypothesis theorem and its modern descendants (the existence-of-a-discount-factor theorems). "Prices are high, risk aversion must have fallen" is as empty as "prices are high, there must be a wave of irrational optimism." And as empty as "prices are high, the Gods must be pleased." To advance this debate, one needs an economic or psychological model that independently measures risk aversion or optimism/pessimism, and predicts when risk premiums are high and low. If we want to have Nobels in economic "science," we do not stop at story-telling about regressions.

One example: John Campbell and I (Interestingly, Shiller was John's PhD adviser and frequent coauthor) wrote such a model, in "By Force of Habit". It uses the history of consumption and an economic model as an independent measure of time varying risk aversion, which rises in recessions. Like any model that makes a rejectable hypothesis, it fits some parts of the data and not others. It's not the end of the story.  It is, I think, a good example of the kind of model one has to write down to make any progress.

I am a little frustrated by behavioral writing that has beautiful interpretive prose, but no independent measure of fad, or at least no number of facts explained greater than number of assumptions made. Fighting about who has the more poetic interpretation of the same regression, in the face of a theorem that says both sides can explain it, seems a bit pointless. But an emerging literature is trying to do with psychology what Campbell and I did with simple economics. Another emerging literature on "institutional finance" ties risk aversion to internal frictions in delegated management, and independent measures such as intermediary leverage.

That's where we are. Which is all a testament to Fama, Shiller, Hansen, and asset pricing. These guys led a project that assembled a fascinating and profound set of facts. Those facts changed 100% from the 1970s to the 1990s. We agree on the facts. Now is the time for theories to understand those facts. Real theories that make quantitative predictions (it is a quantitative question: how much does the risk premium vary over time?), and more predictions than assumptions.

If it all were settled, their work would not merit the huge acclaim that it has, and deserves.

Update: I'm shutting down most comments on these. For this week, let's congratulate the winners, and debate the issues some other day.

Lars Hansen's Nobel

Lars has done so much deep and pathbreaking research that I can't even begin to list it, to say nothing of explain the small part of it that I understand. I wrote whole chapters of my textbook "Asset Pricing" devoted to just one Hansen paper. Lars writes for the ages, and it often takes 10 years or more for the rest of us to understand what he has done and how important it is.

So I will just try to explain GMM and the consumption estimates, the work most prominently featured in the Nobel citation. Like all of Lars' work, it looks complex at the outset, but once you see what he did, it is actually brilliant in its simplicity.

The GMM approach basically says, anything you want to do in statistical analysis or econometrics can be written as taking an average.

For example, consider the canonical consumption-based asset pricing model, which is where he and Ken Singleton took GMM out for its first big spin. The model says, we make sense out of asset returns -- we should understand the large expected-return premium for holding stocks, and why that premium varies over time (we'll talk about that more in the upcoming Shiller post) -- by the statement that the expected excess return, discounted by marginal utility growth, should be zero
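In symbols (the standard Hansen-Singleton condition the words describe):

$$ E_t\!\left[\beta\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}\left(R_{t+1}-R^f_{t+1}\right)\right] = 0 $$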
where Et means conditional expectation, beta and gamma capture investors' impatience and risk aversion, c is consumption, R is a stock or bond return, and Rf is the riskfree rate. E(R-Rf) is the premium -- how much you expect to earn on a risky asset over a riskfree one, as compensation for risk. (Non-economists, just ignore the equations. You'll get the idea.) Expected returns vary over time and across assets in puzzling ways, but the expected discounted excess return should always be zero.

How do we take this to data? How do we find parameters beta and gamma that best fit the data? How do we check this over many different times and returns, to see if those two parameters can explain lots of facts? What do we do about that conditional expectation Et, conditional on information in people's heads? How do we bring in all the variables that seem to forecast returns over time (D/P) and across assets (value, size, etc.)? How do we handle the fact that return variance changes over time, and consumption growth may be autocorrelated?

When Hansen wrote, this was a big headache. No, suggested Lars. Just multiply by any variable z that you think forecasts returns or consumption, and take the unconditional average of this conditional average, and the model predicts  that the unconditional average obeys
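In the same notation as above, the standard form of that unconditional moment is

$$ E\!\left[\beta\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}\left(R_{t+1}-R^f_{t+1}\right) z_t\right] = 0. $$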
So, just take this average in the data. Now, you can do this for lots of different assets R and lots of different "instruments" z, so this represents a lot of averages. Pick beta and gamma that make some of the averages as close to zero as possible. Then look at the other averages and see how close they are to zero.
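Here is what that recipe looks like in code -- a minimal sketch on made-up data, with identity weighting, invented series, and a levels version of the pricing equation, not Hansen and Singleton's actual estimation:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data: gross consumption growth, a "stock" return that loads on consumption
# risk, a "bond" return, and one instrument z known at time t.
rng = np.random.default_rng(0)
T = 600
cg = 1.02 + 0.02 * rng.standard_normal(T)                           # c_{t+1}/c_t
stock = 1.05 + 3.0 * (cg - 1.02) + 0.15 * rng.standard_normal(T)    # gross stock return
bond = 1.01 + 0.01 * rng.standard_normal(T)                         # gross bond return
z = rng.standard_normal(T)                                          # an instrument

def moments(params):
    """Averages of beta*(c'/c)^(-gamma)*R - 1 for each asset, alone and scaled by z."""
    beta, gamma = params
    m = beta * cg ** (-gamma)                                        # stochastic discount factor
    errs = np.column_stack([m * stock - 1, m * bond - 1,
                            (m * stock - 1) * z, (m * bond - 1) * z])
    return errs.mean(axis=0)                                         # four sample averages

def objective(params):
    g = moments(params)
    return g @ g                                                     # identity-weighted first-stage GMM

est = minimize(objective, x0=[0.95, 2.0], method="Nelder-Mead")
print("beta, gamma:", est.x)
print("moment averages at the estimate:", moments(est.x))            # "see how close they are to zero"
```

With more moment conditions (assets times instruments) than parameters, the leftover averages are the overidentifying restrictions whose closeness to zero Lars' statistics evaluate.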

Lars worked out the statistics of this procedure -- how close should the other averages be to zero, and what's a good measure of the sample uncertainty in beta and gamma estimates -- taking into account a wide variety of statistical problems you could encounter. The latter part and the proofs make the paper hard to read. When Lars says "general" Lars means General!

But using the procedure is actually quite simple and intuitive. All of econometrics comes down to a generalized version of the formula sigma/root T for standard errors of the mean. (I recommend my book "Asset Pricing" which explains how to use GMM in detail.)
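For the record, the formula and its GMM generalization: the standard error of a sample mean is $\sigma/\sqrt{T}$, and the sampling variance of the vector of GMM moment averages $g_T(\theta)=\frac{1}{T}\sum_t m_t(\theta)$ is, roughly,

$$ \mathrm{var}\!\left[g_T(\theta)\right] \;\approx\; \frac{1}{T}\sum_{j=-\infty}^{\infty} E\!\left[m_t(\theta)\,m_{t-j}(\theta)'\right], $$

the long-run covariance matrix of the things being averaged, divided by T.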

Very cool.

The results were not that favorable to the consumption model. If you look hard, you can see the equity premium puzzle -- Lars and Ken needed huge gamma to fit the difference between stocks and bonds, but then couldn't fit the level of interest rates.  But that led to an ongoing search -- do we have the right utility function? Are we measuring consumption correctly? And that is now bearing fruit.

GMM is really famous because of how it got used. We get to test parts of the model without writing down the whole model. Economic models are quantitative parables, and we get to examine and test the important parts of the parable without getting lost in irrelevant details.

What do these words mean? Let me show you an example. The classic permanent income model is a special case of the above, with quadratic utility. If we model income y as an AR(1) with coefficient rho, then the permanent income model says consumption should follow a random walk with innovations equal to the change in the present value of future income:
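In equations (a standard version consistent with the description, with AR(1) coefficient rho and interest rate r):

$$ y_{t+1} = \rho\, y_t + \varepsilon_{t+1}, \qquad c_{t+1} - c_t = \frac{r}{1+r-\rho}\,\varepsilon_{t+1}. $$

Note that the same shock $\varepsilon_{t+1}$ appears in both equations; that will matter below.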


This is the simplest version of a "complete" model that I can write down. There are fundamental shocks, the epsilon; there is a production technology which says you can put income in the ground and earn a rate of return r, and there is an interesting prediction -- consumption smooths over the income shocks.

Now, here is the problem we faced before GMM. First, computing the solutions of this sort of thing for real models is hard, and most of the time we can't do it and have to go numerical. But just to understand whether we have some first-order way to digest the Fama-Shiller debate, we have to solve big hairy numerical models? Most of which is beside the point? The first equations I showed you were just about investors, and the debate is whether investors are being rational or not. To solve that, I have to worry about production technology and equilibrium?

Second, and far worse, suppose we want to estimate and test this model. If we follow the 1970s formal approach, we immediately have a problem. This model says that the change in consumption is perfectly correlated with income minus rho times last year's income. Notice the same error epsilon in both equations. I don't mean sort of equal, correlated, expected to be equal, I mean exactly and precisely equal, ex-post, data point for data point.

If you hand that model to any formal econometric method (maximum likelihood), it sends you home before you start. There is no perfect correlation in the data, for any parameter values. This model is rejected. Full stop.

Wait a minute, you want to say. I didn't mean this model is a complete perfect description of reality. I meant it is a good first approximation that captures important features of the data. And this correlation between income shocks and consumption shocks is certainly not an important prediction.  I don't think income is really an AR(1), and most of all I think agents know more about their income than my simple AR(1). But I can't write that down, because I don't see all their information. Can't we just look at the consumption piece of this and worry about production technology some other day?

In this case, yes. Just look whether consumption follows a random walk. Run the change in consumption on a bunch of variables and see if they predict consumption. This is what Bob Hall did in his famous test, the first test of a part of a model that does not specify the whole model, and the first test that allows us to "condition down" and respect the fact that people have more information than we do. (Lars too walks on the shoulders of giants.) Taking the average of my first equation is the same idea, much generalized.

So the GMM approach allows you to look at a piece of a model -- the intertemporal consumption part, here -- without specifying the whole rest of the model -- production technology, shocks, information sets. It allows you to focus on the robust part of the quantitative parable -- consumption should not take big predictable movements -- and gloss over the parts that are unimportant approximations -- the perfect correlation between consumption and income changes.  GMM is a tool for matching quantitative parables to data in a disciplined way.

This use of GMM is part of a large and, I think, very healthy trend in empirical macroeconomics and finance. Roughly at the same time, Kydland and Prescott started "calibrating" models rather than estimating them formally, in part for the same reasons. They wanted to focus on the "interesting" moments and not get distracted by the models' admitted abstractions and perfect correlations.

Formal statistics asks "can you prove that this model is not a 100% perfect representation of reality?" The answer is often "yes," but on a silly basis. Formal statistics does not allow you to ask "does this model capture some really important pieces of the picture?" Is the glass 90% full, even if we can prove it's missing the last 10%?

But we don't want to give up on statistics, which much of the calibration literature did. We want to pick parameters in an objective way that gives models their best shot. We want to measure how much uncertainty there is in those parameters. We want to know how precise our predictions for the "testing" moments are. GMM lets you do all these things. If you want to "calibrate" on the means (pick parameters by observations such as the mean consumption/GDP ratio, hours worked, etc.), then "test" on the variances (relative volatility of consumption and output, autocorrelation of output, etc.), GMM will let you do that. And it will tell you how much you really know about parameters (risk aversion, substitution elasticities, etc.) from those "means", how accurate your predictions about "variances" are, including the degrees of freedom chewed up in estimation!

In asset pricing, similar pathologies can happen. Formal testing will lead you to focus on strange portfolios, thousands of percent long some assets and thousands of percent short others. Well, those aren't "economically interesting." There are bid/ask spreads, price pressure, short-sale constraints and so on. So, let's force the model to pick parameters based on interesting, robust moments, and let's evaluate the model's performance on the actual assets we care about, not some wild massive long-short ("minimum variance") portfolio.

Fama long ran OLS regressions when econometricians said to run GLS, because OLS is more robust.  GMM allows you to do just that sort of thing for any kind of model -- but then correct the standard errors!

In sum, GMM is a tool, a very flexible tool. It has let us learn what the data have to say, refine models, understand where they work and where they don't, emphasize the economic intuition, and break out of the straitjacket of "reject" or "don't reject," to a much more fruitful empirical style.

Of course, it's just a tool. There is no formal definition of an "economically interesting" moment, or a "robust" prediction. Well, you have to think, and read critically.

Looking hard at first, but achieving a remarkable simplicity once you understand it, is a key trait of Lars' work. GMM really is just applying sigma/root T (generalized) to all the hard problems of econometrics, once you make the brilliant step of recognizing that they can all be mapped to a sample mean. His "conditioning information" paper with Scott Richard took me years to digest. But once you understand L2, the central theorem of asset pricing is "to every plane there is an orthogonal line." Operators in continuous time, and his new work on robust control and recursive preferences, share the same elegance.

The trouble with the Nobel is that it leads people to focus on the cited work. Yes, GMM is a classic. I got here in 1985 and everyone already knew it would win a Nobel some day. But don't let that fool you, the rest of the Lars portfolio is worth studying too. We will be learning from it for years to come. Maybe this will inspire me to write up a few more of his papers. If only he would stop writing them faster than I can digest them.

I won't even pretend this is unbiased. Lars is a close friend as well as one of my best colleagues at Chicago. I learned most of what I know about finance by shuttling back and forth between Lars' office and Gene Fama's, both of whom patiently explained so many things to me. But they did so in totally different terms, and understanding what each was saying in the other's language led me to whatever synthesis I have been able to achieve. If you like the book "Asset Pricing," you are seeing the result. He is also a great teacher and devoted mentor to generations of PhD students.

(This is a day late, because I thought I'd have to wait a few more years, so I didn't have a Hansen essay ready to go. Likewise Shiller, it will take a day or two. Thanks to Anonymous and Greg for reporting a typo in the equations.)

Update: I'm shutting down most comments on these posts. This week, let's congratulate the winners, and discuss issues again next week.

Previously Unknown Inside Traders in Heinz Settle SEC Charges

The Securities and Exchange Commission announced that two brothers in Brazil have agreed to pay nearly $5 million to settle charges that they were behind suspicious trading in call options for H.J. Heinz Company the day before the company publicly announced that it was being acquired.

The SEC filed an emergency enforcement action earlier this year to freeze assets in a Swiss-based trading account used to reap more than $1.8 million from trading in advance of the Heinz announcement.  The SEC’s immediate move the day after the announcement ensured the illicit profits could not be released out of the account while the investigation into the then-unknown traders continued.

In an amended complaint filed today in federal court in Manhattan, the SEC alleges that the order to purchase the Heinz options was placed by Rodrigo Terpins while he was vacationing at Walt Disney World in Orlando, and the trading was based on material non-public information that he received from his brother Michel Terpins.  The trades were made through an account belonging to a Cayman Islands-based entity named Alpine Swift that holds assets for one of their family members.  Rodrigo Terpins purchased nearly $90,000 in option positions in Heinz the day before the announcement, and those positions increased dramatically by nearly 2,000 percent the next day.

The Terpins brothers and Alpine Swift, which has been named as a relief defendant for the purposes of recovering ill-gotten gains, have agreed to disgorge the entire $1,809,857 in illegal profits made from trading Heinz options.  The Terpins brothers also will pay $3 million in penalties.  The settlement is subject to court approval.

For more detail, visit SEC.gov | Previously Unknown Insider Traders in Heinz Agree to $5 Million Settlement

Monday, October 14, 2013

Understanding Asset Prices

The Nobel Committee's "Understanding Asset Prices" "scientific background" paper for the Fama, Hansen, Shiller award is excellent. It is pretty much a self-contained graduate course in empirical finance.

Gene Fama's Nobel

(For a pdf version click here.)

Gene Fama’s Nobel Prize

Efficient Markets

Gene’s first really famous contributions came in the late 1960s and early 1970s under the general theme of “efficient markets.” “Efficient Capital Markets: A Review of Theory and Empirical Work” [15] is often cited as the central paper. (Numbers refer to Gene’s CV.)

“Efficiency” is not a pleasant adjective or a buzzword. Gene gave it a precise, testable meaning. Gene realized that financial markets are, at heart, markets for information. Markets are “informationally efficient” if market prices today summarize all available information about future values. Informational efficiency is a natural consequence of competition, relatively free entry, and low costs of information in financial markets. If there is a signal, not now incorporated in market prices, that future values will be high, competitive traders will buy on that signal. In doing so, they bid the price up, until the price fully reflects the available information.

Like all good theories, this idea sounds simple in such an overly simplified form. The greatness of Fama’s contribution does not lie in a complex “theory” (though the theory is, in fact, quite subtle and in itself a remarkable achievement.) Rather “efficient markets” became the organizing principle for 30 years of empirical work in financial economics. That empirical work taught us much about the world, and in turn affected the world deeply.

For example, a natural implication of market efficiency is that simple trading rules should not work, e.g. “buy when the market went up yesterday.” This is a testable proposition, and an army of financial economists (including Gene, [4], [5], [6]) checked it. The interesting empirical result is that trading rules, technical systems, market newsletters and so on have essentially no power beyond that of luck to forecast stock prices. It’s not a theorem, an axiom, or a philosophy, it’s an empirical prediction that could easily have come out the other way, and sometimes did.


Similarly, if markets are informationally efficient, the “fundamental analysis” performed by investment firms has no power to pick stocks, and professional active managers should do no better than monkeys with darts at picking stock portfolios. This is a remarkable proposition. In any other field of human endeavor, we expect seasoned professionals systematically to outperform amateurs. But other fields are not so ruthlessly competitive as financial markets! Many studies checked this proposition. It’s not easy. Among other problems, you only hear from the winners. The general conclusion is that markets are much closer to efficient here than anybody thought. Professional managers seem not to systematically outperform well-diversified passive investments. Again, it is a theory with genuine content. It could easily have come out the other way. In fact, a profession that earns its salary teaching MBA students could ask for no better result than to find that better knowledge and training lead to better investment management. Too bad the facts say otherwise.

If markets are informationally efficient, then corporate news such as an earnings announcement should be immediately reflected in stock prices, rather than setting in motion some dynamics as knowledge diffuses. The immense “event study” literature, following [12], evaluates this question, again largely in the affirmative. Much of the academic accounting literature is devoted to measuring the effect of corporate events by the associated stock price movements, using this methodology.

Perhaps the best way to illustrate the empirical content of the efficient markets hypothesis is to point out where it is false. Event studies of the release of inside information usually find large stock market reactions. Evidently, that information is not incorporated ex-ante into prices. Restrictions on insider trading are effective. When markets are not efficient, the tests verify the fact.

These are only a few examples. The financial world is full of novel claims, especially that there are easy ways to make money. Investigating each “anomaly” takes time, patience and sophisticated statistical skill; in particular to check whether the gains were not luck, and whether the complex systems do not generate good returns by implicitly taking on more risk. Most claims turn out not to violate efficiency after such study.

But whether “anomalies” are truly there or not is beside the point for now. For nearly 40 years, Gene Fama’s efficient market framework has provided the organizing principle for empirical financial economics. Random walk tests continue. For example, in the last few years, researchers have been investigating whether “neural nets” or artificial intelligence programs can forecast short-run stock returns, and a large body of research is dissecting the “momentum effect,” a clever way of exploiting very small autocorrelations in stock returns to generate economically significant profits. Tests of active managers continue. For example, a new round of studies is examining the abilities of fund managers, focusing on new ways of sorting the lucky from the skillful in past data. Hedge funds are under particular scrutiny as they can generate apparently good returns by hiding large risks in rare events. Event studies are as alive as ever. For example, a large literature is currently using event study methodology to debate whether the initial public offerings of the 1990s were “underpriced” initially, leading to first-day profits for insiders, and “overpriced” at the end of the first day, leading to inefficiently poor performance for the next six months. It’s hard to think of any other conceptual framework in economics that has proved so enduring.

Development and testing of asset pricing models and empirical methods

Financial economics is at heart about risk. You can get a higher return, in equilibrium, in an efficient market, but only if you shoulder more risk. But how do we measure risk? Once an investment strategy does seem to yield higher returns, how do we check whether these are simply compensation for greater risk?

Gene contributed centrally to the development of the theoretical asset pricing models such as the Capital Asset Pricing Model (CAPM) that measure the crucial components of risk ([9], [11], [13], [14], [16], [17], [20], [21], [26], [31], [75], [79]). [14] is a classic in particular, for showing how the CAPM could apply beyond the toy two-period model.

However, Gene’s greatest contribution is again empirical. “Risk, Return and Equilibrium: Empirical Tests” with James MacBeth [25] stands out. The Capital Asset Pricing model specifies that assets can earn higher returns if they have greater “beta” or covariance with the market portfolio. This paper convincingly verified this central prediction of the CAPM.

Its most important contribution, though, lies in methods. Checking the prediction of the CAPM is difficult. This paper [25] provided the “standard solution” for all of the statistical difficulties that survives to this day. For example, we now evaluate asset pricing theories on portfolios, sorted on the basis of some characteristic, rather than using individual stocks; we often use 5-year rolling regressions to estimate betas. Most of all, The Journal of Finance in 2008 is still full of “Fama-MacBeth regressions,” which elegantly surmount the statistical problem that returns are likely to be correlated across test assets, so N assets are not N independent observations. Gene's influence is so strong that even many of the arbitrary and slightly outdated parts of this procedure are faithfully followed today. What they lose in econometric purity, they gain by having become a well-tested and trusted standard.
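For readers who have never run one, here is a bare-bones sketch of the cross-sectional stage of a Fama-MacBeth regression on made-up data (an illustration of the two-step logic, with invented betas and returns, not the original paper's implementation):

```python
import numpy as np

# Made-up panel: T months of excess returns on N portfolios with pre-estimated market betas.
rng = np.random.default_rng(0)
T, N = 240, 25
betas = np.linspace(0.5, 1.5, N)                      # the characteristic from the first stage
mkt = 0.005 + 0.04 * rng.standard_normal(T)            # market excess return each month
rets = betas * mkt[:, None] + 0.03 * rng.standard_normal((T, N))   # portfolio excess returns

# Second stage: a cross-sectional regression of returns on betas every month, then average the slopes.
X = np.column_stack([np.ones(N), betas])
slopes = np.array([np.linalg.lstsq(X, rets[t], rcond=None)[0] for t in range(T)])   # (T, 2)

lam = slopes.mean(axis=0)                              # average intercept and beta premium
se = slopes.std(axis=0, ddof=1) / np.sqrt(T)           # Fama-MacBeth standard errors
print("average premium on beta: %.4f  (s.e. %.4f)" % (lam[1], se[1]))
```

Because the premium estimate is just a time-series average of monthly cross-sectional slopes, its standard error comes from the time-series variation of those slopes, which is what handles the correlation of returns across assets.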

"The adjustment of stock prices to new information" [12] is another example of Gene's immense contribution to methods. As I mentioned above, this paper, with over 400 citations, launched the entire event study literature. Again, actually checking stock price reactions to corporate events is not as straightforward as it sounds. Gene and his coauthors provided the "standard solution" to all of the empirical difficulties that survives to this day. Similarly, his papers on interest rates and inflation led the way on how to impose rational expectations ideas in empirical practice.

Simply organizing the data has been an important contribution. Gene was central to the foundation of the Center for Research in Security Prices, which provides the standard data on which all U.S. stock and bond research is done. The bond portfolios he developed with Robert Bliss are widely used. He instigated the development of a survivor-bias-free mutual fund database, and the new CRSP-Compustat link is becoming the standard for a new generation of corporate finance research, again led by Gene's latest efforts ([80], [83], [85], [87]).

This empirical aspect of Gene's contribution is unique. Gene did not invent fancy statistical "econometric techniques," and he is not a collector of easily observed facts. Gene developed empirical methods that surmounted difficult problems, and led a generation through the difficult practical details of empirical work. The best analogy is the controlled clinical trial in medicine. One would call that an empirical method, not a statistical theorem. Gene set out the empirical methods for finance, methods as central as the clinical trial is to medicine, empirical methods that last unquestioned to this day.

Predictable returns

Many economists would have rested on their laurels at this point, and simply waited for the inevitable call from the Nobel Prize committee. The above contributions are widely acknowledged as more than deserving in the financial and macroeconomics community. But Gene’s best and most important work (in my opinion) still lies ahead.

The efficient markets work of the 1960s and 1970s found that stock returns are not predictable ("random walks") at short horizons. But returns might well still be predictable at long horizons, if investors’ fear of risk varies over time. For example, in the depths of a recession few people may want to hold risky assets, as they are rightly worried about their jobs or the larger economic risks at these times. This quite rational fear lowers the demand for risky assets, pushing down their prices and pushing up subsequent returns. If this is true, we could predict good and bad returns in the stock market based on the state of the economy, even though the market is perfectly efficient (all information is reflected in current prices). This argument is obviously much more plausible at business cycle frequencies than at short horizons, which is why the early tests concentrated on short horizons. Gene’s next great contribution, in the 1980s, was to show how returns are predictable at long horizons.

Though the last paragraph makes it sound like an easy extension, I cannot begin to describe what a difficult intellectual leap this was for Gene as well as for the rest of the financial economics profession. Part of the difficulty lay in the hard won early success of simple efficient markets in its first 10 years. Time after time, someone would claim a system that could “beat the market” (predict returns) in one way or another, only to see the anomaly beat back by careful analysis. So the fact that returns really are predictable by certain variables at long horizons was very difficult to digest.

The early inklings of this set of facts came from Gene’s work on inflation ([30], [32], [35], [37], [39], [43], [44], [49]). Since stocks represent a real asset, they should be a good hedge for inflation. But stock returns in the high inflation of the 1970s were disappointing. Gene puzzled through this conundrum to realize that the times of high inflation were boom times of low risk premiums. But this means that risk premiums, and hence expected returns, must vary through time.

Gene followed this investigation in the 1980s with papers that cleanly showed how returns are predictable in stock ([55], [58], [59], [62]), bond ([50], [52], [57], [62], [64]), commodity ([56], [60]) and foreign exchange ([40], [51]) markets, many with his increasingly frequent coauthor Ken French. These papers are classics. They define the central facts that theorists of each market are working on to this day. None have been superseded by subsequent work, and these phenomena remain a focus of active research.

(I do not mean to slight the contributions of others, as I do not mean to slight the contribution of others to the first generation of efficient markets studies. Many other authors examined patterns of long horizon return predictability. This is a summary of Gene’s work, not a literature review, so I do not have space to mention them. But as with efficient markets, Gene was the integrator, the leader, the one who most clearly saw the overarching pattern in often complex and confusing empirical work, and the one who established and synthesized the facts beyond a doubt. Many others are often cited for the first finding that one or another variable can forecast returns, but Gene’s studies are invariably cited as the definitive synthesis.)

The central idea is that the level of prices can reveal time varying expected returns. If expected returns and risk premiums are high, this will drive prices down. But then the “low” price today is a good signal to the observer that returns will be high in the future. In this way stock prices relative to dividends or earnings predict stock returns; long-term bond prices relative to short-term bond prices predict bond returns; forward rates relative to spot rates predict bond and foreign exchange returns, and so forth. Low prices do not cause high returns any more than the weatherman causes it to snow.
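To make the mechanism concrete, here is a back-of-the-envelope version (a textbook Gordon-growth sketch, not anything from Gene's papers): with a constant expected return r and dividend growth g, the price of the dividend stream is

P_t = D_{t+1} / (r - g), so D_t / P_t = (r - g) / (1 + g).

A rise in the required return r pushes today's price down and the dividend yield up, so a high dividend yield forecasts high subsequent returns even if the market is perfectly efficient.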

This work shines for its insistence on an economic interpretation. Other authors have taken these facts as evidence for “fads” and “fashion” in financial markets. This is a plausible interpretation, but it is not a testable scientific hypothesis; a “fad” is a name for something you don’t understand. Gene’s view, as I have articulated here, is that predictable returns reflect time-varying risk premia related to changing economic conditions. This is a testable view, and Gene takes great pains to document empirically that the high returns come at times of great macroeconomic stress (see especially [60], [62]). This does not prove that return forecastability is not due to “fads,” any more than science can prove that lightning is really not caused by the anger of the Gods. But had it come out the other way, had times of predictably high returns not been closely associated with macroeconomic difficulties, Gene’s view would have been proven wrong. Again, this is scientific work in the best sense of the word.

The influence of these results is really only beginning to be felt. The work of my generation of theoretically inclined financial economists has centered on building explicit economic models of time-varying macroeconomic risk to explain Fama and French’s still unchallenged findings. Most of corporate finance still operates under the assumption that risk premia are constant over time. Classic issues such as the optimal debt/equity ratio or incentive compensation change dramatically if risk premia, rather than changing expectations of future profits, drive much price variation. Most of the theory of investment still pretends that interest rates, rather than risk premia, are the volatile component of the cost of capital. Portfolio theory is only beginning to adapt. If expected returns rise in a recession, should you invest more to take advantage of the high returns? How much? Or are you subject to the same additional risk that is, rationally, keeping everyone else from doing so? Macroeconomics and growth theory, in the habit of considering models without risk, or first order approximations to such models in which risk premia are constant and small, are only beginning to digest the fact that risk premia are much larger than interest rates, let alone that these risk premia vary dramatically over time.

In these and many other ways, the fact that the vast majority of stock market fluctuation comes from changing expected returns rather than changing expectations of future profits, dividends, or earnings, will fundamentally change the way we do everything in financial economics.

The cross section, again

We are not done. A contribution as great as any of these, and perhaps greater still, lies ahead.

If low prices relative to some multiple (dividends, earnings, book value) signal times of high stock returns, perhaps low prices relative to some multiple signal stocks with high risks and hence high returns. In the early 1990s, Gene, with Ken French, started to explore this idea.

The claim that “value stocks” purchased at low prices would yield higher returns over the long run than other stocks was an old one. This claim, if true, was not necessarily revolutionary. The Capital Asset Pricing Model allows some asset classes to have higher average returns if they have higher risk, measured by comovement with the market return, or “beta.” So, if the value effect is not a statistical anomaly, it could easily be consistent with existing theory, as so many similar effects had been explained in the past. And it would be perfectly sensible to suppose that “value” stocks, out of favor, in depressed industries, with declining sales, would be extra sensitive to declines in the market as a whole, i.e., have higher betas. The “value premium” should be an interesting, but not unusual, anomaly to chase down in the standard efficient markets framework.

Given these facts, Gene and Ken’s finding in “The Cross Section of Expected Stock Returns” [68] was a bombshell. The higher returns to “value stocks” were there all right, but CAPM betas did nothing to account for them! In fact, they went the wrong way -- value stocks have lower market betas. This was an event in Financial Economics comparable to the Michelson-Morley experiment in Physics, showing that the speed of light is the same for all observers. And the same Gene who established the cross-sectional validity of the Capital Asset Pricing Model for many asset classifications in the 1970s was the one to destroy that model convincingly in the early 1990s when confronted with the value effect.

But all is not chaos. Just as asset pricing theory had long recognized the possibility of time-varying risk premia and predictable returns, it had also recognized, since the early 1970s, the possibility of “multiple factors” to explain the cross section. Both possibilities are clearly reflected in Gene's 1970 essay. It remained to find them. Though several “multiple factor” models had been tried, none had really caught on. In a series of papers with Ken French ([72], [73], [78], and especially [74]), Gene established the “three-factor model” that does successfully account for the “value effect.”

The key observation is that “value stocks” -- those with low prices relative to book value -- tend to move together. Thus, buying a portfolio of such stocks does not give one a riskless profit. It merely moves the times at which one bears risk from a time when the market as a whole declines, to a time when value stocks as a group decline. The core idea remains: one only gains expected return by bearing some sort of risk. The nuance is that other kinds of risk beyond the market return keep investors away from otherwise attractive investments.
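For readers who want the equation, the model in its usual regression form (standard notation, not a quotation from the papers) is

R_it - R_ft = a_i + b_i (R_mt - R_ft) + s_i SMB_t + h_i HML_t + e_it,

where SMB is the return of small stocks minus big stocks and HML is the return of high book-to-market (“value”) stocks minus low book-to-market (“growth”) stocks. The model’s claim is that the intercept a_i is zero once these three exposures are accounted for.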

Since it is new, the three-factor model is still the object of intense scrutiny. What are the macroeconomic foundations of the three factors? Are there additional factors? Do the three factors stand in for a CAPM with time-varying coefficients? Once again, Gene’s work is defining the problem for a generation.

Though literally hundreds of multiple-factor models have been published, the Fama-French three-factor model has quickly become the standard basis for comparison of new models and for risk adjustment in practice, and it is the summary of the facts that the current generation of theorists aims at. It has replaced the CAPM as the baseline model. Any researcher chasing down an anomaly today first checks whether the anomalously high returns are real, and then checks whether they are consistent with the CAPM and the Fama-French three-factor model. No other asset pricing model enjoys this status.
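As a concrete sketch of what that risk-adjustment step looks like (simulated data and made-up magnitudes, purely to illustrate the regression, not anyone's actual results): regress the candidate anomaly's excess returns on the market, size, and value factors and ask whether the intercept, the "alpha," survives.

```python
# A toy illustration of risk-adjusting an "anomaly" with the CAPM and with the
# Fama-French three-factor model. All numbers are simulated; in real work the
# factor returns would come from a source such as Ken French's data library.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 600  # months of simulated data

mkt = rng.normal(0.005, 0.045, T)   # market excess return
smb = rng.normal(0.002, 0.030, T)   # small minus big
hml = rng.normal(0.004, 0.030, T)   # value minus growth

# A hypothetical "anomaly" portfolio: in truth just market plus value exposure
anomaly = 0.9 * mkt + 0.8 * hml + rng.normal(0.0, 0.02, T)

def alpha_test(y, factors, label):
    X = sm.add_constant(np.column_stack(factors))
    res = sm.OLS(y, X).fit()
    print(f"{label}: alpha = {res.params[0]:.4f} (t = {res.tvalues[0]:.2f})")

alpha_test(anomaly, [mkt], "CAPM")            # leaves a positive "alpha" (the value premium)
alpha_test(anomaly, [mkt, smb, hml], "FF3")   # the alpha should shrink toward zero
```

The point of the exercise is the comparison: an average return that looks like free money relative to the CAPM may just be compensation for value or size risk.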

Additional contributions

Gene has made fundamental contributions in many other areas. His early work on the statistical character of stock returns, especially the surprisingly large chance of large movements, remains a central part of our understanding ([1], [2], [3], [4]). He has made central contributions to corporate finance, both its theory ([24], [36], [38], [42], [46], [47], [54], [63], [75]) and empirical findings ([10], [29], [80], [83], [85], [86], [87]). Some of the latter begin the important work of integrating predictable returns and new risk factors into corporate finance, which will have a major impact on that field. These are as central as his contributions to asset pricing that I have surveyed here; I omit them only because I am not an expert in the field. He has also made central contributions to macroeconomics and the theory of money and banking ([40], [41], [48], [49], [53], [70]).

The case for a Prize

I have attempted to survey the main contributions that must be mentioned in a Nobel Prize; any of these alone would be sufficient. Together they are overwhelming. Of course, Gene leads most objective indicators of influence. For example, he is routinely at or near the top of citation studies in economics, as well as in financial economics.

The character of Gene’s work is especially deserving of recognition by a Nobel Prize, for a variety of reasons.

Empirical Character. Many economists are nominated for Nobel prizes for influential theories, ideas that other economic theorists have played with, or theories that seem to have potential for understanding actual phenomena in the future. Gene’s greatness is empirical. He is the economist who has taught us more about how the actual financial world works than any other. His ideas and organizing framework guided a generation of empirical researchers. He established the stylized facts that 30 years of theorists puzzle over. Gene’s work is scientific in the best sense of the word. You don’t ask of Gene, “what’s your theory?” You ask, “what’s your fact?” Finance today represents an interplay of fact and theory unparalleled in the social sciences, and that is largely due to Gene’s influence.

Ideas are Alive. Gene’s ideas are alive, and his contributions define our central understanding of financial markets today. His characterizations of time-varying bond, stock, and commodity returns, and the three-factor model capturing value and size effects, remain the baseline for work today. His characterization of predictable foreign exchange returns from the early 1980s is still one of the 2 or 3 puzzles that define international finance research. The critics still spend their time attacking Gene Fama. For example, researchers in the “behavioral finance” tradition are using evidence from psychology to give some testable content to an alternative to Gene’s efficient market ideas, and to rebut caustic comments like mine above about “fads.” This is remarkable vitality. Few other ideas from the early 1970s, including ideas that won well-deserved Nobel prizes, remain areas of active research (including criticism) today.

Of course, some will say that the latest crash "proves" markets aren't "efficient." This attitude only expresses ignorance. Once you understand the definition of efficiency and the nature of its tests, as made clear by Gene 40 years ago, you see that the latest crash no more "proves" lack of efficiency than did the crash of 1987, the great slide of 1974, the crash of 1929, the panic of 1907, or the Dutch Tulip crisis. Gene's work, and that of all of us in academic finance, is about serious quantitative scientific testing of explicit economic models, not armchair debates over anecdotes. The heart of efficient markets is the statement that you cannot earn outsize returns without taking on “systematic” risk. Given the large average returns of the stock market, it would be inefficient if it did not crash occasionally.

Practical importance. Gene’s work has had profound influence on the financial markets in which we all participate.

For example, in the 1960s, passively managed mutual funds and index funds were unknown. It was taken for granted that active management (constant buying and selling, identifying "good stocks" and dumping "bad stocks") was vital for any sensible investor. Now all of us can invest in passive, low-cost index funds, gaining the benefits of wide diversification available in the past only to the super rich (and the few super-wise among them). In turn, these vehicles have spurred the large increase in stock market participation of the last 20 years, opening up huge funds for investment and growth. Even proposals to open social security systems to stock market investment depend crucially on the development of passive investing. The recognition that markets are largely “efficient,” in Gene’s precise sense, was crucial to this transformation.

Unhappy investors who lost a lot of money to hedge funds, dot-coms, bank stocks, or mortgage-backed securities can console themselves that they should have listened to Gene Fama, who all along championed the empirical evidence – not the “theory” – that markets are remarkably efficient, so they might as well have held a diversified index.

Gene's concepts, such as "efficiency" or the idea that value and growth define the interesting cross section of stock returns, are not universally accepted in practice, of course. But they are widely acknowledged as the benchmark. Where an active manager 40 years ago could just say "of course," now he or she needs to confront the overwhelming empirical evidence that most active managers do not do well. Less than 10 years after Fama and French's small/large and value/growth work was first published, mutual fund companies routinely categorize their products along these dimensions. (See www.vanguard.com for example.)

Influence in the field. Finally, Gene has had a personal influence in the field that reaches beyond his published work. Most of the founding generation of finance researchers got their Ph.D.'s under Gene Fama, and his leadership has contributed centrally to making the Booth School at the University of Chicago such a superb institution for developing ideas about financial economics.

Fama, Hansen, and Shiller Nobel

Gene Fama, Lars Hansen and Bob Shiller win the Nobel Prize. Congratulations! (Minor complaint: Nobel committee, haven't you heard of Google? There are lots of nice Gene Fama photographs lying around. What's with the bad cartoon?)

I'll write more about each in the coming days. I've spent most of my professional life following in their footsteps, so at least I think I understand what they did better than I would for the typical prize.

As a start, here is an introduction I wrote for Gene Fama’s talk, “The History of the Theory and Evidence on the Efficient Markets Hypothesis,” given for the AFA history project. There is a link to this document on my webpage here. The video version is here at IGM.

Introduction for Gene Fama

On behalf of the American Finance Association and the University of Chicago Graduate School of Business, it is an honor and a pleasure to introduce Gene Fama. This talk is being videotaped for the AFA history project, so we speak for the ages.

Gene will tell us how the efficient-markets hypothesis developed. I’d like to say a few words about why it’s so important. This may not be obvious to young people in the audience, and Gene will be too modest to say much about it.

“Market efficiency” means that asset prices incorporate available information about values. It does not mean that orders are “efficiently” processed, that prices “efficiently” allocate resources, or any of the other nice meanings of “efficiency.” Why should prices reflect information? Because of competition and free entry. If we could easily predict that stock prices will rise tomorrow, we would all try to buy today. Prices would rise today until they reflect our information.


This seems like a pretty simple “theory,” hardly worth all the fuss. Perhaps you expect general relativity, lots of impenetrable equations. Gene is more like Darwin, and the efficient markets hypothesis is more like evolution. Both evolution and efficient markets are elegant, simple, and powerful ideas that organized and energized vast empirical projects, and that’s the true measure of any theory. Without evolution, natural history would just be a collection of curious facts about plants and animals. Without the efficient markets hypothesis, empirical finance would just be a collection of Wall-Street anecdotes, how-I-got-rich stories, and technical-trading newssheets.

Efficient-market theory and empirical work are also a much deeper intellectual achievement than my little story suggests. There are plenty of hard equations. It took nearly a century to figure out the basic prediction of an efficient market, from Bachelier’s random walk to the consumption Euler equation (price equals conditionally expected value, discounted by marginal utility growth). It took hard work and great insight to account for risk premiums, selection biases, reverse causality, and endogenous variables, and to develop the associated statistical procedures.
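To spell out that last phrase (standard notation, not a quotation): the consumption-based version of the prediction is

p_t = E_t[ β u'(c_{t+1}) / u'(c_t) x_{t+1} ],

the price today equals the expected payoff x_{t+1}, discounted by marginal utility growth. The random-walk intuition is the special case in which that discount factor is roughly constant, so that expected returns are constant and nothing forecasts returns beyond that constant.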

Efficient-markets empirical work doesn’t check off easy “predictions.” It typically tackles tough anomalies, each of which looks superficially like a glaring violation of efficiency, and each endorsed by a cheering crowd of rich (or perhaps lucky?) traders. It’s not obvious that what looks like an inefficiently low price is really a hidden exposure to systematic risk. It took genius to sort through the mountains of charts and graphs that computers can spit out, to see the basic clear picture.

Efficient-market predictions can be beautifully subtle and unexpected. One example: In an efficient market, expert portfolio managers should do no better than monkeys throwing darts. That’s a remarkable prediction. Experts are better than amateurs in every other field of human endeavor: Tiger Woods will beat you at golf; you should hire a good house painter and a better tax lawyer. The prediction is even more remarkable for how well it describes the world, after we do a mountain of careful empirical work.

That empirical work consists, fundamentally, of applying the scientific method to financial markets. Modern medicine doesn’t ask old people for their health secrets. It does double-blind clinical trials. To this, we owe our ability to cure many diseases. Modern empirical finance doesn’t ask Warren Buffett to share his pearls of investment wisdom. We study a survivor-bias-free sample of funds sorted on some ex-ante visible characteristic, to separate skill from luck, and we correct for exposure to systematic risk. To this we owe our wisdom, and maybe, as a society, a lot of wealth as well.
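A toy version of that research design, with purely simulated funds and no skill built in, shows the logic: sort on past performance and see whether the winners keep winning (they shouldn't, here, because the "winners" were only lucky).

```python
# Toy illustration of separating skill from luck. Every simulated fund has zero
# true alpha, so any "hot hand" in the first half of the sample is pure luck and
# should vanish in the second half.
import numpy as np

rng = np.random.default_rng(1)
n_funds, n_months = 2000, 240

returns = rng.normal(0.005, 0.04, size=(n_funds, n_months))  # no skill anywhere

first_half = returns[:, :n_months // 2].mean(axis=1)
second_half = returns[:, n_months // 2:].mean(axis=1)

winners = first_half >= np.quantile(first_half, 0.9)   # top decile, ex post
losers = first_half <= np.quantile(first_half, 0.1)    # bottom decile, ex post

print(f"Winners, 1st half: {first_half[winners].mean():.3%} per month")
print(f"Winners, 2nd half: {second_half[winners].mean():.3%} per month")
print(f"Losers,  2nd half: {second_half[losers].mean():.3%} per month")
# With no true skill, both groups revert to roughly 0.5% per month in the second half.
```

Real studies do the same thing with actual, survivor-bias-free fund data, and measure performance as alpha relative to a factor model rather than as a raw mean.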

This point is especially important now, in a period of great financial turbulence. It’s easy to look at the latest market gyration and opine, “Surely markets aren’t efficient.” But that’s not how we learn anything of lasting usefulness. Efficient markets taught us to evaluate theories by their rejectable predictions and by the numbers; to do real, scientific, empirical work, not to read newspapers and tell stories.

Efficient markets are also important to the world at large, in ways that I can only begin to touch on here. The assurance that market prices are in some sense basically “right” lies behind many of the enormous changes we have seen in the financial and related worlds, from index funds, which have allowed for wide sharing of the risks and rewards of the stock market, to mark-to-market accounting, quantitative portfolio evaluation and benchmarking, and modern risk management.

With 40 years’ hindsight, are markets efficient? Not always, and Gene said so in 1970. For example, prices rise on the release of inside information, so that information, though known by someone, was not reflected in the original price. More recently, I think we have seen evidence that short-sales constraints and other frictions can lead to informationally-inefficient prices.

This is great news. Only a theory that can be proved wrong has any content at all. Theories that can “explain” anything are as useless as “prices went down because the Gods are angry.”

Gene went on, arguing that no market is ever perfectly efficient, since no market is perfectly competitive and frictionless. The empirical question has always been to what degree a given phenomenon approaches an unattainable ideal.

Still, the answer today is much closer to “yes” than to “no” in the vast majority of serious empirical investigations. It certainly is a lot closer to “yes” than anyone expected in the 1960s, or than the vast majority of practitioners believe today. There are strange fish in the water, but even the most troublesome are surprisingly small fry. And having conquered 157 anomalies with patient hard work, many of us can be excused for suspecting that just a little more work will make sense of the 158th.

However, empirical finance is no longer really devoted to “debating efficient markets,” any more than modern biology debates evolution. We have moved on to other things. I think of most current research as exploring the amazing variety and subtle economics of risk premiums – focusing on the “joint hypothesis” rather than the “informational efficiency” part of Gene’s 1970 essay.

This is also great news. Healthy fields settle debates with evidence and move on to new discoveries. But don’t conclude that efficient markets are passé. As evolution lies quietly behind the explosion in modern genetics, markets that are broadly efficient, in which prices quickly reflect information, quietly underlie all the interesting things we do today. This is the best fate any theory can aspire to.

Gene will talk about the history of efficient markets. People expect the wrong things of history, just as they expect overly complex “theory.” No lone genius ever thought up a “hypothesis,” went out to “test” it, and convinced the world with his 2.1 t-statistic. Theory and empirical work develop together, ideas bounce back and forth between many people, the list of salient vs. unimportant facts shifts, and evidence, argument, and, alas, age gradually change people’s minds. This is how efficient markets developed too, as Gene has always graciously acknowledged. Gene’s two essays describe the ideas, but much less of this process. It was an amazing adventure, and historians of science should love this story. Ladies and gentlemen, please welcome Gene Fama to tell us about it.

Friday, October 11, 2013

Friday Art Fun

Totally off topic. It's Friday, time to relax.

Source: Nina Katchadourian

15th Century Flemish Style Portraits Recreated In Airplane Lavatory. Click the link for the full set.

From the artist:
While in the lavatory on a domestic flight in March 2010, I spontaneously put a tissue paper toilet seat cover over my head and took a picture in the mirror using my cellphone. The image evoked 15th-century Flemish portraiture. <…> I made several forays to the bathroom from my aisle seat, and by the time we landed I had a large group of new photographs entitled Lavatory Self-Portraits in the Flemish Style.
From the art critic (Sally Cochrane):
What no one's saying, though, is that she was hogging the bathroom while a line of antsy people held their bladders! 
In related art news, the street artist Banksy is prowling New York. A group of Brooklyn locals, seeing people coming in to photograph one of his stencils, promptly covered it with cardboard and started charging $5 per shot. Entrepreneurship and property rights are still alive.