Tuesday, July 18, 2017

17/07/17: Debt Relief v Payments Relief: A Lesson Ireland Should Have Learned


An interesting study looked into two sets of debt relief measures:

  1. Immediate payment reductions to target short-run liquidity constraints and 
  2. Delayed debt write-downs to target long-run debt constraints.

It is worth noting that the first measure was roughly similar to the majority of 'sustainable debt resolution' measures introduced in Ireland (e.g. temporary relief on payments, split mortgages, etc.) that temporarily delay repayments at the full rate. Even worse, in the Irish case, policy instruments that delay repayments are generally associated with a roll-up of unpaid debt and, in some cases, with interest charged on the unpaid debt, thus increasing the life-cycle level of indebtedness.

The second set of measures used in the NBER study is broadly consistent with debt forgiveness, where actual debt reduction took place at both the principal and interest levels.

So what did the NBER study find?

"We find that the debt write-downs significantly improved both financial and labor market outcomes despite not taking effect for three to five years. In sharp contrast, there were no positive effects of the more immediate payment reductions. These results run counter to the widespread view that financial distress is largely the result of short-run constraints."

In other words, the empirical evidence supports debt relief, as opposed to temporary payment reductions. Irish banks and authorities, in continuing to insist on their preference for temporary relief measures, are driven by pure self-interest - protecting the banks' balance sheets - not by a desire to deliver a common good, such as the speedier recovery of heavily indebted households. 

Specifically, for debt relief: "For the highest-debt borrowers, the median debt write-down in the treatment group increased the probability of finishing a repayment program by 1.62 percentage points (11.89 percent) and decreased the probability of filing for bankruptcy by 1.33 percentage points (9.36 percent). The probability of having collections debt also decreased by 1.25 percentage points (3.19 percent) for these high-debt borrowers, while the probability of being employed increased by 1.66 percentage points (2.12 percent). The estimated effects of the debt write-downs for credit scores, earnings, and 401k contributions are smaller and not statistically significant. Taken together, however, our results indicate that there are significant benefits of debt relief targeting long-run debt overhang in our setting".

For repayment relief: "we find no positive effects of the minimum payment reductions targeting short-run liquidity constraints. There was no discernible effect of the payment reductions on completing the repayment program... The median payment reduction in the treatment group also increased the probability of filing for bankruptcy in this sample by a statistically insignificant 0.70 percentage points (6.76 percent) and increased the probability of having collections debt by a statistically significant 1.40 percentage points (3.56 percent). There are also no detectable positive effects of the payment reductions on credit scores, employment, earnings, or 401k contributions. In sum, there is no evidence that borrowers in our sample benefited from the minimum payment reductions, and even some evidence that borrowers seem to have been hurt by these reductions."

Why did payment relief not work? "The payments reductions increased the length of the repayment program in the treatment group by an average of four months and, as a result, increased the number of months where a treated borrower could be hit by an adverse shock that causes default (e.g., job loss)."

Now, imagine the Irish authorities arguing that no such shocks can impact over-indebted households over the 10-20 years that repayment relief schemes, such as split mortgages or temporarily reduced repayments, are designed to operate. 
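
To see why stretching out a repayment schedule matters, here is a minimal back-of-the-envelope sketch in Python. The monthly shock probability and the programme lengths are purely illustrative assumptions of mine, not figures from the study: the point is simply that the longer a programme runs, the higher the chance that at least one adverse shock, such as a job loss, hits before it completes.

```python
# Back-of-the-envelope sketch: probability of at least one adverse shock
# (e.g. job loss) during a repayment programme of a given length, assuming
# a constant, independent monthly shock probability. The 1% figure and the
# programme lengths are illustrative assumptions, not estimates from the study.

def prob_at_least_one_shock(p_monthly: float, months: int) -> float:
    """Probability of experiencing at least one shock over `months` months."""
    return 1.0 - (1.0 - p_monthly) ** months

p = 0.01  # assumed 1% chance of an adverse shock in any given month

for months in (48, 52, 120, 240):  # 4-year programme, +4 months, 10 years, 20 years
    print(f"{months:3d} months: {prob_at_least_one_shock(p, months):.1%}")
```

Even under this simple assumption, the risk of being hit by a shock rises mechanically with the length of the scheme, which is precisely the channel the NBER authors identify.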

17/7/17: New Study Confirms Parts of Secular Stagnation Thesis


For some years I have been writing about the phenomenon of the twin secular stagnations (see here: http://trueeconomics.blogspot.com/2015/07/7615-secular-stagnation-double-threat.html). And for just as long as I have been writing about it, there have been analysts disputing the view that the U.S. (and global) economy is in the midst of a structural growth slowdown.

A recent NBER paper (see here http://www.nber.org/papers/w23543) clearly confirms several sub-theses of the twin secular stagnations hypothesis, namely that the current slowdown is

  1. Non-cyclical (it extends back to before the Global Financial Crisis);
  2. Attributable to "the slow growth of total factor productivity"; and
  3. Attributable to "the decline in labor force participation".

Wednesday, June 28, 2017

28/6/17: Tech Financing and NASDAQ: Divorce Proceedings Afoot?

Based on recent data from Kleiner Perkins, there was a substantial inflection point around 2015 in the relationship between NASDAQ index valuations and tech IPOs, a shift that continued into the 2016-2017 period.

Over the period 2009-2014, the positive correlation between the NASDAQ and global technology IPOs and PE/VC funding held with clear regularity. Starting in 2015, this relationship turned negative. Which means one pesky thing when it comes to the real economy: the great engine of enterprise innovation (smaller, earlier-stage companies gaining sunlight), as opposed to patenting by behemoths (larger legacy corporations blocking off the sunlight with marginal R&D), is not exactly in rude health.
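
For readers who want to run this kind of check on their own data, here is a minimal sketch in Python. The series are made up for illustration - they are not the Kleiner Perkins figures - and the exercise simply shows how a sign flip in the correlation around a suspected inflection year can be detected.

```python
# Illustrative sketch (made-up series, not the Kleiner Perkins data): checking
# whether the correlation between an index and a funding series flips sign
# around a suspected inflection year.
import numpy as np

years   = np.arange(2009, 2018)
nasdaq  = np.array([1.8, 2.2, 2.6, 2.9, 3.6, 4.3, 4.9, 5.2, 6.1])  # hypothetical index levels
funding = np.array([1.0, 1.4, 1.9, 2.3, 2.8, 3.4, 3.1, 2.7, 2.4])  # hypothetical tech IPO + PE/VC funding

pre, post = years <= 2014, years >= 2015
print("2009-2014 correlation:", round(np.corrcoef(nasdaq[pre],  funding[pre])[0, 1], 2))
print("2015-2017 correlation:", round(np.corrcoef(nasdaq[post], funding[post])[0, 1], 2))
```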

28/6/17: Seattle's Minimum Wage Lessons for California


Two states and Washington DC are raising their minimum wages come July 1, with Washington DC’s minimum wage rising to $12.50 per hour, the highest state-wide minimum wage level in the U.S. This development comes after 19 states raised their minimum wages since January 1, 2017. In addition, New York and Oregon are now using geographically-determined minimum wages, with urban residents and workers receiving higher minimum wages than rural workers.

Still, one of the most ambitious minimum wage laws currently on the books is that of California. For now, California’s minimum wage (for employers with 26 or more workers) is set at $10.50 per hour (a rise of $0.50 per hour on 2016), which puts California in fourth place in the U.S. in terms of State-mandated minimum wages. It will increase automatically to $11.00 come January 1, 2018. Thereafter, the minimum wage is set to rise by $1.00 per annum into 2022, reaching $15.00. From 2023 on, the minimum wage will be indexed to inflation. Smaller employers (with 25 or fewer employees) will have an extra year to reach the $15.00 nominal minimum wage marker, from the current (2017) minimum wage level of $10.00 per hour. All in, in theory, a current minimum wage employee working full time will earn $21,840 per annum, and this will rise (again in theory) by $1,040 per annum in 2018. So, again, in theory, nominal earnings for a full-time minimum wage employee will reach $31,200 in 2022.
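
The annual figures above follow from the standard 2,080-hour full-time year (40 hours x 52 weeks). A quick sketch of the arithmetic, using the large-employer schedule described above (the 2019-2021 steps are implied by the $1.00 per annum increases):

```python
# Quick check of the full-time annual earnings figures quoted above, using the
# standard 2,080-hour year (40 hours x 52 weeks) and the large-employer schedule.
FULL_TIME_HOURS = 40 * 52  # 2,080 hours per year

schedule = {2017: 10.50, 2018: 11.00, 2019: 12.00, 2020: 13.00, 2021: 14.00, 2022: 15.00}

for year, hourly in schedule.items():
    print(year, f"${hourly * FULL_TIME_HOURS:,.0f} per annum")
# 2017: $21,840; 2018: $22,880 (a $1,040 rise); ...; 2022: $31,200
```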


In cities like San Francisco and Los Angeles, local minimum wages are even higher. San Francisco is planning to raise its minimum wage to $15.00 per hour in 2018, while Los Angeles is targeting the same level in 2020. This means that in 2018, San Francisco minimum wage workers will be $8,320 per annum better off than State minimum wage earners, and Los Angeles minimum wage earnings will be $4,160 above the State level in 2020.

The UC Berkeley research centre for labor economics, http://laborcenter.berkeley.edu/15-minimum-wage-in-california/, does some number crunching on the distributional impact of California’s minimum wages. Except, really, it doesn’t. Why?

Because the problem with minimum wage impact estimates is that they ignore a range of other factors, such as the impact of minimum wage hikes on substitution away from labor into capital (including technological capital), the impact of jobs offshoring, and so on. While economists can imperfectly control for these factors, it is impossible to know with certainty how specific moves in minimum wages will affect incentives for companies to increase the capital intensity of their operations, change the skills mix of their employees, alter future growth and product development plans, etc.

What we do have, however, is historical evidence to go by. And that evidence is a moving target. In particular, it is a moving target because, as minimum wages continue to increase, at some point (we call these inflection points) the past historical relationships between wages and hours worked, wages and technological investments, wages and R&D, and so on, change as well.

Take the most recent example of Seattle.

In 2016, Seattle raised its $11.00 per hour minimum wage to $13.00 per hour, the highest in the U.S. Subsequent protests demanded an increase to $15.00 per hour in 2017. However, research by economists at the University of Washington shows that the wage hike could have:
1) Triggered steep declines in employment for low-wage workers, and
2) Resulted in a drop in paid hours of work for workers who kept their jobs.

Overall, these negative impacts have more than cancelled out the benefits of higher wages, so that, on average, low-wage workers now earn $125 per month less than before the minimum wage was hiked in January 2016. In simple terms, instead of rising by $4,160 per annum, minimum wage earners’ wages fell by $1,500 per annum, an adverse swing in earnings of $5,660. Given that the current minimum wage, in theory, delivers $27,040 per annum in full-time wages, this is hardly an insignificant number. For details of the study, see https://evans.uw.edu/sites/default/files/NBER%20Working%20Paper.pdf.
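
A short sketch of the arithmetic behind these figures, on a full-time (2,080-hour) basis. The $125 per month loss is the University of Washington estimate quoted above; everything else follows mechanically from the $11.00 to $13.00 hike.

```python
# Sketch of the earnings arithmetic above (full-time, 2,080-hour basis).
# The $125/month loss is the University of Washington estimate; the rest
# follows mechanically from the $11.00 -> $13.00 hike.
FULL_TIME_HOURS = 40 * 52                              # 2,080 hours per year

expected_gain  = (13.00 - 11.00) * FULL_TIME_HOURS     # $4,160 theoretical annual gain
actual_change  = -125 * 12                             # -$1,500 estimated annual loss
adverse_swing  = expected_gain - actual_change         # $5,660 swing between theory and outcome
full_time_wage = 13.00 * FULL_TIME_HOURS               # $27,040 theoretical full-time earnings

print(expected_gain, actual_change, adverse_swing, full_time_wage)
```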

The really worrying matter is that the empirical estimates presented in the University of Washington study do not cover longer-term potential impacts from capital deepening and technological displacement of minimum wage jobs, because, put simply, not enough time has elapsed since the latest minimum wage hike. Another worrying matter is that, like the majority of studies before it, the Washington study does not directly control for the effects of Seattle’s booming local economy on minimum wage impacts: with Seattle facing a general unemployment rate of 3.2 percent, the adverse impacts of the latest hike in the minimum wage may be underestimated due to the tightness of the labor market.

Now, consider the recent past: in her Presidential bid, Hillary Clinton advocated a federal minimum wage hike to $12.00 per hour from $7.25 per hour. That was hardly enough for a large number of social activists, who pushed for even higher hikes. This tendency amongst activists - to pave the road to hell with good intentions, while using someone else’s money and work prospects - is quite problematic. Econometric analysis of minimum wage effects is highly ambiguous, and the expected impacts of minimum wage hikes are highly uncertain ex ante. This ambiguity and uncertainty adversely impacts not only employers, including smaller businesses, but also employees, including those on minimum wages. It also impacts prospective minimum wage employees who, as the Seattle evidence suggests, might face lower prospects of gaining a job. More worryingly, the parts of the minimum wage literature that show modest positive impacts from minimum wage hikes are based on data for minimum wage increases from low to moderate levels, not from high to extremely high levels, as is the case with Seattle, San Francisco, Los Angeles and other cities.

That point seems to be well reflected in the latest study from the University of Washington. In fact, the June 2017 paper’s results stand in clear contrast to the 2016 study, which showed that the April 2015 hike in Seattle’s minimum wage from $9.47 per hour to $11.00 per hour was basically neutral in terms of its impact on wages. Losses to those workers who ended up without a job after the hike were offset by gains for those workers who kept their employment. In effect, the April 2015 hike was a transfer of money from job-losing workers to job keepers.

In a separate study, from the UC Berkeley labor economics center, http://irle.berkeley.edu/seattles-minimum-wage-experience-2015-16/, the researchers found that Seattle’s minimum wage hikes were actually effective in boosting the incomes of minimum wage workers, albeit only in one sector, the food industry, and with results established on a cumulative basis for the 2009-2016 period. In addition, the University of Washington study used higher-quality, more detailed and directly comparable data on minimum wage earners than the UC Berkeley study. However, on the opposite side of the argument, the University of Washington study excluded multi-location enterprises, e.g. fast food companies, which are often large-scale employers of minimum wage workers. The UC Berkeley study is quite bizarre, to be honest, insofar as it focuses on one sector, while the University of Washington study clearly suggests that wider data are available.

In other words, the UC Berkeley study does not quite contradict or negate the University of Washington study, although it highlights the complexity of analysing minimum wage impacts.


PS: This lifts the veil of strangeness from the UC Berkeley study: http://www.zerohedge.com/news/2017-06-28/fake-research-seattle-mayor-knew-critical-min-wage-study-was-coming-so-he-called-ber. It turns out the UC Berkeley study was a commissioned hit, financed by the office of the Mayor of Seattle to pre-empt the forthcoming University of Washington study. Worse, the Berkeley team was provided by the Mayor of Seattle with a pre-release draft of the UofW paper. This is, at best, unethical for both the Mayor's office and the UC Berkeley team.

Tuesday, June 27, 2017

27/6/17: Millennials’ Support for Liberal Democracy is Failing


A new paper is now available at SSRN: "Millennials’ Support for Liberal Democracy is Failing. An Investor Perspective" (June 27, 2017): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2993535.


Recent evidence shows a worrying trend of declining popular support for the traditional liberal democracy across a range of Western societies. This decline is more pronounced for the younger cohorts of voters. The prevalent theories in political science link this phenomenon to a rise in volatility of political and electoral outcomes either induced by the challenges from outside (e.g. Russia and China) or as the result of the aftermath of the recent crises. These views miss a major point: the key drivers for the younger generations’ skepticism toward the liberal democratic values are domestic intergenerational political and socio-economic imbalances that engender the environment of deep (Knightian-like) uncertainty. This distinction – between the volatility/risk framework and deep uncertainty – is non-trivial for two reasons: (1) policy and institutional responses to volatility/risk are inconsistent with those necessary to address rising deep uncertainty and may even exacerbate the negative fallout from the ongoing pressures on liberal democratic institutions; and (2) investors cannot rely on traditional risk management approaches to mitigate the effects of deep uncertainty. The risk/volatility framework view of the current political trends can result in amplification of the potential systemic shocks to the markets and to investors through both of these factors simultaneously. Despite touching on a much broader set of issues, this note concludes with a focus on investment strategy that can mitigate the rise of deep political uncertainty for investors.


Thursday, June 22, 2017

22/6/17: Efficient Markets for H-bomb Fuel - 1954


For all the detractors of the EMH - the Efficient Markets Hypothesis - and for all its fans, as well as for any fan of economic history, this paper is a must-read: http://www.terry.uga.edu/media/events/documents/Newhard_paper-9-6-13.pdf.

Back in 1954, an economist, Armen A. Alchian, working at RAND, conducted the world’s first event study. His study used stock market data, publicly available at the time, to infer which fissile fuel material was used in manufacturing the then highly secret H-bomb. That study was immediately withdrawn from public view. The paper linked above replicates Alchian's results.
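
For readers unfamiliar with the technique, here is a minimal sketch of the event-study logic on simulated data - nothing here reproduces Alchian's actual analysis. The idea is to estimate a stock's 'normal' relationship with the market over a pre-event window, then cumulate the abnormal returns after the event date.

```python
# Minimal event-study sketch on simulated data (illustrative only, not
# Alchian's 1954 analysis): fit a market model over the estimation window,
# then cumulate abnormal returns over the event window.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 120)                      # 120 daily market returns
stock  = 0.0002 + 1.2 * market + rng.normal(0, 0.008, 120)  # stock follows the market...
stock[100:] += 0.01                                         # ...plus a hypothetical post-event drift

est, evt = slice(0, 100), slice(100, 120)                   # estimation vs event window
beta, alpha = np.polyfit(market[est], stock[est], 1)        # market-model slope and intercept
abnormal = stock[evt] - (alpha + beta * market[evt])        # abnormal returns in the event window
car = abnormal.sum()                                        # cumulative abnormal return

print(f"Cumulative abnormal return over the event window: {car:.2%}")
```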


22/6/17: Unwinding Monetary Excesses: FocusEconomics


Focus Economics are running my comment (amongst other analysts') on the Fed and ECB paths for unwinding QE: http://www.focus-economics.com/blog/how-will-fed-reduce-balance-sheet-how-will-ecb-end-qe.


21/6/17: Azerbaijan Bank and Irish Saga of $900 million


A Bloomberg article on the trials and tribulations of yet another 'listing' on the Irish Stock Exchange, this one from Azerbaijan: https://www.bloomberg.com/news/articles/2017-06-18/azerbaijan-bank-took-900-million-irish-detour-on-way-to-default. Includes a comment from myself.



Friday, June 16, 2017

16/6/17: Replicating Scientific Research: Ugly Truth


Continuing with the 'What have I been reading lately?' theme, here is a smashing paper on the 'accuracy' of empirical economic studies.

The paper, authored by Hou, Kewei and Xue, Chen and Zhang, Lu, and titled "Replicating Anomalies" (the most recent version is from June 12, 2017, but it is also available in an earlier version via NBER), effectively blows the whistle on what is going on in empirical research in economics and finance. Per the authors, the vast literature that detects financial markets anomalies (or deviations away from the efficient markets hypothesis / economic rationality) "is infested with widespread p-hacking".

What's p-hacking? Well, it's a shady practice whereby researchers manipulate (by selective inclusion or exclusion) sample criteria (which data points to exclude from estimation) and test procedures (including model specifications and selective reporting of favourable test results) until insignificant results become significant. In other words, under p-hacking, researchers attempt to superficially maximise the significance of the model and its explanatory variables, or, put differently, they attempt to achieve results that confirm their intuition or biases.
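
A toy simulation makes the mechanism concrete (purely illustrative and mine, not taken from the paper): even when the true effect is exactly zero, trying enough arbitrary sample criteria and reporting only the specifications that clear the 5% bar will, sooner or later, manufacture 'significant' findings by chance alone.

```python
# Toy illustration of p-hacking (illustrative only, not from Hou, Xue and Zhang):
# y is unrelated to x by construction, yet trying many arbitrary sub-samples
# and keeping only the 'significant' ones produces spurious findings.
import numpy as np

rng = np.random.default_rng(42)
n, n_specs = 200, 50
x = rng.normal(size=n)
y = rng.normal(size=n)                               # true effect of x on y is zero

significant = 0
for _ in range(n_specs):
    keep = rng.random(n) > 0.3                       # an arbitrary 'sample criterion'
    xc = x[keep] - x[keep].mean()
    yc = y[keep] - y[keep].mean()
    b = (xc * yc).sum() / (xc ** 2).sum()            # OLS slope
    resid = yc - b * xc
    se = np.sqrt((resid ** 2).sum() / (keep.sum() - 2) / (xc ** 2).sum())
    if abs(b / se) > 1.96:                           # nominally 'significant' at 5%
        significant += 1

print(f"{significant} of {n_specs} arbitrary specifications come out 'significant'")
```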

What are anomalies? Anomalies are departures in the markets (e.g. in share prices) from the predictions generated by models consistent with rational expectations and the efficient markets hypothesis. In other words, anomalies occur when market efficiency fails.

There are scores of anomalies detected in the academic literature, prompting many researchers to advocate abandonment (in all its forms, weak and strong) of the idea that markets are efficient.

Hou, Xue and Zhang put these anomalies to the test. They compile "a large data library with 447 anomalies". The authors then control for a key problem with the data across many studies: microcaps. Microcaps - or small capitalization firms - are numerous in the markets (accounting for roughly 60% of all stocks), but represent only 3% of total market capitalization. This is true for key markets, such as NYSE, Amex and NASDAQ. Yet, as the authors note, evidence shows that microcaps "not only have the highest equal-weighted returns, but also the largest cross-sectional standard deviations in returns and anomaly variables among microcaps, small stocks, and big stocks." In other words, they are a higher-risk, higher-return class of securities. Yet, despite this, "many studies overweight microcaps with equal-weighted returns, and often together with NYSE-Amex-NASDAQ breakpoints, in portfolio sorts." Worse, many (hundreds of) studies use a 1970s regression technique that actually assigns more weight to microcaps. In simple terms, microcaps are the most common outlier, and despite this they are given either the same weight in analysis as non-outliers or a weight that is actually elevated relative to normal assets, even though microcaps have little bearing on the actual markets (their share of total market capitalization is only about 3%).
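
A made-up numerical example (mine, not the paper's) shows why the weighting choice matters so much: with microcaps making up 60% of names but only about 3% of market capitalization, equal-weighting lets whatever happens in microcaps dominate a portfolio return that value-weighting barely registers.

```python
# Made-up example (not from the paper): equal- vs value-weighted portfolio
# returns when microcaps are 60% of stocks but only ~3% of market cap.
import numpy as np

n_micro, n_big = 60, 40
caps    = np.concatenate([np.full(n_micro, 0.05), np.full(n_big, 2.425)])  # microcaps hold ~3% of total cap
returns = np.concatenate([np.full(n_micro, 0.04), np.full(n_big, 0.01)])   # microcaps 'earn' 4%, big stocks 1%

equal_weighted = returns.mean()                      # ~2.8%: dragged toward the microcap return
value_weighted = np.average(returns, weights=caps)   # ~1.1%: reflects where the market cap actually is

print(f"equal-weighted: {equal_weighted:.2%}, value-weighted: {value_weighted:.2%}")
```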

So the study corrects for these problems and finds that, once microcaps are properly accounted for, a grand total of 286 anomalies (64% of all anomalies studied), and under a stricter statistical significance test 380 (or 85% of all anomalies), "including 95 out of 102 liquidity variables (93%) are insignificant at the 5% level." In other words, the original studies' claims that these anomalies were significant enough to warrant rejection of market efficiency do not hold up once one recognizes this basic and simple problem with the data. Worse, per the authors, "even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Among the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3)."

This is pretty damning for those of us who believe, based on empirical results published over the years, that markets are boundedly efficient, and it is an outright savaging for those who claim that markets are perfectly inefficient. But this tendency of researchers to silver-plate statistics is hardly new.

Hou, Xue and Zhang provide a nice summary of the research on p-hacking and the non-replicability of statistical results across a range of fields. It is worth reading, because it significantly dents one's confidence in the quality of peer review and the quality of scientific research.

As the authors note, "in economics, Leamer (1983) exposes the fragility of empirical results to small specification changes, and proposes to “take the con out of econometrics” by reporting extensive sensitivity analysis to show how key results vary with perturbations in regression specification and in functional form." The latter call was never implemented in the research community.

"In an influential study, Dewald, Thursby, and Anderson (1986) attempt to replicate empirical results published at Journal of Money, Credit, and Banking [a top-tier journal], and find that inadvertent errors are so commonplace that the original results often cannot be reproduced."

"McCullough and Vinod (2003) report that nonlinear maximization routines from different software packages often produce very different estimates, and many articles published at American Economic Review [highest rated journal in economics] fail to test their solutions across different software packages."

"Chang and Li (2015) report a success rate of less than 50% from replicating 67 published papers from 13 economics journals, and Camerer et al. (2016) show a success rate of 61% from replicating 18 studies in experimental economics."

"Collecting more than 50,000 tests published in American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics, [three top rated journals in economics] Brodeur, L´e, Sangnier, and Zylberberg (2016) document a troubling two-humped pattern of test statistics. The pattern features a first hump with high p-values, a sizeable under-representation of p-values just above 5%, and a second hump with p-values slightly below 5%. The evidence indicates p-hacking that authors search for specifications that deliver just-significant results and ignore those that give just-insignificant results to make their work more publishable."

If you think this phenomenon is encountered only in economics and finance, think again. Here are some findings from other 'hard science' disciplines where, you know, lab coats do not lie.

"...replication failures have been widely documented across scientific disciplines in the past decade. Fanelli (2010) reports that “positive” results increase down the hierarchy of sciences, with hard sciences such as space science and physics at the top and soft sciences such as psychology, economics, and business at the bottom. In oncology, Prinz, Schlange, and Asadullah (2011) report that scientists at Bayer fail to reproduce two thirds of 67 published studies. Begley and Ellis (2012) report that scientists at Amgen attempt to replicate 53 landmark studies in cancer research, but reproduce the original results in only six. Freedman, Cockburn, and Simcoe (2015) estimate the economic costs of irreproducible preclinical studies amount to about 28 billion dollars in the U.S. alone. In psychology, Open Science Collaboration (2015), which consists of about 270 researchers, conducts replications of 100 studies published in top three academic journals, and reports a success rate of only 36%."

Let's get down to real farce: everyone in sciences knows the above: "Baker (2016) reports that 80% of the respondents in a survey of 1,576 scientists conducted by Nature believe that there exists a reproducibility crisis in the published scientific literature. The surveyed scientists cover diverse fields such as chemistry, biology, physics and engineering, medicine, earth sciences, and others. More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than 50% have failed to reproduce their own experiments. Selective reporting, pressure to publish, and poor use of statistics are three leading causes."

Yeah, you get the idea: you need years of research, testing and re-testing, and, more often than not, the results you get are not significant or only weakly significant. Which means that after years of research you end up with an unpublishable paper (no journal will welcome a paper without significant results, even though absence of evidence is as important in science as evidence of presence), no tenure, no job, no pension, no prospect of a career. So what do you do then? Ah, well... p-hack the shit out of the data until the editor is happy and the referees are satisfied.

Which, for you, the reader, should mean the following: when we say that 'scientific research established fact A', based on reputable journals publishing high-quality peer-reviewed papers on the subject, know that around half of the findings claimed in these papers, on average, most likely cannot be replicated or verified. And then remember: it took just one or two scientists to turn the world around from believing (based on the scientific consensus of the time) that the Earth sits at the centre of the Universe, to believing in the world as we know it to be today.


Full link to the paper: Charles A. Dice Center Working Paper No. 2017-10; Fisher College of Business Working Paper No. 2017-03-010. Available at SSRN: https://ssrn.com/abstract=2961979.

16/6/17: Trumpery & Knavery: New Paper on Washington's Geopolitical Rebalancing


Not normally my cup of tea, but Valdai Club work is worth following for all Russia watchers, regardless of whether you agree or disagree with its Moscow-centric worldview (and whether you agree or disagree that such a worldview even exists). So here is a recent paper on the Trump Administration and the context of Washington's search for a new positioning in a geopolitical environment where asymmetric influence moves by China, Russia and India, as well as by smaller players, e.g. Iran and the Saudis, are severely constraining the neo-conservative paradigm of the early 2000s.

Making no comment on the paper and leaving it for you to read:  http://valdaiclub.com/files/14562/.