Most obviously, his work includes no explicit model. The expositions in Friedman (1969
[1969]; 1989) start with the conceptual experiment of a doubling of the money stock that leads to a
doubling of the price level. In itself, the conceptual experiment offers no guidance on how to test the
hypothesis of the neutrality of money. The exposition then jumps to a summary of NBER-style
cyclical timing relationships involving money. The empirical relationships between money and
cyclical peaks in the business cycle and between money and inflation, however, likewise offer no
guidance on how to test hypotheses about the role of money in the transmission of monetary policy.
Friedman’s comparative advantage was as an applied statistician, not as a model builder.
Moreover, Friedman developed his ideas over time in the context of contemporaneous debate.
Today, the disappearance of that context can render opaque the underlying issues. Consider two
examples. First, Friedman challenged the prevailing view of the Great Depression as evidence of
failure of the price system. According to the real bills views then prevalent among policymakers, the
role of the Fed was to proportion the supply of credit to the demand for credit arising from legitimate
(nonspeculative) uses. Given the view that it should only accommodate the demand for credit, the
Fed allowed the money stock to collapse. In their turn, Keynesians employed the idea of a liquidity
trap in order to dismiss the relevance of monetary policy to the Depression. In opposition, Friedman
used a reserves-money multiplier framework with reserves as an exogenous variable to argue that
money was a causal factor. Second, given his methodology for testing hypotheses based on their
predictive power, using OLS regressions, Friedman and Meiselman (1963) compared the power of
money versus exogenous expenditures (investment) for predicting consumption. The organization of
Friedman’s ideas around the exogeneity of money makes his work seem dated now.
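The Friedman and Meiselman (1963) exercise can be illustrated with a minimal sketch on synthetic data (the variable names and data-generating process here are hypothetical, chosen only to mimic the setup): regress consumption separately on the money stock and on autonomous expenditures, and compare which simple OLS regression fits better.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_r2(x, y):
    """Fit y = a + b*x by ordinary least squares and return R^2."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic data: by construction, consumption is driven mainly by money,
# as the monetarist hypothesis would predict.
n = 200
money = np.cumsum(rng.normal(1.0, 0.5, n))       # trending money stock
autonomous = rng.normal(10.0, 2.0, n)            # autonomous expenditures (investment)
consumption = 5.0 + 2.0 * money + 0.3 * autonomous + rng.normal(0.0, 1.0, n)

r2_money = ols_r2(money, consumption)
r2_autonomous = ols_r2(autonomous, consumption)
print(f"R^2 (money):      {r2_money:.3f}")
print(f"R^2 (autonomous): {r2_autonomous:.3f}")
```

Under these assumed data, the money regression dominates; the point of the sketch is only the form of the horse race, not its historical outcome.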
2. Identification: Friedman versus Cowles
In order to understand Friedman’s methodology for identification of the forces causing
macroeconomic instability, it is helpful to place it in the context of the debate that took place around
1950 at the University of Chicago between Friedman and members of the Cowles Commission. This
debate in turn arose in the context of the movement away from the institutionalism that emphasized
descriptive reality and how the structure of the economy would change as institutions changed.
Economics became a discipline that tested hypotheses based on the predictions of a model. The need
for a model in order to disentangle causation from correlation now constitutes a bedrock principle of
the discipline of economics. In this respect, the Cowles Commission set the research agenda for
modern macroeconomics (Christ 1952). The holy grail of macroeconomics has become construction
of a structural model of the economy grounded in microeconomic theory.
One motivating force behind the Cowles agenda was the creation of a structural model that
could be used to implement a policy of aggregate-demand management. Lawrence Klein (1964, 2)
wrote: “At an early stage, it was recognized that this policy implementation would require accurate
predictions of the macro-economy, and econometric model building has this goal precisely in mind.”
He led the profession in the estimation of large-scale econometric models of the economy subject to
identification restrictions such as the exclusion restrictions in individual equations.1
1
Friedman (1953, 8) criticized this method of identification by pointing out that in dynamic models
in which the future is important, expectations affect both supply and demand: “[T]he simple and even
In A Theory of the Consumption Function, Friedman (1957a) contributed to the Cowles
agenda of constructing micro-founded structural models. However, as reflected in his critique of
aggregate-demand management and his advocacy of rules that relied upon unfettered operation of the
price system to assure macroeconomic stability, Friedman was skeptical of economists’ ability to
construct adequate large-scale econometric models. Accordingly, as an alternative to identification
through estimation of such models, Friedman looked for episodes in which the Fed interfered with
the operation of the price system as flags for predicting cyclical turning points in the economy.
In this spirit, Friedman investigated how the practice of controlled experiments in the hard
sciences could be applied to economics.2
In “The Methodology of Positive Economics,” Friedman
(1953) argued that the validity of a hypothesis lies not in its descriptive realism but rather in whether
its predictions can be refuted. Only with the abstractions of a model rather than with a complicated
description of reality is it possible to make predictions that can be refuted rather than rationalized ex
post (Hetzel 2016a). Friedman also stressed that the test of a model was not how well it fit the data
but rather how well it predicted when applied to data not available to the economist at the time of
formulating the model (Friedman and Schwartz 1991; Hammond 1996).
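Friedman’s criterion—prediction on data not available when the model was formulated—corresponds to what is now called out-of-sample testing. A minimal sketch, assuming a simple linear model, synthetic data, and a chronological split (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic series: the "model" is formulated on the first 30 observations
# and judged on the final 10, which it has never seen.
x = np.linspace(0.0, 40.0, 40)
y = 3.0 + 0.8 * x + rng.normal(0.0, 1.0, 40)

train_x, test_x = x[:30], x[30:]
train_y, test_y = y[:30], y[30:]

# Fit y = a + b*x on the early sample only.
X = np.column_stack([np.ones_like(train_x), train_x])
beta, *_ = np.linalg.lstsq(X, train_y, rcond=None)

# Compare in-sample fit with genuine out-of-sample prediction error.
in_sample_rmse = np.sqrt(np.mean((train_y - X @ beta) ** 2))
pred = beta[0] + beta[1] * test_x
out_of_sample_rmse = np.sqrt(np.mean((test_y - pred) ** 2))
print(f"in-sample RMSE:     {in_sample_rmse:.2f}")
print(f"out-of-sample RMSE: {out_of_sample_rmse:.2f}")
```

A model that merely fits the estimation sample will show the in-sample error; only stability of the relationship out of sample meets Friedman’s test.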
In testing monetarist hypotheses, Friedman organized the historical record in a way that
isolated episodes in which the Fed interfered with the price system. The challenge is that the forces
generating the phenomena of concern (inflation and cyclical fluctuations) are obscured by the poor
experimental design of the episodes that give rise to them. As evidenced by the historical narrative in Friedman and
Schwartz (1963a), Friedman therefore pursued an identification strategy characterized here as the
concatenation of semicontrolled policy experiments. During the monetarist heyday, Friedman
flagged such episodes using monetary accelerations and decelerations (Friedman and Schwartz 1963b). The
use of historical narrative to concatenate episodes of monetary disorder would, it was hoped,
wash out other forces that could not be held constant (Hammond 1996, 103).3
obvious step of filing the relevant factors under the headings of ‘supply’ and ‘demand’ effects a great
simplification…. But the generalization is not always valid. For example, it is not valid in a …
speculative market.”
2
One can see the different approaches to identification in Koopmans’ (1947, 166-7) criticism of
Burns and Mitchell’s NBER approach to the study of the business cycle:
[E]conomic theories are based on … knowledge of the motives and habits of consumers and
of the profit-making objectives of business…. The mere observation of regularities in the
interrelations of variables then does not permit us to recognize or to identify behavior
equations among such regularities. In the absence of experimentation, such identification is
possible, if at all, only if the form of each structural equation is specified.
Friedman (1960, 23) chose to push the idea of “experimentation” in his narrative exploration of the
hypothesis that “Governmental intervention in monetary matters, far from providing the stable
monetary framework for a free market economy that is its ultimate justification, has proved a potent
source of instability.” See Hetzel (2016a).
3
The earliest example of Friedman’s concatenation methodology was “Price, Income, and Monetary
Changes in Three Wartime Periods” (Friedman 1952 [1969]). Friedman argued for inflation as a