
Essentiality of Money: A Historical Perspective

Economic Brief
January 2024, No. 24-01

In this article, I explore the historical development of the concept of essentiality of money, as well as monetary theory more broadly. I begin by introducing the concept of money and its role in facilitating economic activity. I delve into the evolution of monetary theory, starting with the classical theory and the marginal revolution and moving to the division between microeconomics and macroeconomics. Then I discuss the microfoundation revolution in macroeconomics and the debate over the essentiality of money. Finally, I examine the New Keynesian School (which uses reduced-form tools to introduce money) and the New Monetarist School (which emphasizes explicit microfoundations and the essential functions of money).


Money is any object (even digital) generally accepted by a large group of people as payment for goods and services. It eliminates the need for barter and facilitates economic activity by making transactions more efficient. Throughout history, humanity has used diverse forms of money including commodity money (such as precious metals, like gold and silver) and representative money (such as paper bills and tokens backed by a specific commodity).

In the modern era, most societies rely on fiat money, which derives its value from general acceptance. Fiat money possesses two defining characteristics that set it apart from its historical predecessors:

  • Inconvertibility: The issuer of the currency does not guarantee its redemption for any physical commodity, such as gold or silver.
  • Intrinsic uselessness: The currency is not desired for its own sake. Unlike gold or livestock, for example, fiat money has no inherent utility beyond its role as a medium of exchange.

Understanding what makes monetary exchange — particularly the exchange of fiat money — a socially useful institution is a central problem in economics. This has led to a search for economic models where money demonstrably delivers better outcomes than alternative arrangements. In such models, money is deemed essential.1 However, to fully appreciate the significance of essential money, we must first dive into the history of economic thought and monetary theory.

Before continuing, I want to emphasize that I do not attempt to provide a comprehensive history of economic thought or monetary theory. Instead, I focus on key views and events that shaped the field and contribute to understanding the concept of essential money.

Classical Theory, Marginal Revolution, Microeconomics and Macroeconomics

The concept of monetary exchange as an improvement over barter systems can be traced back at least to 1776 and Adam Smith's seminal work The Wealth of Nations. However, it was William Stanley Jevons who, in his 1875 book Money and the Mechanism of Exchange, introduced the now widely used term "double coincidence of wants." This term perfectly captures the core challenge of bartering: the need for two individuals to simultaneously desire each other's possessions.

Within economic discourse, the double coincidence of wants (or, more precisely, its absence) has long been recognized as a fundamental driver of the adoption of money. Classical economists such as Smith and David Ricardo understood the role of money as a medium of exchange. However, their theory of value — the labor theory of value — failed to fully explain the value of money. According to this theory, the value of, say, a gold coin is determined only by the labor cost of sourcing the precious metal from other countries or gold mines. However, not all fluctuations in the value of gold coins could be traced back to changes in the labor cost of sourcing their material. And while the adoption of fiat money was not a widespread phenomenon in the 18th century, the labor theory of value would have special difficulty explaining fiat currencies, whose negligible labor cost of production contrasts starkly with their market value.

The Water-Diamond Paradox

The fact is that the labor theory of value, despite its early influence, had significant flaws. This is well exemplified by its difficulty in explaining the water-diamond paradox: Water, which is essential for survival, commands a lower price than diamonds, which are not. Classical economists attempted to reconcile this by distinguishing between use value (determined by individual preferences) and market value (determined by production cost). This explanation proved inadequate, and it was not until the marginal revolution that a satisfactory answer to the water-diamond paradox emerged.

The marginal revolution — initiated by the publications of Carl Menger's 1871 book Principles of Economics, Jevons' 1871 book Theory of Political Economy and Leon Walras' 1874 book Elements of Pure Economics — redefined the concept of value in economics. Instead of viewing market value as determined by production cost, marginalists argued that it was determined solely by individual preferences, specifically by marginal utility: the value placed on an additional unit of a good, which diminishes as more units are consumed.

In the case of water and diamonds, the paradox is easily explained through the lens of marginal utility. Water is readily available in most contexts, leading to low marginal utility. On the other hand, diamonds are scarce, resulting in higher marginal utility. This difference in marginal utility — not the labor cost of production — determines the market value of each good.
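
To make this concrete, here is a stylized numerical sketch; the utility function and quantities are my own illustrative choices, not drawn from the marginalists' texts.

```latex
% Illustrative example only: the utility function and quantities are hypothetical.
% Suppose the utility from consuming q units of either good is u(q) = ln(1+q),
% so marginal utility falls as consumption rises:
\[ u'(q) = \frac{1}{1+q} \]
% Water is abundant, say q_water = 99, giving marginal utility 1/100.
% Diamonds are scarce, say q_diamonds = 1, giving marginal utility 1/2.
% At a consumer's optimum, relative prices track relative marginal utilities:
\[ \frac{p_{\text{diamonds}}}{p_{\text{water}}}
   = \frac{u'(q_{\text{diamonds}})}{u'(q_{\text{water}})}
   = \frac{1/2}{1/100} = 50 \]
```

In this stylized example, the scarce good commands the higher relative price even though water's total usefulness is far greater.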

Microeconomics and Macroeconomics

While the marginal revolution provided a sound explanation for the water-diamond paradox, it fell short of fully explaining the value of money, particularly fiat money, which lacks inherent utility. This divergence in theory set the stage for the separation of economics into two distinct branches: microeconomics and macroeconomics.

Microeconomics continued to evolve upon the foundation laid by the marginal revolution. Alfred Marshall — in his seminal 1890 work Principles of Economics — popularized the supply and demand diagrams, linking them to the concept of diminishing marginal utility. He also introduced several modern economic concepts, such as price elasticity and consumer/producer surplus. In his 1899 book The Distribution of Wealth, John Bates Clark applied the marginal principle to labor and capital, providing theoretical explanations for wages and profits.

Over time, microeconomics also incorporated powerful new tools to analyze individual decisions and market interactions, such as game theory and general equilibrium theory. Pioneered by John von Neumann and Oskar Morgenstern in their seminal 1944 work Theory of Games and Economic Behavior, game theory provided a framework for analyzing competitive and cooperative behavior in various economic contexts, from auctions and bargaining to market entry and oligopolistic competition. General equilibrium theory — spearheaded by Kenneth Arrow and Gerard Debreu in their 1954 paper Existence of an Equilibrium for a Competitive Economy — aimed to model the entire economic system as a set of interconnected markets simultaneously reaching equilibrium. This framework allowed economists to analyze the relationship between individual decisions and aggregate outcomes, providing insights into market efficiency, resource allocation and welfare implications of various policies.

By the end of the 1960s, microeconomics — previously dominated by the marginal revolution and its focus on individual behavior and market interactions — had entered a period of diversification and expansion. The core principles of neoclassical economics (including utility maximization and profit maximization) were firmly established and widely accepted. Game theory and general equilibrium theory rose to prominence. Microeconomics also expanded to encompass new and emerging areas of research, such as information and behavioral economics.

The 1960s also witnessed the birth of two subfields that would significantly influence monetary theory: mechanism design and search theory. In his seminal 1960 paper "Optimality and Informational Efficiency in Resource Allocation Processes," Leonid Hurwicz introduced the term mechanism design and formally began the study of designing mechanisms that lead to efficient outcomes. This laid the groundwork for what Neil Wallace would later call "the mechanism design approach to monetary theory." In his seminal 1961 paper "The Economics of Information," George Stigler studied the process by which a buyer who wishes to obtain the most favorable price must canvass various sellers. Stigler labeled this process "search," and a new field emerged to study it. As we will discuss, search theory became a prominent tool in monetary theory in the 1980s.

Macroeconomics started to diverge from microeconomics most notably in the field of monetary economics, with the development of the quantity theory of money in Irving Fisher's 1911 book The Purchasing Power of Money. The quantity theory of money proposes a direct relationship between the money supply and the price level. Although this relationship was recognized in the works of classical economists, Fisher formalized it in the equation:2 M×V=P×Q, where M represents the money supply, V represents the velocity of money (the average number of times a unit of currency is spent within a period), P represents the price level, and Q represents real output.

By itself, this equation does not represent a theory, since it is true by definition. (That is, it is a tautology.) What makes it a theory is that Fisher postulated that V and Q are stable, making M and P move proportionally to each other. In other words, money would have no real effects and would work solely as a veil over the economy, in the language introduced by Eugen von Bohm-Bawerk in his 1884 book The Positive Theory of Capital. That is, the use of money to facilitate transactions can obscure the real economic activity behind the value of goods and services people exchange. However, like a veil, it does not change what is under it: the real value of such exchanges.
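
As a purely numerical illustration of Fisher's postulate (all figures below are hypothetical), the following minimal Python sketch solves the equation of exchange for the price level while holding V and Q fixed:

```python
# A minimal numerical sketch of Fisher's equation of exchange, M * V = P * Q,
# under his postulate that velocity (V) and real output (Q) are stable.
# All figures are hypothetical and chosen purely for illustration.

V = 4.0      # velocity: each unit of currency is spent 4 times per period
Q = 1000.0   # real output: 1,000 units of goods per period

def price_level(M: float) -> float:
    """Solve M * V = P * Q for the price level P, holding V and Q fixed."""
    return M * V / Q

print(price_level(500.0))    # 2.0
print(price_level(1000.0))   # 4.0 -> doubling M doubles P, real output unchanged
```

Doubling the money supply doubles the price level one-for-one, which is exactly the sense in which money acts only as a veil in this framework.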

The Great Depression, however, served as a major turning point in macroeconomic thought. The prevailing models — which emphasized self-correcting market mechanisms and the stability of the money velocity and output (again, V and Q, respectively) — failed to explain the economic collapse and subsequent prolonged unemployment. John Maynard Keynes — with his groundbreaking 1936 work The General Theory of Employment, Interest and Money — challenged these assumptions and offered a new paradigm for understanding economic fluctuations. Keynes argued against the self-correcting nature of the economy, which was defended by Fisher and others. He proposed that government use of fiscal and monetary policies could play a crucial role in stabilizing the economy during recessions.

Change in the 1960s

While the division of economics into microeconomics and macroeconomics serves as a useful framework, it can be seen as an oversimplification. Many elements of microeconomics (particularly from the marginal revolution) were incorporated into macroeconomics. This is evident in the Austrian school of economics and the works of Knut Wicksell, Ludwig von Mises and Friedrich A. Hayek. Whether it be Wicksell's cumulative process, Mises's marginality principle of money or Hayek's stages of production, Austrian economists integrated numerous microeconomic elements into their analyses. However, despite this integration, their focus remained firmly on "macro" aggregates.

The end of the 1960s marked a significant turning point in the history of monetary theory. After decades of dominance by Keynesian economics, monetarism emerged as a major rival school. Milton Friedman and Anna Schwartz's 1963 book A Monetary History of the United States, 1867-1960 challenged the prevailing Keynesian view that money played a negligible role in the economy. Instead, Friedman and Schwartz argued that changes in the money supply had a significant impact on real economic activity, particularly in the short run. They provided extensive historical evidence to support their claims, revitalizing interest in the quantity theory of money, which had been largely disregarded by Keynesians. So, by the end of the 1960s, macroeconomics was back to studying the same equation Fisher proposed in 1911: M×V=P×Q.

The Microfoundation Revolution and the Essentiality of Money

The microfoundation revolution in macroeconomics was a significant shift in thought that occurred in the 1970s and 1980s. It emphasized the need to ground macroeconomic models in the microeconomic behavior of agents. This meant explicitly modeling how individual preferences, decision-making processes and interactions lead to aggregate economic outcomes. One of the main reasons for this revolution is found in Robert Lucas's influential 1976 work "Econometric Policy Evaluation: A Critique," which argued that models lacking microfoundations ignored how agents adapt their behavior to policy changes. (This is known as the Lucas Critique.)

However, one of the key reasons for the split between macroeconomics and microeconomics in the early days of the marginal revolution had been specifically the difficulty of explaining the value of money within the existing microeconomic framework. So, the microfoundation revolution put this challenge back at the center of the economic debate and, while attempting to address it, created a new divide within macroeconomics.

From Reduced Forms of Money Demand Back to Money as a Veil

On one side, adopting a more pragmatic approach, were economists who argued that explicit microfoundations for money demand were not necessary in macroeconomic models. Their models incorporated optimization theory and other aspects of microeconomics but relied on "reduced form" approaches to capture individual money demand. These approaches, illustrated below, included:

  • Adding money directly to the utility function: Pioneered by Don Patinkin and Miguel Sidrauski in separate works,3 it assumes that holding money itself provides utility to individuals.
  • Imposing a cash-in-advance constraint: Proposed by Robert Clower in his 1967 paper "A Reconsideration of the Microfoundations of Monetary Theory," it assumes that individuals need money upfront to make purchases, thereby creating a demand for money.
  • Imposing a shopping-time constraint: Originally introduced by Thomas Saving in his 1971 paper "Transactions Costs and the Demand for Money," it assumes that individuals need time to shop for goods and services, which creates a demand for money to reduce the cost of transactions.
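
The following is a stylized sketch of how these three devices typically appear in a representative-household problem. The notation is simplified and my own, not that of the original papers.

```latex
% Stylized sketches of the three reduced-form devices. Notation is simplified
% and my own, not taken from the original papers: c_t is consumption, M_t is
% nominal money holdings, P_t is the price level and l_t is leisure.

% (1) Money in the utility function (Patinkin, Sidrauski): real balances enter
%     the household's utility directly.
\[ \max_{\{c_t, M_t\}} \sum_{t=0}^{\infty} \beta^{t}\, u\!\left(c_t, \frac{M_t}{P_t}\right) \]

% (2) Cash-in-advance (Clower): purchases must be paid for with money carried
%     into the period.
\[ P_t c_t \le M_t \]

% (3) Shopping time (Saving): transacting takes time, and larger real balances
%     economize on it, freeing time for work or leisure.
\[ \ell_t = 1 - s\!\left(c_t, \frac{M_t}{P_t}\right), \qquad
   \frac{\partial s}{\partial (M_t/P_t)} < 0 \]
```

In each case, the device generates a demand for money without explaining why money, rather than some other arrangement, performs these roles.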

These three approaches were actually shown to be equivalent under certain assumptions.4 As a result, the literature evolved using variations of them until the 1990s. Then, with the publication of Michael Woodford's 1998 paper "Doing Without Money: Controlling Inflation in a Post-Monetary World," it took an even more radical turn. Woodford argued that, as aggregate real money balances become a small fraction of aggregate real output, policy-induced changes in real money balances have minimal effects on output. Therefore, he argued, basing monetary policy advice on models where real money balances are zero incurs no significant loss. While frictions in the adjustment of nominal prices remained important, money itself was once again treated as a veil, unimportant for describing the movements observed in the real economy. Woodford popularized this approach in his book Interest and Prices: Foundations of a Theory of Monetary Policy, and it has been widely adopted by macroeconomists, particularly in central banks. It has been labeled the "New Keynesian School," since one of its more standard features is some form of friction that prevents nominal prices from adjusting, and it is this friction that drives economic imbalances.

The Keys to the Essentiality of Money

On the other side, a camp emerged that insisted on explicit microfoundations for money demand. This meant building models where the economy with money would support outcomes not possible in the economy without it (that is, models where money is essential). Early efforts in this camp were rooted in the concept of sequence economies, developed by Roy Radner in the late 1960s and early 1970s.5 His work inspired subsequent contributions by Ross Starr, Frank Hahn and Joseph Ostroy.6 Simultaneously, Paul Samuelson's overlapping-generations model of money was gaining traction, finding applications in studies like John Kareken and Neil Wallace's 1978 report "Samuelson's Consumption-Loan Model With Country-Specific Fiat Monies." In a separate work, Wallace highlights that these models feature "the existence of equilibria in which value is attached to a fixed stock of fiat money and... the optimality of such equilibria and the nonoptimality of nonfiat-money equilibria."7 That is, he argues that money is essential in these models.

However, the true turning point came with Nobuhiro Kiyotaki and Randall Wright's seminal 1989 paper "On Money as a Medium of Exchange" and the subsequent development of search theories of money. This framework quickly gained momentum, and by the early 2000s, many core problems in monetary theory had already been studied using this approach.8 The literature further benefited from the mechanism design approach outlined in Wallace's 2010 book chapter "The Mechanism-Design Approach to Monetary Theory," which helped identify essential frictions in economies (such as imperfect monitoring and costly connections) that contribute to the essentiality of money.9 Additionally, the unified framework developed by Ricardo Lagos and Randall Wright in their 2005 paper "A Unified Framework for Monetary Theory and Policy Analysis" incorporated elements of general equilibrium, increasing the tractability and applicability of the framework. These advancements led to the emergence of the New Monetarist School.10 This school not only studies monetary exchange but applies the same methodology it developed to several other institutions (such as banks) related to payments and financial intermediation.

Where We Are and Where We Are Going

In the previous sections, I attempted to provide a description of the past. Now, I will allow myself to speculate on the future (a futile but fun endeavor).

It's important to acknowledge the influence of the New Keynesian School. Its widespread adoption makes it unlikely to be replaced entirely in the foreseeable future. Furthermore, many simpler economic phenomena can be effectively explained by models that abstract from the microfoundations of money, similar to how we still rely on Marshall's classic supply and demand diagrams for introductory analysis.

However, despite the New Keynesian School's continued relevance, I believe that the future does not lie in using reduced-form approaches to capture the crucial aspects of the financial system. The limitations of this approach — and the ever-present risks highlighted by the Lucas Critique — necessitate this shift.

We can already witness this shift taking place. When it comes to digital currencies and the potential issuance of a central bank digital currency, models devoid of currencies or featuring arbitrary restrictions on how trade takes place offer limited insights. Furthermore, the recognition that a clear distinction between "money" and "non-money" doesn't exist — and that assets (such as bitcoin, the U.S. dollar and U.S. Treasuries) can all function as a medium of exchange and be substitutes to varying degrees — necessitates a broader understanding of how agents choose which one to use.

To highlight this point, consider again the argument from Woodford that, as aggregate real money balances become a small fraction of aggregate real output, policy-induced changes in real money balances have minimal effects on output. As Ricardo Lagos and Shengxing Zhang show, that is not the case when there is imperfect competition in the lending market.11 That is because using money instead of credit for payments provides an outside option to agents and improves their bargaining position against lenders. And high inflation hurts these agents by worsening their bargaining position, even if in equilibrium they do not carry money. Such analysis is not possible if we do not explicitly model individuals' choice of payment instrument.

To conclude, I believe that the future of monetary theory promises a more nuanced and comprehensive approach, one that will provide a deeper analysis of the financial system's essential functions in the modern — and future — economy.


Bruno Sultanum is an economist in the Research Department at the Federal Reserve Bank of Richmond.

 
1. Frank Hahn introduced the terminology "essential money" in his 1973 papers "On Transaction Costs, Inessential Sequence Economies and Money" and "On Foundations of Monetary Theory," specifically referring to his concept of sequence economies. Now it is used more broadly to mean economies in which monetary arrangements implement outcomes that are better than non-monetary ones.

2. See The Purchasing Power of Money for Fisher's formulation of the quantity theory. For more context, Thomas Humphrey's 1974 paper "The Quantity Theory of Money: Its Historical Evolution and Role in Policy Debates" reviews the development of the quantity theory of money, including a discussion on the contributions prior to Fisher.

3. Patinkin's work is the 1965 book Money, Interest and Prices, and Sidrauski's work is the 1967 paper "Rational Choice and Patterns of Growth in a Monetary Economy."

4. See, for example, Robert Feenstra's 1986 paper "Functional Equivalence Between Liquidity Costs and the Utility of Money."

6. See Hahn's review of these authors' papers in his 1973 work "On Foundations of Monetary Theory."

7. See Wallace's 1977 report "On Simplifying the Theory of Fiat Money."

8. See, for example, Wallace's 2001 paper "Whither Monetary Economics?"

9. Another key advantage of applying a mechanism-design approach is that it allows us to benchmark the economy against the best outcome that can be achieved given the existing frictions. For example, in his 2016 paper "Monetary Exchange and the Irreducible Cost of Inflation," my Richmond Fed colleague Russell Wong applies mechanism design to construct a lower bound to the welfare cost of inflation in the United States.

10. For more on the New Monetarist School, see Stephen Williamson and Randall Wright's 2010 book chapter "New Monetarist Economics: Models" and 2010 report "New Monetarist Economics: Methods."


To cite this Economic Brief, please use the following format: Sultanum, Bruno. (January 2024) "Essentiality of Money: A Historical Perspective." Federal Reserve Bank of Richmond Economic Brief, No. 24-01.


This article may be photocopied or reprinted in its entirety. Please credit the author, source, and the Federal Reserve Bank of Richmond and include the italicized statement below.

Views expressed in this article are those of the author and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

