The Science of Monetary Policy


The near loss of price stability in the 1970s led to serious attention being paid to the role of expectations in economic management and the need to find monetary rules that respected changing incentives faced by households and firms under different economic environments. Models were then developed that mostly yielded arguments for price stability under an interest rate feedback rule and for a decade or so, under the Long Expansion, all seemed well.

 


26th March 2015


Professor Jagjit Chadha

 

 

“The essence of central banking is discretionary control of the monetary system. The purpose of central banking has been defined in various ways: to maintain stability of the price level, to keep the economy on an even keel, and so on... The choice of purpose - the object of monetary policy - is not irrelevant to the choice of method: a community might hope more reasonably in some cases than in others to attain its ends by making the monetary system work to rule. And working to rule is the antithesis of central banking. A central bank is necessary only when the community decides that a discretionary element is desirable. The central banker is the man who exercises his discretion, not the machine that works according to rule.”

R. S. SAYERS (1957), Central Banking After Bagehot.

 

1.  Introduction

 

Let me start at the end of this story. The end of monetary history was supposed to be an independent central bank pursuing, implicitly or explicitly, an inflation target under the guise of operational independence. Other aspects of financial policy and even fiscal policy could be partitioned off into a box that said: Do Not Open. The central bank could pursue its target in a rule-based manner and get agents to bind their behaviour to the central bank's objectives. This alloy of jointly determined beliefs and targets would tend to ensure stability in the face of shocks. It became not so much a matter of getting people to do what you want but of getting people to do your job for you: if people always expected stability following any stream of shocks, they would not need so much proof from recessions and painful interest rate hikes that central bankers really meant business. Belief, even in a cynical age, can still be a powerful weapon.

 

But the ongoing financial crisis has thrown into sharp relief the question of whether monetary policy can be separated from other aspects of financial and fiscal policy. It has become increasingly difficult to argue with the proposition that financial regulation, fiscal policy and even the objectives of overseas policymakers are all conditional states that constrain the monetary policy maker's actions. Indeed, in his June 2010 Mansion House speech, the then Governor of the Bank of England wholeheartedly welcomed the Chancellor's plan to recombine monetary and financial policy: 'the Bank (will) take on (responsibilities) in respect of micro prudential regulation and macro prudential control of the balance sheets of the financial system as a whole. I welcome those new responsibilities. Monetary stability and financial stability are two sides of the same coin. During the crisis the former was threatened by the failure to secure the latter'. Indeed, prior to the financial crisis a form of separation principle was in place, whereby monetary policy concentrated on one measure of macroeconomic disequilibria, inflation, and financial or credit policy was perceived as essentially an aspect of microeconomic regulation.[1]

 

From an imaginary vantage point in the first few years of the 21st century, the collapse of the separation principle would seem rather surprising. The new monetary policy consensus that emerged seemed to have solved many of the technical problems of monetary policy management. A representative view from this era, though written with circumspection, is that of Ben Bernanke (2004), who argued that: 'Few disagree that monetary policy has played a large part in stabilizing inflation, and so the fact that output volatility has declined in parallel with inflation volatility, both in the United States and abroad, suggests that monetary policy may have helped moderate the variability of output as well...my view is that improvements in monetary policy, though certainly not the only factor, have probably been an important source of the Great Moderation.' He suggests several reasons: (i) low and stable inflation outcomes promoted a more stable economic structure; (ii) better monetary policy may have reduced the size of measured shocks and narrowed the distribution from which they are drawn; and (iii) inflation expectations ceased to be an exogenous driver of macroeconomic instability. But the most important was arguably a simple understanding of the limitations of monetary policy. Bound by severe information constraints about the correct model of the economy and the state of nature, monetary policy concentrated simply on gauging the correct current level and prospective path of short-term interest rates in order to stabilise aggregate demand over the medium term. There was a general acceptance that a simple rule was likely to dominate a full-blown optimal control solution, which was, in any case, always predicated on a particular model, not time consistent, and subject to discretion, or what used to be called 'fine-tuning'.[2] A canonical example of such a simple rule is sketched below.
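
The lecture does not commit to a particular rule, so by way of illustration here is a minimal sketch of the best-known candidate, the Taylor (1993) rule; the rule and its illustrative US coefficients are my addition, not the lecture's.

```python
# A minimal sketch of a simple interest rate feedback rule: the Taylor rule.
# The 2% neutral real rate, 2% inflation target and 0.5 response
# coefficients are Taylor's illustrative US values, not the lecture's.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal policy rate (% per year) given inflation and the output gap (%)."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Inflation one point above target with a closed output gap: the nominal rate
# rises by more than inflation, so the real rate rises and bears down on demand.
print(taylor_rule(inflation=3.0, output_gap=0.0))  # 5.5 => real rate 2.5 > r_star
```

The more-than-one-for-one response to inflation is the 'Taylor principle' that reappears in Section 6, where the policy function must be steeper than the Fisher equation line.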

 

But from an older perspective the Art of Central Banking predated the Science of Monetary Policy and tended to define central banking not so much in terms of narrow price stability as in terms of objectives that might now be termed financial policy, involving policies to safeguard the ongoing health of the financial system.[3] This art developed as a response both to the multiplicity of roles 'grabbed' by a developing central bank and, more fundamentally, to crises. As already explained in this book, Bagehot (Lombard Street, 1873) famously outlined the principles of central banking in a crisis: (i) the central bank ought to lend freely at a high rate of interest to borrowers with good collateral; (ii) the assets should be valued at somewhere between panic and pre-panic prices; and (iii) institutions with poor collateral should be allowed to fail. The general understanding of these principles has been associated with the avoidance of banking panics in England since the failure of Overend and Gurney in 1866.

 

Whilst short term liquidity support, of varying kinds, was ultimately offered by all major central banks following the August 2007 freeze in the interbank markets, another issue emerged shortly thereafter: how to deal with the zero bound? In each case, the response has been to increase the size of the central bank balance sheet. The basic idea here borrows from an older literature in which the size and composition of the central bank balance sheet, and the risk taken onto it, are used to control financial conditions more generally. Because of imperfect substitutability across financial claims, a central bank that uses its balance sheet to alter the structure of private sector balance sheets, and to exploit market segmentation, can influence financial prices (Tobin, 1969). This leads to the question of the extent to which balance sheet operations, and commercial bank reserve policy, are indeed instruments independent of the short term interest rate, but we shall leave these matters to the next chapter.

 

The ongoing financial crisis has injected a considerable degree of variance into the economic belief system. In the eyes of many it would appear that an economic crisis necessarily implies a crisis in economics itself. So much so that many are questioning not only the relevance of trying to use microeconomic foundations in order to understand economic behaviour in the aggregate, but are even citing over-reliance on economic models, or on one type of economic model, as a contributory factor in the crisis. I will argue that although there had been too much reliance on one type of simple model, the methodology implied by that model has not been shown to be flawed.

In fact, the challenges faced by economists really stem from two basic errors. The first error has been to over-analyse the policy implications of a simple New Keynesian model in which the basic rigidity has only involved some form of price stickiness and very little else. The second has been to compound the problem by spending extensive resources trying to estimate forms of this model, and then using them to underpin policy formulation, rather than developing a more convincing structure in which informational and financial frictions trigger significantly different responses to economic shocks. These errors made it nearly impossible to develop fully a richer vein of models that yield the kind of policy prescriptions chosen, in a hurry and in the dark, in response to this crisis. In this lecture I shall try to explain how we ended up with such an alarmingly simple, and perhaps simplistic, yet effective approach.

 

2.  The Record

 

The UK post-war macroeconomic record, although common knowledge, is worth a re-examination. The first Figure shows the year-on-year growth in real GDP and in a broad-based measure of prices, the GDP deflator. From the mid-1950s to the end of the Bretton Woods system of fixed but adjustable exchange rates, we can observe reasonable levels of GDP growth and passable attempts to maintain price stability. But let me point to two observations: every successive peak in inflation was higher and, prior to 1971, the successive peaks in output growth were lower. After the abandonment of fixed exchange rates, the downward shock to potential output growth in the early 1970s was treated as a demand-deficient phenomenon and, without a firm nominal anchor for prices and wage settlement, expansionary policies generated high and persistent levels of inflation. The disinflation of the 1980s was associated with a further recession but, again with no credible nominal anchor, the subsequent boom of the late 1980s led both to higher inflation and to the abandonment of a domestic nominal anchor by joining the ERM. The adoption of inflation targeting in 1992 heralded a period of exceptional stability with low inflation and stable growth. Of course, the good times ended in 2007, with year-on-year growth at -6% in the first quarter of 2009.

 

There has been much talk about how to measure aggregate welfare. In most macroeconomic models the policy maker is thought to care about inflation and output deviations from target or steady state, which translates crudely into adding up the two sets of standard deviations. Obviously there are many objections to such a measure, as we might be interested in distributional issues and in other measures of welfare, such as consumption growth or real household net disposable income. But even when we use models with deep microeconomic foundations, based on optimising over the household's budget constraint, we tend to find that welfare is (inversely) proportional to some weighted average of inflation and output variability. So in Figure 2 I show a simple misery index which, rather than adding the level of inflation to that of unemployment, plots the rolling five-year sum of the standard deviations of inflation and output growth (a sketch of its construction follows below). Naturally, the weights can be disputed, as can the choice of the Parliament-inspired five-year horizon, but the index is perhaps a convenient way of thinking about measuring the uncertainty surrounding economic performance. Overall there seems to be something of a downward trend, suggesting that macroeconomic management by-and-large may be improving. But clearly there are significant events that upset the monetary and financial settlement and require policymakers to redesign the framework.
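
For concreteness, here is a minimal sketch of how such a volatility-based misery index might be constructed; the randomly generated series, the window length in quarters and the equal weights are my assumptions for illustration, not the lecture's data.

```python
# A sketch of the volatility-based misery index described above: the rolling
# five-year sum of the standard deviations of inflation and output growth.
# The series below are randomly generated placeholders, NOT the UK data.
import numpy as np
import pandas as pd

def misery_index(inflation: pd.Series, growth: pd.Series, window: int = 20) -> pd.Series:
    """Rolling sum of standard deviations; window = 20 quarters, i.e. five years."""
    return inflation.rolling(window).std() + growth.rolling(window).std()

idx = pd.period_range("1955Q1", "2014Q4", freq="Q")   # hypothetical sample
rng = np.random.default_rng(0)
inflation = pd.Series(rng.normal(4.0, 2.0, len(idx)), index=idx)
growth = pd.Series(rng.normal(2.5, 1.5, len(idx)), index=idx)
print(misery_index(inflation, growth).dropna().tail())
```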

Let us take another type of slice through this data by scattering it in different sub-periods. In the top left panel I show the Bretton Woods period; in the top right panel the period we might describe as the quest for a nominal anchor; the bottom left is the classical inflation targeting period over the long expansion; and the bottom right the period through which we are now living. A standard view is that inflation should not be related to growth other than temporarily, so that over long run periods we ought not to see any significant relationship. The Phillips curve, as we discussed in the previous chapter, is a short run trade-off that will disappear over time as growth returns to its long run level. What I think we can see is that, when we move from the North West to the North East quadrant, the range of outcomes for both output and inflation becomes considerably wider and, in particular, the loss of a nominal anchor means that shocks are transmitted to nominal outcomes in a persistent manner. In fact it looks as if the whole economy pivoted onto a higher level of inflation - stagflation - with a reduction in the medium term rate of growth. Against these developments the subsequent compression of outcomes looks quite remarkable: for a 15-year period, practically a generation, inflation and output seemed boxed into positive levels under 5%. The negative income shocks associated with the financial crisis have been large, but the second remarkable observation to make is that the nominal anchor has done its job. There has been no loss (yet) of monetary stability as there was in the 1970s and 1980s. Let us hope that we stay in the neighbourhood of these outcomes.

 

3.  Humean Lucas

 

The long run neutrality of money is a central plank of monetary policy making (Lucas, 1995). As is well known, the insight dates back at least to David Hume, who argued in the first sentence of his famous essay Of Money that “(m)oney is not, properly speaking, one of the subjects of commerce; but only the instrument which men have agreed upon to facilitate the exchange of one commodity for another. It is none of the wheels of trade: It is the oil which renders the motion of the wheels more smooth and easy. If we consider any one kingdom by itself, it is evident, that the greater or less plenty of money is of no consequence; since the prices of commodities are always proportioned to the plenty of money.”[4] So in the long run the value of money was determined by its relative scarcity and would do nothing to change real endowments, preferences and the relative prices of goods and services. Indeed Hume went even further, to suggest a way for monetary policy to be thought about:

 

“From the whole of this reasoning we may conclude, that it is of no manner of consequence, with regard to the domestic happiness of a state, whether money be in a greater or less quantity. The good policy of the magistrate consists only in keeping it, if possible, still encreasing; because, by that means, he keeps alive a spirit of industry in the nation, and encreases the stock of labour, in which consists all real power and riches.” He observed that because of money's long run neutrality it does not affect welfare, but that by ensuring it is circulated in a manner that keeps trade and industry going it can help to - and perhaps I over-interpret - smooth the business cycle.

 

There are two aspects of Lucas's thought I want to place before you. The first is simply about the costs of business cycle fluctuations and what value we might wish to place on stabilisation policy. Using a simple calculation, and the assumption that we can treat the average household as a representative one, we can express the cost of the expected standard deviation of consumption in terms of the average level of consumption of this average household. We can ask ourselves how much consumption the representative household would be willing to give up in order to eliminate the standard deviation of the year-to-year changes in its consumption of goods and services. It turns out that it is not very much - typically, a small fraction of 1% of overall consumption - because the standard deviation of average consumption growth does not turn out to be particularly high and households, although risk averse, are not paranoid. The back-of-envelope version of the calculation is sketched below.
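
A minimal sketch of Lucas's (1987) calculation, assuming constant relative risk aversion utility; the risk aversion values and the 3.2% standard deviation of consumption around trend are illustrative numbers of the order Lucas used, not figures from the lecture.

```python
# With CRRA utility (risk aversion gamma) and log consumption fluctuating
# around trend with standard deviation sigma, the fraction of average
# consumption a household would give up to eliminate the fluctuations is
# approximately 0.5 * gamma * sigma**2.
def lucas_cost(gamma: float, sigma: float) -> float:
    return 0.5 * gamma * sigma ** 2

for gamma in (1, 2, 5):
    print(f"gamma = {gamma}: cost = {lucas_cost(gamma, 0.032):.3%} of consumption")
# Even at gamma = 5 the cost is only about a quarter of one per cent:
# risk averse, but not paranoid.
```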

 

Secondly, in a justly famous paper, Lucas criticised the econometric evaluation of policy of the type outlined by Tinbergen and Theil. The argument takes the following form.

 

·         The policy maker may estimate certain behavioural parameters, e.g. how much inflation may flow from a given increase in output;

·         If they have some notion of establishing price stability at some implicit level of inflation, they may decide to respond to any observed increases in output using their estimates of the responsiveness of inflation to output and of the impact their policy instrument has on output;

·         The problem Lucas highlighted was circularity: the calculation of the optimal response of the policy maker depended on estimates that themselves contained previous responses of policy makers;

·         If the policy maker now changed his behaviour, not only would the estimates based on historic behaviour be wrong, but the response of the economy might not have the expected effect (a toy simulation of this point follows below).
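
Here is a toy simulation of the critique, under assumed parameters of my own: output responds only to money surprises, and the public knows the policy feedback rule. The reduced-form slope of output on money that an econometrician would estimate then depends on the rule in force, even though the structural parameter never changes.

```python
# Structural model: y = b*(m - E[m]) + e, with policy rule m = theta*y_lag + u.
# Rational agents know theta, so E[m] = theta*y_lag and only the surprise u
# moves output. The regression slope of y on m is an artefact of the rule.
import numpy as np

rng = np.random.default_rng(1)
b, T = 1.0, 50_000

def simulate(theta):
    y_lag, ys, ms = 0.0, [], []
    for _ in range(T):
        u, e = rng.normal(), rng.normal(scale=0.5)
        m = theta * y_lag + u                 # policy rule (known to agents)
        y = b * (m - theta * y_lag) + e       # only the surprise u matters
        ys.append(y); ms.append(m); y_lag = y
    return np.array(ys), np.array(ms)

for theta in (0.0, 0.9):
    y, m = simulate(theta)
    slope = np.cov(y, m)[0, 1] / np.var(m)
    print(f"rule theta = {theta}: estimated slope of y on m = {slope:.2f}")
# ~1.0 under theta=0 but ~0.5 under theta=0.9: the 'estimated' trade-off
# shifts as soon as the rule does, exactly as Lucas argued.
```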

 

Let me illustrate with an example. A goalkeeper wanting to decide a policy about whether to dive to his right or left against a particular forward might think statistics, or an econometrician, may help. First he might pay someone to examine the success rate of a particular forward and work out whether they tended to score when they shot to the goalkeeper's left or right. He can then act armed with the knowledge that the forward scores when he shoots to his left. Naturally, the next time he faces a penalty from this forward, he dives to his left and saves it. And that is the Tinbergen-Theil argument. But Lucas says something quite different. If the forward also has access to the published information about his success rate and works out that the goalkeeper he is about to take a penalty against will dive to his left, he will change his behaviour. He will re-optimise, shoot to the goalkeeper's right, score and knock England out of the World Cup. And the optimality of the 'estimated' penalty-saving reaction function disappears in a Lucasian puff of logic.

One way to square this circle is to find a response, or reaction function, that does not depend on estimated parameters that are themselves a function of previous rules. But locating a story in which the behaviour of the economy - in terms of price and wage setting behaviour or in the determination of financial contracts - can be thought to be independent of the monetary rule seems to me to be quite hard. Another option may be to avoid any form of feedback rule and just adopt a non-discretionary or even random response. Our goalkeeper will do better by randomising his responses so that the penalty taker just does not know where he will dive (a sketch of the randomisation follows below). So we are left with a number of important principles. First, remember that changing the growth rate of money will not affect growth and productivity in any significant manner for the better; secondly, perhaps because of private and public insurance, households do not seem to face extremely high levels of aggregate risk on average; and thirdly, beware of fine-tuning responses in an economy that will learn about what you do and may even nullify the impact.
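
The goalkeeper's escape can be made precise as the mixed strategy equilibrium of a zero-sum game; the save probabilities below are invented purely for illustration.

```python
# Probability the keeper saves, indexed by (keeper's dive, striker's side).
# Numbers are hypothetical.
p_save = {("L", "L"): 0.6, ("L", "R"): 0.1,
          ("R", "L"): 0.2, ("R", "R"): 0.5}
score = {ks: 1 - p for ks, p in p_save.items()}   # striker's scoring probability

# The keeper dives left with probability q chosen so the striker scores with
# the same probability whichever side he shoots - leaving nothing to exploit:
#   q*score[L,L] + (1-q)*score[R,L] = q*score[L,R] + (1-q)*score[R,R]
q = (score[("R", "R")] - score[("R", "L")]) / (
    score[("L", "L")] - score[("R", "L")] - score[("L", "R")] + score[("R", "R")])
print(f"keeper dives left with probability {q:.3f}")   # 0.375 for these numbers
```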

 

4.  Rational Expectations and Policy-Making

 

If elements of the Lucas critique sound a bit like portrayals of efficient markets - in which all information is publicly traded, making it quite hard for agents to make excess profits or for people like central bankers to influence behaviour - that is because they are. The so-called rational expectations revolution was taking a firm grip on economics and monetary policy making by this time. And perhaps the clearest point would be the one raised by Sargent and Wallace (1976), called the policy ineffectiveness proposition, which states that policy that relies on any feedback from observed data cannot affect the plans of people who have already used that observed data to work out their plans. If policy makers and households and firms both have access to the same information set and have the freedom to act optimally on that information, how can policy makers 'fool' households or firms into working more or fewer hours or producing more or fewer goods?

 

Let us treat the information set for policy makers and the private sector as public and known to both. If the policy maker decides to respond systematically to lower output by increasing the rate of issuance of money in order to offset some of the fall in output, we might reasonably expect agents to forecast this increase in money supply growth, and consequently a higher inflation rate, and to start asking for higher money wages to compensate themselves for the fall in the value of money. Should prices and wages jump in proportion to the change in the money supply, nothing real will change and the policy gambit will have been finessed. The logic can be written down in a couple of lines, as below.
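
A compact statement of the proposition, using a Lucas-style supply curve; the notation is mine rather than the lecture's.

```latex
% Output responds only to money surprises; the rule feeds back on the data.
\[
y_t = \bar{y} + b\,\big(m_t - \mathbb{E}_{t-1} m_t\big), \qquad
m_t = g(y_{t-1}) + u_t .
\]
% If agents know the feedback rule g, then E_{t-1} m_t = g(y_{t-1}), so
\[
y_t = \bar{y} + b\,u_t ,
\]
% and output depends only on the unforecastable surprise u_t: no choice of
% systematic rule g has any real effect.
```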

 

Without jumping to the final punchline, one response by the policy maker would be to enter a world of mystique, opacity and near secrecy. They may decide that their own analysis and models need not become part of the private sector's information set and that it might be best not to explain very clearly how data gets turned into policy responses. This mystique may leave the private sector without much ability to forecast when policy might respond to events, and so we have a regime of policy surprises. One possible response of the policy world to the natural implications of a full information world might be to try to make that world a little less of a full information world and so hold on to some residual power to effect policy by stealth. I do not think that policy by mystique is a credible way ahead, because it relies on the view that agents cannot learn, that central banks do not have a duty to explain their actions in advance, and that the kind of equilibrium we end up in when the private sector and the central bank play a game of bluff and counter-bluff is preferable to one in which they can, through persuasion and understanding, agree on common objectives and move jointly towards them.

 

The next set of principles flowed from the work of Kydland and Prescott (1977) and Barro and Gordon (1983). The former pair launched a frontal salvo on the rationale behind any attempt to apply elements of control theory to rational, forward-looking agents. They argued that, unlike the natural world in which the game is against agents who do not learn or forecast your responses, the game of economic policy will force policy makers to abandon seemingly optimal plans. The argument is rather subtle but hugely important nevertheless. Imagine a policy maker who wishes to maximise output growth and decides to attract capital into their country in order to increase productive capacity. The policy maker may announce a plan to set capital taxes to zero, and this policy will produce the required large flow of capital. But as the stock of capital increases, the policy maker may have revenues to raise and roads to build, for example, and may be faced with an increasingly tempting incentive to change policy and tax the now burgeoning capital stock. In fact, when the returns from changing policy outweigh those from maintaining the status quo, e.g. in the run-up to an election, the policy maker will snap and raise capital taxes. Now here is the rub. The capital investors will be able to work out today that a future trigger point will force a change in the ability of a government to stick to its plans, and so will work on the basis that taxes will be raised, whatever the government says. As a result, and without a form of credible commitment technology to low or zero taxes, capital will not flow into the country. The government can say whatever it wants to mobile capitalists but it will not succeed without an ability to inspire credibility in its plans (a toy numerical version is sketched below).
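
A toy numerical version of the capital tax story, with payoffs invented for illustration: capital inflows depend on the tax investors expect, and once the capital is installed the temptation to tax it appears.

```python
# Hypothetical numbers: inflows fall one-for-one with the expected tax rate.
def capital(tax_expected):
    return 100 * (1 - tax_expected)           # units of capital that flow in

def revenue(tax, k):
    return tax * k                            # revenue from taxing installed capital

k = capital(tax_expected=0.0)                 # promise believed: 100 units arrive
print(revenue(0.0, k), "->", revenue(0.5, k)) # 0 -> 50: the ex post temptation
k = capital(tax_expected=0.5)                 # investors foresee the switch...
print(revenue(0.5, k))                        # 25: half the capital, half the prize
# Without a commitment device the believed-promise outcome is unattainable:
# investors plan around the trigger point, just as the lecture describes.
```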

 

Barro and Gordon then applied this idea to a simple monetary policy game in which the policy maker has an objective for low inflation and stable output growth but, rather like Oliver Twist, wants some more output growth if at all possible. If the policy maker announces a target for low and stable inflation, as a wage bargainer I have two alternative responses. Do I believe the policy maker - granting him or her credibility - and hold my wage growth in line with the target? Or do I choose to disbelieve the policy maker because I think they may gain from reneging on their plan or target? My rational response will not only look at the benefits to me from my two choices but also at the benefits to the policy maker from playing within the rules or acting with discretion.

 

The policy maker can also do the same exercise. They can hold policy firm to meet the target, or re-optimise against the private sector's trust and renege. The box shows the four possible outcomes. There are two equilibria in which both sides play the same cards. Under credibility, the private sector believes in the target and acts accordingly, with nominal wages set in line with the full employment level of real wages. In this case the central bank hits the target as well and the world is a happy place. Now consider the incentives: if the private sector bargains for higher nominal wages, these will also mean higher real wages for those in work, although fewer workers will be hired and output will be lower. Alternatively, the central bank could engineer more output by creating a surprise inflation that would lower real wages and increase the quantity of employment and hence output. If the private sector bargains for high wage rates, then the only way we have full employment is for the central bank to accommodate those claims with higher inflation. And if the central bank is observed to have an incentive to induce a surprise inflation, then that is precisely what the private sector will expect. The low inflation equilibrium is thus very unstable and will tend to the high inflation state, because both parties have an incentive to plan around the high inflation state and create what was termed an inflation bias (computed in the sketch below). Again, in the absence of a credible commitment technology, the economy cannot achieve a lasting degree of price stability.
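
The size of the bias can be computed in a minimal textbook version of the Barro-Gordon game; the quadratic loss, the surprise-inflation supply curve and the parameter values are standard illustrative assumptions of mine, not the lecture's.

```python
# Loss = pi**2 + lam*((y - y_bar) - k)**2 with supply y - y_bar = pi - pi_e:
# the bank would like output k points above its natural level (k > 0).
lam, k = 0.5, 2.0          # illustrative preference weight and output ambition

def best_response(pi_e):
    # argmin over pi of pi**2 + lam*((pi - pi_e) - k)**2, taking pi_e as given
    return lam * (pi_e + k) / (1 + lam)

pi_e = 0.0
for _ in range(100):       # iterate to the rational expectations fixed point
    pi_e = best_response(pi_e)
print(pi_e)                # converges to lam*k = 1.0: the inflation bias
# Under discretion inflation settles at lam*k with no output gain at all;
# under a credible commitment it would be zero.
```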

The rational expectations revolution had thus led to a fundamental change in the requirements for a policy maker. We needed to write down an economic model in which agents could learn or forecast the policy maker's responses. We needed to spell out the policy maker's objectives and explain the reaction function in terms of its arguments and its instruments. And we needed a target, or fixed point, about which all agents would co-ordinate because there were no obvious benefits from doing otherwise.

 

5.  The Search for a Nominal Anchor

 

The story of the UK's search for a nominal anchor is probably worthy of a book in its own right, as it was a rather tortuous process. The end of Bretton Woods left monetary policy makers with a choice over domestic nominal targets. The starting point was some form of caps on both wage and price increases. Such aggregate targets tend to prevent relative price adjustment and, in the wake of a dislocated monetary anchor, the falls in real wages meant that there was a large incentive for nominal wage contracts to be re-priced. A continuing deterioration of the fiscal position made the domestic achievement of macroeconomic stability almost impossible, and the IMF were called in to provide some form of credible commitment technology. About this time, the first set of money targets was adopted. And the medium term financial strategy of the 1980s involved an evolving sequence of choices and debates about the correct quantity of money to target.

 

It turned out that targeting the money stock, in some narrow or ever-wider form, was not a particularly good idea in the presence of ferocious financial innovation and reform. Furthermore, money targeting was always going to be a pretty good example of the Lucas critique, or what we termed Goodhart's Law over here: as soon as you try to target an intermediate quantity or price in order to achieve a particular final objective, the link between intermediate and final objective will be irrevocably broken. The observation tends to carry over to all kinds of targets in the public sector, for schools, hospitals and even the assessment of University Lecturers. The evolving experiment with a domestic nominal anchor ended when the late-1980s boom started, a boom exacerbated by the shadowing of the Deutsche Mark and brought to an abrupt end by the formal adoption of another exchange rate peg in the ERM in October 1990. The credible commitment technology could not, it seems, be found at home, so it was necessary to rent one (shadowing) and then buy one (joining the ERM) from abroad, and the very best was bought from the Bundesbank. In fact even the Bundesbank could not be relied upon to meet its intermediate target, as the impact of re-unification threw the monetary numbers off track. But here is the nice thing about credibility: it should be able to withstand temporary shocks.

ERM exit in September 1992 meant that a domestic nominal anchor had to be found. And we adopted, almost overnight - well, over three weeks - flexible inflation targets in October 1992. Under Inflation Targeting I, policy rates were set by the Chancellor. After the adoption of operational independence in May 1997, Inflation Targeting II, policy rates were set by the MPC to a target set by the Executive. We have already examined the outcomes under this regime and it would seem fair to think that we have found a stable form of anchor. The reforms after the crisis to ensure that monetary, fiscal and financial policy co-ordinate may yet lead to a more complex regime. But for the moment monetary policy is still judged by the likely attainment of low and stable inflation, both currently and in expectation.

 

6.  'By Jove, I think she's got it'

 

Macroeconomic relationships that were not derived from first principles, but mostly posited from observation, introspection or estimation, were shunned by the new generation of macroeconomists, who demanded models with microeconomic foundations. This meant that models had to be derived from basic principles: the maximisation of household utility subject to per-period income and borrowing constraints; firm-level behaviour that sought to maximise profits subject to the costs of production and the posted prices of output; and, where specified, monetary and fiscal policies as sequences of interest rate choices and budget deficits and surpluses that did not violate the government's own budget constraint. As I am concerned with telling a story about monetary policy, we shall concentrate on the application of this new way of thinking about macroeconomics to the monetary policy problem.

 

The point of departure for a simple macroeconomic model suitable for monetary policy analysis became the New Keynesian (NK) framework (see McCallum, 2002), which is essentially an aggregate model with dominant supply side dynamics but where sticky prices mean that output may deviate temporarily from its flex-price long run level. The possibility of temporary deviations in output from its flex-price level creates a role for the monetary policy maker. In brief, the basic NK story is that the capacity level of output is set by a production function with the usual arguments in labour and capital, together with an accumulation of efficiency shocks (the so-called Solow residuals; see, for example, Solow, 1987), and short run output is determined by a monopolistically competitive supply side able only to change its prices from time to time, in what is called Calvo, or time-dependent, price setting.

 

The NK structure means that the full capacity level of output in this economy lies at a point behind the perfectly competitive frontier, which in principle provides an incentive to push the economy above its full capacity level. Secondly, with prices adjusting only gradually to an optimal mark-up over evolving marginal costs, short run output can deviate from this full capacity level. Following any shock, prices can be re-set only by the fraction of firms that receive an exogenous (Calvo) signal to re-price in that period. All other firms are faced with having to accept a sub-optimal price for their output for at least one period, and the overall price level, which is a linear combination of all firms' prices, is also sub-optimal, so that there are both distributional and direct output consequences from sticky prices.

 

Inflation is driven both by the difference between capacity and the short run aggregate level of production chosen by all firms, and by expected inflation. And so inflation, at least in its temporary deviations from target, is not a monetary phenomenon in this model but really an output gap, or mark-up, phenomenon, which is itself controlled by interest rate choices. Nevertheless, to this basic model we can also consider appending a simple model of money demand (for which supply by the monetary policy maker is implicitly perfectly elastic), where we assume that households need to hold money balances to meet a given level of planned nominal expenditures. The role of the policy maker is to set interest rates so that output stabilises at the capacity level, that is, the so-called output gap is closed, at which point inflation is also stabilised.

 

This model deals with a number of issues highlighted in this book so far. Firms and households are rational in the sense that they set prices and plans rationally and in a forward looking manner. But it is the combination of monopolistic competition, which gives firms some pricing power over their mark-ups, and sticky prices that means that output can deviate from its long run level in a manner resembling a business cycle. In this model, it is possible to write the output gap as a function of the expected output gap and the real interest rate, and inflation as a function of expected inflation and the output gap (the two equations are written out below). In fact we can then write inflation as a function of the expected stream of output gaps, or indeed the stream of policy rates themselves. And so forward-looking agents can use the path of expected interest rates to guide their views about future inflation back to target.
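
In their standard linearised form, and closing the model with an interest rate rule, the equations just described read as follows; the notation is the textbook one (e.g. Clarida et al., 1999) rather than anything specific to the lecture.

```latex
\begin{align*}
x_t   &= \mathbb{E}_t x_{t+1} - \sigma^{-1}\big(i_t - \mathbb{E}_t \pi_{t+1} - r^{n}_t\big)
      && \text{(IS curve: the output gap)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t
      && \text{(NK Phillips curve)} \\
i_t   &= r^{n}_t + \pi^{*} + \phi_{\pi}\big(\pi_t - \pi^{*}\big) + \phi_{x}\, x_t,
      \quad \phi_{\pi} > 1
      && \text{(policy rule)}
\end{align*}
% Iterating the Phillips curve forward gives
% pi_t = kappa * sum_{j>=0} beta^j E_t x_{t+j}:
% inflation is the discounted stream of expected output gaps, which is the
% sense in which the expected path of policy rates steers inflation.
```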

 

Let us try to show how this process might work graphically. The basic New Keynesian policymaker had a picture at the back of their mind when thinking about monetary policy. There is an inflation target where the thick dotted line meets the horizontal axis. When we add back in the natural rate of interest, or the neutral real interest rate, we then arrive at the steady state policy rate, where the thick dotted line meets the y-axis. The line labelled Fisher equation simply adds expected inflation, which for the purposes of simplicity we set equal to the actual inflation rate plus some random error, onto the natural rate of interest to give a nominal return. All points along the line labelled Fisher equation imply the same real, or neutral real, interest rate and so do not impact on aggregate demand, which is a negative function of deviations in the policy rate from this real rate. Off the chart, perturbations in aggregate demand relative to supply lead to inflation. The policy function is therefore steeper than the Fisher equation line: it brings aggregate demand down when there is upward inflationary pressure, and vice versa.

 

In this rather mechanistic setting we can understand the impact of changes to, for example, the natural rate, which implies a shift up in the intercept of the Fisher equation and also a shift upwards and to the left for the policy function at a given inflation rate. We can also quickly understand why mistaken beliefs about the natural rate, which cannot be observed, can lead rather quickly to problems: if the natural rate rises and policy does not recognise it, policy rates may be left too low for too long, excessively stimulating demand (a two-line illustration follows below). Whilst the possibility of uncertainty about the natural rate was clear, we were unable to say very much about how to identify changes in it, or even to think in a constructive manner about the correct set of market interest rates that might allow us to understand the neutral rate.
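
A two-line illustration of the mis-measured natural rate problem, with invented numbers: the rule keeps using an out-of-date estimate of the neutral rate after the true rate has risen.

```python
pi_star, phi = 2.0, 1.5            # illustrative target and inflation response
r_true, r_assumed = 3.0, 2.0       # neutral rate has risen; the bank has not noticed

def policy_rate(inflation, r_star):
    return r_star + pi_star + phi * (inflation - pi_star)

i = policy_rate(inflation=2.0, r_star=r_assumed)   # inflation on target => i = 4.0
print(i - 2.0 - r_true)   # real rate minus true neutral = -1.0:
# policy is a full point too loose, unintentionally stimulating demand.
```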

Still, setting aside the two great real-time unmeasurables, the output gap and the neutral rate of interest, it seemed that inflation, and aggregate demand relative to supply, could be stabilised if policy was sufficiently forward-looking. At point A, though, things change. Here nominal rates cannot fall with inflation, and ever-larger negative output gaps, driving down inflation, will thereby increase real rates and so in principle set up a destabilising feedback loop (a stylised version is sketched below), unless something else can be found to ease monetary and financial conditions. These instruments turned out to be, first, a large depreciation in the exchange rate and, with more novelty, quantitative easing.
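
A stylised version of that loop, with illustrative parameters of my own; this is arithmetic to show the sign of the feedback, not a calibrated model.

```python
# With the nominal rate pinned at zero, every fall in inflation RAISES the
# real rate, widens the (negative) output gap and drags inflation down further.
kappa, sigma_inv, r_n = 0.3, 0.5, -1.0   # Phillips slope, rate sensitivity, neutral rate
inflation = 0.0
for t in range(5):
    real_rate = 0.0 - inflation                # nominal rate stuck at the zero bound
    gap = -sigma_inv * (real_rate - r_n)       # demand falls as real rate exceeds neutral
    inflation += kappa * gap                   # weaker demand lowers inflation again
    print(t, round(inflation, 2), round(gap, 2))
# The gap and deflation worsen on each pass: the destabilising loop at point A.
```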

 

7.  From Art to Science and Back Again.

 

So the simple linearised New Keynesian model both succeeded and failed. It succeeded because the policy reaction function simply has to react by more than any given change in inflation, so that the real interest rate acts to bear down on any output gap and drive inflation back to its target. But it failed because this linear model is subject to a number of well-known control problems, which were exposed in the crisis: (i) changes in the natural rate (the intercept), which cannot be easily measured, will lead to monetary impulses from nominal rates that imply an incorrect real rate of interest; (ii) measuring the output gap, and then forecasting the change in inflation that it implies, is very difficult in real time; (iii) with no set of asset prices in the model, we are left hoping that the use of a single policy rate leads to clearing in credit markets that does not leave the economy in a fragile state; and (iv) finally, if rates were to fall to zero and deflationary pressures continued to escalate, real rates would rise and the economy would be in serious danger of proving impossible to stabilise.

 

Actually the failure over the financial crisis was simply that financial risks had built up in a manner that threatened the integrity of the whole system - what we might call a very large shock. Throughout the crisis, this form of model helped us think about the appropriate response. One set of responses involves understanding that forward looking agents need to be told what the end point is. And one possible exit from the crisis might be a boom, so we might be able to escape if the authorities committed to a boom by promising to hold interest rates below the neutral level for an extended period - in general such a commitment has not been forthcoming. Cautious central bankers have relied on rough estimates of the likely responses of the monetary transmission mechanism to asset purchases to try to generate impacts analogous to those that might have occurred if policy rates could have gone substantially negative. And without a unique model of cause and effect for asset prices and financial factors in the transmission mechanism, it is just here that we can to some degree question the science and recognise that balancing so many alternative choices may require the canvas of art.

 

JSC

March 2015

 

References

 

Bagehot, W., ([1873] 1898). Lombard Street: A Description of the Money Market, New York: Charles Scribner's Sons.

Barro, R. J. and D. B. Gordon, (1983). Rules, Discretion and Reputation in a Model of Monetary Policy, Journal of Monetary Economics, 12(1), pp.101-121.

Bernanke, B. S., (2004). The Great Moderation, Speech to the Eastern Economic Association, Washington, DC, 20 February 2004, http://www.federalreserve.gov/boarddocs/speeches/2004/20040220/

Chadha, J. S., L. Corrado and S. Holly, (2014). A Note on Money and the Conduct of Monetary Policy, Macroeconomic Dynamics, 18(8), pp.1854-1883.

Clarida, R. and M. Gertler, (1997). How the Bundesbank Conducts Monetary Policy, in C. D. Romer and D. H. Romer (eds), Reducing Inflation: Motivation and Strategy, University of Chicago Press.

Clarida, R., M. Gertler and S. Gilchrist, (1999). The Science of Monetary Policy: A New Keynesian Perspective, Journal of Economic Literature, 37, pp.1661-1707.

Fischer, S., (1990). Rules versus Discretion in Monetary Policy, in B. M. Friedman and F. H. Hahn (eds), Handbook of Monetary Economics, Volume 2, chapter 21, pp.1155-1184.

Hawtrey, R. G., (1934). The Art of Central Banking, London: Longmans.

Hume, D., ([1752] 1970). Of Money, in Writings on Economics, Eugene Rotwein (ed.), Madison: University of Wisconsin Press.

King, M. A., (2010). Banking - from Bagehot to Basel, and back again, Speech at the Second Bagehot Lecture, Buttonwood Gathering, New York, 25 October.

Kydland, F. E. and E. C. Prescott, (1977). Rules Rather than Discretion: The Inconsistency of Optimal Plans, Journal of Political Economy, 85(3), pp.473-492.

Lucas, R. E., (1976). Econometric Policy Evaluation: A Critique, Carnegie-Rochester Conference Series on Public Policy, 1, pp.19-46.

Lucas, R. E., (1987). Models of Business Cycles, New York: Basil Blackwell.

McCallum, B. T., (2002). Recent Developments in Monetary Policy Analysis: The Roles of Theory and Evidence, FRB Richmond Economic Review, 88(1), pp.67-96.

Rotemberg, J. and M. Woodford, (1997). An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy, NBER Macroeconomics Annual, 12, pp.297-346.

Sargent, T. J. and N. Wallace, (1976). Rational Expectations and the Theory of Economic Policy, Journal of Monetary Economics, 2(2), pp.169-183.

Sayers, R. S., (1957). Central Banking after Bagehot, London: Oxford University Press.

Solow, R. M., (1987). Growth Theory and After, Lecture in memory of Alfred Nobel, 8 December 1987, http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/1987/solow-lecture.html

Tobin, J., (1969). A General Equilibrium Approach to Monetary Theory, Journal of Money, Credit and Banking, 1(1), pp.15-29.

 

 

 

                                                                                                                                    © Professor Jagjit Chadha, 2015

 

       

[1] Microeconomic (or Harberger) triangles allow the calculation of welfare losses for financial matters, and Okun gaps (the deviation of output from potential) for monetary policy. It was usually felt that the latter would outweigh the former.

[2]See Fischer, 1990.

[3]Clarida et al (1999).

[4] Although it is quite a simple matter to find long run non-neutralities in many standard models, it is generally found that long run non-neutralities should not be exploited, as there is no clear enhancement in the welfare of the representative household. Naturally though, perturbations in the money market will lead to temporary changes in the market clearing level of (overnight or short-term) policy rates and, because of various forms of informational uncertainty or indeed structural rigidity, will lead to temporary deviations in the expected real rate from its natural level and thus act on aggregate demand. The key question, though, is the extent to which shocks emanating from the money market can be stabilised by an interest rate rule, or indeed whether an additional tool may be required; see Chadha et al (2014) on this point.
