Singapore Institute of Statistics

Occasional Article, January 2009

 

Economic Forecasting – De-Mystifying the Art of Modern Crystallomancy

The advent of 2009 has been greeted with words of gloom and doom. The effects of the collapse of major financial institutions in the US have left the world in deep economic despair. Attempts by policymakers around the world to salvage the world economy seem to have met with little success, at least in the short term. Alas, if only humans could foretell the future, mankind would have been spared the current economic malaise. Indeed, this article will discuss an endeavour that seems to have failed most of the time - economic forecasting.

 

The reason for choosing to address this topic is twofold. First, driven by impatience and risk-aversion, experts from all cultures in the history of mankind have used various methods to predict the future, ranging from tarot reading and crystallomancy to five-element divination. Hence, forecasting future economic performance seems especially relevant in this period of uncertainty and crisis. Second, the general public tends to regard forecasting as a black box; some probably take the numbers given by economic forecasters to be god-given, and any deviation from the predicted value becomes grounds to accuse the professional of being incompetent and ungodly. Hence, we will try to strip away the veil and look at the nuts and bolts behind economic forecasting. In doing so, we will hopefully eradicate any misconceptions that the general public may have about forecasters and their profession.

 

There are many methods of forecasting, but we shall look at only the more common and popular ones. Given that economic forecasting involves looking into the future, and hence involves a time dimension, the appropriate data set to look at is time series data: economic data that measure the same economic variables at regular intervals as they evolve over time. The forecaster constructs a mathematical equation that specifies how the variable of interest – commonly called the dependent variable – depends on other economic variables that are thought to influence it, i.e., the so-called independent (or explanatory) variables. This is what is commonly known as regression analysis, and when such techniques are used to study economic events, the discipline is classified as econometrics [1]. It follows that if a variable y is the dependent variable while x is the independent variable, then we say that y is regressed on x. Since the economy is so complex and any individual variable can be affected by thousands of factors or more, it is not possible to specify all influencing factors in the equation. Hence, any mathematical model has to include a disturbance or error term to capture all other factors that are not explicitly included in the equation.
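To make the idea concrete, here is a minimal Python sketch (using entirely made-up numbers and the numpy library) of regressing a dependent variable y on a single independent variable x; the residuals play the role of the error term described above.

```python
import numpy as np

# Hypothetical, made-up figures: an explanatory variable x and a dependent variable y.
x = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 5.0, 5.2, 5.1])
y = np.array([9.8, 10.1, 9.7, 10.6, 11.0, 11.3, 11.7, 11.6])

# Regress y on x:  y = a + b*x + e, where e is the disturbance (error) term
# capturing every influence not explicitly included in the equation.
X = np.column_stack([np.ones_like(x), x])       # add an intercept column
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = a + b * x
residuals = y - fitted                          # estimates of the error term

print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
print("residuals:", np.round(residuals, 2))
```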

 

Generally, the objective of the users of forecast information will determine the approach used by forecasters. If the users are mainly concerned about the future values of economic variables such as GDP, inflation or interest rates, then methods based on some form of moving-average or smoothing technique (this refers to removing short-term fluctuations in the data to highlight the trend and cycles) will be used, and the data is then extrapolated to obtain future values based on its trend. A trend in a time series refers to the tendency of a variable to move in a particular direction that persists over the long run; hence, by extracting the trend, future values of the data can be estimated. A moving average or rolling average is a technique that constructs an average of a subset of the full data, and each number in the subset can be given an equal or a different statistical weight (such as allocating more weight to recent data). For instance, if the full data set consists of 10 data points and the subset size is four, the first value of the un-weighted moving average would be the average of data points 1 through 4. The next value would be the simple average of data points 2 through 5, and the process continues until the final value, which would be the simple average of data points 7 through 10. However, such a simple method of extrapolating future values is often less meaningful and subject to more uncertainty. Currently, most professional forecasters make use of formal statistical techniques to produce more accurate forecasts.
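As an illustration, the following Python sketch (again with made-up numbers) computes both an un-weighted and a weighted moving average over a window of four data points.

```python
import numpy as np

# Hypothetical series of 10 data points, e.g. quarterly sales figures.
data = np.array([12.0, 13.5, 11.8, 14.2, 15.0, 14.6, 16.1, 15.8, 17.0, 16.4])

window = 4  # each moving-average value is the simple average of 4 consecutive points

# Un-weighted (simple) moving average: the first value averages points 1-4,
# the next averages points 2-5, and so on until points 7-10.
simple_ma = np.convolve(data, np.ones(window) / window, mode="valid")

# A weighted version that places more weight on the most recent observations.
weights = np.array([0.1, 0.2, 0.3, 0.4])   # must sum to 1; heaviest weight on the latest point
weighted_ma = np.array([np.dot(data[i:i + window], weights)
                        for i in range(len(data) - window + 1)])

print("simple moving average  :", np.round(simple_ma, 2))
print("weighted moving average:", np.round(weighted_ma, 2))
```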

 

 

COMMON FORMAL STATISTICAL METHODS FOR FORECASTING

 

Types of Variations

 

 

Basically, there are four basic types of variation in time series data that are of concern to forecasters. The first type is the trend, which has been mentioned earlier. The next kind of variation is whether the data exhibits any seasonal pattern. For example, due to the year-end festive season such as Christmas and New Year, consumer spending tends to be higher, which results in higher GDP in the last quarter of the year. The third type of variation is the cyclical component, whereby certain patterns in the series tend to repeat over time. In economic time series, the common cycles of concern are business cycles, consisting of recurring patterns of ups and downs in business and economic activity. They are different from seasonal variation because the causes are different and economic cycles tend to extend over longer time horizons. The last type of variation is irregular variation, which tends to arise from unexpected events such as wars, elections, bank failures, stock market crashes or natural disasters such as earthquakes, floods and tsunamis. This type of variation is the most challenging for economic forecasters because such events occur randomly and there is large uncertainty about their recurrence.
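To see how these components fit together, here is a toy Python sketch (with invented numbers) that builds a quarterly series as the sum of a trend, a seasonal pattern, a slower cycle and an irregular component.

```python
import numpy as np

rng = np.random.default_rng(0)
quarters = np.arange(16)                         # four years of quarterly data

trend = 100 + 1.5 * quarters                     # long-run upward movement
seasonal = np.tile([-2.0, -1.0, 0.5, 2.5], 4)    # fourth quarter boosted by year-end spending
cycle = 3.0 * np.sin(2 * np.pi * quarters / 12)  # slower business-cycle swings
irregular = rng.normal(0, 1.0, size=16)          # random, unpredictable shocks

series = trend + seasonal + cycle + irregular    # the observed time series
print(np.round(series, 1))
```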

 

The ARIMA method

 

A common method used by professionals for both forecasting and seasonal adjustment is the Box-Jenkins approach, which applies the ARIMA (Autoregressive Integrated Moving Average) method and is commonly classified under the discipline of econometrics. Although the term ARIMA may sound rather daunting to some, it is actually not a difficult concept to understand. Basically, the dependent variable is regressed on its past (i.e., its lagged) values and on a moving average of the error term in the equation. As mentioned earlier, an error term is always included in any mathematical equation that is used to model real-life phenomena, and it is usually assumed to capture all other factors that are not explicitly included as explanatory variables. If the equation includes only the previous period's value, it is called autoregressive of order one (or an AR(1) process). The same naming convention applies to the moving-average term.
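As a simple illustration, the following Python sketch (with arbitrary, illustrative coefficient values) simulates a series in which each value depends on its own lagged value (the autoregressive part) and on the current and lagged errors (the moving-average part).

```python
import numpy as np

rng = np.random.default_rng(1)

phi, theta = 0.7, 0.4          # AR(1) and MA(1) coefficients (illustrative values)
n = 200
e = rng.normal(0, 1, n)        # the error (disturbance) term

# ARMA(1, 1): today's value depends on yesterday's value (the AR part)
# and on the current and previous errors (the MA part).
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

print(np.round(y[:8], 2))
```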

 

In an economic time series model, one of the major concerns is whether the data is stationary or non-stationary, which has important implications for the model. A time series is said to be stationary when its statistical properties, such as the mean and variance, are constant over time. If the time series is non-stationary, forecasting becomes difficult, so one way to transform the data to make it stationary is to subtract the previous period's value from the current value, a process known as differencing. Hence, if a non-stationary time series becomes stationary after differencing once, we say that the data is integrated of order one; if it requires differencing twice to achieve stationarity, it is integrated of order two. This is why the acronym "ARIMA" includes the "I", which stands for "integrated" and indicates the order of integration of the data, while AR refers to the autoregressive process of the dependent variable and MA stands for the moving-average process of the error term. The ARIMA[2] technique is used by the US Census Bureau for seasonal adjustment and forecasting, and is also widely used by other statistical agencies around the world.
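Putting the pieces together, here is a hedged Python sketch with made-up numbers: it differences a trending series once and then fits an ARIMA(1, 1, 1) model, assuming the widely used statsmodels library is available. The Box-Jenkins procedures in production systems such as the Census Bureau's are considerably more elaborate.

```python
import numpy as np
# The statsmodels library is assumed to be installed; its ARIMA class
# implements the Box-Jenkins modelling approach described in the text.
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical non-stationary series with a clear upward trend.
y = np.array([100., 103., 107., 112., 118., 125., 133., 142.,
              150., 161., 170., 182., 193., 205., 218., 230.])

# Differencing: subtract the previous period's value from the current value.
d1 = np.diff(y)
print("differenced once:", d1)   # if this looks stationary, y is integrated of order one

# Fit an ARIMA(1, 1, 1) model: AR of order 1, differenced once (I = 1), MA of order 1,
# then produce point forecasts for the next four periods.
model = ARIMA(y, order=(1, 1, 1)).fit()
print("forecasts:", model.forecast(steps=4))
```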

 


Causal Macroeconometric Models

 

However, at times the users (usually policy makers) may need to understand the quantitative contribution of various economic factors to one or more major economic variables of interest. In such cases, causal modeling that includes other independent variables, in addition to the lagged values of the dependent variable, will be appropriate. For example, if government spending increases by 2 billion dollars, what will be the value of GDP next year? Causal econometric models are used to answer such questions. Starting in the 1940s, many national governments began collecting national income and production data, which allowed economists to construct large-scale macroeconometric models for policy analysis and forecasting. Dutch economist Jan Tinbergen produced the first comprehensive model for the Netherlands. In the US, another economist, Lawrence Klein, built the first global econometric model at the Wharton School of the University of Pennsylvania. Both economists won the Nobel Prize in Economic Sciences for their work in economic modeling. These large-scale macroeconometric models are built on a system of structural simultaneous equations, and their specifications are based on economic theory. The models can be simulated on a computer to produce forecasts that assist policy makers in making policy decisions.
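A drastically simplified, single-equation illustration of the idea (real macroeconometric models involve systems of many simultaneous equations) might look like the following Python sketch, with entirely hypothetical figures.

```python
import numpy as np

# Hypothetical annual data (in billions): government spending G and GDP Y.
G = np.array([50., 52., 55., 54., 58., 60., 63., 65.])
Y = np.array([300., 308., 320., 316., 332., 340., 352., 360.])

# Estimate Y = a + b*G + e; b is a crude measure of how GDP responds to spending.
X = np.column_stack([np.ones_like(G), G])
(a, b), *_ = np.linalg.lstsq(X, Y, rcond=None)

# Policy question: if spending next year is 2 billion higher than this year,
# what does the fitted equation predict for GDP?
G_next = G[-1] + 2.0
print(f"response b = {b:.2f}, predicted GDP = {a + b * G_next:.1f} billion")
```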

 

Structural Time Series Models

 

A structural time series model is formulated with the intention of highlighting the components of interest within a set of time series data. It is constructed directly in terms of distinct components such as the trend, seasonal and cyclical components, together with other economic variables that are supposed to affect the dependent variable based on economic theory. Compared with the classical regression model, a structural time series model has the additional flexibility of allowing these components to change over time, because they are modeled as functions of time. The estimated values of the components of interest can be computed using the Kalman filter, an algorithm originally used by control engineers to predict rocket movements. For forecasting purposes, the economist has to continuously monitor the economic system, and the estimates must be updated as new observations become available so as to produce more accurate forecasts. In order to do this, the forecaster has to filter the data to remove the random component, so that policy measures are based on the essential but unobserved movement of the economic variable of interest (e.g., GDP movements, exchange rate fluctuations, etc.).
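To give a flavour of how the Kalman filter works, here is a minimal Python sketch of the simplest structural model, the "local level" model, in which an unobserved level follows a random walk and is observed with noise. The noise variances and the data are assumptions chosen purely for illustration.

```python
import numpy as np

def local_level_filter(y, q=0.1, r=1.0):
    """Kalman filter for the local level model:
       y[t] = level[t] + observation noise,  level[t] = level[t-1] + level noise.
       q and r are the (assumed known) variances of the level and observation noise."""
    level, P = y[0], 1.0            # initial guesses for the level and its variance
    filtered = []
    for obs in y:
        # Prediction step: the level is expected to stay put, but its uncertainty grows.
        level_pred, P_pred = level, P + q
        # Update step: blend the prediction with the new observation.
        K = P_pred / (P_pred + r)               # Kalman gain
        level = level_pred + K * (obs - level_pred)
        P = (1 - K) * P_pred
        filtered.append(level)
    return np.array(filtered)

# Hypothetical noisy readings of an economic variable; the filter extracts the
# underlying (unobserved) level as each new observation arrives.
rng = np.random.default_rng(2)
y = 3.0 + np.cumsum(rng.normal(0, 0.2, 20)) + rng.normal(0, 0.8, 20)
print(np.round(local_level_filter(y), 2))
```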

 

INFORMAL JUDGEMENT

 

Although modern economic forecasting procedures use formal economic models to predict future economic events, good forecasters tend to apply some informal judgment when forming their forecasts. They do not blindly follow what formal models say about the future, but rather use the information produced by formal models in conjunction with their own experience and analytical skills to do their job. Hence, like most professional disciplines, economic forecasting is both an art and a science.

 

 

WHY ECONOMIC FORECASTS CAN FAIL…

 

Despite all the sophisticated computational techniques used by economic forecasters, there are times (in fact, quite often) when forecasts produced by the best economic forecasters deviate substantially from the actual outcome. One of the main reasons is that economic models are bad at capturing unexpected turning points, such as stochastic shocks arising from political crises, wars, natural disasters, bank runs, etc. These events usually arise without early warning signs; hence, most standard economic models are unable to capture relevant information during such idiosyncratic episodes. As Paul Samuelson quipped, "The stock market has forecast nine of the last five recessions." His remark clearly illustrates the difficulty of predicting future economic events.

 

 

POINT FORECAST, RANGE FORECAST AND THE RIVERS OF BLOOD

 

The economy is dynamic in nature, and each outcome is the product of millions of decisions made by people. Hence, it is very difficult to produce precise estimates of the future. As a corollary, forecasters may also produce at least two possible scenarios and give upper and lower bound forecasts, especially in times of high uncertainty when possible regime-switching events may disrupt usual economic activity.

 

Modern economic forecasters also tend to give a range of possible numbers, such as "GDP growth in the next quarter will range from 3 to 5 percent", rather than report a precise number. In addition, forecasting becomes even more difficult the further the forecast horizon extends into the future. A typical example is the inflation forecast regularly published by the Bank of England. The forecasts are presented as a set of intervals that depict the range of possible future inflation paths. The professional economic forecasters in the bank refer to this as the "fan chart". More evocatively, the news press has described the red shades spreading out as "rivers of blood". As mentioned in a report produced by the bank, "the Bank's inflation forecast has been published explicitly in the form of a probability distribution—presented in what is now known as 'the fan chart'. The aim of the fan chart has been to convey to the reader a more accurate representation of the Bank's subjective assessment of medium-term inflationary pressures, without suggesting a degree of precision that would be spurious." [3]
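The underlying idea can be sketched in a few lines of Python: simulate many possible future paths from a model (here a purely illustrative mean-reverting process with invented parameters, not the Bank of England's actual model) and report percentile bands at each horizon. The widening of the bands as the horizon lengthens is exactly what the fan chart visualises.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate many possible inflation paths from a simple mean-reverting model
# (purely illustrative numbers and assumptions).
n_paths, horizon = 5000, 8          # 8 quarters ahead
start, phi = 4.5, 0.8               # current inflation and its persistence
paths = np.empty((n_paths, horizon))
level = np.full(n_paths, start)
for t in range(horizon):
    level = 2.0 + phi * (level - 2.0) + rng.normal(0, 0.5, n_paths)
    paths[:, t] = level

# A fan chart reports percentile bands of these paths at each horizon;
# the bands widen as the horizon lengthens, reflecting growing uncertainty.
for p in (5, 25, 50, 75, 95):
    print(f"{p:>2}th percentile:", np.round(np.percentile(paths, p, axis=0), 1))
```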

 

An example of the "rivers of blood" is given below for November 2008, and it reflects the expert judgment of Bank of England senior staff and the Monetary Policy Committee. As shown, the economists in the bank expect inflation to fall considerably over the next two years, and the forecast uncertainty (as indicated by the "blood" red shades) increases considerably over longer periods of time. Readers who want to learn more about the fan chart may visit the Bank of England website.

 

November 2008 CPI Fan Chart

                   Source: Bank of England homepage

 

 

Conclusion

 

We hope that this article helps to clear some doubts among the general public about the nature of economic forecasting, and to foster an understanding and appreciation of the difficulty of predicting future economic events. It is impossible for forecasters to incorporate all information into their statistical models, especially when information is not symmetrically known to everyone in society. In sum, forecasters, as mortals, are simply not equipped to play the role of god. Nevertheless, quantitative models are still indispensable for making important decisions, since they generally give more robust estimates of what can reasonably happen under different contingencies.

 

The formal quantitative models illustrated in this article are just among the more common ones used for economic forecasting. Besides using econometric and statistical approaches, professional economic forecasters may also use other computationally intensive approaches, constructing hypothetical scenarios to help predict future values of economic variables. Common approaches include computable general equilibrium (CGE) and dynamic stochastic general equilibrium (DSGE) models. Given the technical nature of such approaches, we shall not illustrate them further here and leave interested readers to conduct their own research.

 

 

"Prediction is very difficult, especially if it's about the future."

--Niels Bohr, Nobel laureate in Physics

 


Endnotes

 

1 One of the common definitions of econometrics is the use of statistical and mathematical methods to study economic problems. This includes testing economic theories as well as forecasting. One salient feature that differentiates it from general regression analysis in statistics is that econometrics tends to focus on establishing evidence of causality rather than simply correlation.

 

2 There are currently two variants of the ARIMA method, known as X-11-ARIMA and X-12-ARIMA.

 

3 Bank of England Quarterly Bulletin (February 1998): The Inflation Report projections: understanding the fan chart

 

__________________________________

Article contributed by Yong Soo Keong.

 

Edited by Ho Yuen Ping.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Singapore Institute of Statistics.

Please do not quote without attribution.

 

©  2009

__________________________________


 

 


