Why Time Series Econometrics Has Always Confused Me...

Menzie Chinn says:

Here is a very long run (1867-2008) extension of... this post.

Figure 1: Log real U.S. GDP, 1867-2008, in billions of 2000 dollars. Source: GDP from Johnston and Williamson, and author's calculations.

The Elliott-Rothenberg-Stock (1996) DF-GLS test statistic (assuming a constant and linear trend, with lag length equal to 1, selected using the Schwarz Bayesian information criterion) is -3.3258. The 5% (1%) critical values are -2.988 (-3.5296). Hence, we reject the unit root null at the 5% marginal significance level.
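(Not Chinn's code, but for concreteness: a minimal numpy sketch of the DF-GLS procedure as I understand it from ERS (1996): GLS-detrend the series using the local-to-unity quasi-differencing parameter, then run a no-deterministics Dickey-Fuller regression on the detrended series. The simulated series, its parameters, and the lag length of zero are all made up for illustration.)

```python
import numpy as np

def dfgls_stat(y, cbar=-13.5):
    """DF-GLS test statistic, constant-and-linear-trend case (ERS 1996).

    GLS-detrend y using alpha = 1 + cbar/T, then run a Dickey-Fuller
    regression with no deterministics on the detrended series.
    Lag length 0 for simplicity.
    """
    T = len(y)
    alpha = 1.0 + cbar / T
    z = np.column_stack([np.ones(T), np.arange(1, T + 1)])  # constant, trend
    # quasi-difference y and z; the first observation is kept as-is
    yq = np.concatenate([[y[0]], y[1:] - alpha * y[:-1]])
    zq = np.vstack([z[0], z[1:] - alpha * z[:-1]])
    beta, *_ = np.linalg.lstsq(zq, yq, rcond=None)
    yd = y - z @ beta                      # GLS-detrended series
    # DF regression: Δyd(t) = γ·yd(t-1) + error, no constant
    dy, ylag = np.diff(yd), yd[:-1]
    gamma = (ylag @ dy) / (ylag @ ylag)
    resid = dy - gamma * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return gamma / se                      # t-statistic on γ

rng = np.random.default_rng(0)
# an illustrative, clearly trend-stationary AR(1) around a linear trend
T = 500
e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + e[t]
y = 1.0 + 0.02 * np.arange(T) + x
print(round(dfgls_stat(y), 2))  # strongly negative: reject the unit root null
```

For a stationary series like this one, the statistic falls far below the -2.988 critical value; a series with a root at or near one pushes the statistic toward zero.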

I never know what to do with statements like this...

Suppose v(t) is actually:

v(t) = ε(t) - (1-θ)ε(t-1)

so that u(t) is actually:

(1-L)u(t) = (1-(1-θ)L)ε(t)
(1-L)(1-(1-θ)L)⁻¹u(t) = ε(t)
(1-L)(1+(1-θ)L+(1-θ)²L²+(1-θ)³L³+...)u(t) = ε(t)
(1-θL-θ(1-θ)L²-θ(1-θ)²L³-...)u(t) = ε(t)
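(One can check the expansion numerically: multiplying the coefficients of (1-L) against a truncated geometric series for (1-(1-θ)L)⁻¹ should reproduce the AR(∞) coefficients 1, -θ, -θ(1-θ), -θ(1-θ)², and so on. The θ value and truncation order below are arbitrary.)

```python
import numpy as np

theta = 0.3   # illustrative value
K = 12        # truncation order for the infinite series

# coefficients of (1-L) and of (1-(1-θ)L)⁻¹ = Σ (1-θ)^k L^k
one_minus_L = np.array([1.0, -1.0])
geometric = (1 - theta) ** np.arange(K)

# product, truncated to K powers of L
product = np.convolve(one_minus_L, geometric)[:K]

# claimed AR(∞) coefficients: 1, then -θ(1-θ)^(k-1) for k ≥ 1
claimed = np.concatenate([[1.0], -theta * (1 - theta) ** np.arange(K - 1)])

print(np.allclose(product, claimed))  # True
```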

Then for θ arbitrarily close to zero, u(t) is arbitrarily close to ε(t) in the observed finite sample, and yet u(t) definitely has a unit root.
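(To see how close, note that the recursion implies u(t) - ε(t) = θ·Σ_{s&lt;t} ε(s): a random walk shrunk by the factor θ. A quick simulation, with a sample length chosen to roughly match 1867-2008 and a θ picked for illustration, shows the two series are nearly identical even though u(t) has a unit root by construction.)

```python
import numpy as np

rng = np.random.default_rng(42)
T, theta = 142, 0.01           # sample length ~1867-2008; θ close to zero

eps = rng.standard_normal(T)
u = np.zeros(T)
u[0] = eps[0]
for t in range(1, T):
    # (1-L)u(t) = ε(t) - (1-θ)ε(t-1)
    u[t] = u[t - 1] + eps[t] - (1 - theta) * eps[t - 1]

# the gap u(t) - ε(t) = θ·Σ_{s<t} ε(s) is a random walk scaled by θ
gap = u - eps
print(np.max(np.abs(gap)))   # tiny relative to the unit-variance ε(t)
```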

So how can one run a test to reject a unit root null with any power at all?

The answer, of course, is in the statement of the hypothesis: that v(t) has a pth-order AR representation and hence that u(t) has a (p+1)st-order AR representation. That automatically rules out the MA(1) process for v(t) that I chose above. But I am never presented with any reason why we should begin our analysis by ruling out such processes. I mean, yes, if we model log real GDP as the sum of a trending deterministic component and a low-order AR process, then we can be confident that the largest root of the AR process is not on but instead well inside the unit circle. But why is that a question that we would like answered?
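(The order-raising step is just polynomial multiplication: if φ(L)v(t) = ε(t) with φ of order p, then since v(t) = (1-L)u(t) we have φ(L)(1-L)u(t) = ε(t), an AR of order p+1. A sketch with an arbitrary illustrative AR(2) polynomial:)

```python
import numpy as np

# illustrative AR(2) polynomial for v(t): φ(L) = 1 - 0.5L + 0.1L²
phi = np.array([1.0, -0.5, 0.1])

# u(t) satisfies φ(L)(1-L)u(t) = ε(t): an AR(3), i.e. order p+1
phi_u = np.convolve(phi, [1.0, -1.0])
print(phi_u)  # [ 1.  -1.5  0.6 -0.1]
```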

(The natural answer--that it was the hardest question that Elliott, Rothenberg, and Stock could solve; and that I was certainly not smart enough in my prime, and am not now, to debug the comment lines in their quadratic hill-climbing routines--has force.)