Lars Peter Hansen Prize Lecture: Uncertainty Outside and Inside Economic Models




422 

The Nobel Prizes

This factorization emerges because of the two different probability distributions that are in play. One comes from the baseline model and another is that used by investors. The martingale M makes the adjustment in the probabilities. Risk prices relative to the distorted (investors') distribution are distinct from those relative to the baseline model. This distinction is captured by (13).
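To make the probability adjustment concrete, here is a minimal numerical sketch of my own (not from the lecture), assuming a baseline N(0,1) shock and a hypothetical investor belief that shifts its mean to u. The likelihood ratio m has unit expectation under the baseline and converts baseline expectations into investor expectations:

```python
import numpy as np

# Illustrative assumption (not from the lecture): the baseline model says the
# shock eps is N(0, 1); investors instead believe it is N(u, 1).
# The likelihood ratio m(eps) = exp(u*eps - u^2/2) reweights probabilities.
rng = np.random.default_rng(0)
u = 0.3                                  # hypothetical belief distortion
eps = rng.standard_normal(1_000_000)     # draws under the baseline model

m = np.exp(u * eps - 0.5 * u**2)         # likelihood-ratio weights

# Unit expectation under the baseline: E[m] = 1.
print(np.mean(m))                        # ~ 1.0

# Reweighting recovers the investors' expectation of any payoff g(eps):
# E[m * g(eps)] = E_{N(u,1)}[g(eps)].  For g(eps) = eps^2 this is 1 + u^2.
g = eps**2
print(np.mean(m * g))                    # ~ 1 + u^2 = 1.09
```

The same mechanism, applied period by period, is what the martingale M does to the baseline probability model.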

Investor models of risk aversion are reflected in the specification of the other factor in (13). For instance, example (7) implies a specification based on consumption growth.³⁰ The martingale M would then capture the belief distortions, including perhaps some of the preferred labels in the writings of others such as "animal spirits," "over-confidence," "pessimism," etc. Without allowing for belief distortions, many empirical investigations resort to what I think of as "large values of risk aversion." We can see, however, from factorization (13) that once we entertain belief distortions it becomes challenging to disentangle risk considerations from belief distortions.

My preference as a model builder and assessor is to add specific structure to these belief distortions. I do not find it appealing to let M be freely specified. My discussion that follows suggests a way to use some tools from statistics to guide such an investigation. They help us to understand if statistically small belief distortions in conjunction with seemingly more reasonable (at least to me) specifications of risk aversion can explain empirical evidence from asset markets.



5.2  Statistical discrepancy

I find it insightful to quantify the statistical magnitude of a candidate belief distortion by following in part the analysis in Anderson et al. (2003). Initially, I consider a specific alternative probability distribution modeled using a positive martingale M with unit expectation, and I ask if this belief distortion could be detected easily from data. Heuristically, when the martingale M is close to one, the probability distortion is small. From a statistical perspective we may think of M as a relative likelihood process of a perturbed model vis-à-vis a baseline probability model. Notice that $M_t$ depends on information in $\mathcal{F}_t$, and can be viewed as a "data-based" date $t$ relative likelihood. The ratio $M_{t+1}/M_t$ has conditional expectation equal to unity, and this term reflects how new data that arrive between dates $t$ and $t+1$ are incorporated into the relative likelihood.

³⁰ When ρ ≠ γ in (11), continuation values come into play; and they would have to be computed using the distorted probability distribution. Thus M would also play a role in the construction of the other factor in (13). This would also be true in models with investor preferences that displayed "habit persistence" that is internalized when selecting investment plans. Chabi-Yo et al. (2008) nest some belief distortions inside a larger class of models with state-dependent preferences and obtain representations in which belief distortions also have an indirect impact on SDFs.
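As a simulation sketch (an assumed i.i.d. Gaussian mean-shift distortion of my own choosing, not an example worked in the lecture), one can check both properties numerically: each ratio M_{t+1}/M_t has conditional expectation one, and M_t itself retains unit expectation:

```python
import numpy as np

# Assumed illustration: the perturbed model shifts the mean of an i.i.d.
# N(0,1) shock by u.  The relative likelihood after t periods is the product
# of one-period ratios exp(u*eps_s - u^2/2), s = 1, ..., t.
rng = np.random.default_rng(1)
u, T, n_paths = 0.2, 50, 50_000
eps = rng.standard_normal((n_paths, T))      # shocks under the baseline
ratios = np.exp(u * eps - 0.5 * u**2)        # M_{t+1}/M_t along each path
M = np.cumprod(ratios, axis=1)               # M_1, ..., M_T for each path

print(np.mean(ratios))    # ~ 1.0: each increment has unit (conditional) mean
print(np.mean(M[:, -1]))  # ~ 1.0: the level M_T also has unit expectation
```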

A variety of statistical criteria measure how close M is to unity. Let me motivate one such criterion by bounding probabilities of mistakes. Notice that for a given threshold η,

$$\log M_t - \eta \ge 0$$

implies that

$$\left[ M_t \exp(-\eta) \right]^{\alpha} \ge 1 \qquad (14)$$



for positive values of α. Only α's that satisfy 0 < α < 1 interest me, because only these α's provide meaningful bounds. From (14) and Markov's Inequality,

 

$$\Pr\left\{ \log M_t \ge \eta \mid \mathcal{F}_0 \right\} \le \exp(-\eta\alpha)\, E\left[ (M_t)^{\alpha} \mid \mathcal{F}_0 \right]. \qquad (15)$$
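To spell out the step (my reading of the argument): on the event where log M_t ≥ η, inequality (14) says the indicator of the event is dominated by [M_t exp(−η)]^α, and taking expectations conditioned on F_0 gives (15):

```latex
\Pr\left\{ \log M_t \ge \eta \mid \mathcal{F}_0 \right\}
  = E\left[ \mathbf{1}\{\log M_t \ge \eta\} \mid \mathcal{F}_0 \right]
  \le E\left[ \left( M_t e^{-\eta} \right)^{\alpha} \mid \mathcal{F}_0 \right]
  = \exp(-\eta\alpha)\, E\left[ (M_t)^{\alpha} \mid \mathcal{F}_0 \right].
```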

The left-hand side gives the probability that a log-likelihood formed with a 

history of length t exceeds a specified threshold η. Given inequality (15),

 

$$\frac{1}{t}\log \Pr\left\{ \log M_t \ge \eta \mid \mathcal{F}_0 \right\} \le -\frac{\eta\alpha}{t} + \frac{1}{t}\log E\left[ (M_t)^{\alpha} \mid \mathcal{F}_0 \right]. \qquad (16)$$

The right-hand side of (16) bounds the scaled log probability that the log-likelihood ratio exceeds a given threshold η, for any 0 < α < 1. The first term on the right-hand side converges to zero as t gets large, but often the second term does not and indeed may have a finite limit that is negative. Thus the negative of the limit bounds the decay rate of the probabilities as they converge to zero. When this happens we have an example of what is called a large deviation approximation. More data generated under the benchmark model make it easier to rule out an alternative model. The decay rate bound underlies a measure of what is called Chernoff (1952) entropy. Dynamic extensions of Chernoff entropy are given by first taking limits as t gets arbitrarily large and then optimizing over the choice of α:

 

$$\kappa(M) = -\inf_{0<\alpha<1}\, \limsup_{t\to\infty}\, \frac{1}{t}\log E\left[ (M_t)^{\alpha} \mid \mathcal{F}_0 \right].$$

Newman and Stuck (1979) characterize Markov solutions to the limit used in the optimization problem. Minimizing over α improves the sharpness of the bound.
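As a worked special case of my own (an i.i.d. Gaussian mean shift, not a calculation from the text): if the perturbed model shifts the mean of a unit-variance shock by u, then E[(M_t)^α | F_0] = exp(t u²α(α−1)/2), so (1/t) log E[(M_t)^α | F_0] = u²α(α−1)/2 for every t. The infimum over 0 < α < 1 is attained at α = 1/2, giving κ(M) = u²/8. A short script confirms this numerically:

```python
import numpy as np

# Closed form for an assumed i.i.d. N(0,1) shock with mean distorted by u:
# (1/t) log E[(M_t)^alpha | F_0] = u^2 * alpha * (alpha - 1) / 2 for all t.
u = 0.3
alphas = np.linspace(0.001, 0.999, 9_999)
rate = 0.5 * u**2 * alphas * (alphas - 1.0)

kappa = -rate.min()                      # Chernoff entropy kappa(M)

print(alphas[rate.argmin()])             # minimizing alpha ~ 0.5
print(kappa)                             # ~ u^2 / 8 = 0.01125
```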



