Discussion Paper archives: 2004, 2003, 2002, 2001, 2000, 1999, 1998, 1997, 1996, 1995
Papers from 1998 onwards are available online as PDF files.
If you would like to submit your paper to RePEc, please email firstname.lastname@example.org
10 Most Recent Papers
16/10 P. A. V. B. Swamy, I-Lok Chang, Jatinder S. Mehta, William H. Greene, Stephen G. Hall, and George S. Tavlas
We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.
16/10 Martin Foureaux Koppensteiner
Students in Brazil are typically assigned to classes based on the age ranking in their cohort. I exploit this rule to estimate the effects on maths achievement of being in class with older peers for students in fifth grade. I find that being assigned to the older class leads to a drop in maths scores of about 0.4 of a standard deviation for students at the cut-off. I provide evidence that heterogeneity in age is an important factor behind this effect. Information on teaching practices and student behaviour sheds light on how class heterogeneity harms learning.
16/09 Matthew Polisson, Ludovic Renou
Suppose that we have access to a finite set of expenditure data drawn from an individual consumer, i.e., how much of each good has been purchased and at what prices. Afriat (1967) was the first to establish necessary and sufficient conditions on such a data set for rationalizability by utility maximization. In this note, we provide a new and simple proof of Afriat's Theorem, the explicit steps of which help to more deeply understand the driving force behind one of the more curious features of the result itself, namely that a concave rationalization is without loss of generality in a classical finite data setting. Our proof stresses the importance of the non-uniqueness of a utility representation along with the finiteness of the data set in ensuring the existence of a concave utility function that rationalizes the data.
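Afriat's conditions for rationalizability are equivalent to the Generalized Axiom of Revealed Preference (GARP), so consistency of a finite expenditure data set can be checked directly. Below is a minimal Python sketch of such a GARP check (purely illustrative — the function name and example data are not from the paper):

```python
def violates_garp(prices, bundles):
    """Return True if the data set (prices[t], bundles[t]) violates GARP.

    prices[t] and bundles[t] are tuples of equal length: the price vector
    and the chosen bundle at observation t.
    """
    T = len(prices)
    dot = lambda p, x: sum(a * b for a, b in zip(p, x))
    # Directly revealed preferred: bundle t over bundle s if s was
    # affordable at the prices and expenditure of observation t.
    R = [[dot(prices[t], bundles[t]) >= dot(prices[t], bundles[s])
          for s in range(T)] for t in range(T)]
    # Transitive closure (Warshall's algorithm) gives revealed preference.
    for k in range(T):
        for i in range(T):
            for j in range(T):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # GARP fails if t is revealed preferred to s while s is *strictly*
    # directly revealed preferred to t (x_t was strictly cheaper at p_s).
    for t in range(T):
        for s in range(T):
            if R[t][s] and dot(prices[s], bundles[s]) > dot(prices[s], bundles[t]):
                return True
    return False
```

By Afriat's Theorem, data passing this check can be rationalized by a (concave, continuous, monotone) utility function.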
16/08 Ali al-Nowaihi, Sanjit Dhami
We set up a simple quantum decision model of the Ellsberg paradox. We find that the matching probabilities that our model predicts are in good agreement with those empirically measured by Dimmock et al. (2015). Our derivation is parameter-free. It depends only on quantum probability theory in conjunction with the heuristic of insufficient reason. We suggest that much of what is normally attributed to probability weighting might actually be due to quantum probability.
16/07 Tewodros Makonnen Gebrewolde, James Rockey
Prioritizing the growth of particular sectors or regions is often part of LDC growth strategies. We study a prototypical example of such policies in Ethiopia, exploiting geographic and sectoral variation in the form and scale of the policy for identification. Using product-level data on Ethiopian manufacturing firms we show that the policy was unsuccessful: there was no improvement in productivity, productive assets, or employment. The policy failed because of its negative effects on the productivity of newly entering firms and of existing firms that diversified. Moreover, subsidised loans and tax breaks led to an increase in capital but not in machinery.
16/06 Daniel Ladley, Guanqing Liu, James Rockey
Margin trading is popular with retail investors around the world. This is a puzzle, since, as we show, it has a negative expected return. Our explanation is that whilst lowering mean returns, the collateral requirement imposed by margin calls induces positive skew in the distribution of returns. Investments in assets with symmetric returns now offer limited losses and a small chance of a large gain, like lottery tickets and other gambles. Results from a unique dataset of retail futures traders show that actual losses are substantial. Traders’ behaviour is demonstrated to be best understood as motivated by hedonic returns.
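The skew mechanism described in this abstract is straightforward to illustrate: truncating the left tail of an otherwise symmetric return process at a margin-call floor produces positively skewed, lottery-like terminal outcomes. A small Monte Carlo sketch (purely illustrative — the leverage, volatility, and floor values are arbitrary assumptions, not the paper's calibration or data):

```python
import random


def terminal_equity(n_steps=100, sigma=0.02, margin_floor=0.5, seed=None):
    """Simulate trader equity on a levered position with a margin call.

    Equity starts at 1.0; each step adds a symmetric, zero-mean return on
    a 2x-levered position. If equity falls to the maintenance floor, the
    position is closed and the loss is capped there.
    """
    rng = random.Random(seed)
    equity = 1.0
    for _ in range(n_steps):
        equity += 2.0 * rng.gauss(0.0, sigma)
        if equity <= margin_floor:  # margin call: forced liquidation
            return margin_floor
    return equity


def sample_skewness(xs):
    """Standardized third central moment of a sample."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5


draws = [terminal_equity(seed=i) for i in range(5000)]
print(sample_skewness(draws))  # positive: losses capped, upside unbounded
```

Even though each per-step return is symmetric, the floor piles probability mass at the capped loss while leaving the right tail intact, so the distribution of terminal equity is positively skewed.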
16/05 Sergio Currarini, Jesse Matheson, Fernando Vega Redondo
Biases in meeting opportunities have recently been shown to play a key role in the emergence of homophily in social networks (see Currarini, Jackson and Pin 2009). The aim of this paper is to provide a simple microfoundation of these biases in a model where the size and type-composition of the meeting pools are shaped by agents' socialization decisions. In particular, agents either inbreed (direct search only to similar types) or outbreed (direct search to the population at large). When outbreeding is costly, this is shown to induce stark equilibrium behavior of a threshold type: agents inbreed (i.e. mostly meet their own type) if, and only if, their group is above a certain size. We show that this threshold equilibrium generates patterns of in-group and cross-group ties that are consistent with empirical evidence of homophily in two paradigmatic instances: high school friendships and interethnic marriages.
16/04 Subir Bose, Arup Daripa
We study the problem of elicitation of subjective beliefs of an agent when the beliefs are ambiguous (the set of beliefs is a non-singleton set) and the agent's preference exhibits ambiguity aversion; in particular, as represented by α-maxmin preferences. We construct a direct revelation mechanism such that truthful reporting of beliefs is the agent's unique best response. The mechanism uses knowledge of the preference parameter α, and we construct a mechanism that truthfully elicits α. Finally, using the two as ingredients, we construct a grand mechanism that elicits ambiguous beliefs and α concurrently.
16/03 Matteo Foschi
I analyse the optimal contracting behaviour of an employer who faces workers with different, incorrect beliefs about their own productivity. While the literature has focused mostly on the exploitative (when the principal knows agents' types, Eliaz and Spiegler, 2006) and speculative (when the principal has priors on agents' types, Eliaz and Spiegler, 2008) aspects of contracts, I introduce the assumption that workers' naïveté depends on their actual productivity level. The employer uses this information to form posteriors on agents' productivity and design more efficient contracts. In particular, I highlight the employer's trade-off between exploiting strongly naïve workers and designing efficient contracts for the most widespread type of worker, according to her posteriors.
16/02 P. A. V. B. Swamy, S. G. Hall, G. S. Tavlas, I. Chang, H. D. Gibson, W. H. Greene, J. S. Mehta
This paper contributes to the literature on the estimation of causal effects by providing an analytical formula for individual specific treatment effects and an empirical methodology that allows us to estimate these effects. We derive the formula from a general model with minimal restrictions, unknown functional form and true unobserved variables such that it is a credible model of the underlying real world relationship. Subsequently, we manipulate the model in order to put it in an estimable form. In contrast to other empirical methodologies, which derive average treatment effects, we derive an analytical formula that provides estimates of treatment effects on each treated individual. We also provide an empirical example that illustrates our methodology.