Narrow your search

Library: KU Leuven (11)

Resource type: dissertation (11)

Language: English (10), Dutch (1)

Year: 2009 (2), 2008 (9)

Listing 1 - 10 of 11

Dissertation
Essays in exchange rate economics.


Abstract

Keywords


Dissertation
Financial transaction data and volatility.


Abstract

This Ph.D. thesis focuses on financial transaction data and volatility. Transaction data capture the characteristics of financial transactions (e.g. transaction time, transaction price, transaction volume, bid and ask price) as they take place on an exchange. Volatility refers to the degree to which asset prices tend to fluctuate. The thesis improves the use of transaction data for market microstructure analysis of the New York Stock Exchange (NYSE). This type of analysis contributes to a better understanding of the functioning of financial markets. Furthermore, the thesis improves the estimation of a standard volatility model and proposes volatility estimators that are much more efficient than classic volatility estimators. Better volatility (model) estimators contribute to better investment decisions and risk assessment within an economy. I start by explaining the recent focus of financial research on transaction data, followed by the focus on volatility, and then present the contributions of each chapter of the thesis to the literature.

Databases of transaction data, also called tick data, only became publicly available in the 1990s. Before that, financial research and analysis had been based mainly on daily data, i.e. daily averages, closing prices, etc. It soon became clear that this new type of data has specific advantages and offers new research opportunities. For example, higher-frequency data allow more accurate measurement of volatility, and the thesis contributes to the growing research on this issue. However, more is not always better. The new data have their own features, such as unequally spaced observations, non-synchronous trading, intra-day seasonal effects, and measurement errors due to bid-ask spreads and reporting difficulties, which brought new challenges. Only when these features are satisfactorily dealt with can the advantages of high-frequency data be fully exploited. The thesis also contributes to the literature that seeks solutions to this type of problem.

Volatility enters as an essential ingredient in many financial computations, like portfolio optimisation, option pricing and risk assessment. Despite its importance, volatility remains an ambiguous term for which there is no unique, universally accepted definition. Volatility is mainly computed from historical indicators based on daily squared returns, from econometric models such as GARCH, or indirectly from option prices via a pricing model such as Black-Scholes. Following the introduction of transaction databases, new estimators that exploit intra-daily price dynamics have been proposed in the literature, and the thesis presents new estimators along this line. Before attention switched to measuring volatility, the financial econometrics literature already contained a lot of research on the modelling of volatility. Since the 1980s, starting from the observation of volatility clustering, i.e. periods of high volatility versus low volatility, many volatility models have been developed that produce and improve forecasts. Stochastic Volatility (SV) models are one such class: they treat volatility as unobserved, driven by a separate process, and their very nature makes them hard to estimate. The thesis points out that a simple estimation method (Generalized Method of Moments, GMM) should be reconsidered for estimating stochastic volatility models.
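For reference, a standard log-normal SV parameterization and the kind of closed-form moments that GMM matches to the data look as follows; the exact normalization and moment set used in the thesis may differ.

    r_t = e^{h_t/2}\,\varepsilon_t, \qquad h_t = \mu + \phi\,(h_{t-1}-\mu) + \sigma_\eta\,\eta_t, \qquad \varepsilon_t,\eta_t \sim \mathcal{N}(0,1),
    \mathrm{E}\,|r_t| = \sqrt{2/\pi}\; e^{\mu/2 + \sigma_h^2/8}, \qquad \mathrm{E}\,[r_t^2] = e^{\mu + \sigma_h^2/2}, \qquad \sigma_h^2 = \frac{\sigma_\eta^2}{1-\phi^2}.

GMM then chooses (mu, phi, sigma_eta) so that such model-implied moments of absolute (or log-squared) returns are as close as possible to their sample counterparts, weighted by a suitable weighting matrix.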
While the chapters of this thesis have a common theme, each chapter can be seen as a separate entity addressing different well-defined issues within financial econometrics.

Chapter 1 proposes a new procedure to determine the time of the prevailing quote relative to the time of the trade for New York Stock Exchange data. At the NYSE, trades and quotes are recorded separately, each receiving its own time stamp. As a result, trades and quotes are subject to different and varying lags, which makes it hard to reconstruct the sequence of trades and quotes. For market microstructure analysis based on trade and quote data at high frequency, it is important to be able to reconstruct this sequence, as mismatching potentially affects the analysis. The procedure put forward in Chapter 1 tests whether the quote revision frequency around a trade is contaminated by quote revisions triggered by a trade, and then determines the smallest timing adjustment needed to eliminate this contamination. An application to various stocks and sample periods shows that the difference between trade and quote reporting lags varies across stocks and over time. The procedure takes this variation into account and hence offers a stock- and time-specific update to the Lee and Ready (1991) 5-second rule, which is often applied in this literature.

Chapter 2 contributes to the extensive literature on the estimation of stochastic volatility models. Because in SV models the mean and the volatility are driven by separate stochastic processes, volatility is unobservable, which makes SV models hard to estimate. This chapter presents analytical results that may be used to improve and assess the quality of GMM-based estimation of SV models. GMM, while not asymptotically efficient, is still the simplest estimation method for SV models currently available. In particular, we derive closed-form expressions for the optimal weighting matrix for GMM estimation of the SV model with AR(1) log-volatility, and for the asymptotic covariance matrix of the resulting estimator. The moment conditions considered are generated by the absolute observations, which is the standard in this literature, or by the log-squared observations. We use the expressions to compare the performance of GMM and other estimators that have been proposed, and to optimally select small sets of moment conditions from very large sets. A Monte Carlo study shows large efficiency gains for iterated GMM estimation if it is based on the analytical optimal weighting matrix rather than an estimate of this matrix.

Chapter 3 proposes new estimators of volatility based on quantiles of the price series, under the assumption that prices are observed without noise. It develops unbiased and consistent estimators of the diffusion coefficient based on quantiles of either the Brownian motion or the Brownian bridge. These estimators are shown to be much more efficient than the range-based estimators of Parkinson (1980) and Kunitomo (1992), where the range is the difference between the supremum and the infimum. In particular, efficiency is improved by using more quantiles in the estimation. Moreover, two methods are presented that turn any of the unbiased estimators into consistent estimators. One way to obtain consistency is to apply the unbiased estimators to subintervals and then to average the subinterval estimators. This corresponds to a generalization of the realized range estimator of Christensen and Podolski (2005) and Martens and van Dijk (2007).
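For concreteness, the classic range-based benchmark mentioned above and its realized-range generalization can be written as follows for the log price of one day split into n subintervals with highs H_i and lows L_i; this is a textbook statement of the benchmarks, not the thesis' own quantile-based formulas.

    \hat{\sigma}^2_{\mathrm{Parkinson}} = \frac{\big(\ln H - \ln L\big)^2}{4\ln 2}, \qquad
    \hat{\sigma}^2_{\mathrm{RR}} = \frac{1}{4\ln 2}\sum_{i=1}^{n}\big(\ln H_i - \ln L_i\big)^2 .

The quantile-based estimators of Chapter 3 replace the extremes (the 0% and 100% quantiles of the price path) with other quantiles, which is the source of the efficiency gains reported above.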
Chapter 3 also presents a new type of consistent estimator based on permuted subintervals. The quantile-based estimators provide an interesting alternative to the existing realized volatility and realized range estimators.

Chapter 4 deals with the time-discreteness bias and the noise bias of quantile-based volatility estimators when applied to high-frequency data. The former bias arises because the volatility estimators are derived in continuous time but applied to discrete-time observations. Despite being derived in continuous time, quantile-based volatility estimators turn out to be fairly robust to the time-discreteness bias, except when the estimator is based on price extrema or the number of observations is very small. Analytical and simulation-based bias corrections are presented to deal with the latter cases. Furthermore, attention is given to the bias introduced when the estimators are applied to a price series perturbed by noise. In practice, this noise is due to market microstructure effects, e.g. the transaction price bouncing between bid and ask prices, implying that the 'true' price is not observed. A simulation-based noise-bias correction is proposed that handles even the case in which the noise distribution is unknown. The bias corrections allow the practitioner to exploit the efficiency gain of quantile-based volatility estimation at high frequency.

Financial transaction data and volatility are the subject of this doctorate. Transaction data comprise the characteristics of financial transactions (e.g. transaction time, transaction price, volume, bid and ask prices) as they take place on an exchange. Volatility refers to the degree to which the prices of financial products fluctuate. The doctorate improves the use of transaction data for market microstructure analysis of the New York Stock Exchange (NYSE); this type of analysis contributes to a better understanding of how financial markets function. The doctorate further contributes to better estimation of a standard volatility model and proposes volatility estimators that are much more efficient than classic volatility estimators. Better volatility (model) estimators contribute to better investment decisions and risk management within an economy. I first explain the recent focus of financial research on transaction data, followed by the focus on volatility, and then discuss the contribution of each chapter. Databases with transaction data only became publicly available in the 1990s. Before that, financial research was based mainly on daily data such as daily averages and closing prices. It quickly became clear that this new source of data offered advantages and opened new research opportunities; volatility, for example, can be measured more precisely with such high-frequency data. The doctorate contributes to the growing literature on this subject. More is not always better, however: the new data have their own characteristics that brought new challenges, such as unequally spaced observations, non-synchronous trading, intra-day seasonal effects, measurement errors due to the bid-ask spread, and reporting difficulties. Only when these characteristics are adequately taken into account can the advantages of high-frequency data be fully exploited. The doctorate also contributes to the literature that seeks solutions to these problems.

Volatility is an essential ingredient in many financial computations, such as portfolio optimisation, option pricing and risk management. Despite its importance, volatility remains an ambiguous term for which no unique, generally accepted definition exists. Volatility is mainly computed from historical measures based on daily price changes, from econometric models, or indirectly from option prices. Since the introduction of transaction databases, new estimators that exploit intra-day price dynamics have been proposed in the literature; the doctorate adds new estimators to this line of work. Before attention shifted to measuring volatility, the financial econometrics literature already contained much research on modelling volatility: starting from the observation of volatility clustering, i.e. periods of high versus low volatility, many volatility models that produce and improve forecasts have been developed since the 1980s. Chapter 1 deals with a problem that arises with data from the NYSE, an important source of transaction data. Trades and quotes are recorded separately on this exchange, each with its own time stamp. As a result, trades and quotes exhibit different reporting lags, which makes it difficult to reconstruct their sequence afterwards. For market microstructure analysis it is important to be able to reconstruct this sequence, since an incorrect ordering can affect the analysis. Chapter 1 proposes a procedure that determines the appropriate timing adjustment per stock and period. Chapter 2 deals with the estimation of Stochastic Volatility models. In these models volatility is not observed and is driven by a separate process, which makes them hard to estimate. Chapter 2 shows that a simple estimation method (Generalized Method of Moments, GMM) should be reconsidered for estimating stochastic volatility models, and presents analytical results that can be used to improve and assess the quality of GMM-based estimates. Chapter 3 introduces a new set of volatility estimators based on quantiles of the intra-day price series. These estimators form an interesting alternative to estimators based on intra-day price changes; their advantage stems from the fact that quantiles are more robust than intra-day price changes to the noise present at high frequency. Chapter 4 prepares the estimators for practical use: the new estimators, derived in continuous time, are adapted to discrete time and noise corrections are proposed.

Keywords


Dissertation
Macroeconomic fluctuations in developing countries.


Abstract

Macroeconomic variables such as consumption, investment, output, and prices exhibit ups and downs over time. This phenomenon, known as macroeconomic fluctuations, is the central theme of this doctoral dissertation. The motivation for studying macroeconomic fluctuations stems from the observation that they affect the welfare of a society. For example, given that economic agents are risk averse, their inability to protect consumption from fluctuations causes welfare losses. A better understanding of the causes and consequences of macroeconomic fluctuations may therefore lead to insights into how policy making could limit such welfare losses. Developing countries are especially vulnerable to fluctuations, owing to many factors including large external shocks, volatile macroeconomic policies, political instability, poorly developed insurance and financial markets, and weak institutions. Successful stabilization policies may therefore considerably improve welfare in developing countries.

Against this background, the thesis addresses the following research questions. How large is the welfare cost of consumption fluctuations in developing countries compared to developed countries? Is the welfare cost similar across countries within each group, or is there, for example, a systematic difference between developing countries in Latin America and those located in other parts of the globe? How large is the welfare cost of fluctuations in the USA compared to European industrialized countries? To what extent should a country focus solely on stabilization policies, or on growth policies? Should stabilization policies be designed at the country level, the regional level, or even the world level? To what extent are macroeconomic fluctuations in developing countries driven by global or regional shocks, or by events that are more specific to a country? These questions are analyzed in three chapters.

Specifically, Chapter 1 provides an international comparison of the welfare cost of consumption fluctuations. For this purpose I propose a Bayesian method to estimate the distribution of the welfare cost of volatility in consumption. Previous studies reported only point estimates of welfare rather than distributions, so this is the first time that uncertainty related to the estimation of the welfare cost of fluctuations has been taken into account. In line with existing research, I interpret the cost of fluctuations as the gain from stabilization and compare the latter with the benefit from growth. The empirical analysis uses annual data on real consumption per capita for 82 countries over 1960-2003, a sample that includes many developed as well as developing countries around the world. The main findings of Chapter 1 can be summarized as follows. First, the welfare cost of fluctuations is on average two to eight times higher in developing countries than in their developed counterparts. Moreover, the results show that larger shocks combined with lower levels of wealth make developing countries more vulnerable to fluctuations. Comparing the results among developing countries reveals that Sub-Saharan African countries and the oil-producing countries of the Middle East exhibit the largest welfare cost, followed by the group of South and Latin American countries, while the group of Asian countries displays the smallest welfare cost of fluctuations among developing countries. The estimates of the welfare cost also differ among developed countries.
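As a point of reference for what the "welfare cost of fluctuations" means quantitatively, the Lucas-style calculation underlying this literature asks which permanent percentage increase in consumption, lambda, would compensate a risk-averse household for consumption volatility; with CRRA preferences (relative risk aversion gamma) and lognormal deviations of consumption from trend with variance sigma_c^2, the standard approximation is the one below. The chapter's Bayesian approach estimates the whole distribution of such costs rather than a single point, so this formula is only the familiar benchmark, not the chapter's estimator.

    \lambda \approx \tfrac{1}{2}\,\gamma\,\sigma_c^{2}.

For instance, with gamma = 2 and a standard deviation of consumption around trend of 5 percent, lambda is roughly 0.5 x 2 x 0.0025 = 0.25 percent of consumption; larger consumption volatility raises this cost quickly, consistent with the cross-country differences reported above.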
Among the developed countries analyzed in this study, the USA displays the lowest welfare cost of consumption fluctuations apart from Australia and Luxembourg, whereas the developed countries located in Asia face the largest cost of fluctuations within this group. Second, making use of confidence intervals instead of point estimates of welfare, I find that one cannot choose between growth and stabilization in about 45 percent of the countries in the sample. This finding of indecisiveness between stabilization and growth is new to the literature. Moreover, in 43 percent of the countries the welfare benefit of stabilization exceeds the gain from growth, whereas the reverse is true for the remaining 12 percent of countries in the sample. Contrasting results between developed and developing countries reveals some important differences. For instance, 54 percent of the developing countries in the sample display larger gains from stabilization, while the same holds true for only 16 percent of industrialized countries. On the other hand, growth is superior to stabilization in 28 percent of developed countries compared to only 5 percent of developing countries. Overall, these results suggest that stabilization and growth policies are needed in both developed and developing countries, but that stabilization is more urgent in the case of developing countries.

In order to implement appropriate stabilization policies, policy makers should be aware of the sources of shocks. This motivates Chapter 2, in which I employ a Bayesian dynamic factor model to break down the variation in the main macroeconomic aggregates (output, consumption, and investment) into four components: a world factor, a regional factor, a country factor, and a so-called idiosyncratic component, which captures variation that cannot be attributed to the other three factors. In the empirical part I use data on the same 82 countries analyzed in Chapter 1. The results indicate that the country and idiosyncratic factors are the main driving forces of fluctuations in developing countries. As to what these two types of factors capture, the literature indicates that country and idiosyncratic factors in developing countries are mostly caused by domestic fiscal policies, monetary policies, the quality of institutions, political instability, terms-of-trade shocks, financial development, and rainfall shocks. It follows that stabilization policies in developing countries should focus on these variables.

Chapter 2 also makes a modest contribution to the current debate on the impact of globalization on international business cycles. In particular, given that globalization reinforces trade and financial linkages, one can expect the synchronization of business cycles across countries, and a large literature that aims to test whether there is a convergence or a decoupling of international business cycles has emerged. In order to address this issue I re-estimate the dynamic factor model for two subperiods: 1960-1985, characterizing the pre-globalization era, and 1986-2003, capturing the globalization period. The results suggest a convergence of business cycles between the North and the South, but in two distinct groups: one group includes the USA, Latin America, and Asia, while the other comprises the European Union and Sub-Saharan Africa. Finally, the subperiod results help to explain the decrease in the volatility of macroeconomic variables since the mid-1980s.
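The decomposition used in Chapter 2 can be read as a dynamic factor model of the kind common in this literature (in the spirit of Kose, Otrok and Whiteman); a typical specification, whose details may differ from the one estimated here, writes the growth rate y of each aggregate in country i as

    y_{i,t} = a_i + b_i^{w} f_t^{w} + b_i^{r} f_t^{r(i)} + b_i^{c} f_t^{c(i)} + \varepsilon_{i,t},

where f^w, f^{r(i)} and f^{c(i)} are the world, regional and country factors, epsilon_{i,t} is the idiosyncratic component, and the share of the variance of y_{i,t} attributable to each term gives the kind of variance decomposition discussed above.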
That decline in volatility is referred to as the "Great Moderation". I find that the Great Moderation in the USA coincides with a reduction in the importance of the global and the domestic business cycles in explaining fluctuations in the country. In the case of European Union member states, it is the decline of the role of national business cycles that coincides with the Great Moderation.

Chapter 3 also analyzes stabilization policies, but in the context of a monetary union. My premise is that a monetary union may reduce macroeconomic fluctuations by enforcing fiscal and monetary discipline. Indeed, the autonomy of the common central bank can make low inflation a time-consistent monetary policy goal, as the existing Francophone monetary unions in Africa have demonstrated. However, the theory of Optimal Currency Areas (OCA) tells us that in the event of asymmetric shocks, members of a monetary union will find it harder to adjust, because the single monetary policy at the disposal of the common central bank will not be appropriate for responding to idiosyncratic shocks. In the presence of asymmetric shocks, one group of countries in a monetary union may need an expansionary monetary policy to respond to cyclical downturns, while the other might require a contractionary monetary policy to respond to cyclical booms. For these reasons, the presence of asymmetric shocks to member countries of a monetary union is referred to as the cost of a monetary union in the OCA literature.

Today, Africa has a number of monetary union initiatives. One initiative that has received a great deal of attention in the recent literature is the monetary union project of West Africa. In this context Chapter 3 analyzes the desirability of a monetary union in West Africa by looking at the synchronization of macroeconomic shocks in the region. For this purpose I make use of a dynamic structural factor model to recover information on the aggregate demand and aggregate supply shocks of each country in West Africa. The shocks are identified with the more recent sign-restrictions identification scheme. Existing research mainly used Vector Auto-Regressive (VAR) models to estimate shocks, but a number of criticisms of VAR models have been raised in the recent literature, and it is precisely to avoid these shortcomings that I apply a novel approach based on dynamic factor models. The results show negative and low positive correlations among the supply shocks of West African countries. Correlations among demand shocks are also low, except for the group of French-speaking countries of the region. These findings suggest that West African countries would find it difficult to adjust to shocks if they formed a monetary union.

Keywords


Dissertation
Institutions and market structure in transition countries.


Abstract

The aim of this doctoral thesis is to contribute to the understanding of market dynamics and the determinants of firm performance in transition economies. The privatization and restructuring of state-owned enterprises and the introduction of market forces in transition countries implied the emergence of new small firms and the decline of the old inefficient ones. The restructuring process was accompanied by the dismantling of trade barriers and the inflow of foreign direct investment (FDI). These dynamics not only offer great opportunities for domestic firms in terms of increased market access and the chance to learn from their foreign counterparts, but also pose important challenges through increased competitive pressure and the need to adjust production patterns. At the same time, the way companies adjust to exogenous shocks also depends on government policies and the institutional settings in product, labour and financial markets. In this doctoral thesis I shed light on three research questions in particular: (1) Does FDI generate (positive) externalities on domestic firms? (2) Do liquidity constraints lead to lower productivity? (3) What are the determinants of product switching? Rather than addressing these questions from an aggregate perspective, I use unique enterprise-level micro datasets that allow me to study the heterogeneity within the group of private businesses. Over the past two decades, the academic literature has accumulated ample micro-evidence documenting the wide diversity in firms' performance. Even within narrowly defined sectors, enterprises differ substantially in their output, employment, productivity, investment, and product choices. When studying firm behaviour it is important to take this heterogeneity into account. Knowledge of the determinants of differences across firms may, in turn, contribute to the understanding of how government policies affect the aggregate economy.

In Chapter 1 of my doctoral thesis, I use a new panel data set of more than 15,000 firms in the Chinese manufacturing sector to analyze the impact of inward foreign investment on the performance of domestic firms. Attracting foreign direct investment has become an essential part of development strategies among developing countries, including China. However, despite the range of positive effects predicted by economic theory and the strong conviction of policy makers that domestic firms benefit from the presence of foreign companies, the empirical literature is ambiguous on the effects of FDI on domestic productivity in developing and transition countries. In this first chapter, I exploit the richness of the Chinese dataset and argue that the magnitude of FDI spillovers varies according to: (i) the origin and structure of FDI; (ii) the export status of domestic firms; and (iii) the characteristics of the special economic zones firms are operating in. In particular, my results suggest that attracting export-driven investment is not necessarily a beneficial strategy for generating positive externalities on the domestic market. Rather than simply establishing export-processing zones, it is more advisable for governments to focus on creating a pro-active environment in which collaboration with foreign firms is more likely to generate the expected beneficial effects.
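One common way such horizontal FDI spillovers are measured in this literature is with a productivity regression of the type sketched below; this is an illustrative specification, not necessarily the exact equation estimated in Chapter 1.

    \ln \mathit{TFP}_{ijt} = \alpha_i + \lambda_t + \beta_1\,\mathit{Horizontal}_{jt} + \beta_2\,\big(\mathit{Horizontal}_{jt} \times \mathit{Export}_{ijt}\big) + X_{ijt}'\gamma + \varepsilon_{ijt},

where Horizontal_{jt} is the share of sector j's output (or employment) accounted for by foreign-owned firms and Export_{ijt} flags exporting domestic firms; splitting the foreign-presence measure by investor origin or by economic-zone type allows the heterogeneity emphasised above to be tested.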
The current global financial crisis has reopened the debate on the potential spillover effects from the financial sector to the real economy. The second chapter of my doctoral thesis adds to that debate by providing new evidence on the link between finance and firm-level productivity, focusing on the case of Estonia. The main idea is that access to external finance facilitates firms' investment in long-duration and productivity-enhancing projects. The existing empirical papers in the academic literature study various aspects of financial development or access to finance, but they put little or no emphasis on the direct effect of financial constraints on firm productivity. In Chapter 2, I contribute to this literature in two important respects: (i) I explicitly look at the role of financial constraints; and (ii) I develop a methodology that corrects for misspecification problems in previous studies. My results indicate that a large number of firms show some degree of financial constraint, with firms in the primary sector being the most constrained. Especially young and highly indebted firms tend to face significant problems in accessing external capital. More importantly, and contrary to initial expectations, I find that financial constraints do not have an impact on productivity in most sectors. These findings are robust to several sensitivity tests and underscore the importance of credit allocation and project screening.

In the third and final chapter, I analyze how firms adjust their production in response to import competition and changing conditions in export markets, using Estonian firm-level data for the years 1997-2005. In the media, globalisation is generally associated with the closure of firms and the resulting job losses. Yet, while firm bankruptcy represents one response to increased foreign competition, it is not the only option. Rather than ceasing production, firms may switch to a different industry or product market, or merge with another firm. At the same time, the more productive firms can take advantage of new business opportunities in foreign markets. In fact, my results in Chapter 3 indicate that globalisation is not a driver of firm bankruptcy in Estonia, while it is important in explaining product switching. Whereas previous academic studies on industrial countries have shown product switching to be a defensive strategy against low-cost imports, my results suggest that Estonian firms have switched products as an offensive strategy to take advantage of the export opportunities created by trade liberalization.

Keywords


Dissertation
Diamonds are a Girl's Best Friend : five essays on the economics of social status.


Abstract

Consumption communicates. When consumers buy a good, they usually care about two kinds of attributes: the intrinsic qualities of their choice, and what that choice communicates about their identity to onlookers. This holds as much for developing economies as for post-industrial societies, in which brands, advertising, design, lifestyle and identity steer the economy and culture towards an astonishing variety. The meaning of goods is essentially determined by two basic mechanisms: conformity and distinction. Through conformity, people try to confirm their membership of a social group (in the broadest sense of the word). Using game theory, this dissertation analyses the second dynamic, social distinction. Social distinction means that consumers try to set themselves apart from others of "lower" quality (e.g. wealth, productivity, intelligence, physical qualities).

The first chapter offers a broad overview of the current economic analysis of social status. An extensive body of empirical evidence from biology and the medical and social sciences shows that people (like many other animals) have a taste for social status or superiority, where social status denotes the position one occupies in the social ranking of one's group or society. People are generally willing to invest considerable amounts of resources and energy in improving their social status. This taste for status can easily be rationalised with a few simple and ubiquitous building blocks: individual differences in quality, and cooperation with free partner choice. People cooperate exceptionally often with non-relatives in numerous activities (production of goods and services, trade, group activities of all kinds, marriage, etc.), and the quality of the partners usually determines to a large extent the fruits of this cooperation. When partner choice is free, one must be, or appear, more attractive than one's competitors in order to secure the best possible partner and thus increase the surplus from cooperation. A consumer's well-being thus depends on his relative qualities, i.e. on his position in the ranking of potential partners. If consumers are moreover able to change their own attractiveness as a partner, they face a strategic investment problem: the returns they can expect from investments in their attractiveness depend not only on their own investments but also on those of their competitors. This mechanism can be understood as a generalisation of Darwin's principle of sexual selection: to spread through the population, genes must not only be better adapted for survival but also able to attract sexual partners, and hence appear more attractive than same-sex competitors. One and the same mechanism can thus explain why a taste for social status can be rational, as well as why humans and other animals have an innate taste for superiority. If the dimension on which people are compared is income or productivity, then consumption is the instrument par excellence for distinguishing oneself from poorer or less productive individuals.

Not all consumption goods are equally suited to this purpose, however: conspicuous consumption goods such as cars, clothing, mobile phones or real estate are clearly used more to communicate social status. Social status therefore motivates a shift in people's consumption patterns towards luxurious and conspicuous goods, compared with what they would choose in social isolation. This shift is problematic because the competition for social status is essentially a zero-sum game: if one consumer gains a position in the social ranking, another necessarily loses one. And if everyone makes the same extra effort to climb one position on the social ladder, everyone ends up treading water at the position they would also have occupied had nobody made an extra effort. What seems clever for the individual is, from society's point of view, a sure losing strategy. This mechanism also explains why consumers' desires seem essentially insatiable and why average happiness does not rise with economic development once a certain level of development has been reached. The extent to which people in a society invest their energy and resources in establishing an appearance of superiority, rather than saving or investing in the future, is of course also an important determinant of economic development. Social norms that determine how social status is conferred can thus contribute to differences in economic development. Finally, the economics of social status also has important implications for welfare analysis and for social and economic policy. Status motives rationalise, for example, why some benefit claimants prefer not to take up their benefits when take-up is publicly visible (and hence a signal of low income), or why middle-class families sometimes vote against a redistributive measure that would benefit them materially (but would blur the distinction between them and poorer families).

The second chapter examines how the incentives to invest in conspicuous but wasteful consumption depend on the structure of the social network in which consumers are embedded. Why, for example, do the incentives of the status competition seem stronger and more compelling in cities than in villages? The mechanism studied has been dubbed "information substitutes". If observers have a second source of information about a consumer's qualities, in addition to what they can infer from conspicuous consumption patterns, then the weight they optimally attach to conspicuous consumption depends on the relative quality of the two information sources. If the quality of the alternative source of information improves, observers will optimally attach less weight to conspicuous consumption, and the optimal level of wasteful investment in conspicuous consumption will therefore fall. For example, in economic sectors where the pecking order is clearer (e.g. academic research), the incentives to suggest superiority in a conspicuous way will be smaller. If the social network is then regarded as the second source of information about individual qualities (e.g. as a source of gossip), one can predict how the structure of the network determines the optimal levels of conspicuous consumption.

In a larger social network the alternative information will on average be of poorer quality, and the equilibrium level of conspicuous consumption therefore higher; the reverse holds when people maintain more social relationships. Better information will on average be available about people who occupy a more central place in the social network, so they need to invest less in conspicuous consumption than people at the periphery of the network. Socially isolated (e.g. ethnic) groups that nevertheless care about the impression they make on the population as a whole must invest more in conspicuous consumption than the majority group, and so on.

The third chapter studies how a preference for social relationships with richer rather than poorer individuals can give rise to the transmission of economic inequality from generation to generation. If everyone has such a preference for contact with richer consumers, the social network will organise itself such that people mainly mix among equals. If children rely on their parents' social network to assess the productivity of investments in human capital (e.g. education), they will do so on the basis of a non-representative sample. Children from the upper class will hear relatively too many success stories and therefore overestimate the productivity of human capital, while children from the lower class, who hear about relatively too many failures, will underestimate the importance of investments in human capital. This bias causes economic inequality to be passed on from generation to generation. As an extension of this model, the effect of social segmentation and mass media on this class-based bias in the assessment of human capital is also examined.

The last two chapters investigate how status competition can be not only a problem but also an opportunity for policy makers. When levying indirect taxes (e.g. VAT), one can exploit the fact that the meaning of goods depends on the consumption of all consumers in society: taxing a good changes everyone's consumption pattern and thereby the meaning of the good. To appreciate this remarkable possibility, consider the following example. A man in a poor society can show his wife that he loves her by buying her a single rose; in a rich society the same man must buy his wife at least a dozen roses to convey the same message. But if roses are taxed heavily enough, a single rose suffices to convey the message even in a rich society. Both the man and the woman are just as happy after the introduction of the tax as before, and the tax revenue collected is pure gain. Indeed, because the meaning of the taxed goods adjusts, goods that are used solely for communication can be taxed without making consumers worse off, so that the tax revenue is a pure welfare gain. Even when goods are consumed for both intrinsic and communicative reasons, the malleability of the meaning of goods can be exploited to levy taxes at minimal welfare cost. The final chapter accordingly derives a rule characterising optimal indirect taxes in this case.

Keywords


Dissertation
Hedging with futures in agricultural commodity markets.


Abstract

Keywords


Dissertation
Entry, regulation and social efficiency : essays on health professionals.


Abstract

Essay 1: Entry and Regulation - Evidence from Health Care Professions. Abstract: In many countries pharmacies receive high regulated markups and are protected from competition through geographic entry restrictions. We develop an empirical entry model for pharmacies and physicians with two features: entry restrictions and strategic complementarities. We find that the entry restrictions have directly reduced the number of pharmacies by more than 50%, and also indirectly reduced the number of physicians by about 7%. A removal of the entry restrictions, combined with a reduction in the regulated markups, would generate a large shift in rents to consumers without reducing the availability of pharmacies. The public interest motivation for the current regime therefore has no empirical support.

Essay 2: Supplier Inducement in the Belgian Primary Care Market. Abstract: We perform an empirical exercise to address the presence of supplier-induced demand in the Belgian primary care market, which is characterized by a fixed-fee system and a high density of General Practitioners (GPs). Using a unique dataset on the number of contacts of all Belgian GPs, we first investigate whether we can find evidence of demand inducement. We furthermore investigate which type of contact GPs typically use for inducing demand: consultations or visits. Our results indicate that there is a positive effect of GP density on per capita consumption of primary care. We cannot reject that GPs are responsible for part of this effect through inducing behavior. Furthermore, GPs especially employ consultations to induce demand.

Essay 3: Strategic Interaction between General Practitioners and Specialists: Implications for Gatekeeping. Abstract: We propose to estimate strategic interaction effects between general practitioners (GPs) and different specialist types in order to evaluate the viability threat that the introduction of a mandatory referral scheme poses for specialists. That is, we show that the specialists' loss of patientele when patients can only contact them after a GP referral has important consequences for the viability of the specialist types whose entry decisions are strategic substitutes in GPs' entry decisions. To estimate the strategic interaction effects, we model the entry decisions of different physician types as an equilibrium entry game of incomplete information and sequential decision making. This model permits identification of the nature of the strategic interaction effects, as it does not rely on restrictive assumptions on the underlying payoff functions and allows the strategic interaction effects to be asymmetric in sign. At the same time, the model remains computationally tractable and allows for sufficient firm heterogeneity. Our findings for the Belgian physician markets, in which there is no gatekeeping, indicate that the entry decisions of dermatologists and pediatricians are strategic substitutes in the entry decisions of GPs, whereas the presence of gynecologists, ophthalmologists and throat, nose and ear specialists has a positive impact on GPs' payoffs of entry. Our results thus indicate that transition costs are likely upon the implementation of gatekeeping and that these costs are mainly associated with the viability of dermatologists and pediatricians.

In this dissertation, we evaluate the economic consequences of different regulations with respect to health professionals, i.e. general practitioners (GPs), specialists and pharmacies in Belgium.
The organization of the care system determines the economic environment in which these health professionals are active and therefore shapes the incentives they face as economic agents. In three individual essays, we study the behavior of Belgian health providers given the economic environment in which they operate. The conclusions of each essay are relevant to the discussion on cost containment and the optimality of health policies related to health professionals, which is part of an international debate.

Policy research. In many countries, there is intense debate as to whether entry into medical professions should be regulated or left to market forces. That is, on top of licensing procedures to ensure sufficient quality, some countries have additional barriers to entry into specific medical professions. The European Commission has recently taken an interest in this form of professional regulation and published a report describing the state of professional regulation across European countries (Paterson et al. 2003). This report has launched a debate in policy and academic circles on the desirability of entry regulation in, amongst others, health care markets. The first two essays of this dissertation contribute to this debate by investigating the behavior of health professionals in Belgium. Each essay evaluates a specific argument that is used in the policy debate either in favor of or against entry regulation for health professionals.

In the first essay we study firm behavior in the Belgian pharmacy market, given the presence of a population-based maximum on the number of entrants and high fixed margins. In this context, we draw conclusions on the validity of the public interest motivation used to sustain this regulation. That is, we evaluate whether the combination of high margins and entry restrictions in the current regulation for pharmacies is necessary to obtain sufficient geographic coverage of pharmacy services without excessive entry. Our policy simulations show that a combination of entry deregulation and lower margins in the pharmacy market can generate an entry pattern similar to the current one. The policy implications of this essay are clear. The public interest view motivates the current regulation of high margins and entry restrictions as a way to ensure availability in rural areas without excessive entry elsewhere. However, as we are able to generate a comparable provision of pharmacy and physician services under less stringent policies, we find no support for this view. On the contrary, we find that substantial savings can be achieved through deregulation, without necessarily reducing the availability of pharmacy (and physician) services across the country. To the extent that there are no other valid arguments for the existence of the Establishment Act, our findings favor a reduction in the entry restrictions in the Belgian pharmacy market. For the Belgian pharmacy market, we thus conclude that the current entry regulation on top of licensing is not welfare enhancing. Our essay furthermore provides a tool for policy makers to test whether entry restrictions based on population criteria in other countries or for other professional services are in the interest of the public or rather in the interest of the incumbents in the industry.
This is important in the debate on the liberalization of professional services, as it sheds objective light on the desirability of competition-reducing entry requirements.

The second essay looks at the Belgian GP market, in which there are no entry restrictions beyond licensing. As there are thus no bounds on the degree of GP competition in a local market, we study whether the remuneration system in Belgium triggers supplier-induced demand in the primary care market. That is, we study whether GPs artificially increase the demand for their services (and thus their income) when they face a high degree of competition. The empirical analysis of the Belgian primary care market shows a positive relation between GP density and the number of contacts per capita. Our results furthermore cannot reject that Belgian GPs are partly responsible for this finding by inducing demand for their services. If they induce demand, we find that GPs prefer to induce through consultations, despite the higher fee for visits. As a by-product of the analysis, we also find some indication that GPs in markets with a low GP density use their discretionary influence over demand to reduce the number of contacts (visits). When GPs induce demand for their services, a higher GP density in the market is associated with higher consumption of GP care and thus higher health expenditures. This extra care is, moreover, not needed: patients contact GPs more often than they would if they were fully informed. In an era of ever-increasing health care budgets, policy measures that limit GP density at the local market level can therefore be optimal. That is, a lower GP density reduces the incentives for GPs to induce demand; total health consumption will therefore decrease, in principle without an accompanying drop in health status. This strategy is already followed in Belgium through the cap on the number of incoming students in medicine and the Flemish government's Impulseo I plan, which gives GPs financial incentives to locate in areas with a low GP density.

Another highly debated aspect of the organization of health provision concerns patients' access to secondary care. About half of Western European countries operate under a system of free access to all care, whereas the other half has a system of gatekeeping (Boerma 2003). In a gatekeeping system, access to specialists is limited by the requirement of a GP referral (a mandatory referral scheme). This reinforces both the role of the GP as primary care provider and care coordinator and the rationalization of the use of more expensive secondary care. The literature on the optimality of mandatory referral schemes is extensive and covers many different aspects. Instead of evaluating the desirability of a gatekeeping system, the third essay of this dissertation starts from the observation that some Western European countries where health provision is based on free access are now starting to implement elements of gatekeeping. We therefore evaluate the validity of the fear of viability problems for the current body of specialists in case a mandatory referral scheme were introduced. More precisely, the third essay evaluates which types of specialists are most likely to be threatened in their viability if the Belgian care system changes from a system based on free choice to a system with a gatekeeping role for GPs. Our results indicate that specialist types benefit from the presence of GPs in the market.
On the other hand, the effect of specialists on GP payoffs depends on the field of specialization. Dermatologists and pediatricians have a negative impact on GP payoffs, while the entry decisions of gynecologists, ophthalmologists and throat, nose and ear specialists (TNE) are strategic complements to the entry decision of GPs. No significant effect is found for psychiatrists and physiologists. Our findings therefore indicate that dermatologists and pediatricians attract many patients for whom GP care would suffice, while the patientele of gynecologists, ophthalmologists and TNE specialists either are referred or correctly self-refer to these specialist types. Given our results, we expect considerable transition costs if gatekeeping were introduced in the Belgian care system. Especially dermatologists and pediatricians are likely to experience a fall in the demand for their services, which can result in viability problems. It is up to policy makers to decide whether to maintain the entire body of specialists through financial mechanisms or to retrain a portion of them.

Methodology. The approach we take to study issues of regulation in the health care sector stems mainly from the field of New Empirical Industrial Organization. As data availability for health care markets is often limited due to privacy concerns, we make inferences on firm behavior by studying firms' decisions to operate in a local market. This allows us to understand the determinants of market structure, such as the impact of market characteristics and the strategic interactions between firms. The methodology is based on the empirical literature on entry models, which originated with the work of Bresnahan and Reiss (1990, 1991) and Berry (1992). The use of equilibrium models of entry to study health care markets is relatively new and is situated primarily in hospital markets (Abraham et al. 2007). The specific regulation in the markets for health professionals, however, yields interesting extensions to the entry literature. Next to methodological contributions to this fast-growing literature, we contribute by expanding the application field of equilibrium models of entry. Whereas entry models have primarily been used to study the extent of product differentiation in a free-entry context, our research questions instead focus on the impact of regulation. We demonstrate that equilibrium models of entry, next to e.g. demand estimation and merger simulation, are useful tools for increasing the understanding of how specific markets work and for achieving better regulation.

In the first essay, we study the Belgian pharmacy and GP markets. To estimate their drivers of profitability and the effect of competitors and other-type firms on the payoffs of entry, we model their entry decisions as a sequential game of complete information. The model builds on Mazzeo (2002) but differs from the literature in two main respects. First, entry in the pharmacy market is not free, but restricted based on population criteria. We show in the essay that the equilibrium conditions of the free-entry model (i.e. firms enter as long as it is profitable to enter) can be adjusted to incorporate the maximum number of pharmacies allowed to be present in the market. Second, whereas the bulk of the literature studies product differentiation, we analyze a situation in which the entry decisions of the different types are strategic complements.
That is, the model assumes that firm payoffs are increasing in the number of firms of the other type (the assumption that their entry decisions are strategic substitutes is rejected in the estimation results). These adjustments to the equilibrium conditions of the game not only allow us to describe reality better, but also make it possible to simulate how the realized market structures change under alterations to the entry (and price) regulation. The essay is furthermore unique in the static entry literature in that it can draw direct policy implications about the existence and the specifics of the Establishment Act for pharmacies in Belgium. That is, our structural model set-up allows us to perform policy simulations of what the equilibrium market structure would look like under alterations of the law.

The third essay aims to identify the nature of the strategic interactions between GPs and specialist types, which requires the empirical model to account for three specific features. First, we do not know a priori how the strategic interactions between the types are characterized: GP payoffs can be either increasing or decreasing in the presence of different specialist types. Second, we cannot exclude the possibility that the strategic interaction effects between GPs and a specific specialist type are asymmetric in sign. That is, the entry decision of a GP can be a strategic complement to the entry decision of the specialist, while the latter is a strategic substitute to the entry decision of the GP. And third, there are many different types of specialists, so the model has to allow for a high degree of product heterogeneity. We contribute to the literature by putting forward a static entry game that copes with all three of these features: we present an incomplete-information entry game with sequential entry decisions to model the entry decisions of the different physician types. We argue in the essay that modeling and estimating firm conduct should allow for realistic and flexible strategic interactions between types of firms; this, however, involves abandoning either the pure-strategy assumption or the assumption of complete information.

In sum, this dissertation advances the structural modeling of firm behavior, while the applications focus on policy-relevant issues in the organization of health care systems. The essays therefore carefully balance Industrial Organization and Health Economics and try to formulate clear policy implications of the findings.
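To make the incomplete-information entry framework concrete, the sketch below computes equilibrium entry probabilities for two stylised physician types whose payoffs depend on the other type's expected presence. The payoff parameters, the logistic shocks and the simultaneous-move timing are illustrative assumptions only; the essay itself works with sequential decisions, many markets and many physician types.

    # Minimal sketch of a static entry game of incomplete information with two
    # stylised physician types (a GP and a specialist). Parameter values, the
    # logistic payoff shocks and the simultaneous-move timing are illustrative
    # assumptions; the essay uses a sequential-move, multi-type specification.
    import math

    def logistic_cdf(x):
        return 1.0 / (1.0 + math.exp(-x))

    def equilibrium_entry_probs(a_gp, a_sp, d_gp_sp, d_sp_gp, tol=1e-10):
        """Fixed point of best-response entry probabilities.

        a_gp, a_sp : baseline payoffs of entering the market alone
        d_gp_sp    : effect of expected specialist presence on GP payoffs
        d_sp_gp    : effect of expected GP presence on specialist payoffs
                     (the two effects may differ in sign, i.e. be asymmetric)
        """
        p_gp, p_sp = 0.5, 0.5
        while True:
            new_gp = logistic_cdf(a_gp + d_gp_sp * p_sp)  # GP best response
            new_sp = logistic_cdf(a_sp + d_sp_gp * p_gp)  # specialist best response
            if abs(new_gp - p_gp) + abs(new_sp - p_sp) < tol:
                return new_gp, new_sp
            p_gp, p_sp = new_gp, new_sp

    # Specialist presence raises GP payoffs (complements), while GP presence
    # lowers specialist payoffs (substitutes), mirroring the asymmetry above.
    print(equilibrium_entry_probs(a_gp=0.2, a_sp=0.5, d_gp_sp=0.8, d_sp_gp=-1.2))

Estimation would then choose the payoff parameters so that the model's implied entry probabilities match observed entry patterns across local markets.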

Keywords


Dissertation
Essays on the effects of foreign bank entry into Central and Eastern European countries.


Abstract

Keywords


Dissertation
Applications in dynamic general equilibrium macroeconomics.


Abstract

Introduction

This PhD thesis collects three essays on macroeconomic issues that appear fairly distinct at first sight. All three essays nevertheless build upon the same set of modelling principles: optimising behaviour on the part of rational economic agents (firms, households and policy makers), intertemporal linkages, sticky prices, and a general equilibrium perspective. These modelling principles have collectively become known as the dynamic stochastic general equilibrium (DSGE) framework. It is fair to say that this framework has by now become the standard way of doing macroeconomics. There are several reasons for its success. First, optimisation by rational agents imposes discipline on the kind of behaviour predicted by the model. Although the rational expectations assumption has been criticised since its early days, it guarantees that the model is internally consistent and thereby reduces the modeller's degrees of freedom in specifying agents' information sets. Moreover, it is uncontested as a benchmark even in those research fields that use other expectations formation mechanisms. The utility maximisation criterion is useful for characterising optimal decision rules of agents. Second, the general equilibrium perspective is indispensable if we want to analyse the macroeconomy as a whole. A lot of interesting economic action lies in the spillovers from one market to another. Third, the intertemporal linkages in the model are meant to capture the dynamics of real-world macroeconomic time series. Finally, price stickiness is needed for monetary policy to be of any relevance. Despite its many advantages, the DSGE framework is still, without a doubt, work in progress, and I try to take a critical attitude towards it. I hope that my results highlight some of its shortcomings as well as its strong points.

The first two chapters of this thesis confront a DSGE model with the data. This exercise serves two purposes. On the one hand, taking the model as given, we can interpret economic developments through the lens of the model (see Chapter 1). On the other hand, certain model predictions can be tested against the data, which serves as a model selection device (see Chapter 2). Chapters 1 and 2 of the thesis are positive essays: policies are either exogenous or follow simple mechanical rules. Chapter 3, by contrast, is normative and derives optimal monetary and fiscal policies on the basis of the utility maximisation criterion. A benevolent planner sets policy in such a way as to maximise household utility, taking as given the actions of the households and firms, who are in turn maximising utility and profits.

Outline

In the following, I give an outline of the three essays. In Chapter 1, Productivity and the euro-dollar real exchange rate, I start from the conjecture that productivity differentials between the United States and the euro area were responsible for the US dollar's prolonged real appreciation in the 1990s. First, I derive impulse responses from a two-country, two-sector DSGE model. Second, I use these as sign restrictions to identify a structural vector autoregression (VAR). My results show that the Balassa-Samuelson effect, through traded-sector productivity shocks, is far less important in explaining the overall variation in the euro-dollar exchange rate than are demand and nominal shocks. In particular, the strengthening of the US dollar in the late 1990s was mainly demand-driven and cannot be explained by productivity developments.
This interpretation hinges on the model-based restrictions that I have imposed on the VAR. More specifically, the model predicts that productivity improvements expand production along the intensive margin. The terms of trade worsen as home prices fall relative to foreign prices, leading to a real depreciation. In other words, the model is consistent with a weakening of the dollar (in real terms), opposite to what we observed. One way of reconciling a rise in productivity with stable or increasing prices is to consider an expansion of production along the extensive margin instead. This works as follows. Higher productivity encourages the entry of firms that introduce new products. This extra output can be sold without lowering prices, as consumers value the increased product diversity. This train of thought takes me to the young literature on firm entry in DSGE models. Perhaps surprisingly, while the unconditional moments of firm entry in the US have been documented for some time, the conditional moments of this variable are less well understood.

In Chapter 2, Business Cycle Evidence on Firm Entry, I follow an empirical strategy that is similar to the one in Chapter 1, but the focus is different. Here, I use the information provided by my empirical findings to decide which model features are needed to match certain properties of the data. Business cycle models with sticky prices and endogenous firm entry make novel predictions on the transmission of shocks through the extensive margin of investment. I test some of these predictions using a vector autoregression with model-based sign restrictions (a schematic sketch of this identification approach appears at the end of this abstract). I find a positive and significant response of firm entry to expansionary shocks to productivity, aggregate spending, monetary policy and entry costs. The estimated response to a monetary expansion supports the view that entry costs do not fluctuate much over the cycle. Insofar as firm startups require labour services, wage stickiness is needed to make the signs of the model responses consistent with the estimated ones. The shapes of the empirical responses suggest that congestion effects in entry make it harder for new firms to survive when the number of startups rises. To conclude, my VAR results can be useful in ruling out certain combinations of model features that generate "wrong" dynamic responses.

While Chapter 2 aims at improving our understanding of the behaviour of entry over the business cycle, Chapter 3 turns to the implications of firm entry for policy. Chapter 3 is an analytical contribution and has the title Optimal fiscal and monetary policy responses to firm entry. It asks whether fiscal and monetary policies should be concerned with (and thus should spend resources on measuring) firm/product entry and exit. Previous research has found that monetary policy should stabilise product prices and should not respond to fluctuations in the consumer price index coming from product turnover (see Bergin and Corsetti (2006) and Bilbiie et al. (2007)). I analyse this issue in the framework of a stylised DSGE model with two sectors, one producing firms and the other producing consumption goods. Product and labour markets are monopolistically competitive; wages are predetermined; consumption purchases are subject to a cash-in-advance restriction. In order to ensure an efficient steady-state allocation, the policy maker needs to set a labour income subsidy which aligns the markup on leisure with that on consumption goods. The optimal monetary policy sets the net interest rate equal to zero.
Under preset wages, the policy maker can, in general, achieve a higher welfare level than in the flexible-wage economy by manipulating the sectoral composition of production.
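As a rough illustration of the sign-restriction strategy used in Chapters 1 and 2, the Python sketch below implements the generic accept/reject algorithm for sign-restricted VARs (random orthogonal rotations of a Cholesky factor, in the spirit of Uhlig, 2005). The reduced-form coefficients, lag length and restrictions are placeholders, not the ones estimated in the dissertation.

import numpy as np

def irf_candidates(B, Sigma, horizons=16, n_draws=1000, seed=0):
    """Draw candidate structural impulse responses for a reduced-form VAR.

    B     : (n, n*p) matrix of stacked VAR coefficient matrices [A1 ... Ap]
    Sigma : (n, n) reduced-form residual covariance matrix
    Yields impulse-response arrays of shape (horizons, n, n).
    Illustrative only; not the dissertation's code.
    """
    rng = np.random.default_rng(seed)
    n = Sigma.shape[0]
    p = B.shape[1] // n
    chol = np.linalg.cholesky(Sigma)
    # companion form, used to iterate the impulse responses forward
    comp = np.zeros((n * p, n * p))
    comp[:n, :] = B
    comp[n:, :-n] = np.eye(n * (p - 1))
    for _ in range(n_draws):
        # random orthogonal rotation via QR decomposition
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        q = q @ np.diag(np.sign(np.diag(r)))   # normalise column signs
        impact = chol @ q                      # candidate impact matrix
        irf = np.zeros((horizons, n, n))
        power = np.eye(n * p)
        for h in range(horizons):
            irf[h] = power[:n, :n] @ impact
            power = comp @ power
        yield irf

def satisfies_signs(irf, restrictions):
    """restrictions: list of (variable, shock, horizon, sign) tuples."""
    return all(np.sign(irf[h, var, shock]) == sign
               for var, shock, h, sign in restrictions)

if __name__ == "__main__":
    # toy numbers, purely for demonstration
    n, p = 3, 2
    B = 0.1 * np.eye(n, n * p)          # placeholder VAR coefficients
    Sigma = np.eye(n)                   # placeholder residual covariance
    restrictions = [(0, 0, 0, 1.0), (1, 0, 0, -1.0)]  # illustrative signs
    accepted = [irf for irf in irf_candidates(B, Sigma)
                if satisfies_signs(irf, restrictions)]
    print(f"accepted {len(accepted)} of 1000 rotations")

In the actual applications the reduced-form VAR is estimated from the data and the sign restrictions come from the DSGE model's impulse responses; the sketch only shows the accept/reject step that retains the rotations consistent with the imposed signs.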

Keywords


Dissertation
Optimal monetary policy design in dynamic macroeconomics.


Abstract

This dissertation aims to answer two questions that are relevant to the conduct of monetary policy. The first concerns the inference of the preferences of monetary policy makers with respect to their alternative objectives. When the private agents in the economy are aware of the preferences of the policy makers, they will understand the conduct of monetary policy better. This in turn helps them form expectations, which contribute to the determination of macroeconomic outcomes. Hence, transparency about monetary policy preferences provides the policy makers with an additional tool to stabilize the economy, i.e. the expectations channel. In practice, however, monetary policy makers tend to be explicit only about one objective, their inflation target, leaving room for speculation about the relative importance of the other objectives (output stabilization, maximum employment, interest rate stabilization, etc.) and their corresponding targets. Therefore, in this dissertation, we estimate the monetary policy preference parameters jointly with the behavioural parameters of the model for the economy, an approach inspired by the analysis in Dennis (2000, 2006). We do this in the context of the Smets and Wouters (2003) model for the euro area and its 2007 variant for the US. In particular, the common assumption throughout the exercises is that monetary policy is performed optimally under commitment by minimizing an intertemporal quadratic loss function subject to these structural models (a generic example of such a loss function is given further below). Estimated Dynamic Stochastic General Equilibrium (DSGE) models have recently become successful in replicating the dynamics observed in the data and in matching results from previous empirical VAR studies. Since the aggregate equations in these models are based on microfoundations, this framework provides a structural interpretation of the dynamics. This virtue makes DSGE models particularly interesting for studying the implications of alternative monetary policy regimes and quantifying their effects through an analysis of the corresponding unconditional losses.

The second question tackled in this dissertation is related to the performance of optimal monetary policy rules. We compute fully optimal rules under commitment as well as under discretion in order to quantify the stabilization bias for the euro area economy. We also compute optimal simple rules in order to assess their performance with respect to the fully optimal commitment rule. We compare the empirical performance of the optimal rules to the benchmark case in which monetary policy is simply described by an estimated Taylor-type rule. This allows us to understand how realistic the assumption of optimal monetary policy is in a given period. We also investigate the robustness of the optimal rules and the estimated Taylor rule to parameter uncertainty.

The dissertation is made up of three chapters preceded by an introductory note that explains the historical steps undertaken by monetary policy makers to arrive at the current DSGE modeling framework adopted by the majority of central banks for monetary policy analysis. We give an overview of the progress that has been made towards what can be considered today as the state-of-the-art DSGE model used in monetary policy analysis. We assess the merits and the shortcomings of this approach and stress the important challenges for monetary policy.
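For concreteness, a generic example of the kind of intertemporal quadratic loss function referred to above is

L_0 = E_0 \sum_{t=0}^{\infty} \beta^t \left[ \pi_t^2 + \lambda_y \, y_t^2 + \lambda_{\Delta i} \, (i_t - i_{t-1})^2 \right],

where \pi_t is inflation in deviation from target, y_t the output gap, i_t the policy rate, \beta the discount factor, and \lambda_y and \lambda_{\Delta i} the preference weights. This is only an illustration: the periodic loss function specifications actually estimated in the dissertation may contain different arguments and weights, which is precisely what the estimation exercises are designed to determine.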
Chapter 1 performs an optimal monetary policy evaluation exercise for the euro area using the estimates for the structural parameters reported in Smets and Wouters (2003). We consider both the commitment and the discretionary approach towards monetary policy, which enables us to quantify the stabilization bias under discretion in the model. In addition, we assess the performance of alternative optimal simple rules, both with and without interest rate persistence, and compare them to the unrestricted optimal commitment rule. The results qualitatively confirm those reported in the literature on optimal monetary policy evaluation analysed in less sophisticated frameworks characterized by forward-looking rational agents. There is a considerable amount of stabilization bias when the policy maker takes a discretionary approach towards monetary policy. Further, a Taylor rule expressed in terms of the lagged interest rate, the current output gap, inflation, a demand shock and a supply shock performs relatively well in terms of unconditional loss with respect to the optimal commitment rule (a generic form of such a rule is sketched at the end of this abstract). However, the added value of the shocks to the performance of the rule is rather limited.

In Chapter 2, we estimate the Smets and Wouters (2003) model for the euro area with Bayesian methods, describing monetary policy as optimizing an intertemporal quadratic loss function under commitment. Rather than calibrating the preference parameters in the loss function of the central bank at arbitrary values, as is done in the first chapter, we estimate these values and use them in the evaluation exercises. We test alternative periodic loss function specifications and select those that perform best in terms of empirical fit. The time-inconsistency problem due to commitment is tackled by the use of an initialization period. We show that the estimation results converge to the ones we would obtain if monetary policy were operating from a timeless perspective. The results suggest that, in addition to the inflation target, interest rate smoothing and output gap variability have been the main objectives of monetary policy in the euro area.

Chapter 3 extends the estimation exercise of Chapter 2 to the Smets and Wouters (2007) model for the US economy. Again, under the assumption of a central bank committed to minimizing an intertemporal loss function, we estimate the policy preference parameters in three subsamples: (i) the pre-Volcker sample, (ii) the Volcker-Greenspan sample and (iii) the Greenspan sample. We show that monetary policy since Volcker has been mainly concerned with inflation and output growth stability, but also with interest rate variability and interest rate smoothing. Before Volcker, stabilization of the level of the output gap rather than output growth was the objective of monetary policy. We show that the assumption of optimal monetary policy is most realistic for the period after Greenspan's appointment as chairman of the Federal Reserve. Based on a counterfactual analysis, we find that the main source behind the Great Moderation of output growth is the favourable environment of less volatile shocks, while the stabilization of inflation is mainly due to the change in the conduct of monetary policy after Volcker's appointment. We investigate the effects of parameter uncertainty on the performance of the optimal commitment rule against the optimal Taylor rule and the estimated variant of the same Taylor rule under the Greenspan period.
We find that uncertainty affects the optimal Taylor rule and the unrestricted optimal commitment rule to the same extent.
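For reference, the shock-augmented Taylor-type rule discussed above can be written in a generic form (the coefficient names are illustrative and not taken from the dissertation) as

i_t = \rho \, i_{t-1} + (1-\rho)\left( \phi_\pi \, \pi_t + \phi_y \, y_t \right) + \phi_d \, \varepsilon^d_t + \phi_s \, \varepsilon^s_t,

where \varepsilon^d_t and \varepsilon^s_t denote the demand and supply shocks; the limited added value of the last two terms is what the Chapter 1 results document.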

Keywords
