Hasan Dinçer
Ümit Hacioğlu
Editors

Risk Management, Strategic Thinking and Leadership in the Financial Services Industry
A Proactive Approach to Strategic Thinking
Contributions to Management Science
More information about this series at http://www.springer.com/series/1505
The views expressed in this book are those of the authors, but not necessarily of the publisher and editors.
Risk is part of our lives. We take risks to grow and to develop. For financial institutions, risk is an indispensable part of life: they cannot survive without taking risk. The management of risk therefore becomes the main issue. Risks should be managed so as to minimize their threats and maximize their benefits. Risk management is defined as the process of identifying and mitigating uncertainty in investment decisions and of controlling threats to an organization's capital and earnings.
A risk management culture reflects each person's habit of managing risk: it is the disposition to look for risks when making operational decisions. Strategic thinking, on the other hand, is the ability to come up with effective plans in pursuit of the objectives of the organization, whereas leadership is defined as inspiring subordinates to achieve a goal. The financial services industry encompasses the money and capital markets, as well as the foreign exchange markets. This book gathers researchers and market professionals from across the globe to understand risk management culture, strategic thinking, and leadership in the financial services industry.
Risk measurement and management culture on a global basis dates back to 1988. This was the year when the Basel Committee on Banking Supervision (BCBS), after the failure of Herstatt Bank, led the development of a risk-based capital standard for the credit risk of internationally active G-10 banks. The Basel Capital Adequacy Accord (Basel I) was phased in by 1993 and became a world standard in a short time. The Accord, due mainly to financial innovation, was amended in 1996 (effective January 1, 1998) to account separately for market risk alongside credit risk. The BCBS began consulting with the banking industry as early as 1999 on Basel II, due mainly to discrepancies in Basel I's measurement of credit risk and the need for risk-based capital standards for operational risk. The standardized approach of Basel II took effect in the EU by January 2007 and the advanced approaches by January 2008. In the meantime, in 2007 and 2008 the world witnessed a global financial crisis. Basel III is an answer to this financial crisis. Regulatory standards concerning banks' capital adequacy and liquidity risks
were issued by the BCBS in June 2011 and January 2013, respectively. Beyond these minimum capital and minimum liquidity standards, Basel III addresses further major issues, including maximum leverage, governance, and remuneration. The Capital Requirements Directive (CRD) IV, the parallel implementation of the Basel rules in EU legislation, applies from January 1, 2014, and will be fully applicable by 2019 to banks and investment firms; insurance undertakings are covered by Solvency II, and collective investment funds by their own directives.
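The risk-based capital standards traced above all rest on the same basic ratio: regulatory capital divided by risk-weighted assets, with an 8 % minimum under Basel I and II. A minimal sketch of that arithmetic, with made-up exposures and illustrative Basel I-style risk weights (not any actual bank's figures):

```python
def risk_weighted_assets(exposures):
    """Sum of exposure amounts scaled by their risk weights.

    exposures: list of (amount, risk_weight) pairs; illustrative
    Basel I-style weights might be 0.0 for sovereign claims,
    0.5 for residential mortgages, 1.0 for corporate loans.
    """
    return sum(amount * weight for amount, weight in exposures)

def capital_adequacy_ratio(capital, exposures):
    """Regulatory capital as a fraction of risk-weighted assets."""
    return capital / risk_weighted_assets(exposures)

# Hypothetical balance sheet: 100 in mortgages, 200 in corporate loans.
book = [(100, 0.5), (200, 1.0)]
rwa = risk_weighted_assets(book)        # 100*0.5 + 200*1.0 = 250
car = capital_adequacy_ratio(25, book)  # 25 / 250 = 0.10, above the 8 % floor
```

The 1996 market risk amendment and Basel II's operational risk charge extend the denominator with additional risk-weighted components; the ratio itself keeps this form.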
The book comprises five sections and 25 chapters from authors around the globe. The first section elaborates the Economic Outlook and Expectations for the Financial Services Industry. Here, the first two chapters focus on the Global Economic Outlook as well as Sustainable and Inclusive Finance in Turkey. The remaining chapters address Monetary Policy, Banking, and Foreign Exchange Markets. The second section concentrates on Managing Risk in the Capital Markets. Here, the emphasis is on the computation of operational risk, liquidity risk, credit derivatives risk, and financial risk. Section three centers on Volatility, Hedging, and Strategy in a Risky Environment. Here, Extreme Value Theory, the VaR Performance of EMEs during the Fed's Tapering, Jumps and Earnings Announcements, Hedging Scenarios, and Option Scenarios are detailed. Section four spotlights Risk-Based Audit and Structured Finance. The chapters in this section examine Risk-Based Internal Audit, Accounting Perspectives for the Future, Reporting Trends, and Risk Assessment. The last section is on Culture and Leadership in Risk Management. In this section, the chapters concentrate on the role of Risk Management Culture in Strategic Planning, Agile Intrapreneurship in a Volatile Business Environment, Emerging Trends in the Post-Regulatory Environment, the Effect of National Culture, and Contemporary Leadership Styles.
For understanding the risk management culture in the financial services industry, this is the right book, coming at the right time. The editors, Dr. Hacıoğlu and Dr. Dinçer, have done an extraordinary job in collecting the views of academics and professionals from all across the world. This book is an insightful guide to understanding the challenges of risk management culture, strategic thinking, and leadership in the financial services industry.
I recommend this book to all readers who are interested in the notion of risk and the financial services industry.
Integrated risk management systems and their applications have become a key topic for the financial services industry over the last two decades. The lessons learned from the latest global financial crisis have clearly played a major role in understanding the pioneering role of effective risk management systems in strategic business operations. With respect to the major studies in the field, comprehensive risk governance systems have not been sufficiently examined in terms of their ties with corporate culture, strategic thinking, and leadership in global financial institutions.
Another major lesson of the global financial crisis is that corporate culture has a profound effect on risk management attitudes and thereby on shareholder value in the global financial system. Additionally, the major empirical studies in the field point out the role of effective risk management systems in sustaining business performance under volatile business conditions. Adaptive risk management systems offering innovative solutions under such risky conditions have broad ties with economic capital allocation and increase the core value of the business. In the banking industry especially, strategic investment decisions are affected by systemic risks during volatile conditions. Hence, an appropriate interdisciplinary approach to risk management systems should cover theories and practices for a new design, strategic thinking, a proactive culture, and the role of leadership.
This book attaches a broad overview of risk-based empirical studies to strategic thinking, design, culture, and leadership in the financial services industry. It also aims to develop an interdisciplinary approach to risk management practices within a broader context of new design and theory.
This book is composed of five contributory sections. The first section evaluates the 2008–2009 financial crisis in historical context, surveys the economic outlook, and assesses expectations for the financial services industry. The parts of the first section cover the global economic outlook; sustainable and inclusive finance in Turkey; monetary policy divergence and central banking in the new era; foreign exchange risk management; the link between dollarization and its determinants in Turkey; and, finally, enhancing the risk management functions in banking: capital allocation and banking regulations.
The book continues with section two, which assesses basic topics in managing risks in capital markets. Titles covered there include the calibration of market risk measures during periods of economic downturn; computation of operational value at risk using a severity distribution model based on a Bayesian method with a Gibbs sampler; liquidity risk and optimal redemption policies for illiquid investments; credit derivatives, their risks, and their role in the global financial crisis; and an approach to measuring relative financial risk indices: a case study of Indonesian insurance companies. The next section covers empirical studies on volatility, hedging, and strategy in a risky environment. Its titles are the extreme value theory in finance: a way to forecast unexpected circumstances; the value at risk performance of emerging market equity portfolios during the Fed's tapering; jumps and earnings announcements: empirical evidence from emerging markets using high-frequency data; hedging scenarios under competition: exploring the impact of competitors' hedging practices; and option strategies and exotic options: tools for hedging or a source of financial instability?
The fourth section demonstrates the link between risk-based audit and structured finance, with the titles risk-based internal audit; the recent financial crisis and structured finance; compliance and reporting trends with essential strategies; developing a risk management framework; and risk assessment for non-profit organizations.
Finally, the last section builds on culture and leadership in risk management. Its titles are risk management culture and its role in strategic planning; agile intrapreneurship in a volatile business environment: the changing roles of financial managers and risk takers according to the Schumpeterian approach; emerging trends in the post-regulatory environment: the importance of instilling trust; the effect of national culture on corporate financial decisions; and contemporary leadership styles and Schumpeterian creative destruction in a volatile business environment.
The authors of the chapters have contributed to the success of our work by including their respective studies together with case studies. This book gathers colleagues and professionals from multicultural communities across the globe to design and implement innovative practices for the entire global society of finance and banking. With the contribution of scholars and researchers overseas from different disciplines, the authors examined the related topics of risk management by assessing critical case studies in the financial services industry.
Consequently, we believe the scope and quality of this book make it all the more attractive for readers and scholars in this field.
We have many colleagues and partners to thank for their impressive contributions to this publication. First, we would like to thank the people at Springer International Publishing AG: Dr. Prashanth Mahagaonkar, who has the attitude and substance of a genius and who continually and convincingly conveyed a spirit of adventure at each stage of our book development process; Sivachandran Ravanan, our book project coordinator, without whose persistent help this publication would not have been possible; and others who assisted us in making critical decisions about the structure of the book and provided useful feedback on stylistic issues.
We would like to express our appreciation to the Editorial Advisory Board members: Dursun Delen, Ekrem Tatoglu (Bahcesehir University, Istanbul, Turkey), Idil Kaya, Ihsan Isik, Martie Gillen, Michael S. Gutter, Nicholas Apergis, Ozlem Olgu, Ulas Akkucuk, and Zeynep Copur. Their excellent advice helped us to enrich the book.
We would also like to thank all of the authors of the individual chapters for their
excellent contributions.
We would particularly like to thank the Center for Strategic Studies in Business
and Finance for the highest level of contribution in the editorial process.
The final words of thanks belong to our families and parents: Dr. Hacıoğlu would like to thank his wife Burcu and his son Fatih Efe as well as his parents; Dr. Dinçer would like to thank his wife Safiye as well as his parents. They deserve thanks for their enthusiasm, appreciation, help, and love. Their pride in our accomplishments makes the work even more rewarding for the editors.
The Editors would like to acknowledge the following international advisory board
members
Gökçe Çiçek Ceyhun
1 Introduction
I think the US economy is leading global economic growth. The economic growth in the US
is much more broadly based than before. So I have full confidence in the US economy’s
growth (Haruhiko Kuroda).
This quotation gives an idea of the global economic outlook today and in the future. Because the 2008 global financial crisis started in the US and the impacts of the turmoil expanded swiftly all over the world, it is no exaggeration to say that the US economy is a determinant of global economic growth.
The global economy is finally rebalancing after the recent global financial crisis and the Great Recession. One of the major factors supporting stronger economic growth is lower global commodity prices. Moreover, lower fuel prices for households and lower input prices for industry are providing a boost to aggregate global growth and global demand. The global economy is passing from the Great Recession period to a new stage of steadier growth. However, there are still economic weaknesses together with new risks (Global Economic Outlook 2015-2020).
On the other hand, global trade has grown more slowly over the last five years and seems to have slackened after the global financial crisis. Given that powerful trade and global growth go hand in hand, developments in the economic situation are vital for all. International trade intensifies international competition, keeps domestic companies strong, and extends variety for businesses and consumers. That is why the economic outlook of world trade can be identified as a leading indicator of global output (OECD Economic Outlook 2015).
The global economy is still struggling to gain momentum: while high-income countries are trying to remove the traces of the global financial crisis, emerging economies are not as dynamic as in the past. This conjuncture gives countries a chance to reform their economies (Global Economic Prospects 2015).
In this chapter, the current and expected global economic outlook is examined through economic reports by analyzing the global outlook, regional outlooks, emerging economies, and advanced economies.
2 Literature Review
In past years, the world economy has witnessed a great variety of positive and negative macroeconomic episodes. Changes in the economic outlook have raised financial stability risks. The sharp drop in oil and commodity prices continued, and low interest rates supported growth in 2016. Bold monetary policy measures were taken in the euro area and Japan in order to reverse disinflationary pressures. Credit spreads narrowed in the euro area, equity prices advanced, and the yen and the euro depreciated significantly. On the other hand, the appreciation of the U.S. dollar has reflected diverging monetary policies. The rapid changes in real exchange rates have reflected shifts in global economic growth (Global Financial Stability Report 2015).
The global economy faltered in 2015 amid decreasing commodity prices, rising financial market volatility, and low aggregate demand. The growth rates of aggregate demand and gross fixed capital formation remained flat in 2015. It is estimated that the world economy will grow by 2.9 % in 2016 and 3.2 % in 2017 (see Fig. 1). The US monetary policy stance is anticipated to reduce policy uncertainty and to impede extreme volatility of asset prices and exchange rates (World Economic Situation and Prospects 2016).
Fig. 1 Growth of world gross product and gross domestic product by country grouping,
2007–2017. Source: United Nations, World Economic Situation and Prospects (2016)
In the US, the economic outlook, supported by private sector growth, financial sector stability, and rising demand, is more powerful than it has been in years. Although some signals of weakness can be seen, the stability of the US economy has increased. As firms increased their fixed investments and started to employ more workers, private sector activity became much stronger. As a result of strong employment numbers and the steady repair of household balance sheets, aggregate demand has continued to rise. Moreover, the stability of fiscal policy is also contributing to the development of economic growth. In Japan, on the other hand, the economy surprisingly fell into a recession in the third quarter of 2014. New structural reform is still necessary for Japan's economic prospects. Prime Minister Shinzo Abe has offered some reforms, including corporate governance, the promotion of entrepreneurship, and deregulation in National Strategic Special Zones. However, political resistance to these difficult structural reforms must be overcome. On the other side, the growth outlook in the Eurozone is still weak. The latest currency depreciation could defuse the problems of competitiveness, but mild deflation and low inflation still constitute an impediment to economic recovery. Since the crisis, Greece and Spain have performed a variety of structural reforms, but dissatisfaction with the economic recovery has remained. The political leaders of the euro zone should implement new reforms in order to improve the economic outlook (Global Economic Outlook 2015-2020).
Apart from these considerations, developed economies continued to rely on accommodative monetary policy in 2015. This accommodative monetary policy helped to prevent a meltdown of the financial sector and a prolonged recession. Unfortunately, it has not been as effective as anticipated for investment and economic growth. From a historical point of view, the short-term and long-term interest rates of developed economies are still depressed. Ten-year government bond yields for France, Germany, Japan, the United States, and the United Kingdom are shown in Fig. 4.
Fig. 4 Ten-year government bond yields in selected developed economies, October 2005–October 2015. Source: World Economic Situation and Prospects (2016)
As the US labor market has continued to develop progressively, the Fed (Federal Reserve System) has come closer to its first interest rate increase since 2006, which is now anticipated to take place in 2015. Conversely, the other central banks of developed countries are still loosening their monetary policy. The ECB (European Central Bank) has maintained its expanded asset purchase program since March 2015, and it is anticipated to be implemented until the end of March 2017. This program has supported the recovery of the euro zone. The Bank of Japan, for its part, has continued the pace of asset purchases under its quantitative and qualitative monetary easing program (QQME). The financial authorities have decided to implement this program until inflation is stable at 2 % (World Economic Situation and Prospects 2016).
The rise of economic power among the developing countries is inverting the distribution of political power in international governance. The most outstanding emerging powers in the world are the BRIC countries: Brazil, Russia, India, and China. According to a December 2009 report by Goldman Sachs, these countries are projected to overtake the Group of Seven (G-7) economies by 2032. It is estimated that the collective political and military clout of the BRIC countries will be commensurate with their economic power in the coming decades (Toh 2010).
Fig. 5 Contribution to global growth, 2007–2017. Source: World Economic Situation and Prospects (2016)
Developing countries have produced much of the global output growth since the onset of the global financial crisis (see Fig. 5). In particular, China became the locomotive of global economic growth during 2011 and 2012, maintaining strong demand for commodities and boosting export growth in the rest of the world. With a much-anticipated slowdown in China and persistently weak economic performances in other large developing and transition economies, notably Brazil and the Russian Federation, the developed economies are expected to contribute more to global growth in the near term, provided they manage to mitigate deflationary risks and stimulate investment and aggregate demand. Moreover, the decline in commodity prices should help reduce macroeconomic uncertainty and stimulate economic growth in a number of developing economies. It is expected that developing countries will grow by 4.3 % in 2016 and 4.8 % in 2017 (World Economic Situation and Prospects 2016).
On the other hand, Germany has become the largest surplus country in the world as China's current account surplus has narrowed. Germany's intra-euro area trade surplus has narrowed sharply since 2007, but its extra-euro area surplus has continued to widen, as shown in Fig. 6. The growing external surplus of Germany partly explains the widening current-account surplus of the euro area as a whole, which also reflects the rapid adjustment of the external positions of Greece, Ireland, Italy, Portugal, and Spain (World Economic Situation and Prospects 2016).
Against the backdrop of weakening growth, rising financial market volatility, sharp exchange-rate depreciations, and increasing portfolio capital outflows, monetary policies in developing and transition economies showed some divergence in 2015 (Fig. 7).
Fig. 6 Euro area current-account balance (CAB). Source: World Economic Situation and Prospects (2016)
Fig. 7 Central bank policy rates in the BRICS, October 2011–October 2015. Source: World
Economic Situation and Prospects (2016)
3 Conclusion
According to the future global economic scenario, there will be consolidation of the recovery in major global economies, demand in China will be rebalanced, developing countries will undergo a smooth transition, and commodity prices will be smoother. However, these assumptions involve some risks. Changes in economic growth and policy prospects in financial markets could produce tighter credit
References
Conerly B (2015, November 24) Global economic forecast 2016-2017. Forbes. http://www.forbes.
com/sites/billconerly/2015/11/24/global-economic-forecast-2016-2017/#6e14282143f4
Global Economic Prospects (2015) Having fiscal space and using it, 2015. A World Bank group
flagship report
Global Economic Outlook 2015-2020 (2015, January) Beyond the new mediocre? ATKearney
Global Business Policy Council
Global Financial Stability Report (2015, April) Navigating monetary policy challenges and
managing risks. World Economic and Financial Surveys
OECD Economic Outlook (2015) OECD Economic Outlook, Volume 2015 Issue 2. OECD
Publishing, Paris. doi:10.1787/eco_outlook-v2015-2-en
Toh LCCHJ (2010) Brazil, Russia, India and China (BRIC): reshaping the world order in the 21st
century. Lucent: A Journal of National Security Studies
United Nations (2016) World economic situation and prospects. United Nations, New York
Abstract For strong and sustainable growth, finance must be inclusive. Individuals and companies should have equal opportunity in accessing markets and resources. Financial inclusion does not mean pushing access for the sake of access, and it certainly does not mean making everybody borrow. For inclusive economic development, inclusive finance is a necessary criterion. Growth becomes inclusive when it is supported by structural reforms. The main purpose of this chapter is to explore the terms sustainable and inclusive finance, to assess their underlying role in developing countries, especially Turkey, and to reveal the current situation and further possibilities.
1 Introduction
In the traditional view, the ultimate goal of companies is to use resources efficiently and to maximize the risk-adjusted return on capital. This view has been challenged by
many management scholars who argue that companies have a wider responsibility
that goes beyond profit maximization. In this context, the concept of sustainable
development has gained increasing attention and relevance in the last decade (Hahn
and Figge 2011; Jensen and Meckling 1976; Hanley 2000; Fatemi and Fooladi
2013).
The fast depletion of natural resources and the increase in social tension linked to industrial growth and globalization have led to a growing awareness of the urgent need for sustainable business models. Maintaining a business-as-usual model carries significant environmental and social risks, and technological developments, which make communication much easier and information more accessible to a wide range of stakeholders, are increasingly putting businesses on the spot. Businesses have a strong interest in ensuring that such risks do not materialize and in managing existing projects with an approach focused on multi-stakeholder engagement, in which the finance sector can play a crucial role (SFF 2014).
The term sustainable development had the potential to stimulate discursive
engagement with respect to the future development of society within an ethical
framework based around the values of inclusivity, diversity, and integration (Fergus
and Rowney 2005).
Financial inclusion plays a major role in the inclusive growth of a country (Shah and Dubhashi 2015). The concept of inclusive growth represents economic growth that benefits not merely certain segments of society but everyone, especially the poor. One of the underlying causes of differences in prosperity and economic growth between countries is institutional structure. Inclusive institutions reduce the power of the elites in society and create an environment that encourages investment, which paves the way for economic growth.
Inclusive growth in the economy can only be achieved when all the weaker sections of society, including agriculture and small-scale industries, are nurtured and brought on par with other sections of society in terms of economic development (Swamy 2010).
The policy debate has been shifting from the finance-growth nexus to the finance-inequality relationship (Asongu and De Moor 2015). The importance of financial inclusion arises from the exclusion of nearly three billion people from formal financial services across the world. In developed countries, the formal financial sector serves most of the population, whereas in developing countries a large segment of society, mainly the low-income group, has modest access to financial services, either formal or informal (Swamy 2010). An “inclusive financial sector” offers the majority of the population, on a sustainable basis, access to a range of financial services suited to their needs. Building an inclusive financial sector turns the tide on access: inclusion of the majority of the population rather than exclusion (Imboden 2005).
Although definitions of inclusive finance are broad, in its basic and clearest form it has been defined thus: “Financial inclusion is the process that ensures the ease of access, availability, and usage of formal financial system for
all members of an economy” (Park and Mercado 2015). According to another definition, financial inclusion means that “formal financial services—such as
deposit and savings accounts, payment services, loans, and insurance—are available to consumers and that they are actively and effectively using these services to meet their specific needs” (Klapper et al. 2016).
Financial inclusion does not mean pushing access for the sake of access, and it
certainly does not mean making everybody borrow (Miller 2014, p. 9). Financial
inclusion, at a minimum, may be interpreted to mean the ability of every individual
to access basic financial services which include savings, loans and insurance in a
manner that is reasonably convenient and flexible in terms of access and design and
reliable in the sense that the savings are safe and that insurance claims will be paid
with certainty. Empirical evidence shows that inclusive financial systems signifi-
cantly raise growth, alleviate poverty and expand economic opportunity (Mor and
Ananth 2007). Financial inclusion means delivery of banking services and credit at
an affordable cost to the vast sections of disadvantaged and low-income groups. As seen in Fig. 1, the various financial services include savings, loans, insurance,
payments, remittance facilities and financial counseling/advisory services by the
formal financial system (RBI 2008).
Perfect financial inclusion may therefore be described as the capacity to access and use appropriate financial services offered by mainstream providers. Meanwhile, there may be an adequate “second-best choice”: appropriate services offered by alternative providers that comply with rules and regulations and do not exploit low-income people (EC 2008). The importance of financial inclusion can be seen from the following (Shah and Dubhashi 2015):
1. It is a necessary condition for sustaining equitable growth.
2. It protects the poor people from the clutches of usurious money lenders.
3. It makes it possible for governments to make payments under social security schemes through the bank accounts of beneficiaries, by electronic transfer. This minimizes transaction costs, including leakages.
4. It provides an avenue for bringing the savings of the poor into the formal
financial intermediation system and channel them into investment.
5. The large number of low cost deposits will offer banks an opportunity to reduce
their dependence on bulk deposits and help them to better manage both liquidity
risks and asset liability mismatches.
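The multidimensional view of financial inclusion running through these definitions is often operationalized as an index over normalized dimension scores such as access, availability, and usage. A minimal sketch of a distance-based index of that general form (the dimensions and scores below are hypothetical illustrations, not an official measure):

```python
import math

def inclusion_index(dimensions):
    """Distance-based financial inclusion index over normalized
    dimension scores in [0, 1], e.g. access, availability, usage.
    Returns 1.0 at the ideal point (fully inclusive), 0.0 when
    every dimension is zero (fully excluded)."""
    n = len(dimensions)
    # Euclidean distance from the ideal point (1, 1, ..., 1),
    # normalized so the index stays in [0, 1].
    distance = math.sqrt(sum((1.0 - d) ** 2 for d in dimensions))
    return 1.0 - distance / math.sqrt(n)

# Hypothetical scores for one economy: account penetration,
# branch availability, credit-plus-deposit usage.
example = inclusion_index([0.8, 0.5, 0.3])
```

An economy scoring well on every dimension sits near the ideal point and gets an index close to 1; uneven provision (say, high account access but little actual usage) is penalized by the distance term.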
The rest of the paper is organized as follows. Section two overviews existing
literature, section three explains the role and importance of inclusive finance in
developing countries, especially Turkey, and the final section provides concluding remarks.
S.Y. Türkmen and G. Çağıl
2 Review of Literature
There are many studies on financial inclusion in the literature. Zwedu (2014) explored the link among financial inclusion, regulation, and inclusive growth in Ethiopia. The study found that, despite huge progress, financial inclusion is still very low. Sarker et al. (2015) recommended some policy measures to overcome the challenges of financial inclusion with regard to the banking sector's initiatives in financing agriculture in Bangladesh.
Garg and Pandey (2007) suggested that, in order to reduce poverty and propel India towards sustainable human well-being, a comprehensive financial system based on bank-money lender linkages is required. Mor and Ananth (2007) aimed to express a point of view on the financial system design principles essential to achieving the goal of financial inclusion. Dixit and Ghosh (2013) performed a natural hierarchical grouping (cluster) analysis considering parameters such as GDP per capita, literacy rate, unemployment rate, and an index of financial inclusion for a few Indian states. Shyni and Mavoothu (2014) explained how financial inclusion can help in the inclusive growth of the economy.
Swamy (2010) evaluated, using appropriate statistical techniques, the impact of financial inclusion efforts on inclusive growth in the case of a developing economy like India, considering data for the period from 1975 to 2007, and found that bank-led financial inclusion has definitive advantages for inclusive growth in developing economies. Lavoie et al. (2011) investigated the replication of microcredit methodologies as one of the financial inclusion strategies in Brazil; the results are expected to support the expansion of microcredit operations across Brazil.
Block et al. (2013) presented a system dynamics model they were developing
for analyzing the relationship between economic growth and consumer debt
from a financial and distributional-policy perspective. Neupane (2015) studied
the relation between financial access and poverty incidence in Nepal. Inoue and
Hamori (2016) found that financial access has a statistically significant and robust
effect on increasing economic growth in Sub-Saharan Africa. Yorulmaz (2013)
examined a multidimensional measurement of financial inclusion to gauge the
reach of the financial system across time in Turkey.
In most countries around the world, there is growing inequality. In some important
countries, the increase in inequality has been particularly large. This is of
macroeconomic importance, because those at the top consume a smaller fraction
of their incomes than do those at the bottom and middle (Stiglitz 2016).
In mature economies, rates of exclusion tend to be low: for example, only an
estimated 4 % of the population in Germany and 9 % in the United States go without
basic access to services. But in the world's smaller and less mature economies,
financial exclusion rates reach exorbitant levels; approximately 80 % of the
financially excluded live in Latin America, Asia or Africa. In this sense, financial
inclusion poses policy challenges on a scale and with an urgency that are unique
to developing countries. Therefore, financial inclusion has become an important
policy issue, especially in the emerging market economies (Yorulmaz 2013).
In the wake of the global financial crisis, many developed and developing
country governments are prioritizing stability, at both the individual-institution
and systemic levels, by strengthening financial regulation. Even though the latter is
important for making financial systems more robust, its contribution to inclusive
growth might be insufficient, especially in poor countries (Zwedu 2014). The
financial crisis has raised deep questions about the role of the financial sector and
its impact on growth and income distribution. The crisis has also stimulated reforms
to help the sector contribute to growth that is strong, sustainable and inclusive
(OECD 2015).
Internationally, financial inclusion has been viewed in a much wider perspective.
Having a current or savings account is not, on its own, regarded as an accurate
indicator of financial inclusion. Financial inclusion efforts should offer, at a
minimum, access to a range of financial services including savings, long- and
short-term credit, insurance, pensions, mortgages, money transfers, etc., all at a
reasonable cost (Shah and Dubhashi 2015).
Developing countries need to design appropriate strategies for increasing access
to financial services by all segments of the population. They must also turn these
strategies into effective policy measures and implementation plans. This means that
multiple stakeholders must work together to design the strategies and determine
the best ways to organize their implementation. Such an effort entails
co-operation among a range of governments, financial institutions, civil society
organizations, development partners and the private sector. And it requires all
stakeholders to ensure that adequate attention is focused on financial inclusion over
the long term (UN 2006).
As seen in Table 1, financial inclusion has risen significantly in recent years in
developing countries and among people living at the base of the economic pyramid.
Account ownership among the poorest 40 % has more than doubled in a range of
countries with widely varying population sizes and GDPs (UNSGSA 2015).
The United Nations General Assembly adopted the 2030 Agenda for Sustainable
Development. The agenda comprises seventeen sustainable development goals that
apply to all countries, including developing countries. The goals do not target
financial inclusion directly, but access to financial services enables the fight
against poverty (Klapper et al. 2016). Therefore, reaching the goal of inclusive
economic development requires reliable financial inclusion data covering the major
components of sustainable financial inclusion development (GPFI 2016).
Access to financial services is key to growth and sustainability in developing
countries and emerging economies. In addition, access to financial services for
large segments of the population is crucial to reducing poverty and income
inequality in developing countries. However, access to financial services for the
poor majority of the population still remains limited (Sjauw-Koen-Fa and
Vereijken 2005).
According to the 2009 Financial Access report by the World Bank Group, almost
70 % of the adult population in developing countries lacks access to basic formal
financial services, such as savings or checking accounts. According to the report,
the largest share of the unbanked live in Sub-Saharan Africa and South Asia, as
well as East Asia, the Middle East and North Africa, Latin America, and Eastern
Europe and Central Asia (Stein 2010).
Five main challenges are known to prevent financial access for people in
developing countries: lack of financial literacy, lack of valid identification
documents, weak consumer protection and regulation, the situation of the rural
poor, and the difficulty of opening a transaction account (http://blogs.worldbank.org, 2015).
There are several obstacles to access to financial services in developing countries.
For example, high levels of government debt limit the access of firms and
individuals to credit. Lack of access to financial services is a disadvantage
for individuals, especially the poor, women and rural populations, as well as for
firms such as small and medium enterprises (SMEs) (UNCTAD 2015). SMEs are
important drivers of job creation, employment, innovation and GDP growth in
developing countries, but many developing countries lack strong regulation
supporting SMEs' access to financial services (Stein 2010). Moreover, many
financial intermediaries, such as commercial banks, are generally reluctant to
serve SMEs due to the high cost of small transactions.
Governments try to increase access to financial services through direct lending and
by encouraging banks to expand their branch networks in rural areas, lifting the
economy to a higher level (Shafi and Medabesh 2012). Banks, especially
commercial banks, are not adequately providing SMEs with capital in developing
countries. Therefore, there are many alternatives to banking in developing
countries, such as postal
et al. 2014). According to the World Bank Development Indicators, 44.2 % of
firms in Turkey used banks to finance investment in 2013.
As seen in Table 2, banking dominates the Turkish financial sector, accounting
for over 70 % of overall financial services, while insurance services and other
financial activities also show significant growth potential. There are 53 banks in
Turkey (34 deposit banks, 13 development and investment banks, 6 participation
banks). Out of the 53 banks, 21 hold significant foreign capital (30 % of total
assets are held by foreign investors). Turkey's economic growth has resulted in
income growth and a growing, robust middle class with increasing purchasing
power. The increase in the loan product categories offered by banks has also
supported the growth of consumer loans. Within this scope, mortgage loans, which
constitute more than 37 % of total consumer loans, reached more than TL
143 billion, a CAGR of 27 % from 2005 to 2015 (ISPAT 2016).
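The reported mortgage figures can be sanity-checked with the standard compound annual growth rate formula. A minimal sketch in Python; note that the TL 13.1 billion starting value below is back-implied from the reported endpoint and growth rate for illustration, not a figure taken from ISPAT:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Mortgage loans reportedly reached TL 143 billion by 2015 with a 27 % CAGR
# over 2005-2015. A 2005 stock of roughly TL 13.1 billion is consistent with
# those two numbers (back-implied, illustrative only).
implied_growth = cagr(13.1, 143.0, 10)
print(f"implied CAGR: {implied_growth:.1%}")
```

Running the check reproduces a rate of about 27 %, confirming the reported figures are mutually consistent.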
In terms of the financial sector itself, initiatives have developed over the past few
years to scale up sustainability within the banking sector and the investor
community; the dialogue between these different initiatives is still at an initial
stage, and the inclusion of insurance, an important segment of the finance industry,
in the sustainability debate has yet to happen (SFF 2014).
Commercial bank branches per 100,000 adults numbered 19.8 in 2014, and ATMs
per 100,000 adults increased from 28.52 in 2004 to 77.08 in 2014 (data.worldbank.org).
Efforts to develop a more inclusive financial system have been successful:
following these laws and regulations, more than 85 % of the population in Turkey
currently holds some form of saving or deposit account (Yorulmaz 2013).
As a result of broad-based industrial development and massive job creation
throughout the country, growth became more inclusive in the 2000s (Fig. 2). As
seen in the graphs, income inequality, poverty and material deprivation all
declined, and income gaps between regions narrowed (OECD 2014).
The Turkish government introduced financial legislation for a more inclusive
financial system. The Consumer Protection Law of 1995 explicitly covered
financial services and brought various consumer protection regulations within the
framework of the financial sector. In 2003, the By-Laws on Rules and Procedures
for Early Repayment Discount for Consumer Credits and Calculation of Annual
Cost Rate were introduced (Yorulmaz 2013).
Sustainable and Inclusive Finance in Turkey
Fig. 2 Growth has been quite inclusive during the 2000s. Source: OECD Economic Surveys
Turkey (2014)
Table 3 Key financial inclusion data for Turkey and peer comparisons (2011–2014).
Each cell shows the 2014 value, with the 2011 value in brackets.

Indicator                                      Turkey        Europe and     Upper middle
                                                             Central Asia   income
Account (% age 15+)                            56.7 (57.6)   51.4 (44.9)    70.5 (57.2)
Account, female (% age 15+)                    44.5 (32.7)   47.4 (40.0)    67.3 (53.1)
Account, young adults (% ages 15–24)           41.6 (43.8)   35.6 (31.9)    58.1 (51.7)
Saved at a financial institution in the
  past year (% age 15+)                        9.1 (4.2)     8.4 (7.0)      32.2 (24.2)
Loan from a financial institution in the
  past year (% age 15+)                        20.0 (4.6)    12.4 (7.7)     10.4 (7.8)

Source: World Bank; Tomilova 2015
4 Conclusion
Efficient and sustainable financial systems are crucial for developing economies to
achieve long-term balanced development. Governments, regulators and
international financial institutions have recognized that access to financial services
can play a key role in reducing poverty and building a financially sustainable
economy. Financial sustainability cannot be considered a one-dimensional
phenomenon. Contributing to the current debate about sustainable finance is
important for all stakeholders. Collaborative work should be initiated in order to
spread sustainability across financial sector and real sector activities. Inclusive
growth benefits all segments of society during economic expansion. Societies need
sustained growth with inclusiveness. Growth is inclusive if it comes from
structural reforms.
To reduce the number of poor people, achieve human development and increase
the involvement of the private sector, Turkey's adoption of a market system that
supports inclusive business models is seen as beneficial. In Turkey, making the
inclusive market functional is not solely the responsibility of the private sector:
the state (through legal regulation), development partners (with technical
knowledge and funding) and civil society organizations (with advocacy and
awareness-raising activities) should all be actively involved in this process.
Companies adopting inclusive business models achieve positive results such as
increased profitability, new markets, support for entrepreneurship, higher quality
and quantity of employment and a strengthened value chain.
References
Ararat M, Süel E, Yurtoğlu BB (2014) 2014 sustainable investment in Turkey: the case in
context—an update. Sabancı University Corporate Governance Forum of Turkey
Asongu S, De Moor L (2015) Recent advances in finance for inclusive development: a survey.
AGDI working paper, WP/15/005
Block J, Hu B, Pickl S (2013) Inclusive growth and sustainable finance—a system dynamics
model. In: 31st International Conference of the System Dynamics Society, Cambridge,
Massachusetts
Dichter TW, Harper M (2007) What’s wrong with microfinance? Practical Action Publishing,
London
Dixit R, Ghosh M (2013) Financial inclusion growth of India—a study of Indian states. Int J Bus
Manag Res 3(1):147–156
EC—European Commission (2008) Financial services provision and prevention of financial
exclusion, March
Fatemi AM, Fooladi IJ (2013) Sustainable finance: a new paradigm. Glob Financ J 24:101–113
Fergus AHT, Rowney JIA (2005) Sustainable development: lost meaning and opportunity? J Bus
Ethics 60(1):17–27
Ganioglu A, Us V (2014) The structure of the Turkish banking sector before and after the global
crisis, Central Bank of the Republic of Turkey, Working paper no 14/29
Garg AK, Pandey N (2007) Making money work for the poor in India: inclusive finance through
bank-moneylender linkages. Working paper
GPFI (2016) G20 financial inclusion indicators. Global Partnership for Financial Inclusion. http://
www.gpfi.org/sites/default/files/G20%20Set%20of%20Financial%20Inclusion%20Indicators.
pdf
Hahn T, Figge F (2011) Beyond the bounded instrumentality in current corporate sustainability
research: toward an inclusive notion of profitability. J Bus Ethics 10(3):325–345
Hanley N (2000) Macroeconomic measures of sustainability. J Econ Surv 14(1):1–30
Imboden K (2005) Building inclusive financial sectors: the road to growth and poverty reduction.
J Int Aff 58(2):65–86
Inoue T, Hamori S (2016) Financial access and economic growth: evidence from Sub-Saharan
Africa. Emerg Mark Financ Trade 52:743–753
Investment Support and Promotion Agency of Turkey—ISPAT (2016) Financial services sector in
Turkey
Jensen M, Meckling W (1976) Theory of the firm: managerial behavior, agency costs and
ownership structure. J Financ Econ 3(4):305–360
Klapper L, El-Zoghbi M, Hess J (2016) Achieving the sustainable development goals: the role of
financial inclusion. CGAP: 1–20
Lavoie F, Pozzebon M, Gonzales L (2011) Challenges for inclusive finance expansion: the case of
CrediAmigo, a Brazilian MFI. Int Manag 15(3):57–69
Miller M (2014) Financial inclusion. Global financial development report 2014. World Bank: 1–47
Mor N, Ananth B (2007) Inclusive financial systems: some design principles and a case study.
Econ Polit Wkly, Mar 31:1121–1126
Nair T, Tankha A (2015) Inclusive finance India report 2014. Oxford University Press, India
Neupane HP (2015) Advancing inclusive financial system in the next decade: a case of Nepal,
Chapter 5. In: Shrestha MB (ed), Kuala Lumpur, Malaysia
OECD (2014) OECD economic surveys Turkey 2014
OECD (2015) Finance and inclusive growth. OECD economic policy paper, June, No 14
Park CY, Mercado RV Jr (2015) Financial inclusion, poverty, and income inequality in developing
Asia. ADB economics working paper series no 426, pp 1–25
RBI (2008) Report of the Committee on Financial Inclusion, January
Sarker S, Ghosh SK, Mollika P (2015) Role of the banking sector in inclusive growth through
inclusive finance in Bangladesh. Stud Bus Econ 10(2):145–159
Seker M, Correa PG (2010) Obstacles to growth for small and medium enterprises in Turkey,
Policy Research working paper 5323, World Bank
SFF (2014) Sustainable finance forum. Proc Forum, 16 May, pp 1–6
Shafi M, Medabesh AH (2012) Financial inclusion in developing countries: evidence from an
Indian state. Int Bus Res 5(8):116–122
Shah P, Dubhashi M (2015) Review paper on financial inclusion—the means of inclusive growth.
Chanakya Int J Bus Res 1(1):37–48
Shyni VK, Mavoothu D (2014) Financial inclusion—the way towards inclusive growth. Int J Adv
Res 2(2):649–655
Sjauw-Koen-Fa A, Vereijken I (2005) Access to financial services in developing countries.
Rabobank Nederland, September, pp 1–32
Staschen S (2015) Payment innovations in Turkey: not (yet) reaching the unbanked, CGAP.
21 Aug 2015. https://www.cgap.org/blog/payment-innovations-turkey-not-yet-reaching-
unbanked. 5 Aug 2016
Stein P (2010) Inclusive finance. Korea-World Bank High Level Conference on Post-Crisis
Growth and Development, Korea, pp 1–37
Stiglitz JE (2016) An agenda for sustainable and inclusive growth for emerging markets. J
Policy Model: 6295
Swamy V (2010) Bank-based financial intermediation for financial inclusion and inclusive growth.
Banks Bank Syst 5(4):63–73
Tomilova O (2015) Progress and opportunities for financial inclusion in Turkey. CGAP,
12 Aug 2015. https://www.cgap.org/blog/progress-and-opportunities-financial-inclusion-
turkey. 5 Aug 2016
UNCTAD (2015) Access to services a driver for the post-2015 development agenda. Policy Brief
No 35: 1–4
United Nations (2006) Building inclusive financial sectors for development, May, New York
UNSGSA (2015) Financial inclusion—creating an inclusive world. Annual report to the
Secretary-General
Woller GM, Woodworth W (2001) Microcredit as a grass-roots policy for international
development. Policy Stud J 29(2):267–287
Yorulmaz R (2013) Construction of a regional financial inclusion index in Turkey. BDDK
Bankacılık ve Finansal Piyasalar 7(1):79–101
Zwedu GA (2014) Financial inclusion, regulation and inclusive growth in Ethiopia. ODI,
November, Working paper 408
http://blogs.worldbank.org/voices/five-challenges-prevent-financial-access-people-developing-
countries, 2015. 12 Jun 2016
http://data.worldbank.org/indicator. 12 Jun 2016
http://www.microworld.org/en/news-from-the-field/article/financial-inclusion-challenge-developing-and-developed-countries. 12 Jun 2016
http://www.tgmp.net/en/sayfa/who-are-we-/117/0. 18 Jun 2016
http://www.mod.gov.tr/Lists/OECDEconomicSurveyofTurkey/Attachments/1/OECD%20Economic%20Survey%20of%20Turkey%202014.pdf. 25 May 2016
Sibel Yılmaz Türkmen Associate Professor Sibel Yılmaz Türkmen is currently a faculty
member in the Faculty of Business Administration at Marmara University, Istanbul, Turkey.
Dr. Türkmen gained her bachelor's degree from Istanbul University, the Faculty of Business
Administration, in 1999. She received her master's degree (MA) and Doctorate (Ph.D.) in
Accounting and Finance, in 2001 and 2006, respectively, both from Marmara University. She
teaches finance courses at graduate and undergraduate levels, and her research interests are in the
areas of corporate finance, financial markets and institutions, and international finance.
Gülcan Çağıl Associate Professor Çağıl graduated from the Faculty of Business Administration
at Istanbul University in 1999. Dr. Çağıl received her master's degree in Banking in 2001 and her
Doctorate (Ph.D.) in Banking in 2006, both from the Institute of Banking and Insurance at
Marmara University in Istanbul, Turkey. Dr. Gülcan Çağıl is currently a member of the School of
Banking and Insurance at Marmara University.
Monetary Policy Divergence and Central
Banking in the New Era
Bilal Bagis
1 Introduction
Growth rates among the world economies are on a downhill trend. Both the
Emerging Markets (EMs) and the Advanced Economies (ACs) are likely to
converge to the same low growth rates looking forward (Pimco 2015). This is, in
particular, due to the major slowdown in the EMs. Even China has much lower
growth rates today: to be more precise, growth is still high, but its rate is
falling. Japan and the Eurozone are also in recession and need more
stimulus packages to avoid deflationary spirals. In the US and the other EMs,
growth rates are still below potential. Geopolitical risks, meanwhile, are
still high and unlikely to fall in the near future.
As the real economies converge downward, there is strong evidence of convergence
among the financial markets of the West as well. This is particularly clear
for the European economies. It is, after all, one of the key elements of
the currency union (Bagis 2016). This financial market liberalization has brought in
B. Bagis (*)
Faculty of Economics and Administrative Sciences, Department of Economics, Istanbul
29 Mayis University, Umraniye Campus, Umraniye 34662, Istanbul, Turkey
e-mail: bbagis@29mayis.edu.tr
Fig. 1 The policy interest rates around the world. Source: telegraph.co.uk (Telegraph 2016)
free capital movement, and capital movements usually result in spillover effects on
the emerging market economies. Meanwhile, there is currently a significant and
increasing variance within the monetary policy cycles of the major economies.
Growth rates are diverging across regions and unions rather than by level of
development. In particular, the USA and the UK are growing faster, while the EU,
Japan and China are coping with structural issues and the resulting recessions
(Fig. 1).
The American economy is doing quite well these days, and the world's most
prominent central bank, the Fed, has stopped its bond-buying program. As of
December 2015, it has even decided to raise its policy rate. On the other hand,
the ECB and the BOJ are imposing negative rates to stimulate economic
activity. Meanwhile, arguably the third most influential central bank, the People's
Bank of China (PBOC), has also recently increased its expansionary stance.
As the world's major economies shift their monetary policies, though, just a few of
the emerging market economies have taken suitable measures to diminish the
vulnerability of their economies to external shocks.
The IMF projections and relevant studies point to three different trends across the
globe.
First, there is the group of Emerging Markets, where growth rates are lower
than pre-2008 levels, yet these economies are still growing much faster than
their Advanced Economy (AC) counterparts. They mostly respond to the policy
shifts in the ACs. Second, in the UK and the US the recovery has continued, and
they are currently normalizing their policies. The only risk factor is foreign
demand, from Europe and China. And finally, Europe, China and Japan still
struggle with the crisis: deflation risk is still alive and growth rates are down.
There are currently concrete divergences across the world economies, both in
terms of growth and the monetary policies implemented. Even focusing just on the
advanced economies: on the one hand, the US and UK are growing acceptably and
tightening their monetary policy; on the other, the Euro Area and Japan have weak
economies and hence follow expansionary policies (Fig. 2).
A multi-speed world is the core concept behind Pimco's New Neutral
argument. In their December 2015 cyclical outlook, Pimco (2015) stated that they
expect central banks' policies to vary as the global economies continue to
converge. Most sources indicate that even the Fed has accepted and has
already started to act according to this "new normal"; the Fed is said to have
already taken the new "neutral" real interest rate into account.
As monetary policies diverge, one might wonder how economic activity and
growth rates currently stand across the world. The US economy has been
doing relatively well, with growth rates ranging between 1.5 % and 2.5 %. On the
other hand, despite the recent improvements in Japan and Europe, most of the
significant economies, such as the BRICs, Europe and Japan, have had new
recessions post-2009.
Post the 2008 crisis, the world economies have found themselves in a New
Normal. This New Normal may be characterized by new potential levels of output
and inflation, as well as a new era for financial markets, key economic policies and
asset market movements. As Pimco has rightly pointed out, the world economies
are going through a series of changes and hence are likely to crave the growth
figures of the pre-2008 period.
Economic activity across the globe is currently slowing down. The world
economy was growing at an average of over 5 % before the 2008 crisis. Post the
2008 crisis, though, that rate has fallen to around (and in some cases below) 3 %.
Even among the BRICs countries, growth rates are falling. More effective policies
need to be targeted at sustainable growth rates. Yet even these growth forecasts
might slow further with a cutback of QE in the US, which might also trigger
capital outflows from developing economies.
While Europe is still struggling with the recessionary trend, economic indicators
show that the USA is currently the strongest economy among the ACs. The
positive outlook is likely to continue for the foreseeable future. For example, the
shale energy revolution has diminished reliance on oil imports, and the continuous
deleveraging process after the 2008–2009 crisis has decreased the debt burden.
Meanwhile, QEs have helped the asset markets recuperate and thereby helped ease
borrowing constraints. The construction sector, in line with improvements in the
housing sector, has also contributed to dragging the unemployment rates down.
The more dynamic American economy, with its lively labor markets, is gradually
recovering from the Great Recession. The European economy, on the other hand,
is still stagnating in recession. Unfortunately, it does not seem very likely to
recover soon. Some parameters, such as the extremely low inflation rate, even
point to further deterioration. The Eurozone economies are facing serious
structural issues, most of them fundamental and related to the way the Euro
system is designed.
Asian economies are also suffering. Most are slowing down and are likely to
remain weakly growing in the near future. China's export- and investment-oriented
growth strategy has faltered due to low demand from the rest of the world
and inefficient use of expansionary monetary policy. The Japanese economy
is also struggling to escape the trap it has been stuck in since the mid-1990s. East
Asian and Pacific countries have had really high growth rates during the past few
decades. Since the 1997 Asian crisis, and except for China, they have had
relatively low private sector and public leverage.
The Asian economies had traditionally focused on an export-oriented growth
strategy, increased their savings and built huge FX reserves, investing that money
in the American financial markets. This huge capital inflow elevated demand for
the dollar, lowered interest rates and strengthened domestic demand in the US.
Post the Great Recession, though, one thing the EMs have done very well is to
resort to macroprudential regulation to deal with the unprecedented monetary
expansion in the advanced economies. That way, the EMs were able to eliminate
part of the spillover effects of the QEs implemented by the major economies.
The Eurozone economies are suffering from problems in the banking system;
talk of a move towards a banking union has been common recently. The troubles
are likely to carry on to some extent, hence the lower growth forecasts. In the
Eurozone, growth projections are 1.4 % for 2016, 1.7 % for 2017 and 1.8 % for
2018. Growth and inflation figures are expected to rise as more QE comes in.
The Chinese economy has recently been slowing down, which makes many
economists anticipate substantial monetary easing from the PBOC in the near
future. The economy is expected to grow at about 6 %, with an inflation rate of
around 0 %. China's current economic outlook may be summarized as a gradual
slowdown, structural transformation, a highly leveraged economy, an exchange
rate regime shift during the summer of 2015, and occasional asset bubbles and
equity market corrections.
Inflation rates are currently very low globally. Even future inflation expectations
are still extremely low in many advanced economies and are likely to go down
further. Low commodity prices and supply surpluses are the primary reasons that
have drawn inflation rates down. One might wonder why low inflation rates
should matter in the first place. At the core, very low or negative inflation
expectations mean higher real interest rates. Higher real rates discourage
consumption and demand. As policy options are limited in many cases, the
concern is that central banks will not be able to respond.
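The mechanism described here, that falling inflation expectations raise real rates when the nominal rate cannot fall, follows from the Fisher relation. A minimal sketch with illustrative numbers (not figures from the chapter):

```python
def real_rate(nominal, expected_inflation):
    """Exact Fisher relation: (1 + i) / (1 + pi_e) - 1."""
    return (1.0 + nominal) / (1.0 + expected_inflation) - 1.0

# With the policy rate stuck near zero, the real rate moves inversely with
# expected inflation: 2 % expected inflation gives a negative real rate,
# while mild deflation of -1 % turns the real rate positive.
print(f"{real_rate(0.00, 0.02):.2%}")
print(f"{real_rate(0.00, -0.01):.2%}")
```

This is why deflationary expectations tighten conditions even when a central bank holds its nominal rate at zero.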
Inflation rates in both the US and the UK are unlikely to reach the 2 % targets.
The 2 % target is considered an optimal inflation rate by many central banks and
prominent economists. In the Eurozone, as opposed to the US, inflation
expectations are much lower: still positive at some 1 %, but much lower than the
over 1.5 % in the US. The inflation outlook is likely to improve as the base effect
(pass-through) of oil prices and (in the case of the US) of the highly valued
dollar fades away.
In troubled economies such as Brazil, Russia and even Turkey, inflation rates are
relatively high, and recessions and shifts in monetary policy are expected to bring
them down. A recession, for instance, would bring demand down, and hence the
inflation rate would head down. In most EMs where inflation is above target,
though, tighter monetary policy is expected to bring the inflation rate down.
The recent slowdown in China has put downward pressure on commodity prices.
As demand for housing has declined, for instance, steel, cement and glass prices
have been falling (The Economist 2016). Falling producer prices (because of
overcapacity) and decreasing commodity prices have lowered the inflation rate.
The PPI (producer price index) is down 5.9 % (the lowest since 2009) in China,
which increases the deflation risk. Another critical example is declining new home
prices. Headline inflation is expected to fall to 1.5–2.5 % in China. The PBOC is
therefore expected to respond aggressively to stimulate economic activity.
In the Eurozone, in the meantime, headline inflation was 0.2 % in February
2016, and the annual inflation expectation for 2016 is down to 0.1 %. At around
0.2 %, the annual inflation rate is at its lowest level in five years and much below
the ECB target of 2 %; even the 5-year forward expectations are below the target.
Low energy and food prices, as well as a low capacity utilization rate, contribute
to the low inflation rate and expectations. The extremely low inflation rate brought
in new monetary easing via a much lower negative rate and an increase in the
scale of the ongoing QE program. All of these developments affect investors' risk
perception and global economic growth prospects.
2 Policy Implementation
After the Great Recession, the advanced economies of the West mostly followed
Keynesian expansionary policies. Nowadays, though, monetary policies are
diverging; it is rather as if the ECB, the BOJ and the PBOC are replacing the Fed
and the BOE (and even the BOC) in keeping global liquidity high. In particular,
the Jackson Hole meeting of August 2014 marks a key period of clear policy
divergence across the world economies. Since the meeting, the central banks of
the US and the UK have been tightening, while the ECB, the BOJ and the PBOC
(as well as some other European central banks) are expected to expand further.
The BOJ and the ECB are currently running expansionary monetary policies, while the Fed and the BOE are contracting via interest rate hikes. The PBOC is on its way to expand further. The ECB and the BOJ (in a similar manner to their Swedish,
Monetary Policy Divergence and Central Banking in the New Era 31
Danish and Swiss counterparts) keep cutting their policy rates into negative territory. These policy implementations and the changing interest rates are causing ever greater monetary policy divergence.
In theory, policy options during a recession such as the latest Great Recession include, among others: currency flexibility (a flexible exchange rate), monetary easing, fiscal stimulus, and various improvements in consumption patterns. After the 2008 crisis, all of the big central banks (the Fed, the ECB, the BOE and the BOJ) except the PBOC decreased their policy rates to close to the zero lower bound. China, on the other hand, still has the highest nominal interest rates among the big economies, and the PBOC has not hesitated even to tighten its money supply. This particularly obvious policy divergence feeds the 'currency wars' argument.
Here, it is probably important to keep in mind that unconventional monetary policy, in the form of balance sheet expansion, is a substitute for lower policy rates. Meanwhile, for the fragile (export-dependent) Asian economies, currency devaluation seems a good option to address the current weak activity. Yet devaluations have side effects: a currency war that drags down other currencies as well, and lower commodity prices as demand falls. Falling oil prices, for instance, in turn decrease the relative value of EM currencies. Devaluations also dampen risk appetite and thereby raise financial market volatility.
One disadvantage of expansionary policies is that, at some point, economic growth may become directly dependent on loan growth, hence the idea of 'hormone-fed' growth. The IMF has long been warning about credit growth rates in emerging markets such as Turkey and China. In China especially, aggregate credit amounts to over $20tn, a few times the size of Chinese GDP, and has more than doubled in the past 5 years.
The Lehman effect on the global financial system has still not been erased completely. As mentioned above, most of the world economies are still below their pre-2008 potential production levels. While the monetary policies implemented so far have in general been effective, in special cases, such as the Eurozone and Japan, they have proven insufficient. Many central banks are still dealing with the adverse outcomes of the 2008–2009 crisis. According to Pimco (2015), more than 40 central banks eased monetary policy during 2015. The ECB, the BOJ and even the PBOC are very likely to keep their expansionary monetary policies going (Fig. 3).
32 B. Bagis
The year 2016 witnessed a new round of stock price collapses. Ongoing recessions in various big economies and declining trade raised market expectations of new expansionary policies. Yet, particularly in the case of China, there is a concern that central banks are increasingly engaged in tit-for-tat policy actions. The biggest danger in this respect is the PBOC and the very real possibility of a huge currency devaluation by the Chinese authorities. Indeed, the recent monetary easing by the BOJ and the ECB has increased the likelihood of such a devaluation cycle.
In emerging market countries such as Russia, Brazil and Turkey, where inflation is above target, tighter monetary policy is expected to bring inflation rates down. Even the IMF suggests that emerging market economies should focus on tighter measures to deal with, among other things, capital outflows. The Fed is expected to hike its policy rate (tighten) gradually. The BOE and high-inflation countries such as Brazil and South Africa are also expected to tighten right after the Fed. Other central banks, such as the PBOC, the BOJ and most notably the ECB, are all very likely to ease further, either by expanding QE or by cutting their policy interest rates further.
Advanced economy central banks are diverging. The Fed raised its policy rate in December 2015 for the first time in about 10 years and plans to increase it gradually further. Japan and the Eurozone are still easing their policies, moving even further into negative territory. The fact that not all of the advanced economies are tightening helps ensure that monetary tightening has minimal repercussions for the rest of the world.
The Fed is currently in a tightening process, and its rate hikes are expected to be slow and gradual. The BOE is also expected to keep rates up and to tighten in line with the Fed. Growth and inflation figures are expected to rise as the economy recovers further in both the US and the UK. As the Fed tightens, mortgage rates and long-term rates are also increasing. Rising bond yields mean more capital flowing out of the EMs and towards the US. This reversal of capital flows (the opposite of the post-2008 period) could potentially cause dislocations in the financial markets.
One thing is for sure: the current Fed Chair, Yellen, is dovish. She is likely to favor keeping nominal rates low, yet there is a hawkish stance within the Fed in general. She has also explicitly stated her intention to pursue a gradual exit, in a way "walking on eggshells" when reversing the QE policies. The Fed finally raised its policy rate, for the first time in about 10 years, in December 2015. It is expected to raise short-term rates by 1 % in 2016 and another 1 % in 2017, towards 3.25 %. This is, indeed, consistent with the assumption that the neutral (or equilibrium) policy rate is at 2–3 % nominal and 0–1 % real. Yet, since that decision had been expected much earlier, the historic rate rise did not affect the markets much. The Fed, meanwhile, based its decision on favorable data from the American economy.
In the US, especially after the recent policy rate hike, both mortgage rates and long-term bond yields have risen. As nominal rates go up in the US, more and more capital flows into the US from the EMs in order to benefit from those higher rates. This is the fundamental reason behind the recent financial market volatility.
This gradual upward trend is partly about the movement of the neutral real policy rate. The neutral real rate appears to have changed recently (see PIMCO's "new neutral" argument, as opposed to the constant real rate argument of Fama, a classical economist; Pimco 2015). It is currently considered to be around 0 % and moving up very slowly. The equilibrium real policy rate (or "neutral" real policy rate) was at 2 % before the global financial crisis of 2008. The "equilibrium" real policy rate mentioned here is the neutral rate 'r*' that shows up in the Taylor rule.
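As a worked sketch of that rule (the calibrations below are illustrative assumptions, not the author's estimates), the Taylor rule sets the nominal policy rate from the neutral real rate r*, inflation, and the output gap:

```python
def taylor_rule(r_star, inflation, target_inflation, output_gap,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate implied by a standard Taylor rule:

        i = r* + pi + a_pi * (pi - pi*) + a_y * (output gap)

    All arguments are in percentage points.
    """
    return (r_star + inflation
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

# Pre-crisis style calibration: r* = 2 %, inflation on a 2 % target, zero gap
print(taylor_rule(2.0, 2.0, 2.0, 0.0))   # 4.0

# "New neutral" style calibration: r* near 0 %, 1 % inflation, -1 % output gap
print(taylor_rule(0.0, 1.0, 2.0, -1.0))  # 0.0
```

The second calibration illustrates why a lower r* pulls the prescribed policy rate towards zero even at only mildly sub-target inflation.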
The BOE, on the other hand, follows the targets set by the British parliament. It currently seeks to achieve the 2 % inflation target, and that only. The UK economy is also doing relatively well. The BOE, therefore, is expected to tighten, in order to deal with the side effects of Fed policy and to achieve its inflation target. Its tightening is expected to follow that in the US; yet, while the Fed adjusts its LSAPs once, the BOE follows a more active policy and adjusts its asset purchase policies frequently.
The following comparison is useful as a measure of the monetary easing used in the UK in the past. The effect of LSAPs on long-term yields has been small in Japan, due to the small scale of the purchases and the shorter maturities of the assets bought. In
the UK, however, it was as high as in the US, due to the similar scale and maturity of the assets purchased. For instance, during the first few years of the post-Great Recession period, LSAP-to-nominal-GDP ratios were 4 % in Japan, as opposed to 12 % in the US and 14 % in the UK.
than the Great Moderation period in the G7 countries. Indeed, in most of the Asian economies, further expansionary policies are expected going forward, via decreases in interest rates or depreciation of the local currency (Eichengreen 2016).
For example, at the end of January 2016, in its first interest rate move in over 5 years (since October 2010), the BOJ cut its policy interest rate to −0.1 % (previously +0.1 %). Many considered this a surprise; yet it is consistent with the ongoing easing process aimed at hitting the 2 % inflation target. The BOJ therefore now charges a 0.1 % fee on deposits held as reserves with it. The goal is to boost lending on the financial institutions' side and hence borrowing on the real sector's side, and in that way to take the economy out of deflationary territory. The BOJ has also delayed the expected date for achieving its 2 % inflation target, which is one of the key goals of Abenomics.
The BOJ has been expanding its monetary base at a rate of 80 trillion yen per annum since October 2014. It has also announced that going further into negative territory is still on the table, if necessary. This is just another example of monetary policy divergence across the globe: the BOJ has separated from the Fed and instead aligned with the expanding regions. The BOJ decision follows the same move made by the ECB in June 2014, and is likewise aimed at having the commercial banks lend their excess reserves, held at the BOJ, to the real sector and businesses.
The BOJ is buying long-term government debt, as well as private assets such as real estate trusts. Mr. Kuroda and his team are determined to do 'whatever it takes' to achieve the inflation target. The BOJ is expected to keep expanding its monetary base by 80tn yen (equivalent to $705bn, or 16 % of gross domestic product) in the near future. The BOJ balance sheet will therefore soon reach 80 % of GDP and, at that level, will be far bigger than the balance sheets of its counterparts, namely the Fed, the BOE and the ECB. It also plans to extend the maturity of its asset holdings, towards 10 years.
The PBOC is also easing its monetary policy, by cutting deposit rates or required reserve ratios. For instance, towards the end of January 2016, the PBOC launched its largest easing move in 3 years. The alternative to that huge monetary expansion was accepting deflation and a recession, but China preferred monetary easing to deflation. Indeed, after the 2008 crisis, in an effort to deal with weaknesses in its export and investment sectors, the Chinese authorities paved the way for an extensive easing policy: they increased investment and eased credit policies. The recent easing cycle of the PBOC is a candidate for one of the greatest monetary expansions ever.
China has, during the past 8 years, been in a process of monetary easing and cutting policy rates in an effort to keep the growth rate at its previously high levels. In 2009, the Chinese government injected about $600bn into its slowing economy in an effort to stimulate it, and the PBOC increased the money supply by more than the total amount of the previous 4 years. It worked, and temporarily boosted economic activity. At the end of January 2016, the PBOC announced that it plans to increase the monetary base by another 600bn yuan (a little over $90bn).
This latest attempt is meant to deal with any possible liquidity crunches in the near future, to stabilize interest rates (by controlling the liquidity needs of the market), and to diminish the pressure to cut the reserve requirement ratio (another expansionary policy measure). This is, meanwhile, in addition to the ongoing expansionary measures.
Unlike most of the developed world economies, China has not yet run out of conventional policy measures to deal with cyclical volatility. Policy rates are not yet at the zero lower bound, so it still has huge room for maneuver, which allows for more effective use of monetary policy than of any fiscal expansion. Yet monetary expansions have caused many problems as well. Mostly due to the inefficient injection of the new money supply, they led to higher leverage and hence increased the financial risk of local governments and private corporations. The misdirected loans created a housing bubble and swelled the debt overhang.1
As the Chinese economy has slowed down, expansionary policies are more likely going forward (in particular in 2016 and 2017). Chinese officials have already declared their willingness to expand monetary and fiscal policy to boost economic activity. Indeed, even in most of the other Asian economies, further expansionary policies are expected, as part of an effort to boost economic activity via lower interest rates or a depreciation of the local currency.
Monetary easing policies have also led to the discussion of currency wars. The idea is that when one country goes for an expansionary policy, others follow, devaluing their currencies in turn. As an example, America and China are two big trade partners. Once one of them devalues and makes its currency relatively cheap, the other may retaliate and devalue its own currency to make its goods more attractive. Looking forward, countries may also work on making their own currencies more attractive in order to gain the "exorbitant privilege". The US dollar is the dominant global currency today, yet China may try to make the renminbi a global currency and thereby benefit from that privilege (Fig. 4).
The PBOC, in particular, should be expected to assume, in the foreseeable future, the role the Fed once took. The Fed provided an unprecedented amount of QE after the collapse of Lehman Brothers; the PBOC could, and indeed should, go for a similar measure to stimulate economic activity in China. It has signaled that it would not hesitate to take on such a role. Yet the risk of causing uncertainty is there: if the dollar loses its reserve currency role, financial markets may have difficulty finding the next best safe haven to insure against a new financial turmoil.
1 It was the greatest ever monetary expansion in world economic history, according to some: https://www.foreignaffairs.com/articles/china/2016-01-11/end-chinas-rise
Fig. 4 Interest rates and monetary policy divergence. Source: the Fed
Not all EMs exhibit the same characteristics. Given the interest rate hikes in the US and the recent oil price slump, most oil-producing EMs have had to tighten their monetary policy. These contractionary policies are implemented in an effort to stop the devaluation of their currencies and to keep the resulting inflation down. Examples include the Latin American oil producers Brazil and Mexico, as well as the African oil exporters Nigeria and Angola. They keep interest rates high to attract capital and hence foreign currency inflows.
Some EM economies routinely run huge current account deficits and are therefore more vulnerable to external shocks. The Fed rate hike, negative policy rates and further easing elsewhere, as well as economic weakness in China, are all crucial factors determining the fate of EM financial markets. These economies frequently face boom and bust cycles; high growth rates are usually followed by sharp collapses. The 'Fragile Five' economies identified by the investment bank Morgan Stanley (namely Turkey, Brazil, South Africa, Indonesia and India) are a good example. These varying responses lead to a necessary divergence even among the different EMs.
Although most EM currencies have weakened recently, the corresponding economies' outlooks differ substantially. Economies with current account surpluses and relatively strong growth prospects are usually more resilient to tighter policies in the ACs. In contrast, the currencies most at risk of a further sell-off are those of countries with larger current account deficits and weak growth prospects, such as
Brazil, Turkey and South Africa (Eichengreen and Bordo 2002). EM assets are likely to be driven by the forces of diverging monetary policies in core markets. As the US takes back the lead of the global business cycle, EM economies will face gradually higher Fed rates and a stronger USD.
In the Eurozone and Japan, standard monetary policy has run its course; to make a difference, central banks need to turn unconventional. At the end of January 2016, the BOJ announced that it would implement negative rates on bank deposits held at the central bank. This policy followed similar measures in the Eurozone and in some smaller European economies such as Sweden, Denmark and Switzerland. Before that, the ECB had been giving strong signals of a new round of expansionary policy. Negative rates basically mean that when a bank keeps extra cash in its account at the central bank, it has to pay the central bank a certain percentage to keep its reserves there. Banks are thus encouraged to lend money out rather than keep it at the central bank.
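As a minimal sketch of this mechanism (the reserve figure below is hypothetical; the −0.1 % rate is the BOJ deposit rate discussed above), the annual charge a commercial bank pays on excess reserves under a negative deposit rate is simply:

```python
def negative_rate_charge(excess_reserves, deposit_rate):
    """Annual amount (a positive number) a bank pays to park excess
    reserves at the central bank when the deposit rate is negative.
    With a non-negative deposit rate, no charge applies."""
    if deposit_rate >= 0:
        return 0.0
    return excess_reserves * -deposit_rate

# Hypothetical bank holding 500bn yen of excess reserves at a -0.1 % rate:
# 0.1 % of 500bn yen = 0.5bn yen per year
print(negative_rate_charge(500e9, -0.001))
```

That recurring cost is exactly the incentive the policy relies on: lending the reserves out becomes cheaper than hoarding them.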
The negative rate policy, along with the recent monetary policy divergence trend across the globe, is a reminder of the changing trends in central banking itself. Broadly speaking, the recent central banking trends may be summarized in the two following classifications:
The first group of central banks is changing just the tools they use: they either return (or are trying to get back) to the conventional tools and the active use of the interest rate channel (as in the case of the Fed), or focus on unconventional policies, including QE and negative rates. The second group is changing the focus of central banking fundamentally, towards developmental central banking (as in the case of Argentina) or towards the recently popular nominal GDP targeting (which has even been discussed at the Fed).
Nominal GDP targeting and developmental central banking are the less popular trends. The usual focus today is on negative nominal interest rates. Easing further, beyond the ZLB, by implementing negative rates is the new trend, among the advanced economies in particular. Central banks in Japan, the Eurozone, Sweden, Switzerland and Denmark have all gone for negative rates. Recently there has even been talk of the Fed switching to a negative interest rate policy. Although Yellen has stated that the Fed has not ruled out that option, implementing negative rates is not very likely in the US: as pointed out in a recent PIMCO quarterly outlook, the possibility is very low due to the unintended side effects of such an expansion.
Meanwhile, there is a greater tendency and willingness, at the Fed and the BOE, to reverse the monetary easing and increase interest rates. Yet even in those economies, short-term rates are currently still stable at or close to the zero lower bound. More importantly, the Fed is still following its post-1980s trend of a neo-liberal inflation focus.
4 Concluding Remarks
Monetary policy divergence is the new normal of the modern financial system, despite the fact that the world economies are increasingly converging (Pimco 2015; Clarida and Balls 2015). The divergence exists both among today's advanced economies and among the emerging economies, and it is not independent of global growth and inflation projections, or of the nature of the unconventional policies. Frankly, the QE policies of the post-2008 period had implications similar to those of the post-1980s financial liberalization period: both were claimed to have resulted in significant spillover effects on the EMs. Hence, macroeconomic instability and more fragile financial markets were inevitable. Some even argue that they caused an illusory growth in the EMs via huge capital and liquidity inflows.
The theory goes that the ultra-loose monetary policy (and extremely cheap loans) of the past 30 years, mainly in the US, caused bubbles in various markets, both in the US and in its big trading partners such as China. Low inflation rates and very low nominal rates created an 'Alice in Wonderland' vision across the world economies. In an effort to deal with these macro volatilities and financial bubbles, policymakers implemented macro-prudential measures alongside expansionary monetary policy. That trend might change now, as the policy divergence becomes crystal clear.
Since the Great Recession, markets have welcomed all the proactive monetary policy actions and have for the most part responded positively to any expansionary measures. Central bankers, for the most part, have done their job to stimulate the stagnant world economy: policymakers have cut policy rates over 600 times and expanded their balance sheets by over $10tn since 2008. Meanwhile, ever since the Fed's first QE, spillover effects have been at the heart of the discussions. Tapering might indeed have caused more volatility across the world economies; in particular, the most fragile EMs suffered the most from the end of the long expansionary program. EM currencies lost value as capital flowed out, and the capital outflows decreased the value of the EM currencies further. As EM currencies lost value, risk premia and debt burdens went up. Nowadays the same discussion is still alive, but this time within the context of policy divergence.
Policy responses across the world economies to the policy divergence are changing as well. Just as they did to the earlier expansionary policies of the ECB, markets seem to respond positively to the BOJ's easing policy. The only problem is whether further weakening of the yen and the euro will cause problems for the US (via dollar appreciation) and China (via yuan appreciation as a result of dollar appreciation). The export sectors are currently a headache for both economies. Central banks (in particular those in the major world economies) should beware of the spillover effects of their policies; hence policy coordination among the major central banks is of the essence.
Inflation expectations are another crucial topic here. As once pointed out by Mr. Draghi, of the ECB, there is a significant positive relationship between “the
size of a central bank balance sheet and the inflation expectations”. Maybe increasing balance sheets in the Eurozone, Japan and even China (economies that are dealing with a major deflation risk) will help raise inflation expectations.
Policymakers, and central banks in particular, usually face the dilemma of choosing between the continuity of the power they hold and the economic outcome and efficacy of their policies in the global financial markets (as well as the internationalization of their currencies). The expanding countries fall mostly into the group preferring economic outcomes to power and influence. The currently tightening ones, on the other hand, are the ones doing relatively well, and hence are more focused on getting bigger and gaining more effective international roles.
Monetary policies across the world economies are currently diverging. The world financial markets are at an all-new normal today. Despite the growing synchronization, economies are becoming more diverse in terms of their policy implications. As pointed out by El-Erian (2016) and ECB president Mario Draghi, central banks have recently been the single and ultimate source of expansionary policies since the onset of the Great Recession. This makes the policy divergence much more important than has been contemplated.
An important risk factor here is that, in the case of an expansionary policy, a change in the value of a major currency may cause uncertainty and side effects in various other markets, such as commodities. For instance, a devaluation of the yuan may be directly reflected in a fall in oil prices. It would therefore decrease the risk appetite in the global financial markets, and in particular towards the EMs.
References
Bagis B (2016) “Döviz Kuru Sistemleri” (in Turkish). In: Eroglu N, Dincer H, Hacioglu U (eds) Uluslarası Finans: Teori ve Politika. Orion Kitabevi, Ankara
Clarida R, Balls A (2015) December 2015 cyclical outlook. PIMCO.com. Retrieved 13 Feb 2016
Eichengreen B (2016) China’s exchange-rate trap. https://www.project-syndicate.org/commentary/china-renminbi-crisis-capital-controls-by-barry-eichengreen-2016-02. Retrieved 10 Feb 2016
Eichengreen B, Bordo M (2002) Crises now and then: what lessons from the last era of financial globalization? NBER working paper no 8716
El-Erian MA (2016) The only game in town: central banks, instability, and avoiding the next collapse. Random House, New York
Pimco (2015) December cyclical outlook. https://www.pimco.com/insights/economic-and-market-commentary/global-central-bank-focus/r-new-neutral. Retrieved 27 Feb 2016
Telegraph (2016) Why negative interest rates herald new danger for the world. http://www.telegraph.co.uk/finance/economics/12149894/Mapped-Why-negative-interest-rates-herald-new-danger-for-the-world.html
The Economist (2016) China’s excess industrial capacity harms its economy. http://www.economist.com/news/business/21693573-chinas-excess-industrial-capacity-harms-its-economy-and-riles-its-trading-partners-march. Retrieved 3 Mar 2016
1 Introduction
Expansion into foreign markets offers new prospects but also gives rise to more complicated risks, particularly financial ones, and it is the task of accountants and corporate treasurers to advise on how to reduce these risks. In particular, exchange rate movements can affect the size of payments both to and from overseas. We define 'exposure' as the extent to which a company is affected by exchange rate changes. Unanticipated exchange rate changes have serious implications for business houses.
Suppose that the sum a company is due to receive falls because of a change in the exchange rate; the company will then find its profits squeezed even if costs remain unaffected. It is equally likely that an exchange rate movement raises the prices the company has to pay for overseas components, which would also reduce the profit margin if selling prices are fixed. What makes this particularly perturbing is that the direction of change in the exchange rate is uncertain, and an individual company has no control over it. Moreover, the company faces additional costs if it chooses to eliminate the uncertainty associated with such fluctuations by hedging its exchange rate exposure in the financial markets. Put simply, exchange rate changes are of great significance to a company because they affect its overall profitability.
Now we come to the issue of mitigating these risks. Companies face a trade-off between risk and control, and in the context of foreign exchange risk this trade-off can be made operational. An exporter may face a transaction risk, which can be guarded against by dealing in a variety of currencies to generate a portfolio diversification effect. At the opposite end of the control spectrum, a company owning investments abroad has to take on additional risk with regard to the valuation of those assets. It is therefore the prerogative of each business house to decide the degree of protection it wants against foreign exchange risk. Hedging refers to any facility whose objective is to reduce risk; in foreign exchange risk management parlance, it refers to the coordinated buying and selling of currencies with the objective of minimizing exchange rate risk. If hedging is used to reduce risk, then any international company exposed to such risks faces a dilemma: should the hedging strategy be comprehensive, covering all risk, or should it be selective? The moment a risk is hedged, an associated cost crops up; so, by choosing to hedge only key contracts, or only exposures above a specified value, there is the prospect of saving money. The extent to which hedging instruments are used, and whether companies should merely seek to reduce risk or should engage in currency speculation, is a matter of debate.
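The selective strategy just described can be sketched as a simple filter (the threshold and exposure figures below are hypothetical illustrations, not from the text):

```python
def select_exposures_to_hedge(exposures, threshold):
    """Selective hedging rule: hedge only those exposures whose
    absolute value meets or exceeds a chosen threshold, leaving
    smaller positions unhedged to save on hedging costs."""
    return [e for e in exposures if abs(e) >= threshold]

# Hypothetical USD exposures (negative values are payables), $1m threshold
exposures = [2_500_000, -400_000, 1_200_000, -3_000_000, 250_000]
print(select_exposures_to_hedge(exposures, 1_000_000))
# [2500000, 1200000, -3000000]
```

The two small positions are left open: the cost of hedging them is judged to outweigh the risk they carry, which is precisely the comprehensive-versus-selective trade-off discussed above.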
This paper deals with the measurement and management of the financial impact of international operations, with specific reference to exchange rate risks. Exchange rate risks are classified under the broad headings of economic, transaction, and translation risk. Section 3 takes up the measurement of economic, transaction and translation exposure. Control of the impact is assumed to operate through the application of hedging techniques, the most important of which are described in some detail in Section 4. A recent chronicle of foreign exchange risk management in the Indian context is discussed in Section 5. The paper ends with a conclusion.
In Looking into the Foreign Exchange Risk Management 45
It is a challenging assignment to predict exchange rates with perfect accuracy, but firms can at least measure the degree to which they are exposed to exchange rate fluctuations, depending on the types of exchange rate risk they face. It is common practice to summarize the financial risks of foreign trade or investment under three broad headings: economic, transaction and translation risk.
With the advent of globalization, capital moves quickly to take advantage of fluctuations in exchange rates. Economic risk arises when there is a risk of variation between actual and forecasted cash flows as a consequence of volatile exchange rate movements. This matters because, according to finance theory, the present value of a company's future cash flows can be used to determine its market value. Stock markets worldwide continually revise their valuations of quoted stocks; the overall market value of a company is the value per unit of equity multiplied by the number of equity shares in issue. The current value of each share is determined by discounting the future cash flows that will accrue to that share. In this context, the general rule is that a rise in the value of forecasted cash flows leads to a rise in the value of the company. But when a business house is subject to a high degree of economic risk, that is, to unstable foreign currencies, this will in due course make the share price of the corporate house concerned more volatile (Jorion 1988).
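A minimal discounted cash flow sketch makes this valuation logic concrete (the per-share cash flows, the 8 % discount rate and the 10 % currency-driven cut below are hypothetical figures, not from the text):

```python
def present_value(cash_flows, discount_rate):
    """Present value of a stream of future cash flows, each discounted
    back one more period than the last (standard DCF valuation)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

base_flows = [10.0, 10.0, 10.0]            # expected cash flow per share
cut_flows = [cf * 0.9 for cf in base_flows]  # 10 % cut after an adverse FX move

print(round(present_value(base_flows, 0.08), 2))  # 25.77
print(round(present_value(cut_flows, 0.08), 2))   # 23.19
```

A 10 % downward revision of expected cash flows translates one-for-one into a 10 % lower share value, which is exactly how unstable foreign currencies feed through to the market value of the firm.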
To understand the significance of economic risk, let us look at two real-life examples. After the exit of sterling from the European Exchange Rate Mechanism (ERM) in September 1992, the pound weakened, which dramatically improved trading conditions for British exporters. Also, according to an extract of the report published by Dorling Kindersley (India), the East Asian crisis of the late 1990s aggravated the economic problems of the Asian countries. During this period, the value of the Japanese yen declined drastically, which severely affected the cash flows of companies having trade relations with Japan (Damodaran 2006). This ultimately led to a fall in the market value of those companies.
For example, suppose that an Indian company exports 75 % of its turnover to the
USA, and the US Dollar is depreciating relative to the Indian Rupee. There is a risk
that even if US sales grow rapidly over the next five years, profit and cash flow will
not, because the Rupee value of those earnings is actually diminishing. This factor,
external to the Indian company, nevertheless affects the company's market
value. One technique, therefore, is to carry out buying and selling activities in a
variety of currencies, which diversifies the portfolio. In other words,
46 A.K. Karmakar and S. Mukherjee
in one currency a business house may benefit while in another it suffers losses,
so in the long run the gains act to counterbalance the losses and minimize the
economic risk on the whole.
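The offsetting effect described above can be sketched numerically. The revenue figures and rate movements below are hypothetical, invented purely for illustration:

```python
def home_value(exposures, rate_changes):
    """Home-currency value of revenues after applying a percentage
    exchange rate change to each currency exposure."""
    return sum(e * (1 + r) for e, r in zip(exposures, rate_changes))

# All revenue in one currency that depreciates by 10 %:
single = home_value([100.0], [-0.10])
# The same revenue spread over two currencies moving in opposite directions:
split = home_value([50.0, 50.0], [-0.10, 0.08])
print(round(single, 2), round(split, 2))  # 90.0 99.0
```

The gain on the appreciating currency offsets most of the loss on the depreciating one, which is the diversification argument in miniature.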
Transaction risk arises because most goods and services are sold on credit. If the
deal takes place in a currency other than the seller's own currency, the seller may
find that the sum he actually receives differs from what he expected, owing to
changes in exchange rates. Whether the local-currency value of foreign currency
receipts will fall, or the local-currency value of foreign currency payments will
rise, between the fixing of a contract and the date of payment or receipt is not
known. This uncertainty gives rise to transaction risk. A brief illustration given
below shows how the risk arises.
Suppose, ABC Limited is a renowned Indian industrial equipment manufacturer.
Suppose that they receive a contract from a US wholesaler for 200 tractors, at a
price of 46,000 US$ each. The exchange rate on the date of issue of the invoice is
1 US$ = 60 Rupees (INR). The invoice is paid six months later, when the exchange
rate is 1 US$ = 50 Rupees (INR).
When the deal was struck, the rupee value of the invoice was:
200 × 46,000 US$ × 60 = 552,000,000 Rupees (INR)
Six months later, when the actual payment is made, the sum received amounts to:
200 × 46,000 US$ × 50 = 460,000,000 Rupees (INR)
The second outcome is 92,000,000 Rupees lower than the first. Thus, it is clear that
there is a difference between what the company expected and what it actually received.
Transaction risk affects the profit and loss account, and it is a manifestation of the
impact of short-term movements in exchange rates. For example, in its 1998 annual
report the sports retailer JJB Sports reported a decline in profits of 103,000 US$ as
a result of adverse exchange rate movements. For this reason, transaction risk needs
to be hedged frequently. The different forms of hedging are discussed in subsequent
sections of this paper.
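The invoice arithmetic in the ABC Limited example can be reproduced in a few lines:

```python
units, unit_price = 200, 46_000        # tractors and US$ price per tractor
invoice_usd = units * unit_price       # 9,200,000 US$

inr_at_issue = invoice_usd * 60        # 1 US$ = 60 INR when the deal was struck
inr_at_payment = invoice_usd * 50      # 1 US$ = 50 INR six months later
shortfall = inr_at_issue - inr_at_payment

print(inr_at_issue, inr_at_payment, shortfall)
# 552000000 460000000 92000000
```

The 92,000,000 Rupee shortfall is exactly the transaction loss borne by the exporter between invoicing and settlement.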
When the profit earned abroad by a business house is used to buy foreign assets,
then at the end of the accounting year, when the financial statements are prepared,
In Looking into the Foreign Exchange Risk Management 47
the value of those foreign assets needs to be translated into domestic currency of the
company. Purely on account of exchange rate changes, the valuation of these
foreign assets might change from year to year. This sort of a balance sheet exchange
rate risk is referred to as translation risk. This particular risk relates exchange rate
movements to the valuation of foreign assets in the parent company’s balance sheet.
If the risk relates to a change in values, it follows that the values may either go up or
down. As pointed out by the definition, the variation in value relates to both assets
and liabilities, and so a fall in the value of a liability might be considered as good,
whereas a fall in the value of an asset might be viewed as bad.
Suppose XYZ is a renowned Indian construction company. Consider a hypothetical
example in which the company's 2015 annual report states that net debt rose by
50,000 US$ purely because of exchange rate movements. From this example, it is
clear that the liabilities are being revalued purely on account of exchange rate
movements. Such changes in valuation are therefore an unrealised loss or gain
(as the case may be). Thus, the
magnitude of the exposure to translation risk is determined by the difference
between the value of overseas assets and liabilities.
Now, the question which arises is how to take guard against translation risk. One
of the ways to reduce translation risk is to tie in foreign held assets with liabilities
denominated in the same currency, so that the overall balance sheet impact of
changes in exchange rates is eliminated. This type of hedging is known as
matching, and it is explained later in the paper.
To measure the extent of economic exposure, we are going to discuss two specific
approaches. The approaches have been given in details below.
(a) Forecasting: Calculating economic exposure through this measure involves
categorizing the firm’s cash flows into income statement items, and then
evaluating how the earnings forecasted in the income statement
change in response to alternative exchange rate scenarios. In general,
firms having more foreign costs than revenues will be unfavorably affected by
stronger foreign currencies.
(b) Regression Analysis: This method applies regression analysis to historical
cash flow and exchange rate data. The model we are suggesting here is

PCF_t = a_0 + a_1 e_t + u_t

where PCF_t denotes the percentage change in inflation-adjusted cash flows,
measured in the firm's home currency, over period t; e_t denotes the percentage
change in the exchange rate over period t; and u_t is the error term.
The exchange rate variability of a portfolio of currencies can, in turn, be
summarized by its standard deviation. For a two-currency portfolio,

σ = √( p_m² σ_m² + p_n² σ_n² + 2 p_m p_n σ_m σ_n ρ_mn )      (1)

where p_m denotes the proportion of total portfolio value in terms of currency m;
p_n denotes the proportion of total portfolio value in terms of currency n; σ_m gives
us the standard deviation of quarterly percentage changes in currency m, whereas
σ_n is the standard deviation of quarterly percentage changes in currency n; and
ρ_mn gives the correlation coefficient of the quarterly percentage changes between
currencies m and n.
The analysis remains the same for the multi-currency framework, where σ can be
defined as

σ = √( Σ_{i=1}^{n} p_i² σ_i² + 2 Σ_{i=1}^{n} Σ_{j=i+1}^{n} p_i p_j σ_i σ_j ρ_ij )      (2)

where i ≠ j. The standard deviation equation takes care of all of these
elements. The value of the portfolio can be easily calculated from the data provided
by the MNC for which we are estimating the risk. Empirically, using the monthly/
quarterly/yearly percentage changes in each currency one can calculate σ and
comment on the risk associated with the two-currency portfolio and then move
on to time series analysis by highlighting the variability in the movement of σ over
time. To calculate the correlations among exchange rate movements, we have used
quarterly exchange rate data for some of India’s major trading partners. We have
compiled the data from World Bank Data Bank and OECD’s reported exchange rate
statistics. The results shown in Table 1 have been obtained using the statistical
package Stata 12. Looking at the first column of the following table, it is clear that
exchange rate movements are going to have across-the-board implications for the
corporate houses located not only in India but also in the countries whose currency
correlations have been calculated here. It is important to note that for countries
not engaged in major trading relations, currency correlations are on the lower
side.
The currency correlation results show that the Indian Rupee is highly correlated
with the US Dollar, British Pound, Euro and the Chinese Renminbi. By and large,
currency correlations are positive, that is, the currencies tend to move in the same
direction. This positive correlation may not hold on a daily basis, but it apparently
holds over longer periods of time.
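The two-currency and multi-currency standard deviation computations can be sketched as follows; the weights, volatilities and correlation below are hypothetical and are not taken from Table 1:

```python
from math import sqrt

def portfolio_sd(p, sd, corr):
    """Standard deviation of a currency portfolio, as in Eq. (2):
    sqrt(sum_i p_i^2 sd_i^2 + 2 * sum_{i<j} p_i p_j sd_i sd_j corr_ij)."""
    n = len(p)
    var = sum(p[i] ** 2 * sd[i] ** 2 for i in range(n))
    var += 2 * sum(
        p[i] * p[j] * sd[i] * sd[j] * corr[i][j]
        for i in range(n)
        for j in range(i + 1, n)
    )
    return sqrt(var)

# Hypothetical two-currency portfolio: 60 % in currency m, 40 % in currency n.
weights = [0.6, 0.4]
sds = [0.05, 0.06]                 # std. dev. of quarterly % changes
corr = [[1.0, 0.7], [0.7, 1.0]]    # a strongly positively correlated pair
print(round(portfolio_sd(weights, sds, corr), 4))  # 0.0498
```

Because the correlation is strongly positive, the portfolio volatility is close to the weighted average of the individual volatilities; lower correlations would pull it down.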
Accounting rules vary from country to country in respect of the way in which
balance sheet values should be arrived at for the purpose of year-end translation.
There are two fundamental methods regarding the exchange rate that should be used
for the purposes of translation. These are explained below.
(a) Closing Rate Method: All financial statements are translated using the
‘current’ exchange rate, that is, the exchange rate prevailing at the time of
preparation of the balance sheet. Assets, liabilities, dividends and equity
accounts are all valued at the current exchange rate.
(b) Temporal Method: The rate used is the one prevailing at the time the asset/
liability was acquired. In other words, certain assets and liabilities are translated
at exchange rates consistent with the timing of the item's creation. Under this
method, a number of line items, for instance inventories and plant and
equipment, are restated to reflect the market value of the particular asset.
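The difference between the two methods can be sketched with hypothetical figures; the line items, amounts and rates below are invented for the example:

```python
# Hypothetical foreign-currency line items (values in US$, rates in INR per US$):
items = [
    {"name": "plant", "fc_value": 500_000, "rate_at_creation": 58.0},
    {"name": "inventory", "fc_value": 200_000, "rate_at_creation": 61.0},
]
closing_rate = 60.0  # rate prevailing at the balance sheet date

# Closing rate method: every item translated at the year-end rate.
closing = sum(i["fc_value"] * closing_rate for i in items)
# Temporal method: each item translated at the rate when it was created.
temporal = sum(i["fc_value"] * i["rate_at_creation"] for i in items)
print(int(closing), int(temporal))  # 42000000 41200000
```

The two methods translate the same underlying items to different domestic-currency totals, which is precisely why the choice of method matters for reported translation gains and losses.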
Once a firm recognizes its exposure, it deploys resources to manage it
effectively. Fig. 1 below highlights the step-by-step procedure for effective risk
management (Sivakumar and Sarkar, 2008), showing the order in which the steps
should be followed.
Now, we move onto discuss the hedging techniques available to a firm. In
seeking to protect itself from transaction risk, a company may employ a variety
of hedging techniques, either in isolation or in combination. These are detailed
below.
At the simplest level, an exporter can pass the risk of a change in the exchange rate
over to the foreign buyer, by invoicing in the exporter's own currency. Naturally,
the buyer may be unwilling to take on the currency risk without some
compensation, perhaps in the form of discounted prices. Even this simple form of
hedging therefore has a cost.
Fig. 1 Steps in risk management, including risk estimation and hedging
4.2 Matching
Transaction risk only arises when there is a mismatch in respect of the net value of
receipts or payments in foreign currencies. This means, that if, for example, a
Singapore-based company knows that it needs to make a payment of 20,000 US$ in
three months' time, it faces the risk that the value of the payment expressed in
terms of Singapore Dollars may change over the three-month period, and may
increase. If, however, the company organizes its operations such that it has invoiced
a foreign buyer in US Dollars, and is due to receive 20,000 US$ in three months’
time, then there is no net exposure to the risk. The risk has been fully hedged.
Matching can be applied to receipts and payments to or from both external and
internal suppliers or customers. It is therefore particularly useful for large group
organizations where there is a large volume of inter-group sales in a variety of
different currencies. Practically, such matching may be quite difficult to organize,
as it requires that both the time-scale and the value of deals get matched. Where the
match is not perfect, any outstanding net exposure can then be hedged using
external markets.
Matching is also used to reduce translation risk exposures, by means of matching
of assets and liabilities in a common currency. For example, by borrowing dollars, a
business house ensures that if the sterling value of dollar assets reduces because of
changes in the dollar-sterling rate, they will also benefit from a fall in the sterling
value of the dollar borrowings. If the assets and liabilities are of similar size, the net
exchange rate impact on the balance sheet is then zero.
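The balance-sheet effect of matching can be sketched as follows; the asset size and dollar-sterling rates are hypothetical:

```python
def translation_impact(assets_fc, liabilities_fc, old_rate, new_rate):
    """Change in the home-currency value of the net foreign position
    caused purely by an exchange rate move."""
    return (assets_fc - liabilities_fc) * (new_rate - old_rate)

# 1,000,000 US$ of dollar assets; the dollar's sterling value falls 0.80 -> 0.72.
unmatched = translation_impact(1_000_000, 0, 0.80, 0.72)
matched = translation_impact(1_000_000, 1_000_000, 0.80, 0.72)
print(round(unmatched), round(matched))  # -80000 0
```

With the dollar borrowing in place, the loss on the assets and the gain on the liabilities cancel, leaving the net balance-sheet impact at zero.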
4.3 Netting
An Indian firm has 2 million US$ of receipts due in 12 months and makes use of
forecasts to decide whether to cover the dollar receivable with a forward contract
or to wait and sell in the spot market 12 months hence. In terms of forecasting
errors, Mr X's prediction of 46 Rupees (INR) = 1 US$ yields an error of 13.2 %
against a realized future spot rate of 53 Rupees (INR). Mr Y's forecast is much
closer to the realized spot rate, with an error of 11.3 %.
Here lies the puzzle. While Y's forecast is closer to the rate eventually realized,
closeness is not the important feature of a good forecast. In our example, Mr Y
predicts a future spot rate in excess of the forward rate, so if the firm followed
Mr Y's prediction, it would wait and sell the dollars in the spot market in
12 months, i.e., take a long position in dollars. Since the future spot rate of
53 Rupees (INR) = 1 US$ is less than the current forward rate at which the dollars
can be sold (55 Rupees (INR) = 1 US$), the firm would actually lose by going for
the long position.
Following Mr X’s forecast of a future spot rate below the forward rate, the Indian
firm would sell dollars in the forward market (or go for a short position in dollars).
The firm would then sell the dollars at the current forward rate of 55 Rupees (INR)
per dollar rather than wait and receive only 53 Rupees (INR) per dollar in the spot
market in the near future. In this case, the forward contract yields much more. So,
the moral of the story is that a forecast should be on the correct side of the forward
rate; otherwise even a small forecasting error can lead to the wrong hedging
decision. The two different predictions lead to two different standpoints, and the
question which arises is whether the firm should adopt a long or a short position in
the FOREX market. Speculators want a forecast that gives them the direction in
which the future spot rate will move, and they position themselves accordingly.
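The decision rule in the example can be made explicit. In the sketch below, Mr Y's forecast of roughly 59 Rupees is inferred from the stated 11.3 % error against the realized rate of 53; it is not given directly in the text:

```python
forward_rate = 55.0     # INR per US$, 12-month forward rate
realized_spot = 53.0    # INR per US$, spot rate realized 12 months later
receipts_usd = 2_000_000

def inr_receipts(forecast_spot):
    """Receipts in INR if the firm acts on the given forecast:
    sell forward when the forecast is below the forward rate,
    otherwise wait and sell at the realized spot rate."""
    if forecast_spot < forward_rate:
        return receipts_usd * forward_rate   # short: locked in at the forward
    return receipts_usd * realized_spot      # long: sell spot later

x_outcome = inr_receipts(46.0)  # Mr X: below the forward, so sell forward
y_outcome = inr_receipts(59.0)  # Mr Y (inferred forecast): wait for the spot
print(int(x_outcome), int(y_outcome))  # 110000000 106000000
```

Acting on Mr X's "worse" forecast earns 4,000,000 Rupees more, because his forecast sits on the correct side of the forward rate.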
An understanding of whether the currency due to be paid or received is expected
to weaken or strengthen is vital to the development of a coherent hedging policy. To
analyse it from the standpoint of the trading agents, we first look into the case of
importers. An importer may consider hedging to be unnecessary when the
currency payable is weakening. Conversely, if a currency payable is strengthening,
a forward contract to buy that currency will be useful to check the increase in value
of the sum owed. For exporters, the opposite happens. In practice, many MNCs
have in-house forecasting units whose task is to predict future exchange rates.
One drawback of hedging using the forward market is that it ‘locks
the company in’ to the forward rate, and if sales are being made in a currency which
is declining in value, i.e., depreciating, then even though the hedge protects the
value of the receipts, it cannot protect against the depreciation, which will be built
in to the forward rate.
4.6 Options
The key difference between an option and hedging in the forward market is that an
option does not have to be exercised, whereas a forward contract is binding. The
purchaser of an option must therefore make a decision as to
whether or not to exercise a specific option.
An option which grants the right to buy a fixed amount of currency at a preset
price is a call option, while one that confers the right to sell is a put option. The
option works by the parties agreeing an exercise price: the exchange rate applied
to the currency if the option is exercised. There is a charge made for the option
itself (the option premium), and this is paid upfront regardless of whether the option
is exercised. The result is that the maximum cost of the hedge is the premium cost.
Options are particularly attractive in situations in which there is some uncertainty
regarding a transaction. The potential deal can be hedged at a cost equal to the
premium charged, and then if the deal stalls, no more expense is incurred and the
company is not locked into a currency deal as would be the case with a forward
contract. Alternatively, if the transaction does go ahead, the company has the
opportunity to exercise or not exercise the option depending on the prevailing
spot rate.
If the option is a call option, then the risk is that the option exercise price will
exceed the spot price for that particular currency. When this is the case, it is
preferable to buy the currency in the spot market rather than exercise the option.
In contrast, if the option is a put option, then the concern is that the spot rate will be
below the option exercise price. In this instance, it is again preferable to deal in the
spot market rather than exercising the option.
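The exercise logic for calls and puts described above can be sketched as follows; the rates are hypothetical:

```python
def call_cost_per_unit(exercise_price, spot_rate):
    """Buyer of a call pays the cheaper of exercising or using the spot
    market (the premium is a sunk cost either way)."""
    return min(exercise_price, spot_rate)

def put_receipt_per_unit(exercise_price, spot_rate):
    """Holder of a put receives the better of exercising or selling spot."""
    return max(exercise_price, spot_rate)

print(call_cost_per_unit(55.0, 52.0))   # 52.0: spot is cheaper, let the call lapse
print(put_receipt_per_unit(55.0, 52.0)) # 55.0: exercise the put
```

Either way, the holder's worst case is bounded: the option is abandoned whenever the spot market offers a better rate, and the maximum cost of the hedge remains the premium.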
4.7 Swaps
Currency futures have declined in popularity in recent years, and are now traded on
only a limited number of exchanges. Presently, the International Money Market
based in Chicago is the main market where futures are traded.
A future is an obligation to buy or sell a fixed quantity of a commodity at some
future point in time. The original futures markets dealt in commodities such as grain
or soya beans, but the financial futures markets developed, when it was recognized
that currencies or interest rates are simply special types of commodities. The buyer
of a futures contract is required to make a deposit equal to between 1 and 5 % of
the contract value. While it is on deposit, this sum cannot earn any interest, and so
the lost interest forms part of the cost of this type of hedging instrument.
Currency futures are sold in blocks; consider a hypothetical example in which US
Dollar futures are traded in 25,000 US$ blocks. This means that to hedge a
transaction of, say, 30,000 US$, a single futures contract would be
bought. Consequently, this would leave 5000 US$ unprotected, and a different form
of hedge would be needed to protect this balance. The fixed contract sums are an
important disadvantage of futures deals.
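Continuing the hypothetical 25,000 US$ block size, the unprotected balance follows directly from the fixed contract sums:

```python
block_size = 25_000    # hypothetical fixed futures contract size in US$
exposure = 30_000      # transaction to be hedged

contracts = exposure // block_size           # only whole contracts can be bought
unhedged = exposure - contracts * block_size
print(contracts, unhedged)  # 1 5000
```

The 5000 US$ remainder must be covered by some other instrument (or left exposed), which is the disadvantage the text describes.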
If a currency is strengthening, the value of a futures contract denominated in that
currency will rise. This means that if an importer faces a bill in the strengthening
currency, if the position is unhedged the value of the bill will increase over time. If,
however, a futures contract has been purchased, the gain in value of the contract
will serve to offset the ‘loss’ from the increased bill value. It is very unlikely that the
futures gain will exactly offset the exchange rate loss, but the mechanism does offer
some protection.
The global financial crisis in 2008 showed how the opposite happens, when
domestic currency starts depreciating due to reversal of capital flows during crisis
situations. An abrupt depreciation in the domestic currency coupled with increase in
debt service liability would eventually lead to wearing away of the profit margin
and market-to-market implications for the corporate (Gupta, 2013).
Given this background, it is felt that one of the major factors behind the faster
recovery of the Indian economy after the 2008 global financial crisis was the low
level of corporate external debt. As a result, the significant weakening in the
value of the rupee did not have major consequences for corporate balance
sheets. A corporate body needs to be vigilant when going for a contractual foreign
currency borrowing, in particular when natural hedging is not available. This sort of
a natural hedge automatically happens when a foreign currency borrower also has
an export market for its products. Therefore, export receivables would offset, at
least to a certain extent, the currency risk inherent in debt service payments. This
transpires because fall in the value of the rupee that leads to higher debt service
payments gets partly compensated by the increase in the value of rupee receivables
through exports. It might so happen, that the currency of export receivables and
currency of borrowings is different. Under such a scenario, the prudent approach for
corporations is to enter currency swaps to redenominate asset and liability in the
same currency and create a natural hedge. Unfortunately, many Indian corporations
with little foreign currency income leave their foreign currency borrowings
unhedged, so as to profit from low international interest rates. This is a risky
venture for the reasons cited above and should be avoided.
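The partial offset provided by a natural hedge can be sketched with hypothetical figures:

```python
def extra_debt_cost_inr(debt_service_usd, export_receipts_usd,
                        old_rate, new_rate):
    """Additional rupee cost of servicing dollar debt after a depreciation,
    net of the offsetting gain on dollar export receivables."""
    net_outflow_usd = debt_service_usd - export_receipts_usd
    return net_outflow_usd * (new_rate - old_rate)

# Rupee falls from 60 to 66 per US$; exports cover 80 % of debt service.
cost = extra_debt_cost_inr(1_000_000, 800_000, 60.0, 66.0)
print(int(cost))  # 1200000
```

Only the net 200,000 US$ position feels the depreciation; with export receivables equal to debt service, the extra cost falls to zero, which is the fully matched case.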
One potential hazard for a company is that a country in which it owns assets may
institute currency exchange controls. Many emerging and developing countries,
including India, have introduced systems of currency exchange controls which put
a ceiling on the use of local and foreign currencies. Developing countries very
often have far fewer hard (convertible) currencies than they need, so rationing is
the procedure that is followed. Anybody wanting hard currency is required to
apply to a government agency, specifying how much is wanted and the use to
which it will be put.
In the recent past, phases of exchange rate stability have spread complacency
among investors, and Indian corporations have traditionally kept away from
hedging their exposures. This can be attributed to their traditional outlook:
importers had no doubt that the Reserve Bank of India (RBI) would intervene to
halt any rupee decline, whereas exporters believed that the Rupee had always been
overvalued and would not appreciate from its present value. In general, owing to
corporate reluctance, the lack of information and technology, and the view of
hedging as an unwanted cost centre, companies that do hedge have mostly
followed the traditionalist path, that is, entering into forward contracts (FC) with
banks, which have been the Authorized Dealers (AD) in the foreign exchange
market in India. The restricted use of, and lack of interest in, the available
instruments explained in Section 4 have not really contributed to the development
of the external sector.
6 Conclusion
The aim of the chapter is to look in detail at the additional financial risks faced by
companies that choose to trade internationally. The risks can be compartmentalized
under the headings of transaction, translation and economic risk, but the type of risk
faced by the business firm is to some extent dependent on the nature of the
international involvement. Companies are primarily concerned about the effect of
changing exchange rates on cash flows, and a variety of hedging techniques are
employed to reduce exposure to exchange rate volatility. The extent to which
hedging instruments are applied and whether companies should merely seek to
reduce risk or should engage in currency speculation is a matter of debate. There
seems little willingness on the part of corporate houses to disclose to shareholders
the features of their foreign exchange management policy, although pressures are
increasing in this regard.
On the whole, the conclusion must be that internationalization considerably
increases the financial risks faced by companies in determining corporate
strategy; management therefore needs to be aware of these risks and confident in
its ability to manage them. Then, armed with the funds for expansion, and the
knowledge to limit financial risks they can address the next area of functional
management—how to find a market for their products abroad. International mar-
keting coupled with foreign exchange risk management should be the thrust area in
this era of globalization.
References
Ozlem Tasseven
1 Introduction
Heavy dollarization is observed in countries which suffer from high and volatile
inflation and high rate of domestic currency depreciation. In countries with high
inflation rates and depreciation of domestic currencies, foreign currency captures
the store of value function of the domestic currency. The domestic currency no
longer fulfills its functions of acting as a unit of account and a medium of exchange
in an ongoing high-inflation environment. The performance of the functions of
money by another currency is known as currency substitution or dollarization. According to
portfolio models of asset demand, currency substitution depends on the opportunity
costs of holding domestic currency and other assets (Vegia 2006).
Dollarization occurs as a result of macroeconomic instability. It helps explain the
vulnerabilities and currency crises in parts of the world including Latin
O. Tasseven (*)
Faculty of Business Administration, Department of Economics and Finance, Dogus University,
Acıbadem, Kadikoy 34722, Istanbul, Turkey
e-mail: otasseven@dogus.edu.tr
America, parts of East Asia and East Europe. While the financial systems remain
underdeveloped in many developing countries, signs of progress in financial deep-
ening are emerging. Dollarization can impose difficulties on policy makers by
reducing the effectiveness of monetary policy (Giovannini and Turtelboom
1994). It constrains the ability of Central Banks to act as a lender of last resort,
hampers the liquidity management of banks and decreases financial stability. It
also negatively affects banks' balance sheets. Miles (1978), Girton and Roper (1981),
Ortiz (1983), Thomas (1985), Giovannini (1991), Giovannini and Turtelboom
(1994), Guidotti (1993), Kruger and Ha (1995), McKinnon (1985), Calvo and
Vegh (1992) analyzed the effects of dollarization on monetary policy.
Dollarization can render economic policies ineffective by exposing the balance
sheets of the public and private sectors to foreign exchange rate risk when there
are currency mismatches between assets and liabilities, as well as by weakening
the structural fiscal balance. Furthermore, dollarization weakens the ability of
governments to handle debt issues and thus causes macroeconomic and output
fluctuations. On the other hand, allowing residents to hold foreign currency
deposits encourages them to channel their savings through the financial system,
rather than transferring the savings abroad or holding them in non-financial assets
(Mecagni et al. 2015).
Currency substitution follows different processes in developed and developing
countries. In developed countries, the demand for foreign currency, based on
foreign trade and sometimes on portfolio diversification, is intended to maximize
financial returns. In developing countries, foreign currency is demanded in order
to avoid the inflation tax and to carry out transactions within the country, in
addition to these two motives. Successive crises experienced in banking and
financial systems are assumed to be related to financial dollarization. Financial
dollarization is observed when the assets and liabilities of economic units are
denominated in foreign currency. Studies carried out in recent years have shown
that liability dollarization in particular increases financial fragility. These
fragilities played an important role in spreading the crises which started with the
1994–1995 Tequila crisis and the 1997 Asian crisis (Galindo and Liederman 2005).
This chapter uses theoretically strong methods to examine the relationship
between dollarization and its determinants for Turkey. These methods are the
Geweke linear feedback measure, the frequency domain causality tests developed
by Breitung and Candelon (2006), and the wavelet comovement analysis developed
by Rua (2010). Frequency domain causality analysis investigates the causality
relationship at both low and high frequencies without the need to select a
maximum lag, and determines whether one variable causes the other or there is
bi-directional causality over the whole period. The wavelet comovement analysis
considers both the time and frequency domains. The advantage of using wavelet
analysis is that the variables can be observed at different frequencies and through
time simultaneously (Gencay et al. 2001).
The objective of this chapter is to investigate the relationship between dollari-
zation and its determinants such as interest rate differential, Istanbul stock
exchange index, central bank reserve ratio, expected inflation, expected
The Link Between Dollarization and Its Determinants in Turkey 61
depreciation and volatility index using data between 2001 and 2014 for Turkish
economy. Section 2 reviews the studies on dollarization in the literature. Section 3
describes the methodologies used in the chapter briefly which are the frequency
domain causality analysis and wavelet comovement methods. Section 4 presents
the empirical findings and finally Sect. 5 concludes.
Dollarization and currency substitution are seen more often in countries where
there is high and volatile inflation, where the exchange rate depreciates often and
where there is a large public deficit. Studies on dollarization show that
dollarization proceeds rapidly in countries with poorly developed capital markets and
limited choices for domestic investments. Several Latin American countries,
including Argentina, Bolivia and Peru, have experienced high dollarization. Once a
country experiences hyper-inflation or high and volatile inflation rates, dollarization
tends to be irreversible due to the anxiety caused by past experiences. This
irreversibility is known as the “ratchet effect” (Kamin and Ericsson 1993). Policies
adopted to affect money supply and similar monetary variables may not have the
desired results (Miles 1978). The main reason for failing to de-dollarize the
economy is that people may not find the reversal credible. In order to reduce
dollarization, the use of foreign macroeconomic policies should be strengthened
and financial system should be adjusted to increase the attractiveness of local
currency. In order to achieve irreversibility, the economy the macroeconomic
stability should be restated, companies should take measures to prevent currency
mismatches, market based strategies should be considered, investment opportuni-
ties in domestic currency should be encouraged so that the dependability of the
residents will be restored.
In the literature currency substitution for Turkey is analyzed by several authors
including Selcuk (1994, 1997), Akçay et al. (1997), Akıncı (2003), Bahmani-
Oskooee and Karacal (2006), Yazgan and Zer-Toker (2010), Domac and Oskooee
(2002), Civcir (2003). Turkey experienced high and chronic inflation during
nearly 30 years before 2001. Therefore, the credibility of the Turkish lira was lost for a
long time. Currency substitution can be seen as a response of economic agents to
inflation uncertainty. Akçay et al. (1997) and Domac and Oskooee (2002) find that
as currency substitution increases, the volatility of the exchange rate increases as
well. Bahmani-Oskooee and Domac (2002) state that in order to limit dollarization
the effects of increasing the volatility of exchange rate without raising that of
inflation would be effective under inflation targeting regime in Turkey (Table 1).
62 O. Tasseven
The data used in the analysis cover the period between August 2001 and
December 2014. The total sample is divided into three groups: the first group,
called the “Global Liquidity Abundance” period, the second group, called “The
Great Recession” period, and the third group, called the “Current Period”, cover
August 2001–December 2007, January 2008–December 2011 and January 2012–
December 2014, respectively.1
The data are obtained from the Central Bank of the Republic of Turkey's electronic
data distribution system. The monetary variable we consider (M2Y) is the broadly
defined money supply in natural logarithms, i.e., the sum of currency in
circulation and demand, time and foreign currency deposits in the banking system.
The currency substitution ratio (FCD) is defined as the logarithm of the ratio of
foreign currency deposits to the broad monetary aggregate. The interest rate
differential (INTD) refers to the percentage interest rate differential between
3-month TL deposits and 3-month foreign currency deposits. The Istanbul stock
exchange index (BIST) is defined as the logarithm of the index of the stocks of
companies included in the BIST. The central bank reserve ratio (RES) is the ratio
of the level of central bank reserves to the monetary aggregate M2Y. Expected
inflation (INFE) is the median end-of-current-month consumer price index
expectation obtained from the Central Bank Expectation Questionnaire. Expected
depreciation (EXPD) is the median end-of-current-month nominal exchange rate
expectation obtained from the same questionnaire. The VIX index (VIX) is the
Chicago Board Options Exchange Volatility Index, which reflects the market's
expectation of 30-day volatility.
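As a rough illustration of these definitions, the transformed series could be constructed from raw monthly levels as follows. This is a hedged sketch: the raw column names and the synthetic demo values are assumptions for illustration, not the CBRT series codes.

```python
import numpy as np
import pandas as pd

def build_variables(raw: pd.DataFrame) -> pd.DataFrame:
    """Construct the chapter's variables from raw monthly levels.

    Column names in `raw` are illustrative placeholders, not CBRT codes.
    """
    out = pd.DataFrame(index=raw.index)
    out["M2Y"] = np.log(raw["m2y"])                        # log broad money
    out["FCD"] = np.log(raw["fx_deposits"] / raw["m2y"])   # log(FX deposits / M2Y)
    out["INTD"] = raw["tl_rate_3m"] - raw["fx_rate_3m"]    # interest differential
    out["BIST"] = np.log(raw["bist_index"])                # log stock index
    out["RES"] = raw["cb_reserves"] / raw["m2y"]           # reserve ratio
    out["INFE"] = raw["expected_cpi"]                      # survey median
    out["EXPD"] = raw["expected_fx"]                       # survey median
    return out

# Tiny synthetic demo
raw = pd.DataFrame({
    "m2y": [100.0, 110.0], "fx_deposits": [40.0, 44.0],
    "tl_rate_3m": [50.0, 45.0], "fx_rate_3m": [5.0, 5.0],
    "bist_index": [10000.0, 11000.0], "cb_reserves": [30.0, 33.0],
    "expected_cpi": [60.0, 55.0], "expected_fx": [1.5, 1.6],
})
print(build_variables(raw).round(3))
```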
The Geweke (1982) linear measure of feedback is important in that it provides
detailed information on how much of another variable's variance one variable
can explain over different frequency bands.
Hence, we observe how much the variation in dollarization is explained by several
key variables for the Turkish economy.
The well-known time domain Granger causality test, on the other hand, indicates
whether past changes in x (y) have an impact on current changes in y (x) over a
specified time period (Granger 1969). Nevertheless, these test results only
establish causality in the discrete time dimension.
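In practice, this time-domain test compares a restricted and an unrestricted autoregression; packages such as statsmodels implement it directly. A minimal numpy sketch of the underlying F-test, on synthetic data with illustrative variable names, looks like:

```python
import numpy as np

def granger_f(y, x, p):
    """F-statistic for H0: 'lags of x do not help predict y' (time domain).

    Unrestricted: y_t = c + sum_i a_i y_{t-i} + sum_i b_i x_{t-i} + e_t
    Restricted:   y_t = c + sum_i a_i y_{t-i} + e_t
    """
    T = len(y)
    lag = lambda s, i: s[p - i:T - i]
    Y = y[p:]
    Xu = np.column_stack([np.ones(T - p)] +
                         [lag(y, i) for i in range(1, p + 1)] +
                         [lag(x, i) for i in range(1, p + 1)])
    Xr = Xu[:, :p + 1]  # constant + own lags only
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_u, rss_r = rss(Xu), rss(Xr)
    df_denom = (T - p) - (2 * p + 1)
    return ((rss_r - rss_u) / p) / (rss_u / df_denom)

# Synthetic check: x drives y with one lag, so the statistic should be large.
rng = np.random.default_rng(1)
n = 160
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x, p=2))  # large F => reject "x does not Granger-cause y"
```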
Applying a Fourier transformation to the VAR(p) model for the x and y series,
Geweke's measure of linear feedback from y to x at frequency ω is defined as2:
1 To conserve space, we only report the figures for the whole period but still
interpret the results for the sub-samples. The remaining figures are available
from the author upon request.
2 For details of the computation of the measure, see Geweke (1982) and Breitung
and Candelon (2006).
" #
2πf x ðωÞ jψ 12 ðeiω Þj
2
My!x ðωÞ ¼ log ¼ log1 þ ð1Þ
ψ 11 ðeiω Þj2 ψ 11 ðeiω Þj2
If jψ 12 ðeiω Þj ¼ 0, then the Geweke’s measure will be zero, then y will not Granger
2
The null hypothesis tested by Geweke, My!x ðωÞ ¼ 0, corresponds to the null
hypothesis of H 0 : RðωÞβ ¼ 0 where βis the vector of the coefficients of y and
cos ðωÞ cos ð2ωÞ . . . cos ðpωÞ
R ð ωÞ ¼ .
sin ðωÞ sin ð2ωÞ . . . sin ðpωÞ
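The restriction R(ω)β = 0 can be tested equation by equation with ordinary least squares. The following is a minimal numpy sketch of that idea, under the simplifying assumption that a single-equation bivariate regression with homoskedastic errors stands in for the full VAR system; the synthetic data and names are illustrative.

```python
import numpy as np

def bc_freq_causality(x, y, p, omega):
    """F-type statistic for H0: 'y does not Granger-cause x at frequency
    omega' (Breitung-Candelon style), via OLS on the single equation
    x_t = c + sum_i a_i x_{t-i} + sum_i b_i y_{t-i} + e_t with the
    2-row restriction R(omega) b = 0. Requires p >= 2 and 0 < omega < pi.
    """
    T = len(x)
    lag = lambda s, i: s[p - i:T - i]
    X = np.column_stack([np.ones(T - p)] +
                        [lag(x, i) for i in range(1, p + 1)] +
                        [lag(y, i) for i in range(1, p + 1)])
    target = x[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    sigma2 = resid @ resid / (len(target) - X.shape[1])
    # Restriction matrix: zeros on the constant and own lags, R(omega) on b
    Rw = np.array([[np.cos(j * omega) for j in range(1, p + 1)],
                   [np.sin(j * omega) for j in range(1, p + 1)]])
    R = np.hstack([np.zeros((2, p + 1)), Rw])
    Rb = R @ beta
    cov = sigma2 * (R @ np.linalg.inv(X.T @ X) @ R.T)
    return (Rb @ np.linalg.solve(cov, Rb)) / 2  # F with 2 numerator dof

# Synthetic check: y drives x with one lag, so causality y -> x should be
# detected at (almost) every frequency.
rng = np.random.default_rng(2)
n = 300
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.3 * x[t - 1] + 0.5 * y[t - 1] + 0.1 * rng.normal()
print(bc_freq_causality(x, y, p=3, omega=np.pi / 2))  # large => reject H0
```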
Breitung and Candelon (2006) use an F-statistic to test causality in the
frequency domain. A continuous frequency dimension therefore produces a richer
analysis of causality than a single time-domain test, because the F-test
indicates whether statistically significant causality runs from one variable to
the other at each frequency.
The last analysis we conduct is the recent wavelet comovement analysis devel-
oped by Rua (2010), which is shown to be superior to the other comovement
analyses.3 The wavelet comovement technique combines the time dimension with
the frequency dimension through waves that form a shape signalling statistically
significant correlation coefficients over a specific interval. In this analysis
the pink (light blue) shaded areas denote positive (negative) correlation
coefficients. Nonetheless, due to the Brownian motion process in the continuous
frequency domain, it is not possible to apply significance tests to the
correlation coefficients. Hence, we follow Rua (2010) and assume that any
coefficient over 0.75 in absolute value denotes statistical significance.
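Rua's wavelet measure is a full time-frequency correlation; as a rough, self-contained stand-in for the idea (not Rua's estimator), one can band-pass filter two series to a frequency band and compute a rolling correlation, flagging |ρ| > 0.75 as in the text. The band edges, window length and synthetic series below are illustrative assumptions.

```python
import numpy as np

def bandpass_rolling_corr(a, b, low, high, window):
    """Band-pass both series to [low, high] cycles per sample via the FFT,
    then compute a rolling correlation over `window` observations.
    An illustrative stand-in for Rua's (2010) wavelet comovement measure."""
    def bandpass(s):
        f = np.fft.rfft(s - s.mean())
        freqs = np.fft.rfftfreq(len(s))
        f[(freqs < low) | (freqs > high)] = 0.0
        return np.fft.irfft(f, n=len(s))

    fa, fb = bandpass(a), bandpass(b)
    corr = np.full(len(a), np.nan)
    for t in range(window, len(a) + 1):
        corr[t - 1] = np.corrcoef(fa[t - window:t], fb[t - window:t])[0, 1]
    return corr

# Two monthly series sharing a common 48-month cycle plus noise
rng = np.random.default_rng(3)
n = 240
t = np.arange(n)
cycle = np.sin(2 * np.pi * t / 48)
a = cycle + 0.3 * rng.normal(size=n)
b = cycle + 0.3 * rng.normal(size=n)
corr = bandpass_rolling_corr(a, b, low=1 / 96, high=1 / 24, window=60)
share = np.mean(np.abs(corr[59:]) > 0.75)  # share of windows flagged
print(f"share of windows with |corr| > 0.75: {share:.2f}")
```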
4 Empirical Findings
In order to incorporate both the frequency and time domains, the Geweke measure
of linear feedback and frequency domain Granger causality tests are conducted.
3 Please see Rua (2010) for details.
The Link Between Dollarization and Its Determinants in Turkey 65
This section comments on the results of the frequency domain Granger causality
analysis for the whole period and the respective sub-periods. It is observed
that the frequency analysis provides clearer and more accurate relationships
between the variables over all frequencies with statistical significance,
compared to the time dimension Granger causality, which provides a result for
just one point in time.
When the whole sample period is considered, it can be seen in Fig. 2 that in the
short term INTD causes FCD. This is reasonable because a change in the spread
between the two countries leads to a relative change in the same direction in
currency substitution. RES causes FCD at low frequency, which shows a smooth
long-term relationship between these variables. In the short term BIST causes
FCD, which is due to capital inflows into the Istanbul Stock Exchange, most of
which come from foreign investors. Once international reserves increase, most
small Turkish investors prefer to substitute US dollars for Turkish lira.
In the long term FCD causes INFE because the switch from Turkish lira to the
dollar leads to a relative depreciation of the Turkish lira. Therefore,
inflation expectations are driven by the switch from Turkish lira to the dollar,
which is a financial instrument in the Turkish market. In the short term EXPD
causes FCD, which is in line with theoretical expectations: when agents expect
the Turkish lira to depreciate, they switch to the dollar. In the short term
INTD causes FCD. The differences in interest rates between Turkey and the
United States increased during the Great Recession, and this led to an increase
in dollarization and a contraction in the Turkish economy. In the long term the
direction of causality is found to run from FCD to BIST: during the Great
Recession foreign investors sold some portion of the stocks they held and
switched to dollars. In the long term EXPD causes FCD because during the Great
Recession it is rather hard to form expectations for shorter maturities.
In the long term BIST causes FCD in the current period because foreign investors
demand stocks by bringing dollars and converting them into Turkish lira. They
then use the Turkish lira to buy stocks. Consequently, what is left in the
market is the foreign currency, which is probably kept in the international
reserves of the Central Bank. This is why this relationship is significant in
the long term.
The final part of the econometric analysis focuses on explaining the empirical
results obtained by employing the Rua (2010) wavelet comovement methodology.
It is important to note once again that Rua (2010) considers correlation
coefficients greater than 0.75 or smaller than −0.75 as statistically
significant.
Figure 3 displays the wavelet comovement analysis with respective correlation
coefficients for FCD and its determinants for the whole period. FCD exhibits
significant comovement with INTD in high frequency and significant comovement
with INTD in medium frequency between 2009 and 2011. These findings mean that
FCD and INTD have a statistically significant and positive relationship, with an
increase in the spread causing more dollarization and vice versa. It is also
found that FCD and RES exhibit a statistically significant comovement at high
frequency until the Great Recession.
When the Global Liquidity Abundance period is examined for FCD and its
determinants, a significant positive comovement between FCD and INTD in the
short term, especially in 2005, is found. For the Great Recession period, on
the other hand, the correlation coefficients become negative at low frequencies,
and in the current period the coefficients turn positive again. During the
Great Recession this contrasting finding is probably due to the increase in
dollarization under the expectation of a future depreciation of the Turkish
lira; this demand for dollars reduces international reserves further, as
capital flight had already started this vicious cycle. During the Great
Recession the BIST 100 did not decrease to its 2001 Turkish financial crisis
levels, because there is a shorter-term positive relationship consistent with
the view that foreigners are unable to sell a huge portion of their stock
portfolios. In the meantime, small Turkish investors demand foreign currency,
especially dollars. This is the possible reason for obtaining a statistically
significant positive correlation between FCD and BIST during the Great
Recession period.
The general outlook for the comovement between FCD and INFE is in line with
the theoretical approach. Any type of currency substitution will mean that
inflation expectations are affected in a negative manner; however, the only
part where we observe a statistically significant comovement is during the
Great Recession, when expectations are completely affected in an adverse
manner. We observe a positive correlation for the comovement between FCD and
EXPD, especially during the first half of the whole period and after the Great
Recession, whereas this relationship is reversed during the global financial
crisis. In terms of theory, this is appropriate because during periods of
stabilization one should form
5 Conclusion
This chapter analyzes the relationship between currency substitution and its deter-
minants for the emerging market of Turkey using Geweke linear feedback, the
frequency domain Granger causality and wavelet comovement analysis. We use a
monthly data set with our endogenous variable (foreign currency deposits/M2Y)
as a proxy for currency substitution, and the interest rate differential, the
BIST 100 index, the central bank reserve ratio, expected inflation, expected
depreciation and the VIX index as exogenous variables.
The first empirical test that we employ is the Geweke linear feedback which
provides the percentage of variance of currency substitution explained by our
exogenous variables. The interest rate differential (BIST 100 and central bank
reserve ratio) has the power to explain the highest percentage of variance in the
short (long) term for dollarization. This means that changes in the spread immedi-
ately cause a change in dollarization whereas a change in BIST 100 and central
bank reserve ratio takes more time to affect currency substitution. In terms of
theoretical analysis these findings signal the importance of an interest rate change,
the key tool of the Central Bank. It looks as if changes in the stock exchange and
central bank reserve ratio are transmitted into currency substitution over a longer
period due to the adjustment of expectations of the investors and households.
Our empirical findings from the frequency domain Granger causality analysis
show that in the short term the interest rate differential, the BIST 100 index
and expected depreciation cause currency substitution, whereas in the long term
the central bank reserve ratio causes currency substitution, which in turn
causes expected inflation. An increase in the spread leads to a possible
increase in dollarization with capital inflow into Turkey. On the other hand,
an increase in stock demand means a portfolio flow into Turkey which probably
ends up in the international reserves of the Central Bank or in the public's
holdings of foreign currency. This is indeed well documented in the finding
that an increase in the stock exchange leads to an increase in consumer
sentiment (Gunes and Uzun 2010), directing savings towards alternative
financial assets like the dollar and the euro.4 As dollarization increases, the
domestic currency depreciates, which leads to an upward revision of
inflationary expectations. During
4 Turkish people have the perception that foreign currency is a financial
asset, which was well documented in the 2001 economic crisis. Although this
belief somewhat changed during the period 2002–2007, once the depreciation of
the Turkish currency, especially against the dollar, started, even small
domestic investors preferred to switch to foreign exchange, no matter what the
amount.
the Great Recession, our empirical findings show that the direction of
causality is from currency substitution to the BIST 100 index in the long run,
and expected depreciation causes currency substitution. In the current period,
the BIST 100 index is found to cause currency substitution. Once the foreigners
who are the dominant investors in the BIST 100 leave, the amount of foreign
exchange in Turkey decreases. Although small Turkish investors would like to
use this opportunity to invest in foreign exchange, there is not a sufficient
amount of liquidity in the market for them. Hence, their willingness to cause
dollarization is not realized, or they are only able to form their foreign
exchange portfolios after high levels of domestic currency depreciation have
occurred.
The wavelet comovement methodology results demonstrate that when the whole
sample is considered, currency substitution shows significant correlation with
interest rate differential and central bank reserve ratio at high frequency. We
found that there is a negative correlation between currency substitution and
expected inflation and there is a positive correlation between currency substitution
and expected depreciation, especially during the first half of the whole period. It is
observed that the relationship between currency substitution and its determinants is
rather weak at low frequencies during the global liquidity abundance period.
However, the coefficients turn positive in the current period. From a
theoretical standpoint, these findings are all compatible with what orthodox
theory proposes.
References
Akçay OC, Alper CE, Karasulu M (1997) Currency substitution and exchange rate instability: the
Turkish case. Eur Econ Rev 41:827–835. doi:10.1016/S0014-2921(97)00040-8
Akıncı O (2003) Modeling the demand for currency issued in Turkey. Central Bank Rev 1:1–25.
ISSN 1303-0701
Bacha EL, Holland M, Goncalves FM (2007) Is Brazil different? Risk, dollarization, and interest
rates in emerging markets, International Monetary Fund working paper, WP/07/294
Bahmani-Oskooee M, Domac I (2002) On the link between dollarization and inflation: evidence
from Turkey. The Central Bank of the Republic of Turkey, Discussion paper
Bahmani-Oskooee M, Karacal M (2006) The demand for money in Turkey and currency substitution.
Appl Econ Lett 13:635–642. doi:10.1080/13504850500358819
Breitung J, Candelon B (2006) Testing for short and long-run causality: a frequency domain
approach. J Econ 132:363–378. doi:10.1016/j.jeconom.2005.02.004
Calvo GA, Vegh CA (1992) Currency substitution in developing countries: an introduction.
Revista de Analisis Economico 7(1):3–28
Civcir I (2003) Dollarization and its long-run determinants in Turkey. Middle East Econ Ser 6
Domac I, Oskooee MB (2002) On the link between dollarization and inflation: evidence from
Turkey. Research and Monetary Policy Department, Central Bank of the Republic of Turkey
discussion papers no 1217
Galindo A, Leiderman L (2005) Living with dollarization and the route to de-dollarization.
Inter-American Development Bank, Research Department working paper no 526
Gencay R, Selcuk F, Witcher B (2001) Differentiating intraday seasonalities through wavelet
multi-scaling. Phys A 289:543–556. doi:10.1016/S0378-4371(00)00463-5
Geweke J (1982) Measurement of linear dependence and feedback between multiple time series. J
Am Stat Assoc 77:304–324. doi:10.1080/01621459.1982.10477803
Giovannini A (1991) Currency substitution and monetary policy. In: Wihlborg C et al (eds)
Financial regulation and monetary arrangements after 1992. North Holland, Amsterdam, pp
203–216
Giovannini A, Turtelboom B (1994) Currency substitution. In: Van der Ploeg F (ed) The handbook
of international macroeconomics. Blackwell, Malden, MA, pp 390–436
Girton L, Roper D (1981) Theory and implications of currency substitution. J Money Credit Bank
13(1):13–30. doi:10.2307/1991805
Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral
methods. Econometrica 37(3):424–438. doi:10.2307/1912791
Guidotti PE (1993) Currency substitution and financial innovation. J Money Credit Bank 25
(1):109–124. doi:10.2307/2077823
Gunes H, Uzun S (2010) Differences in expectation formation of consumers in emerging and
industrialized markets. Paper presented at 69th International Atlantic Economic Society
Conference, March, Prague
Kamin SB, Ericsson NR (1993) Dollarization in Argentina, Board of Governors of the Federal
Reserve System, International finance discussion papers, 460
Kruger R, Ha J (1995) Measurement of cocirculation of currencies, IMF working paper no 95/34
McKinnon R (1985) Two concepts of international currency substitution. In: Connolly MD,
McDermott J (eds) The economics of the Caribbean Basin. Praeger, New York, pp 101–113
Mecagni M, Corrales JS, Dridi J, Garcia-Verdu R, Imam P, Matz J, Macario C, Narita F, Pani M,
Rosales M, Weber S, Yehove E (2015) Dollarization in Sub-Saharan Africa, experience and
lessons. International Monetary Fund, African Department. ISBN: 978-1-49836-847-6.
doi:10.1093/jae/ejv020
Miles MA (1978) Currency substitution, flexible exchange rates and monetary independence. Am
Econ Rev 68(3):428–436
Ortiz G (1983) Currency substitution in Mexico: the dollarization problem. J Money Credit Bank
15:174–185. doi:10.2307/1992398
Rua A (2010) Measuring comovement in the time–frequency space. J Macroecon 32:685–691.
doi:10.1016/j.jmacro.2009.12.005
Selcuk F (1994) Currency substitution in Turkey. Appl Econ 26(5):509–522. doi:10.1080/
00036849400000019
Selcuk F (1997) GMM estimation of currency substitution in a high inflation economy. Appl Econ
Lett 4(4):225–228. doi:10.1080/758518499
Thomas LR (1985) Portfolio theory and currency substitution. J Money Credit Bank 17
(3):347–357. doi:10.2307/1992629
Vegia F (2006) Currency substitution, portfolio diversification and money demand. Can J Econ 39
(3):719–743. doi:10.1111/j.1540-5982.2006.00366.x
Yazgan ME, Zer-Toker I (2010) Currency substitution, policy rule and pass-through: evidence
from Turkey. Appl Econ 42(18):2365–2378. doi:10.1080/00036840701858018
Özlem Taşseven is an Associate Professor of Quantitative Methods at Doğuş University Depart-
ment of Economics and Finance, İstanbul, Turkey. Dr. Taşseven holds a BS in statistics from
Middle East Technical University (1995), a master's degree in economics from Middle East
Technical University (2000) and a Ph.D. in economics from Newcastle University (2007) in the
United Kingdom. Her PhD thesis was on "Money Demand and Currency Substitution in Turkey". Her
research interests are econometric analysis of the Turkish economy, applied macroeconomics and
finance, research methods, time series, panel data analyses, statistics, operations research and
industrial concentration. She has taught statistics, econometrics, microeconomics, macroeconom-
ics and research methods courses at both undergraduate and graduate levels. She has been a
reviewer for Doğuş University Journal and Banking & Finance Letters.
Enhancing the Risk Management Functions
in Banking: Capital Allocation and Banking
Regulations
Abstract This chapter reviews capital allocation in the banking sector. Capital is
crucial if banks are to be protected from banking risks. In order to ensure financial
stability in the banking sector, banking regulators demand that banks hold sufficient
capital to support their risks. The Basel Capital Accords, which aim to enhance the
risk management functions of banks and to strengthen the stability of the
international banking system, have introduced a common regulatory framework
for capital allocation. They are international guidelines to encourage
convergence toward common standards in the banking sector. The Basel Capital
Accords have
toward common standards in the banking sector. The Basel Capital Accords have
evolved over time because of the growth of international risks.
1 Introduction
One of the crucial objectives of a bank is to allocate its capital optimally within its
business units. Capital allocation is an important part of the risk management process.
Risk management aims to ensure the survival of the bank. Risk management is
mainly concerned with defining the optimal amount of capital the bank should hold
if it is to be protected from financial risks. Banks face different types of risk in the
course of their operations. These risks have potentially negative effects on the banking
industry. The key risks that banks face are credit risk, market risk, liquidity risk,
operational risk, business risk, legal risk, reputational risk, and systemic risk.
Banks need to hold capital in order to protect themselves against financial risks
and failures. Banks decide how much capital is required to cover the potential losses
S. Kuzucu (*)
Institute of Banking and Insurance, Marmara University, Kadıköy, İstanbul, Turkey
e-mail: kuzucuserpil@gmail.com
N. Kuzucu
Faculty of Economics and Administrative Sciences, Beykent University, Ayazaga Maslak
Campus, Sarıyer, Istanbul, Turkey
e-mail: narmankuzucu@gmail.com
deriving from these risks. A bank’s capital allocation decisions depend on many
variables, such as the riskiness of its assets, financial conditions, its banking
operations and its long-term business strategies. When capital is allocated effec-
tively in the banking sector, economies can grow in a manner that is sustainable and
stable over the long term.
The financial environment has become more volatile with increasing globaliza-
tion and financial integration. Banks face more risks because of the increasing
interconnectedness of financial markets. Because banks have the highest leverage
of firms in any industry, the banking sector is more critical to financial stability than
other sectors. Governments also get involved in the financial system, in order to
ensure that there is financial stability and to protect banks’ investors and customers.
Bank deposits are often insured by governments, so that governments operate as the
final guarantor if there is a bank failure. Besides these safety nets, national govern-
ments are concerned with banking regulations and supervision. Banking regulators
monitor banking activities and systemic risks in order to ensure financial stability in
the banking sector. Regulators attempt to ensure the safety and soundness of the
banking system. In order to protect banks from bankruptcy and the costs of financial
distress, regulators impose minimum capital requirements on banks. The purpose of
minimum capital requirements is to prevent a bank’s financial problems from
spreading and threatening financial stability.
This chapter is structured as follows. The next section discusses the role of
capital in the banking sector. The loss absorption function of capital and the key
banking risks are reviewed. Section 3 defines economic capital and regulatory
capital. Section 4 reviews the banking regulations on capital requirements. The
Basel Capital Accords are discussed. Section 5 presents a literature review on the
economic impacts of capital requirements. The costs and benefits of higher capital
requirements are reviewed. The final section concludes the chapter.
Capital in banks plays a critical role in the safety and soundness of the banking
system. Capital is held to protect depositors. Furthermore, bank capital builds and
maintains confidence in the banking sector. The main role of bank capital is to
absorb large unexpected losses. Capital in banks is a substitute for the transfer of
risk, and is a buffer to protect a bank against costly unexpected shocks. Capital
protects the safety of the bank (Schroeck 2002, p. 141). Banks face various kinds of
risks that may cause bank losses. Credit risk, market risk, operational risk, liquidity
risk, reputational risk, business risk, and systemic risk are the key risks for banks.
The most important risk that banks face is credit risk, because the business of
financial institutions is largely extending credit to clients. The Basel Committee on
Banking Supervision (BCBS) (1999) defined credit risk as the potential that a bank
borrower or counterparty will fail to meet its obligations in accordance with the
agreed terms. Credit risk arises from an unexpected deterioration in the credit
quality of a counterparty (Saita 2007, p. 68). For most banks, loans are the largest
source of credit risk.
Market risk is the risk of losses from balance sheet and off-balance-sheet posi-
tions arising from the movements of market prices (Bessis 2015, p. 189). Market risk
involves other risks, such as the risk of changes in interest rates and exchange rates.
Ultimately, market risk arises from changing market conditions. Market risk expo-
sure relates not only to the assets and liabilities on the balance sheet but also to
off-balance sheet positions such as securitized products and derivatives.
The main cause of many risks for financial institutions is mismatching. If
financial institutions were able to match their assets and liabilities (for instance,
maturities, interest rate and currencies) perfectly, then the only risk they faced
would be credit risk. Nevertheless, it is not practicable for a bank to match all its
assets and liabilities.
Liquidity risk is the risk that an entity does not meet its commitments on time
because sufficient liquid assets are not available. Banks and financial institutions collect
deposits or funds and place them in investments and loans with different maturities.
Liquidity risk and the cost of liquidity arise from the gaps between maturities.
Operational risk is defined as the risk of loss resulting from inadequate or failed
internal processes, people, and systems, or from external events (BCBS 2005,
p. 140). Losses from operational risk arise from a range of operational weaknesses
including inadequate systems, management failure, faulty controls, fraud, human
error, natural and man-made catastrophes (e.g., earthquakes, terrorism) and other
non-financial risks (Crouhy et al. 2014, p. 35).
Credit risk, market risk, liquidity risk, and operational risk are the most impor-
tant risks in the banking sector. Business risk, which is related to a bank’s long-term
business strategy, reputational risk, which damages corporate trust, and systemic
risk can also cause losses in banks. The danger of systemic risk has been shown
in the global financial crisis. Systemic risk is the risk that one financial institution
affects its counterparties with a domino effect and threatens the stability of the
financial system as a whole.
Economic capital is also related to the risks faced by a bank. Economic capital acts
as a buffer that provides protection against all the risks faced by a bank. The BCBS
(2008) defines economic capital as the methods and practices that allow a bank to
attribute capital to cover the economic effects of its risk-taking activities. Although
economic capital is conceptually similar to regulatory capital, it differs from regula-
tory capital in the sense that economic capital is calculated internally by the bank
itself, and the calculation methodology and risk parameters may be different
from those of the regulator’s framework. Regulatory capital is mandatory capital,
and it is calculated according to the rules and methodologies of the regulators.
However, economic capital is an estimate of the necessary capital and is used
internally by the bank to run its business and manage its own risks. The estimate of
the economic capital can be different from the minimum required regulatory capital,
because the bank may include risks that are not considered in calculating the regula-
tory capital, or may use different methodologies to estimate the economic capital.
Economic capital is the amount of capital that a bank needs to be able to absorb
unexpected losses up to a certain time horizon at a given confidence level. Most
banks calculate economic capital on a monthly or quarterly basis, and the confi-
dence level depends on the risk appetite of the bank. In the measurement of
economic capital, risks from certain activities or exposures are identified, and
those risks are measured and quantified. After the aggregation of the risks, capital
is allocated to them. A wide range of risk measures is used. Standard deviation,
Value at Risk (VaR), Expected Shortfall (ES), and spectral and distorted risk
measures are commonly used to measure risks. In practice, VaR and ES are the
two most widely used risk measures, with VaR being the most widely used in
banking. VaR is more easily explained and understood, but it may not always
satisfy the subadditivity condition and this leads to a lack of coherence. ES is
coherent and it makes capital allocation and internal limit setting consistent with the
overall portfolio measure of risk. However, it is not easily interpreted. Economic
capital measures are used in decision making areas such as profitability, pricing and
portfolio optimization. Economic capital measures also influence senior manage-
ment decisions (BCBS 2009, pp. 8–21).
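As an illustration of the two measures, the following is a historical-simulation sketch on synthetic daily P&L; the 99 % confidence level, one-day horizon and loss distribution are assumptions for illustration, not a bank's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
pnl = rng.normal(loc=0.0, scale=1_000_000.0, size=10_000)  # synthetic daily P&L
losses = -pnl  # positive values are losses

alpha = 0.99
var = np.quantile(losses, alpha)     # VaR: loss exceeded (1 - alpha) of the time
es = losses[losses >= var].mean()    # ES: average loss beyond the VaR

print(f"99% VaR: {var:,.0f}   99% ES: {es:,.0f}")  # ES always exceeds VaR
```

Because ES averages the whole tail beyond the VaR, it is subadditive and therefore coherent, at the cost of being harder to communicate than a single quantile.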
The Basel I Accord was published in 1988 and only includes credit risk in the
determination of regulatory capital. Basel I was revised in 1996, when market risk
was included. The Committee introduced Basel II in 2004. Basel II comprises three
pillars: minimum capital requirements, a supervisory review process, and market
discipline. Operational risk was included in the risk calculation. Risk measurement
methods were enhanced with Basel II. Following the global financial crisis, Basel
III was introduced, in 2010. Basel III covers a comprehensive set of reforms to
Basel II.
The Basel Capital Accords are international guidelines to encourage conver-
gence across the world towards common standards in the banking sector. Although
they are not legally binding, the Basel Capital Accords have been adopted by the
regulators of many countries. However, the implementation of international bank-
ing standards has some limitations in the real world. First, it takes several years for
countries to make changes to their local legislative and regulatory frameworks.
Secondly, some countries are not applying the Basel standards to the whole banking
industry: they may exempt small banks from adopting the standards. Thirdly, the
national regulators and individual banks implement the standards with a consider-
able degree of interpretation. For example, when the European Capital Require-
ments Directive IV transposed Basel III into EU law, European banks were
exempted from deducting investments in insurance entities from their core capital
(Crouhy et al. 2014, pp. 67, 68).
4.1 Basel I
Under Basel I, assets are assigned one of five risk weights: 0, 10, 20, 50 and
100 %. Basel I addressed only credit risk.
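A minimal sketch of this bucket-based weighting follows; the asset categories and amounts are simplified examples of the five buckets, not the accord's full mapping, with Basel I's 8 % minimum capital ratio applied to the resulting risk-weighted assets.

```python
# Simplified examples of Basel I's five risk-weight buckets (illustrative,
# not the accord's full asset mapping).
RISK_WEIGHTS = {
    "cash": 0.00,
    "public_sector_claims": 0.10,
    "interbank_claims": 0.20,
    "residential_mortgages": 0.50,
    "corporate_loans": 1.00,
}

def risk_weighted_assets(portfolio):
    """Sum of exposure times risk weight over the portfolio."""
    return sum(amount * RISK_WEIGHTS[cat] for cat, amount in portfolio.items())

portfolio = {"cash": 100.0, "interbank_claims": 200.0,
             "residential_mortgages": 300.0, "corporate_loans": 400.0}
rwa = risk_weighted_assets(portfolio)   # 0 + 40 + 150 + 400 = 590
minimum_capital = 0.08 * rwa            # Basel I minimum: 8% of RWA
print(f"RWA = {rwa:.0f}, minimum capital = {minimum_capital:.1f}")
```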
Basel I has been adopted in more than 100 countries, and Basel II has not yet
replaced Basel I in many countries. Europe adopted Basel II in 2008. On the other
hand, the United States has not yet adopted Basel II. Larger banks in the United
States will move directly to adopting Basel III (Crouhy et al. 2014, p. 72). Although
Basel I was successful in forcing banks to maintain higher capital ratios, it has been
argued that Basel I encouraged banks to use regulatory capital arbitrage techniques,
particularly securitization (Jablecki 2009, p. 16). Regulatory capital arbitrage
techniques enable banks to lower their capital requirements while keeping the risk level
unchanged. Capital arbitrage lowers the efficiency of the Basel Capital Accords
(Balthazar 2006, pp. 35-36).
Besides the negative effects of regulatory capital arbitrage, there are other
deficiencies in the Basel I framework. Risk sensitivity does not exist in Basel
I. The risk weights do not adequately reflect the riskiness of bank assets. There is
also a “one size fits all” approach: the requirements are the same for all banks,
despite the differences in their risk levels, sophistication, and activity types. It is
also argued that Basel I focused primarily on credit risk and ignored the other risks
that are faced by banks. Although market risks arising from banks’ exposures to
foreign exchange, traded debt securities, equities, commodities and options were
incorporated into the capital requirement measurement by the Market Risk Amend-
ment in 1996, there are still other risks like operational risk, reputation risk,
strategic risk, and so on (Balthazar 2006, pp. 35-36).
4.2 Basel II
Despite the deficiencies of Basel I, it was beneficial to the banking industry because
it raised capital levels in the banking sector. In order to develop a more
sophisticated regulatory framework, the Basel Committee started on Basel II. The
Basel Committee issued a proposal in 1999 to replace Basel I. The final proposal
was published in 2004. BCBS (2015) states that Basel II aims to improve the way in
which regulatory capital requirements reflect underlying risks, and to deal better
with financial innovation. The Basel II regulation framework comprises three
pillars: minimum capital requirements, a supervisory review process, and market
discipline.
The definition of capital is the same under Basel II as it was under Basel I. Banks
must maintain a minimum capital of 8 percent of their risk-weighted assets.
However, there are two key changes relating to the capital requirement. First, two
more advanced approaches for calculating credit risk are introduced. Risk sensitiv-
ity and flexibility are increased through an updated standardized approach and new
internal ratings-based (IRB) approaches. Second, Basel II extended the risk calcu-
lation to include operational risk. Thus, risk-weighted assets are the sum of the
assets subject to market risk, credit risk, and operational risk. Banks may use one of
Enhancing the Risk Management Functions in Banking: Capital Allocation and. . . 79
three different approaches — the basic indicator approach, the standardized approach
and the advanced measurement approach — to measure their operational risk.
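The pillar 1 arithmetic described above can be sketched in a few lines; the RWA figures below are hypothetical illustrations, not data from this chapter.

```python
# Illustrative sketch of the Basel II minimum capital requirement: 8 % of
# total risk-weighted assets (RWA), where RWA sums the credit-, market-, and
# operational-risk components. All figures are hypothetical.

def minimum_capital(credit_rwa, market_rwa, operational_rwa, ratio=0.08):
    """Return the minimum regulatory capital for the given RWA components."""
    total_rwa = credit_rwa + market_rwa + operational_rwa
    return ratio * total_rwa

# A bank with (hypothetical) RWA components of 900, 60, and 40 monetary units
# must hold at least 0.08 * 1000 = 80 units of capital.
required = minimum_capital(900, 60, 40)
print(required)  # 80.0
```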
The second pillar in the Basel regulation framework is the supervisory review
process, which aims not only to ensure that banks have adequate capital, but also to
encourage banks to use better risk management techniques to monitor and manage
their risks. Banks are expected to strengthen their risk management, apply internal
limits, strengthen the level of provisions and reserves, and improve internal con-
trols. Banks should apply forward-looking stress tests to identify possible events or
changes in market conditions that may have an adverse impact. The second pillar
gives a clear role to the national supervisory authorities. Supervisors are expected to
evaluate how well banks are assessing their capital requirements against their risks.
The supervisors are also expected to intervene in banks where appropriate. Super-
visors should intervene at an early stage to prevent capital from falling below the
minimum required levels and take rapid remedial action if capital is not maintained
or restored.
The third pillar aims to strengthen market discipline. A set of disclosure require-
ments has been defined to increase the transparency of each bank’s risk profile and
risk policy. Disclosure requirements cover a qualitative description of the risk
management objectives, policies and techniques of the bank, and quantitative
details of the scope of application, capital structure (elements and instruments),
capital adequacy, pillar 1 risks (credit, market, and operational risks) including
equities and securitization holdings, and interest rate risk in the banking book
(Docherty and Viort 2014, p. 132).
The 2007–2008 global financial crisis and the failure of many banks revealed
that Basel II was not capable of covering all risks. Basel II was criticized for being a
failed piece of bank regulation. However, this was a harsh criticism because Basel
II was newly implemented in Europe at the time of the crisis, and the United States
had not implemented Basel II when the crisis started. In addition, Basel II was
imposed on commercial banks and not on investment banks. However, the crisis
showed that Basel II had many potential inadequacies and that it needed reform
(Crouhy et al. 2014, pp. 73–75). According to Roubini (2008), capital adequacy
ratios are procyclical in Basel II and promote credit booms in good times and credit
busts in bad times. There is little emphasis on liquidity risk management. Excessive
reliance on internal risk management models and rating agencies is another
weakness of Basel II.
While the national regulators and banks were in the process of adopting and
implementing Basel II, the 2007–2008 global financial crisis began. According to
the Basel Committee, the causes of the crisis were excessive leverage, weak capital
bases, poor funding profiles and insufficient liquidity buffers. The Basel Committee
immediately revised the market risk framework of Basel II, and the July 2009
80 S. Kuzucu and N. Kuzucu
risk indicators, reliance on internal and agency assessments continues in Basel III
(Docherty and Viort 2014, pp. 161–163). Another criticism is that the new regula-
tory framework is too costly. The Institute of International Finance (IIF) estimated
the cost of Basel III implementation in the G20 countries at US$1.3 trillion in 2011.
It is also suggested that Basel III is too complex and should be replaced by a simple
leverage ratio (Crouhy et al. 2014, p. 113) (Table 1).
The Financial Stability Board (FSB) and the BCBS established two working
groups, the Macroeconomic Assessment Group (MAG) and the Long-term Eco-
nomic Impact (LEI) Group, in order to measure the macroeconomic impacts of the
stronger bank capital and liquidity requirements. The MAG report (BCBS 2010a)
investigates the impact of a one percentage point increase in bank capital ratios by
aggregating the outputs of macroeconomic models from 15 member countries and a
number of international organizations. The report employs forecasting and policy
analysis models to estimate the impact on GDP. A total of 97 sets of model results
and simulations were submitted to carry out these estimates. The findings reveal
that the overall effect of a one percentage point capital increase would be a
widening of the lending spreads by a maximum of 15.5 basis points and a reduction
in the level of GDP by a maximum of 0.15 %.
The LEI report (BCBS 2010b) assesses both the economic costs and the benefits
of the stronger capital and liquidity regulations. The report uses data from a total of
6,660 banks from 13 OECD countries over the period from 1993 to 2007. The study
on cost impacts concludes that a one percentage point change in the capital ratio
raises loan spreads by 13 basis points. Loan spreads are estimated to rise by 14 basis
points because of the cost of meeting the liquidity standards. The report also
examines the impact of increases in bank capital and liquidity on the long-term
level of output. Macroeconomic models find that a one percentage point increase in
capital reduces long term GDP by 0.09 %. GDP reduction associated with the
liquidity requirements is 0.08 %.
There are other studies that evaluate the cost of higher capital requirements on
economic activity. Recent studies on the cost of capital reveal that there are
opportunity costs, in terms of reduced lending and economic activity, of higher
bank capital requirements. Higher capital requirements could increase lending
rates, which in turn would decrease credit levels. Sutorova and Teply (2013)
study the impact of Basel III on the lending rates of European Union banks. They
employ a simultaneous equations model where banks choose the optimal level of
capital. Using data for 594 banks in the European Union during the period
2006–2011, they find that a one percent increase in the common equity ratio would
increase lending rates by 19 basis points and that full adoption of Basel III would
decrease the level of loans by 2 % from the current level. However, they do not
expect a larger drop in loans because many European banks already comply
with the Basel III capital requirements and the elasticity of demand for loans is
relatively low. Fraisse et al. (2015) measure the impact of bank capital requirements
on corporate borrowing and business activity. In their study they use a large sample
of loans extended by French banks to French firms over the period 2008–2011.
Their findings reveal that a one percentage point increase in capital requirements
leads to a reduction in lending of approximately 10%. Noss and Toffano (2014)
estimate the effects of capital requirements on lending using data from UK banks.
Their findings support the theory that an increase in the aggregate bank capital
requirements is associated with a reduction in lending. The impact on GDP growth
is found to be statistically insignificant.
Despite the unfavorable effects of the capital requirements on lending and GDP
growth, the costs of a higher capital ratio appear to be small when compared to the
estimated benefits. Better capitalized banks are less vulnerable to shocks. The
literature on the benefits of capital requirements reveals that greater bank capital
reduces the probability and costs of banking crises (BCBS 2010b; de Bandt and
Chahad 2015; De Ramon et al. 2012; Miles et al. 2013). The LEI report (BCBS
2010b) uses three different methods (reduced-form models, calibrated portfolio
models and calibrated stress test models) in order to estimate the relationship
between regulatory requirements and the probability of a crisis. Using banking
data for 13 OECD countries over the period between 1980 and 2008, the report
concludes that there is a significant reduction in the likelihood of a banking crisis
with higher levels of capitalization and liquidity. Increasing the capital ratio from
7 % to 8 % reduces the probability of a banking crisis by one third. The findings also
support the argument that higher capital and liquidity standards are likely to reduce
not just the probability, but also the severity, of banking crises. The report also
analyses the net benefits of the capital requirements. Net benefits are measured by
the percentage change in the yearly level of output. The findings conclude that the
net benefits are positive and there is considerable scope to increase capital and
liquidity standards while yielding positive net benefits.
6 Conclusion
leverage ratios are introduced. The minimum capital requirement is increased with
the capital conservation buffer.
The economic impacts of the higher capital requirements have been studied in
the literature. There is an overall consensus that higher capital requirements
increase lending rates and widen the loan spreads in the banking sector. There is
also a small reduction in the level of loans because of the increased lending rates.
On the other hand, higher bank capital reduces the probability and costs of banking
crises. The costs of the higher capital ratio appear to be small when compared to the
estimates of the benefits. The net result is positive.
References
Schroeck G (2002) Risk management and value creation in financial institutions. John Wiley &
Sons, New Jersey
Sutorova B, Teply P (2013) The impact of Basel III on lending rates of EU banks. Finance a uver-
CJEF 63(3):226–243
Serpil Kuzucu is a banking expert and has been working for a multinational banking group’s
affiliate in Istanbul for more than ten years. She holds a PhD in banking from Marmara University
School of Banking. She has an MA and BA degree in economics from Marmara University. Her
research areas are emerging markets and banking.
Abstract This study explores the calibration of market risk measures during periods
of economic downturn. This calibration is done in two frameworks. First, the individ-
ual profit and loss (P&L) distribution is modelled using two types of extreme
value distribution, namely the generalized extreme value (GEV) distribution and
the generalized Pareto distribution (GPD). The resulting shape parameters are all
positive, indicating that these distributions can in fact capture the negative skewness
and excess kurtosis of the P&L distribution during periods of
economic downturn. We show that the presence of such positive shape parameters
indicates the existence of large probabilities of extreme price drops in the left tail of
the P&L distribution. Based on these results, the second framework used in this
study builds two multivariate copula distributions with GEV and GPD marginals.
This procedure captures the dependence structure of stock markets during periods
of financial crisis. To illustrate the computation of market risk measures, we
consider one elliptical copula (the Student t copula) and one Archimedean copula
(the Gumbel copula). Using two stock market indices, we compute what we refer to
as EVT-based market risk measures and copula-based market risk measures for
both the left and right tails of the P&L distribution. Our results suggest that copula-
based risk measures are more reliable in predicting the behavior of market risks
during periods of economic downturn.
1 Introduction
After the 2007–2008 financial crisis, the practice of risk management underwent
significant changes related to market risk modelling. Basel III, a recently
modified version of Basel II, aims at determining the minimum capital
requirements that financial institutions need to keep on their balance sheets in order
to promote stability and make global financial institutions more resilient to financial
crises. Since then, there has been a significant increase in the number of academic
publications proposing new risk models that can predict the large losses
observed during financial crises. Furthermore, Basel III encourages finan-
cial institutions (banks) to develop their own internal market risk models under the
supervision of the national central bank authorities in order to regularly meet the
minimum capital requirement. These market risk models make use of different risk
factors including interest rates, currency spreads, credit spreads, equity prices and
their respective volatilities, as well as other balance sheet items. The minimum
capital requirement is calculated using the value-at-risk (VaR) and the expected
shortfall (ES) measures, which are essentially quantiles of the profit and loss
(P&L) distribution. In many applications the P&L distribution is simply
assumed to be Gaussian (see for example Longin 1996).
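Under the Gaussian baseline just mentioned, VaR and ES have closed forms as the quantile and tail mean of a normal loss distribution. The sketch below illustrates this with assumed parameters `mu` and `sigma` (not estimates from this study).

```python
# A minimal sketch of VaR and ES as quantiles of a Gaussian P&L (loss)
# distribution, the baseline assumption the chapter argues against.
# mu and sigma are hypothetical weekly loss parameters.
from scipy.stats import norm

def gaussian_var_es(mu, sigma, p=0.99):
    """VaR and ES of a N(mu, sigma) loss distribution at level p."""
    z = norm.ppf(p)                               # standard normal quantile
    var = mu + sigma * z                          # VaR is the p-quantile
    es = mu + sigma * norm.pdf(z) / (1.0 - p)     # mean loss beyond the VaR
    return var, es

var, es = gaussian_var_es(mu=0.0, sigma=0.02, p=0.99)
print(var, es)   # ES always exceeds VaR
```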
This study contributes to the ongoing literature by calibrating different market
risk measures using a combination of extreme value theory (EVT) distri-
butions, GARCH processes, and copulas, and by highlighting their impact in
predicting extreme losses during financial crises. The study explores the computa-
tion of two market risk measures (VaR and ES) under non-normality,
i.e. under periods of economic downturn. To overcome the effect of extreme market
conditions on the modeling of the market risk measures, two frameworks are
considered in this study. Firstly, we consider an individual market risk factor and
compute its corresponding market risk measures by making use of two extreme
value distributions, namely the generalized extreme value (GEV) distribution and
the generalized Pareto distribution (GPD). Secondly, we consider a portfolio made
of a finite number of assets or risk factors and build their corresponding joint
distribution using multivariate copulas. Both elliptical and Archimedean uncondi-
tional copulas are used. Although this technique is known to some quantitative
analysts and financial practitioners, the contribution of this study lies in stressing the
use of multivariate copula distributions in market risk modeling with GEV
marginals rather than the usual GPD marginals (see for example
Frad and Zouari 2014). The two market risk measures (VaR and ES) are computed
for the lower and upper tails of the empirical P&L distribution.
We illustrate the computation of the copula and EVT market risk measures with
two stock indices collected from the Bloomberg database. The two indices consid-
ered are the weekly U.S. SP500 and the UK FTSE100, from April 17, 2000 to
February 29, 2016.
The computation of the copula and EVT market risk measures is done in three
different steps. The first step consists in removing the effects of autocorrelation and
The Calibration of Market Risk Measures During Period of Economic Downturn:. . . 91
heteroscedasticity patterns in the return series. This is done by fitting the return
series to an ARMA(1,1)-APARCH(1,1)¹ model of order one. The estimation
results are reported in Table 1. All coefficients are found to be statistically signif-
icant, leading to the conclusion that both the leverage effect and the Taylor effect²
have a significant impact on stock market returns.
The second step involves fitting the standardized residuals of the ARMA-
APARCH model to the GEV and GPD distributions respectively. The estimated
coefficients are reported in Table 2. The shape parameters are all found to be
positive, indicating the presence of negative skewness and excess kurtosis during
periods of financial crisis. The presence of such excess skewness and kurtosis
indicates serious deviations from the normal distribution and suggests that the
empirical P&L distribution is asymmetric and exhibits fat-tailed behaviour. During
a financial crisis, a positive shape parameter implies extreme price drops with large
probabilities in the left tail of the P&L distribution. Many studies (Giacomini and
Hardle 2005; Hotta and Palaro 2006; Hotta et al. 2008) intended to deal with the
fat-tail behavior of stock markets have failed to highlight these characteristic
features of fat-tailed distributions. These features have prevented many of these
EVT models from being of practical use in risk management during financial crisis
periods, as they result in unrealistic market risk measures.
Based on these characteristic features, the third step consists in modeling the
dependence structure of P&L distributions during financial crisis. We do so by
building two multivariate copula distributions: the first with an elliptical copula
(the Student t copula) and the second with an Archimedean copula (the Gumbel
copula). Lastly, the fourth step consists in computing the copula market risk
measures for both the left and right tails of the P&L distribution. These market risk
measures are reported in Tables 3, 4, and 5 for the GPD, GEV, and copula
distributions respectively. We find that the GEV-based risk measure (the return
level) is invariant to the significance level at which it is calculated. Copula-based
risk measures are found to be coherent since they are less than the sum of the
individual risks. We find that during financial crises copula-based market risk
measures are larger than during periods of tranquility. This can be explained by the
fact that stock markets tend to co-move more strongly in downturns than in normal
periods. The rest of the study is structured as follows. Section 2 will discuss in
detail the process of fitting the data to an ARMA-GARCH process in order to
remove the effect of autocorrelation and heteroscedasticity in the data. This will be
followed by the fitting of the standardized residuals to the GEV and GPD distribu-
tions, and the building of the multivariate copula distribution. Section 3 will present
the empirical analysis, while Sect. 4 will conclude.
¹ ARMA(1,1)-APARCH(1,1) stands for the Autoregressive Moving Average with Asymmetric
Power Autoregressive Conditional Heteroscedasticity model proposed by Ding et al. (1993).
² The Taylor effect: the sample correlations of absolute returns are larger than those of squared
returns (Taylor 1986).
92 J.W. Muteba Mwamba
2 Methodology
The computation of the copula and EVT based market risk measures involves three
steps. These steps are described in detail below.
STEP 1: The filtering of the returns, i.e. removing the effect of autocorrelation
and heteroscedasticity in the data, is done by fitting the return series to an
ARMA(1,1)-APARCH(1,1) model of the following form:

r_t = \mu + \phi r_{t-1} + \theta u_{t-1} + u_t, \qquad u_t = \sigma_t z_t \qquad (1)

\sigma_t^{\delta} = \omega + \alpha_1 \left( |u_{t-1}| - \gamma_1 u_{t-1} \right)^{\delta} + \beta_1 \sigma_{t-1}^{\delta} \qquad (2)

where r_t, u_t, \sigma_t, \delta, and \gamma_1 are the returns series of a market risk factor, the error term
of the model, the volatility of the return, the Taylor effect, and the leverage effect
respectively. Equations (1) and (2) restate the standard ARMA(1,1)-APARCH(1,1)
specification of Ding et al. (1993). This model is estimated by making use of the
maximum likelihood method.
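Step 1 can be sketched with a simplified GARCH(1,1) filter standing in for the full ARMA(1,1)-APARCH(1,1) maximum-likelihood fit; the parameters `omega`, `alpha`, `beta` and the simulated data below are assumptions for illustration only.

```python
# A simplified stand-in for Step 1: filtering returns with a GARCH(1,1)
# volatility recursion. The chapter estimates an ARMA(1,1)-APARCH(1,1) by
# maximum likelihood; here omega, alpha, beta are assumed, not estimated.
import numpy as np

def garch_filter(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Return the conditional volatilities and standardized residuals."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                       # initialize at sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    sigma = np.sqrt(sigma2)
    return sigma, r / sigma                   # filtered (standardized) returns

rng = np.random.default_rng(0)
sigma, z = garch_filter(rng.normal(0.0, 0.02, size=829))
print(z.mean(), z.std())                      # roughly 0 and 1
```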
STEP 2: the residuals of the above-mentioned model are then standardized and
fitted to two extreme value theory (EVT) distributions, namely the GEV and the
GPD distributions. We refer to these standardized residuals as filtered returns. The
process of fitting these filtered returns to EVT distributions is described below.
The process of fitting the filtered returns of each risk factor to the GEV distribution is
often referred to as the Block Maxima method. This method collects the largest
losses (profits) of each monthly, quarterly or yearly block period and fits them to the
GEV distribution. Assume that X_1, X_2, \ldots, X_n is a sequence of iid random variables
representing the largest losses (profits) for the left tail (right tail) of the P&L
distribution of risk factors with common distribution function F. The limiting distribu-
tion of the normalised largest losses (profits) X_1, X_2, \ldots, X_n is known as the
generalized extreme value distribution and is expressed as:

H_{(\xi,\mu,\sigma)}(x) = \exp\left\{ -\left[ 1 + \xi \, \frac{x - \mu}{\sigma} \right]^{-1/\xi} \right\} \qquad (3)
where \xi represents the shape parameter of the tail distribution, \mu its location, and \sigma its scale
parameter. When \xi = \alpha^{-1} > 0, Eq. (3) corresponds to the Fréchet type of distribu-
tions, which includes some well-known fat-tailed distributions such as the Pareto,
Cauchy and Student-t distributions. When \xi = -\alpha^{-1} < 0, Eq. (3) corresponds to the
Weibull type of distributions, which includes among others the Pareto type II
where R_n^k represents the return level, that is, the maximum loss expected in one out of
k periods of length n, computed as:

R_n^k = H_{(\xi,\mu,\sigma)}^{-1}\left( 1 - \frac{1}{k} \right) \qquad (5)
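The block-maxima fit and the return level of Eq. (5) might be sketched with scipy as below; note that scipy's `genextreme` shape `c` equals minus the \xi of Eq. (3), and the simulated data and 22-week block length are illustrative assumptions.

```python
# A sketch of the Block Maxima step: fit a GEV to block maxima of simulated
# heavy-tailed losses and compute the return level of Eq. (5). The data are
# simulated for illustration, not the chapter's stock index returns.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=22 * 36)        # heavy-tailed pseudo-losses
block_maxima = losses.reshape(-1, 22).max(axis=1)  # maxima of 22-week blocks

c, loc, scale = genextreme.fit(block_maxima)
xi = -c                                            # scipy's shape c is minus xi
return_level = genextreme.ppf(1.0 - 1.0 / 22, c, loc=loc, scale=scale)
print(xi, return_level)  # xi > 0 would signal a Fréchet-type (fat) tail
```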
To fit the filtered returns of each risk factor to a GPD distribution, one needs an
optimal threshold, set in consultation with an experienced risk manager or by
making use of the empirical mean excess function introduced by Davison and Smith
(1990). The mean excess function plots the conditional mean of the largest losses
(profits) above different thresholds using the following expression:
me(u) = \frac{\sum_{i=1}^{N_u} (x_i - u)}{\sum_{i=1}^{N_u} I_{(x_i > u)}} \qquad (7)
where I_{(x_i > u)} = 1 if x_i > u and 0 otherwise, and N_u is the number of extreme returns over the
threshold u. If the empirical mean excess function has a positive gradient above a
certain threshold u, it is an indication that the return series follows the GPD with a
positive shape parameter \xi. In contrast, an exponentially distributed log-return
series would show a horizontal mean excess function, while a short-tailed
log-return series would have a negatively sloped function.
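The empirical mean excess function of Eq. (7) can be sketched as follows; the simulated fat-tailed sample and the candidate thresholds are arbitrary illustrative choices.

```python
# A sketch of the empirical mean excess function of Eq. (7): the average
# exceedance over each candidate threshold u. An upward slope suggests a GPD
# tail with positive shape parameter. Data are simulated for illustration.
import numpy as np

def mean_excess(losses, u):
    """Average of (x - u) over the observations exceeding the threshold u."""
    exceedances = losses[losses > u] - u
    return exceedances.mean() if exceedances.size else np.nan

rng = np.random.default_rng(2)
losses = rng.pareto(3.0, size=5000)      # fat-tailed (Lomax) sample
for u in (0.5, 1.0, 1.5):
    print(u, mean_excess(losses, u))     # increases with u for a fat tail
```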
The process of fitting filtered returns to a GPD distribution is often referred to as
the peaks-over-threshold approach. Let X be a vector of filtered returns representing
the largest losses for the left tail (or the profits for the right tail) above a specific
threshold u, and assume that the distribution function of X is given by F. The limiting
distribution of X above this threshold is known as the GPD and is given
by the following expression:
G_{\xi,\beta(u)}(x) = \begin{cases} 1 - \left( 1 + \dfrac{\xi x}{\beta(u)} \right)^{-1/\xi}, & \xi \neq 0 \\[4pt] 1 - \exp\left( -\dfrac{x}{\beta(u)} \right), & \xi = 0 \end{cases} \qquad (8)
where \xi is the shape parameter and u is the threshold. It is assumed that
the random variable x is positive and that \beta(u) > 0, with
x \geq 0 for \xi \geq 0 and 0 \leq x \leq -\beta(u)/\xi for \xi < 0. The shape parameter
\xi is independent of the threshold u. If \xi > 0 then G_{\xi,\beta(u)} is a Pareto distribution,
while if \xi = 0 then G_{\xi,\beta(u)} is an exponential distribution. If \xi < 0, then G_{\xi,\beta(u)} is a
Pareto type II distribution. These parameters are estimated by making use of the
maximum likelihood method. The parameters of the GPD are obtained
by maximising the following log-likelihood function:
L(\xi, \beta) = -N_u \log(\beta) - \left( 1 + \frac{1}{\xi} \right) \sum_{i=1}^{N_u} \log\left( 1 + \frac{\xi x_i}{\beta} \right) \qquad (9)
Embrechts et al. (1997) show that the tail distribution of the GPD can
be expressed as follows:

\hat{F}(x) = 1 - \frac{N_u}{n} \left( 1 + \frac{\hat{\xi}(x - u)}{\hat{\beta}} \right)^{-1/\hat{\xi}} \qquad (10)
Two risk measures can be directly calculated as quantiles of the GPD distribution:
the VaR and the ES. Since the VaR is not a coherent risk measure and does not
satisfy the diversification principle, it is advisable to use the ES, which measures the
expected loss of a portfolio given that the VaR is exceeded.

VaR(p) = u + \frac{\hat{\beta}}{\hat{\xi}} \left[ \left( \frac{1 - p}{N_u/n} \right)^{-\hat{\xi}} - 1 \right] \qquad (11)

ES(p) = E(Y \mid Y > VaR(p)) = VaR(p) + E(Y - VaR(p) \mid Y > VaR(p)) \qquad (12)

ES(p) = \frac{VaR(p)}{1 - \hat{\xi}} + \frac{\hat{\beta} - \hat{\xi} u}{1 - \hat{\xi}} \qquad (13)
where p is the significance level at which the VaR is computed. For example, when
p = 0.99, Eqs. (11) and (12) produce the VaR and ES measures at the 99 %
significance level. Most VaR and ES methodologies (except the historical simulation
method) assume that the joint P&L distribution of the risk factors is a multivariate
normal distribution. The dependence structure between different risk factors is
therefore defined by the correlation between those factors. Correlation is very
often incorrectly used to model the dependence structure (Embrechts et al. 2002).
The authors argue that although independence of two random variables
implies that their correlation is equal to zero, the opposite is generally not
true, i.e. zero correlation does not imply independence. A more prevalent
approach which overcomes this disadvantage is to model the dependence between
risk factors with copulas.
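The peaks-over-threshold calculations above might be wired together as in the sketch below, which fits the GPD with scipy and evaluates Eqs. (11) and (13); the simulated data, the 90 % threshold choice, and the seed are assumptions, not the chapter's calibration.

```python
# A sketch of the peaks-over-threshold step: fit the GPD to the exceedances
# over an illustrative threshold and evaluate the VaR and ES formulas of
# Eqs. (11) and (13). Data are simulated, not the chapter's index returns.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
losses = np.abs(rng.standard_t(df=4, size=829))   # heavy-tailed pseudo-losses
u = np.quantile(losses, 0.90)                     # illustrative threshold
excess = losses[losses > u] - u
n, n_u = losses.size, excess.size

xi, _, beta = genpareto.fit(excess, floc=0.0)     # shape and scale, loc fixed at 0

def gpd_var(p):
    """Eq. (11): GPD-based VaR at level p."""
    return u + (beta / xi) * (((1.0 - p) / (n_u / n)) ** (-xi) - 1.0)

def gpd_es(p):
    """Eq. (13): GPD-based ES at level p."""
    return gpd_var(p) / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)

print(gpd_var(0.99), gpd_es(0.99))                # ES exceeds VaR
```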
STEP 3: consists in modelling the dependence structure between different risk
factors with copulas rather than correlation. The word copula comes from the Latin
for a "link" or "bond" and was coined by Sklar (1959), who first proved the theorem
that a collection of marginal P&L distributions can be coupled together via a copula
to form a multivariate distribution. To overcome the effects of extreme market
conditions, we will assume that the marginal P&L distributions are GEV or GPD
distributions. Market risk measures will then be computed from data simulated
using Monte Carlo methods. Among other properties, a copula satisfies:
∘ For all u_i \in [0, 1], C(u_1, u_2, \ldots, u_n) = 0 if at least one of the u_i is equal to zero
Since the t-distribution tends to the normal distribution when \nu goes to infinity, the
t-copula also tends to the normal copula as \nu \to +\infty.
Archimedean copulas, on the other hand, allow for a great variety of different
dependence structures. This family of copulas includes the Clayton copula, the Frank
copula, the Joe copula, the symmetric Joe copula, the Gumbel copula, etc. For
Archimedean copulas, the dependence structure between different risk factors is
embedded into one function of a single parameter, called the generator \phi(\theta).
Let for example the generator function for a Gumbel copula be given by:

\phi_\theta(t) = (-\ln t)^\theta, \qquad \theta \geq 1
There exist different methods to estimate the parameters of copulas; these include
the fully parametric (maximum likelihood) method, the inference functions for
margins (IFM) method, and semiparametric and nonparametric methods. Given that
we have precise marginal distributions, namely the GEV and GPD distributions, we
will make use of the ML
method to estimate the parameters of the copula in Eq. (20) shown below. The ML
method proceeds as follows. Let F be a multivariate distribution function with
continuous marginals F_i and copula C. Then, differentiating Eq. (20),
we obtain the joint density function f(\cdot):

f(x_1, \ldots, x_n) = c\left( F_1(x_1), \ldots, F_n(x_n) \right) \prod_{i=1}^{n} f_i(x_i) \qquad (17)
where f_i is the density function of the marginal F_i and c is the density of the copula, given
by:

c(u_1, \ldots, u_n) = \frac{\partial^n C(u_1, \ldots, u_n)}{\partial u_1 \cdots \partial u_n} \qquad (18)
Let \delta = (\beta_1, \ldots, \beta_n, \alpha) be the vector of all the parameters to estimate, where \beta_i is
the vector of the parameters of the marginal distribution F_i, and \alpha is the vector of the
copula parameters. The log-likelihood of Eq. (17) can be written as:

l(\delta) = \sum_{t=1}^{T} \ln c\left( F_1(x_{1t}; \beta_1), \ldots, F_n(x_{nt}; \beta_n); \alpha \right) + \sum_{t=1}^{T} \sum_{i=1}^{n} \ln f_i(x_{it}; \beta_i) \qquad (19)
x_i = F_i^{-1}(u_i) \qquad (22)

where F_i^{-1} denotes the inverse of F_i, i.e. the quantile function of F_i.
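Step 3 might be sketched as below: uniforms with t-copula dependence are drawn via scipy's `multivariate_t`, pushed through the inverse-quantile transform of Eq. (22) with assumed GPD marginals, and the portfolio VaR and ES are read off the simulated joint losses. Every parameter here is an illustrative assumption.

```python
# A sketch of the copula Monte Carlo step: simulate from a Student-t copula
# with two GPD marginals and compute the portfolio VaR and ES empirically.
# The copula and marginal parameters are assumed values, not fitted ones.
import numpy as np
from scipy.stats import multivariate_t, t, genpareto

nu, rho = 4.0, 0.6                                # assumed copula parameters
corr = np.array([[1.0, rho], [rho, 1.0]])

sample = multivariate_t(shape=corr, df=nu, seed=4).rvs(size=100_000)
u = t.cdf(sample, df=nu)                          # uniforms joined by a t-copula

# Assumed GPD marginal parameters (shape c, scale) for the two indices' losses.
x1 = genpareto.ppf(u[:, 0], c=0.20, scale=0.010)
x2 = genpareto.ppf(u[:, 1], c=0.25, scale=0.012)

portfolio = 0.5 * x1 + 0.5 * x2                   # equally weighted loss
var99 = np.quantile(portfolio, 0.99)              # portfolio VaR at 99 %
es99 = portfolio[portfolio > var99].mean()        # mean loss beyond the VaR
print(var99, es99)
```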
3 Empirical Analysis
In this section we illustrate the application of the three steps discussed above. We
consider two weekly stock market indices: the U.S. SP500 and the UK FTSE100.
The sample period runs from April 17, 2000 to February 29, 2016, making a total
of 829 weekly observations. We start by fitting the log-return series of each index
to the autoregressive model shown in Eqs. (1) and (2). The resulting estimates are
reported in Table 1.
This table shows that the coefficients of the leverage effect (gamma) and the
Taylor effect (delta) are all statistically significant. The leverage effect shows that
bad news has more impact on these stock markets' volatilities than good news
does. The Taylor effect shows that the sample autocorrelation of absolute log returns
is larger than that of squared residuals. The rest of the coefficients reported in
Table 1 are all significant except the intercept terms, suggesting that previous
shocks and volatilities have a significant impact on the current level of returns.
To model the effect of the extreme market conditions on the market risk
measures, we fit the filtered returns to the left and right tail of the P&L distribution
using the GEV and the GPD distributions respectively (Table 2).
The estimated shape parameters for the left tail of the P&L distribution are found
to be positive, suggesting a moderately high probability of price drops during finan-
cial crises. Positive shape parameters indicate that the Fréchet type of
distributions, which includes well-known fat-tailed distributions such as the Pareto,
Cauchy and Student-t distributions, best fits the left tail of the P&L distribution. All
estimated EVT parameters are statistically significant.
Risk measures under extreme market conditions are calculated as quantiles of
the GPD. The GPD-based VaR and ES for both the right tail (which
represents the short position on these stock market indices) and the left tail (which
represents the long position on these stock market indices) are reported in Table 3.
The right tail risk measures are relatively lower than the left tail risk measures,
highlighting the impact of large losses during financial crises. For example, at the
99 % significance level the left tail risk measures are as follows: 3.067 % and 2.94 %
weekly maximum losses for long positions in the SP500 and FTSE100 respectively
when the VaR measure is used; or 4.37 % and 4.26 % weekly maximum losses for
long positions in the SP500 and FTSE100 respectively when the ES measure is used.
As one can easily observe, the ES risk measures are relatively larger than the VaR
measures because the ES is equal to the VaR plus the mean of all other losses greater
than the VaR.
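The stated relation between ES and VaR can be checked numerically on simulated losses (illustrative data, not the chapter's indices):

```python
# Numerical check of the text's relation: ES equals VaR plus the mean
# exceedance over the VaR. Losses are simulated heavy-tailed data.
import numpy as np

rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=100_000) * 0.02

var99 = np.quantile(losses, 0.99)     # empirical 99 % VaR
tail = losses[losses > var99]         # losses beyond the VaR
es99 = tail.mean()                    # empirical 99 % ES

# ES = VaR + E(Y - VaR | Y > VaR), so the two expressions agree.
print(np.isclose(es99, var99 + (tail - var99).mean()))  # True
```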
In addition to computing the GPD-based weekly maximum losses of the P&L
distribution using the VaR and ES, this study makes use of the GEV to compute
another risk measure, referred to in the literature (see for example Gilli and Kellezi
2006) as the "return level".
This risk measure is calculated using Eq. (5) above; the estimated return levels
are reported in Table 4 below.
Considering for example the left tails, we find that the "return levels" for the
SP500 and FTSE100 are 4.212 % and 4.038 % respectively. Given that we have
used maximum blocks of 22 weeks each, we would interpret these risk measures
as follows: a maximum loss of 4.212 % (4.038 %) is expected in one out of 22 weeks
at the 99 % significance level if one holds a long position in the SP500 (FTSE100)
respectively. Our results show that the "return level" is invariant to the significance
level, i.e. the return level is almost identical at different significance levels.
The copula risk measures are also computed and reported in Table 5. These risk
measures represent the portfolio (of two assets: SP500 and FTSE100) VaR and ES.
They are computed as the quantiles of the joint P&L distribution
with the dependence structure embedded in the copula. We consider one elliptical
copula (Student t copula) and one Archimedean copula (Gumbel copula) to
model the dependence structure between these two stock markets. We build the
(joint) bivariate distribution using Sklar’s Theorem expressed in Eq. (17) above.
Using a Monte Carlo simulation we are able to compute the VaR and ES measures
from the simulated data. Considering the left tail for example, we find that at the
99 % significance level the maximum loss expected on a weekly basis computed in
terms of ES is equal to 3.90 % when a t-copula with two GEV marginals is
used, and 1.44 % when a Gumbel copula with two GPD marginals is used.
The Calibration of Market Risk Measures During Period of Economic Downturn:. . . 101
The copula based ES measures are coherent as they fulfil the principle of
diversification. The principle of diversification states that the portfolio risk should
be less than the sum of the individual asset risks. The ES risk measures reported in
Table 5 are almost all lower than the individual ES risk measures reported in Table 3.
4 Conclusion
This study aimed at exploring the computation of VaR and ES measures during
periods of financial crisis from a dependence structure point of view. Two stock
market indices, namely the SP500 and FTSE100, have been used to implement a market
risk model able to predict the behavior of financial losses during periods of financial
crisis. Three steps have been implemented. The first involved filtering
the return series with an AR-GARCH process. The second step dealt with fitting
the filtered returns to two extreme value distributions: the GPD and the GEV
distributions.
We find that the estimated shape parameters were all positive and statistically
significant, suggesting a large probability of price drops during
financial crises. The GEV and GPD based risk measures were also computed in
order to estimate the individual risk that one can encounter when investing in these two
markets. For that reason, we investigated the impact of these risks in the left tail for
an investor with a long position, and in the right tail for an investor with a short
position in these indices.
A copula based approach was used to estimate the portfolio risk for an investor
with simultaneous positions in these two indices. Using Sklar’s theorem, we
were able to build a bivariate distribution with Student t and Gumbel copulas
and simulate 10,000 observations using Monte Carlo simulation. The copula
based risk measures were computed as the quantiles of the bivariate copula
distribution. We found that our GEV distribution based ES risk measures were
coherent since they fulfilled the principle of diversification.
References
Davison AC, Smith RL (1990) Models for exceedances over high thresholds. J R Stat Soc 52
(3):393–442
Ding Z, Granger CWJ, Engle RF (1993) A long memory property of stock market returns and a
new model. J Empir Finance 1:83–106
Embrechts P, Klüppelberg C, Mikosch T (1997) Modelling extremal events for insurance and
finance. Springer, Berlin
Embrechts P, McNeil A, Straumann D (2002) Correlation and dependence in risk management:
properties and pitfalls. Cambridge University Press, Cambridge
Frad H, Zouari E (2014) Estimation of value-at-risk measures in the Islamic stock market:
approach based on extreme value theory (EVT). J World Econ Res 3(2):15–20
Abstract Under Basel III the minimum capital requirement due to operational risk
is computed as the 99th quantile of the annual total loss distribution. This annual
loss distribution is a result of the convolution between the loss frequency and the
loss severity distributions. The estimation of parameters of these two distributions
i.e. frequency and severity distributions is not only essential but crucial to obtaining
reliable estimates of operational risk measures. In practical applications, Poisson
and lognormal distributions are used to fit these two distributions respectively. The
maximum likelihood method, the method of moments, as well as the probability-weighted
moments method used to obtain the parameters of these distributions can
sometimes produce nonsensical estimates due to estimation risk and sample bias. This
paper proposes a different calibration of the frequency and the severity distributions
based on Bayesian method with Gibbs sampler. Further to that, the paper models
the severity distribution by making use of the lognormal and the generalised Pareto
distribution simultaneously. Simulated results suggest that computed operational
value at risk estimates based on this new method are unbiased with minimum
variance.
1 Introduction
Operational risk is the risk emanating from loss due to inadequate or failed internal
processes, people, systems or external events. The tier 1 of the recent Basel III
refers to the computation of the minimum capital requirements due to market,
credit, and operational risk. The market risk is the risk due to fluctuations in the
trading books, while the credit risk is the risk that the borrowers will not be able to
meet their obligations. Since losses arise from errors and ineffective operations, the
approaches used in modelling operational risk are quite different from those used in
market and credit risk. Table 1 below highlights fundamental differences in market,
credit, and operational risk modelling. Examples of operational risk include internal
fraud: UBS London, 2011 (Kweku Adoboli: unauthorised trading): US $2
billion; Société Générale, 2008 (Jérôme Kerviel: fictitious trades to hide big bets
he took): Euro 4.9 billion. System failure: Bank of America: US $225 million;
external events: the September 11 terrorist attacks, etc. Based on these examples one
can define operational risk in terms of event types and business lines. Unfortunately,
data related to event types and business lines are not exhaustive. Furthermore,
implementing statistical models for operational risk remains a challenge for
both academics and practitioners.
This chapter reviews available statistical models for operational risk modelling,
discusses their pitfalls and develops a new operational risk model based on Bayesian
inferences. The following Table 2 summarises recent studies done in operational
risk modelling. This table shows that the three commonly used operational risk
methodologies are the basic indicator approach (BIA), the standardised approach
(STA), and the advanced measurement approach (AMA). The main focus of this
chapter is on the advanced measurement approach under the loss distribution
technique. Studies have shown that the AMA methodology is more reliable than
the first two. For example, Teply (2012) shows that when the AMA is used in lieu of
the BIA or the STA, banks are likely to save between 6 % and 8 % of their capital
requirement due to operational risk. Similar findings were also obtained by Lin
et al. (2013). They argue that it is more appropriate to adopt the AMA in operational
risk modelling since it can help banks to enjoy a much reduced capital requirement.
$$MCR = \frac{1}{3}\sum_{i=1}^{3} GOI_i \,\alpha \tag{1}$$
3 Standardised Approach
$$MCR = \frac{1}{3}\sum_{i=1}^{3}\left\{\sum_{j=1}^{8} (GOI)_{ij}\,\beta_{ij}\right\} \tag{2}$$
Table 3 A hypothetical example of the basic indicator and standardised approaches (in $000)
Business line Year-3 Year-2 Year-1 Beta Beta*Year-3 Beta*Year-2 Beta*Year-1
Corporate finance 2000 3000 40,000 18 % 360 540 7200
Trading and sales 2000 3000 40,000 18 % 360 540 7200
Retail banking 2000 3000 40,000 12 % 240 360 4800
Commercial banking 2000 3000 40,000 15 % 300 450 6000
Payment and settlement 2000 100,000 40,000 18 % 360 18,000 7200
Agency services 2000 100,000 40,000 15 % 300 15000 6000
Asset management 2000 30,000 40,000 12 % 240 3600 4800
Retail brokerage 2000 30,000 40,000 12 % 240 3600 4800
Total for the bank 16,000 128,000 320,000 2400 23,910 48,000
Computation of Operational Value at Risk Using the Severity Distribution. . . 107
the average of the positive total annual gross incomes of 16,000 and 320,000
multiplied by an alpha of 15 %, that is 168,000 multiplied by 15 %, which equals
25,200, keeping in mind that the negative total annual gross income is omitted.
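The basic indicator calculation above can be reproduced in a few lines. The sketch below assumes, as in the worked example, that the middle year's gross income is negative and is therefore omitted from the average:

```python
# Basic indicator approach: alpha times the average of the positive annual
# gross incomes over the last three years (negative years are omitted)
ALPHA = 0.15

def bia_capital(gross_incomes, alpha=ALPHA):
    positive = [gi for gi in gross_incomes if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# Hypothetical figures (in $000), treating the middle year as negative
mcr = bia_capital([16_000, -128_000, 320_000])
```

This yields 15 % of (16,000 + 320,000)/2 = 25,200, matching the worked example.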
The critique of these two methodologies relates to the fact that they do not
consider the individual risk profile corresponding to each business line (Hull 2012). In
addition, the minimum capital requirement obtained with these two methodologies
is not linked to the operational loss data, and the risk profile of each event type
within the same business line is not reflected in the calculation of the
minimum capital requirement.
To overcome these pitfalls the Basel Committee on Banking Supervision (BCBS)
identifies for each business line a number of event types that can impact the computation
of the minimum capital requirement. These event types are classified in terms of
their frequency and severity. Event types with low frequency and high
severity are considered dangerous and need particular attention during the modelling
process. Low frequency, high severity event types can put the future of a financial
institution at risk. These events cannot be actively managed on a day-to-day basis. In
contrast, high frequency, low severity event types have high expected loss but low
unexpected loss. The medium frequency, medium severity event types are often the
main focus of operational risk capital measurement provisions of the business since
they can be managed with suitable systems and processes.
Table 4 exhibits the business lines and event types as per the BCBS proposal. For
example, internal fraud has low frequency and high severity in corporate finance.
Examples of internal fraud include intentional misreporting of trading
positions, employee theft, and insider trading on an employee’s own account,
whereas external fraud in the corporate finance business line has low frequency
and medium severity. Examples of external fraud include computer
hacking, robbery and forgery.
Other examples of event types include: worker compensation claims and
sexual discrimination claims (employment practices and workplace safety);
fiduciary breaches, misuse of confidential customer information, improper trading
activities on the bank’s account and money laundering (clients, products, and
business practices); earthquakes, fires and floods (damage to physical assets);
hardware and software failures, telecommunication problems, and utility outages
(business disruption and system failures); and data entry errors, collateral management
failures, incomplete legal documentation, and unapproved access given to clients’
accounts (execution, delivery and process management). These seven distinct
event types and sources of operational losses need particular individual attention in
the modelling of operational risk.
Under the advanced measurement approach the minimum capital requirement due
to operational risk is calculated with the bank’s internal operational risk models
based on four elements:
Internal data: a negative economic flow for which it is possible to identify the impact
on the profit & loss account as a consequence of an operational event.
External data: operational loss events that occurred at other financial institutions and
banks.
Scenario analysis: a scenario is a fictitious operational event (possibly inspired by an
external event that has occurred). The goal is to evaluate the impact in case the scenario
occurs in the bank.
Risk indicators: risk indicators are quantitative metrics reflecting operational risk
exposure of specific processes or products for example a drop in sales or in asset
prices.
As mentioned earlier, a major challenge in operational risk modelling is that data
on the severity and frequency of historical losses are often not available. Internal
historical data on high frequency, low severity losses might be available within the bank.
Therefore, combining internal, external, and scenario-based loss data is crucial in
determining the minimum capital requirement due to operational risk.
The minimum capital requirement under the AMA method is the one obtained
with in-house operational risk models. These models include the Internal Measurement
Approach (IMA), the Loss Distribution Approach (LDA), and the Scorecard
Approach.
This study makes use of the loss distribution approach. Under this approach, the
operational loss data is fitted separately to two different distributions: the loss
frequency distribution and the loss severity distribution. The loss frequency distribution
is the probability distribution of the number of loss events over a fixed interval of time,
whereas the loss severity distribution is the probability distribution of the magnitude of
the loss once it occurs. These two distributions are then combined using the convolution
technique described below. The resulting distribution, known as the annual loss
distribution, is depicted in Fig. 1.
Figure 1 illustrates graphically how operational risk measures can be derived as
the quantile of the total annual loss distribution. It also shows the size of the expected
operational losses (EL), the unexpected operational losses, and the 99th quantile of
the annual loss distribution, which represents the operational risk measure.
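The convolution of the frequency and severity distributions is typically carried out by Monte Carlo simulation. A minimal sketch follows; the Poisson and lognormal parameter values are illustrative assumptions, not estimates from real loss data:

```python
import math
import random

random.seed(7)

def poisson_draw(lam):
    """Poisson sample via Knuth's method (adequate for moderate lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def annual_loss(lam, mu, sigma):
    """One draw from the annual loss distribution: a Poisson number of
    lognormal severities summed together (the convolution step)."""
    n = poisson_draw(lam)                # number of losses this year
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

# Illustrative parameters: 25 losses/year on average, lognormal severity
sims = sorted(annual_loss(25, 10.0, 1.0) for _ in range(10_000))
op_var99 = sims[int(0.99 * len(sims))]           # 99th quantile = OpVaR
expected_loss = sum(sims) / len(sims)            # EL in Fig. 1
```

The gap between `op_var99` and `expected_loss` corresponds to the unexpected loss shown in Fig. 1.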
The total annual operational loss S is the sum of the annual operational losses $s_j$
related to the jth business line. The annual operational loss $s_j$, related to business
line j, is affected by two sources of uncertainty: the number of losses $n_j$ in a one-year
time horizon, and the impact, i.e. severity, of each single loss $x_{ij}$. Therefore

$$s_j = \sum_{i=1}^{n_j} x_{ij} \tag{3}$$

$$S = \sum_{j=1}^{K} s_j, \qquad j = 1, 2, \ldots, K \tag{4}$$
Let $p(n_j)$ be the loss frequency distribution, and $f(x_{ij})$ the loss severity distribution
of the jth business line. The total annual loss distribution corresponding to the jth
business line is obtained by Monte Carlo simulation: in step 1, draw a number of
losses $n_j$ from $p(n_j)$; in step 2, draw $n_j$ loss severities $x_{ij}$ from $f(x_{ij})$; in step 3,
sum these severities to obtain one observation $S_j^1$ of the annual loss.
Repeat step 1, step 2, and step 3 K times to get $S_j^2, \ldots, S_j^K$ observations of the
total annual loss distribution $D(s_{ij})$ corresponding to the jth business line.
A number of theoretical discrete probability distributions are used to model the loss
frequency distribution. The most common ones include the Poisson distribution and
the Negative Binomial distribution. A random variable X follows a Poisson
probability distribution if its density is given by:

$$P\left(X = n_j; \lambda\right) = \frac{e^{-\lambda}\,\lambda^{n_j}}{n_j!}, \qquad n_j = 0, 1, 2, \ldots \tag{5}$$

For the Negative Binomial distribution, $n_j$ is the number of trials, k is the kth success,
and p the probability of success. The mean number of losses occurring in a year and its
variance are given by:

$$E\left(n_j\right) = \frac{k(1-p)}{p} \tag{7}$$

$$Var\left(n_j\right) = \frac{k(1-p)}{p^2} \tag{8}$$
Since this is the distribution of a positive random variable whose logarithm exists
and follows a Normal (Gaussian) distribution; m and σ are the mean and the
standard deviation not of the variable x but of the logarithm of this variable.
The second continuous probability distribution used in the modelling of the
severity distribution in operational risk management is the gamma distribution
whose density is expressed as:
$$f(x) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \qquad x > 0,\ \beta > 0. \tag{10}$$
The mean and variance of the Gamma distribution are given by $E(x) = \alpha\beta$ and
$Var(x) = \alpha\beta^2$ respectively. The exponential distribution is a special case of the
Gamma distribution. It is given by the following expression:

$$f(x) = \frac{1}{\theta}\,\exp\left(-\frac{x}{\theta}\right), \qquad x > 0,\ \theta > 0 \tag{11}$$
NB: If a bank has more than one business line, then the total annual loss distribution
corresponding to each business line needs to be calibrated. The overall loss distribution
for the entire bank can be obtained by making use of copula functions.
Traditionally the maximum likelihood method is often used to estimate the parameters
of both the loss frequency and severity distributions. The maximum likelihood
method consists in maximising the log-likelihood function of the sample data:

$$LogLik = \log\left[\prod_{i=1}^{n} f(x_i, \theta)\right] \tag{12}$$
$$\hat{\theta}: \quad \frac{\partial LogLik}{\partial\theta} = 0 \tag{13}$$

The method of moments relies on the moments of the distribution:

$$\omega_r = \int_{0}^{\infty} f(x)\, x^r\, dx \tag{14}$$

When $r = 1 \Rightarrow \omega_1 = E(x) = \mu$; when $r = 2 \Rightarrow \omega_2 = E(x - \mu)^2 = \sigma^2$.
The characteristics of the loss data used in operational risk management are quite
unique. This data comes from different sources: internal loss data, external loss
data, and scenario-based loss data. This mixture of loss data makes the method of
maximum likelihood and the probability weighted moment method inadequate due to
sample bias and the presence of outliers. For mixture distributions with the generalised
Pareto distribution, Bermudez and Turkman (2003) show that the maximum likelihood
and the probability weighted moment methods fail to obtain the shape parameter ξ
when it is greater than 0.5 (for the probability weighted moment method) or greater
than 1 (for maximum likelihood). To overcome this drawback, the present study
recommends the use of the Bayesian method.
9 Bayesian Method
The number of losses occurring during a year is assumed to be Poisson distributed,
that is

$$n_1, n_2, \ldots, n_j \sim P\left(n_j \mid \theta_k\right),$$

with the Gamma density

$$\Gamma(\alpha, \beta) = \frac{1}{\beta^{\alpha}\,\Gamma(\alpha)}\, x^{\alpha-1} e^{-x/\beta}, \qquad \alpha > 0,\ \beta > 0; \qquad \Gamma(\alpha) = (\alpha-1)! \tag{17}$$
Based on external loss data and expert opinions, i.e. $\phi_k$, we assume that the
likelihood function is a Gamma distribution. Therefore the posterior distribution
over the J business lines is given by:

$$p\left(\theta_k \mid n_j, \phi_k\right) \propto \prod_{k=1}^{J} P\left(n_j \mid \theta_k\right)\,\Gamma\left(\theta_k \mid N_j, \phi_k\right) \tag{18}$$

Equation (18) is a Generalised Inverse Gamma distribution where $\theta_k = (\alpha, \beta)$. The
predictive distribution of future losses h steps ahead can be expressed in terms of the
posterior parameters, where $\theta_k = (\alpha_k, \beta_k)$.
Let $y = \ln(x) \sim N(\mu, \sigma^2)$ be the loss data constituting the body part of the severity
distribution. We consider a diffuse prior for both the mean and the variance
of the log-normal distribution:

$$p\left(\mu, \sigma^2\right) \propto \frac{1}{\sigma^2} \tag{21}$$

The likelihood of the body data is

$$l\left(\mu, \sigma^2\right) = \prod_{i=1}^{n} f\left(y_i; \mu, \sigma^2\right) \tag{22}$$
Therefore the joint posterior of the body part of the severity distribution is
expressed as:

$$p\left(\mu, \sigma^2 \mid lossData\right) \propto l\left(\mu, \sigma^2\right)\,\frac{1}{\sigma^2} \tag{23}$$
To account for the truncation at the threshold, we adjust the severity distribution
using this formula:

$$\frac{f\left(y_i; \theta_k\right)}{P\left(Min \le y_i \le Thresh\right)} = \frac{f\left(y_i; \theta_k\right)}{F(Thresh) - F(Min)} \tag{27}$$
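Eq. (27) amounts to renormalising the body density over the truncation interval. A sketch with a lognormal body follows; the parameter values and interval are illustrative:

```python
import math

def lognorm_pdf(x, mu, sigma):
    """Density of the lognormal distribution (mu, sigma on the log scale)."""
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2.0 * math.pi))

def lognorm_cdf(x, mu, sigma):
    """CDF of the lognormal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def truncated_pdf(x, mu, sigma, lo, hi):
    """Body-part severity density renormalised to [lo, hi], as in Eq. (27)."""
    if not lo <= x <= hi:
        return 0.0
    return lognorm_pdf(x, mu, sigma) / (lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma))

density_at_800 = truncated_pdf(800.0, 10.0, 1.0, 100.0, 1500.0)
```

The renormalised density integrates to one over the truncation interval, which is exactly what the adjustment in Eq. (27) guarantees.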
Bermudez and Turkman (2003) show that the maximum likelihood estimators of ξ
and β exist only if ξ ≤ 1. If ξ > 1 the log-likelihood tends to infinity as β/ξ
approaches $x_n$. Hosking and Wallis (1987) show that the PWM estimators of ξ and β
exist only when ξ < 0.5. Similarly, Castillo and Hadi (1997) have shown that for small
sample sizes the maximum likelihood and the probability weighted moment methods
produce biased estimates.
Given that the shape ξ and scale β parameters cannot be estimated when ξ > 1 (see
Bermudez and Turkman 2003; Smith 1984), our Bayesian setting for the estimation
of these two parameters is as follows. Firstly, we restrict the shape parameter
to be positive, i.e. ξ > 0; thus the generalised Pareto distribution becomes:

$$G_{\xi,\beta}(x) = p\left(x \mid \xi, \beta\right) = 1 - \left(1 - \frac{\xi x}{\beta}\right)^{1/\xi}, \qquad 0 < x < \frac{\beta}{\xi} \tag{30}$$
The joint posterior distribution of ξ and β can be found by making use of Bayes’
theorem:

$$p\left(\xi, \beta \mid x\right) = \frac{p\left(x \mid \xi, \beta\right)\, p(\xi)\, p(\beta)}{\displaystyle\iint p\left(x \mid \xi, \beta\right)\, p(\xi)\, p(\beta)\, d\xi\, d\beta} \tag{31}$$
Therefore the predictive distribution h steps ahead for the severity loss data is
given by:

$$\tilde{y} = \left(y_{K+1}, y_{K+2}, \ldots, y_{K+h}\right) \tag{33}$$

$$p\left(\tilde{y} \mid y\right) = \iint p\left(\tilde{y} \mid \xi, \beta, x\right)\, p\left(\xi, \beta \mid x\right)\, d\xi\, d\beta \tag{34}$$
The predictive distributions (Poisson, Lognormal and GPD) are obtained from their
posterior distributions. Let $\theta_k = (\alpha, \beta)$ be the parameters of the posterior of the
Poisson distribution in Eq. (18); or $\theta_k = (\mu, \sigma^2)$ the parameters of the posterior of the
Log-normal distribution in Eq. (23); or $\theta_k = (\xi, \beta)$ the parameters of the posterior
of the GPD in Eq. (31). Given the posterior distribution $p(\theta_k \mid LossData)$, we
simulate from this distribution using the Gibbs sampler as follows:
• Step 1: Draw $\theta_1^{(t)}$ from $p\left(\theta_1 \mid \theta_2^{(t-1)}, LossData\right)$
• Step 2: Draw $\theta_2^{(t)}$ from $p\left(\theta_2 \mid \theta_1^{(t)}, LossData\right)$
• Repeat steps 1 and 2 until convergence
In our simulations of the posterior distribution we fix the starting values
$(\alpha_0, \beta_0)$ or $(\mu_0, \sigma_0^2)$ or $(\xi_0, \beta_0)$ and follow the abovementioned steps:
• Step 1: Draw $\alpha^{(1)} \sim p(\alpha \mid \beta_0, LossData)$, or $\mu^{(1)} \sim p(\mu \mid \sigma_0^2, LossData)$, or $\xi^{(1)} \sim p(\xi \mid \beta_0, LossData)$
• Step 2: Draw $\beta^{(1)} \sim p(\beta \mid \alpha^{(1)}, LossData)$, or $(\sigma^2)^{(1)} \sim p(\sigma^2 \mid \mu^{(1)}, LossData)$, or $\beta^{(1)} \sim p(\beta \mid \xi^{(1)}, LossData)$
The sequences $(\alpha^{(K)}, \beta^{(K)})$, or $(\mu^{(K)}, (\sigma^2)^{(K)})$, or $(\xi^{(K)}, \beta^{(K)})$ converge in
probability to $(\alpha, \beta)$, or $(\mu, \sigma^2)$, or $(\xi, \beta)$ as $K \to \infty$.
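The Gibbs steps above can be sketched for the lognormal body, whose conditional posteriors under the diffuse prior of Eq. (21) are Normal (for μ) and Inverse-Gamma (for σ²). The synthetic data and starting values below are illustrative assumptions:

```python
import math
import random

random.seed(3)

# Synthetic log-losses: the "body" data y = ln(x) ~ N(mu, sigma^2)
TRUE_MU, TRUE_SIGMA = 10.0, 1.5
y = [random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(1_000)]
n = len(y)
ybar = sum(y) / n

def gibbs(y, iters=1_500, burn=300):
    """Gibbs sampler for (mu, sigma^2) under the diffuse prior p(mu, sigma^2) ~ 1/sigma^2."""
    mu, sigma2 = 0.0, 1.0                   # arbitrary starting values
    draws = []
    for t in range(iters):
        # Step 1: mu | sigma^2, data ~ Normal(ybar, sigma^2 / n)
        mu = random.gauss(ybar, math.sqrt(sigma2 / n))
        # Step 2: sigma^2 | mu, data ~ Inverse-Gamma(n/2, sum((y - mu)^2)/2)
        rate = sum((yi - mu) ** 2 for yi in y) / 2.0
        sigma2 = 1.0 / random.gammavariate(n / 2.0, 1.0 / rate)
        if t >= burn:
            draws.append((mu, sigma2))
    return draws

draws = gibbs(y)
post_mu = sum(d[0] for d in draws) / len(draws)
post_sigma = math.sqrt(sum(d[1] for d in draws) / len(draws))
```

After burn-in, the posterior means recover the data-generating parameters; the Inverse-Gamma draw is obtained by inverting a Gamma draw, a standard identity.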
Once the parameters of both the loss frequency and the mixture loss severity
distributions are estimated using the Bayesian technique described above, the total
annual loss distribution is obtained by making use of the convolution method.
Let $f(S_1), \ldots, f(S_J)$ be the individual total annual loss distributions corresponding
to business lines 1, 2, …, J. The overall annual loss distribution for the entire
bank $f(S_1, S_2, \ldots, S_J)$ is obtained by making use of a copula function.
A function $C: [0, 1]^n \to [0, 1]$ is an n-dimensional copula if it satisfies these
properties:
• $\forall u \in [0, 1],\ C(1, 1, \ldots, 1, u, 1, \ldots, 1) = u$
• $\forall u_i \in [0, 1],\ C(u_1, \ldots, u_n) = 0$ if at least one of the $u_i$ is equal to zero
We make use of the probability integral transform to transform the empirical loss
distribution of each business line into a uniform distribution:

$$Y = \int_{-\infty}^{S_j} f\left(S_j\right)\, dS_j = F\left(S_j\right) \in [0, 1] \tag{35}$$
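In practice the transform in Eq. (35) is applied through the empirical CDF. A minimal rank-based sketch, with hypothetical simulated annual losses standing in for real data:

```python
import random

random.seed(5)

def empirical_pit(sample):
    """Probability integral transform via the empirical CDF: ranks scaled to (0, 1).
    Ties are ignored, which is harmless for continuous loss data."""
    order = sorted(sample)
    n = len(sample)
    rank = {v: i + 1 for i, v in enumerate(order)}
    # rank / (n + 1) keeps the transformed values strictly inside (0, 1)
    return [rank[v] / (n + 1) for v in sample]

annual_losses = [random.lognormvariate(10.0, 1.0) for _ in range(5_000)]
u = empirical_pit(annual_losses)
```

The transformed sample `u` is approximately uniform on (0, 1) and can be fed directly into a copula fit.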
We simulate 5000 loss observations for two business lines using the software R available at
www.r-project.org. We use the Poisson distribution with parameter λ = 750
for the first business line, and 751 for the second business line, in order to model the
loss frequency distribution. We also use the lognormal distribution with mean equal
to 150 and sigma = 200 for the first business line, and 151 with sigma = 201 for the
second business line respectively, in order to model the body part of the loss severity
distribution. The tail part of the loss severity is modelled using the generalised
Pareto with shape parameter equal to 0.5 and scale of 1550 for the first business
line, and 0.6 and 1551 respectively for the second business line, based on a threshold
of 1500.
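The mixture severity described above can be simulated by inverse transform. The sketch below echoes the threshold (1500) and the first business line's GPD parameters; the lognormal body parameters (on the log scale) and the tail probability are hypothetical assumptions:

```python
import random

random.seed(11)

def gpd_draw(xi, beta):
    """Inverse-transform draw from the generalised Pareto distribution (xi > 0)."""
    u = random.random()
    return beta / xi * ((1.0 - u) ** (-xi) - 1.0)

def severity_draw(mu, sigma, threshold, xi, beta, tail_prob):
    """Mixture severity: lognormal body below the threshold, GPD excess above it."""
    if random.random() < tail_prob:
        return threshold + gpd_draw(xi, beta)     # tail loss above the threshold
    while True:                                   # body loss, rejected if above threshold
        x = random.lognormvariate(mu, sigma)
        if x <= threshold:
            return x

# Hypothetical body (mu=5, sigma=1 on the log scale) and 5% tail probability
losses = [severity_draw(5.0, 1.0, 1500.0, 0.5, 1550.0, 0.05) for _ in range(20_000)]
```

By construction, about 5 % of the simulated losses exceed the 1500 threshold and follow the GPD excess distribution there.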
Table 5 reports the estimated parameters of the loss frequency (i.e. Poisson)
distribution and of the mixed loss severity (lognormal and generalised Pareto)
distributions using three estimation methods, namely the maximum likelihood, the
probability weighted moment, and the Bayesian methods. In addition, Table 5 also
reports the value at risk, the minimum capital requirement, and its bootstrapped
confidence interval for the maximum likelihood and the probability weighted
moment methods.
We find that the estimation method has a huge impact on the operational value at
risk and the minimum capital requirement. Frequentist methods (i.e. the maximum
likelihood and the probability weighted moment methods) result in contradictory
findings: the shape parameter of the generalised Pareto distribution, for instance, is
12 Conclusion
This chapter aimed at developing a new operational risk methodology that can
generate a reliable minimum capital requirement and promote financial stability in the
banking sector during periods of economic turmoil. To achieve this aim, the chapter
started by introducing the basic methodologies in operational risk management.
The basic indicator and standardised approaches were discussed. Through an example
of the calculation of the minimum capital requirement, we were able to show the
pitfalls of these two traditional operational risk methodologies. We showed that the
minimum capital requirement was not linked to the operational loss data, and that
the risk profile of each business line was not taken into consideration.
We thereafter introduced the advanced measurement approach under the loss
distribution methodology. Under this approach banks are required to develop their
own in-house operational risk models under the guidance of the central bank
authorities. This method relies heavily on the accurate estimation of the parameters
of the loss frequency, and loss severity distributions.
Given the small sample size of available external and internal operational loss
data, we have shown that the maximum likelihood method as well as the probability
weighted moment method produce biased estimates that can misrepresent the
minimum capital requirement due to operational risks.
To overcome this estimation problem, the present chapter proposes the use of the
Bayesian technique to estimate the parameters of the loss frequency, and loss
severity distributions. Through a simulated example, we have been able to highlight
the significant differences in the minimum capital requirement calculations. Our
Bayesian technique was able, thanks to Monte Carlo simulations, to generate
unbiased and significant parameters, and a higher minimum capital requirement
due to operational risk. A higher minimum capital requirement can promote stability
and confidence in the financial sector. It can also make it less likely that banks
would fall back on the local government to mitigate their losses, hence discouraging
them from taking excessive risks. Finally, a higher minimum capital requirement
leads to more common equity, which can help banks to absorb severe losses.
References
Castillo E, Hadi AS (1997) Fitting the generalized Pareto distribution to data. J Am Stat Assoc
92(440):1609–1620
De Zea Bermudez P, Turkman MA (2003) Bayesian approach to parameter estimation of the
generalized Pareto distribution. Test 12(1):259–277
Hosking JRM, Wallis JR (1987) Parameter and quantile estimation for the generalized Pareto
distribution. Technometrics 29:339–349
Hull J (2012) Risk management and financial institutions, + Web Site, vol 733. Wiley, New York
Lin TT, Lee C-C, Kuan Y-C (2013) The optimal operational risk capital requirement by applying
the advanced measurement approach. Centr Eur J Oper Res 21(1):85–101
Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges, vol 8. Publications de
l’Institut de Statistique de l’Université de Paris, pp 229–231
Smith RL (1984) Threshold methods for sample extremes. In: Tiago de Oliveira J (ed) Statistical
extremes and applications. Springer, Dordrecht, pp 621–638
Teply P (2012) The application of extreme value theory in operational risk management.
Ekonomicky Casopis 60(7):698–716
Cenk C. Karahan
1 Introduction
Alternative investments such as hedge funds, private equity and real estate have
come to hold a considerable weight in the portfolios of most investors, both individual
and institutional, over recent decades. Market participants agree that the
common feature of such investments is illiquidity, even though they may not agree
on the exact meaning of the term, which will be addressed later. Therefore, how to
balance an investment portfolio between liquid and illiquid assets becomes an ever
more important issue.
Consider a wealthy investor who divides his investable assets between a liquid
portfolio such as a mutual fund and an illiquid alternative investment. The investor’s
main objective is to maximize the expected utility derived from total wealth at
a future terminal date, at which time he liquidates his entire holdings. We are not
interested in what he does with the money after the terminal date. The terminal date
could be considered as the planned retirement time and the investor would be trying
to maximize his retirement fund. The investor studied here has a standard risk-
averse utility function.
The general problem does not put far-reaching restrictions on the market dynamics.
The value of each investment is a Markovian process evolving stochastically
over time, independently of the other. The liquid investment has an
expected return that is lower relative to the illiquid investment’s. We may consider
it to be perfectly hedgeable, thus returning the risk-free rate. The illiquid investment is
not hedgeable and has a higher expected return commensurate with the risks and
restrictions associated with it.
The investor needs liquidity if his liquid investments fall in value below a critical
level. In this event, the investor would instantly liquidate his illiquid holding if it
were possible. However, instant liquidation is not an option when part of the wealth
is tied to illiquid investments. If the investor finds himself needing liquidity due to a
shortfall in his liquid investments, he suffers a liquidity cost as a function of the deficit
he faces until he can gain access to his illiquid holdings. This cost may well be
considered as a heavy borrowing cost on the amount the investor has to borrow to
fund the deficit.
Most financial literature defines illiquidity as a cost incurred while trading a
security. The cost is paid as a transaction fee or bid/ask spread, thus reducing the
return on the security. Such a definition of illiquidity assumes one can trade the
said security whenever he wants as long as he is willing to pay the hefty fee associated
with the transaction. The asset pricing implications of illiquidity as a transaction cost
have been studied extensively starting with Amihud and Mendelson (1986),
Constantinides (1986) and more recently by Stambaugh (2003). Asset allocation
problems similar in spirit to our study have been studied under transaction costs by
the likes of Davis and Norman (1990) and Shreve and Soner (1994).
This study takes a different and more easily observable definition of illiquidity
into consideration, one similar in nature to Lippman and McCall’s (1976 and 1986)
operational measure of liquidity. They present a precise definition of liquidity in
terms of its most important characteristic—the time until an asset is exchanged for
money. Thus any factor extending that time or putting restrictions on it renders the
asset in question relatively illiquid. Asset pricing and portfolio implications of
illiquidity in this form have been studied in theoretical papers by Longstaff
(2001), Schwartz and Tebaldi (2006), Cetin and Zapatero (2015) and empirically
by Aragon (2007) and others.
The illiquidity defined as such can be in many forms and be attributed to many
factors. The version studied in this paper is imposed by the asset managers of an
investment pool such as in the form of redemption restrictions in a hedge fund.
Liquidity Risk and Optimal Redemption Policies for Illiquid Investments 125
stochastic control problem with the state variables being the amount held in each
investment fund and the remaining time until the end of the investment horizon, and the
decision variable being the amount that should be transferred between the two funds.
The problem in this form resembles the cash management problem, an extensively
studied subfield of inventory management.
Earliest studies on stochastic cash management problems are due to Girgis
(1968), Eppen and Fama (1968, 1969, 1971), Neave (1970) and Heyman (1973).
The problem considered in these preceding studies is as follows: A firm keeps its
cash balance either in the form of cash on hand or as a bank deposit. The goal of the
firm is then to choose a policy on cash transfer to minimize its cost over some
period. Bensoussan et al. (2009) is a recent application of a similar idea in a
financial market setting in continuous time, where an investor balances his portfolio
between a bank account and a risky stock. Yan (2006) and Nascimento and Powell
(2010) study the cash management problem in a context similar to ours: mutual
fund cash holdings. Since the amount that can be transferred is limited by the
investor’s wealth, the problem considered here reflects an inventory problem with
capacity restrictions. This trading limitation has been addressed as a finite produc-
tion capacity within the traditional inventory management framework by Whisler
(1967), Federgruen and Zipkin (1986a, b) and more recently by Özer and Wei
(2004). Penttinen (1991) studies myopic and stationary solutions for stochastic cash
balance problems.
Our study differs from inventory management in general, and from cash management
problems in particular, in its objective function, its cost function, the fact that
both investments move stochastically, and the limitations put on the amount that
can be moved between accounts. Therefore, the problem studied here is a more
realistic modeling of an actual financial problem. The eventual goal of this study
is to show that the optimal policy in the hedge fund redemption problem
displays a structure similar to the ones found in traditional inventory
problems.
The amounts invested in the liquid stock and the illiquid hedge fund follow the processes

dS_t = S_t (r dt + σ_S dW_t^S)
dH_t = H_t (μ dt + σ_H dW_t^H)        (1)
¹ The evidence of positive alpha in hedge fund returns has been well documented in various empirical studies; see Ackermann et al. (1999), Agarwal and Naik (2004), Aragon (2007) and references therein for examples.
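The two-fund dynamics in Eq. (1) can be simulated directly with exact lognormal steps. The sketch below is purely illustrative; the parameter values are assumptions for demonstration, not taken from the paper.

```python
import math
import random

def simulate_funds(s0, h0, r, mu, sigma_s, sigma_h, dt, n_steps, seed=0):
    """Simulate the liquid stock S and the illiquid hedge fund H of
    Eq. (1) as geometric Brownian motions, using exact lognormal
    increments so no discretization bias is introduced."""
    rng = random.Random(seed)
    s, h = s0, h0
    path = [(s, h)]
    for _ in range(n_steps):
        z_s, z_h = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        s *= math.exp((r - 0.5 * sigma_s ** 2) * dt + sigma_s * math.sqrt(dt) * z_s)
        h *= math.exp((mu - 0.5 * sigma_h ** 2) * dt + sigma_h * math.sqrt(dt) * z_h)
        path.append((s, h))
    return path

# one year of monthly steps with illustrative parameters
path = simulate_funds(100.0, 100.0, r=0.03, mu=0.08,
                      sigma_s=0.20, sigma_h=0.15, dt=1 / 12, n_steps=12)
```

The two Brownian motions are drawn independently here; the paper's setup does not preclude correlation, which would only require correlating `z_s` and `z_h`.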
128 C.C. Karahan
Note that we put a lower limit of zero on the investor's liquid holdings as a solvency requirement. This essentially assumes the investor does not pay a liquidity cost beyond what he has in liquid holdings at his disposal at any settlement date. The assumption is not unrealistic if we interpret the liquidity cost as interest paid on borrowed funds: the investor would not owe more than he already has, as he can only use his liquid holdings as collateral when borrowing. A solvency issue does not arise in the hedge fund, because fees are paid only above a certain threshold, as a fraction of the excess return.
The amount to be transferred between the two investment opportunities, a_i, is bounded by

−S_i′ ≤ a_i ≤ g H_i′        (3)
where the upper limit, with gate fraction g, is due to the gate provision of the hedge fund. Let a positive action a_i denote a redemption, i.e., a transfer of funds from the hedge fund to the liquid stock, while a negative a_i means transferring liquid holdings back to the hedge fund. Then the amounts held in each investment after rebalancing would be
S_i = S_i′ + (1 − c) a_i     if a_i ≥ 0
S_i = S_i′ + a_i             if a_i < 0        (4)

H_i = H_i′ − a_i             if a_i ≥ 0
H_i = H_i′ − (1 − s) a_i     if a_i < 0        (5)
Note that the advance notice period is ignored in this setup. The assumption that the funds are instantly transferred at the time of redemption simplifies the calculations and notation without taking away from the general insight of the solution. The arguments on the general structure of the optimal solution can easily be extended to a setup with a time lag between the notification and the actual redemption, not unlike the lag in delivery studied in conventional inventory models. In order to keep the model general, we do not impose restrictions on the liquidity cost L_i and performance fee P_i functions for the time being, aside from their being bounded and adapted to the respective filtrations.
The terminal condition and the recursion are

V_k(S_k′, H_k′) = U(S_k′ + (1 − c) H_k′)        (6)

V_i(S_i′, H_i′) = sup_{S_i} { J_i(S_i′, H_i′; S_i, H_i) : (S_i, H_i) ∈ S_i^a × H_i^a }        (7)
Note that we simplify the notation from the previous conditions and optimize over the liquid investment holdings S_i, as it defines the hedge fund holdings through the rebalancing equations. The actions implied by the change in the liquid holdings become
a_i = (S_i − S_i′)/(1 − c)    if S_i ≥ S_i′ (or a_i ≥ 0)
a_i = S_i − S_i′              if S_i < S_i′ (or a_i < 0)        (8)
The hedge fund holdings are defined through Eqs. (5) and (8) as

H_i = H_i′ − (S_i − S_i′)/(1 − c)    if S_i ≥ S_i′
H_i = H_i′ − (1 − s)(S_i − S_i′)     if S_i < S_i′        (9)
The sequential decision problem is further subject to (1), the processes governing
the market dynamics and (2), the fee payments on each investment opportunity.
This general problem, defined by Eqs. (5)–(9), yields a recursive formula for the value function V, yet it does not reveal any valuable information about the optimal policy. We next analyze the structural properties of the value function and, in turn, of the optimal policy.
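The recursion in Eqs. (5)–(9) can be sketched as a backward induction on a coarse grid. The code below is an illustrative discretization only, not the paper's solution method: the continuation value is looked up by snapping to the nearest grid point, and the hypothetical `step` function stands in for the conditional expectation under the dynamics (1).

```python
import math

def backward_induction(s_grid, h_grid, k, c, s_fee, g, utility, step):
    """Toy backward induction for Eqs. (5)-(9). `step(s, h)` stands in
    for the conditional expectation of next-period holdings (the
    expectation over W^S, W^H is collapsed to a single point here)."""
    # terminal condition, Eq. (6): liquidate everything, paying fee c
    V = {(i, j): utility(s + (1 - c) * h)
         for i, s in enumerate(s_grid) for j, h in enumerate(h_grid)}
    for _ in range(k):
        V_next, V = V, {}
        for i, s in enumerate(s_grid):
            for j, h in enumerate(h_grid):
                best = -math.inf
                # enumerate feasible actions a in [-s, g*h], Eq. (3)
                for x in range(11):
                    a = -s + 0.1 * x * (s + g * h)
                    sn = s + (1 - c) * a if a >= 0 else s + a        # Eq. (4)
                    hn = h - a if a >= 0 else h - (1 - s_fee) * a    # Eq. (5)
                    es, eh = step(sn, hn)
                    # snap the continuation state to the nearest grid point
                    i2 = min(range(len(s_grid)), key=lambda m: abs(s_grid[m] - es))
                    j2 = min(range(len(h_grid)), key=lambda m: abs(h_grid[m] - eh))
                    best = max(best, V_next[(i2, j2)])
                V[(i, j)] = best
    return V

# risk-neutral utility, mean-path growth, two rebalancing dates
V = backward_induction([0.0, 50.0, 100.0], [0.0, 50.0, 100.0], k=2,
                       c=0.02, s_fee=0.01, g=0.25,
                       utility=lambda w: w, step=lambda s, h: (s, h))
```

A serious implementation would replace the nearest-neighbour lookup with interpolation and `step` with a quadrature or Monte Carlo estimate of the expectation.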
The structure of the optimal policy is closely related to the concavity of the auxiliary function and the value function. In order to prove the concavity of V_i(·) and J_i(·,·), we make the following unrestrictive assumptions on the utility function of the investor and on the fees associated with maintaining investments in the two accounts described above.
Assumption 1 The utility function U(x) is a concave, increasing and continuously differentiable function of liquid wealth x > 0. Risk-averse utility functions, such as constant relative risk aversion, which have become the de facto standard in the literature, as well as risk-neutral utility functions, carry this property.
Assumption 2 The liquidity cost L_{i+1}, incurred between times t_i and t_{i+1} and paid at time t_{i+1}, is a convex, decreasing and continuously differentiable function of the liquid stock holdings S_i at time t_i. This is a very general and non-restrictive assumption on the borrowing costs incurred in case of a liquidity need.
Assumption 3 The performance fee P_{i+1}, incurred between times t_i and t_{i+1} and paid at time t_{i+1}, is a convex, increasing and continuously differentiable function of the hedge fund holdings H_i, and thus a convex, decreasing function of the liquid stock holdings S_i at time t_i, due to Eq. (9).
Assumption 4 The liquidity costs L_{i+1} and performance fees P_{i+1} incurred between times t_i and t_{i+1} and paid at time t_{i+1} are finite, and their respective expectations are well-defined. This assumption holds given the market dynamics assumed to govern the movements of the investments.
What follows is a set of fairly simple technical lemmas that would help establish
the concavity.
Lemma 1 If f(x) is a concave, non-decreasing function and g(x) is a concave function on a convex set C, then f(g(x)) is also concave.

Proof Pick any x₁, x₂ ∈ C and λ ∈ (0, 1). Since g is concave,

g(λx₁ + (1 − λ)x₂) ≥ λ g(x₁) + (1 − λ) g(x₂)

Since f is non-decreasing and concave,

f(g(λx₁ + (1 − λ)x₂)) ≥ f(λ g(x₁) + (1 − λ) g(x₂)) ≥ λ f(g(x₁)) + (1 − λ) f(g(x₂))

where the second inequality is due to the concavity of f(x). Hence f(g(x)) is also concave. ∎
Lemma 2 If f_t(x) is a concave function on a convex set C and p(t) ≥ 0 for all t ∈ T, then

g(x) = ∫_T p(t) f_t(x) dt

is also concave.

Proof For each t ∈ T, p(t) f_t(λx₁ + (1 − λ)x₂) ≥ λ p(t) f_t(x₁) + (1 − λ) p(t) f_t(x₂). Since the integral is the sum of these positive and finite amounts, the inequality becomes

g(λx₁ + (1 − λ)x₂) ≥ λ g(x₁) + (1 − λ) g(x₂) ∎
Lemma 4 Let X be a nonempty set with A_x a nonempty set for each x ∈ X. Let C = {(x, y) : x ∈ X, y ∈ A_x} be a convex set and J a real-valued, concave function on C. Then f(x) = sup_{y ∈ A_x} J(x, y) is concave on X.

Proof Pick x₁, x₂ ∈ X and λ ∈ (0, 1), and for any γ > 0 choose y₁ ∈ A_{x₁}, y₂ ∈ A_{x₂} such that J(x_j, y_j) ≥ f(x_j) − γ. Then

f(λx₁ + (1 − λ)x₂) ≥ J(λx₁ + (1 − λ)x₂, λy₁ + (1 − λ)y₂) ≥ λ J(x₁, y₁) + (1 − λ) J(x₂, y₂) ≥ λ f(x₁) + (1 − λ) f(x₂) − γ

with the second inequality due to the concavity of J(x, y) on C. Taking the limit γ → 0 yields the concavity of f. ∎
Theorem 1 V_i(S_i′, H_i′) and J_i(S_i′, H_i′; S_i, H_i) are continuously differentiable and increasing in each variable when all other variables are held constant.

Proof Monotonicity of the functions in each individual variable is trivial, as the investor cannot be worse off with more money in either account.

The proof of differentiability is via induction. At the penultimate redemption date,

J_{k−1}(S_{k−1}′, H_{k−1}′; S_{k−1}, H_{k−1}) = E_{k−1}[ V_k(S_k′, H_k′) | S_{k−1}′, H_{k−1}′, S_{k−1}, H_{k−1} ]
= E_{k−1}[ U(S_k′ + (1 − c) H_k′) | S_{k−1}′, H_{k−1}′, S_{k−1}, H_{k−1} ]
where, between redemption dates,

S_i′ = S_{i−1} e^{(r − σ_S²/2)Δt + σ_S √Δt W^S} − L_i
H_i′ = H_{i−1} e^{(μ − σ_H²/2)Δt + σ_H √Δt W^H} − P_i        (10)

with Δt = t_i − t_{i−1}.
Assumptions 2 and 3, along with Eqs. (9) and (10), reveal that the terminal wealth is a differentiable and increasing function of S_{k−1} and H_{k−1}. Note that U(S_k′ + (1 − c) H_k′) is differentiable and increasing in liquid wealth, and thus in each of S_k′ and H_k′, due to Assumption 1. Assumption 4 and Eq. (10) then show that J_{k−1}(S_{k−1}′, H_{k−1}′; S_{k−1}, H_{k−1}) is also increasing and differentiable, since the well-defined expectation preserves these properties.
S_{k−1} e^{(r − σ_S²/2)Δt + σ_S √Δt W^S} is a linear, thus concave, function of S_{k−1}. Due to Assumption 2, L_k is a convex function of S_{k−1}, so −L_k is a concave function. (1 − c) H_{k−1} e^{(μ − σ_H²/2)Δt + σ_H √Δt W^H} is a linear and concave function of H_{k−1}, and −P_k is also a concave function of H_{k−1} by Assumption 3. Also note that H_{k−1} is a linear function of S_{k−1}′, S_{k−1} and H_{k−1}′, as shown in Eq. (9). This linear relationship makes (1 − c) H_{k−1} e^{(μ − σ_H²/2)Δt + σ_H √Δt W^H} and −P_k concave in S_{k−1}′, S_{k−1} and H_{k−1}′.

The expression inside the utility function,

S_{k−1} e^{(r − σ_S²/2)Δt + σ_S √Δt W^S} − L_k + (1 − c) H_{k−1} e^{(μ − σ_H²/2)Δt + σ_H √Δt W^H} − P_k

is the sum of functions that are concave in the variables S_{k−1}′, H_{k−1}′, S_{k−1}, H_{k−1}; therefore, it is a concave function of those four variables. The utility function is then a concave function of a concave expression. Due to Lemma 1, U(S_k′ + (1 − c) H_k′) is concave in S_{k−1}′, H_{k−1}′, S_{k−1}, H_{k−1}. Since expectation preserves concavity by Lemma 3, J_{k−1} is concave in its parameters as well.
Note that the set to which the four variables belong, (S_i′, H_i′, S_i, H_i) ∈ [0, ∞) × [0, ∞) × S_i^a × H_i^a, is a convex set, since it is a combination of affine sets. Consider

V_i(S_i′, H_i′) = sup_{S_i} { J_i(S_i′, H_i′; S_i, H_i) : (S_i, H_i) ∈ S_i^a × H_i^a }

As a direct result of Lemma 4, V_{k−1}(S_{k−1}′, H_{k−1}′) is also concave in (S_{k−1}′, H_{k−1}′).
We have now established the concavity of J_{k−1} and V_{k−1} at the penultimate trading opportunity. Extending the result to all times is done via inductive use of the above lemmas. Assume the proposition holds at time t_{i+1}, i.e., V_{i+1}(S_{i+1}′, H_{i+1}′) is concave in (S_{i+1}′, H_{i+1}′). Consider

J_i(S_i′, H_i′; S_i, H_i) = E_i[ V_{i+1}(S_{i+1}′, H_{i+1}′) | S_i′, H_i′, S_i, H_i ]

Using arguments similar to the above, S_{i+1}′ and H_{i+1}′ are each concave functions of the variable set (S_i′, H_i′, S_i, H_i) by Assumptions 2 and 3 and Eqs. (9) and (10). We would like to show that V_{i+1}(S_{i+1}′, H_{i+1}′), and by extension J_i(S_i′, H_i′; S_i, H_i), are concave functions of (S_i′, H_i′, S_i, H_i). For ease of notation, let A = (S_i′, H_i′, S_i, H_i), S_{i+1}′ = g(A) and H_{i+1}′ = h(A), where g(A) and h(A) are concave functions. Pick any A₁, A₂ in the feasible set and λ ∈ (0, 1). Since g(A) and h(A) are concave functions,

g(λA₁ + (1 − λ)A₂) ≥ λ g(A₁) + (1 − λ) g(A₂),    h(λA₁ + (1 − λ)A₂) ≥ λ h(A₁) + (1 − λ) h(A₂)

Since V_{i+1} is increasing in each variable (Theorem 1) and concave by the induction hypothesis, the composition argument of Lemma 1 gives

V_{i+1}(g(λA₁ + (1 − λ)A₂), h(λA₁ + (1 − λ)A₂)) ≥ λ V_{i+1}(g(A₁), h(A₁)) + (1 − λ) V_{i+1}(g(A₂), h(A₂))

and taking the conditional expectation preserves this inequality by Lemma 3, so J_i is concave; Lemma 4 then gives the concavity of V_i. By induction, the concavity of V_i(S_i′, H_i′) and J_i(S_i′, H_i′; S_i, H_i) holds in general. ∎
Note that, given the amounts held in each account prior to rebalancing and only one of the values after rebalancing, the other state variable and the action taken are determined. S_i, the liquid holdings after rebalancing the portfolio, is considered the control variable in this problem. The optimal action is defined as the optimal level of liquid holdings given the amounts prior to rebalancing. H_i, the optimal holdings in the hedge fund, is then determined by the control variable S_i and the state variables S_i′ and H_i′ via Eq. (9).
The following two theorems are the main results on optimal hedge fund redemption policies. The main structure of the optimal policy is as follows. If the hedge fund is an open-ended one, meaning the investor can invest back into the hedge fund on the same dates on which he can redeem his investments, the investor has two critical values: an upper bound and a lower bound. If the investor's liquid holdings are above the upper bound, he should transfer the excess amount back to the hedge fund to bring his liquid holdings down to that level; if his liquid holdings are below the lower bound, he should redeem hedge fund holdings to bring his liquid holdings up to that critical level, or as close to it as possible. For closed-ended hedge funds, where the hedge fund is closed to new investments, the policy has a single critical value: if the liquid holdings are below that level, the investor should redeem hedge fund holdings to bring his liquid holdings up to that critical level, or as close to it as possible.
Theorem 3 Consider the dynamic programming setup in Eqs. (5)–(9). If J_i(S_i′, H_i′; S_i, H_i) is concave in (S_i, H_i) ∈ S_i^a × H_i^a, and

V_i(S_i′, H_i′) = sup_{S_i} { J_i(S_i′, H_i′; S_i, H_i) }        (11)

then for each t_i ∈ Γ there are numbers L_i ≤ U_i such that the optimal policy is of the form

S_i = U_i                            if U_i < S_i′
S_i = S_i′                           if L_i ≤ S_i′ ≤ U_i
S_i = max(L_i, S_i′ + (1 − c) G_i)   if S_i′ < L_i
Proof Since J_i is concave in S_i, it is sufficient for optimality that

dJ_i/dS_i (a) ≥ 0 for all a ≤ S_i*
dJ_i/dS_i (a) ≤ 0 for all a > S_i*        (12)

where the total derivative follows from the chain rule:

dJ_i/dS_i = ∂J_i/∂S_i + (∂J_i/∂H_i)(dH_i/dS_i)
Define

U_i = sup{ a ∈ [0, ∞) : ∂J_i/∂S_i (a) ≥ (1 − s) ∂J_i/∂H_i (a) }

If U_i < S_i′, then the optimality condition (12) is satisfied at U_i, and the optimal liquid investment level should be S_i* = U_i.

With the analogous definition

L_i = sup{ a ∈ [0, ∞) : ∂J_i/∂S_i (a) ≥ (1/(1 − c)) ∂J_i/∂H_i (a) }

and similar arguments, S_i* = max(L_i, S_i′ + (1 − c) G_i) becomes optimal if S_i′ < L_i. Note that the constraint imposed by the gate provision limits the amount the investor can redeem to increase his liquid holdings, but the resulting level still satisfies the optimality condition.
Due to the monotone decreasing nature of the derivative dJ_i/dS_i, it is clear that L_i ≤ U_i.

If L_i ≤ S_i′ ≤ U_i, we want to argue that S_i* = S_i′, hence no rebalancing is needed. Since J_i is concave in S_i, the derivative dJ_i/dS_i (a) is monotone decreasing, so dJ_i/dS_i (a) ≥ dJ_i/dS_i (S_i′) ≥ 0 for all a ≤ S_i′, and dJ_i/dS_i (a) ≤ dJ_i/dS_i (S_i′) ≤ 0 for all a > S_i′. Therefore, S_i′ satisfies the optimality conditions in Eq. (12). ∎
The policy described by the two critical values has a natural intuition. If the liquid holdings are too high, the investor should transfer holdings back to the hedge fund as long as the marginal benefit of the hedge fund exceeds that of the liquid investment. Conversely, if his liquid investment is too low, he should redeem hedge fund holdings as long as the marginal benefit of the liquid investment is larger than that of the hedge fund. Since he has to pay a fraction of each dollar transferred as either a placement or a redemption fee, there is a range of values in which it is optimal to abstain from any adjustment.
Theorem 4 Consider a closed-end hedge fund, where the investor cannot invest any more of his liquid funds back into the hedge fund. The dynamic programming setup in Eqs. (6) and (7) holds, with J_i(S_i′, H_i′; S_i, H_i) concave in (S_i, H_i) ∈ S_i^a × H_i^a, where

S_i^a = { S_i : S_i′ ≤ S_i ≤ S_i′ + (1 − c) g H_i′ }
H_i^a = { H_i : H_i′ − g H_i′ ≤ H_i ≤ H_i′ }        (14)

The hedge fund holdings are defined through Eq. (15) as

H_i = H_i′ − (S_i − S_i′)/(1 − c)    if S_i > S_i′
H_i = H_i′                           otherwise        (16)

Then for each t_i ∈ Γ, there is a critical number L_i such that the optimal policy is of the form

S_i = max(L_i, S_i′ + (1 − c) G_i)   if S_i′ < L_i
S_i = S_i′                           if L_i ≤ S_i′
Proof The proof is based on a concavity argument similar to the one made in Theorem 3 for open-ended hedge funds, where the investor can trade into or out of the hedge fund. The fact that he is now constrained to a one-way trade does not affect the concavity of the function J_i.

We define

L_i = sup{ a ∈ [0, ∞) : ∂J_i/∂S_i (a) ≥ (1/(1 − c)) ∂J_i/∂H_i (a) }

If S_i′ < L_i, the optimality condition (12) is satisfied at L_i, or as close to it as the investor's holdings allow him to get; thus S_i* = max(L_i, S_i′ + (1 − c) G_i) becomes optimal. If S_i′ ≥ L_i, then dJ_i/dS_i (a) ≤ dJ_i/dS_i (S_i′) ≤ 0 for all a > S_i′, which satisfies the optimality condition, and a < S_i′ is not in the feasible set in this case. Therefore, S_i* = S_i′ is optimal when S_i′ ≥ L_i. ∎
The two critical values L_i ≤ U_i defining the optimal policy are, in the general form of the model, functions of the state variables and of other market dynamics such as interest rates and volatility. General behavior patterns of the critical values in terms of the state variables are discussed next.
The critical values that define the portfolio allocation policy for the investor are in
terms of the amount he has in liquid holdings. This section provides the structural
properties of the critical values in terms of other state variables, particularly the
hedge fund holdings and the time until investment horizon ends.
Theorem 5 The critical values L_i and U_i are increasing in the hedge fund holdings prior to rebalancing, H_i′.

Proof Consider the critical values

U_i = sup{ a ∈ [0, ∞) : ∂J_i/∂S_i (a) ≥ (1 − s) ∂J_i/∂H_i (a) }
L_i = sup{ a ∈ [0, ∞) : ∂J_i/∂S_i (a) ≥ (1/(1 − c)) ∂J_i/∂H_i (a) }

Since J_i is concave in S_i and H_i, ∂J_i/∂S_i and ∂J_i/∂H_i are decreasing in S_i and H_i, respectively. Therefore, it is sufficient to show that ∂J_i/∂H_i is also decreasing in H_i′. By the chain rule,

∂²J_i/(∂H_i′ ∂H_i) = (∂²J_i/∂H_i²)(∂H_i/∂H_i′)

∂²J_i/∂H_i² ≤ 0 due to the concavity of J_i, and ∂H_i/∂H_i′ = 1 by Eq. (16). Therefore, ∂²J_i/(∂H_i′ ∂H_i) ≤ 0, proving that ∂J_i/∂H_i is decreasing in H_i′, and thus that L_i and U_i are increasing in H_i′. ∎
The investor thus aims to keep his liquid holdings within a certain range, which does not depend on how much wealth he has tied to an illiquid investment.
Theorem 6 The critical values L_i and U_i are decreasing in the time remaining until the investment horizon, i.e., increasing along the time index of Γ = {t_i : i = 0, 1, ..., k} with 0 = t_0 < t_1 < ... < t_k = T.

Proof We forgo a rigorous proof and make a heuristic argument about the monotonicity of the critical values in time. We conjecture that the expected utility is more sensitive to changes in hedge fund holdings further away from the end of the investment horizon. In mathematical terms,

∂J_i/∂H_i ≥ ∂J_{i+1}/∂H_{i+1}
This study is among the few that examine portfolio allocation under illiquidity. Here I take a practical view of the illiquidity imposed by the trading restrictions of a hedge fund. As an extension of this study, I intend to analyze the impact of illiquidity in the form of a longer expected time of sale, due to the innate thinness of the market for individual assets such as real estate or exotic derivatives. The optimal time and price at which to sell such a single illiquid asset is an area that will be considered in a separate study.
As hedge funds and investors realized during the recent financial crisis and its aftermath, liquidity should be deliberately managed. This study proposed a simple objective in that regard: keep liquid holdings above a threshold or else pay a penalty. Although this is not far from the goals of most investors, more sophisticated objectives can be considered for institutional investors with other liquidity needs.
This study provides a simple structure for the optimal redemption policies. Quantifying these policies and demonstrating their superiority to ad hoc approaches to redemption would be a welcome extension of this study. The structural analyses are also limited to monotonicity in time and in the amount held in hedge funds. One could analyze the impact of changes in the risk aversion factor, interest rates or volatility on the optimal policies.
This study is admittedly simple and lacks a rigorous economic foundation. In a more theoretical study, it would be a natural extension to bring in intertemporal consumption as a new control variable and add a new dimension to the problem. One could also consider the impact of correlation between liquid and illiquid holdings on the optimal policies.
The implication of this study from the hedge fund managers' point of view is that they are not immune to fluctuations in the markets where their investors hold their liquid assets. As the recent crisis has shown, even if a hedge fund has a low beta and is not correlated with the wider stock market, a plummeting stock market, where most investors hold their liquid assets, means that hedge funds will be inundated with redemption requests. Such a flood of redemption requests puts irreversible strains on hedge funds and the assets they hold. A further study could approach the problem from a game-theoretic point of view, to optimize the redemption restrictions that hedge fund management imposes.
References
Ackermann C, McEnally R, Ravenscraft D (1999) The performance of hedge funds: risk, return,
and incentives. J Financ 54(3):833–874
Agarwal V, Naik NY (2004) Risks and portfolio decisions involving hedge funds. Rev Financ Stud
17(1):63–98
Amihud Y, Mendelson H (1986) Asset pricing and the bid-ask spread. J Financ Econ 17
(2):223–249
Aragon GO (2007) Share restrictions and asset pricing: evidence from the hedge fund industry. J
Financ Econ 83(1):33–58
Bensoussan A, Chutani A, Sethi SP (2009) Optimal cash management under uncertainty. Oper Res
Lett 37(6):425–429
Cetin C, Zapatero F (2015) Optimal acquisition of a partially hedgeable house. Math Financ Econ
9(2):123–147
Constantinides GM (1986) Capital market equilibrium with transaction costs. J Polit Econ
94:842–862
Davis MH, Norman AR (1990) Portfolio selection with transaction costs. Math Oper Res 15
(4):676–713
Eppen GD, Fama EF (1968) Solutions for cash-balance and simple dynamic-portfolio problems. J
Bus 41(1):94–112
Eppen GD, Fama EF (1969) Cash balance and simple dynamic portfolio problems with propor-
tional costs. Int Econ Rev 10(2):119–133
Eppen GD, Fama EF (1971) Three asset cash balance and dynamic portfolio problems. Manag Sci
17(5):311–319
Federgruen A, Zipkin P (1986a) An inventory model with limited production capacity and
uncertain demands I. The average-cost criterion. Math Oper Res 11(2):193–207
Federgruen A, Zipkin P (1986b) An inventory model with limited production capacity and
uncertain demands II. The discounted-cost criterion. Math Oper Res 11(2):208–215
Girgis NM (1968) Optimal cash balance levels. Manag Sci 15(3):130–140
Heyman DP (1973) A model for cash balance management. Manag Sci 19(12):1407–1413
Heyman DP, Sobel MJ (1984) Stochastic models in operations research, vol II. McGraw Hill,
New York
Lippman SA, McCall J (1976) The economics of job search: a survey. Econ Inq 14(2):155–189
Lippman SA, McCall JJ (1986) An operational measure of liquidity. Am Econ Rev 76(1):43–55
Longstaff FA (2001) Optimal portfolio choice and the valuation of illiquid securities. Rev Financ
Stud 14(2):407–431
Nascimento J, Powell W (2010) Dynamic programming models and algorithms for the mutual
fund cash balance problem. Manag Sci 56(5):801–815
Neave EH (1970) The stochastic cash balance problem with fixed costs for increases and
decreases. Manag Sci 16(7):472–490
Özer Ö, Wei W (2004) Inventory control with limited capacity and advance demand information.
Oper Res 52(6):988–1000
Penttinen MJ (1991) Myopic and stationary solutions for stochastic cash balance problems. Eur J
Oper Res 52(2):155–166
Schwartz ES, Tebaldi C (2006) Illiquid assets and optimal portfolio choice (No. w12633).
National Bureau of Economic Research
Shreve SE, Soner HM (1994) Optimal investment and consumption with transaction costs. Ann Appl Probab 4(3):609–692
Stambaugh RF (2003) Liquidity risk and expected stock returns. J Polit Econ 111(3)
Whisler WD (1967) A stochastic inventory model for rented equipment. Manag Sci 13(9):640–647
Yan XS (2006) The determinants and implications of mutual fund cash holdings: theory and
evidence. Financ Manag 35(2):67–91
Abstract Derivatives are financial instruments that derive their value from an underlying asset such as a bond, loan or credit. Credit derivatives are a subgroup of derivatives and mainly consist of credit default swaps, credit-linked notes, credit swap options and collateralized debt obligations. The credit derivatives market has experienced exponential growth in recent years: from almost nothing in the 1990s, it approached $60 trillion in 2008. Growth was particularly strong in credit default swaps. The force behind this fast growth is the rising demand for hedging and transferring credit risk. After the credit crisis, the misuse of credit derivatives and insufficient regulation came to light and were widely debated. Many called for banning these instruments, whereas many others tried to find alternative solutions. The purpose of this paper is to explain credit derivatives, their mechanism, and their role in the financial system and the global credit crisis.
1 Introduction
Credit derivatives are financial agreements that allow credit risk to be transferred between a buyer and a seller. Credit derivatives are a subgroup of the derivatives market, which includes futures, forwards, swaps and options. Since its inception in the mid-1990s, the market for credit derivatives has grown rapidly and has gone through rapid change, and more sophisticated and complex credit derivative instruments have been introduced. From $631 billion in notional amount in the first half of 2001, the credit derivatives market reached $17 trillion at the end of 2005 and $26 trillion by mid-2006 (Tijoe 2007).
It is hard to pinpoint when credit derivatives first emerged, but they are roughly said to have appeared around 1993. The loan trading market and collateralized loan obligations are the two main driving forces behind the evolution of credit derivatives, because these loan markets revealed certain necessities and brought about new developments such as credit ratings, quantitative pricing models and the pricing of default risk. High trading volume in the loan market led to the transfer of credit risk, but the market still had problems with the pricing of defaultable loans and the transfer of credit risk. In order to fill these gaps in the market, credit-linked notes emerged as the first credit derivative (Kothari 2011).
The evolution of the credit derivatives market can be analysed in four stages. In the pre-1997 stage, the use of credit derivatives mostly consisted of one-time transactions, and total return swaps and equity-linked swaps dominated the market. In the second stage, between 1997 and 1999, there were new developments in the standardization of CDS, and a need for protection from credit risk arose, especially during the Asian, Russian and Mexican crises. In the third stage, between 1999 and 2003, credit derivatives transactions began to be made by dealers, and in the same period the liquidity of credit default swaps increased dramatically. During this period many credit events occurred, such as Enron, WorldCom, Argentina and National Power. In the fourth stage, credit derivative indices were introduced to the market (Kothari 2011). Credit derivatives have been seen as a breakthrough in managing credit risk. They provide helpful tools to hedge, reduce and transfer credit risk. They make it possible to decompose and reallocate risk, without transferring ownership of the underlying asset, to different parties who are willing to bear these risks. Using credit derivatives it is even possible to construct a replicating portfolio. One of the most important reasons behind the exponential growth of credit derivatives is the rising use of LIBOR as an interest rate benchmark, because it indicates the credit quality of banks and the cost of hedging. Also, owing to new pricing models and quantitative approaches to credit risk, the derivatives market has reached its new position (O'Kane 2001).
Credit default swaps are financial contracts that are used to transfer the credit risk of a reference entity between a credit protection buyer and a credit protection seller. The CDS buyer, in return for transferring credit risk to the CDS seller, agrees to make periodic fee payments to the protection seller. The protection seller collects these fees and has to make a default payment to the CDS buyer in the case of a credit event. The CDS buyer continues to make periodic payments until the occurrence of a credit event that
Credit Derivatives, Their Risks and Role in Global Financial Crisis 145
triggers the default payment. After the credit event, the contract terminates. A credit default swap contract can be closed by either physical delivery or cash settlement (IOSCO 2012).
As indicated in Fig. 1, in a credit default swap (CDS) the protection seller commits to compensating the protection buyer's loss if the reference entity experiences one of a number of defined credit events. Figure 1 shows an example of a 5-year-maturity CDS issued on a notional value of $100 million at an annual spread of 100 basis points, with XYZ Corporation as the reference entity. The protection seller is paid a premium, typically expressed as an annualised percentage of the notional value of the transaction in basis points and paid quarterly over the life of the transaction (Rule 2001).
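The premium arithmetic of the Fig. 1 example can be checked in a few lines; this is a simple sketch that ignores day-count conventions and premium accrual on default.

```python
def cds_quarterly_premium(notional, spread_bps):
    """Quarterly CDS premium: the annual spread in basis points applied
    to the notional, paid in four installments per year."""
    return notional * spread_bps / 10_000 / 4

# $100 million notional at 100 bp per annum, as in the Fig. 1 example
payment = cds_quarterly_premium(100_000_000, 100)   # 250_000.0 per quarter
```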
Selling protection via CDS resembles taking a leveraged long position in a floating rate note (FRN) of the reference entity. Since both the CDS and the FRN reflect the credit risk, the periodic fee payment of the CDS must be equal to the return of the FRN, that is, its spread over LIBOR. From this standpoint, CDS prices should be in line with bond prices. Likewise, buying protection through a CDS is similar to taking a short position in the underlying asset: the protection buyer can replicate a CDS contract by shorting the reference bond and investing at the riskless rate (IOSCO 2012).
Credit default contracts can be designed on a specific reference asset as well as on a portfolio of reference entities, the so-called index or tranche CDS. Markit, a main provider of CDS indices, developed two main index families, "CDX" and "iTraxx", using the most liquid single-name CDS. The "CDX" index family uses North American and emerging-market reference entities, whereas the "iTraxx" family uses European and Asian reference entities. These index families have sub-indices that vary by region, industry or maturity (IOSCO 2012).
Total return swaps are also bilateral agreements that enable the transfer of credit risk, in this case between a total return payer and a total return receiver. Unlike a CDS, a total return swap transfers credit risk by exchanging the total return, and thus the credit risk, of the asset for another cash flow. The receiver of the total rate of return is the investor who gets the benefits of the total return without taking ownership of the security. Payments between the TR receiver and the TR payer change depending on the market valuation of the security (JP Morgan 1999).
The two parties of a total return swap are the total return receiver and the total return payer. As seen from the chart, without taking ownership of the underlying asset, the TR receiver gets the total return of the reference obligation (the underlying asset) in exchange for paying LIBOR plus a fixed spread. The total return on the reference asset consists of interest, fees and changes in the market value of the reference asset. If the market value of the underlying asset increases, the TR payer makes the payment, whereas in the case of a negative total return the TR receiver must compensate the loss and make the payment to the TR payer (JP Morgan 1999). The maturity of the underlying asset and that of the total return swap need not be the same. Total return swaps can be terminated by either cash settlement or physical settlement.
A total return swap brings about some risks that the counterparties have to bear in mind. The first risk is a decrease in the value of the reference asset combined with the default of the TR receiver. The second risk involves the joint default of the reference asset obligor and the TR receiver before the payment related to the decreased market value of the asset is made to the TR payer.
Total return swaps are mainly used for funding and trading purposes. In the trading-based use of a TR swap, it is possible to create a new synthetic asset or to short an asset without selling it; using a TR swap, it is also easier to leverage a credit view (Choudhry 2004).
Credit-linked notes are hybrid instruments that combine a bond with credit derivatives such as credit default swaps, total return swaps and credit spread options. Credit-linked notes are the funded variation of the credit default swap.
The CLN issuer, as protection buyer, makes regular coupon payments in return for receiving the principal amount at issuance: the party who wants to buy protection against credit risk issues the CLN. Unless a credit event occurs, the issuer is responsible for making interest payments until maturity and for repaying 100 % of the principal at maturity. If the reference entity defaults before maturity, the CLN terminates and interest payments stop. Similar to a CDS, a CLN can terminate by cash settlement or physical delivery (Bruyere et al. 2006).
In a credit-linked note, contrary to a CDS, the protection buyer does not take the risk of joint default of the protection seller and the reference entity, because in a CLN the protection buyer receives the principal amount at the beginning and repays it only if a credit event does not occur (Bruyere et al. 2006).
To illustrate a CLN, consider Bank A, which has invested in bond X. To protect
itself from a potential default of bond X, Bank A can issue a credit-linked note.
Bank B, as protection seller, buys the $10 million CLN at par. Unless a credit
event occurs, the cash flows between the CLN issuer (Bank A) and the investor
(Bank B) continue until the note terminates at maturity. In the case of default,
if the market value of the bond declines by 35 %, the recovery value will be 65 %
of par, i.e., $10 million × 65 % = $6.5 million. Bank A's gain of $3.5 million
($10 million × 35 %) from the CLN compensates its loss of $3.5 million on bond X.
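The settlement arithmetic of this example can be sketched in a few lines of Python; the notional and recovery rate are the illustrative figures from the text, not market data:

```python
# Sketch of the CLN settlement arithmetic from the example above.
# Figures (notional, recovery rate) are the text's illustrative values.

def cln_settlement(notional: float, recovery_rate: float) -> dict:
    """Return the investor's repayment and the issuer's gain on default."""
    repayment = notional * recovery_rate   # recovery value paid back to investor
    issuer_gain = notional - repayment     # offsets the issuer's loss on the bond
    return {"repayment": repayment, "issuer_gain": issuer_gain}

result = cln_settlement(notional=10_000_000, recovery_rate=0.65)
print(result["repayment"])    # 6500000.0 paid back to the CLN investor
print(result["issuer_gain"])  # 3500000.0 retained, compensating the bond loss
```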
Credit-linked notes were very popular in the 1990s; however, as the credit
derivatives market developed and credit swaps and credit options came into wide
use, the use of CLNs decreased (Finnerty 1998).
A put option gives the investor the right to sell and a call option the right to
buy. To illustrate, suppose that the strike spread is 125 bp. An investor who
believes that the yield spread on the bond will rise above the strike spread (for
example, to 200 bp) should invest in a credit spread put option, because that
option will be in the money. Conversely, an investor who expects yield spreads to
fall below the strike spread (for example, to 100 bp) should invest in a credit
spread call option, because the put option would then be out of the money. By
using credit spread options investors can separate credit risk from market risk
and other types of risk. Suppose an investor wants protection against a downgrade
of a bond's credit rating; in this situation the investor would do better to buy a
credit spread put option. Suppose instead that the investor expects an upgrade,
which means a decrease in spreads and an increase in the bond price; this time the
investor can realize a profit by buying a credit spread call option (Finnerty 1998).
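The payoff logic of this illustration can be sketched as follows. This is a simplified picture in basis points per unit notional, ignoring option premiums and duration scaling; the 125 bp strike and the 200 bp/100 bp scenarios are the example's values:

```python
# Simplified expiry payoffs (in basis points) of credit spread options;
# premiums and duration scaling are deliberately ignored.

def spread_put_payoff(spread_bp: float, strike_bp: float) -> float:
    """Credit spread put: pays when the yield spread widens above the strike."""
    return max(spread_bp - strike_bp, 0.0)

def spread_call_payoff(spread_bp: float, strike_bp: float) -> float:
    """Credit spread call: pays when the yield spread tightens below the strike."""
    return max(strike_bp - spread_bp, 0.0)

strike = 125.0
print(spread_put_payoff(200.0, strike))   # 75.0 -> put in the money at 200 bp
print(spread_call_payoff(200.0, strike))  # 0.0  -> call out of the money
print(spread_call_payoff(100.0, strike))  # 25.0 -> call in the money at 100 bp
```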
Besides decomposing credit risk, CDSs can also be used to design new portfolio
instruments with different risk and return characteristics. This use of CDSs led
to the development of collateralized debt obligations. The simplest form of CDO is
shown in Fig. 2 (Rule 2001).
CDOs are asset-backed securities that derive their value from an underlying
collateralized asset portfolio. These portfolios consist of loans, mortgages,
credit card receivables, and even other CDOs. CDOs can be used for three different
148 F.S. Dural
purposes. The main reasons behind the issuance of a CDO are to remove assets from
the bank's balance sheet and to exploit arbitrage opportunities.
Banks pool assets together and sell the package to a Special Purpose Entity,
which issues collateralized debt obligations. These securities are divided into
tranches such as senior, mezzanine, and equity, and sold to investors. Among these
tranches, the senior tranches have the lowest risk and return, whereas the equity
tranches have the highest risk and the highest potential return. When losses
occur, the equity tranche absorbs them first, then the mezzanine, and the senior
tranche last. Investors select the tranche that is consistent with their risk
preferences: risk-averse investors tend to prefer senior tranches, while investors
with high risk tolerance prefer equity tranches. Collateralized debt obligations
can be divided into three groups (Russell and Kyle 2012).
Cash Flow CDOs: in this most common type of CDO, the cash flows of the underlying
assets are sufficient to cover all of the payments made to investors.
Arbitrage CDOs give an opportunity to profit from the spread between the yield on
the underlying assets and the yield that is paid to investors in the CDO. For
example, assume that the yield of the underlying asset is 15 % and the yield
required by CDO investors is 10 %; the CDO issuer can then make a profit of 5 %.
Synthetic CDOs: unlike the other types of CDOs, synthetic CDOs do not need to
hold an underlying asset at all. Investors enjoy the opportunity to take a share
of a diversified underlying portfolio, so they do not care whether the CDO owns
the underlying assets or not; they simply expect to receive cash flows. Aware of
this, CDO issuers looked for a new way to pay these cash flows without owning the
underlying assets, and at this point the credit default swap and the total return
swap come into play. Rather than buying assets, the special purpose entity sells
CDS protection and then passes the cash flows from the CDS on to investors, just
like a regular CDO.
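The tranche structure described above can be illustrated with a minimal loss-allocation ("waterfall") sketch; the tranche sizes below are hypothetical, chosen only to show how losses pass from the equity tranche up to the senior tranche:

```python
# Minimal sketch of bottom-up loss allocation across CDO tranches:
# equity absorbs losses first, then mezzanine, then senior.
# Tranche sizes are hypothetical illustration values.

def allocate_losses(loss: float, tranches: list[tuple[str, float]]) -> dict:
    """tranches: (name, size) pairs ordered from most junior to most senior."""
    allocation = {}
    for name, size in tranches:
        hit = min(loss, size)   # a tranche absorbs losses up to its full size
        allocation[name] = hit
        loss -= hit             # the remaining loss passes up the structure
    return allocation

structure = [("equity", 5.0), ("mezzanine", 15.0), ("senior", 80.0)]
print(allocate_losses(12.0, structure))
# equity wiped out (5.0), mezzanine takes 7.0, senior untouched
```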
According to Chart 1, the major credit derivative product is the credit default
swap. The OCC's 2013 Quarterly Report stated that CDSs constitute 97 percent of
total credit derivatives. The four largest banks, JP Morgan, Citigroup, Bank of
America, and Goldman Sachs, hold 93 % of the total notional amount of derivatives.
Investment-grade reference entities constitute 49 % of all credit derivatives
notional, the largest part of the market. Compression operations, as well as
reduced demand for structured products, resulted in a decline in the notional
amount of outstanding credit derivatives. According to the OCC's 2015 Quarterly
Report, trading volumes of credit derivatives started to decline in 2007 and this
trend continued in the first half of 2015.
One of the main reasons behind the rise of the CDS is the work of the
International Swaps and Derivatives Association (ISDA), founded in 1985 to improve
the operational infrastructure of the derivatives market. ISDA developed the
Master Agreement to strengthen the effect of netting and collateral, and revised
it in 2002. ISDA defined a format and standardized documentation for CDSs in terms
of reference entity, credit event, CDS premium, maturity date, and nominal value.
Increasing demand in the CDS market revealed the need for more standardization and
transparency. In 2009, ISDA developed a new Master Confirmation Agreement (the
"Big Bang" Protocol) to strengthen CDS contract standardization in terms of
expiration dates and periodic premium payments, and to develop central
counterparties. Single-name CDS premiums were set at 100 or 500 basis points for
US contracts and at 25, 100, 500, or 1000 basis points for the European market
(IOSCO 2012).
The credit default swap market has experienced exponential growth since its
inception. The notional amount of outstanding CDSs rose by 60 % in the first half
of 2005 and reached $13.7 trillion at the end of the year. The growth rate of
single-name CDSs was 40 %, while that of multi-name CDSs was 21 %. At the end of
2007, the volume of outstanding CDSs peaked at approximately $60 trillion; it is
noteworthy that in the same period global GDP was $54 trillion. After this upward
trend the CDS market size declined sharply, to $29 trillion at the end of 2011 and
approximately $21 trillion at the end of 2013. Especially after the financial
crisis, it has been argued that the CDS lost some of its appeal. In fact, the
decline in the volume of outstanding CDSs reflects the result of efforts to reduce
counterparty risk, as well as compression operations. Indeed, according to Markit,
trading volumes continued to rise and were almost twice as high in the first nine
months of 2010 as in the same period of 2007. According to the BIS December 2015
Quarterly Review, the downward trend continued in 2014, with $16.4 trillion
outstanding, declining to $14.5 trillion in the first half of 2015 (Graph 1).
Trading activity in CDSs mostly occurs in the interbank market. The BIS 2006
Derivatives Report indicates that two thirds of the total notional amount of
outstanding transactions is done by dealers and the remaining one third by
financial institutions. Furthermore, compared to the bond market, the CDS market
has a low trade frequency but a large average trade size. The trade frequency of
index CDSs is higher than that of single-name CDSs (Graph 2).
The CDO market also grew dramatically between 1996 and 2007. Issuance reached a
new high of $312 billion in 2006, an increase of 102 % over 2005, and CDOs
represented the second largest class of asset-backed securities. The year 2006
also brought innovation to the CDO market: driven mostly by the US housing market,
high-grade structured products spread rapidly in 2006 and represented nearly two
thirds of the structured-product segment of the CDO market (Thompson et al. 2007).
The market peaked by 2008, and especially after the financial crisis it declined
to roughly its level of 2000.
Credit derivatives can be used either to bear risk or to avoid risk. A market
participant that holds a bond, a credit, or a loan, and is thereby exposed to
credit risk, can hedge its position by buying protection in the credit derivatives
market, even without transferring ownership of the underlying asset. Similarly, a
market participant willing to bear more risk in return for a profit can sell
protection via credit derivatives (Bomfim 2001).
Risk has always arisen when a new product enters the financial market, due to a
lack of infrastructure, and that lack of infrastructure may have costly
consequences, as it has had in the credit derivatives market. For that reason,
credit derivatives pose some additional risks to financial markets, besides
allocating and hedging risk (Tijoe 2007).
Credit risk refers to the default of the reference credit. Credit derivatives
shift credit risk around rather than eliminating it; as a result, when defaults
rise in the credit cycle, someone will lose money. Credit derivatives transfer
credit risk in a way that is not easy to understand and to follow, and many market
participants are not even aware of the size of the exposures they carry. Financial
institutions have lost billions of dollars on investments in credit derivatives;
in the 2001–2002 credit cycle in particular, CDO investors suffered losses
(Gibson 2007).
Credit risk is the main risk borne by the protection seller. Depending on the
contract terms, in the case of default the protection seller must either make a
payment to compensate the fall
in value of the reference asset or buy the asset at the notional contract amount
(Scott-Quinn and Walmsley 1998).
Counterparty risk is the most important type of risk to the financial markets. It
is also known as default risk and is mostly caused by asymmetric information,
because individuals and firms typically know more about their own financial
condition and prospects than do other individuals and firms (Bullard et al. 2009).
The protection buyer faces counterparty exposure: the opposing party to the
contract can fail to meet its payment obligations, in which case the protection
buyer is left unprotected and loses money. Counterparty risk depends on the
mark-to-market value of the credit derivative and the potential future credit
exposure. In order to mitigate counterparty risk, the protection buyer can require
the posting of collateral (Scott-Quinn and Walmsley 1998).
The collapse of Lehman is a good illustration. The notional amount of protection
bought on Lehman was unclear at the time of the bankruptcy; estimates of the total
notional amount of credit default swaps written on Lehman ranged from $72 billion
to $400 billion. When Lehman failed, it had close to one million derivatives
contracts on its books with hundreds of financial firms, and it failed to fulfill
its payment obligations. For that reason, the strongest protection is generally
the use of collateral, and usually the amount of collateral insuring a
counterparty's performance on a contract changes with the value of the contract
(Stulz 2009).
Systemic risk refers to the impairing effects of a triggering event, such as the
failure of a large financial firm, on financial markets and the broader economy.
Complex mortgage-backed securities, insufficient transparency, high leverage, and
inadequate risk management created systemic risk in the global credit crisis. The
trading of large commercial and investment banks with each other through deposit
markets, together with transactions in OTC derivatives, increased
interconnectedness and caused systemic risk. Highly leveraged positions also
enabled banks and hedge funds to invest heavily in mortgage-related securities and
to finance their holdings by borrowing heavily in debt markets (Bullard et al.
2009). Lehman Brothers, AIG, and other organizations were major writers of credit
default swaps; coupled with high leverage, these CDS protection sellers became
highly vulnerable to mortgage defaults (Harrington 2009). In this context the
failures of Bear Stearns, Lehman Brothers, and AIG contributed to systemic risk.
Transaction risk may arise unless the allocation of credit risk is done under
ISDA documentation, because the parties may disagree about the credit event that
triggers the default payment, or about the deliverable asset. Relying on
standardized documentation and agreement terms enables a clear understanding of
the transaction (Scott-Quinn and Walmsley 1998).
For hedging purposes, and thus for the protection buyer, liquidity risk is
relatively unimportant. In contrast, for issuers of credit derivatives and for
investors who plan to close out their positions, liquidity risk is crucial
(Scott-Quinn and Walmsley 1998).
The OTC market permits greater customization, and this customization makes it
difficult to liquidate an OTC position. An investor with an illiquid product will
have a very difficult time unloading the risk or minimizing the loss; therefore,
holding illiquid products on the balance sheet may significantly harm a party's
cash flow. As a result, financial institutions and governments suffered from
illiquidity during the credit crisis (Tijoe 2007).
Credit derivatives have remained a focal point of debate during and after the
financial crisis. Many have argued that credit derivatives (CDSs) must be banned;
on the contrary, many others have claimed that credit derivatives cannot be
excluded entirely, and that only the misuse of these instruments must be prevented
through new regulation (Al-shakrchy and Almsafir 2014). A similar difference of
opinion can be seen between Warren Buffett, who called derivatives "weapons of
mass destruction," and the former Chairman of the Federal Reserve System, Alan
Greenspan, who claimed that CDS is an efficient vehicle of credit risk transfer
(Augustin et al. 2015).
According to Stulz (2009), the credit default swap market remained liquid,
especially during the first year of the credit crisis between July 2007 and July
2008, and the market processed defaults successfully, as in the default of Lehman.
However, it is true that the credit default swap market has some problems.
Although collateral arrangements became widespread in 2007, covering 63 % of
derivatives contracts, they were still not universal and could cause large losses.
Furthermore, it is difficult to measure the size of dealers' gross exposures. Even
if the net amount of risk is zero and the mark-to-market and collateral mechanisms
are running properly, dealers can still pose some risks to the financial system as
a result of limited transparency and counterparty risk. Even so, these problems do
not justify banning the CDS.
CDSs are highly leveraged and unfunded credit derivatives, so financial
institutions can enter into large positions without restrictions and without
holding the underlying asset. This ease of use has brought about misuse and
decreased transparency (Al-shakrchy and Almsafir 2014).
Credit derivatives were especially blamed at the time of the bankruptcies of Bear
Stearns, Lehman Brothers, and AIG. According to Al-shakrchy and Almsafir (2014),
in these cases the opaqueness of the derivatives market prevented market
participants from knowing the accurate amount of their counterparties' exposures.
The combination of this opaqueness with a willingness to bear more risk for more
profit caused bankruptcies and spilled over into global financial markets.
The AIG bankruptcy is most often discussed in this context. The real cause of
AIG's failure was its subsidiary, AIG Financial Products Corporation (AIG FP),
which heavily issued and traded CDSs on mortgage-backed securities. AIG FP sold
protection to make money, especially on "super senior risk tranches of diversified
pools of loans and debt securities." AIG's AAA credit rating convinced investors
to pay a higher premium for protection, in view of AIG's guarantee of all present
and future payment obligations of AIG FP transactions; according to investors, the
default probability of AIG was very low, so counterparties did not even require it
to post collateral. As of December 31, 2007, AIG had total assets of $1.06
trillion, shareholders' equity of $95.8 billion, and a market capitalization of
$150.7 billion, and it held $61.4 billion of CDSs on multi-sector CDOs with
subprime mortgage loan exposure. AIG collapsed because the collateral obligations
embedded in the CDSs it wrote triggered a chain reaction: when AIG's share price
dropped sharply, financial distress began and forced it to put up $14.5 billion in
collateral, so AIG had to raise funds or quickly sell off some of its trillion
dollars in assets to satisfy the collateral demand (Sjostrom 2009). As in the case
of AIG, many failures experienced during the credit crisis were caused by
counterparty risk, because most CDS contracts were bilateral, depended on directly
negotiated terms, and often lacked any collateral (Calistru 2012). Furthermore,
highly leveraged CDS positions that enabled large selling positions, and
misleading risk models (ERM) that prevented the accurate measurement of credit
exposures and
default probabilities, were additional causes of the financial crisis besides the
CDS itself (Skeel and Partnoy 2007; Wacek 2008).
As a result of all these developments, in September 2009 the G-20 Leaders agreed
that "All standardized OTC derivative contracts should be traded on exchanges or
electronic trading platforms, where appropriate, and cleared through central
counterparties by end-2012 at the latest. OTC derivative contracts should be
reported to trade repositories. Non-centrally cleared contracts should be subject
to higher capital requirements." By providing full knowledge of derivative
transactions and introducing central counterparties and standardization,
regulatory policies aimed to increase transparency and to allow a proper
evaluation of risk. Through central counterparties (CCPs), the insolvency of one
of the parties is absorbed and a contagion effect is prevented; as a result, CCPs
play an active role in risk mitigation (Calistru 2012).
6 Conclusion
The irresponsible trading of market participants, especially major banks, and
their ambition to exploit gaps in regulation against the market caused financial
distress. The expectations and actions of some CDS investors who do not hold the
underlying bond but anticipate a bankruptcy that can trigger payment are another
unethical business practice.
Every financial innovation brings some problems and risks with it. What should be
done to achieve effective risk management with credit derivatives is to complete
the clearing requirements of central counterparties, the settlement
infrastructure, and collateral arrangements, and to increase transparency.
References
Allen F, Gale D (2006) Systemic risk and regulation. In: Carey M, Stulz RM (eds) The risks of
financial institutions. University of Chicago Press, Chicago, p 341
Al-shakrchy E, Almsafir MK (2014) Credit derivatives: did they exacerbate the 2007 global
financial crisis? AIG: case study. Proc Soc Behav Sci 109:1026–1034
Augustin P, Subrahmanyam MG, Tang DY, Wang SQ (2015) Credit default swaps: past,
present, and future (October 14, 2015)
BIS (2006) Statistics retrieved from http://www.bis.org/statistics/
Bomfim AN (2001) Understanding credit derivatives and their potential to synthesize riskless
assets. Board of Governors of the Federal Reserve System, Washington, DC, p 50
Bruyere R, Copinot R, Fery L, Jaeck C, Spitz T (2006) Credit derivatives and structured credit: a
guide for investors. John Wiley & Sons, New York
Bullard J, Neely CJ, Wheelock DC (2009) Systemic risk and the financial crisis: a primer. Fed
Reserve Bank St Louis Rev 91(5):403–418
Calistru RA (2012) The credit derivatives market—a threat to financial stability? Proc Soc Behav
Sci 58:552–559
Choudhry M (2004) Total return swaps: credit derivatives and synthetic funding
instruments. Retrieved from Yield.com
Finnerty JD (1998) The PricewaterhouseCoopers credit derivatives primer.
PricewaterhouseCoopers LLP, New York
Fouque JP, Langsam JA (eds) (2013) Handbook on systemic risk. Cambridge University Press,
Cambridge
Gibson MS (2007) Credit derivatives and risk management
Harrington SE (2009) The financial crisis, systemic risk, and the future of insurance regulation. J
Risk Insur 76(4):785–819
IOSCO (2012) Report of the International Organization of Securities Commissions. Retrieved
from https://www.iosco.org/library/pubdocs/pdf/IOSCOPD385.pdf
JP Morgan (1999) Guide to credit derivatives. Risk Publications
Kothari V (2011) Credit derivatives and structured credit trading, vol 749. John Wiley & Sons,
New York
O'Kane D (2001) Credit derivatives explained. Lehman Brothers
Office of the Comptroller of the Currency (2013) Quarterly report on bank trading and derivatives
activities
Office of the Comptroller of the Currency (2015) Quarterly report on bank trading and derivatives
activities
Rule D (2001) The credit derivatives market: its development and possible implications for
financial stability. Financ Stabil Rev 10:117–140
Russell A, Kyle G (2012) Understanding collateralized debt obligations. Retrieved from https://
www.batesresearch.com/BR_Publications/CDOExplained/CDO%20Explained.pdf
Scott-Quinn B, Walmsley J (1998) The impact of credit derivatives on securities markets.
International Securities Market Association
Singh MM (2010) Collateral, netting and systemic risk in the OTC derivatives market (No 10-99).
International Monetary Fund
Sjostrom WK Jr (2009) AIG bailout. The Wash Lee L Rev 66:943
Skeel DA, Partnoy F (2007) The promise and perils of credit derivatives. Univ Cinci Law Rev
75:1019
Stulz RM (2009) Credit default swaps and the credit crisis (No w15384). National Bureau of
Economic Research
Thompson A, Callahan E, O’Toole C, Rajendra G (2007) Global CDO market: overview and
outlook. Deutsche Bank Global Securitisation and Structured Finance
Tijoe L (2007) Credit derivatives: regulatory challenges in an exploding industry.
Ann Rev Bank Fin L 26:387
Wacek M (2008) Derivatives, AIG and the future of enterprise risk management.
Risk management: the current financial crisis, lessons learned and future
implications
Heri Kuswanto
H. Kuswanto (*)
Department of Statistics, Institut Teknologi Sepuluh Nopember (ITS), Kampus ITS, Sukolilo
60111, Surabaya, Indonesia
e-mail: heri_k@statistika.its.ac.id
1 Introduction
Indonesia has been ranked as the fourth most populated country in the world.
However, this high number is not linearly correlated with the number of citizens
equipped with health services and facilities, as indicated by the low percentage of
citizens covered by health insurance. The latest data shows that only 10 % of
Indonesians have been fully covered by health insurance. Moreover, only 5 % are
covered by life insurance as reported by Indonesian Life Insurance Association
(AAJI) in 2014. Nevertheless, it is predicted that the number of Indonesians
covered by life insurance will increase over the years; this is also supported by
the growing number of insurance companies in Indonesia. The large Indonesian
population should make insurance a promising financial industry in Indonesia.
One of the reasons contributing to the low participation rate in Indonesia is the
little trust society places in the insurance business, owing to complicated claim
procedures, terms and conditions that are unclear in advance, and doubts about the
sustainability of the insurance companies, the last referring to the "health" of
the company. One of the health indicators is financial risk. Dealing with this, the
approach which is commonly applied to measure risk is Value at Risk (VaR), first
introduced by Markowitz (1959), which has been proven to be a valid way to
measure a company’s health. Several works have discussed the application of VaR
to risk management, e.g., Basak and Shapiro (2001), Culp et al. (1998), among
others. Ufer (1996) specifically discussed the concept of VaR for an insurance
company. Majumdar (2008), at the 10th Global Conference of Actuaries, described
a simple model to measure insurance risk through VaR. Moreover, Kaye (2005)
discussed in detail measuring risk in insurance companies using VaR as well as
other approaches.
It is well known that a risk calculation using VaR is applied to time series data.
Moreover, this method performs well only if we have a sufficiently long time
series, so that unbiased parameter estimates for the distribution can be obtained.
However, the availability of a long data series is often a major challenge, which
means that in some cases we are required to estimate the company's risk without
enough data available. Consequently, we need an alternative approach to measuring
risk that does not require long time series data, such as one using
cross-sectional data.
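For contrast with the cross-sectional approach proposed later, the standard time-series calculation can be sketched as a minimal historical-simulation VaR; the return series below is hypothetical, and the 20 observations are used purely for illustration (the text's point is precisely that a much longer series is needed in practice):

```python
# Minimal historical-simulation VaR sketch on a hypothetical return series.

def historical_var(returns: list[float], confidence: float = 0.95) -> float:
    """Loss threshold not exceeded with the given confidence (positive number)."""
    ordered = sorted(returns)                     # worst returns first
    idx = int((1.0 - confidence) * len(ordered))  # index of the tail quantile
    return -ordered[idx]

sample = [-0.04, 0.01, -0.02, 0.03, 0.00, -0.01, 0.02, -0.03, 0.01, 0.02,
          -0.05, 0.04, 0.00, -0.02, 0.01, 0.03, -0.01, 0.02, -0.03, 0.01]
print(historical_var(sample, 0.95))  # 0.04, i.e., a 4 % one-period loss
```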
An annual financial report is an example of cross-sectional data, consisting of
information required to assess the company’s financial condition, and hence, it can
be used to measure the financial risk. Blach (2010) discussed assessing financial
risk based on the information available in the balance sheet, and described in detail
three components to identify financial risk: capital structure risk, liquidity risk, and
insolvency risk. In statistics, financial risk is a latent variable, which can be
measured only through those three risk components. Ho et al. (1998) also
discussed the information required in a balance sheet that can be used to measure
risk using VaR; however, the data used in that research consist of a long
(multiple-period) series of annual balance sheet reports, and hence it was still
feasible to apply VaR.
An Approach to Measure Financial Risk Relative Indices: A Case Study of. . . 161
The CFA approach proposed in this study is applied to calculate relative risk
indices of life insurance companies in Indonesia, based on the dataset reported in
the Statistics of Indonesian Insurance 2013 published by OJK Indonesia. The risk
indices are calculated from the variables available in the report, which covers
the financial performance of 23 national life insurance companies and 18
joint-venture life insurance companies. In the analysis the two types of companies
are not distinguished, and therefore the total sample of insurance companies to be
analyzed is 41. Several companies were excluded from the analysis due to
incomplete information in the balance sheet. Table 1 lists the names of the
insurance companies.
The data available in the annual balance sheet report that will be used as the
financial risk indicators are Assets (including Fixed Assets and Current Assets) and
Capital (Equity Capital and Debt Capital/liability). Several risk components used to
calculate risk indices are based on the research carried out by Blach (2010):
1. Capital Structure Risk measured from a Debt Ratio Analysis consisting of the
following indicators:
– Debt/Equity Ratio (D/E),
– Debt/Asset Ratio (D/A),
– Equity/Asset Ratio (E/A)
162 H. Kuswanto
Fig. 1 Theoretical framework to measure financial risk [adopted from Blach (2010)]
In this case, Capital Structure Risk, Liquidity Risk, and Insolvency Risk are
latent variables from which financial risk is constructed. The theoretical
framework underlying the use of these variables is shown in the following scheme
(Fig. 1):
The steps of the data analysis can be summarized as follows:
1. Preprocessing the data: the raw data, which are available in the balance sheet
in the raw currency (Rupiah), are transformed using simple linear scoring to
obtain a new dataset with values in the range 0 to 1, making them interpretable
and comparable. The preprocessing step also orients the raw data in the same
direction (for positive and negative indicators), so that an index has the same
meaning for all risk indicators: indices of 1 and 0 represent the lowest and
highest risk, respectively.
2. CFA stage I: estimation of the weights connecting indicators to risk
components. The weights are estimated by applying the first stage of the CFA
approach to the indicators, with the path structure shown in Fig. 2; the
parameters are estimated by the maximum likelihood method. The weights are then
used to calculate the performance index of each risk component by the following
formula:

$$\text{Index}\,X_{1j} = \sum_{i=1}^{n} w_i \cdot \text{Index}\,X_{ij}, \qquad j = 1, \ldots, k,$$

where i indexes the indicators and j indexes the insurance companies. X_1 denotes
the risk performance at stage 1 as measured by the risk indicators, and e_i
represents the measurement error of the ith indicator.
3. CFA stage II: estimation of the weights connecting the risk components to
financial risk, using

$$\text{Index}\,X_{2j} = \sum_{p=1}^{3} w_p \cdot \text{Index}\,X_{pj},$$

where p indexes the three risk components and X_{2j} denotes the risk performance
of company j.
4. Mapping the result of financial risk relative indices
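The scoring and two-stage weighting of steps 1–3 can be sketched as follows. The indicator values and weights below are hypothetical placeholders; in the study the weights come from the estimated (standardized) CFA loadings:

```python
# Sketch of linear scoring and two-stage weighted index computation.
# All numeric values are hypothetical illustration values.

def minmax_score(values: list[float]) -> list[float]:
    """Rescale raw indicator values to [0, 1] across companies (step 1)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def weighted_index(indices: list[float], weights: list[float]) -> float:
    """Weighted sum of component indices (stage I and stage II formulas)."""
    return sum(w * x for w, x in zip(weights, indices))

# Stage I: three capital-structure indicators for one company
scores = [0.2, 0.8, 0.5]   # scored D/E, D/A, E/A (hypothetical)
w1 = [0.2, 0.4, 0.4]       # standardized weights (hypothetical)
capital_structure = weighted_index(scores, w1)   # 0.56

# Stage II: combine the three risk components into the financial risk index
components = [capital_structure, 0.3, 0.6]  # + liquidity, insolvency (hypothetical)
w2 = [0.4, 0.3, 0.3]
print(weighted_index(components, w2))
```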
We begin this section by summarizing the descriptive statistics of the variables used
to calculate the relative risk indices. Figure 4 shows boxplots of total assets and
capital (raw data) of 41 Indonesian life insurance companies in 2013.
From the figure, we see that there are five companies with extremely high
amounts of assets and capital. These companies are PT. Asuransi Jiwa Manulife
Indonesia, PT. Prudential Life Assurance, PT. Asuransi Allianz Life Indonesia,
PT. AIA Financial, and PT. Asuransi Jiwasraya (Persero). Four of them are
joint-venture life insurance companies and the other one (Persero) is managed by
the government. In fact, these five are big insurance companies and hold the
biggest market shares in Indonesia. Do these companies have the lowest financial
risk? The answer is not as simple as looking directly at how much asset and
capital they hold, because financial risk is a latent variable; moreover, it
results from the interaction of several financial indicators in all their
complexity. This means that a company with a large amount of assets does not
always have low risk, while a company with few assets does not always have
high risk.
From the asset and capital data, we can derive several indicators to calculate the
companies’ performance on the risk components. Table 2 lists several statistics of
transformed indicators with the range of values between 0 and 1, where 0 represents
the highest risk, while 1 indicates the lowest risk.
The values in Table 2 are indicator indices showing the performance of the
companies in general. Of these seven indicators, the average Current Ratio (CuR)
and Cash Ratio (CaR) are very low, which indicates that on average the liquidity
risk of the companies is very high. However, the indices for capital structure
risk, in particular the Debt to Equity Ratio (D/E), are high, which indicates that
on average the companies in Indonesia are able to generate enough cash to satisfy
their debt obligations. The companies' performance in terms of insolvency risk is
shown by the indicators E/F and C/F, which tend to be average, especially for C/F,
while E/F shows relatively high risk.
From the standardized indicators data, the Confirmatory Factor Analysis (CFA)
will estimate the weights connecting the indicators to the risk components. The
weights in this case can be interpreted as the contribution of the indicator to the
exposure to the corresponding risk component. The structure of the CFA path is
shown in Fig. 5.
The estimated values that result from the CFA path in Fig. 5 are listed in Table 3.
Table 3 summarizes all estimated parameters of the CFA model, which can be used to determine whether the indicators are valid indicators of the underlying risk component.
An Approach to Measure Financial Risk Relative Indices: A Case Study of. . . 167
The P-values of all indicators are below the 5 % significance level, meaning that D/E, D/A and E/A are valid indicators of Capital Structure Risk, the Current Ratio and Cash Ratio are valid indicators of Liquidity Risk, and E/F and C/F are valid indicators of Insolvency Risk. The CFA requires fixing one indicator within each risk component as a reference indicator that is assumed to be valid, i.e., the one whose estimated value is set to 1.
The estimated values in Table 4 show how large the contribution of each indicator is to the corresponding risk component. For Capital Structure Risk, D/A and E/A have roughly the same contribution, about 0.9, while D/E contributes somewhat less. For the Liquidity Risk component, the Current Ratio and Cash Ratio have almost the same contribution. A very different picture emerges for the Insolvency Risk indicators: E/F contributes very strongly to insolvency risk, while Capital/Fixed Asset (C/F) contributes very little. The estimated values are standardized to obtain the weights shown in the last column of Table 4.
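The standardization-to-weights step can be sketched as follows, with hypothetical loadings and indicator values rather than the chapter's actual estimates:

```python
import numpy as np

# hypothetical CFA loadings for the Capital Structure indicators D/E, D/A, E/A
loadings = np.array([0.6, 0.9, 0.9])

# standardize the loadings so they sum to one, giving the weights
weights = loadings / loadings.sum()

# hypothetical indicator indices for one company, each scaled to [0, 1]
indicators = np.array([0.40, 0.55, 0.70])

# composite risk-component index = weighted average of the indicator indices
component_index = float(weights @ indicators)
print(np.round(weights, 3), round(component_index, 3))
```

The same weighted-average step, applied one level up with the component weights, yields the aggregate financial risk index discussed below.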
The fact that the risk components are related to each other is confirmed by the statistical evidence in Table 5, which lists the linear relationships between the variables.
In Table 5, the P-values are lower than the 5 % significance level, meaning that the risk components are significantly related, although the correlations are low. The value 0.028 connecting liquidity risk with insolvency risk means that an increase in liquidity risk is accompanied by an increase in insolvency risk, and vice versa. The other values can be interpreted similarly.
Another important part of this analysis is to study the contribution of each risk
component to exposure to financial risk simultaneously. The results of the CFA can
be seen in Fig. 6, Tables 6 and 7. Based on the P-values in Table 6, the Capital
Structure Risk, Liquidity Risk and Insolvency Risk are the three components which
are significantly valid indicators of financial risk.
Based on the weights listed in Table 7, the weight of Insolvency Risk is 0.4304,
much higher than the weights of Capital Structure Risk and Liquidity Risk with
values of 0.275 and 0.293, respectively. This shows that the financial risk of the
insurance companies is mostly determined by the Insolvency Risk.
Furthermore, the risk index of each company can be seen from Fig. 7, where the
blue box represents the Capital Structure Risk index, red shows Liquidity Risk, and
green represents Insolvency Risk. The company’s performance with respect to
these three risk components shows a high degree of variation. There are some
companies with a very low degree of Liquidity Risk but high Capital Structure Risk. Meanwhile, several companies have low Capital Structure Risk but high Liquidity Risk, and so on. These combinations complicate the measurement of the actual financial risk, and hence an index representing financial risk as an aggregate needs to be calculated. Figure 8 shows the financial risk relative indices of the life insurance companies in Indonesia calculated by CFA. The indices have been sorted from the lowest index (highest risk) to the largest index (lowest risk). A summary of the indices for all companies can be seen in Fig. 9.
Fig. 7 Performance of financial risk components index for life insurance companies in Indonesia
Figure 9 shows that there is a company with an index of 0.96, namely PT Multicor Life Insurance. This shows that this company has very low financial risk relative to the others. Meanwhile, the first quartile means that 25 % of the companies have a risk index lower than 0.175, which is high risk. The detailed names of the companies can be seen in Fig. 8, which clearly shows that there are three companies with the highest financial risk, namely PT CIMB Sun Life, PT Great Eastern Life and PT Asuransi Jiwa Tugu. Again, we should note that the measured relative risk is an aggregate risk, and the risk performance of each indicator can be seen from Fig. 8.
Fig. 8 Performance of financial risk relative index for life insurance companies in Indonesia 2013
Fig. 9 Summary of the financial risk relative indices of life insurance companies in Indonesia
4 Conclusion
This chapter proposed the use of a simple statistical approach, Confirmatory Factor Analysis, which has received little attention in the insurance field, in particular for measuring risk, although it has been used in many other fields. The analysis shows that a relative risk index can be developed and calculated using publicly available information. This research confirms the established theoretical framework that Insolvency Risk, Capital Structure Risk and Liquidity Risk are valid indicators of the financial risk of the companies. For Indonesian life insurance companies in 2013, Insolvency Risk contributes most to the exposure to financial risk. The evaluation of each company on these three risk components can be used as guidance for reducing risk by focusing on the one or two indicators with a high index. The median value, 0.29, can be used as a threshold to judge whether a company has lower or higher financial risk than the others.
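As a toy illustration of that threshold rule (the company names and all indices except the 0.29 median are made up):

```python
# hypothetical relative financial risk indices for four companies
indices = {"Company A": 0.12, "Company B": 0.29,
           "Company C": 0.55, "Company D": 0.96}

threshold = 0.29  # the median index reported in the analysis

# companies strictly below the median carry higher-than-typical financial risk
higher_risk = sorted(name for name, idx in indices.items() if idx < threshold)
print(higher_risk)  # → ['Company A']
```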
References
Basak S, Shapiro A (2001) Value at risk based risk management: optimal policies and asset prices.
Rev Financ Stud 14(2):371–405
Blach J (2010) Financial risk identification based on the balance sheet information. 5. mezinárodní konference Řízení a modelování finančních rizik. Available at http://www.ekf.vsb.cz/export/sites/ekf/rmfr/.content/galeriedokumentu/2014/plne-zneni-prispevku/Blach.Joanna.pdf
Culp CL, Miller MH, Neves AMP (1998) Value at risk: uses and abuses. J Appl Corpor Financ 10
(4):26–38
Fernando MACS, Samita S, Abeynayake R (2012) Modified factor analysis to construct composite
indices: illustration on urbanization index. Trop Agric Res 23(4):327–337
Ho T, Abrahamson A, Abbott M (1998) Value at risk of a bank’s balance sheet. Int J Appl Theor
Finance 2(1)
Jöreskog KG (1969) A general approach to confirmatory maximum likelihood factor analysis.
Psychometrika 34:183–202
Kaye (2005) Risk measurement in insurance, a guide to risk measurement, capital allocation and
related decision support issues. Casualty Actuarial Society discussion paper program
Long DA, Perkins DD (2003) Confirmatory factor analysis of the sense of community index and
development of a brief SCI. J Commun Psychol 31:279–296
Majumdar C (2008) VaR (Value at Risk) for insurance risk—a simple model. Available at: http://
www.actuariesindia.org/downloads/2008-actuariesindia.org
Markowitz HM (1959) Portfolio selection: efficient diversification of investments. Wiley,
New York
Ufer W (1996) The “value at risk” concept for insurance companies. Contribution to the 6th AFIR
International Colloquium. Available at http://www.actuaries.org/AFIR/Colloquia/Nuernberg/
Ufer.pdf
An Approach to Measure Financial Risk Relative Indices: A Case Study of. . . 173
Heri Kuswanto is a lecturer and researcher at the Department of Statistics, Institut Teknologi Sepuluh Nopember (ITS) Indonesia. He is a member of the Laboratory for Economic Statistics, Financial and Actuarial Science in the department. Dr. Kuswanto holds Bachelor's and Master's degrees in Statistics from ITS Indonesia. In 2006, he started work as a PhD fellow at the Institute of Statistics, School of Economics and Management, Leibniz University Hannover, Germany, and was awarded a PhD in Economics in 2009. In 2010, Dr. Kuswanto worked as a Postdoctoral Fellow at Laval University, Canada. His research interests lie in methodological developments for time series forecasting and econometrics, applied mainly to financial cases. He now also works in the field of extreme events. He has worked intensively on risk estimation and has carried out risk-modelling projects for several financial institutions. Dr. Kuswanto is now the Head of the Postgraduate Program at the Department of Statistics, ITS.
Part III
Volatility, Hedging and Strategy in Risky
Environment
Extreme Value Theory in Finance: A Way
to Forecast Unexpected Circumstances
Abstract EVT deals with extreme events, which are generally classified as outliers. Although some analyses prefer to exclude extreme events, EVT, by contrast, focuses directly on extreme events and analyzes them. Financial data also contain outliers due to crashes, breaks and peaks. Since extremal events are seen more commonly in financial data than in many other data types, and excluding them results in under- or overestimation, academics and financial institutions utilize EVT especially in risk management, as a contributing function to Value-at-Risk. Additionally, the distributional characteristics of financial data, which do not fit the normal distribution, are another major reason to use EVT. The finance literature on EVT indicates that EVT performs best especially in fat-tail modeling and extremal event analysis.
1 Introduction
During the last decades, globalization in financial markets and financial product innovation have increased significantly. Fundamental changes have therefore been required in the financial environment due to:
• Advances in technology,
• Rapid innovation in financial instruments,
• Changes in the structure of the banking systems,
• Growing interest in stock markets.
EVT is a special research field in statistics that tries to model the tail loss of well-known distributions through model-fitting approximations, iterative techniques (e.g., maximum likelihood) and simulations (e.g., Markov Chain Monte Carlo). Extreme Value (EV) distributions are thus used to model the maximum and minimum values of a set of normally distributed random variables. Extreme value distributions are asymptotic distributions and can be regarded as a family of distributions. EVs are widely used in modeling the peak and/or trough flows of data streams arising from all kinds of daily events. It is not a new research area: the initial studies emerged in the early 1900s (Kotz and Nadarajah 2000).
Gumbel (1954, 1958), the pioneer of extreme value theory, opened a new field of research for statisticians and engineers. Before his work, there were only experimental trials. In the first half of the 1900s, Gumbel initially used meteorological data for his preliminary models. Many others contributed to this new field of science over those years, and it took shape around several classes of methods: block maxima, peaks over threshold, and rth largest order models (see Fig. 1). It has also been studied in univariate, bivariate and multivariate settings. From another viewpoint, methods can be classified by their parameter estimation techniques, such as approximation methods based on calculus, iterative techniques using the maximum likelihood method, and simulation-based techniques such as the Markov Chain Monte Carlo method.
180 B.E. Aslanertik et al.
Generalized Extreme Value (GEV) Distribution is used to model the block maxima type of extreme value distributions. The GEV is a family of distributions comprising the Gumbel (Type 1) and Fréchet (Type 2) distributions, for which the parent distribution is unbounded in the extremes, and, for minimum values, the Weibull (Type 3) distribution, whose parent distribution has an upper bound (see Fig. 2). The family can be presented in (1) through (3) below. The reason for having three different types of tail-modeling distribution is to obtain the best parameter estimates for the loss tail of the original distribution on the right or left side, depending on the characteristics of the research.
Type 1, Gumbel:
$$P(X \le x) = e^{-e^{-\frac{x-\mu}{\sigma}}} \qquad (1)$$
Type 2, Fréchet:
$$P(X \le x) = \begin{cases} 0, & x < \mu \\ e^{-\left(\frac{x-\mu}{\sigma}\right)^{-\xi}}, & x \ge \mu \end{cases} \qquad (2)$$
Type 3, Weibull:
$$P(X \le x) = \begin{cases} e^{-\left(-\frac{x-\mu}{\sigma}\right)^{\xi}}, & x \le \mu \\ 1, & x > \mu \end{cases} \qquad (3)$$
where
$$1 + \xi\,\frac{x-\mu}{\sigma} > 0.$$
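As an illustrative aside (not part of the chapter), the three types can be checked numerically with SciPy's genextreme distribution, whose shape parameter c corresponds to $-\xi$: c = 0 recovers the Gumbel CDF, c < 0 gives a Fréchet-type (heavy, unbounded) upper tail, and c > 0 a Weibull-type (bounded) upper tail:

```python
import numpy as np
from scipy.stats import genextreme, gumbel_r

x = 1.0  # evaluation point, with mu = 0 and sigma = 1

# Type 1 (Gumbel): GEV with shape 0, CDF = exp(-exp(-x))
gumbel_cdf = genextreme.cdf(x, 0.0)
assert np.isclose(gumbel_cdf, np.exp(-np.exp(-x)))
assert np.isclose(gumbel_cdf, gumbel_r.cdf(x))

# SciPy's shape c = -xi: c < 0 is Frechet-type, c > 0 is Weibull-type
frechet_cdf = genextreme.cdf(x, -0.5)
weibull_cdf = genextreme.cdf(x, 0.5)
print(round(gumbel_cdf, 4), round(frechet_cdf, 4), round(weibull_cdf, 4))
```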
$$\hat{\beta} = \bar{X} \mp 0.45\,\sigma \qquad (7)$$
where $\bar{X}$ is the sample mean; "$-$" applies for maxima and "$+$" for minima.
Variance:
$$\operatorname{Var}(X) = \frac{\pi^{2}\sigma^{2}}{6}$$
Skewness Coefficient:
$$\gamma_{1} = \frac{12\sqrt{6}\,\zeta(3)}{\pi^{3}} \approx 1.14$$
EVT seeks a distribution that best fits the maxima or minima of i.i.d. random variables over a time horizon n, from X1 through Xn. The time horizon can be expressed in half-months, months, quarters and even years when the data are observed daily.
Data for block maxima models can simply be presented as in (11):
$$M_n = \max\{X_1, \ldots, X_n\} \qquad (11)$$
where X1, . . ., Xn is a sequence of i.i.d. random variables over n time periods.
In Fig. 1, there are ten blocks, and the maximum of each block is considered to create a model. Assuming the distribution function F of the Xi is known, the distribution of Mn can be derived as:
$$P\{M_n \le x\} = P\{X_1 \le x, \ldots, X_n \le x\} = P\{X_1 \le x\} \cdots P\{X_n \le x\} = \{F(x)\}^{n} \qquad (12)$$
The normalized maximum is defined as
$$M_n^{*} = \frac{M_n - b_n}{a_n} \qquad (13)$$
where $a_n > 0$ and $b_n$ are sequences of constants. Finally, if suitable $a_n$ and $b_n$ exist, the probability expression can be written as:
$$P\{M_n^{*} \le x\} \to G(x) \quad \text{for } n \to \infty \qquad (14)$$
where $G(x)$ is one of the family of GEV distributions in (1) through (3).
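A minimal sketch of this block maxima workflow, fitting a GEV to simulated heavy-tailed losses (the data, block layout and seed are illustrative assumptions):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# hypothetical daily losses: 24 "blocks" (e.g. months) of 21 trading days
losses = rng.standard_t(df=4, size=(24, 21))

# block maxima M_n = max{X_1, ..., X_n} within each block, as in (11)
block_maxima = losses.max(axis=1)

# fit a GEV to the maxima by maximum likelihood (SciPy's shape c = -xi)
shape, loc, scale = genextreme.fit(block_maxima)
print(round(shape, 3), round(loc, 3), round(scale, 3))
```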
One of the basic problems with block maxima for EVT in real-world applications is the lack of data over a sequence of time periods. This causes uncertainty and error in modeling, since the precision of the parameter estimates is reduced. On the other hand, there may be a problem of a biased sample, because the additional observations included are not really extreme losses. rth largest order models consider up to the rth largest observation in each block to characterize the GEV distributions. The data pattern for the block maxima and the r largest order model is presented in Fig. 1, where 10 blocks are constructed, maximum losses are depicted as dark grey squares, the second largest observation in each block is shown by solid light grey circles, and a dashed horizontal line is sketched at the ordinate of 2.5 as a threshold.
In the last decades, Peaks-over-Threshold (POT) models have become popular because they control the number of extreme values through a threshold set by the researcher. POT is therefore more applicable when only a small amount of data on extreme values is available (McNeil 1999).
Given the lack of data (especially data from financial markets), the unknown distribution parameters, and the fact that block maxima are not necessarily the true extreme values, the POT method is preferable when financial risks are of interest. In the POT method, instead of considering the block maxima or the rth largest values within a block as the extreme observations, all observations above a certain threshold value are evaluated.
The probability that an observation X exceeds u + y, given that it exceeds the threshold u, is expressed as:
$$P(X > u + y \mid X > u) = \frac{1 - F(u + y)}{1 - F(u)}, \quad y > 0 \qquad (15)$$
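A minimal POT sketch under these definitions, using simulated heavy-tailed losses and a 95 % sample quantile as the researcher-chosen threshold (both assumptions for illustration):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=2000)  # hypothetical heavy-tailed losses

u = np.quantile(losses, 0.95)        # threshold chosen by the researcher
excesses = losses[losses > u] - u    # y = X - u for observations above u

# fit a Generalized Pareto Distribution to the excesses (location fixed at 0)
xi, loc, beta = genpareto.fit(excesses, floc=0)
print(round(float(u), 3), len(excesses), round(xi, 3), round(beta, 3))
```

Raising the threshold trades bias for variance: fewer excesses remain, but they follow the limiting GPD more closely.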
3 EVT in Finance
low-probability conditions for time series with heavy (fat-)tailed marginal distributions. The sample quantile provided by Monte Carlo simulation with EVT and nonparametric approaches is used to estimate the VaR and shortfall of a stock market index.
EVT has recently also been utilized to identify extreme stress events in financial markets. EVT, as explained before, works on the asymptotic behavior of extremal observations of the variables. This special branch of probability theory offers methods for modeling events with extremely low probabilities by accounting for the special characteristic of financial data, their non-normal pattern. Another reason for using the EVT approach is that it does not require any a priori assumption about the distributional properties of the data series. The studies of Guru (2016), Lestano and Jacobs (2007), and Pozo and Dorantes (2003) are some of the distinctive papers indicating that EVT is better suited for identifying and characterizing crisis/stress events. One of the latest studies in this field (Guru 2016) uses EVT to identify extreme stress events in the Indian financial system using the Financial Sector Stress Index (a combination of crisis indices for the currency, banking and stock markets in India over the period April 2001 to December 2012). The application mainly builds on the fact that extreme stress events are very rare and have a very small probability of occurrence. Hence, the definition of stress events entails pre-specification of the above-mentioned small probability. Probability of a crisis
4 Conclusion
EVT is a field of statistics that has been used to model the tail loss of well-known distributions since the early 1900s. GEV and GPD are its basic modeling approaches, considering block maxima over data periods and exceedances over an appropriate threshold, respectively, and are used to obtain the best model fit to the available data. There are many potential application areas, such as meteorology, engineering and finance. The use of EVT in finance mainly focuses on tail modeling due to the stylized facts of financial data, such as fat tails, skewness and volatility clustering. The distinctive performance of EVT in tail modeling makes it a useful tool, especially in risk-related topics. Hence, EVT is employed in risk management as a new method under VaR. Additionally, EVT is used for describing crisis events and for stress tests as well.
References
Artzner P, Delbaen F, Eber JM, Heath D (1999) Coherent measures of risk. Math Financ
9:203–228
Bensalah Y (2000) Steps in applying extreme value theory to finance: a review. Bank of Canada
Brodin E, Klüppelberg C (2008) Extreme value theory in finance. Encyclopedia of quantitative
risk analysis and assessment
Chan KF, Gray P (2006) Using extreme value theory to measure value-at-risk for daily electricity
spot prices. Int J Forecast 22(2):283–300
Diebold FX, Schuermann T, Stroughair JD (2000) Pitfalls and opportunities in the use of extreme
value theory in risk management. J Risk Finance 1(2):30–35
Dutta S, Biswas S (2015) Extreme quantile estimation based on financial time series. Commun Stat
Simul Comput. doi: 10.1080/03610918.2015.1112908
Embrechts P, Resnick SI, Samorodnitsky G (1999) Extreme value theory as a risk management
tool. North Am Actuar J 3(2):30–41
Ergen İ (2010) Essays in financial risk management. PhD Thesis, Rice University
Furió D, Climent FJ (2013) Extreme value theory versus traditional GARCH approaches applied to
financial data: a comparative evaluation. Quant Financ 13(1):45–63
Gencay R, Selcuk F (2004) Extreme value theory and value-at-risk: relative performance in
emerging markets. Int J Forecast 20(2):287–303
Gilli M, Kellezi E, Hysi H (2006) A data-driven optimization heuristic for downside risk
minimization. Swiss Finance Institute Research Paper, 06-2
Gumbel EJ (1954) The maxima of the mean largest value and of the range. Ann Math Stat
25:76–84
Gumbel EJ (1958) Statistics of extremes. Columbia University Press, New York
Guru A (2016) Early warning system of finance stress for India. Int Rev Appl Econ 30(3):273–300
Ho LC, Burridge P, Cadle J, Theobald M (2000) Value-at-risk: applying the extreme value
approach to Asian markets in the recent financial turmoil. Pac Basin Finance J 8(2):249–275
Koliai L (2016) Extreme risk modelling: an EVT–pair-copulas approach for financial stress tests. J
Bank Finance. doi:10.1016/j.jbankfin.2016.02.004
Kotz S, Nadarajah S (2000) Extreme value distributions: theory and applications. World Scientific
Lestano JP, Jacobs AM (2007) Dating currency crises with ad-hoc and extreme value based
thresholds: East Asia 1970-2002 [dating currency crises]. Int J Finance Econ 12:371–388
Longin FM (2000) From value at risk to stress testing: the extreme value approach. J Bank Finance
24(7):1097–1130
Mandelbrot B (1963) The variation of certain speculative prices. J Bus 36(4):394–419
Markowitz H (1952) Portfolio selection. J Financ 7:77–91
McNeil AJ (1999) Extreme value theory for risk managers. Departement Mathematik ETH
Zentrum
McNeil AJ, Frey R (2000) Estimation of tail related risk measures for heteroscedastic financial
time series. J Empir Finance 7:271–300
Ozun A, Cifter A, Yilmazer S (2010) Filtered extreme-value theory for value-at-risk estimation:
evidence from Turkey. J Risk Finance 11(2):164–179
Patton AJ (2006) Modeling asymmetric exchange rate dependence. Int Econ Rev 47(2):527–556
Pfaff B (2012) Extreme value theory. Financial risk modelling and portfolio optimization with R,
pp 84–111
Pozo S, Dorantes CA (2003) Statistical distributions and the identification of currency crises. J Int
Money Finance 22(4):591–609
Rocco M (2014) Extreme value theory in finance: a survey. J Econ Surv 28(1):82–108
Shukla RK, Trivedi M, Kumar M (2012) On the proficient use of GEV distribution: a case study of
subtropical monsoon region in India. arXiv preprint arXiv:1203.0642.
Wang X (2007) Essays on financial analysis: capital structure, dynamic dependence and extreme
loss modeling. PhD thesis, Rice University
Sabri Erdem is an associate professor at Dokuz Eylül University, Faculty of Business, Business Administration Department, chair of the Quantitative Methods Division and a member of the Faculty Board. He has been an industrial engineer since 1996 and holds two MSc degrees, in Computer Engineering and in Business Administration. He received his PhD in Computer Engineering in 2007 and the Associate Professor title in April 2011. He has published many articles in national and international journals, has many proceedings at national and international symposia and congresses, and is a partner in three patents on automated medical dispensing machines. He is a member of the Turkish Chamber of Mechanical Engineers (Industrial Engineers Branch), the Information Association of Turkey and the Turkish Operations Research Association, vice chair of the DEU Entrepreneurship, Business and Economics Applied Research Center, a board member of the DEU Biomedical Calibration and Metrology Applied and Research Center, and an advisory board member of the Dokuz Eylül University Information Technology Research Center. His main interest areas are optimization algorithms, nature-inspired algorithms, intelligent systems, data mining, healthcare information technologies, healthcare materials management, data envelopment analysis and structural equation modeling. He is also a co-founder and board member of CENOS AS at the Health Technopark in DEPARK.
Gülüzar Kurt Gümüş is an Associate Professor of Finance at Dokuz Eylul University, Department of International Business, Izmir, Turkey. Dr. Kurt Gümüş has an MBA from Dokuz Eylul University (2004) and a PhD in Business from Dokuz Eylul University (2007). Her research interests lie in responsible investment, corporate finance, sustainability in finance, capital markets and investment, and international finance. She has taught Responsible Investment, International Financial Management and Corporate Finance courses, among others, at both graduate and undergraduate levels.
Value at Risk Performance of Emerging
Market Equity Portfolios During the Fed’s
Tapering
Abstract This paper investigates the issue of market risk quantification for twelve emerging market equity portfolios during the FED tapering period. The performance of the most popular VaR methods, namely the Variance-Covariance, Classical and Weighted Historical Simulation methods, is compared. The results indicate that Classical and Weighted Historical Simulation outperform Variance-Covariance VaR. Kupiec backtesting supports this argument. In the second stage of the analysis, the VaR performance of equally weighted equity index and US Government Bond portfolios is analysed. We obtain lower VaR values than for the pure equity portfolios. Russia, Turkey and Brazil are the worst performers of the 12 countries. The performance of the portfolios is measured by the Sharpe ratio and VaR-adjusted Sharpe ratios, and the rankings are found to be parallel.
1 Introduction
The Quantitative Easing (QE) policy of the Federal Reserve (FED) increased capital flows to Emerging Market Economies (EMEs) and put upward pressure on asset prices and exchange rates after the 2008 financial crisis. Meanwhile, most portfolio managers were using Value at Risk (VaR) tools to manage their risk while exploiting the favourable conditions of the QE program. The first signal of the end of purchases under the Fed's QE program, which is called tapering, was given on May 22, 2013. Just after this first announcement, which market players had been expecting for a while, foreign investors started to withdraw their investments in EMEs, leading to capital outflows, a drop in EME currencies and stock markets, and a rise in bond yields (Rai and Suchanek 2014). Thus, the new volatile market conditions became a significant test for the risk management tools of emerging market fund managers.
The main research question of this study is to test the performance of VaR models and to examine how the fund managers of emerging markets could have managed their market risk during extreme market conditions. Most risk models that worked perfectly during resilient periods struggled to capture the real picture in volatile market conditions. Nevertheless, Value at Risk (VaR), which is recommended by the Basel II directives, has been widely employed by fund managers to manage the market risk of their portfolios in recent decades. VaR assesses possible losses in a
portfolio over a target horizon at a certain confidence level. Although this model is
simple and popular, there are no widely recognized methods to arrive at the VaR of
a particular portfolio (Dimitrakopoulos et al. 2010). The most popular VaR estimation methods are Variance-Covariance VaR and Historical VaR. Both have significant drawbacks. Variance-Covariance VaR is a parametric model and assumes that the returns of risk factors are normally distributed. Hence, it underestimates the extreme outcomes that occur in volatile periods. Historical Simulation, on the other hand, makes no normal distribution assumption and estimates Value at Risk by simulating or constructing the cumulative distribution function (CDF) of asset returns over time. The Classical (equally weighted) and Weighted methods are two important versions of Historical Simulation. However, it has deficiencies during crisis periods, when returns are very volatile and explosive. To overcome the limitations of these VaR methods, researchers have formulated different versions of VaR, such as Expected Shortfall and Extreme Value Theory methodologies, to segregate extreme events systematically and mitigate the problems associated with the original measure. However, the newer models are not only impractical for business professionals but also have significant limitations
(Embrechts 2000). The complex nature of emerging markets is continuously creating new risks for portfolio managers. It forces them to draw lessons from market movements and to construct their portfolios and risk models accordingly. Consequently, the performance of risk models on equity investments is low at the beginning of crisis periods in emerging markets, and it improves during post-crisis periods due to the inclusion of extreme events in the estimation sample (Dimitrakopoulos et al. 2010).
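The baseline methods named above can be sketched as follows; the simulated returns, the 99 % confidence level and the decay factor lam = 0.99 for the weighted variant are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.02, 500)  # hypothetical daily portfolio returns
alpha = 0.99                             # confidence level

# Variance-Covariance VaR: parametric, assumes normally distributed returns
var_cov = -(returns.mean() + norm.ppf(1 - alpha) * returns.std())

# Classical Historical Simulation VaR: empirical 1 % quantile, equal weights
var_hist = -np.quantile(returns, 1 - alpha)

# Weighted Historical Simulation: recent returns get geometrically larger weights
lam = 0.99
w = lam ** np.arange(len(returns) - 1, -1, -1)   # most recent observation last
w /= w.sum()
order = np.argsort(returns)                      # worst returns first
cum_w = np.cumsum(w[order])
var_whist = -returns[order][np.searchsorted(cum_w, 1 - alpha)]

print(round(var_cov, 4), round(var_hist, 4), round(var_whist, 4))
```

With heavy-tailed real returns, the parametric estimate would typically fall below the historical ones, which is exactly the underestimation problem described above.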
The success of VaR models depends not only on market conditions but also on the types of assets in the portfolio. Particularly risky portfolios, such as equity and derivatives, are severely affected by volatile market conditions. Previous studies indicate that VaR models for equity portfolios are more successful in developed economies, which are more liquid and stable, and less successful in emerging countries, which have more volatile and fragile conditions (Andjeli et al. 2010).
Most previous studies have analysed the performance of VaR models from the viewpoint of international investors and calculated US dollar- or Euro-denominated risk and return. This paper, however, investigates the performance of VaR models for local investors who invest in local currencies. It should be noted that when the currency of an emerging economy is expected to depreciate, locals may invest in US dollar-denominated assets and attempt to maximize their income in terms of the local currency. This may give them great opportunities in equity, real estate and other markets whose values have a low or negative correlation with the US currency.
In sum, in the first part of the study, we analyse the performance of equity index funds using three popular VaR approaches during the Fed's tapering period, namely April 1, 2013–September 30, 2015, and compare the results for 12 emerging countries. In the second part of the paper, we propose that local fund managers diversify their equity portfolios with US dollar-denominated, less risky securities. This would not only decrease volatility but also make the portfolio more manageable with VaR models. We think that US government bonds are a noteworthy choice for local investors. These bonds are safe and available with a wide range of maturity dates. Moreover, dollar-denominated US bonds hedge local investors against currency risk. The expected negative or low correlation between emerging market equity performance and the exchange rate of the US dollar against local currencies makes US dollar-denominated securities a considerable investment option for local fund managers, particularly as the sunny days approach their end. We then investigate the VaR of equally weighted equity–US bond portfolios for the 12 emerging countries during April 1, 2013–September 30, 2015.
Lastly, our paper has three purposes. The first is to test the performance of the
Variance-Covariance, Classic and Weighted Historical Simulation VaR approaches
on twelve emerging market equity portfolios during the Fed's tapering. The second
is to improve the VaR performance of the portfolios by diversifying with US govern-
ment bonds. The third is to present the risk adjusted performance of the equity and
equally weighted equity-US bond portfolios for the twelve emerging countries.
The rest of the paper is organized as follows. Section 2 explains the impact of the
Fed's tapering announcements on EMEs, and Sects. 3 and 4 summarize previous
studies and the methodology. Data and empirical results on the equity indices and
the equally weighted equity-US bond portfolios are given in Sects. 5 and 6. The
discussion is presented in Sect. 7, and the paper concludes in Sect. 8.
2 The Impact of Fed's Tapering Announcements on EMEs

Under the Quantitative Easing (QE) program, employed just after the financial
crisis of 2008, the Fed expanded its balance sheet dramatically by purchasing debt
instruments from financial institutions. Hence, it raised the prices of those financial
assets and lowered their interest rates while simultaneously increasing the money supply.
This policy has created a great opportunity for portfolio managers to generate a
considerable return on their equity investments. The program was implemented in
different phases from 2008 to 2013, raised capital flows to emerging market
countries, and put upward pressure on asset prices and exchange rates. The general
194 M.B. Karan et al.
effect of QE on these countries was positive because of the beneficial trade and
confidence stemming from stronger economic activity (Lavigne et al. 2014).
Although it was clear that this policy was inevitably unsustainable in the long
run, emerging markets became addicted to the Fed's liquidity and greedily
exploited the positive market conditions. Eventually, US long-term bond yields
started to increase in late 2012, and the IMF and other financial authorities
frequently warned of the risks of QE for emerging economies in early 2013 (IMF
2013). They stated that when advanced economies begin to normalize monetary
policy, a certain amount of capital-flow reversal and higher borrowing costs are
likely in some EMEs (Lavigne et al. 2014). Finally, the signal of Fed tapering
(the gradual decrease of the central bank interventions used to improve the
conditions for economic growth) was given on May 22, 2013, when Bernanke, the
chairman of the Fed, testified about the Fed's intentions before the US Congress.
Consequently, anxiety spread over the global markets, and currencies and stock
markets in emerging markets fell dramatically. The successive Fed announcements
during the 2013–2015 period caused an increase in U.S. government bond yields and
emerging market bond yields.
Besides, capital flows to EMEs slowed and markets became more volatile, with
falling stock market indices and depreciating currencies (Fig. 1). The
announcements in 2013¹ in particular had more impact on EMEs, and the reaction
of the market decreased over time. The new market conditions hampered the
performance of portfolio managers with inadequate risk management tools and
forced them to be more selective in their investments. The market pressures
became more concentrated on particular economies with important financial or
macroeconomic vulnerabilities (Rai and Suchanek 2014). Interestingly, the
Emerging Market Economies experienced a significantly stronger depreciation in
nominal exchange rates during the taper-talk period than during the actual taper
period (Diez 2014).
The reports of the IMF point out five emerging countries, Brazil, India, Indonesia,
Turkey, and South Africa, which have larger external financing needs and macro-
economic imbalances. The countries that have diminished their macroeconomic
imbalances since May 2013 have shown more resilience (Mishra et al. 2014).
3 Literature Review
¹ Important FED announcement days in 2013 are May 22, June 19 and September 18.
Fig. 1 Fed's tapering announcements and emerging market economies. (a) EME stock markets
fell on tapering announcements: average of EME domestic stock market indexes normalized to
January 2, 2013 = 1.00. (b) EME currencies depreciated on tapering announcements: average
of EME currency indexes normalized to January 1, 2013 = 1.00. Source: Rai V. and Suchanek
L. (2014) "The Effect of the Federal Reserve's Tapering Announcements on Emerging Markets",
Bank of Canada, Working Paper -50
Stoyanov et al. (2010) underlined the difficulties of risk estimation in volatile
markets. They claimed that a risk model has to be capable of describing the
marginal distribution phenomena of return series, such as fat tails, skewness,
and clustering of volatility. The model also has to capture the dependence
structure and incorporate an appropriate risk measure.
Many papers indicate that the estimation power of VaR models is poor for
emerging markets. For example, Silva and Mendes (2003) and Bao et al. (2006)
found that the estimation performance of EVT was poor for Asian stock market
indexes. Žiković and Aktan (2009) arrived at a similar result by analysing the
daily returns of the Turkish (XU100) and Croatian (CROBEX) stock indexes during
the 2008 crisis with VaR models. They revealed that all tested VaR models except
the EVT and HHS models seriously underpredicted the true level of risk in the
crisis period. Andjeli et al. (2010) tested the performance of the historical
simulation and Variance-Covariance VaR methods in four central and east European
emerging markets and claimed that these methods do not work well in emerging
markets. The
study of Dimitrakopoulos et al. (2010) backed up the previous results and found
that VaR estimation during periods of financial turmoil is not possible for
emerging markets. However, the performance of the parametric (non-parametric)
VaR models improves (deteriorates) during post-crisis periods due to the
inclusion of extreme events in the estimation sample.
In a recent study, Köksal and Orhan (2013) investigated the performance of VaR
as a risk measure across a large sample of developed and emerging countries.
They underlined three main conclusions. First, the performance of VaR as a risk
measure was worse for developed countries than for emerging ones during the
global financial crisis. Second, they found evidence of a decoupling of emerging
and developed countries regarding market risk during the global financial
crisis. Lastly, they suggested that alternative measures of risk be used
together with VaR, and that the performance of these risk measures be regularly
evaluated to improve the assessment of risks in a market.
Recent studies have employed more complicated methodologies to increase the
estimation power of risk models. Mendes et al. (2000) investigated the robustness
of EVT for estimating VaR in South American stock markets and showed that the
EVT produced more precise risk estimates than the traditional estimation pro-
cedures. Seymour and Polakow (2003) revealed that established methods such as
historical simulation are prone to underestimating value-at-risk in such developing
markets, and that a combined GARCH-type time-series and extreme value theory
model provides significantly better results than both straightforward historical
simulation and the extreme value model. Gencay and Selcuk (2004) examined the
relative performance of Value-at-Risk (VaR) models with the daily stock market
returns of nine different emerging markets; their results indicated that EVT-based
VaR estimates are more accurate at higher quantiles. Snoussi and El-Aroui (2012)
claimed that the adjusted VaR in emerging markets is far more reliable than the
other methods used in developed markets. Cifter (2011), analysing the Istanbul
Stock Exchange (ISE) and the Budapest Stock Exchange (BUX), showed that
wavelet-based extreme value theory increases the predictive performance of
financial forecasting according to the number of violations and tail-loss tests.
These studies indicate that the two basic VaR methods, historical simulation and
Variance-Covariance, have very limited success in volatile emerging economies.
Moreover, the estimation power of the advanced risk methods is also questionable.
In spite of these results, most financial institutions and investment funds
employ these techniques because of their simplicity and practicality.
Measurement of portfolio performance is an important component of the port-
folio management process. It refers to the determination of how a particular
investment portfolio has performed relative to some comparison benchmark. The
Sharpe ratio is the most popular tool among the alternatives. However, the Sharpe
Ratio is subject to estimation errors stemming from data limitations and from the
negative skewness and positive excess kurtosis of returns. In addition, the
ex-post Sharpe ratio implicitly assumes that the returns of the asset under
consideration are independent and identically distributed normal random
variables. However, these assumptions are violated by real-world financial data
(Deng et al. 2013). Thus, we also measured the risk adjusted performance of the
portfolios using the calculated VaR values (Grau-Carles et al. 2009).
The VaR adjusted Sharpe Ratio (VaRSR) has some advantages over the Sharpe
Ratio. It accounts for the uncertainty involved in estimating the Sharpe ratio
and includes the effects of the higher order moments of the return distribution.
The VaRSR is particularly suited to assets with non-normal return distributions,
like our data. Furthermore, the Sharpe Ratio and the VaRSR may not give the same
rankings under non-normality (Alexander and Baptista 2003).
The VaRSR is similar to the Sharpe ratio, but with the risk measured by the
different VaR measures in the denominator (Grau-Carles et al. 2009):

VaRSR = (Ri − Rf) / VaR(Ri)

(Ri − Rf) = The average portfolio return in excess of the risk-free rate of return
VaR(Ri) = Value at Risk of the portfolio
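As an illustration, the VaRSR can be computed from a return series with a historical-simulation VaR in the denominator. The following sketch is not the authors' implementation, and the sample returns are hypothetical:

```python
import numpy as np

def var_sr(returns, risk_free=0.0, confidence=0.99):
    """VaR-adjusted Sharpe ratio: average excess return divided by the
    (positive) Value at Risk of the return series.  VaR is taken here as
    the empirical quantile at the chosen confidence level, sign-flipped."""
    returns = np.asarray(returns, dtype=float)
    excess = returns.mean() - risk_free
    var = -np.quantile(returns, 1.0 - confidence)  # loss at the 1 % tail
    return excess / var

# Hypothetical daily returns: small positive drift, 1 % volatility.
rng = np.random.default_rng(0)
sample = rng.normal(0.0004, 0.01, 630)
print(var_sr(sample))
```

Because the denominator grows with tail risk, two portfolios with the same Sharpe ratio can rank differently under the VaRSR when one has a fatter left tail.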
4 Methodology
Three popular VaR estimation methods are used for the emerging market
portfolios in this study: Variance-Covariance VaR, Classic Historical VaR, and
Weighted Historical VaR. These methods have significant advantages but also
drawbacks. In all three methods, the natural log is usually preferred for
calculating the yields, because small changes in the natural log are equivalent
to percentage changes.
Yield (r_t) = ln(f_t / f_{t-1})
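A minimal illustration of this yield calculation (the prices are hypothetical):

```python
import math

def log_return(price_today, price_yesterday):
    # r_t = ln(f_t / f_{t-1}); for small moves this is close to the % change.
    return math.log(price_today / price_yesterday)

print(log_return(101.0, 100.0))  # close to the 1 % simple return
```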
The results of VaR methods are tested using Kupiec back testing, then risk adjusted
ratios are used to rank the portfolio performances.
The Variance-Covariance method rests on two main assumptions: (1) the
distribution of potential portfolio returns is normal, and (2) the portfolio
value changes linearly with changes in the underlying prices. Although the
method is simple and analytical, its most important shortcoming is the normality
assumption on the risk factors. The normality assumption mostly does not hold,
and the linearity hypothesis is not valid for nonlinear assets such as fixed
income securities and options (Lleo 2010).
Calculation of volatility is the most important step of the VaR methods. The
EWMA (Exponentially Weighted Moving Average) approach, introduced with
RiskMetrics by JP Morgan, is used in this paper. It assumes that returns on
financial assets are serially correlated and gives more weight to the latest
returns than to the older ones. It extends the standard weighting scheme of
weighted historical simulation, which assigns equal weight to every point in
time in the calculation of volatility, by (usually) assigning more weight to the
most recent observations using an exponential scheme. λ is the decay factor in
the formula: a high λ puts less weight on the most recent observation, while a
low λ puts more weight on it.
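A sketch of the EWMA recursion described above (illustrative only; λ = 0.94 is the common RiskMetrics daily value, and the return series is hypothetical):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics-style EWMA variance recursion:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2.
    A high lam decays old observations slowly, so the newest return
    gets relatively little weight; a low lam weights it more heavily."""
    returns = np.asarray(returns, dtype=float)
    sigma2 = returns[0] ** 2                 # seed the recursion
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return float(np.sqrt(sigma2))

print(ewma_volatility([0.002, -0.01, 0.015, -0.003]))
```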
The volatility of the investment becomes the volatility of a linear function of
random variables. Moving from volatility to VaR implies an assumption about the
normal distribution of the changes of value. The confidence level of the model is
99 % in this paper. The following Variance-Covariance formula is used to
estimate VaR in this paper:

VaR = z × √(Wt × V × C × V × W)

where W is the present value matrix of the portfolio, Wt is the transpose of the
present value matrix of the portfolio, C is the correlation matrix, V is the
volatility matrix, and the z value is determined for the 99 % confidence level.
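A sketch of this matrix calculation with a hypothetical two-asset portfolio (the position values, daily volatilities and correlation below are made-up numbers, not data from the study):

```python
import numpy as np

Z_99 = 2.326  # one-tailed z value for the 99 % confidence level

def varcov_var(values, vols, corr, z=Z_99):
    """Variance-Covariance VaR: z * sqrt(W^T (V C V) W), where W holds
    the position values, V = diag(vols) and C is the correlation matrix."""
    W = np.asarray(values, dtype=float)
    V = np.diag(vols)
    cov = V @ np.asarray(corr, dtype=float) @ V   # covariance matrix
    return z * float(np.sqrt(W @ cov @ W))

# 50/50 value split, daily vols 1.5 % and 0.4 %, correlation -0.2.
print(varcov_var([0.5, 0.5], [0.015, 0.004], [[1.0, -0.2], [-0.2, 1.0]]))
```

The negative correlation lowers the portfolio VaR below the value-weighted sum of the individual VaRs, which is the diversification effect the paper exploits.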
Historical simulation is the most practical way of estimating the Value at Risk
for portfolios. Pérignon and Smith (2010) revealed that 73 % of 60 US, Canadian
and large international banks applied the VaR methodology with historical
simulation over 1996–2005. Historical Simulation uses a sample of historical
observations of the risk factors. For every historical data point, the portfolio
return is found according to a non-linear pattern, and the positions are
re-priced under each scenario. The VaR is computed as a percentile of the
historical distribution of portfolio returns. This approach does not require a
normal distribution. Although the other
models can include skewed and heavy-tailed risk factor returns, they must still fit a
parametric form for modelling the multivariate risk factors. So, the main advantage
of historical simulation is that it makes no assumptions about risk factor changes
being from a particular distribution. It only employs the empirical distribution of
historical data to generate realistic future scenarios. The dynamic behaviour of risk
factors is included in the model in a natural and realistic manner. On the other hand,
the major shortcoming of the method is that it does not consider future changes
in the market. Economic problems may alter the volatility of the market, and
past data may not reflect further developments in the economy.
We used both the classic and the weighted historical simulation methods in this
paper. In the classical historical simulation method, the weights assigned to
past changes (in other words, the scenarios) are equal, whereas in the weighted
historical simulation they are not. In the weighted historical simulation, the
weights (w_t) are assigned using an exponentially declining function.
weight_t = (1 − λ) × λ^t / (1 − λ^251)
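The weighting scheme and the resulting VaR can be sketched as follows (illustrative; λ = 0.98 and the 251-day window follow the formula above, while the function and variable names are our own):

```python
import numpy as np

def whs_var(returns, lam=0.98, confidence=0.99, window=251):
    """Weighted historical simulation: the most recent of the `window`
    returns gets weight (1-lam)*lam**0/(1-lam**window), the next most
    recent lam**1, and so on; VaR is read off the weighted empirical CDF."""
    r = np.asarray(returns, dtype=float)[-window:]
    age = np.arange(len(r))[::-1]                   # 0 = most recent
    w = (1 - lam) * lam ** age / (1 - lam ** len(r))
    order = np.argsort(r)                           # worst losses first
    cum = np.cumsum(w[order])
    idx = np.searchsorted(cum, 1.0 - confidence)
    return -r[order][idx]
```

A recent large loss carries a weight above 1 − λ = 2 %, so it alone can pin down the 99 % VaR, whereas the same loss observed a year earlier is almost ignored.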
The results of the VaR models are evaluated by back testing. These tests are
techniques for assessing the validity of a VaR method by counting the number of
times the losses exceed the estimated VaR in a historical or simulated sample
(Levy and Post 2005). Kupiec's test is a statistical test and a standard way of
back testing different models used to forecast VaR. If the model produces
significantly more or significantly fewer losses exceeding VaR, Kupiec's test
will reject the model. The Kupiec test assumes that the losses follow a binomial
distribution. The likelihood ratio form is used and is assumed to be χ²-distributed
with one degree of freedom, which makes it possible to compute the p-value of
Kupiec's test, which can easily be interpreted. In our paper, the confidence
level is 99 %.
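The Kupiec likelihood-ratio statistic can be sketched as follows (the back-test counts below are hypothetical; 6.635 is the χ² critical value with one degree of freedom at the 99 % level):

```python
import math

def kupiec_lr(exceptions, observations, p=0.01):
    """Kupiec proportion-of-failures test: compares the observed
    exception rate x/N with the model's p.  Under H0 the statistic
    is chi-squared distributed with one degree of freedom."""
    x, n = exceptions, observations
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    rate = x / n
    log_h0 = (n - x) * math.log(1.0 - p) + x * math.log(p)
    log_h1 = (n - x) * math.log(1.0 - rate) + x * math.log(rate)
    return -2.0 * (log_h0 - log_h1)

# 630 trading days, 6 exceptions at the 99 % VaR (6.3 expected).
print(kupiec_lr(6, 630) < 6.635)   # model not rejected
```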
In the second stage, we constructed equally weighted equity index and dollar-
denominated US bond portfolios as a robustness test of our approach. Five-year US
government bond yields are used for the period of April 1–September 30. There is
a rationale behind this approach for a domestic portfolio manager. The general
expectation in the market was that the forthcoming announcements in the spring of
2013 might start a reversal of the Fed's QE program. These decisions of the Fed
would inevitably have a negative impact on EMEs' equity markets and local curren-
cies. In early spring 2013, the reports of international companies had already
underlined the risk of emerging market equities, pointing to the forthcoming Fed
announcements. Therefore, the portfolio managers of EMEs should consider a
new strategy that decreases the equity and currency risk of the portfolio. Moreover,
previous studies showed that there was not only a low correlation between
emerging and developed markets but also a negative correlation between developed
and emerging market currencies during the crisis period (Bénétrix et al. 2015).
Certainly there were many options for domestic investors to hedge their risk at the
end of the QE program, but investing in a portfolio of equally weighted equity
indices and dollar-denominated US bonds is one of the easiest strategies. In this
way, the fund managers expect to obtain a lower and more manageable VaR and to
increase the performance of the portfolios.
Table 2 VaR values and tests of Kupiec for local stock exchange index

Descriptive statistics of the new equally weighted portfolios for the 12 countries
are given in Table 3. The first important difference of the new portfolios is that
the daily means of all portfolios are positive. However, the skewness and kurtosis
values for the yields of the equity indices exceed the rule-of-thumb criterion of
1.0. This picture indicates that almost all of the index yields may not be normally
distributed. Comparing the probability associated with the Kolmogorov-Smirnov (KS)
test of normality with the level of significance (0.01), the results indicate that
half of the countries have a normal distribution. However, the Shapiro-Wilk test
does not fully support the findings of the KS test and presents only one normally
distributed country portfolio, which is South Korea. According to the literature,
these tests rest on different assumptions and there is no consensus about their
power (Razali and Wah 2011). It would not be a mistake to assert that the new
portfolios are closer to a normal distribution than the equity portfolios. So, we
can employ the Variance-Covariance VaR methodology on the data of some country
portfolios.
Results of the VaR tests and Kupiec back testing are given in Table 4. The perfor-
mance of all VaR tests is better than for the equity portfolios presented in the
first stage of our paper. There are only a few unsuccessful results: for example,
Taiwan failed in the Weighted Historical Simulation and Variance-Covariance VaR,
Colombia failed only in the Variance-Covariance VaR, and the rest of the countries
are successful. Another significant result of the test is the fall in the
exceedance levels of the countries. Not only is the average level of exceedances
in the new portfolios lower than in the equity portfolios, but so is the maximum.
Turkey and Russia still have the maximum hits, but no more than 5.02.
Table 3 Descriptive statistics of portfolio of local stock exchange and US bond index
Mean Std. error Median Std. deviation Skewness Kurtosis
Brazil .00043788 .007300624 .00061000 .007300624 .240 1.970
Turkey .00030754 .007138048 .00050000 .007138048 .945 8.760
India .00038236 .005373770 .00066000 .005373770 .452 2.905
S. Africa .00048035 .006308015 .00082000 .006308015 .228 1.422
Russia .00071188 .010292500 .00066000 .010292500 .381 11.433
Chile .00016385 .004694002 .00002000 .004694002 .134 .647
Taiwan .00008391 .004446973 .00029000 .004446973 .216 1.724
Poland .00017705 .005479638 .00018000 .005479638 .603 3.585
Hungary .00023385 .006315546 .00040000 .006315546 .780 6.625
S. Korea .00001258 .004293592 .00005000 .004293592 .095 .353
Malaysia .00025176 .003520506 .00039000 .003520506 .062 .711
Colombia .00022229 .005375756 .00024000 .005375756 .000 1.555
7 Discussion
Turkey, Russia and Brazil are the most volatile countries with respect to their
VaR values. We also find that the impact of the May 22, 2013 declaration is
relatively low, thanks to the diversification. Except for Russia, the VaR
volatility level of the markets is considerably low. Russia was subjected to a
series of international sanctions, which caused heavy problems in its economy,
due to the invasion and annexation of Crimea after 2014. The result is similar
for the Classic and Weighted Historical Simulation models.
In the second step of our comparison, we measured the risk adjusted perfor-
mance of the equity and equally weighted equity-US bond portfolios. The results
reveal that the Sharpe and VaRSR indexes give parallel rankings, and the risk
adjusted performance of the equally weighted equity-US bond portfolios outperforms
the equity portfolios in every country. The highest performing countries in the
equity indexes are India, S. Africa and Hungary; the lowest performing ones are
Brazil, Chile and Colombia. The ranking changes remarkably in the equally weighted
equity-US bond portfolios, but interestingly South Africa again keeps a high
ranking. The best performers are S. Africa, Malaysia and Russia; Turkey, Taiwan
and S. Africa are placed at the bottom (Table 5).
Emerging markets experienced a stressful period during the Fed's tapering,
following a QE program that had made liquidity abundant.
8 Conclusion
Emerging markets were anticipating a stressful period after the signal that the
QE program, which had made liquidity abundant, would be revoked. This policy
change was expected to reverse the capital flows directed to the EMEs. In
accordance with the estimations,
Table 4 VaR values and tests of Kupiec for portfolio of local stock exchange and US bond index
References
Stoyanov SV, Racheva-Iotova B, Rachev ST, Fabozzi FJ (2010) Stochastic models for risk esti-
mation in volatile markets: a survey. Ann Oper Res 176:293–309
Žiković S, Aktan B (2009) Global financial crisis and VaR performance in emerging markets:
a case of EU candidate states—Turkey and Croatia. Zb rad Ekon fak Rij 27(1):149–170
Prof. Dr. Mehmet Baha Karan has been with Hacettepe University (Turkey), Department of
Business Administration, since 1995. He has organized numerous national and international
seminars and conferences focusing on finance and business management. He has served as chair
of the Finance Society, Turkey; president of the Multinational Finance Society; vice president
of the Center for Energy and Value, Amsterdam; and member of the advisory board of the Central
Asia Productivity and Research Center, Chicago. He was the co-director of the Turkish Regional
Chapter of PRMIA (Professional Risk Managers' International Association) during the period
2011–2014. Currently he is an independent board member of major publicly traded companies in
Turkey. His current research areas of interest are finance and energy markets, risk management,
portfolio management and market efficiency. Karan's accomplishments include three finance
textbooks, three Springer book editorships and more than 60 articles published in national and
international journals such as Corporate Governance: An International Review, Multinational
Finance Journal (MFJ), Journal of Business Economics & Management and Emerging Markets
Finance & Trade.
Dr. Ertugrul Umut Uysal worked at Ziraat Bank's Treasury Management and Risk Management
Departments for 4 and 13 years respectively and has been working at its Enterprise Architecture
Department for 2 years. He holds the FRM charter and has publications on risk management. He
holds a PhD in Business Administration from Hacettepe University, a master's degree in Political
Economy from Gazi University and a bachelor's degree from Middle East Technical University.
Mustafa Kaya has been with Hacettepe University (Turkey), Department of Business Adminis-
tration, since 1997. His current research areas of interest are risk management, market efficiency,
data mining and business process management. Kaya's accomplishments include one public
finance textbook and several articles published in national and international journals.
Jumps and Earnings Announcement:
Empirical Evidence from An Emerging
Market Using High Frequency Data
S.A.A. Saleem
School of Business Administration, Eskisehir Osmangazi Universitesi, IIBF Isletme Bolumu,
B Blok Kat 2, 26480 Eskisehir, Turkey
e-mail: shabirmeer@gmail.com
A. Yalaman (*)
Research Associate, Bogazici University, Center for Economics and Econometrics, Istanbul,
Turkey
School of Business Administration, Eskisehir Osmangazi Universitesi, IIBF Isletme Bolumu,
B Blok Kat 2, 26480 Eskisehir, Turkey
e-mail: abdullah.yalama@gmail.com
1 Introduction
It took scholars many years to analyze the jumps in asset prices and detect their
causes. Merton (1976) explains a jump as the unexpected release of new infor-
mation to the market. Jumps in stock prices are directly connected to the
information in the market (Ross 1989). There are two types of news that affect an
asset price: usual news and unusual news. The first type is assumed to cause
slowly evolving changes in the asset price, whereas the second causes infrequent
large movements. A potential source of abnormal news can be a significant
macroeconomic event or a firm-specific event such as an earnings announcement or
the anticipation of cash flows. News events with more information content bring
more jumps to the asset price than news events with less information content
(Andersen 1996).
In the finance literature, scholars use different measurements to capture news
surprises through jumps. The price of an asset is described by a semimartingale
process, where the logarithmic price process is decomposed into a continuous
component (a Brownian motion with locally bounded sample paths and càdlàg
stochastic volatility) and a discontinuous (jump) component (Barndorff-Nielsen
and Shephard 2006; Aït-Sahalia and Jacod 2010). To find out whether an asset
price experienced jumps in a particular time interval, many research papers
suggest non-parametric statistical tests; among these are Barndorff-Nielsen and
Shephard (2004), Jiang and Oomen (2005), Ait-Sahalia and Jacod (2007), and Lee
and Mykland (2008). In this study we use the Barndorff-Nielsen and Shephard
(2004) nonparametric statistical test. The aim of this chapter is to measure
company-level informational shocks based on the discontinuous (jump) dynamics of
the stock price around earnings announcements and to test the profitability of
an earnings announcement strategy that uses jumps as a trading signal. Our
jump-based strategy is motivated by a growing literature supporting the
importance of accounting for the jump component in options pricing, hedging,
bond premium forecasting, systematic risk management and credit risk management
(Aït-Sahalia and Jacod 2010; Zhou and Zhu 2012; Patton and Verardo 2012).
Post-earnings announcement drift is one of the most robust anomalies challenging
the efficient market hypothesis. While the research to date has not entirely
explained the reasons for the post-earnings announcement anomaly, the anomaly
has generated interesting implications for portfolio management and trading
strategies.
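The intuition behind the Barndorff-Nielsen and Shephard (2004) approach is that realized variance (the sum of squared intraday returns) picks up both the diffusion and the jump component, while bipower variation (a scaled sum of products of adjacent absolute returns) is robust to jumps, so their relative gap signals a jump. The following is a simplified illustration with hypothetical 5-minute returns, not the full studentized test statistic used in the chapter:

```python
import math

def bns_jump_stat(returns):
    """Relative jump measure (RV - BV) / RV: positive and large when
    realized variance exceeds the jump-robust bipower variation."""
    n = len(returns)
    rv = sum(r * r for r in returns)                    # realized variance
    mu1 = math.sqrt(2.0 / math.pi)                      # E|Z|, Z ~ N(0, 1)
    bv = (1.0 / mu1 ** 2) * sum(
        abs(returns[i]) * abs(returns[i - 1]) for i in range(1, n))
    return (rv - bv) / rv

smooth = [0.001, -0.001] * 30                # diffusion-like day
jumpy = smooth[:30] + [0.04] + smooth[30:]   # same day plus one large jump
print(bns_jump_stat(smooth), bns_jump_stat(jumpy))
```

Because the jump enters RV as its square but enters BV only through products with small neighboring returns, the day with the jump produces a much larger statistic.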
The purpose of this chapter is to measure company-level informational shocks
based on the jump dynamics of the stock price around earnings announcements in
an emerging economy using high frequency data. Moreover, it intends to show the
profitability of an earnings announcement strategy that uses jumps as a trading
signal. For this, the chapter divides earnings news into "Good" and "Bad" news
and tests how the jump behavior changes according to "Good" or "Bad" news. In
other words, the chapter tests the validity of the post-earnings announcement
drift anomaly in the Turkish Stock Market using the jumps and cumulative average
abnormal returns (CAAR) of individual stocks around the announcement days. The
results show that there is a discrete jump in the stock price around both "Good"
and "Bad" earnings announcements in the emerging economy. The cumulative
abnormal returns respond statistically significantly only to "Bad" earnings news
over the event window. As can be seen in Table 2, average abnormal returns are
negative for "Bad" earnings news and positive for "Good" earnings news; the
results are only statistically significant for "Bad" earnings news. It can be
concluded that investors can make a profit by taking a short position on "Bad"
earnings news. Moreover, the jump signals attempt to extract information about
stocks' future payoffs from unusual price changes around earnings announcements.
Our results suggest that the discrete jump path of the price process around
earnings announcements may convey exclusive information about subsequent
earnings momentum, which is consistent with the literature (Zhou and Zhu 2012)
supporting the validity of the post-earnings announcement drift anomaly in the
Turkish Stock Market.
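As a sketch of the CAAR calculation (market-adjusted abnormal returns with hypothetical numbers; the chapter's own abnormal-return model may differ):

```python
import numpy as np

def caar(stock_returns, market_returns):
    """Cumulative average abnormal return over an event window:
    abnormal return = stock return minus market return, averaged
    across events per event day, then cumulated over the window."""
    ar = (np.asarray(stock_returns, dtype=float)
          - np.asarray(market_returns, dtype=float))
    aar = ar.mean(axis=0)      # average across events, one value per day
    return np.cumsum(aar)      # cumulate across the event window

# Hypothetical 3-day window around two "Bad" earnings announcements.
stock = [[-0.01, -0.03, -0.02], [0.00, -0.04, -0.01]]
market = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(caar(stock, market))     # drifts downward after bad news
```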
This chapter additionally characterizes the descriptive statistics of the realized
volatility, returns, jumps, and trading volume of individual stocks around the
announcement days in support of the validity of the post-earnings announcement
drift anomaly in the Turkish Stock Market; however, only the results for the
jumps and volume are statistically significant.
The chapter is organized as follows: in Sect. 2, we discuss the summary of the
literature. In Sect. 3 we present the methodology, which includes the data, the
jump detection procedure and the estimation of abnormal returns. Sect. 4
presents and discusses our empirical findings. Sect. 5 concludes.
2 Summary of Literature
To our knowledge, there is no previous study in the literature that investigates
the jump behavior of individual stocks around the earnings announcement date
using high frequency data in emerging economies; only a few studies have used
high frequency data to detect the relationship between jumps and earnings
announcements in developed economies (Lee and Mykland 2008; Patton and Verardo
2012). For example, Lee and Mykland (2008) used high frequency data to detect
the association of stock jumps with pre-scheduled earnings announcements, and
Patton and Verardo (2012) studied the behavior of systematic risk (beta) around
the quarterly earnings announcement period.
Many studies in the literature use daily data to test the post-earnings
announcement drift anomaly using jumps in stock prices (Zhou and Zhu 2012), or
have followed the Event Study Methodology to detect the impact of earnings
announcements on stock prices by measuring abnormal returns. Many empirical
studies report that stock prices respond positively to good news and negatively
to bad news for U.S. firms (see Ball and Brown 1968; Griffin 1976; Chambers and
Penman 1984; Chari et al. 1988; Landsman and Maydew 2002).
Many other studies investigate the effect of earnings announcements in developed
markets using the event study methodology, among them Frost and Pownall (1994)
and Mohammed and Yadav (2002) in the UK, Chan et al. (2005) in Australia, Wael
(2004) in France, van Huffel et al. (1996) and Laurent (2000) in Belgium,
Sponholtz (2005) in Denmark, Kallunki (1996) and Vieru (2002) in Finland, Cotter
(1997) in Ireland, Jermakowicz and Tomaszewski (1998) in Poland, and Pellicer
and Rees (1999) in Spain. Some papers emphasized that there is a rapid reaction
of price volatility and returns to earnings announcements, notably around the
event window (Vieru 2002; Pellicer and Rees 1999; Mohammed and Yadav 2002;
Cotter 1997; van Huffel et al. 1996; Chan et al. 2005), while Sponholtz (2005)
and Berezovskis et al. (2010) reported a slow reaction of stock price volatility
and returns to earnings announcements. Some studies in the literature
investigate the behavior of asset prices and volatility around macro-economic
announcements (Ball and Kothari 1991; Andersen et al. 2003, 2007; Dungey et al.
2007).
Studies from emerging markets that analyze the reaction of stocks to earnings
announcements using the event study methodology report that stock prices do react
to earnings announcements and that financial reports do affect price volatility
(Haw et al. 2000; Kong and Taghavi 2006; Das et al. 2008; Altiok-Yilmaz and
Akben-Selcuk 2010).
Lee and Mykland (2008) investigate the relationship between jumps and
company-specific news using high-frequency data by developing an advanced
nonparametric statistical test to detect jumps. They conclude that there is an
association between jumps in stock returns and pre-scheduled earnings
announcements. Following the same methodology, Zhou and Zhu (2012) support the
presence of the post-earnings announcement drift anomaly in the US stock market.
Previous studies in the literature show the existence of jumps in asset prices.
Following them, many other papers investigate the impact of jumps on portfolio
management, risk management, option and bond pricing, and hedging (Merton 1976;
Piazzesi 2005). Barndorff-Nielsen and Shephard (2004) develop statistical models
to measure the stochastic features of jumps, and Lee and Mykland (2008) develop a
nonparametric test that distinguishes actual jumps from spurious ones with a more
robust detection rate.
The sign, size, arrival times, intensity, and distribution of jumps are important
for hedging strategies and asset pricing models. For example, Zhou and Zhu (2012)
state that a hedge portfolio could achieve an annualized abnormal return of
15.3% by taking long positions in positive-jump stocks and short positions in
negative-jump stocks. Moreover, Piazzesi (2005) improves bond pricing models by
incorporating jumps related to market information.
Our study differs from previous studies in the following respects:
1. No previous study in emerging economies investigates the jump behavior of
individual stocks around earnings announcement days.
2. Most previous studies from developed economies use daily data to detect the
association of stock jumps with pre-scheduled earnings announcements, whereas we
use high-frequency data. Our study benefits those who are interested or engaged
in portfolio management, risk management, options and bond pricing, and hedging
in emerging markets.
Jumps and Earnings Announcement: Empirical Evidence from An Emerging. . . 215
3 Methodology
3.1 Data
Our data sample consists of the 30 firms listed on the BIST30 index, along with
their quarterly earnings announcements released to the public, for the period
3.1.2005 to 31.12.2013. The raw data were obtained from Borsa Istanbul and then
filtered and arranged into 15-min intervals. The daily trading session at Borsa
Istanbul runs from 9:15 to 17:40, with a lunch break from 12:30 to 14:00. Data
cleaning was performed following the existing literature, specifically deleting
entries related to weekends, public holidays, and days on which the market did
not trade the full session (Hansen and Lunde 2006; Barndorff-Nielsen et al.
2009). The earnings announcement dates are obtained from the Borsa Istanbul
website. Our study covers a total of 1,973,190 observations (30 × 65,773).
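The 15-min aggregation and calendar filtering described above can be sketched with pandas as follows. This is a minimal sketch, assuming the raw ticks arrive as a DataFrame with a DatetimeIndex and a `price` column (hypothetical names), using the Borsa Istanbul session hours quoted in the text.

```python
import numpy as np
import pandas as pd

def to_15min_returns(ticks: pd.DataFrame) -> pd.Series:
    """Aggregate raw tick data to 15-minute log returns, keeping only
    Borsa Istanbul session hours and dropping weekend timestamps.
    Expects a DatetimeIndex and a 'price' column (hypothetical names)."""
    # last traded price in each 15-minute bin
    px = ticks["price"].resample("15min").last().dropna()
    # drop Saturdays and Sundays
    px = px[px.index.weekday < 5]
    # keep morning (09:15-12:30) and afternoon (14:00-17:40) sessions
    morning = px.between_time("09:15", "12:30")
    afternoon = px.between_time("14:00", "17:40")
    px = pd.concat([morning, afternoon]).sort_index()
    # 15-minute log returns
    return np.log(px).diff().dropna()
```

Holiday removal would additionally require an exchange calendar, which is omitted here.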
The log-price process $X(t)$ is assumed to follow the jump-diffusion

$$dX(t) = \mu(t)\,dt + \sigma(t)\,dW(t) + Y(t)\,dJ(t),$$

where $W(t)$ is the standard Brownian motion, $\mu(t)$ is the drift, $\sigma(t)$ is the spot
volatility, $dJ(t)$ is a counting process independent of $W(t)$, and $Y(t)$ is the jump size.
Following their method, the return based on intraday data over the sampling interval
$\Delta n$ is calculated as

$$\Delta_i^n X = X(t_i) - X(t_{i-1}).$$

Barndorff-Nielsen and Shephard (2004) suggest the following test statistic for
jumps at the interval $t_{i-1}$ to $t_i$:

$$Z_{J_{t,i}} = \frac{\ln(RV_t) - \ln(BV_t)}{\sqrt{(v_{bb} - v_{qq})\,\dfrac{\Delta n}{T}\,\max\!\left(1, \dfrac{TP_t}{BV_t^2}\right)}} \;\rightarrow\; N(0,1), \tag{3}$$

where $v_{bb} = \left(\dfrac{\pi}{2}\right)^2 + \pi - 3$ and $v_{qq} = 2$,

$$RV_t = \sum_{i=1}^{T/\Delta n} \left(\Delta_i^n X\right)^2, \tag{4}$$

$$BV_t = \frac{\pi}{2}\left(\frac{T}{T - \Delta n}\right) \sum_{i=2}^{T/\Delta n} \left|\Delta_i^n X\right| \left|\Delta_{i-1}^n X\right|, \tag{5}$$

$$TP_t = \Delta n^{-1}\,\mu_{4/3}^{-3}\left(\frac{T}{T - 2\Delta n}\right) \sum_{i=3}^{T/\Delta n} \left|\Delta_{i-2}^n X\right|^{4/3} \left|\Delta_{i-1}^n X\right|^{4/3} \left|\Delta_i^n X\right|^{4/3}, \tag{6}$$

with $\mu_{4/3} = 2^{2/3}\,\Gamma(7/6)/\Gamma(1/2)$.
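The statistic of Eqs. (3)–(6) can be sketched numerically as follows; this is an illustrative implementation for a single day of intraday returns with the horizon normalized to T = 1, not the authors' own code.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

def bns_jump_test(r):
    """Barndorff-Nielsen & Shephard (2004) log-ratio jump statistic for one
    day of intraday returns r (1-D array), a sketch of Eqs. (3)-(6)."""
    n = len(r)
    dt = 1.0 / n  # Delta_n / T with T normalized to one day
    rv = np.sum(r ** 2)  # realized variance, Eq. (4)
    bv = (np.pi / 2) * (n / (n - 1)) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation, Eq. (5)
    mu43 = 2 ** (2 / 3) * gamma(7 / 6) / gamma(1 / 2)
    tp = n * (n / (n - 2)) * mu43 ** -3 * np.sum(
        (np.abs(r[2:]) * np.abs(r[1:-1]) * np.abs(r[:-2])) ** (4 / 3)
    )  # tripower quarticity, Eq. (6)
    vbb = (np.pi / 2) ** 2 + np.pi - 3
    vqq = 2.0
    z = (np.log(rv) - np.log(bv)) / np.sqrt((vbb - vqq) * dt * max(1.0, tp / bv ** 2))
    pval = 1 - stats.norm.cdf(z)  # one-sided: large z signals a jump
    return z, pval
```

A large positive value of the statistic rejects the no-jump null at the usual normal critical values.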
We estimate abnormal returns using the following market model (Fama et al.
1969; Malkiel and Fama 1970):

$$R_{it} = \alpha_i + \beta_i R_{mt} + \varepsilon_{it}, \qquad AR_{it} = R_{it} - \left(\hat{\alpha}_i + \hat{\beta}_i R_{mt}\right), \qquad AAR_t = \frac{1}{N}\sum_{i=1}^{N} AR_{it},$$

where $AAR_t$ is the average abnormal return, $AR_{it}$ is the abnormal return of
individual stocks, and $N$ is the total number of stocks.

The cumulative average abnormal return (CAAR) is expressed as below. The event
window for this calculation is $[-5, +5]$:

$$CAAR = \sum_t AAR_t \tag{8}$$

$$t = \frac{CAAR}{S(CAAR)/\sqrt{N}} \tag{10}$$
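The market-model event study behind the CAAR and its t statistic can be sketched as follows; the array layout, estimation-window handling, and the cross-sectional standard deviation used for S(CAAR) are our assumptions, not the chapter's implementation.

```python
import numpy as np

def caar_tstat(stock_ret, market_ret, event_col, est_win=60):
    """Market-model event study sketch. stock_ret and market_ret are
    (N stocks x T days) arrays; columns [event_col-5, event_col+5] form the
    event window, and the first est_win columns are the estimation window.
    Returns CAAR over the window and the t statistic of Eq. (10)."""
    N = stock_ret.shape[0]
    lo, hi = event_col - 5, event_col + 6
    ar = np.empty((N, hi - lo))
    for j in range(N):
        # OLS market model R_it = a_i + b_i * R_mt + e_it on the estimation window
        b, a = np.polyfit(market_ret[j, :est_win], stock_ret[j, :est_win], 1)
        ar[j] = stock_ret[j, lo:hi] - (a + b * market_ret[j, lo:hi])
    car = ar.sum(axis=1)          # per-stock CAR over [-5, +5]
    caar = car.mean()             # CAAR, Eq. (8)
    t = caar / (car.std(ddof=1) / np.sqrt(N))  # Eq. (10)
    return caar, t
```

The caller must ensure the estimation window ends before the event window begins.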
4 Empirical Findings
where $D_{it}$ is a dummy variable that takes the value of 1 during the
announcement day and 0 otherwise. Table 1 shows that there are significant jumps
in stock prices during both "Good" and "Bad" earnings announcement window frames
such as [−1,0], [−2,0], and [−4,0]. Moreover, we also capture some significant
jumps during the event window frames [0,1], [0,4], and [0,5]. Table 1 shows that
there is a discrete jump in the stock price around both "Good" and "Bad" earnings
announcements in an emerging economy.
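One simple way to relate jump occurrence to the announcement dummy $D_{it}$ is an OLS regression of a daily jump indicator on the dummy. The function below is a hypothetical sketch, since the chapter's exact specification is not reproduced in this excerpt.

```python
import numpy as np

def announcement_jump_effect(jump, dummy):
    """OLS of a daily jump indicator on an announcement-window dummy D_it,
    both flattened over stocks and days (a hypothetical sketch).
    Returns the dummy coefficient and its t statistic."""
    X = np.column_stack([np.ones_like(dummy, dtype=float), dummy.astype(float)])
    beta, *_ = np.linalg.lstsq(X, jump.astype(float), rcond=None)
    resid = jump - X @ beta
    s2 = resid @ resid / (len(jump) - 2)      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)         # OLS covariance matrix
    t = beta[1] / np.sqrt(cov[1, 1])
    return beta[1], t
```

The slope equals the difference in jump frequency between announcement and non-announcement days.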
To test the validity of the post-earnings announcement drift anomaly in the
Turkish stock market, we further characterize cumulative abnormal returns of
individual stocks around the announcement days. Table 2 shows that cumulative
average abnormal returns (CAAR) respond statistically significantly to "Bad"
earnings news for the event window. As can be seen in Table 2, CAAR are negative
for "Bad" earnings news and positive for "Good" earnings news. The results are
statistically significant only for "Bad" earnings news. It can be concluded that
investors can take short positions on "Bad" earnings news. Moreover, the jump
signals extract information about stocks' future payoffs from unusual price
changes around earnings announcements, which supports the presence of the
post-earnings announcement drift anomaly in the Turkish stock market, consistent
with the existing literature (Zhou and Zhu 2012).
To gather further evidence for this anomaly, the chapter additionally calculates
t-test values for the descriptive statistics of realized volatility, returns,
jumps, and trade volume of individual stocks around the announcement days.
Table 3 shows that only volume and discrete jumps are statistically different
between the announcement and non-announcement periods.
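The announcement versus non-announcement comparison described above amounts to a two-sample t-test per characteristic; a minimal sketch (using Welch's unequal-variance variant, an assumption) is:

```python
import numpy as np
from scipy import stats

def compare_periods(series, is_announcement):
    """Two-sample t-test comparing a daily stock characteristic (e.g. trade
    volume or realized volatility) between announcement and non-announcement
    days, as in the Table 3 discussion; variable names are illustrative."""
    a = series[is_announcement]
    b = series[~is_announcement]
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    return t, p
```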
5 Conclusion
Previous studies document several different trading strategies that support the
post-earnings announcement drift anomaly. Unlike the common literature, this
chapter focuses on testing the occurrence of price discontinuities around
earnings announcements. The sudden and often extreme realization of the jump path
of asset prices around earnings announcements may capture unique information
about future asset payoffs and thus predict the subsequent return drift.
This chapter investigates the jump behavior of individual stocks in an emerging
economy around earnings announcement days using high-frequency data. We applied
the jump detection method of Barndorff-Nielsen and Shephard (2004) to identify
firm-specific informational shocks. Additionally, we divide earnings
announcements into two parts, "Good" and "Bad" earnings news, and examine whether
"Good" or "Bad" news causes jumps. The results show that there is a discrete jump
in the stock price around both "Good" and "Bad" earnings announcements in an
emerging economy. Both "Good" and "Bad" news create volatility in stock prices.
Moreover, the effects of earnings news appear to start four days before the news
releases and last until the fourth day after the releases.
To test the validity of the post-earnings announcement drift anomaly in the
Turkish stock market, we characterize cumulative abnormal returns of individual
stocks around the announcement days. The results emphasize that cumulative
average abnormal returns (CAAR) respond statistically significantly to "Bad"
earnings news for the event window, which supports the presence of the
post-earnings announcement drift anomaly in the Turkish stock market. This is
consistent with the existing literature (Zhou and Zhu 2012).
The chapter additionally tests the presence of the post-earnings announcement
drift anomaly in the Turkish stock market using t-tests for realized volatility,
returns, jumps, and trade volume of individual stocks around the announcement
days. The results show that only discrete jumps and trade volume are
statistically different between the announcement and non-announcement periods,
which supports the post-earnings announcement drift anomaly in the Turkish stock
market. Moreover, our statistical tests support the profitability of an earnings
announcement strategy that uses jumps as a trading signal.
The empirical results of our chapter contribute to the existing literature on
post-earnings announcement drift anomalies in the following ways. First, to our
knowledge, no previous study in emerging economies investigates the jump behavior
of individual stocks around earnings announcement days using high-frequency data.
Second, our findings suggest that jumps surrounding earnings announcement days
are created by both "Good" and "Bad" news. Investors can use this anomaly to make
a profit by taking short positions on "Bad" earnings announcements.
For future studies, we suggest that an intensive study should be conducted on the
relationship between systematic risk and earnings announcement in emerging
economies.
References
Aït-Sahalia Y, Jacod J (2007) Volatility estimators for discretely sampled Lévy processes. Ann
Stat 35(1):355–392
Aït-Sahalia Y, Jacod J (2010) Analyzing the spectrum of asset returns: jump and volatility
components in high frequency data. National Bureau of Economic Research, No. w15808
Altiok-Yilmaz A, Akben-Selcuk E (2010) Information content of dividends: evidence from
Istanbul stock exchange. Int Bus Res 3(3):126
Andersen TG (1996) Return volatility and trading volume: an information flow interpretation of
stochastic volatility. J Financ 51(1):169–204
Andersen TG, Bollerslev T, Diebold FX, Labys P (2003) Modeling and forecasting realized
volatility. Econometrica 71(2):579–625
Andersen TG, Bollerslev T, Diebold FX (2007) Roughing it up: including jump components in the
measurement, modeling, and forecasting of return volatility. Rev Econ Stat 89(4):701–720
Ball R, Brown P (1968) An empirical evaluation of accounting income numbers. J Acc Res 6
(2):159–178
Ball R, Kothari SP (1991) Security returns around earnings announcements. Acc Rev 66:718–738
Barndorff-Nielsen OE, Shephard N (2004) Power and bipower variation with stochastic volatility
and jumps. J Financ Econ 2(1):1–37
Barndorff-Nielsen OE, Shephard N (2006) Econometrics of testing for jumps in financial eco-
nomics using bipower variation. J Financ Econ 4(1):1–30
Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N (2009) Realized kernels in practice:
trades and quotes. Econ J 12(3):C1–C32
Berezovskis P, Visnapuu V, Högholm K (2010) Post-earnings announcement drifts on the Baltic
stock exchanges. Working paper, Stockholm School of Economics
Chambers AE, Penman SH (1984) Timeliness of reporting and the stock price reaction to earnings
announcements. J Account Res 22(1):21–47
Chan H, Faff R, Ramsay A (2005) Firm size and the information content of annual earnings
announcements: Australian evidence. J Bus Financ Acc 32(1–2):211–253
Chari VV, Jagannathan R, Ofer AR (1988) Seasonalities in security returns: the case of earnings
announcements. J Financ Econ 21(1):101–121
Cotter J (1997) Irish event studies: earnings announcements, turn of the year and size effects. Ir J
Manag 18:34
Das S, Pattanayak JK, Pathak P (2008) The effect of quarterly earnings announcements on Sensex:
a case with clustering of events. IUP J Acc Res Audit Pract 7(4):64–78
Dungey M, McKenzie MD, Smith LV (2007) News, no-news and jumps in the US Treasury
Market. Working Paper
Fama EF, Fisher L, Jensen MC, Roll R (1969) The adjustment of stock prices to new information.
Int Econ Rev 10(1):1–21
Frost CA, Pownall G (1994) Accounting disclosure practices in the United States and the United
Kingdom. J Acc Res 32(1):75–102
Griffin PA (1976) Competitive information in the stock market: an empirical study of earnings,
dividends and analysts’ forecasts. J Financ 31(2):631–650
Hansen PR, Lunde A (2006) Realized variance and market microstructure noise. J Bus Econ Stat
24(2):127–161
Haw IM, Qi D, Wu W (2000) Timeliness of annual report releases and market reaction to earnings
announcements in an emerging capital market: the case of China. J Int Fin Manag Acc 11
(2):108–131
Iqbal J, Farooqi FA (2011) Stock price reaction to earnings announcement: the case of an emerging
market. Munich Personal Repec Archive. http://mpra.ub.uni-muenchen.de/30865/MPRA
Jermakowicz EK, Gornik-Tomaszewski S (1998) Information content of earnings in the emerging
capital market: evidence from the Warsaw stock exchange. Multinatl Financ J 2(4):245–267
Jiang G, Oomen R (2005) A new test for jumps in asset prices. Preprint
Kallunki JP (1996) Stock returns and earnings announcements in Finland. Eur Acc Rev 5
(2):199–216
Kong S, Taghavi M (2006) The effect of annual earnings announcements on the Chinese stock
markets. Int Adv Econ Res 12(3):318–326
Landsman WR, Maydew EL (2002) Has the information content of quarterly earnings announce-
ments declined in the past three decades? J Acc Res 40(3):797–808
Laurent M (2000) The effect of earnings release for Belgian listed companies. Working Paper
WP-CEB 03
Lee SS, Mykland PA (2008) Jumps in financial markets: a new nonparametric test and jump
dynamics. Rev Financ Stud 21(6):2535–2563
Malkiel BG, Fama EF (1970) Efficient capital markets: a review of theory and empirical work.
J Financ 25(2):383–417
Merton RC (1976) Option pricing when underlying stock returns are discontinuous. J Financ Econ
3(1-2):125–144
Mohammed SR, Yadav PK (2002) Quality of information and volatility around earnings
announcements. In: Paper presented in EFA 2002 Berlin Meetings, Feb 2002
Patton AJ, Verardo M (2012) Does beta move with news? Firm-specific information flows and
learning about profitability. Rev Financ Stud 25(9):2789–2839
Pellicer MJA, Rees WP (1999) Regularities in the equity price response to earnings announce-
ments in Spain. Eur Acc Rev 8(4):585–607
Piazzesi M (2005) Bond yields and the Federal Reserve. J Polit Econ 113(2):311–344
Ross SA (1989) Information and volatility: the no‐arbitrage martingale approach to timing and
resolution irrelevancy. J Financ 44(1):1–17
Sponholtz C (2005) Separating the stock market’s reaction to simultaneous dividend and earnings
announcements. In: EFA 2006 Zurich meetings, Nov 2005
van Huffel G, Joos P, Ooghe H (1996) Semi-annual earnings announcements and market reaction:
some recent findings for a small capital market. Eur Acc Rev 5(4):693–713
Vieru M (2002) The impact of interim earnings announcements on the permanent price effects of
trades on the Helsinki stock exchange. J Multinatl Financ Manag 12(1):41–59
Wael L (2004) Market reaction to annual earnings announcements: the case of Euronext Paris.
EFMA 2004 Basel Meetings Paper
Zhou H, Zhu JQ (2012) Jump on the post-earnings announcement drift (corrected). Financ Anal J
68(3):63–80
Abstract Hedging without giving regard to what competitors are doing may
actually increase the variance of profits as opposed to decreasing it. In this study,
a market maker and an individual firm are taken as the players of a simultaneous
game. We explore the impact of competitors’ hedging practices on the optimal
hedging policy of an individual firm by explicitly considering the other factors such
as the level of pass-through of cost shocks and the level of profitability in the
industry. Computational results are based on the simulations of an analytical model
which incorporates a Nash equilibrium strategy.
1 Introduction
G. Fas (*)
Department of Business, Istanbul Bilgi University, Santral Campus, 34060 Istanbul, Eyup,
Turkey
e-mail: genco.fas@bilgi.edu.tr
K. Senel
Graduate School of Finance, Istanbul Commerce University, Sutluce Campus, 34445 Istanbul,
Beyoglu, Turkey
e-mail: keremsenel@gmail.com
However, the hedging practices of competitors are not the sole determinant of
the hedging decision of an individual firm. If firms can increase their output prices
in response to an increase in input prices, this reduces the need to hedge. Previous
research focuses on the factors that may have an impact on the firms’ ability and
propensity to reflect such price increases. For instance, Allayannis and Ihrig (2001)
observe that it is easier (more difficult) for firms in less (more) competitive [or high
(low) markup] industries to respond to exchange rate movements by increasing
their prices. On the other hand, Nain (2004) shows that product prices are less
responsive to foreign exchange rates in industries where currency hedging is more
common.
It is also plausible that the minimization of variance of profits may not be the
sole purpose for an individual firm. Adam et al. (2007) and Mello and Ruckes
(2005) attribute deliberate deviation from competitors in terms of hedging practices
to strategic considerations. An individual firm may choose to act differently in order
to benefit from cash inflows when their competitors experience cash outflows due to
their hedging activities.
The main objective of this paper is to explore the impact of competitors’ hedging
practices on the optimal hedging policy of an individual firm by explicitly consid-
ering the other factors such as the level of pass-through of cost shocks and the level
of profitability in the industry (which is negatively correlated with the degree of
competition in the industry).
In a game theoretical setting, a market maker and an individual firm are taken as
the players of a simultaneous game. Firms make their order decisions for replen-
ishment with respect to the estimated effective demand. They have to decide on
their order quantities simultaneously in this competitive environment as they try to
maximize their expected cash flows. Unlike previous studies exploring the
impact of competitors' hedging practices, the hedging ratios, the level of
pass-through of cost shocks, and the level of profitability are treated as
exogenous variables. This enables us to analyze the impact of each factor separately.
Treating the hedging ratio as an exogenous variable is also consistent with the
notion of selective hedging. Brown et al. (2006) observe that firms tend to decrease
hedging as prices move against them, behavior contrary to that predicted by risk
management theory. These results suggest that firms attempt to time market prices,
so-called selective hedging. Hence, from the viewpoint of the individual firm, it is
possible to decide on the optimal hedging ratio that minimizes the variance of its
cash flows given the hedging ratio of the market maker, the level of pass-through of
cost shocks, and the level of profitability. If the individual firm is negligible
compared to the market maker in terms of size and market impact, treating these
factors as exogenous variables is a reasonable assumption.
Our experiment setup also enables us to pinpoint the optimal hedging ratio that
maximizes the risk-adjusted mean cash flow as well as the optimal hedging ratio
that minimizes the cash flow variance. Comparison of these different hedging ratios
sheds light on the tradeoff between risk and return in the context of hedging.
The rest of the paper is organized as follows. The literature review consists of
two parts. First, we review previous studies focusing on hedging, in general, and the
Hedging Scenarios Under Competition: Exploring the Impact of. . . 227
2 Literature Review
McGillivray and Silver (1978) first studied a substitutable product inventory prob-
lem by using the Economic Order Quantity (EOQ) context. Later, Parlar and Goyal
(1984) gave single period formulations for an inventory system with two substitut-
able products independent of each other.
Khouja et al. (1996) also formulated a two-item newsboy problem with
substitution, but they established optimality with a Monte Carlo simulation,
without an analytical solution.
Parlar (1985) proposed a Markov Decision Process model to find the optimal
ordering policies for perishable and substitutable products from the point of view of
a single retailer. Parlar (1988) also studied a game theoretic analysis of inventory
control under substitutable demand. He modeled a two-product single period
problem as a two-person nonzero-sum game and showed that there exists a unique
Nash equilibrium. He proved that the expected profit function is concave to find
optimal stocking levels for the two products in a single period with different
revenue levels.
Netessine and Rudi (2003) considered centralized and competitive inventory
models with demand substitution. They used deterministic proportions for
230 G. Fas and K. Senel
unsatisfied demand and showed that the total profit is decreasing in demand
correlation when demand is multivariate normal.
Papadimitriou and Roughgarden (2008) initiated a systematic study of algorith-
mic issues involved in finding Nash and correlated equilibria in games with a large
number of players. They presented a polynomial-time algorithm for finding a Nash
equilibrium in symmetric games without an algebraic approach.
Avsar and Baykal-Gürsoy (2002) showed that competition between retailers for
a substitutable demand leads to a Nash equilibrium characterized by a pair of
stationary base stock strategies, expressed by constant order-up-to levels
for the infinite-horizon problem.
Nagarajan and Rajagopalan (2009) examined the nature of optimal inventory
policies in a system where a retailer manages substitutable products. They showed
that a basestock policy is optimal in a multi-period problem for deterministic
demand. After that, they also showed in their later work that retailers can ignore
the substitution effect altogether and implement monopolistic strategies (indepen-
dent order-up-to-policies) as the unique equilibrium when the total ordering units
are above a threshold. They made a finite period analysis and showed the unique-
ness of the Nash equilibrium with a lower bound on total ordering units.
Finally, Fas and Bilgiç (2013) extend the results of Nagarajan and Rajagopalan
(2009). They give a threshold in a two-period game with a total deterministic
demand lower than the level proposed in Nagarajan and Rajagopalan (2009).
where c is a constant.
Total demand D is deterministic and known. For any demand D, p and 1 − p are the
proportions of demand that belong to the market maker and the individual firm,
respectively. There is no penalty cost charged to the firms. Firms make their order
decisions for replenishment with respect to the estimated effective demand and
bring their positions to ym and yi . Then, the demand is realized. Costs are accrued
and profit or losses are collected for the single period. Firms have to decide on their
order quantities simultaneously in this competitive environment as they try to
maximize their expected cash flows.
Single-period cash flows for the market maker and the individual firm are as
follows:

$$\Pi_m(y_m, y_i) = P_m \min(y_m, \Lambda_m) - k_3 P_r (1 - h_m)\, y_m - k_3 P_r^0 h_m\, y_m$$

and

$$\Pi_i(y_m, y_i) = P_m \min(y_i, \Lambda_i) - k_3 P_r (1 - h_i)\, y_i - k_3 P_r^0 h_i\, y_i,$$

where $\Lambda_m$ and $\Lambda_i$ are the effective demands of the market maker and the individual
firm, equal to $pD + \beta\left[(1-p)D - y_i\right]^+$ and $(1-p)D + \gamma\left[pD - y_m\right]^+$,
respectively, where $[x]^+$ denotes $\max(0, x)$. In the above equations, the first term is
the revenue from sales, and the second and third terms are the costs of the
unhedged and hedged parts of the input, respectively. Since the cash flow functions
include the random variable p, we take expectations over p. We also derive the cash
flow functions per unit demand to simplify the equations and express the effective
demand and inventory levels in the interval $[0, 1]$. Cash flow functions per unit
demand are:
$$\pi_m(y_m, y_i) = P_m \min\!\left(\frac{y_m}{D}, \lambda_m\right) - k_3 P_r (1 - h_m)\frac{y_m}{D} - k_3 P_r^0 h_m \frac{y_m}{D}, \tag{4}$$

and

$$\pi_i(y_m, y_i) = P_m \min\!\left(\frac{y_i}{D}, \lambda_i\right) - k_3 P_r (1 - h_i)\frac{y_i}{D} - k_3 P_r^0 h_i \frac{y_i}{D}, \tag{5}$$

where $\lambda_m = p + \beta\left[(1-p) - \dfrac{y_i}{D}\right]^+$ and $\lambda_i = (1-p) + \gamma\left[p - \dfrac{y_m}{D}\right]^+$.
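Eqs. (4)–(5) translate directly into code; below is a sketch with all parameters passed explicitly (parameter names are ours).

```python
def unit_cash_flows(ym, yi, p, D, beta, gamma, Pm, Pr, Pr0, k3, hm, hi):
    """Per-unit-demand cash flows of the market maker and the individual firm,
    Eqs. (4)-(5), for a realized demand share p; a direct transcription sketch."""
    # effective demand shares lambda_m and lambda_i
    lam_m = p + beta * max((1 - p) - yi / D, 0.0)
    lam_i = (1 - p) + gamma * max(p - ym / D, 0.0)
    # revenue minus unhedged and hedged input costs
    pi_m = Pm * min(ym / D, lam_m) - k3 * Pr * (1 - hm) * ym / D - k3 * Pr0 * hm * ym / D
    pi_i = Pm * min(yi / D, lam_i) - k3 * Pr * (1 - hi) * yi / D - k3 * Pr0 * hi * yi / D
    return pi_m, pi_i
```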
The nonnegative random variable p has a continuous density function f with
finite expectation. The corresponding cumulative distribution function is denoted
by F, with F(0) = 0 and F(1) = 1, since p ∈ [0, 1]. We first consider concavity of
the single-period expected cash flow function. First-order conditions give the
optimal ordering decision. Concavity also yields the existence of a unique
response for a firm maximizing its cash flow function against all possible
strategies of the other firm. The explicit expressions for the single-period
expected cash flow functions per unit demand of the market maker and the
individual firm are:
$$E[\pi_m(y_m, y_i)] = P_m\left\{\int_0^{\alpha_m}\left[p + \beta\left((1-p) - \frac{y_i}{D}\right)\right] f(p)\,dp + \int_{\alpha_m}^{1} \frac{y_m}{D}\, f(p)\,dp\right\} - k_3 P_r (1-h_m)\frac{y_m}{D} - k_3 P_r^0 h_m \frac{y_m}{D}, \tag{6}$$

$$E[\pi_i(y_m, y_i)] = P_m\left\{\int_{\alpha_i}^{1}\left[1 - p + \gamma\left(p - \frac{y_m}{D}\right)\right] f(p)\,dp + \int_0^{\alpha_i} \frac{y_i}{D}\, f(p)\,dp\right\} - k_3 P_r (1-h_i)\frac{y_i}{D} - k_3 P_r^0 h_i \frac{y_i}{D}, \tag{7}$$

where $\alpha_m = \dfrac{\frac{y_m}{D} - \beta\left(1 - \frac{y_i}{D}\right)}{1 - \beta}$ and $\alpha_i = \dfrac{1 - \frac{y_i}{D} - \gamma\frac{y_m}{D}}{1 - \gamma}$.
Proposition 3.1 Best responses of the market maker and the individual firm are
unique for the single-period two-player game, and they are:

$$R\!\left(y_m^*, y_i^*\right) = \left(\beta D + D(1-\beta)F^{-1}(A_m) - \beta y_i,\;\; D - \gamma y_m - (1-\gamma)D\,F^{-1}(A_i)\right). \tag{8}$$
Proof First-order conditions are:

$$\frac{\partial E[\pi_m]}{\partial y_m} = P_m \frac{1}{D(1-\beta)}\left[\alpha_m + \beta\left(1 - \alpha_m - \frac{y_i}{D}\right) - \frac{y_m}{D}\right] f(\alpha_m) + P_m \int_{\alpha_m}^{1} \frac{f(p)}{D}\,dp - k_3 P_r (1-h_m)\frac{1}{D} - k_3 P_r^0 h_m \frac{1}{D} = 0 \tag{9}$$

and

$$\frac{\partial E[\pi_i]}{\partial y_i} = P_m \frac{1}{D(1-\gamma)}\left[1 - \alpha_i + \gamma\left(\alpha_i - \frac{y_m}{D}\right) - \frac{y_i}{D}\right] f(\alpha_i) + P_m \int_0^{\alpha_i} \frac{f(p)}{D}\,dp - k_3 P_r (1-h_i)\frac{1}{D} - k_3 P_r^0 h_i \frac{1}{D} = 0. \tag{10}$$

Since $\alpha_m = \frac{y_m/D - \beta(1 - y_i/D)}{1-\beta}$ and $\alpha_i = \frac{1 - y_i/D - \gamma y_m/D}{1-\gamma}$, the first terms in the above two equations
cancel out and the result follows as in (8). Concavity of the expected cash flow
functions is shown in order to establish the uniqueness of the best responses,
using the second-order conditions:
$$\frac{\partial^2 E[\pi_m]}{\partial y_m^2} = -P_m\, f(\alpha_m)\,\frac{1}{D^2(1-\beta)} < 0 \tag{11}$$

$$\frac{\partial^2 E[\pi_i]}{\partial y_i^2} = -P_m\, f(\alpha_i)\,\frac{1}{D^2(1-\gamma)} < 0 \tag{12}$$

∎
Finally, we show uniqueness of the Nash equilibrium as follows.
Proposition 3.2 There exists a unique Nash equilibrium for the single period
two-player non-zero sum game.
Proof The proof is completed by the Implicit Function Theorem in Cachon and
Netessine (2004). The 'diagonal dominance' property has to be satisfied by the
response functions for the uniqueness of the Nash equilibrium:

$$\left|\frac{\partial^2 E[\pi_k]}{\partial y_k\, \partial y_j}\right| < \left|\frac{\partial^2 E[\pi_k]}{\partial y_k^2}\right|, \quad \forall k, j \in \{m, i\},\; j \neq k. \tag{13}$$
Equation (13) is checked using the cross partial derivative of the single-period
formulation. The cross partial derivative is:

$$\frac{\partial^2 E[\pi_m]}{\partial y_m\, \partial y_i} = -P_m\, f(\alpha_m)\,\frac{\beta}{1-\beta}\,\frac{1}{D^2}. \tag{14}$$

Since β < 1, comparing the absolute value of the right-hand side of Eq. (11) with
that of Eq. (14), the diagonal dominance condition is satisfied:

$$\left|\frac{\partial^2 E[\pi_m]}{\partial y_m\, \partial y_i}\right| < \left|\frac{\partial^2 E[\pi_m]}{\partial y_m^2}\right|. \tag{16}$$
Since the game is symmetric, the same is true for the individual firm. Then, there
exists a unique Nash equilibrium in the single period game.
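The best responses of Eq. (8) can be iterated to the unique fixed point guaranteed by Proposition 3.2. In the sketch below, the thresholds A_m and A_i are recovered from the first-order conditions (9)–(10) as A_m = 1 − k_3(P_r(1−h_m)+P_r^0·h_m)/P_m and A_i = k_3(P_r(1−h_i)+P_r^0·h_i)/P_m; this is our reconstruction, since the excerpt does not define them explicitly, and the numeric parameters in the usage are illustrative.

```python
from scipy.stats import norm

def nash_orders(beta, gamma, hm, hi, Pm, Pr, Pr0, k3, D, F_inv, n_iter=200):
    """Iterate the best responses of Eq. (8) to their fixed point, i.e. the
    unique Nash equilibrium of Proposition 3.2. F_inv is the quantile
    function of the demand share p. The thresholds A_m, A_i are derived from
    the first-order conditions (our reconstruction, not the text's notation)."""
    Am = 1 - k3 * (Pr * (1 - hm) + Pr0 * hm) / Pm
    Ai = k3 * (Pr * (1 - hi) + Pr0 * hi) / Pm
    ym, yi = D / 2, D / 2  # arbitrary starting point
    for _ in range(n_iter):
        ym = beta * D + D * (1 - beta) * F_inv(Am) - beta * yi
        yi = D - gamma * ym - (1 - gamma) * D * F_inv(Ai)
    return ym, yi
```

Because each best response is linear with slope of magnitude below one, the iteration contracts to the unique equilibrium.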
4 Simulations
For each point on a grid of 101 × 101 (h_m, h_i) couples (h_m, h_i ∈ {0, 0.01, . . ., 1}),
1000 simulations have been carried out to simulate the input price, the corresponding
output price, and the proportions of demand that go to the market maker and the
individual firm. For each point on the grid, these simulations have been used to
find the (y_m, y_i) couple that maximizes the expected cash flows for the market maker
and the individual firm; hence, the Cournot model has been solved. Then, the
simulated cash flow distributions for the optimal y_m and y_i have been used to
calculate variances and risk-adjusted means (mean/standard deviation) for the cash
flow of the individual firm. The parameter set is as follows:
D = 100;
β = 50%;
γ = 50%;
k_1 = 30%;
k_2 = 50% (for the base case; sensitivity analysis for k_2 = 0, 25%, 75%, 100%);
k_3 = 1;
p is assumed to be normally distributed with mean 0.95 and standard deviation 0.01;
P_r is assumed to be lognormally distributed with mean 10% and standard deviation 30%;
P_r^0 = 100;
decreases, the hedging ratio that minimizes the variance of cash flow increases. At
the extreme level of 0 %, the optimal hedging ratio turns out to be 100 %,
irrespective of the hedging ratio of the market maker. On the other hand, it turns
out that as the level of pass-through increases, the hedging ratio that minimizes the
variance of cash flow decreases.
Figures 6 and 7 describe the sensitivity analysis with respect to the level of
profitability, showing the simulation results for the scenario in which the
pass-through level is set at 50% with lower (higher) profitability. As
profitability decreases (increases), the hedging ratio that minimizes the
variance of cash flow increases (decreases).
In addition to the above observations, the similarity of the first and second
graphs in all figures shows that the optimal hedging ratio that maximizes the risk-
adjusted mean cash flow is almost the same as the optimal hedging ratio that
minimizes the cash flow variance.
Our results confirm the hypothesis that a firm should conform to the majority in
terms of hedging in order to minimize the variability of its cash flows. This result is
also in line with the findings of Nain (2004) and Adam and Nain (2013). On the
other hand, the results do not present a simple one-to-one correspondence of
hedging ratios, but, rather, a tendency which might be drastically affected by a
number of factors such as the level of pass-through of cost shocks to output prices
and the degree of competition in the industry (which is negatively correlated with
the level of profitability in the industry).
If firms can increase their output prices in response to an increase in input prices,
this reduces the need to hedge. Previous research focuses on the factors that may
have an impact on the firms’ ability and propensity to reflect such price increases.
For instance, Allayannis and Ihrig (2001) observe that it is easier (more difficult) for
firms in less (more) competitive [or high (low) markup] industries to respond to
exchange rate movements by increasing their prices. On the other hand, Nain
(2004) shows that product prices are less responsive to foreign exchange rates in
industries where currency hedging is more common.
Our model incorporates a separate parameter that enables us to observe the
impact of pass-through of cost shocks regardless of the reason. Our results indicate
that, as the level of pass-through decreases, the optimal hedging decision for the
individual firm to minimize the variability of cash flows leans toward more hedg-
ing. This result is in line with Allayannis and Weston (1999) who find that firms that
operate in more competitive (low markup) industries are more likely to use cur-
rency derivatives than firms that operate in industries with high markups (through
the reasoning that the level of pass-through is lower in more competitive
industries).
Our results show that, depending on the level of pass-through, the optimal
hedging ratio for the individual firm may turn out to be significantly higher or
lower than that of the market maker. In the extreme case when there is no pass-
through, the optimal hedging decision for the individual firm calls for hedging
100 % of the exposure, regardless of the hedging decision of the market maker.
Adam et al. (2007) and Mello and Ruckes (2005) attribute deliberate deviation
from competitors in terms of hedging practices to strategic considerations. They
indicate that such type of behavior should be more accentuated and lead to more
heterogeneity as the level of competition increases. Allayannis and Weston (1999)
find that firms that operate in more competitive (low markup) industries are more
likely to use currency derivatives than firms that operate in industries with high
markups.
Adam and Nain (2013) observe that smaller firms are less likely to use deriva-
tives in more competitive industries. In contrast, larger firms are more likely to use
derivatives in more competitive industries. Our results indicate that with lower
levels of profitability (which is a proxy for higher level of competition), the optimal
hedging decision for the individual firm leans toward more hedging. Similar to the
case of pass-through, depending on the degree of competition, the optimal hedging
ratio for the individual firm may turn out to be higher or lower than that of the
market maker.
As a result, the notion that one's hedging decision should conform to the majority
in order to minimize the variability of cash flows is generally, but not universally,
valid. It should be a nuanced decision that takes into account factors such as the
level of pass-through of cost shocks and the degree of competition as indicated by
the level of profitability in the industry. It is worthwhile to stress that these nuances
do not even take into account the strategic considerations documented by Adam
et al. (2007) and Mello and Ruckes (2005), who argue that in competitive
environments some firms may strategically reduce their hedge positions, or not
hedge at all, in order to benefit from cash inflows when their competitors experience
cash outflows due to their hedging activities.
Our experimental setup enables us to pinpoint the optimal hedging ratio that
maximizes the risk-adjusted mean cash flow as well as the optimal hedging ratio
that minimizes the cash flow variance. Our simulation results show that these two
optimal hedging ratios behave very similarly under all the different conditions that
we have tested. This result intuitively makes sense as the impact of the hedging
ratio on mean cash flow should be negligible when compared to its impact on cash
flow variance. Hence, we can tentatively conclude that the optimal hedging ratio
that minimizes cash flow variability also maximizes risk-adjusted mean cash flows.
5 Conclusion
Our results confirm the hypothesis that a firm should conform to the majority in
terms of hedging in order to minimize the variability of its cash flows. On the other
hand, the results do not present a simple one-to-one correspondence of hedging
ratios, but, rather, a tendency which might be drastically affected by a number of
factors such as the pass-through of cost shocks to output prices and the degree of
competition in the industry.
Our results indicate that, as the level of pass-through decreases, the optimal
hedging decision for the individual firm to minimize the variability of cash flows
leans toward more hedging. Depending on the level of pass-through, the optimal
hedging ratio for the individual firm may turn out to be significantly higher or lower
than that of the market maker.
Our results indicate that with lower levels of profitability (a proxy for a higher
level of competition), the optimal hedging decision for the individual firm leans
toward more hedging. Similar to the case of pass-through, depending on the degree
of competition, the optimal hedging ratio for the individual firm may turn out to be
higher or lower than that of the market maker.
As a result, the notion that one's hedging decision should conform to the majority
in order to minimize the variability of cash flows is generally, but not universally,
valid. Finally, we can tentatively conclude that the optimal hedging ratio that
minimizes cash flow variability also maximizes risk-adjusted cash flows.
References
Adam TR, Nain A (2013) Strategic risk management and product market competition.
In: Advances in financial risk management. Palgrave Macmillan, London, UK, pp 3–29
Adam T, Dasgupta S, Titman S (2007) Financial constraints, competition, and hedging in industry
equilibrium. J Financ 62(5):2445–2473
Allayannis G, Ihrig J (2001) Exposure and markups. Rev Financ Stud 14(3):805–835
Allayannis G, Ofek E (2001) Exchange rate exposure, hedging, and the use of foreign currency
derivatives. J Int Money Financ 20(2):273–296
Allayannis G, Weston JP (1999) The use of foreign currency derivatives and industry structure.
In: Brown G, Chew D (eds) Corporate risk: strategies and management. Risk Books
Avsar ZM, Baykal-Gürsoy M (2002) Inventory control under substitutable demand: a stochastic
game application. Nav Res Logist 49(4):359–375
Brown GW, Crabb PR, Haushalter D (2006) Are firms successful at selective hedging? J Bus 79
(6):2925–2949
Cachon GP, Netessine S (2004) Game theory in supply chain analysis. Handbook of quantitative
supply chain analysis. Springer, Berlin, pp 13–65
Carter DA, Rogers DA, Simkins BJ (2006) Hedging and value in the US airline industry. J Appl
Corp Finance 18(4):21–33
Fas G, Bilgiç T (2013) A two-period dynamic game for a substitutable product inventory control
problem. Int J Invent Res 2(1–2):108–126
Géczy C, Minton BA, Schrand C (1997) Why firms use currency derivatives. J Financ 52
(4):1323–1354
Khouja M, Mehrez A, Rabinowitz G (1996) A two-item newsboy problem with substitutability. Int
J Prod Econ 44(3):267–275
McGillivray AR, Silver EA (1978) Some concepts for inventory control under substitutable
demand. Infor 16(1):47–63
Mello AS, Ruckes ME (2005) Financial hedging and product market rivalry. Available at SSRN
687140
Genco Fas received his BS degree in Mathematics from Bogazici University (1999), MS degree
in Mathematics from Yeditepe University (2005) and PhD degree in Industrial Engineering from
Bogazici University (2012). Dr. Fas is a faculty member of the Department of Management, Bilgi
University, Istanbul, Turkey, where he is currently an Assistant Professor of Operations Research.
He provides consultancy to banks and nonbank financial institutions. His research interests
include stochastic models, inventory theory, and game-theoretic applications.
Kerem Senel is currently a part-time Professor of Finance at Istanbul Commerce University and
Bogazici University, Istanbul, Turkey. Prof. Senel has a BS in Electrical & Electronics Engineer-
ing from Bilkent University (1992), an MSIA from Carnegie Mellon University (1994), and a PhD
in Management from Istanbul University (2001). His research interests include asset allocation in
pension funds and financial risk management. He is a member of GARP (Global Association of
Risk Professionals) and has been a certified Financial Risk Manager (FRM) since 2004.
Option Strategies and Exotic Options: Tools
for Hedging or Source of Financial
Instability?
Sıtkı Sönmezer
Abstract The development of options markets has been rapid, and constant innovation
has reshaped the financial sector in recent decades. Although this innovation is far
from complete, this chapter aims to address the prominent option strategies and
exotic options. The way they are priced is examined and behavioral approaches are
addressed. The relationship between the speculative nature of exotic options and
financial instability is also discussed.
1 Introduction
Options markets have flourished in recent decades not only because they provide
leverage for their users and help to overcome restrictions, but also because they
help markets approach completeness. These markets develop steadily as the needs
of investors vary and change over time; options markets are therefore structurally
open to innovation and, as a result of changing conditions, numerous new-generation
options and strategies are being formed. Even though option markets are not fully
developed in emerging markets, exotic options on currencies are becoming more
popular, and their volumes are increasing along with the number of their investors.
To increase the number of investors in these markets, the mechanics of these
contracts need to be well explained, and this chapter aims to partly fulfill this
mission. The main constraint here is lack of space: only a few of the strategies and
exotic options are discussed, but the aim is to draw attention to this promising
market.
The first part of this chapter briefly explains option contracts to remind readers of
the concept and to help them follow the strategies and options that come later. This
part concludes with option pricing, where the binomial and Black-Scholes methods
are discussed.
Options strategies are mainly a combination of an underlying asset and options
on the particular underlying assets or are just a combination of options contracts.
S. Sönmezer (*)
Beykent University, Istanbul, Turkey
e-mail: sitkisonmezer@beykent.edu.tr
Investors aim to reap profits by trading on their market sentiment via these
strategies. These strategies help investors make a profit regardless of the market
trend: when the trend is downward, they implement bearish strategies; when the
trend is upward, they implement bullish strategies; and even when investors think
the market may not trend for a period of time, they may find ways to extract profit
from that particular sentiment. These strategies are classified in terms of market
trend. This chapter aims to clarify the prominent strategies in general, how to use
them, and the maximum and minimum loss for each strategy.
Exotic options are relatively new to the literature, and the prominent ones are
discussed in the last part of this chapter. The advantages of these options to
investors and the risks involved are presented briefly, and insight into their pricing
is provided.
2 Option Contracts
Option contracts are agreements in which two parties agree on the following terms
in advance. There has to be an underlying asset, upon which the pricing of the
option contract is determined. A maturity date has to be predetermined; the contract
is settled only at the maturity date for European option contracts, whereas American
options can be exercised at any time up to maturity. A predetermined exercise price,
or strike price, gives the option holder the right to exercise the option at this price.
A premium is paid by the holder (buyer) of the option contract to the writer (seller)
of the contract for the risks undertaken. The premium enables investors to take a
long or short position in an underlying asset without committing the full value of
the position. Thus, leverage in these contracts is another rationale for their
investors.
There are also Asian options, which are so-called exotic options: their payoffs are
determined by the average underlying price over some pre-set period of time and,
as they are path dependent, they are very difficult to price (Abrahamyan and
Maddah 2015).
There are two types of option contracts, namely call and put options. The buyers of
these contracts have the right to walk away if conditions are disadvantageous for
them, but the sellers of these contracts have to honor their obligations.
Call options give the buyer of the contract the right to buy the underlying asset at a
predetermined price, for a predetermined maturity. The writer of the call option
receives a premium from the buyer and has to honor his obligation in return.
Put options give the buyer of the contract the right to sell the underlying asset at a
predetermined price, for a predetermined maturity. The writer of the put option
receives a premium from the buyer and, similarly, has to honor his obligation in
return.
2.1.3 Maximum Profit and Loss for the Parties in Option Contracts
Premiums are paid at the beginning of the contract and are irrevocable. Theoreti-
cally, buyers of call options can have an infinite profit, whereas buyers of put
options can earn the difference between the strike price and the spot price at the
exercise date. It must be noted that the premium is the highest profit the writer can
achieve in return for the huge potential loss inherent in these contracts; therefore,
they are carefully priced by market professionals who are aware of the possible
losses.
There are two main models used to derive option prices: the binomial model and the
Black-Scholes model. Both models aim to calculate the fair (theoretical) value of
the options.
The binomial option model is a discrete-time model introduced by Cox and Ross
(1976). The model is based on the assumption that the price of the underlying asset
will either increase or decrease by a predetermined rate over the period to a
predetermined date. Options are then valued depending on the possible closing
values of the underlying asset. The model is widely used in pricing European and
American options, and it is also referred to as the Cox-Ross-Rubinstein formula.
The model has a few assumptions:
• Markets are perfect. Taxes and commissions are ignored. No short selling limit
and assets are infinitely divisible.
• Single interest rate for borrowing and lending.
• The interest rate and the stock return rates for the period are known.
• There is at least one risky asset and one riskless asset.
• No dividends for the underlying asset.
To illustrate the model, assume a stock with a spot price (S) and an exercise price
(X), and recall that this model assumes a down or an up market in the coming period.
So, in an up market (U), the spot price (S) rises to a new price (US), and in a down
market (D) it decreases to a lower price (DS). Similarly, the call prices will be Cu
and Cd, depending on the movement of the market. In that case, the values of the
call option would be:

Cu = Max[0, US - X]
Cd = Max[0, DS - X]
The value of the call option can then be derived from the one-period risk-neutral
valuation formula:

C0 = [p Cu + (1 - p) Cd] / (1 + r), where p = ((1 + r) - D) / (U - D)

Here, r is the risk-free rate, S0 is the spot price, and p is the risk-neutral
probability of an up move. Cu is the value of the call option in an up market and,
similarly, Cd is the value of the call option in a down market.
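Under illustrative, hypothetical numbers (S = 100, X = 100, U = 1.1, D = 0.9, r = 0.05), the one-period valuation above can be sketched in a few lines of Python:

```python
def binomial_call(S, X, U, D, r):
    """One-period binomial call value via risk-neutral valuation.

    U and D are the gross up/down factors, r the per-period risk-free rate.
    """
    Cu = max(0.0, U * S - X)        # call payoff in the up state
    Cd = max(0.0, D * S - X)        # call payoff in the down state
    p = ((1 + r) - D) / (U - D)     # risk-neutral probability of an up move
    return (p * Cu + (1 - p) * Cd) / (1 + r)

# Hypothetical inputs, for illustration only:
price = binomial_call(S=100, X=100, U=1.1, D=0.9, r=0.05)  # about 7.14
```

Here Cu = 10, Cd = 0 and p = 0.75, so the call is worth 7.5/1.05, approximately 7.14; a multi-period tree simply applies this one-step rule backwards from maturity.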
The Black-Scholes model is a continuous-time approach. The Black-Scholes model
is also used to price European options, and it also has assumptions regarding the
market and conditions. Some are as follows:
• Markets are close to perfect markets
• Single interest rate
• No dividends or interest payments
• Returns are normally distributed
• Short selling is possible
Merton's model helps us to take dividend income into account; otherwise, it is the
same as the Black-Scholes formula. Both use the standard normal cumulative
distribution function to derive the call option price, and put-call parity may then be
used to find the price of the put option (Kolb and Overdahl 2009).
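A minimal standard-library sketch of the Black-Scholes-Merton call price, with the put obtained from put-call parity as described above; the continuous dividend yield q is Merton's extension, and q = 0 recovers the plain Black-Scholes price:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bsm_call(S, X, r, sigma, T, q=0.0):
    """European call under Black-Scholes-Merton.

    S: spot, X: strike, r: risk-free rate, sigma: volatility,
    T: time to maturity in years, q: continuous dividend yield.
    """
    N = NormalDist().cdf
    d1 = (log(S / X) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * N(d1) - X * exp(-r * T) * N(d2)

def bsm_put(S, X, r, sigma, T, q=0.0):
    """Put price obtained from put-call parity."""
    return bsm_call(S, X, r, sigma, T, q) - S * exp(-q * T) + X * exp(-r * T)
```

For example, with the hypothetical inputs S = X = 100, r = 5%, sigma = 20% and T = 1 year, the call is worth about 10.45 and the put about 5.57.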
Option Strategies and Exotic Options: Tools for Hedging or Source of. . . 249
2.3 Greeks
There are certain measures that may help investors to hedge or evaluate risks
involved with option contracts. Some of them are discussed here below:
2.3.1 Delta
Delta is a measure that states the change in the price of the option contract with
respect to a change in the underlying asset. A call option's delta ranges from 0 to 1,
while a put option's delta ranges from -1 to 0. When the delta of a call option on an
underlying asset is 0.70, the option price will increase by 70% of the increase in the
price of the underlying asset.
Delta-neutral hedging enables investors to hedge their portfolios by combining a
long position in an underlying asset with short calls. The number of options to short
is calculated by dividing the number of shares to be hedged by the delta of the
option.
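The sizing rule above can be written out directly; the position sizes here are hypothetical, chosen only to make the arithmetic visible:

```python
def calls_to_short(shares_held, call_delta):
    """Short calls needed for a delta-neutral position.

    The long stock has a delta of +shares_held, and each short call
    contributes -call_delta, so shares_held / call_delta calls offset it.
    """
    return shares_held / call_delta

# Hypothetical position: 700 shares hedged with calls whose delta is 0.70.
n_calls = calls_to_short(700, 0.70)  # about 1000 calls
```

Note that delta itself changes as the underlying moves, so in practice such a hedge has to be rebalanced over time.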
2.3.2 Theta
Theta measures the rate of change of the value of the portfolio as time passes. As
time passes, the option contract moves closer to expiration and the option value
diminishes. That is why theta is always negative (Hull 1989).
2.3.3 Gamma
Gamma is the derivative of delta: it measures the change in delta when the price of
the underlying asset changes. Gamma is largest when the option is at the money,
and the gamma of a position can be negative or positive.
2.3.4 Rho

Rho measures the sensitivity of the value of the portfolio to changes in the interest
rate.
2.3.5 Vega
Vega is the sensitivity of the value of the portfolio to the volatility of the underlying
asset. When the Vega of an option is low, the price of the option may not be much
affected by changes in volatility, and vice versa. Measures derived from Vega,
namely Vanna and Volga, are used in option pricing. Volga is the sensitivity of the
Vega with respect to a change in the implied volatility, and Vanna is the sensitivity
of the Vega with respect to a change in the spot FX rate.
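The Greeks discussed above can be approximated numerically by bumping the Black-Scholes inputs and recomputing the price. This central finite-difference sketch (with hypothetical, illustrative parameters) reproduces the qualitative facts stated in the text, e.g. a call delta between 0 and 1 and a negative theta:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def call_price(S, X, r, sigma, T):
    """Plain Black-Scholes European call price."""
    N = NormalDist().cdf
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - X * exp(-r * T) * N(d2)

def greeks(S, X, r, sigma, T, h=1e-3):
    """Delta, gamma, vega and theta by central finite differences."""
    c = lambda **kw: call_price(**{**dict(S=S, X=X, r=r, sigma=sigma, T=T), **kw})
    delta = (c(S=S + h) - c(S=S - h)) / (2 * h)
    gamma = (c(S=S + h) - 2 * c() + c(S=S - h)) / h ** 2   # curvature of delta
    vega = (c(sigma=sigma + h) - c(sigma=sigma - h)) / (2 * h)
    theta = -(c(T=T + h) - c(T=T - h)) / (2 * h)           # value lost as time passes
    return delta, gamma, vega, theta

delta, gamma, vega, theta = greeks(S=100, X=100, r=0.05, sigma=0.2, T=1.0)
```

Bumped-and-recomputed Greeks like these are also how desks obtain sensitivities, such as Vanna and Volga, for which no convenient closed form is at hand.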
3 Option Strategies
There are combinations and spreads of option contracts that can form a strategy to
reap profits once the direction of prices is foreseen correctly. There are numerous
combinations, such as the straddle, strangle, collar, fence, covered call, married
put, and risk reversal. Similarly, many spreads exist, such as the butterfly, bull,
bear, box, calendar, diagonal, vertical, intermarket, and ratio spreads. Due to lack
of space, this chapter exemplifies a few of them, classifying them by their useful-
ness in trending and non-trending markets.
These strategies aim to benefit from the investor's market sentiment and mitigate
the potential loss of the portfolio. Options can be sold without possessing the
underlying security, which is called naked; options can also be sold while holding
the underlying security in the portfolio, which is called covered in the literature.
In a covered call, an investor sells a call option and, to mitigate the possible risks,
purchases the underlying security. The rationale of a covered call is to earn
premium income from the call option at the expense of a limited upside potential
(Chance 2003).
In a protective put, the investor possesses the underlying security, fears a decrease
in the underlying asset's price, and purchases a put option. The put option serves as
insurance but decreases the profits by the amount of the put premium. Protective
puts are like car insurance.
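The two positions can be compared through their profits at expiry. In this sketch all prices are hypothetical: the covered call's profit is capped once the stock passes the strike, while the protective put's loss is floored below the strike, which is exactly the insurance analogy above.

```python
def covered_call_profit(S_T, S_0, K, call_premium):
    """Long stock bought at S_0 plus a short call struck at K."""
    return (S_T - S_0) - max(0.0, S_T - K) + call_premium

def protective_put_profit(S_T, S_0, K, put_premium):
    """Long stock bought at S_0 plus a long put struck at K."""
    return (S_T - S_0) + max(0.0, K - S_T) - put_premium

# Hypothetical: stock at 100, call struck at 110 sold for 3,
# put struck at 95 bought for 2.
cc_120 = covered_call_profit(120, 100, 110, 3.0)   # capped at 13
cc_150 = covered_call_profit(150, 100, 110, 3.0)   # still 13
pp_60 = protective_put_profit(60, 100, 95, 2.0)    # floored at -7
```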
Bullish strategies pay when the market trend is upward. A buy-and-hold strategy
almost always has a higher payoff relative to these strategies. Despite this fact,
investors prefer these strategies over buy-and-hold because they believe the price
increase will not exceed a certain threshold level, and they would like to sell the
upside potential above that threshold. In other words, investors do not want to buy
an unnecessary bet, and by selling an option contract they reduce their costs.
The buyer of a bull call spread purchases one call option with a lower exercise price
and partially compensates for the premium expense by selling a call option at a
higher exercise price. The investor expects a rise in prices but believes that the rise
may be moderate; therefore, the rise beyond the higher exercise price is foregone in
return for a premium.
The buyer of a bear call spread purchases one call option with a higher exercise
price and partially compensates for the premium expense by selling a call option at
a lower exercise price. The investor aims to benefit from decreasing prices and to
preserve the premium obtained by selling the option; the long call position again
works as insurance for this strategy.
To exemplify a long butterfly spread: the buyer of the spread purchases an in-the-
money call option with a lower exercise price and an out-of-the-money call option
with a higher exercise price and simultaneously sells two call options with strikes
close to the spot price to compensate for the premiums paid for the purchased
options. The buyer foresees that the price will end up near the spot price, in which
case only the option with the lowest exercise price will pay off. The rest of the
options will expire worthless or cost negligible amounts. In short, this strategy pays
when prices do not deviate much from the spot price, whereas the maximum risk
the investor faces is the net premiums paid.
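The butterfly's profile can be checked numerically. In this sketch the strikes (90/100/110) and the net premium of 2 are hypothetical; the profit peaks when the price finishes at the middle strike, and the loss at the extremes equals the net premium paid:

```python
def long_butterfly_profit(S_T, K_low, K_mid, K_high, net_premium):
    """Long one call at K_low and one at K_high, short two calls at K_mid."""
    payoff = (max(0.0, S_T - K_low)
              - 2 * max(0.0, S_T - K_mid)
              + max(0.0, S_T - K_high))
    return payoff - net_premium

# Profit at several hypothetical closing prices:
profits = {S: long_butterfly_profit(S, 90, 100, 110, 2.0)
           for S in (80, 90, 100, 110, 120)}
```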
There are strategies derived from options that lead to profits when prices move
beyond break-even prices. It has to be kept in mind that, by writing these strategies,
investors may bet on non-trending markets as well. Some are introduced below:
3.5.1 Straddle
In a long straddle strategy, the buyer purchases a call and a put option with the
same exercise price and maturity date. The maximum risk of the investor is limited
to the amount paid for the premiums. The investor expects a major price change but
does not want to guess the direction of the price movement. In a straddle strategy,
the investor will earn money when prices move beyond the break-even points.
3.5.2 Strangle
This strategy is close to the straddle; it is almost the same strategy, but in a strangle
investors aim to reduce the premium cost by purchasing one call and one put option
that are not at the money but close to it. The break-even prices will be wider and the
return potential is less than for a straddle, but the cost is also relatively lower.
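The break-even logic for a long straddle can be sketched as follows (the strike and the premiums are hypothetical); profit appears only once the price moves beyond the strike by more than the total premium paid:

```python
def straddle_profit(S_T, K, call_premium, put_premium):
    """Long call plus long put, same strike K and maturity."""
    return max(0.0, S_T - K) + max(0.0, K - S_T) - (call_premium + put_premium)

def straddle_breakevens(K, call_premium, put_premium):
    """The two prices at which the straddle's profit is exactly zero."""
    cost = call_premium + put_premium
    return K - cost, K + cost

# Hypothetical: strike 100, call bought for 4, put bought for 3.
low, high = straddle_breakevens(100, 4.0, 3.0)  # (93.0, 107.0)
```

A strangle works the same way, but with two different strikes; its total premium is lower, so the maximum loss is smaller while the break-even points lie further apart.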
4 Exotic Options
Exotic options are more complex than the regular options traded on exchanges
because their pay-offs may depend on more than one trigger or the underlying
asset may be extraordinary. Examples of exotic options are, namely, Asian, barrier,
basket, binary, chooser, cliquet, commodore, compound, forward start, interest
rate, lookback, mountain range, rainbow, and swaption options. A few of them are
addressed below.
Lookback call options with floating exercise prices give the holder the right to buy
the underlying asset at maturity at the lowest exchange rate over the period
involved, whereas holders of lookback put options can enjoy the highest exchange
rates over the period. Lookback options can have either floating or fixed exercise
prices.
A European fixed-strike currency lookback call option gives the holder the right
to demand the difference between the highest value of the underlying asset over the
period and the strike price, whereas a European fixed-strike currency lookback put
option pays the difference between the strike price and the minimum value over the
period.
Regarding the pricing of these options: under the continuous-time assumptions
of Black-Scholes, Conze and Viswanathan (1991) improved the analytical
pricing formulas offered by Goldman et al. (1979) for these contracts; Babbs
(1992) introduced a continuous-time model based on Brownian motion for
floating-strike lookback options; and Cheuk and Vorst (1997) offered a one-step
variable binomial model for lookback options.
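Because discretely monitored lookbacks are path dependent, Monte Carlo simulation is a common workhorse alongside the analytical and binomial approaches cited above. The sketch below prices a European fixed-strike lookback call under risk-neutral geometric Brownian motion; all parameter values are hypothetical, and the estimate is rough (discrete monitoring undervalues the continuously monitored contract):

```python
import random
from math import exp, sqrt

def mc_fixed_strike_lookback_call(S0, K, r, sigma, T,
                                  steps=100, paths=5000, seed=42):
    """Pays max(0, max_t S_t - K); paths simulated under risk-neutral GBM."""
    rng = random.Random(seed)
    dt = T / steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * sqrt(dt)
    total = 0.0
    for _ in range(paths):
        S = S_max = S0
        for _ in range(steps):
            S *= exp(drift + vol * rng.gauss(0.0, 1.0))
            if S > S_max:
                S_max = S          # track the running maximum of the path
        total += max(0.0, S_max - K)
    return exp(-r * T) * total / paths
```

With S0 = K = 100, r = 5%, sigma = 20% and T = 1, the estimate should come out well above the vanilla Black-Scholes call value of about 10.45, reflecting the extra optionality of "selling at the high".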
In barrier options there is a strike level and a barrier level; when the barrier level is
crossed, a rebate may be paid. A barrier may be set above the spot price (an up
barrier) or below the spot price (a down barrier). A European knock-in option pays
off when the option has value at expiration and a certain barrier has been crossed
over the life of the option contract; if the underlying never crosses the barrier, the
option becomes worthless, and when it does, it becomes a regular option. A knock-
out option pays off only when the option is in the money and the barrier has not
been crossed; once the barrier is passed, the option is knocked out. In short, barrier
options can have four forms: up-and-out, up-and-in, down-and-out, and down-
and-in.
Barrier option premiums are generally cheaper than those of standard options with
the same features. An investor may elect to purchase a barrier option rather than a
plain call or put option in order to reduce premium expenses. With barrier options,
investors sell off the scenarios that they believe are unlikely to happen.
Regarding the pricing of these contracts, it is easy to notice that they are more
complicated than standard option contracts. A call option's premium is almost
always positively affected by a price increase in the underlying asset, whereas in an
up-and-out barrier option there are two opposing effects on the value of the option:
on the one hand, if the value increases toward the barrier level, the likelihood that
the option will be worthless increases; on the other hand, the call feature embedded
in the up-and-out option requires a price increase to finish in the money. When the
price is too close to the barrier, referred to as the barrier-too-close problem, pricing
is even harder. Lattice approaches are used to price barrier options and are favored
for their flexibility; the other approach is the analytical approach, favored for its
efficiency (Tse et al. 2001).
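The opposing effects described above can be seen directly in a simulation. This Monte Carlo sketch prices an up-and-out call under risk-neutral GBM with discrete barrier monitoring (all parameters hypothetical): a lower barrier knocks out more of the profitable paths and therefore cheapens the option, consistent with barrier options being cheaper than otherwise identical standard options.

```python
import random
from math import exp, sqrt

def mc_up_and_out_call(S0, K, B, r, sigma, T,
                       steps=100, paths=5000, seed=7):
    """Up-and-out European call: the payoff max(0, S_T - K) is lost
    if the simulated path ever reaches the barrier B."""
    rng = random.Random(seed)
    dt = T / steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * sqrt(dt)
    total = 0.0
    for _ in range(paths):
        S = S0
        alive = True
        for _ in range(steps):
            S *= exp(drift + vol * rng.gauss(0.0, 1.0))
            if S >= B:          # barrier crossed: option is knocked out
                alive = False
                break
        if alive:
            total += max(0.0, S - K)
    return exp(-r * T) * total / paths

# Hypothetical: same option with a tight (115) and a loose (130) barrier.
tight = mc_up_and_out_call(100, 100, 115, 0.05, 0.2, 1.0)
loose = mc_up_and_out_call(100, 100, 130, 0.05, 0.2, 1.0)
```

Both estimates stay below the corresponding vanilla call value (about 10.45 with these parameters), and the tighter barrier produces the cheaper option.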
Tools for Hedging or Source of Financial Instability?
Derivative markets are a good vehicle for investors aiming to better hedge their
financial positions, as they decrease premium costs substantially. Despite this
benefit, the risks involved with these contracts are not apparent in financial state-
ments, and the unrealized gains and losses from these contracts have to be well
assessed to determine the financial strength of institutions. The 2008 financial crisis
underlined the importance of closely monitoring financial institutions and their
derivatives positions.
Option strategies and exotic options are widely used in developed countries, and
they are increasing their presence in developing countries day by day. A mecha-
nism may be needed in the future to determine the possible losses in these
arrangements accurately and in a timely manner. In order to determine the loss,
the methods that estimate option prices must be accurate. However, many studies
show that rational option pricing models are insufficient to value these contracts
(Bondarenko 2003; Constantinides et al. 2009), and the market sentiment of
investors must be taken into account to obtain more realistic results (Han 2008).
An asset price may form a bubble, and supervisory bodies may have increasing
concerns regarding the burst of the bubble for any reason. Regarding risk, option
prices move even faster than the underlying prices, which may lead to price
instability and financial stress when the loss is substantial. The case may be more
dramatic for exotic options, as lookback option prices are more sensitive to bubbles
(Heston et al. 2007).
The Financial Instability Hypothesis (FIH) asserts that profits are driven by
aggregate demand and that, when speculative finance dominates the markets
instead of hedge financing, the likelihood that the economy deviates from equilib-
rium prices increases (Minsky 1992). By intuition, the fact that futures markets are
not fully developed in emerging markets may be evidence of unawareness of
hedging techniques or of distrust in these markets despite the existence of a clearing
house. Investors who refrain from hedging their positions in futures markets will
hardly use exotic options to hedge. In that case, this chapter argues that, especially
in emerging markets, the usage of exotic options is more speculative than hedging-
oriented. When exotic options markets mature in emerging markets, based on the
FIH, the increasing usage of these contracts will eventually distort equilibrium in
the economy unless close supervision and adequate regulations are in place.
Regarding bank profits, exotic option trades are mostly off-balance-sheet activities,
and banks are almost always the counterparty of these trades for investors in
emerging markets. Once the volume in these instruments reaches a non-negligible
point, the risk in the banking sector may also threaten the financial stability of these
economies.
5 Conclusion
By intuition, these markets will attract more investors as they learn about them.
The difference between betting and investing in these contracts has to be well
explained to investors who need to hedge their positions and who are not willing to
pay unnecessary premiums for unlikely conditions when they purchase standard
options. The breadth and depth of these markets are not sufficient in emerging
markets to easily find a counterparty; therefore, financial institutions, mainly banks,
fulfill this mission. Financial literacy should be encouraged for these markets to
prosper, but the question regarding the risk these contracts introduce to emerging
markets still remains.
Despite their additional hedging advantages, these contracts may lead to banking
crises unless they are monitored well. As these contracts are speculative in nature,
when the financial instability hypothesis holds, the risk arising from these contracts
may not be determined by conventional methods such as the Black-Scholes
method, as they do not take behavioral issues into consideration, and risks that are
hard to quantify may be a threat to economies. Since the volume in these contracts
is relatively small in emerging markets, the so-called risks may seem negligible,
but proactive measures may be necessary before these contracts become a center of
attention in emerging markets. Limits on the positions taken in these contracts and
disclosure requirements may be dynamically monitored to avoid financial
instability.
References
Abrahamyan H, Maddah B (2015) Pricing Asian options via compound gamma and orthogonal
polynomials. Appl Math Comput 264:21–43
Babbs S (1992) Binomial valuation of lookback options. Working paper. Midland Global Markets,
London
Bondarenko O (2003) Why are put options so expensive? Q J Financ 4(3)
Chance D (2003) Risk management applications of options strategies. In: Analysis of derivatives
for the CFA program, Ch 7. CFA Institute
Cheuk THF, Vorst TCF (1997) Currency lookback options and observation frequency: a binomial
approach. J Int Money Financ 16(2):173–187
Constantinides GM, Jackwerth JC, Perrakis S (2009) Mispricing of S&P 500 index options. Rev
Financ Stud 22(3):1247–1277
Conze A, Viswanathan (1991) Path dependent options: the case of lookback options. J Financ
46:1893–1907
Cox JC, Ross SA (1976) The valuation of options for alternative stochastic processes. J Financ
Econ 3:145–166
Goldman MB, Sossin HB, Gatto MA (1979) Path dependent options: buy at the low, sell at the
high. J Financ 34:1111–1127
Han B (2008) Investor sentiment and option prices. Rev Financ Stud 21(1):387–414
Heston SL, Loewenstein M, Willard GA (2007) Options and bubbles. Rev Financ Stud 20
(2):359–390
Hull J (1989) Options, futures, and other derivative securities. Prentice Hall, Upper Saddle River,
NJ, pp 194–195
Kolb RW, Overdahl JA (2009) Financial derivatives: pricing and risk management, Ch 26. Wiley,
New York, pp 371–387
Minsky HP (1992) The financial instability hypothesis. The Jerome Levy Economics Institute
Working Paper No. 74
Tse WM, Li LK, Ng KW (2001) Pricing discrete barrier and hindsight options with the tridiagonal
probability algorithm. Manag Sci 47:383–393
Ali Görener
Abstract Auditing has long been an integral part of many activities in business life. Fundamental changes have occurred in the concept of audit for several reasons, including accounting scandals, changes in management mentality, technological developments and legal regulations, especially in recent years. Along with these changes, the risk-based audit approach, which focuses on uncovering the risks of a business and on how those risks are managed, has developed beyond the reliance on prior-period data envisaged by the traditional audit approach.
Risk-based audit is a process comprising important stages such as the identification, classification and measurement of risks and the determination of their weights. At the end of this process, ranking the risks identified for the business by their probability of occurrence makes it possible to know to what extent emphasis should be placed on which risks.
1 Introduction
An audit is the process of collecting and evaluating objective evidence in order to investigate the degree of compliance of the financial transactions, economic activities and events of a particular economic unit with predetermined criteria, and of informing the relevant parties of the results obtained (Durmuş and Taş 2008).
Information users who take economic decisions need various financial and non-financial information about the businesses they are concerned with. Whether this information is accurate and reliable can be revealed only by auditing.
A. Görener (*)
Department of Business Administration, Istanbul Commerce University Sutluce Mahallesi,
Imrahor-Caddesi, No: 90, Beyoglu 34445, Istanbul, Turkey
e-mail: agorener@ticaret.edu.tr
Nowadays, the audit function examines the records and documents used in the preparation of financial statements, which are substantially produced by an accounting system, and determines the accuracy of transactions by investigating their compliance with accounting principles and rules. Moreover, it presents the information and findings obtained to those concerned by preparing a report as a result of these studies (Durmuş and Taş 2008). The features of an audit can be listed as follows:
• The audit covers the information of a particular economic unit and period.
• The audit is a process.
• The audit deals with the accuracy and reliability of information.
• The audit is an act of comparison.
• The audit is an act of evidence collection and assessment.
• The audit is performed by an expert and independent person.
• A report is issued as a result of audit work.
The properties listed above show that auditing is a systematic process consisting of a series of stages.
2 Audit Types
Audit of financial statements is the most widespread type of audit. The purpose of
this type of audit is to investigate whether financial statements are prepared in
accordance with predetermined criteria.
The information presented in financial statements embodies the claims of the enterprise's management about the financial situation and the results of operations of the enterprise. The correctness of these claims is especially important for external information users. However, not only the personal needs of external information users but also the needs of all interest groups are taken into consideration in this type of audit.
In a compliance audit, whether the rules determined by higher authorities inside or outside the enterprise are complied with is investigated. The audit of whether internal rules are fulfilled is performed by internal auditors, whereas the audit of compliance with the laws and regulations of higher authorities outside the enterprise is performed by persons outside it, such as government auditors.
Risk Based Internal Audit 263
Operational audit is also called efficiency audit. The efficiency of the enterprise's operations is assessed by reviewing its policies, and recommendations for improving operations and their results are reported to management.
The investigations conducted in an operational audit are not limited to accounting information; many functions such as marketing, production, logistics and management may also be examined. In practice it is generally performed by internal auditors, and it can measure whether operating results achieve predetermined targets or standards.
A statutory audit must be performed in line with legal provisions. For example, enterprises subject to CMB supervision and banks subject to BRSA supervision are obliged to undergo independent external audit.
A voluntary audit is one that enterprises commission of their own accord, although there is no legal obligation to do so.
The annual audit is the auditing of the year-end financial statements of public companies, other companies subject to CMB supervision, banks and insurance companies in accordance with generally accepted auditing standards.
The interim audit is conducted by audit firms that perform annual audits, in the form of auditing the interim financial statements of certain companies during interim periods.
A special audit is performed when enterprises go through situations such as liquidation, merger, transfer or splitting, or when they go public for the first time.
An external audit is performed by persons or institutions outside the enterprise that have no relation to it. Its subject consists of accounting data: whether the information recorded reflects the financial and commercial processes of the relevant accounting period is assessed within the framework of generally accepted accounting principles.
An internal audit is performed by people who work permanently in the enterprise or institution, called internal auditors. In this type of audit, the operations of the enterprise are examined in all aspects and reported to senior management.
A public audit is performed in the name and for the benefit of the public by persons who receive their duties and authority from the law. In these audits, the compliance of institutions and organizations with legislation, the economic policy of the state and the public interest is audited.
Internal audit is an independent and objective activity that aims to improve an institution's activities and add value to them. It helps the institution achieve its objectives by bringing a systematic and disciplined approach to assessing and improving the effectiveness of its risk management, control and governance processes (Institute of Internal Auditors Research Foundation 2009).
As a result of internal audit, the management of the institution obtains assurance and counsel about whether resources are used effectively, economically and efficiently in accordance with the institution's objectives and targets, whether its activities comply with the relevant legislation, whether its assets are protected, to what extent its current internal controls are sufficient, and how reliable the information it produces is.
Internal auditing became a modern profession with the establishment of the Institute of Internal Auditors in the US in 1941. Since then, different definitions have been given of internal audit, whose original function was the provision of objective information, and its focal point has constantly changed and developed. The institutionalization of the Institute of Internal Auditors, its growing influence as its membership increased, and its publication of standards and codes of conduct for internal audit practice have also contributed to the development of internal audit (Pehlivanlı 2010).
Another institution that is important for internal audit practice is the Committee of Sponsoring Organizations of the Treadway Commission (COSO). COSO is an internationally recognized organization established with the support of the Institute of Internal Auditors, the American Institute of Certified Public Accountants, the American Accounting Association, the Institute of Management Accountants and Financial Executives International. The publication by COSO of the "Internal Control Framework" in 1992 and the "Corporate Risk Management Framework" in 2006 helped overcome the obstacles and crises facing the development of the internal auditing profession (Pehlivanlı 2010).
In practice, internal audit includes five main areas of activity (Alptürk 2008):
• Financial Audit: the assessment of whether the data in financial reports and the assets and liabilities of the audited unit are consistent with their real values, sources of financing, the management of the assets and the budget allowances allocated.
• Compliance Audit: the investigation of whether the financial processes and other activities of an organization comply with the determined methods, rules and legislation.
• Performance Audit: the assessment of the affordability, effectiveness and efficiency of the physical, financial and human resources used by the institution or organization in performing its duties.
The most basic benefits of modern internal audit can be listed as follows (Pehlivanlı 2010):
• The need for reliable information is fulfilled. The effectiveness of internal audit is directly related to the effectiveness of the internal control system: if there is an effective internal control environment in the enterprise, internal audit will also be highly effective. Consequently, this facilitates access to reliable information for the parties that want to obtain information about the enterprise.
• The need for the protection of the enterprise's assets and records is fulfilled. Where there is an effective internal audit system, the assets and records of the enterprise are protected to a high degree.
• The need for increasing efficiency is fulfilled. The scope of internal audit also includes the effectiveness of activities; inefficient activities can be eliminated where internal audit works effectively.
• Adaptation to the policies determined by senior management is ensured.
Uncertainty is defined as the possibility that more than one result of a decision can occur while the occurrence probabilities of these results are not known at all (Merna and Al-Thani 2005).
Risk consists of two components: uncertainty and effect. While uncertainty may have no effect in certain situations, the effect is the most important result of risk. Furthermore, while the probability of an incident occurring under uncertainty is not known, probabilities may be known where risk is in question. For these reasons, it is incorrect to treat the concepts of uncertainty and risk as the same (Pehlivanlı 2010) (Table 1).
Institutions operating for a particular purpose carry out their activities in a particular risk environment. While they cannot eliminate risks completely, they can keep them at a reasonable level, and the most effective ways of doing so are control and audit. The controls and audits carried out during these activities can both reduce risks to acceptable levels and contribute to the institution's achievement of its objectives. Although risks take many forms, some of the risks that institutions may encounter can be listed as follows:
• Market Risk: results from changes in market prices, such as interest rate risk, currency risk, commodity price risk and energy price risk.
• Credit Risk: the risk that results from the possibility that debtors fail to pay their debts.
• Operational Risk: beyond the risks based on errors, faults and abuses in the performance of an operation, any risk that may arise within the framework of organization, workflow, technology and human resources, that causes the institution material or intangible loss, that remains outside credit and market risk, and that can be measured statistically on the basis of past data (Kishali and Pehlivanlı 2010).
• Reputation Risk: Risks that may occur in case the enterprise loses its reputation.
• Legal Risks: The risks that result from the acts of the enterprise that do not
comply with the law.
• Macroeconomic Risks: The risks that may occur due to macroeconomic changes.
• Strategic Risks: The risks that may occur due to incorrect strategic decisions.
• Country Risk: The risks that result from unexpected economic and political
changes in other countries.
Risk management is an important concept that was born and developed in the first half of the 1970s and has affected the evolution of market economies. It can be defined as a process of identifying, assessing and managing risks (Merna and Al-Thani 2005). In this process, enterprises should assess and manage emerging risks as well as possible in order to fulfill their aims and ensure continuity.
Globalization has rapidly increased the volume of financial transactions, and an environment of varied financial risks has emerged. In parallel with these developments, serious problems occurred for both banks and financial systems, and the crises that broke out from time to time imposed serious economic and social costs (Bolgün and Akçay 2009).
With these crises, interest in corporate risk management increased, and in 2006 COSO published a detailed guide called the Corporate Risk Management Framework. This framework provides a guide that can be applied as a standard at international scale, and it is a model that can be shaped according to the different needs and properties of enterprises and adapted to them (Pehlivanlı 2010).
Corporate risk management is a process, structured throughout the institution and carried out by its management and other employees, that identifies potential events that could affect the institution's achievement of its targets and provides reasonable assurance of achieving those targets within the limits of the institution's risk appetite. The main elements that emerge from this definition can be listed as follows (COSO 2010):
• It is a continuous, ongoing process.
• It is carried out by employees in all parts of the institution.
• It is applied within the framework of the strategy.
• It is applied at all levels and in all sections of the institution.
• It does not require risks to be eliminated completely.
• It provides a reasonable level of assurance that the institution will reach its targets.
For the expected benefit to be obtained from the corporate risk management process, it is important that the system works effectively. Assessing the system's effectiveness and taking the necessary precautions play a big role in monitoring the process.
Carrying out the audit on a risk basis, rather than as a limited or person-based exercise, provides opportunities for better management of the institution in the future (Kaya 2010). The risk-based audit approach was first adopted by the OCC in the US in 1995. Three main developments led to this (Özsoy 2004; Eşkazan 2005):
• Developments in technology, together with financial theories and practices, expanded the type and scope of institutions' activities.
• The spread of derivative and other complex financial products, creativity in derivatives, the increase in commercial activities and asset-backed securities, and the development of secondary markets considerably changed the financial system.
• The consolidation of the US banking system in the 1990s produced an increased number of big and complex banks.
The use of risk-based audit practices in the banking system has become more
widespread over time, and it started to be used in other sectors in different forms in
the following periods.
Risk-based internal audit rests on the assumptions that audit resources are limited, that the activities of the units to be audited face different risks, and that those activities have different levels of importance. In light of these assumptions, and in accordance with the institution's objectives, the auditor makes and implements risk-based plans that set the priorities of internal audit activities. Accordingly, the scope of risk-based internal audit can be listed as follows (Pehlivanlı 2010):
• The examination and assessment of the sufficiency and effectiveness of the internal control system,
• The examination of the implementation and effectiveness of risk management methods and risk assessment methodologies,
• The review of management and financial information systems, including the electronic information system and electronic services,
• The examination of the correctness and reliability of accounting records and financial statements,
• The examination of the system for assessing the enterprise's own capital in line with risk estimations,
• The auditing of the operation of both the processes and a particular internal control system,
• The examination of compliance with the conditions, ethical rules, policies and methods of the legal and regulatory authorities,
• The control of the accuracy, reliability and timeliness of regulatory reporting.
The changes that result from risk perception in risk-based internal audit extend to all areas of the audit. First, the risk profile is revealed; then subjects such as the scope, content and timing of the audit procedures and the allocation of resources are shaped according to that profile. A typical risk-based audit must therefore include the stages of risk identification and risk assessment in order to reveal the profile. These two stages are very important for risk-based audit and should be performed thoroughly (Kishali and Pehlivanlı 2006).
With risk-based internal audit, the retrospective point of view of auditing changed, and audit started to focus on the future. The internal auditor now takes into consideration all details that may prevent the institution from achieving its targets by focusing on incidents that may happen in the future rather than on past events.
While in the traditional audit understanding the auditor focuses on past activities and tries to reveal incorrect activities that have taken place, in risk-based internal audit the aim is to prevent faulty activities from occurring. Traditional and risk-based internal audit are compared in the table below (Table 2):
In risk-based internal audit, the auditor focuses on identifying current and future risks rather than being constantly occupied with internal control. While risk factors exist in both types of audit, the traditional audit understanding concentrates on inherent risk, control risk and detection risk, whereas risk-based internal audit also addresses the institution's own business risks in addition to these.
While in traditional internal audit the auditor spends most of his working time on details related to planning, technique and the internal control system, in risk-based internal audit the working time is spent on understanding the institution's business processes, its business risks and the management of those risks.
In traditional internal audit, the auditor constantly focuses on the internal control system and gives advice on whether it works effectively and whether cost-benefit effectiveness is ensured. In risk-based internal audit, subjects such as risk diversification, risk avoidance and risk transfer are emphasized instead. In traditional internal audit the internal auditor acts as an independent auditor within the audited institution, examining accounting data, assessing the internal control system and overseeing activities. In risk-based internal audit, by contrast, the auditor is partly on the same side as the institution, since he regularly examines the systems developed for measuring the risks the institution may face and makes the necessary recommendations to management.
The risk-based internal audit process consists of closely related stages. As the process operates as a whole, each stage contributes to the shaping of the next.
Risk assessment, which is also a component of the internal control framework published by COSO, is recommended at the audit planning stage in internal audit standards. It consists of measuring and ranking previously identified and classified risks that would prevent the institution from achieving its targets. As a result of risk assessment, the auditor can apply the tests in the audit program to the important control points. The methods used in risk assessment are determined according to the structure of the organization and its business processes. Risks are measured in terms of their probabilities and effects with the help of qualitative or quantitative methods, or a mix of the two, and the measured risks are then ranked with the help of a risk matrix (Pehlivanlı 2010).
Qualitative assessment techniques are used when the potential probability and effect are low or when numerical data and a quantitative assessment expert are not available. With these techniques, potential effect levels and their probability of occurrence can be expressed through the personal judgment of the person conducting the analysis.
Main qualitative assessment techniques can be listed as follows (Merna and
Al-Thani 2005):
• Brainstorming
• Presumption Analysis
• Delphi Analysis
• Interview Technique
• Checklists
• Risk Recording
• Risk Mapping
• Possibility-Effect Tables
The selection among these risk assessment techniques varies depending on the structure of the institution. If the institution is unwilling to take risks, i.e. its risk appetite is low, the assessment will point toward avoiding risky activities. If the institution is open to risks and tries to turn them into opportunities, the opposite result will be obtained.
In quantitative analysis, effect and probability estimates are expressed as numerical values using data sources. The quality of the results depends on the accuracy and integrity of the data used and on the validity of the model (Arslan 2008).
Potential effects can be found by modeling the results of a particular event or series of events, or statistically from past studies or events. The effect can be expressed in terms of money, a technical measure, harm to a person or another damage criterion. In certain cases it may be necessary to use more than one numerical value to determine the risk level of the same incident (Arslan 2008).
Quantitative assessment techniques are generally computer-based. The main ones are (Merna and Al-Thani 2005):
• Decision Tree Technique
• Monte Carlo Simulation
• Sensitivity Analyses
• Possibility-Effect Analyses
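As an illustration of the Monte Carlo technique listed above, the following sketch estimates the loss distribution of a single risk. The event probability and the loss parameters are illustrative assumptions, not values taken from the chapter.

```python
import random
import statistics

def simulate_annual_loss(p_event=0.1, loss_mean=100_000.0, loss_sd=30_000.0,
                         trials=100_000, seed=42):
    """Estimate the annual loss distribution of one risk by Monte Carlo.

    Each trial: the risk event occurs with probability p_event; if it
    occurs, the loss is drawn from a normal distribution (floored at
    zero). All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        if rng.random() < p_event:
            losses.append(max(0.0, rng.gauss(loss_mean, loss_sd)))
        else:
            losses.append(0.0)
    expected = statistics.mean(losses)           # expected annual loss
    p95 = sorted(losses)[int(0.95 * trials)]     # 95th-percentile loss
    return expected, p95

expected, p95 = simulate_annual_loss()
```

The expected loss converges to roughly the event probability times the mean loss, while the 95th-percentile figure gives the kind of "worst plausible outcome" that quantitative risk assessments typically report.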
There is a consensus in the literature that data obtained with qualitative assessment techniques are more practical than data obtained with quantitative techniques, and that qualitative methods should be preferred, especially at the beginning (Pehlivanlı 2010).
Risks are generally analyzed by their probability of occurrence and the extent to which they affect the institution in which they emerge. Risk ranking is based on combining each risk's probability with its effect.
The theoretical model that emerges in risk-based audit processes and the result
obtained from this model can be formulated as follows.
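In its most common form, combining probability and effect means taking their product; this reconstruction assumes the standard risk-scoring model rather than an institution-specific formula:

```latex
\text{Risk score} = \text{Probability} \times \text{Impact}
```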
Probability is generally related to time and indicates the frequency with which events occur. In the simplest case risk probabilities are classified as low, middle and high, and the risk effect may similarly be classified as small, middle or heavy. Because the limits of such scales are subjective and may vary among individuals, they may create distrust of the assessment and consequently of the risk matrix. A five-point scale is generally preferred to prevent this.
The five-point scale used to assess the effect dimension of risks can range over not important, small, important, serious and destructive, while the five-point scale used for probability can range over very low, low, middle, high and very high. Certain risks have a high effect but a low probability of occurrence; others have a low effect but a high probability. The more carefully these matters are determined during risk assessment, the better the risk attitude of the organization can be established. The information sources used in determining the effects and probabilities of risks include past records, practical experience, relevant published sources, market research, voting results, economic, technical or other models, and expert opinions (Pehlivanlı 2010).
The probability and effect results of risks assessed using qualitative and quantitative techniques are shown in a risk matrix. Risk matrices can be formed in different ways; a simple matrix in which probability and effect are shown separately can be presented as follows (Table 3).
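The kind of matrix described, two five-point scales combined into a rating, can be sketched as follows. The scale labels mirror those in the text; the score bands that map matrix cells to low/medium/high ratings are illustrative assumptions.

```python
# Five-point scales as described in the text (1 = lowest, 5 = highest).
PROBABILITY = {"very low": 1, "low": 2, "middle": 3, "high": 4, "very high": 5}
IMPACT = {"not important": 1, "small": 2, "important": 3,
          "serious": 4, "destructive": 5}

def risk_rating(probability: str, impact: str) -> str:
    """Place a risk in the 5x5 matrix and map its cell to a rating band.

    The cell score is probability x impact (1..25); the band boundaries
    below are illustrative assumptions, not taken from the chapter.
    """
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A low-probability but destructive risk still earns attention:
risk_rating("low", "destructive")    # score 2 * 5 = 10 -> "medium"
risk_rating("very high", "serious")  # score 5 * 4 = 20 -> "high"
```

Ranking the risk register by this score gives the ordering that determines where audit effort is concentrated first.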
The probabilities and effects of risks are combined using simple average or weighted average methods. In the simple average, the sum of the probability and effect scores is divided by two. The view that the effect factor is more important has brought the weighted average into use.
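The two averaging methods just described can be sketched as follows; the 60/40 weighting is an illustrative assumption, since the text only says that the effect factor is considered more important.

```python
def simple_average(probability: float, impact: float) -> float:
    """Combine probability and impact scores by simple average."""
    return (probability + impact) / 2

def weighted_average(probability: float, impact: float,
                     impact_weight: float = 0.6) -> float:
    """Weighted combination giving more weight to the impact score.

    The 60/40 split is an illustrative assumption.
    """
    return (1 - impact_weight) * probability + impact_weight * impact

# For a risk scored probability 2, impact 5 on five-point scales:
simple_average(2, 5)    # (2 + 5) / 2 = 3.5
weighted_average(2, 5)  # 0.4 * 2 + 0.6 * 5 = 3.8
```

Weighting the impact side pushes high-impact, low-probability risks further up the ranking than a simple average would.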
The risk matrix is constructed from the voting results obtained during a risk workshop. Nowadays risk workshops are generally performed with the help of computer software, often specially prepared for the institution's needs. Risk recording is the preparation of a list of the risks that emerge from risk assessment and that would prevent the institution from achieving its targets. Since risk management and control are duties of the administration, it should be directly involved in recording the risks that are identified and in removing from the records risks with no probability of occurrence (Griffiths 2006; Pickett 2006).
6 Conclusion
Risk management and internal audit have become among the most important subjects that institutions must address in carrying out their activities. Risk management processes and internal audit activities should be taken seriously, and the relevant tools used effectively, if institutions are to achieve their targets.
Nowadays, integrated risk management systems, in which all of an institution's risks are gathered under a single roof, have become widespread, and the importance of calculating the probability that risks will occur and the extent to which they would affect the institution has gradually increased. The use of various tools to strengthen risk management processes and ensure their effectiveness is among the important reasons for the development of the risk-based internal audit approach. Internal auditors suggest improvement measures in the reports they present and contribute significantly to administrators by assessing the sufficiency and effectiveness of the risk management processes implemented by institutions.
The gradual spread of risk-based internal audit activities has made it necessary for auditors to improve their professional knowledge and skills in these areas, and it has become more important for internal auditors to set aside traditional audit methods in favor of approaches that make greater use of information technologies and creativity.
References
Alptürk E (2008) Finans, muhasebe ve vergi boyutlarında iç denetim rehberi. Maliye ve Hukuk
Yayınları, Ankara
Arslan I (2008) Kurumsal Risk Yönetimi. Maliye Bakanlığı Strateji Geliştirme Başkanlığı, Yayınlanmamış Uzmanlık Tezi, Ankara
Bolgün E, Akçay MB (2009) Risk Yönetimi. Scala Yayıncılık, Istanbul
Durmuş CN, Taş O (2008) Denetim. Alfa Yayınları, Istanbul
COSO (2010) Enterprise Risk Management Integrated Framework. Committee of Sponsoring Organizations of the Treadway Commission. Accessed 21 Oct 2015. http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf
Eşkazan AR (2005) Risk Odaklı İç Denetim Planlaması. İç Denetim Dergisi, Bahar, pp 32–33
Griffiths DM (2006) Risk based internal auditing: an introduction. Accessed 21 Feb 2015. http://www.internalaudit.biz/files/introduction.Internalauditv2_0_3.pdf
Griffiths MP (2012) Risk-based auditing. Gower, London
Institute of Internal Auditors Research Foundation (2009) International Professional Practices Framework (IPPF). Accessed 11 Apr 2015. http://www.theiia.org/guidance/standards-and-guidance/ippf/definition-of-internal-auditing/
Kaya HA (2010) İç denetim. In: Sezgin S, Yaraşır S, Deyneli F, Teke E (eds) 20. Türkiye Maliye Sempozyumu Bildiriler Kitabı, pp 97–106
Kishali Y, Pehlivanlı D (2006) Risk Odaklı İç Denetim ve İMKB Uygulaması. Muhasebe ve
Finansman Dergisi 30:75–87
Merna T, Al-Thani FF (2005) Corporate risk management. Wiley, Hoboken, NJ
Özsoy MT (2004) Risk Odaklı Denetim: ABD Uygulaması ve Türkiye Açısından Değerlendirilmesi. Active, Mart–Nisan
Pehlivanlı D (2010) Modern iç denetim uygulamalari. Beta Yayınları, İstanbul
Pickett KS (2006) Audit planning: a risk-based approach. Wiley, Hoboken, NJ
Structured finance is a system comprising tools and highly complex financial transactions created through financial engineering. Structured systems are developed to meet unique financial needs of companies or investors that cannot be fulfilled by traditional financial products. In this sense, structuring is a way of creating hybrid financial securities by developing a system that makes credit risk transfer possible via securitization and the use of derivatives for optimal financing or investing.
S. Gokten (*)
Faculty of Economics and Administrative Sciences, Department of Management, Baskent
University, Baglica Campus, 06810 Baglica, Ankara, Turkey
e-mail: sgokten@baskent.edu.tr
P. Okan Gokten
Faculty of Economics and Administrative Sciences, Department of International Trade, Gazi
University, 06500 Besevler, Ankara, Turkey
e-mail: pinar.okan@gazi.edu.tr
Fig. 2 Overview of credit risk transfer instruments. Source: Jobst, A.A. (2007) ‘A Primer on
Structured Finance’ Journal of Derivatives and Hedge Funds, 13 (3), p. 5
280 S. Gokten and P. Okan Gokten
Blundell-Wignall and Atkinson (2009) attributed the global financial shock to global macro liquidity policies and to a very poor framework for the incentives of financial sector agents, conditioned by bad regulations, tax systems and governance standards. They described the financial crisis as follows: "The liquidity policies were like a dam overfilled with flooding water. Global liquidity distortions, including interest rates at 1 % in the United States and 0 % in Japan, China's fixed exchange rate and recycling of its international reserves, and the Sovereign Wealth Funds (SWF) investments, all helped to fill the dam to overflowing. That is how the asset bubbles and excess leverage got under way."
Expansionary monetary policy was applied in the US to restart the wheels of the economy following the dotcom crash of 2000 and the 9/11 attacks of 2001. The federal funds target reached a record low of 1 % in June 2003; after remaining at that level for a year, it moved upward and reached 5.25 % in June 2006. Meanwhile, the increase in money supply caused a sharp and continuous increase in consumer borrowing (see Fig. 3). This created a need for banks to provide liquidity; at first banks relied on overnight repos for funding, the volume of which doubled. Later, sustainable lending could only be maintained by making greater use of structured finance techniques. The growth of subprime mortgages played a critical role here: the face value of mortgages outstanding reached $2.75 trillion in 2007, of which $1.25 trillion were subprime mortgages, $1 trillion Alt-A debt and $500 billion jumbo ARMs (Zandi 2008, p 44).
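As a quick arithmetic check on these figures (an illustrative sketch, not from the source), the three components do sum to the stated total:

```python
# Sanity check of the Zandi (2008) figures quoted above, in dollars.
total = 2.75e12        # face value of mortgages outstanding, 2007
subprime = 1.25e12     # subprime mortgages
alt_a = 1.00e12        # Alt-A debt
jumbo_arms = 0.50e12   # jumbo ARMs

assert subprime + alt_a + jumbo_arms == total
print(f"subprime share: {subprime / total:.0%}")  # roughly 45 % of the total
```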
The decrease in target rates and the increase in credit created a loop that produced sufficient profit for financial market players: 'give more credit and make more money', without considering diversification as a means of minimizing default risk. The housing bubble peaked in 2004, with the price of a typical US house having risen by nearly 120 % on average, and the asset values taken as collateral played a fictive role in credit protection. Given the high weight of subprime loans in the total outstanding amount, the system broke down after the sharp decline in housing prices, as adequate assets or sources could no longer be found to cover the losses of structured products' investors. The fundamental reason for the failure was the maximization of systematic risk arising from undesirable default correlation between assets.
were not the same as the values shown in the financial statements. In other words, accounting valuation techniques could not give the market a sufficient signal about financial reality. The increased information asymmetry thus left the market uninformed about the rising default correlation.
The Global Shock exposed the weaknesses of structured finance accounting. The insufficiency of the disclosures on structured products, and of the valuation methods used, is cited among the factors that increased the severity of the financial crisis. The explanations in financial statements concerning structured finance products have to be sufficient, so that financial statement users can understand the relevant issues without difficulty and make more accurate decisions. The valuation-related factor that affected the severity of the crisis is the measurement of these products at fair value. Before the crisis, standard setters supported fair value accounting; as a result, the financial crisis triggered an important debate about it.
In many sources the expression mark-to-market (M2M) accounting is used instead of fair value accounting. IFRS 13 defines fair value as "the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date". Fair value accounting reports assets and liabilities at the current values of their future cash flows (Meder et al. 2011). Basically, fair value is a market-based measurement, not an entity-specific value.
As set out in IFRS 13, fair value inputs are categorized into a three-level hierarchy:
• Level 1—inputs are quoted prices in active markets for identical assets or liabilities that the entity can access at the measurement date.
• Level 2—inputs are inputs other than quoted prices included within Level 1 that are observable for the asset or liability, either directly or indirectly.
• Level 3—inputs are unobservable inputs for the asset or liability.
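The three-level hierarchy above can be sketched as a simple classification rule. This is only an illustrative sketch: the function name and boolean flags are invented, and real IFRS 13 classification involves considerably more judgement.

```python
def fair_value_level(quoted_in_active_market: bool, observable: bool) -> int:
    """Classify a single measurement input into the IFRS 13 hierarchy."""
    if quoted_in_active_market:
        return 1  # Level 1: quoted price in an active market, identical item
    if observable:
        return 2  # Level 2: other directly or indirectly observable inputs
    return 3      # Level 3: unobservable (model-based) inputs

print(fair_value_level(True, True))    # Level 1
print(fair_value_level(False, True))   # Level 2
print(fair_value_level(False, False))  # Level 3
```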
Since fair value relies on the efficient market hypothesis, reporting is grounded in market prices. For this reason, fair value accounting offers a valuation measure that can be applied in active markets.
Many studies examine the relation between fair value accounting and the financial crisis (Allen and Carletti 2008; Badertscher et al. 2011; Barth and Landsman 2010; Heaton et al. 2010; Laux and Leuz 2009a, b; Plantin et al. 2008). Some of these studies hold that fair value accounting had an effect on the financial crisis; others find that it had little or no effect. In fact, the claim that fair value accounting caused the financial crisis is not accurate; the more defensible judgement is that fair value accounting increased the severity of the crisis.
Recent Financial Crisis and the Structured Finance: Accounting Perspective. . . 283
During the financial crisis there were sharp decreases in the prices of fixed and financial assets. As the companies holding these assets measured them at fair value, they recognized large losses and capital erosion followed. In the crisis period companies sold assets in fire sales, that is, at prices below fundamental values, and since fair value was the valuation measure, asset valuations across the market fell. This caused capital losses; companies lost money and failures even occurred. The knock-on effect on other companies applying fair value accounting made the market much worse: because fair value accounting takes market prices into consideration, a decrease in the prices of similar assets affected other companies' financial statements.
Standard setters are also blamed for the severity of the financial crisis. Among the accusations are that standards were not effective enough during the crisis period and that restrictions and rules were too strict. There is an advantage in allowing flexibility in the application of standards in abnormal periods such as a financial crisis. Banks in particular were channelled into applying fair value when reporting their financial assets. It should not be forgotten that political pressure also steers standard setters.
After the financial crisis, the IASB introduced some relaxations to fair value accounting. These amendments permitted the reclassification of some financial instruments and issued guidance on measurement when markets become inactive and very thin (Amel-Zadeh and Meeks 2013). In some situations organizations are allowed to depart from fair value accounting (they may prefer historical cost).
Standards issued by the IASB and related to structured finance are as follows:
• IFRS 7—Financial Instruments: Disclosures, and IFRS 12—Disclosure of Interests in Other Entities: These matter because the insufficiency of disclosures on structured finance products is one of the main factors that affected the financial crisis.
• IFRS 9—Financial Instruments: This standard will supersede IAS 39. IFRS 9 was issued in 2014 and will come into effect in 2018.
• IFRS 10—Consolidated Financial Statements: This standard sets out in whose financial statements Special Purpose Entities are to be reported and how they are to be shown.
• IFRS 13—Fair Value Measurement
• IAS 39—Financial Instruments: Recognition and Measurement
All these debates raise a new question: if not fair value, which valuation measure would serve better? Historical cost is the answer most frequently given. It is important to remember that historical cost accounting uses book values in valuation, whereas fair value accounting depends on market values. In other words, historical cost is the monetary amount paid in the transaction, while fair value is the current market value. In this sense, historical cost generally equals fair value at the time the asset is transacted.
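A minimal numeric sketch of this last point, with invented figures: the two measures coincide on the transaction date and diverge as the market moves.

```python
purchase_price = 100.0                 # amount paid in the transaction
market_prices = [100.0, 120.0, 60.0]   # acquisition date, boom, bust

for market in market_prices:
    historical_cost = purchase_price   # unchanged by later price moves
    fair_value = market                # current market (exit) price
    print(historical_cost, fair_value)
# On the acquisition date both measures equal 100.0; afterwards only
# fair value follows the market.
```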
5 Conclusion
Fair value accounting is procyclical. During boom periods assets are written up and banks' leverage in particular increases, so a high return on equity (ROE) appears. During a crisis, sharp decreases in market prices produce company losses. This makes the financial system weaker and the financial crisis sharper. Historical cost accounting, by contrast, prevents asset write-ups during boom periods and yields lower leverage ratios (Laux and Leuz 2009a).
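The procyclical effect on ROE can be illustrated numerically. The balance-sheet figures below are invented, and historical cost is shown without impairment purely for contrast:

```python
assets_cost = 1000.0      # historical cost of the asset book
equity = 100.0            # book equity
operating_income = 10.0   # income before valuation effects

for label, market_value in [("boom", 1100.0), ("bust", 900.0)]:
    fv_gain = market_value - assets_cost          # write-up (+) or write-down (-)
    roe_fair_value = (operating_income + fv_gain) / equity
    roe_historical = operating_income / equity    # cost basis ignores the swing
    print(label, roe_fair_value, roe_historical)
# Fair value turns a 10 % ROE into 110 % in the boom and -90 % in the bust.
```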
Proponents of fair value accounting argue that under this method financial statements are accurate because they reflect current market values in a timely manner, so transparency increases. These may be the most important benefits of fair value accounting and cannot be ignored. Proponents of historical cost accounting, on the other hand, oppose the application of fair value accounting: they hold that it leads to artificial volatility, so that in some cases inflated boom values may be reported. For these reasons, in their view, historical cost accounting is far more useful (Allen and Carletti 2008).
Each valuation method has its own positive and negative sides. The trade-off turns on two questions: Is it important to us that accounting numbers reflect the market well? Or is it important that accounting numbers be determined prudently, guarding against future declines in market values? The global shock rekindled the debate between these two positions, and there is still no agreed-upon answer.
References
Allen F, Carletti E (2008) Mark-to-market accounting and liquidity pricing. J Account Econ 45
(2):358–378
Allen F, Babus A, Carletti E (2012) Asset commonality, debt maturity and systemic risk. J Financ
Econ 104(3):519–534
Amel‐Zadeh A, Meeks G (2013) Bank failure, mark‐to‐market and the financial crisis. Abacus 49
(3):308–339
Badertscher BA, Burks JJ, Easton PD (2011) A convenient scapegoat: fair value accounting by
commercial banks during the financial crisis. Account Rev 87(1):59–90
Barth ME, Landsman WR (2010) How did financial reporting contribute to the financial crisis? Eur
Account Rev 19(3):399–423
Blundell-Wignall A, Atkinson P (2009) Origins of the financial crisis and requirements for reform.
J Asian Econ 20(5):536–548
Heaton JC, Lucas D, McDonald RL (2010) Is mark-to-market accounting destabilizing? Analysis
and implications for policy. J Monet Econ 57(1):64–75
Laux C, Leuz C (2009a) The crisis of fair-value accounting: making sense of the recent debate.
Acc Organ Soc 34(6):826–834
Laux C, Leuz C (2009b) Did fair-value accounting contribute to the financial crisis? National
Bureau of Economic Research, No. w15515
Meder A, Schwartz ST, Spires EE, Young RA (2011) Structured finance and mark-to-model
accounting: a few simple illustrations. Account Horiz 25(3):559–576
Plantin G, Sapra H, Shin HS (2008) Marking‐to‐market: Panacea or pandora’s box? J Account Res
46(2):435–460
Zandi M (2008) Financial shock: a 360° look at the subprime mortgage implosion, and how to avoid the next financial crisis. FT Press, New Jersey
Dr. Pinar Okan Gokten is a Research Assistant in the Department of International Trade, Faculty of Economics and Administrative Sciences, Gazi University, Ankara, Turkey. Dr. Gokten received a B.S. in Business Administration from Baskent University, and a master's degree and Ph.D. in accounting from Gazi University, Ankara, Turkey. She specializes in financial accounting, cost accounting and positive accounting theory. She has authored and co-authored many academic papers and research notes.
Compliance and Reporting Trends: Essential
Strategies
Semen Son-Turan
Abstract The digital age, with decreasing barriers to entry paving the way for low-cost competition, saw a global influx of new financial products and services. The increasingly technology-driven financial landscape soon transformed itself as the democratization of finance diffused to all levels of society. The standing rules and regulations of financial markets were confronted with unprecedented complexities marked by higher transparency, increased efficiencies, a wide range of substitutes, abundant information, a huge number of stakeholders and a host of aspiring entrepreneurs. A new game, however, necessitates new rules, and a considerable disruption of old ways of doing business is sure to produce unorthodox problems that need to be dealt with, and preferably foreseen, through a different lens. Sooner or later these new, digitally enhanced financial markets are destined to break down, dragging down everyone who once had faith in them, unless they are supported by proper compliance, corporate social performance and reporting standards. This chapter explores newly emerging trends in compliance and reporting standards for financial institutions.
1 Introduction
There is no doubt that stable financial institutions are indispensable for economic growth and job creation. However, the financial services industry has increasingly been challenged by a decline in customer trust owing to consecutive crises, frauds, regulatory pressures and disruptive innovations by non-traditional financial services providers. Regulators, too, are imposing more stringent controls over businesses' operations to avoid a recurrence of recent financial history, which is marked by trouble and tragedy. Consequently, the presence of effective corporate
S. Son-Turan (*)
Department of Business Administration, MEF University, Ayazaga Cad. No. 4, Maslak
Sariyer, Istanbul, Turkey
e-mail: semen.son@mef.edu.tr
2 Background
1 http://www.int-comp.org
290 S. Son-Turan
3 Literature Review
2 http://www.ey.com/US/en/Services/Specialty-Services/Climate-Change-and-Sustainability-Services/Value-of-sustainability-reporting
3 http://www.toyota-global.com/sustainability/report/sr/
4 http://www.goldmansachs.com/citizenship/esg-reporting/esg-highlights-2014.pdf
Compliance and Reporting Trends: Essential Strategies 291
Fig. 1 Activity diagram. Source: Toyota Global, 2015 Sustainability Report 14-01
Compared to other industries, the creation of a sustainable compliance program may be even more essential in financial services, given the basic underlying trust issues, complex transactions and masses of stakeholders that make this industry relatively more prone to swindle and fraud. Stakeholders in financial services, foremost among them customers and regulatory agencies, require detailed information about whether or not their institutions are meeting required standards and guidelines.
Clearly, the awareness, and consequently the demands, of customers and investors monitoring social, environmental and governance issues (referred to as "ESG", "sustainability" or the "triple bottom line") have increased. According to Hohnen (2012), the proposition that organizations, and business organizations in particular, should supplement their financial accounting with accounting on their environmental, social and other 'non-financial' performance ("sustainability reporting") first emerged in the 1990s, primarily instigated through calls from advocacy groups and investors, as well as business leaders and governments. In its initial years, sustainability reports mainly understood and addressed environmental issues. The United Nations Conference on Environment and Development (UNCED), also known as the Rio (de Janeiro Earth) Summit, which took place in 1992 and addressed environmental and climate issues, was a landmark in the history of sustainability reporting. According to the website of the Global Reporting Initiative ("GRI"),5 sustainability reporting enables organizations to consider their impacts on a wide range of sustainability issues and to be more transparent about the risks and opportunities they face. Sustainability reporting can be considered synonymous with other terms for non-financial reporting: triple bottom line reporting (economic, social, environmental), corporate social responsibility (CSR) reporting, and more.
Some of the providers of sustainability reporting guidance include:
• GRI (Global Reporting Initiative)
5 https://www.globalreporting.org/information/sustainability-reporting/Pages/default.aspx
6 https://www.db.com/cr/en/concrete-Sustainable-banking-business---Ensuring-future-success-today.htm
7 http://www.unep.org/resourceefficiency/Business/SustainableandResponsibleBusiness/CorporateSustainabilityReporting/tabid/78907/Default.aspx
8 https://www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/iarcs-sustainability-reporting-what-you-should-know.pdf
9 http://www.ey.com/Publication/vwLUAssets/ey-fostering-sustainability-in-financial-services/$FILE/ey-emeia-financial-services-sustainability-report-2014.pdf
Table 1 Literature review

Ruf et al. (2001). Subject: An empirical investigation of the relationship between change in corporate social performance and financial performance: a stakeholder theory perspective. Variables: corporate social performance (CSP), financial accounting measures. Data collection: questionnaire, secondary data. Methodology: regression. Findings: Change in CSP is positively associated with growth in sales for the current and subsequent year. Return on sales is significantly positively related to change in CSP for the third financial period, indicating that long-term financial benefits may exist when CSP is improved.

Moore (2001). Subject: Corporate social and financial performance: an investigation in the UK supermarket industry. Variables: corporate social performance (CSP), corporate financial performance (CFP). Data collection: secondary data. Methodology: correlation, regression. Findings: There is a negative relationship between the two variables, with CFP deteriorating as CSP improves. Lagged CFP compared with overall CSP, however, shows an opposite trend.
4 Conclusion
References
Basel Committee on Banking Supervision (2005) Compliance and the compliance function in
banks. http://www.bis.org/publ/bcbs113.pdf
Bowen HR (2013) Social responsibilities of the businessman. University of Iowa Press, Iowa
Coleman L (2011) Losses from failure of stakeholder sensitive processes: financial consequences
for large US companies from breakdowns in product, environmental, and accounting stan-
dards. J Bus Ethics 98(2):247–258
Culp S (2015) Confronting disruptive forces, financial services firms need new approach to
compliance. 23 Feb 2015. http://www.forbes.com/sites/steveculp/2015/02/23/confronting-dis
ruptive-forces-financial-services-firms-need-new-approach-to-compliance/#74a2e565db04
Ettredge M, Johnstone K, Stone M, Wang Q (2011) The effects of firm size, corporate governance
quality, and bad news on disclosure compliance. Rev Acc Stud 16(4):866–889
EY and Boston College Center for Corporate Citizenship (2013) Value of sustainability reporting.
Retrieved from http://www.tksolution.net/media/394/Value-of-Sustainability-Reporting.pdf
Garriga E, Melé D (2004) Corporate social responsibility theories: mapping the territory. J Bus
Ethics 53(1–2):51–71
Hohnen P (2012) The future of sustainability reporting. EEDP programme paper. Chatham House,
London
Jaeger J (2014) Sustainability reporting on the rise. 5 June 2014 https://www.complianceweek.
com/blogs/the-filing-cabinet/sustainability-reporting-on-the-rise-0#.VuFVCCSA3dk
Konar S, Cohen MA (2001) Does the market value environmental performance? Rev Econ Stat 83
(2):281–289
Lamm J, Blount S, Boston S, Camm M, Cirabisi R, Cooper N et al (2010) Under control. Apress,
New York
Mills A, Haines P (2015) Essential strategies for financial services compliance. Wiley, New Jersey
Moore G (2001) Corporate social and financial performance: an investigation in the UK super-
market industry. J Bus Ethics 34(3–4):299–315
NYSSA (2010) Knowledge of good and evil: a brief history of compliance. 26 May 2010 http://
post.nyssa.org/nyssa-news/2010/05/a-brief-history-of-compliance.html#sthash.kYFjrJPi.dpuf
Orlitzky M, Schmidt FL, Rynes SL (2003) Corporate social and financial performance: a meta-
analysis. Organ Stud 24(3):403–441
Orlitzky M, Siegel DS, Waldman DA (2011) Strategic corporate social responsibility and envi-
ronmental sustainability. Bus Soc 50(1):6–27
PwC (2014) State of compliance 2014. Financial services industry brief. Retrieved from https://
www.pwc.com/us/en/risk-management/state-of-compliance-survey/assets/pwc-soc-financial-
services.pdf
Ruf BM, Muralidhar K, Brown RM, Janney JJ, Paul K (2001) An empirical investigation of the
relationship between change in corporate social performance and financial performance: a
stakeholder theory perspective. J Bus Ethics 32(2):143–156
Seguis-Mas E, Bollas-Araya HM, Polo-Garrido F (2015) Sustainability assurance on the biggest
cooperatives of the world: an analysis of their adoption and quality. Ann Public Coop Econ 86
(2):363–383
Van Beurden P, Gössling T (2008) The worth of values—a literature review on the relation between corporate social and financial performance. J Bus Ethics 82(2):407–424
1 Introduction
With the rapid change in today's environment, risk management has become more popular among NPOs, both to increase efficiency and to mitigate the negative effects of disturbances. Not only have the business environment and science always attached particular importance to risk, but the concept of NPOs
E. Karakaya (*)
Logistics Program, Vocational School of Social Science, Istanbul Medipol University, Kavacik
Campus, Beykoz 34810, Istanbul, Turkey
e-mail: ekarakaya@medipol.edu.tr
G. Karakaya
Department of International Trade, Istanbul Commercial University, Sutluce Campus, Beyoglu
34445, Istanbul, Turkey
e-mail: gkarakaya@ticaret.edu.tr
has grown in popularity in recent years, owing to the complexity and growing volume of global interactions.
Beasley (2011) claims that significant risk awareness results in better performance when governing an organization as a whole. In other words, executive staff who are better informed about the types of risks and their effects recognize that, for all organizations including non-profit ones, the importance of managing risk can extend beyond their mission goals and objectives.
Young (2009) states the problem clearly: non-profit organizations have not taken the risk management point of view adequately into consideration. He also shows in his comprehensive study that non-profit organizations are unable to take consequential decisions at a strategic level, even though they appear to take part in some efforts to stop or reduce the negative effects of risks. Trivunovic et al. (2011) likewise argue that, to date, many international benefactors and NPOs have not applied a comprehensive method to address expected or unexpected corruption risks. This study is therefore proposed as a structural framework intended to benefit practitioners and executive staff of NPOs throughout the implementation of all steps of risk management.
This chapter is organized as follows. The first section gives a brief overview of the recent history of risk management for NPOs, and the second identifies the potential risks and their drivers in NPOs. Thirdly, the identified potential risks are scored, assessed and ranked in terms of four objectives (financial loss, growth, image and profit) in order to find the most significant risks. Lastly, suitable and applicable mitigation methods are developed for handling the negative consequences of the selected risk.
2 Literature Review
There is a large volume of published studies describing the risk management and
the role of risk management even in the area of both business and science. So far,
however, there were little discussions about application of risk management
approach for NPOs. An extensive study in this field was provided by Jackson
(2006). Although his book called Risk Management and Contingency Planning
includes comprehensive theoretical explanations about risk management for NPOs
and planning methods, there are no available empirical case studies.
Mohammed (2007) identifies potential risks, and ways of managing them, for an NPO that provides health and care services for individuals with mental, intellectual and physical disabilities. Young (2009) offers a conceptual framework that identifies the kinds of decisions non-profit organizations face when they need to manage their risks at a strategic level. Wilson-Grau (2004) implements risk management steps at a strategic level in order to help NPOs achieve their mission and long-term purposes. Gaudenzi and Borghesi (2006) provide a method to evaluate supply chain risks by using
Developing a Risk Management Framework and Risk Assessment for Non-profit. . . 299
Sitkin and Pablo (1992) define risk as "the extent to which there is uncertainty about whether potentially significant and/or disappointing outcomes of decisions will be realized". In other respects, Ritchie and Brindley (2007) formulate a principle of risk that assesses (1) the probability of occurrence of certain outcomes, (2) the severity of the consequences of the event, and (3) the ability to detect the risk. It is put together in the notation below.
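The notation itself appears not to have survived extraction here. A plausible reconstruction, consistent with the three components just listed and resembling a failure-modes risk priority number, might be:

```latex
% Hypothetical reconstruction of the Ritchie and Brindley (2007) risk notation.
\[
  R \;=\; f\bigl(P(o),\, S(o),\, D(o)\bigr)
\]
% where, for an outcome $o$: $P(o)$ is its probability of occurrence,
% $S(o)$ the severity of its consequences, and $D(o)$ the ability to
% detect the risk before it materializes.
```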
Boas (2012) defines risk for NPOs as anything that may have a negative impact on achieving the NPO's mission, goals, objectives and strategies if it becomes reality.
Risk Management is defined as “an organized process to identify what can go
wrong, to quantify and assess associated risks, and to implement/control the
appropriate approach for preventing or handling each risk identified” (INCOSE
2002). Matan and Hartnett (2011) have provided an extensive definition of Risk
Management: “the process that is adopted to plan for the possibility that events may
cause harm to an organization, focusing specifically on risk associated with board
members and volunteers, staff, programs and events, services offered, operations,
technology and financial management”. Wilson-Grau (2003) claims that in this
volatile environment, risk management is a tool for maximizing an NPO’s oppor-
tunities and minimizing the dangers to success. It enables NPOs’ decision-makers
to think strategically all the time.
The key aspects of risk management can be listed under four headings: identifying and categorizing the risks, evaluating the identified risks, deciding how to mitigate them, and applying the necessary actions.
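The four aspects above can be sketched as a toy risk register. The risk items, scores and the mitigation cutoff are assumptions for illustration, not data from the case study:

```python
risks = [
    # (name, category, likelihood 1-5, impact 1-5) -- invented examples
    ("insufficient donation", "financial", 4, 5),
    ("departure of key staff", "internal", 2, 4),
    ("new legislation", "political", 1, 3),
]
MITIGATE_ABOVE = 8  # assumed cutoff: act on risks scoring higher than this

results = []
for name, category, likelihood, impact in risks:
    score = likelihood * impact  # simple evaluation rule: likelihood x impact
    action = "mitigate" if score > MITIGATE_ABOVE else "monitor"
    results.append((name, score, action))
    print(f"{category}/{name}: score={score} -> {action}")
```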
300 E. Karakaya and G. Karakaya
The risk management process, shown in Fig. 1, can be repeated until the risks are kept within acceptable bounds. These steps are implemented incrementally within the scope of this study.
4 Case Studies
Risk identification is the phase in which the risks are determined. All possible risks are collected in a list; in this step, not only are risks identified but the sources or drivers of potential risks are also recognized. A significant amount of literature has been published on the categorization of possible risks, or changes, in other words change drivers. One example is Christopher and Peck (2004), who divide the sources of change into five classes: environment, supply, demand, control and process. Tang and Tomlin (2008) extend these classes by adding political, social and behavioural sources of risk. Other studies have attempted to classify the sources of change as well, for example Chopra and Sodhi (2004), Harper (2012) and Park (2011). Boas (2012) separates possible risks into three levels: (1) risks in the macro environment, such as governmental legislation and regulations or shifting lifestyles; (2) risks emerging at the micro environment level, for example interruption of energy or required resources, or cancellation of donations or financial aid; (3) risks arising inside the organization with direct effects, such as the departure of highly qualified staff members or poor decision-making. Matan and Hartnett (2011) list risks as follows: volunteer risk, financial risk, staffing risk, restricted grants risk and reputation risk.
In our case study, the possible risks were identified through a series of brainstorming sessions with officials at the executive level of the organization, guided by the related literature. Within these sessions, eight potential risk types were determined and listed with examples as follows.
1. Financial risk (economic crisis, insufficient donation)
2. Other associations risk (negative competition, lack of communication)
3. Own association risk (rapid growth, low performance due to high bureaucracy)
4. Student risk (failure to recognize real needs, insufficient student performance)
5. Executive staff risk (management deficiency, overloading)
6. Activity risk (ineffective and non-systematic working, unsuitable meeting place)
7. Political risk (political instability, legislation)
8. Intention and behaviour risk (divergent aims and purposes)
The identified risks are then placed into a hierarchical structure, as shown in Fig. 2. The hierarchy consists of three levels. The top level represents the essential classification of the risks in terms of different risk properties and consists of four main classes: risk source, risk expectation, risk duration and risk focus. The second level consists of two sub-classes for each fundamental risk feature: external and internal, expected and unexpected, long term and short term, organizational and personal. Finally, the bottom level includes one example for each sub-class, chosen from the case study.
To provide a better understanding of the figure, the class 'Risk Source' is explained in detail. Internal risks are disruptions or dysfunctions originating from problems within the bounds of the NPO, such as electricity breakdowns or information technology problems. Within the case study, the students, who are the reason for the NPO's establishment, can be treated as an internal risk. External risks cover environmental causes that can implicitly or explicitly lead to disturbances within the NPO; political risk, legislation or regulations are examples of external risks in our case. The probability of occurrence of internal risks is greater than that of external risks; external risks, on the other hand, are more dangerous than internal ones.
The risk quantification phase follows the risk analysis in order to prioritize
the effects of the risks. The best-known method is to measure the likelihood and
the expected impact of each risk on the defined system.
302 E. Karakaya and G. Karakaya
In other words, this assessment essentially deals with two main questions:
first, "how likely is the risk?" (i.e., the frequency of risk) and second, "how
severe can the risk be?" (i.e., the severity of risk). In the previous
literature, there are a number of methods to quantify risks, such as the Six
Sigma method, Failure Modes and Effects Analysis and statistical control
methods. In addition, the Analytic Hierarchy Process (AHP) (Saaty 1980) is a
popular and widely used method for multi-criteria decision making that
determines the relative score of each risk factor. The model presented in this
study utilizes the AHP to calculate the risk scores and thus identify risks
efficiently for the NPO.
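As a toy illustration of this likelihood–severity assessment, the snippet below scores a few hypothetical risks. The risk names and the (likelihood, severity) values are invented for illustration and are not taken from the case study.

```python
# Hedged sketch of risk quantification: expected impact = likelihood x severity.
# The risk names and numeric values below are illustrative assumptions,
# not data from the NPO case study.
risks = {
    "financial": (0.6, 5),  # (likelihood, severity on a 1-5 scale)
    "activity": (0.4, 3),
    "student": (0.3, 2),
}

def expected_impact(likelihood, severity):
    # Combines the two questions jointly: "how likely?" and "how severe?"
    return likelihood * severity

# Rank risks from highest to lowest expected impact.
ranked = sorted(risks, key=lambda name: expected_impact(*risks[name]), reverse=True)
print(ranked)  # financial risk ranks first in this toy example
```

Such a simple product of frequency and severity is the common starting point; the AHP described next refines it by deriving relative weights from expert pairwise judgments.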
Table 1 below shows the importance scale for pairwise comparisons (risk xy) of
two risk items (item x and item y). In other words, risk xy represents the
comparison between item x and item y. If item y is 7 times (very strong
importance) more important than item x, then the reciprocal comparison is
risk yx = 1/7.
The evaluation objective is determined as the selection of the most significant
risk. The evaluation criteria are financial losses, image, growth and quality of
the organization, while the alternative risks are listed as follows:
• Financial risk
• Other associations risk
• Own associations risk
• Student risk
• Executive staff risk
• Activity risk
• Political risk
• Intentional and behaviour risks
As stated before, the AHP method, used here as the evaluation technique to
identify the most significant risk, consists of three phases.
1. Comparison of objectives is the first phase, in which a matrix is established
   whose columns represent the predetermined alternative risks and whose rows
   contain the evaluation criteria. The pairwise comparison values, collected
   from experts of the organization in terms of the four main objectives
   (financial losses, growth, image and profit), are entered into the matrix.
   Table 2 below shows a compact illustration of the matrix.
2. Building the normalization matrix is the second phase and involves
   mathematical calculations to specify the relative weights of the decision
   criteria. To normalize the criteria, each pairwise comparison value is
   divided by the sum of its column, and the average of each row then gives the
   relative weight of each risk type. All calculations are presented in Table 3
   below.
Developing a Risk Management Framework and Risk Assessment for Non-profit. . . 303
3. Ranking the weighted alternatives is the last phase, in which the calculated
   scores of each risk factor for every evaluation criterion are shown in a
   single chart. To identify the most important risk types, the summation of the
   risk factors is first taken in terms of financial losses, image and growth.
   In the second step, the relative rankings (priorities) of the alternatives
   are determined. It is apparent from the results that financial risk is the
   most significant risk in our case study. The final results are summarized in
   Table 4 below.
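The three phases above can be sketched numerically. The snippet below is a minimal, self-contained illustration of phases 2 and 3 (normalization and ranking) for a single criterion; the 3×3 pairwise matrix and the three risk labels are illustrative assumptions, not the data of the case study.

```python
import numpy as np

# Illustrative pairwise comparison matrix on the Saaty 1-9 scale for three
# hypothetical risks under one criterion. Entries are reciprocal: if risk x
# is 3 times as important as risk y, the (y, x) entry is 1/3.
labels = ["financial", "activity", "student"]
A = np.array([
    [1.0, 7.0, 3.0],
    [1 / 7, 1.0, 1 / 3],
    [1 / 3, 3.0, 1.0],
])

# Phase 2: divide each entry by its column sum, then average each row
# to obtain the relative weight of each risk type. The weights sum to 1.
normalized = A / A.sum(axis=0)
weights = normalized.mean(axis=1)

# Phase 3: rank the alternatives by their weights, highest first.
ranking = [labels[i] for i in np.argsort(weights)[::-1]]
print(ranking)
```

In the full model the same procedure is repeated per criterion and the weighted scores are summed across criteria, as in Tables 3 and 4.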
The results can be summarized as follows.
1. The results identify financial risk as the most important risk factor, since
   its score is the highest in all the tables. The NPO should look for ways to
   mitigate this risk. Moreover, several further interpretations can be made
   from the tables:
2. Financial risk is clearly the most influential risk in terms of financial
   losses. In other words, if the NPO makes a monetary mistake, the most
   apparent damage is the cost rather than the image or profit.
3. Activity risk and own association risk are more significant from the
   perspective of the NPO's image. The reason is that the NPO is known for its
   spectrum of activities; therefore, the impact of activity risk is directly
   linked to the reputation of the organization.
4. To grow, the NPO should arrange its economic matters, such as fees and
   donations, in a balanced way to decrease financial risk.
5. In the same way, financial risk affects the organization's quality
   negatively. Losing benefactors or a declining number of students is an
   unwanted situation for any NPO. Thus, the organization suffers a significant
   loss in quality when its financial resources become unbalanced.
6. It can be concluded that internal risks pose more hazard than external
   risks, owing to the fact that the probability of external risks is much
   lower.
7. To conclude, considering the overall risk results, financial risk should be
   mitigated immediately. It is suggested that the NPO take precautions and
   measures to eliminate, or at least reduce, these risks.
Risk mitigation is the phase in which mitigation decisions are taken to stop,
or at least reduce, the effects of risks. This phase comprises mitigation
strategies and new implementation plans for undesired events.
After the evaluation of the risk alternatives, the risk management plan is
documented, justified and described, as are the chosen treatments. During this
process, allocated responsibilities are recorded, monitored and evaluated, and
assumptions about residual risks are made. To handle possible risks, the
following suggestions may be offered to NPOs:
• Financial risk
– Finding new financial resources
– Easy and inexpensive transportation to activity locations
– Ensuring a more transparent financial structure for expense awareness
• Activity risk
– Increasing the quantity and diversity of activities to support recognizability
– Announcing activities through all social media opportunities
– Sufficient speakers for educational activities
– Academic and systematic education or training
• Student risk
– Acceptable and appropriate activities for all kinds of students
– Carrying out activities in a harmonized atmosphere
– Out-of-town trips to introduce the country
The impact of mitigation plans should be monitored. For many reasons, an
organization should maintain a dynamic control system for managing risks and
should update the system frequently as changes occur within the system or in
its environment.
5 Conclusions
This chapter explained the central importance of risk management for NPOs. One
of its more significant contributions is the analytical approach and risk
management framework it provides for NPOs. By means of these findings, NPOs
will be able to increase the efficiency of their organization and, at the same
time, reduce the risk of major possible malfunctions.
References
Beasley M (2011) Increasing risk awareness for mission critical objectives of not-for-profit
organizations. American Institute of Certified Public Accountants, Durham
Boas K (2012) Building capacity in NGO risk management. Retrieved from http://www.
thesustainablengo.org/
Carter TS, Demczur JM (2013) Legal risk management checklist for not-for-profit organizations.
Carters Professional Corporation, Ottawa, Toronto
Chen L (2010) Risk management for nonprofit organizations. Oregon State University, Corvallis
Chopra S, Sodhi MS (2004) Managing risk to avoid supply-chain breakdown. MIT Sloan Manag
Rev 46(1):53–62
Christopher M, Peck H (2004) Building the resilient supply chain. Int J Logist Manag 2:1–13
Gaudenzi B, Borghesi A (2006) Managing risks in the supply chain using the AHP method. Int J
Logist Manag 17(1):114–136
Harper TJ (2012) Agent based modeling and simulation framework for supply chain risk man-
agement. Dissertation, Air Force Institute of Technology
INCOSE (2002) What is “Risk”. Risk Management Working group, Hall, DC
Jackson P (2006) Nonprofit risk management and contingency planning. Wiley, New Jersey
Matan R, Hartnett B (2011) How nonprofit organizations manage risk. Sobel & Co, Livingston
Mohammed KM (2007) Managing risk: a case study of a non-governmental organization that
provides long-term care and support service for people with mental, intellectual and physical
disabilities. Massey University, Palmerston North
Park K (2011) Flexible and redundant supply chain practices to build strategic supply chain
resilience: contingent and resource-based perspectives. Dissertation, The University of Toledo
Pehlivanli D (2012) Kâr Amacı Gütmeyen Kuruluşlarda Kurumsal Risk Yönetimi ve Risk
Çalıştayı Vaka Çalışması. Muhasebe ve Finansman Dergisi:117–128
Ritchie B, Brindley C (2007) Supply chain risk management and performance: a guiding frame-
work for future development. Int J Oper Prod Manag 27(3):303–322
Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
Sitkin SB, Pablo AL (1992) Reconceptualizing the determinants of risk behavior. Acad Manag
Rev 17(1):9–38
Tang C, Tomlin B (2008) The power of flexibility for mitigating supply chain risks. Int J Prod
Econ 116(1):12–27
Trivunovic M, Johnsøn J, Mathisen H (2011) Developing an NGO corruption risk management
system: considerations for donors. U4 Issue, 2011(9)
Wilson‐Grau R (2003) The risk approach to strategic management in development NGOs. Dev
Pract 13(5):533–536
Wilson-Grau R (2004) Strategic risk management for development NGOs: the case of a grant-
maker. Seton Hall J Dipl and Int’l Rel. 5:125
Young DR (2009) How nonprofit organizations manage risk. In: Musella SD (ed) Paid and unpaid
labour in the social economy. Georgia State University, Georgia
Tuba Bozaykut-Bük
Abstract Strategically planned and implemented risk management paves the way
for competitive advantage and a decisive edge for global financial institutions. The
importance of risk management becomes more evident in periods of financial
instability. The failure of global financial institutions in the recent
financial crisis revealed that firms with a strong risk management culture were
better prepared and economically less damaged. As financial institutions have
been severely criticized for their risk management practices, it has also become
clear that most financial institutions have difficulty developing a risk
management culture. To build a clear understanding of risk management culture,
this chapter aims to highlight the need to extend our understanding of it and of
how it can find a voice in the strategic planning of global financial
institutions.
1 Introduction
Risk management has always been a top priority issue for financial institutions in
terms of enhancing performance (Krause and Tse 2016), competitive advantage
(Fiegenbaum and Thomas 2004) and increasing value provided to the shareholders
(Cooper 2000). Risk management is widely held to constitute one of the most
important veins of survival, a path to the attainment of strategic goals and an
important determinant of success in financial turmoil. Along this line of
thinking, financial firms approach risk management as a strategic tool both in
their daily operations and in crisis preparedness, detection and prevention,
because it continuously evaluates environmental threats.
The one thing researchers reach a consensus on is that doing business means
taking risks and risk is a “strategic issue” (Clarke and Varma 1999). As the risk
T. Bozaykut-Bük (*)
Department of Business Administration, School of Business and Management Sciences,
Istanbul Medipol University, Kavacik Campus, Istanbul, Turkey
e-mail: tbozaykut@medipol.edu.tr
management has a role in the gains and losses of the firm, the value created and
offered to shareholders is directly linked to risk management practices (Elahi
2013). Shareholders aim to earn higher returns by lending or investing their
capital. They expect managers to take risks but, at the same time, refrain from
investing in institutions that would take excessive risks for those returns. The
artistry of risk management lies in the difficulty of satisfying the shareholder
demand for higher returns without losing shareholder trust.
Developing a strong risk management culture is of strategic importance to the
success of risk management. According to widely recognized bodies in the
financial markets such as the Institute of International Finance (IIF), the lack
of a strategically established risk management culture is one of the main
reasons why many global banks and other financial institutions faced the
catastrophic economic consequences of the recent crisis. On the other hand,
institutions with a strong risk management culture had the chance to overcome
the crisis and surpass their competitors. Hence, it is now evident that setting,
managing and assessing the risk management culture is both a challenging issue
and a source of competitive advantage for today's financial institutions.
Risk management culture refers to the common norms and values related to risk
identification, management and assessment within the organization (IIF 2009).
Risk management culture is also an organization-wide issue (Elahi 2013) and has
to be designed according to the risk attitudes and behaviors defined
strategically in order to attain corporate objectives. Likewise, firms need to
introduce analytical and statistical tools for risk assessment and must
familiarize employees with a common language and tone of behavior in identifying
and managing risks. All these factors make risk management culture a strategic
issue. To identify the attitudes and behaviors appropriate to the risks faced,
and to embed risk management culture throughout the firm, risk management
culture should find its place in strategic planning.
In this chapter, the significance of risk management to financial institutions
is discussed first. Then, risk management and the factors for enhancing it are
highlighted. The chapter continues with an examination of risk management
culture together with strategic planning, and ends with final remarks and
implications on the topic.
Risk is defined by diverse disciplines through their own unique lenses. Finance
theorists approach risk in three ways (Cooper 2000). The first approach
evaluates risk as an "opportunity" and asserts that the gain will increase as
the risks increase. According to this perspective, for more profit the
institution would have to undertake greater risks. Conversely, risk as "hazard
or threat" connotes negative meanings such as failure or loss. Approaching risk
as something to be avoided, the institution would use techniques to refrain
from situations that would put it in a
Giving Risk Management Culture a Role in Strategic Planning 313
risky position, even at the cost of forgoing the potential advantages of risks
(March and Shapira 1987). The third approach, the futuristic point of view,
takes risk as "uncertainty" that can bring both positive and negative outcomes.
The main point of this approach is to minimize the difference between what is
expected and what is attained through risky operations. In line with this third
perspective, researchers associate risk with a decision-making process about
the future (Cooper 2000). In other words, the notion of the future implies
uncertainty, and the claim to control uncertainties lies at the essence of risk
management (Power 2007; Cooper 2000). To some authors, risk is a combination of
these approaches; for instance, risk can refer to "uncertainty" with negative
consequences (Elahi 2013).
Similar to the risk approaches, there are traditional and modern schools of
thought on risk management. The traditional approach views risk negatively and
proposes that risk has harmful or costly effects on the firm (Elahi 2013).
Conversely, modern risk management thinking sees risks as a positive phenomenon
that can create improvement and growth as the firm's creativity increases
(Bowers and Khorakian 2014). Meanwhile, some studies support the notion that
firms with enriched risk capabilities stand one step ahead of their rivals in
the market and have greater competitive advantage (Elahi 2013).
Another matter worth mentioning is the dilemma that resides in risk-taking
behavior. From time to time, under different conditions, a firm's risk appetite
can rise or decline. Always remaining on either the risk-averse or the
risk-loving side can have hazardous effects on the survival and competitiveness
of firms (Kahneman and Lovallo 1993; Fiegenbaum and Thomas 2004). Whether to
take a given risk is a big question to answer and requires detailed analysis
from diverse perspectives. In that decision-making process, another problem
encountered is bounded rationality. Through the lens of bounded rationality
(Simon 1972; March 1978), the firm cannot control every factor that can affect
its operations and has to make decisions based on its limited knowledge.
Although there are statistical and scenario-based methods for measuring risks,
it is not possible to estimate every single factor in the market, and the
outcome of risk-taking behavior can resemble Schrödinger's cat paradox for
financial institutions.
A closer look at global financial institutions (GFIs) suggests that they need to
manage the diverse risks encountered in their constant daily operations and also
have to be prepared at the strategic level when faced with crises. To create a
risk management culture, a financial firm needs to identify the risks it may
encounter. For instance, the risks encountered by banks are defined as credit
risk, country and transfer risk, market risk, interest rate risk, liquidity
risk, operational risk, legal risk and reputational risk by the Basel 1998
Framework (Cooper 2000). The Basel 1998 Framework also offers an internal
control system to detect and control the risks listed for banks. Similarly,
Bilal et al. (2013), in their paper on the remodeling of risk management in
banking, summarize the control mechanisms recommended after the global financial
crisis as
such: “To ensure the circumvent from potential risks, the Banking Supervisory
314 T. Bozaykut-Bük
Like many terms in the social sciences, "risk management culture" (RMC) is
difficult to define. To gain a broad understanding of its meaning, it is
enlightening to focus first on the term organizational culture. Since Peters and
Waterman's (1982) inspiring work, In Search of Excellence, organizational
culture has been approached as something to be managed (Willcoxson and Millett
2000). Peters and Waterman (1982) suggest that, if managed well, organizational
culture helps enhance organizational performance and gain competitive advantage
over competitors. A decade later, Schein (1992: 12), in his study entitled
Organizational Culture and Leadership, drew attention to the organization's
relations with its external environment and defined organizational culture as:
A pattern of shared basic assumptions that the group learned as it solved its problems of
external adaptation and internal integration that has worked well enough to be considered
valid and, therefore, to be taught to new members as the correct way to perceive, think, and
feel in relation to those problems.
In the finance literature, many related terms such as "risk culture", "risk
management culture" and "strategic risk management" appear in the studies. To
McConnell (2013), different terms are used for RMC because institutions have
different layers of culture related to different risk perceptions. Thereby, to
McConnell (2013: 36), 'risk culture' refers to how individuals within an
organization approach risk, while 'risk management culture' refers specifically
to the culture of the risk management group(s) and its interactions with the
organization; one can also speak of a culture of 'risk-taking' in organizations.
In some other works, authors use the term risk-oriented culture. Awareness of
risks and of the ways to approach risk at all levels of the organization means
that the organization has a risk-oriented culture. Creating and developing a
risk-oriented culture may require an "organizational paradigm shift" through
which the organization revises itself in order to develop risk management and
vision as a core competency (Cooper 2000: 19). To achieve this paradigm shift,
an awareness of the risk types faced by the institution should be developed
(Girling 2013). That is to say, employees and managers at all levels are
expected to be aware of what risks are encountered and of how they are assessed
and controlled (Girling 2013).
Although admitting there is no consensus on a definition, IIF (2009) defines
risk culture as follows: "risk culture is the set of norms and traditions of
behavior of individuals and of groups within an organization, that determine
the way in which they identify, understand, discuss, and act on the risks the
organization confronts and the risks it takes". Even with this definition, as
with others, it is still difficult to suggest a prescription for developing a
risk culture environment, given the contingencies specific to each institution
(Institute of International Finance 2008). Contingencies really do matter, so
it would be difficult to define an "ideal risk culture model"; yet it is
possible to identify some common elements that help create a RMC. In their
studies, many researchers outline factors that help a strong risk management
culture flourish. According to these studies, for instance, the role of
managers is an important facilitator of RMC in GFIs (McConnell 2013; Girling
2013; Geretto and Pauluzzo 2015). What is more, managers' attitude towards risk
is another element that comes to the forefront: in particular, managers who
approach risks negatively affect the development of risk management and
culture. March and Shapira's (1987) study is a significant example of this
assumption; it empirically indicates that managers mainly focus on potential
losses instead of potential gains.
On the organizational level, RMC is a governance issue strategically designed
by top managers (De Marchi and Ravetz 1999). In particular, the Chief Risk
Officer (CRO) plays an influential part in the cultivation of RMC. The CRO's
role is to develop and reinforce risk culture by interacting with managers and
the Board (IIF 2013). On the individual level, RMC is related to ethical issues
such as bad behavior or indifference towards risks. As individuals, members
have to detect, voice and document risks, and be accountable for factors
threatening the risk management culture of the institution (Ashby et al. 2012;
Girling 2013).
Like the term culture, "strategy" is mainly concerned with the relation between
the organization and its environment, and its basic premise is that
organizations develop strategies to cope with changes in the environment
(Mintzberg 1994). For instance, the adaptive model of strategy implies that
organizations have to be active and not merely cope with the environment but
"change with the environment" (Chaffee 1985: 92). Hence, culture and strategy
are two terms propounded as means of dealing with changes in the environment.
Studies suggest that institutions engage more with the strategic planning
process when the environment is complex and the rate of environmental change is
increasing (Bird 1991; Steiner 1979; Hopkins and Hopkins 1997). Economic ups
and downs are examples of transition periods in which institutions return to
their strategic planning and become more deeply engaged with the process. For
instance, to overcome the economic turmoil of the recent global crisis, top
managers started to build their strategic plans on risk governance mechanisms
(Cooper 2000).
Strategic planning is “an iterative, comprehensive and systematic approach to
developing a firm’s overall direction, one that allows ‘management to analytically
determine an appropriate strategic path for the whole organization’” (Andersen
2000: 185). To Mintzberg (1994), strategic planning represents the technical
aspect of strategic management and is associated with analysis. Further,
mission, vision, goals and values constitute the base of a strategic plan, and
strategic planning is the "road map of the organization determining where the
organization is now to where it would like to be in five or ten years" (Bouhali
et al. 2015: 74). With a future projection in mind, strategic planning sets a
direction to achieve organizational goals through analysis.
Basically, strategic planning has three main processes (Hopkins and Hopkins
1997: 637): "(1) formulation, which includes developing a mission, setting major
objectives, assessing the external and internal environments, and evaluating and
selecting strategy alternatives (2) implementation and (3) control". Through
these processes, strategic planning also serves a symbolic function by
developing cognitive maps for common understanding (Duncan and Dutton 1987).
Besides developing the mission, vision and objectives, the values and standards
of behavior in doing business are also identified and developed in strategic
planning. To illuminate this issue, IIF (2013: 2) notes that the Board has to be
sure that the culture fits the business model and should constantly ask itself:
"What is the organization doing to support things that we value? What are we
doing to deter things that we don't value? Do we have an organization that is
constantly risk aware?"
In strategic planning, it is critical to determine the major risks faced and
the risk tolerance level of the firm, and to decide how to control these risks
through internal and external environment analysis (IIF 2009; Holmquist 2012).
Especially
the recognition of risks related to the business strategy is of great
importance at this stage (IIF 2009). After identifying these major risk
management issues, the manner and behaviors of employees should be defined so
as to create a mutual understanding of risk and to form a common set of values
and attitudes in managing risks. In a RMC, members take responsibility and
accountability for the risks they face and take precautions not to eliminate
but to control these risks (Cooper et al. 2011; IIF 2009). As a consequence,
RMC consists of dynamic processes that require the whole organization's
participation.
In a dynamic environment, the employees of GFIs are expected to internalize the
common premises of the risk management culture so as to follow the strategic
plans. If financial institutions expect their members to detect and manage
risks in their daily operations in an ethical manner, the tools and symbolic
actions for a risk-oriented climate should be set out in their strategic
planning (IIF 2013). Besides the role of the CRO, the meetings or forums for
discussing risks and the introduction of analytical tools for managing risks
are all to be set up in strategic planning (IIF 2009; Cooper et al. 2011).
To highlight the significance of RMC in strategic planning, the global economic
crisis of 2007–2009 serves as an example. The crisis demonstrated that risks
were not calculated as they should have been and that firms were not behaving
ethically. What is more, many had not adopted the sound risk management
awareness and culture that should have supported daily operations. Though firms
were believed to have efficient risk calculation techniques and methodologies,
the global financial crisis showed that the skills needed to embed risk
management in the organizational structure and culture had not been developed
(Girling 2013; Geretto and Pauluzzo 2015). In line with this, regulators of
financial markets around the world, such as the US Financial Crisis Inquiry
Commission and the UK Parliamentary Inquiry, pointed to the missing link of
cultural support as one of the paramount reasons why institutions faced the
global financial crisis (McConnell 2013). After the crisis, although the
problems with risk modeling have been addressed by regulations based on Basel
standards, how to set up a risk management culture and risk management
practices remains an open question (Bilal et al. 2013). Recent research also,
interestingly, shows that GFIs still lack the mentality of a risk management
culture. For instance, Geretto and Pauluzzo (2015) studied the risk management
of 50 large banking companies around the world and found that GFIs still
evaluate risk management mainly as "financial trading or insurable risks";
"something negative, or to be avoided without having a clear risk organization
responsibilities or culture" (p. 313).
As the crisis proved, it is vital to make an effort to develop RMC in the
strategic planning of the firm. This would not only align culture and strategy
but also help increase firm performance and value creation for shareholders in
the long run. In brief, a strong risk management culture requires a strategic
thinking and planning approach for long-term success; when strategically
formed, RMC would minimize the negative consequences of economic fluctuations.
5 Conclusion
One of the topics raised by governments and regulators is how financial
institutions approach risks and the "criticality" of implementing and improving
risk management. The complex and turbulent business environment makes it
difficult to manage risks, requiring a more detailed approach to effective risk
management. Thereby, risk management and a culture embedded in strategic
planning can be an enhancer of sustainable competitive advantage as firms react
to the threats and opportunities in the environment.
The recent global crisis revealed that global financial institutions failed to
behave ethically and that the risk management of these firms had many
shortcomings. In particular, the banking sector has been severely criticized
for not having a strong risk management structure and culture supporting its
corporate strategies. It is now clear that risk management has to be supported
culturally and that GFIs have to approach risk management also as a "cultural
phenomenon". All in all, to achieve competitive advantage and create value for
shareholders, GFIs have to develop a solid risk management culture.
Because of the increased significance risk management has attained since the
global crisis, banks and other financial institutions aim to form a risk
management culture strategically. To do so, in their strategic planning, firms
map out how to develop a risk management culture on the instrumental and
symbolic levels. On the instrumental level, analytical tools for detecting and
controlling risks are introduced. On the symbolic level, a common jargon and
attitude in approaching risk has to be implemented throughout the organization.
As a solution to the problem of how to implement a strong risk management
culture, it is suggested that firms identify their risks, risk appetite and
tolerance level in their strategic planning. Besides, the articulation of these
critical factors to members by top management is indicated as an important step
in creating the basis for a mutual understanding of risk and of behavior. The
Chief Risk Officer and the Board also have pivotal roles in the development of
risk management culture. The use of analytical tools and the acceptance of a
common set of norms and values in assessing risks are the main elements of risk
management culture. Yet, for today's managers, forming a risk management
culture within their organizations remains a highly challenging mission.
This chapter has set out to draw attention to risk management culture and its
place in strategic planning. Being the compass of the firm, the strategic
planning process should also cover cognitive terrain and should direct the
tools, techniques, attitudes and behaviors for developing risk management
culture. The literature is weak in explaining the cultural aspect of risk
management. It is hoped that the alignment of risk management culture with
strategic thinking and planning will be examined from diverse perspectives in
future studies.
320 T. Bozaykut-Bük
Tuna Uslu
Abstract As a result of the economic, social and cultural reflections of rapidly
developing information and communication technologies, a new social order
emerged: the information society, an intellectually reformist paradigm also known
as the information age. In this radical transformation process, in which information
became a strategic resource and information and communication technologies
moved to the center, economic, social, political and cultural life was deeply
affected. The changes and developments in social dynamics also affected the
structure of organizations, their management understanding, the technologies used
and the employees, while organizations had to redesign their functions and
managers had to redesign their roles. In the information society, greater importance
came to be given to the intrapreneurship and leadership qualities of managers who
can adapt to changing competitive conditions. This destructive change in operation
and management techniques also affected economic theory and was influential in
the development of the endogenous growth approach. In this chapter, we draw a
conceptual framework distinguishing leadership from managership and discuss the
personal characteristics of chief officers and financial managers as the leaders of
our time, while looking for a place for these concepts within Schumpeter's
approach and endogenous growth theories.
T. Uslu (*)
Graduate Program of Business Management, Istanbul Gedik University, Kartal, Istanbul,
Turkey
e-mail: tuna.uslu@gedik.edu.tr
1 Introduction
Throughout history, social developments and radical changes, each with its own
period and character, have often unfolded within a single generation. With the
industrial and technological revolutions, humankind experienced one of its most
fragile periods in history. The technology-based developments of the twenty-first
century turned the official borders between countries into imaginary lines, while
technology and innovation moved to the center of daily life to become the virtual
dome of societies. Technology came to drive the dynamism of the social
developments of our age. Technology can be considered a positive force for
cultural interaction between societies, and its diffusion reduced the costs of
communication and of reaching information while causing radical transformations
in the structures and functions of social dynamics, production structures, social
institutions and organizations. With this transformation in technology and
communication, a new social structure emerged. The changes taking place in
social dynamics affected the structure of organizations, their way of management,
the technologies used and the employees, while also reshaping the functions of
organizations and the roles of managers.
Today, ways of doing business are changing in parallel with the information
economy, reshaping industrial relations and management approaches in
organizations. The most important and critical resource of organizations in a
rapidly changing environment of business, competition and entrepreneurship is a
qualified, knowledgeable and competent workforce (Drucker 1986). Hence,
organizations aim to empower their employees through a positive approach,
infusing them with concepts such as autonomy, creativity and flexibility and
supporting them (Uslu 2014: 7). Hierarchical layers in businesses are being
reduced while two-way communication between layers increases. This change
created new organizational structures after the third and fourth industrial
revolutions, and it also encourages employees to become more participative and to
develop positive attitudes. Owing to the intense communication and interaction
between employees and managers, employees can participate more in
organizational processes and become part of management processes in the
information economy (Uslu 2015a). According to Castells (2000), the information
economy has three basic qualities:
(1) In the information economy, the capacity to produce, process and manage
information is the main determinant of productivity and competitive power for all
economic units, at the level of the industry, the region and the country.
(2) The information economy is a global economy; production is carried out
locally but for the whole world. Products and services that are not produced for
the global market cannot survive against the high-quality products made for that
market, or against the highly competitive products of "creative destruction"
(Schumpeter 1962). Utterback and Acee (2005: 2) call such technologies
"disruptive technologies".
Agile Intrapreneurship in Volatile Business Environment: Changing Roles of. . . 325
(3) The information economy is a network economy (Juniper 2002: 748).
Economic units in the information economy are called network businesses. A
network is a cluster of interconnected nodes, and each node has a direct
connection to the other nodes. Therefore, the possibility of information sharing
and the synergy within the network structure are at their highest level. The
strategies of businesses in the information economy are based on developing their
own business ecosystems.
Organizations that wish to survive in the information economy have to ensure
continuous renewal by keeping themselves open to information-based change and
development, as well as to developing technology and changing environmental
factors. With respect to sustainable competition, however, the leadership profile of
conventional management approaches cannot respond to these needs. The
leadership concept of the future will emphasize a personality that can respond
rapidly to change, adapt to developments, renew itself, communicate effectively,
take initiative, quickly interpret information and turn it into opportunity, know the
qualities and capacity of the audience it represents, and inspire confidence in its
followers.
The rapid development of information and communication technologies has been
changing organizational structures, business and work methods, manager and
employee profiles, and working life in general, and has been bringing out new
models, particularly in communication inside and outside the organization (Uslu
2014: 290–291). Moreover, methods for providing a participatory work
environment, such as carrying decision making from lower to upper levels,
establishing communication, empowerment and the delegation of authority, allow
employees to be freer and stronger and to make authoritative decisions, and thus
to identify alternative ways to achieve goals and stay motivated (Uslu 2014:
282–283). It has been found that the most important factor determining
organizational creativity is organizational communication support, followed by
corporate innovativeness (Uslu and Çubuk 2015).
The information society, together with organizational communication and social
networks, changes paradigms and plays a critical role in the management
approaches of the new age. In our age, organizations and managers systematically
develop new methods and approaches to secure competitive advantage. Product
development and the renewal of products, services and processes are the leading
innovative methods used. Innovation describes the transformation in business
fields and production philosophy. Differences between countries and regions in
management, science, engineering, technology and labour quality have become
the factors that explain the complex dynamics of the twenty-first century. Hence,
increasing the international competitive power of local firms within the
framework of regional development policies and improving entrepreneurship and
innovation capacity at the local level are highly important in this century (Eryigit
and Uslu 2014: 429).
Schumpeter (1912) was the first to deal with the concept of innovation in its most
comprehensive sense in the process of economic development. In Schumpeter's
approach, it is important to note that innovation and invention are not the same
thing. It is the innovator, not the inventor, who drives development; an invention
that does not become an innovation cannot be the driving force of development.
For Schumpeter, therefore, the dynamics of development reduce to the dynamics
of innovation, and it is innovation that generates development in otherwise stable
economies (Schumpeter 1947: 152–153). Another important concept Schumpeter
added to the economics literature, closely related to innovation, is creative
destruction. The term refers to the economic displacement of old products or old
technologies by new technologies or innovations (Aghion and Howitt 1997: 53).
Schumpeter argues that innovation is the basic function of new companies and of
entrepreneurship, and that it takes place through creative destruction. When
defining the essence of capitalism, Schumpeter (1962) links initiatives that create
new products, services, markets and economic enterprises to consecutive
revolutions and economic dynamism within the economic structure. This
dynamism is a "creative destruction" that continuously destroys what is old and
creates new periods in its place.
Schumpeter's analysis gives innovation a central role. When applied to an
industrial process, an invention becomes an innovation. The entrepreneur is the
person who realizes new combinations and brings innovations, and is always a
leader in creating new production processes and new forms of business
organization, or in entering new markets. Schumpeter believed that without
innovations, economic life would maintain a static balance and continue as a
process that repeats itself in the same channel year after year. For Schumpeter,
innovations emerge in clusters, not in a continuous stream. The activities of some
entrepreneurs create a favourable environment for others who follow them, while
those who do not follow the innovations are wiped out of the market. This is what
Schumpeter calls "creative destruction". As the innovation process unfolds year by
year, some firms survive in the market while others are wiped out; these processes
create periodic movements and bring about economic development. According to
Schumpeter, a new economic order emerges when a new technology, product,
market, production process or organizational structure becomes a clear alternative
to existing products and organizational practices through creative destruction. If
new technologies, products, markets or organizational practices do not form clear
alternatives to the products or practices currently available in the market, creative
destruction does not take place (Larson 2000: 306). Today, the emphasis is on
how the inherent change of innovations destroys the competitive structure of the
market. Yet the power to destroy the current structure of the market also shows
the contribution of change to economic growth. We can therefore also speak of
"creative making" instead of "creative destruction" taking place in the market as a
result of innovations (Lambooy 2005: 1140). Schumpeter's definition of
innovation highlights the
In the current transition to the information society, one of the paradigm-changing
organizational dynamics in industry is the psychological quality of this mental
change, driven by innovative entrepreneurship as well as by information
production. Considered with respect to paradigm change, the essence of
Schumpeter's (1934) entrepreneurship theory in particular is the concept of
creative destruction brought about by the innovative entrepreneur. Schumpeter
(1962) links the enterprises that create new economic organizations to the
consecutive revolutions and economic dynamism that follow each other in the
economy. This dynamism is a movement of creative destruction that constantly
destroys what is old and creates new periods in its place. Creative destruction is
the introduction of new methods into the production process after organizations
producing old products and services are eliminated by organizations that produce
more advanced goods with new methods (Montgomery and Wascher 1988). From
a micro point of view, individuals can act rationally in repeated and familiar
processes, while they may not easily grasp a new situation. In a process of change,
however, individuals tend toward the satisfaction of their desires and may break
out of established patterns. These individual behaviours may function as the
motivating factors of transformation (Schumpeter 1934).
Schumpeter was the first to define entrepreneurship from a different perspective.
For Schumpeter, the concept of innovation underlies the definition of
entrepreneurship: he defines the entrepreneur as the person who destroys the
current economic order by creating new combinations, such as producing new
goods and services, developing new processes, finding new export markets and
creating a new organizational structure. Classical economic definitions of
entrepreneurship, by contrast, focus on personality traits such as creativity
(Schumpeter 1934) and risk taking (Mill 1954).
The entrepreneur is a person who attains financial and personal satisfaction by
taking psychological risks and who creates something of distinct value by
devoting the required time and effort (Hisrich 1985). The entrepreneur is an
individual who creates valuable things in pursuit of personal satisfaction, such as
achievement and independence, and of financial reward, assuming the financial
and social risks of the process in order to increase economic efficiency (Hisrich
and Peters 2002). Entrepreneurship is the activity of mobilizing economic
resources (Hisrich et al. 2005: 8). An entrepreneur coordinates the factors of
production, manages the process, audits staff, orders raw materials, estimates
demand, and acts as an intermediary between producers and consumers (Winata
2008: 16). The entrepreneur is the key person, coordinator, modern leader and
manager in his or her own enterprise: a risk bearer, innovator, constructor,
organizer, maker, industrial leader, project coordinator, employer, arbitrator,
supplier of financial capital and allocator of resources between alternative uses.
The duties of entrepreneurship especially include risk taking. Entrepreneurs make
decisions under conditions of uncertainty and information asymmetry, and
therefore exhibit more risk-taking behaviour than managers (Stewart and Roth
2001).
Schumpeter defines the entrepreneur as an innovative founder of new companies
who destroys routines and resists old methods. Schumpeter's entrepreneur takes
on these tasks only to succeed, and the entrepreneur's special leadership skills
show the correct way to take action. To realize innovations, an entrepreneur must
resist deviant behaviour and the opposition of an environment hostile to
innovation; Schumpeter's entrepreneur takes pleasure in this opposition. The
entrepreneur is an unconventional creator, not a religious dissenter (Brouwer
2002: 89). Moreover, Schumpeter distinguishes between the entrepreneur and the
ordinary economic actor. The entrepreneur in Schumpeter's theoretical system is
the driving actor of the economic system, and revealing innovations rests on the
entrepreneur's leadership skills. Autonomous adaptation is impossible for ordinary
actors; ordinary economic actors need visionary guides to design new production
and consumption plans (Ebner 2000). Schumpeter defines the entrepreneur as a
leader with intuition and vision, who can realize new things as well as evaluate
old things in new ways. Leadership in the Schumpeterian system is not
homogeneous. Leadership skill derives partly from the use of information, which
resembles a public good: people perceive and respond to information in various
ways, and each internalizes public goods differently. The leader leaves the
manager behind through these skills (Hébert and Link 2006: 101). Schumpeter
also distinguishes the entrepreneur from the manager and the financier. The
entrepreneur usually provides vision and leadership to the organization, unlike the
managers who run the daily affairs of a company; the main task of the
entrepreneur is to decide which goals to pursue, not how to pursue them (Praag
1999: 320). The basic function of the entrepreneur in modern societies is to
realize innovations continuously. In this respect, the power of the modern
entrepreneur is measured by the skill to make innovations and turn them into
concrete commercial products.
much better than traditional markets in manufacturing and services. But as more
and more industries become increasingly digitalized and networked, people can
expect the Schumpeterian dynamic to spread (Brynjolfsson and McAfee 2014).
This destructive change in production methods also affected economic theory and
was influential in the development of the endogenous growth approach. There is
consensus in the economics literature (Pack 1994; Solow 1994; Grossman and
Helpman 1994; Fine 2000) that endogenous growth theory is based on the work
of Romer (1986) and Lucas (1988). Romer's (1986) study of long-term growth
dynamics, and the new growth theories developed as an alternative to neo-
classical growth theory, treat technology and human capital as endogenous inputs
to growth (Shaw 1992: 615). Romer incorporated technological change into the
model and argued that long-term growth actually rests on a stock of knowledge.
Because knowledge can often be neither fully patented nor kept secret, its
production may involve positive externalities. The new theory uses production
functions exhibiting increasing returns instead of the neo-classical production
function. The controversial assumption Romer used in his model is that the
investment and production process yields not only physical output but also new
production knowledge. Romer adopts Arrow's "learning-by-doing" idea in his
model, assuming that technical knowledge is produced as a by-product of
production and investment, and that this knowledge is used in subsequent
production almost as a free input, allowing new production at lower cost and
higher quality.
Romer points to the existence of increasing returns arising from the non-rival
nature of creative ideas. Endogenous growth models suggest that every kind of
policy that affects physical capital, human capital or intellectual capital will have
a positive impact on long-term growth (Dowrick 1995). Improvements in the
quality of education and the workforce, along with factors such as economies of
scale and technological development, matter as well (Patrinos 1994). In his
studies comparing the growth rates of countries, Barro (1990) argues that what
prevents poor countries from catching up with rich ones is not a lack of physical
capital investment but a lack of investment in human capital. Barro's model is one
of the first endogenous growth models, and it treats public policy as an explicit
factor of production. Lucas (1988) defines human capital as the physical,
intellectual and technical capacity of individuals and emphasizes that it is the
driving force of growth. Human capital is a factor of production like physical
capital, but it exhibits increasing rather than diminishing returns; in this sense,
human capital is a concept that generates endogenous growth. Proponents of
endogenous growth theory highlight different subjects, but one of the common
points gathering these views in the same group is the idea that growth is
determined endogenously in the long term. Romer (1986) attempts to explain this
with the stock of knowledge and technical progress, Lucas (1988) with the
education of human capital, Becker et al. (1990) with workforce growth and
productivity, Yang and Borland (1991) with workforce specialization, and Young
(1993) with learning-by-doing. This approach has become the basis for
organizations to form human-oriented, simpler structures with their employees
and customers.
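The increasing-returns logic behind the Romer and Lucas models discussed above can be sketched in simplified textbook form (the equations below are illustrative renditions under standard assumptions, not formulas reproduced from this chapter or from the original papers):

```latex
% Romer (1986), building on Arrow's learning-by-doing: each firm $i$
% draws on the aggregate knowledge/capital stock $K$ as an externality,
\[
  Y_i = A(K)\,K_i^{\alpha}L_i^{1-\alpha}, \qquad A(K) = K^{\eta},\ \eta > 0,
\]
% so returns to capital diminish privately but need not diminish in the
% aggregate, and growth does not die out.
%
% Lucas (1988): human capital $h$ is an accumulable input; $u$ is the
% share of time spent producing and $1-u$ the share spent in education,
\[
  Y = K^{\alpha}(u\,h\,L)^{1-\alpha}, \qquad \dot{h} = \delta(1-u)\,h,
\]
% so sustained growth is driven endogenously by the accumulation of $h$
% rather than by exogenous technical progress.
```

In both sketches, long-run growth is generated inside the model, by knowledge spillovers in the first case and human-capital accumulation in the second, which is the common point the chapter attributes to the endogenous growth school.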
Drucker (1984) defined innovation as "the useful information that provides a first
chance to ensure the efficiency of employees working together in an organization
with different knowledge and skills". Innovation is an instrument of
entrepreneurship and an action that provides the necessary resources for creating a
new capacity (Drucker 1984, 1986). In enterprises, an innovation-oriented
approach also increases employees' psychological ownership and therefore has a
positive effect on job satisfaction through emotional commitment to the
organization (Uslu 2014: 314).
Organizational intrapreneurship refers to internal market research activities in
organizations, the provision of improved and innovative employee services,
developments in the internal markets of large organizations, and the small,
independent units designed to expand technologies and methods. This differs
from comprehensive corporate entrepreneurship, which aims at a profitable
position in external markets (Nielsen et al. 1985). Intrapreneurs are the
individuals who take responsibility for advancing any innovation in the
organization. They can be creative and inventive, but they are also dreamers who
understand how to convert an idea into a profitable reality (Pinchot 1984).
The lean approach refers to the redefinition and reorganization of functions,
departments and processes so that they contribute positively to knowledge and
value creation (Womack and Jones 1996). The participation of employees and
work groups is also the most important aspect of lean production practices
(Kirkman 1997: 735). The concept involves integrating the organizational
structure to ensure a faster response to the quality and standards requested by
consumers. The lean cycle, in turn, refers to finding the correct model and putting
the system on the right track with a visible increase in users.
Until now, strategy and agility have usually been considered opposite poles by
administrators. Strategy is defined as following a clearly determined path built
from a set of carefully selected actions, in other words a previously and
systematically defined path, whereas agility is regarded as the concrete form of
opportunism. In fact, each needs the other: strategy without agility is merely
central planning, and agility without strategy creates chaos.
What many organizations find hard to grasp is that an effective strategy
encourages an agile attitude rather than suppressing it, by setting the limits within
which innovation and experimentation should take place. An agile strategy
approach takes companies from vision to strategy by blending agility and
strategy. In this process, agility comes into play iteratively, so that companies
develop new skills in response to what they have learned and reorganize and
improve their initial strategies. Employees who work continuously through their
processes and who can talk to each other about process improvement can
establish agile management. Otherwise, maintaining the "present condition" will
be seen as less problematic and troublesome, and companies will try to protect
the status quo and its accepted strategies instead of embarking on "new
adventures". This attitude creates stationary systems that face ever more
problems. The system can become more "agile" through the use of a system that
captures the processes in detail and combines them once management has
documented the processes thoroughly. Management becomes more agile as
processes move away from paperwork as far as possible and transaction times
shorten with information technologies. In conclusion, the system should be
turned into a form that can respond quickly to changing conditions. This change
is made possible by well-defined and continuously improved processes, and
shortening cycle times, another requirement of agile management, is likewise
achieved through process improvement. According to agile management,
individuals and the interaction between them are more important than the
processes and tools used; a prototype product or service is more important than
comprehensive documentation; the relationship with the customer is more
important than the text of the customer contract; and adapting to change is more
important than strictly sticking to the existing plan.
Leaders are responsible for creating a vision and having it embraced throughout
the organization; the leader serves the goals he or she sets. Management, by
contrast, is fixed, concerned only with the present, and responsible for realizing
that vision. The manager draws power from formal structures such as laws and
regulations, while the leader draws power from circumstances and personal
qualities (Starratt 1995). Managers have authority, while leaders have power. A
manager is inward-looking in pursuing goals and sees only the trees, while a
leader, being outward-looking, sees the forest. A manager directs and coordinates
at the level of business relations, while a leader builds confidence and acts to
develop his or her subordinates (Lunenburg 2011). The profile of a successful
leader-manager is summarized as a person who is smart and imaginative, who is
entrepreneurial, who can take decisions quickly and who inspires subordinates
(Tannenbaum and Schmidt 1973).
Within knowledge-based management paradigms, we need leaders who can
represent organizations in increasingly competitive environments and take them
forward. At this point, the key leadership qualities of the period to come are
defined as follows (Leslie 2009): righteousness and moral courage; self-awareness
and modesty; empathy; transparency and openness; vision; adaptability and
flexibility; energy; resoluteness in the face of uncertainty; judgement;
consistency; the ability to inspire and motivate; respect and trust; knowledge and
experience; strategic planning; inspiring commitment; change management; and
leading the masses.
The human element lies at the center of the information society's understanding;
therefore, the projection of leadership in the information society is not defined by
professional knowledge or physical qualities alone. The future understanding of
leadership should take this element into account at every stage of management. A
leader should give importance to moral values, concentrating on the task without
ignoring the human element. The leader of the new century is a person who can
turn threats into opportunities through rational approaches and who can offer
practical solutions. The information age has also taken on a more chaotic
structure in parallel with developments in informatics. An effective leader should
adopt a collective-intelligence, team-based approach in order to respond properly
to a new order that has become more complicated under current conditions. This
approach also shifts decision making toward the team structure rather than the
individual, while opening the way for personnel to show their skills and reach
their goals.
In line with the modern management approach, organizational structures have
evolved toward the team approach. With the collective intelligence approach, the
participation of employees in decisions has become a prerequisite in decision-
making steps that involve the future and success of the organization. Within this
process
Leadership takes place within a social context: it is a relation between culture and
psychology, informed by individual and social behaviours (Adams 2013).
According to integrated leadership theory, the rules and values of cultural
communities affect the behaviours of leaders; leaders affect organizational
structure, culture and behaviours; cultural values and customs affect
organizational culture; and organizational culture and customs affect leadership
behaviour. In short, in light of this theory, culture, the leader and the
organizational structure continuously affect one another. In light of the data
obtained from these studies, this shows that cultural differences influence
leadership behaviours (House et al. 2004: 17–18). A comprehensive study yielded
six definitions of global leadership behaviour: charismatic leadership,
characterized by integrity, decisiveness and performance orientation and by
appearing visionary, inspirational and self-sacrificing; team-oriented leadership,
characterized by supporting team collaboration and integration; participative
leadership, characterized by managerial skills; humane-oriented leadership,
characterized by modesty; autonomous leadership, a recently defined concept
characterized by autonomy, independence and individuality; and self-protective
leadership, also recently defined, characterized by being self-centered, procedural
and conscious of status and position (House et al. 2004: 14).
Today, organizations systematically develop new methods and approaches to gain
competitive advantage; the innovative methods employed notably include product
development and the renewal of products, services and processes. Together with
the information society, organizational communication and social networks
change paradigms and play a critical role in the management approaches of the
new age. An open and relation-oriented approach to leadership, which is
particularly effective in increasing the positive behaviour and psychological
capital of employees, is therefore also effective in leading employees to develop
proactive behaviours beyond their work-related performance and roles.
Employees are positively empowered by managers who are sincere and honest
and who build reciprocal relationships, and their desire to contribute to the
organization increases. Such a leadership approach is also effective in shaping
how the innovativeness and entrepreneurship of the organization are perceived.
Further, an open and relation-oriented leadership approach has a strong effect on
institutional innovativeness and on employee participation by developing
employees' psychological capital (Uslu 2015a). The qualities of a positive leader
are listed as: communicating with others in the awareness that they are "human";
being reliable and honest; helping others develop; being prudent, sincere and
authentic; focusing on opportunities rather than obstacles; solving problems for
others; smiling and rarely being straight-faced; being modest and expressing
satisfaction; being flexible and open to the ideas of others; and being unselfish
and a team player (Cameron 2014). The effect of positive leadership is perceived
mainly through the management of the organization and influences employees'
evaluation of their quality levels at work (Uslu 2014: 314). A leader's open and
ethical communication plays a role in employees' organizational commitment,
especially through process management (Uslu 2014: 310). Research shows that
open leadership is an approach that directly supports entrepreneurship and
innovativeness in organizations (Uslu et al. 2015).
Among the most important qualities of the new competence set defined for finance
professionals are leadership skills. In such an environment, communication skills
and managerial abilities are especially important (Deloitte 2010: 3). CFOs
today are trying to manage an ecosystem that grows more sophisticated every day
and includes running global operations to create a competitive environment, using
financial data and analyses for profitable growth, realizing company strategies and
coping with a dynamic regulatory environment. At the same time, CFOs are
expected to find and retain skilful human resources and to communicate with a
broad group of stakeholders. People who can do all of the above, use the
wisdom of the past and the technology of the present, and imagine the innovations
of the future can be called "Renaissance CFOs" (KPMG 2016: 24).
5 Conclusion
References
Adams EA (2013) Context, culture, and cognition: the significant factors of global leadership
research. Int Leadersh J 5(2):94
Aghion P, Howitt P (1997) Endogenous growth theory. MIT Press, Cambridge, MA
Aned OAM, Alya OAM (2013) Invigorating entrepreneurial spirit among workforce. Int J Manage
Sustain 2(5):107–112
Barro R (1990) Government spending in a simple model of endogenous growth. J Polit Econ
98:103–125
Becker GS, Murphy KM, Tamura R (1990) Human capital, fertility, and economic growth. J Polit
Econ 98:12–37
Brouwer MT (2002) Weber, Schumpeter and Knight on entrepreneurship and economic develop-
ment. J Evol Econ 12:83–105
Brynjolfsson E, McAfee A (2014) The second machine age. Work, progress, and prosperity in a
time of brilliant technologies. W.W. Norton, New York
Cameron KS (2014) Positive leadership requires positive energy. The Wheatley Institution. http://wheatley.byu.edu/fellow_notes/individual.cfm?id=35
Castells M (2000) Materials for an exploratory theory of the network society. Br J Soc 51(1):5–24
Deloitte (2010) CFO after the millennium: the 10 essential elements marked the last 10 years,
Deloitte CFO Series 2, September 3
Dowrick S (1995) Innovation and endogenous growth: the new theory and evidence. Economic
approaches to innovation. Edward Elgar, Cheltenham
Drucker PF (1984) Our entrepreneurial economy. Harvard Bus Rev 62(2):59–64
Drucker PF (1986) Innovation and entrepreneurship: practice and principles. Simon and Schuster,
New York
Drucker PF (1999) Management challenges for the 21st century. Harper Collins, New York
Drucker PF (2008) Management, revised edn. Harper Collins, New York
Ebner A (2000) Schumpeterian theory and the sources of economic development: endogenous,
evolutionary or entrepreneurial? In: The International Schumpeter Society conference on
Agile Intrapreneurship in Volatile Business Environment: Changing Roles of. . . 341
Pinchot G (1984) Who is the intrapreneur? In: Intrapreneuring: why you don’t have to leave the
corporation to become an entrepreneur. Harper & Row, New York, pp 28–48
Praag CM (1999) Some classic views on entrepreneurship. De Economist 147(3):311–335
Ries E (2011) The Lean startup: how today’s entrepreneurs use continuous innovation to create
radically successful businesses. Crown Publishing, New York
Romer PM (1986) Increasing returns and long-run growth. J Polit Econ 94(5):1002–1037
Rosenfeld R, Servo JC (1994) Facilitating innovation in large organization. In: Henry J, Walker D
(eds) Managing innovation. Sage Publication, London
Schumpeter JA (1934) The theory of economic development. Oxford University Press, New York
Schumpeter JA (1939) Business cycles: a theoretical, historical and statistical analysis of the
capitalist process. McGraw-Hill Book, New York
Schumpeter JA (1947) The creative response in economic history. J Econ Hist 7(2):149–159
Schumpeter JA (1962) Capitalism, socialism, and democracy, Harper perennial, 3rd edn. Harper
Collins, New York
Shaw GK (1992) Policy implication of endogenous growth theory. Econ J 102:590–620
Solans ED (2003) Financial innovations and monetary policy. In: 38th SEACEN Governors
conference on structural change and growth prospects in Asia - challenges to central banking,
Manila, 13 Feb 2003. https://www.ecb.europa.eu/press/key/date/2003/html/sp030213.en.html
Solow RM (1994) Perspectives on growth theory. J Econ Perspect 8(1):45–54
Starratt RJ (1995) Leaders with vision: the quest for school renewal. Corwin Press, Thousand Oaks,
CA
Stewart WH, Roth PL (2001) Risk propensity differences between entrepreneurs and managers: a
meta-analytic review. J Appl Psychol 86(1):145–153
Tannenbaum R, Schmidt WH (1973) How to choose a leadership pattern. Harvard Bus Rev
51(3):162–180
Uslu T (2014) Perception of organizational commitment, job satisfaction and turnover intention in
M&A process: a multivariate positive psychology model. Unpublished PhD thesis, Department
of Business Administration, Marmara University
Uslu T (2015a) Current leadership approaches and the maintenance of the fourth industrial
revolution from a Schumpeterian point of view. In: International congress on economy
administration and market surveys, Istanbul, 4–5 Dec 2015, pp 110–111
Uslu T (2015b) The comparison of contemporary leadership styles in organizational context
according to employees. In: Proceedings of international conference on modern research’s in
management, economics and accounting, Istanbul, 27 Jul 2015. ISBN: 978-9944-0203-10-2
Uslu T, Çubuk D (2015) The effects of knowledge management and self-organization on organi-
zational creativity: the mediating roles of corporate innovativeness and organizational com-
munication. Int J Org Leadersh 4(4):403–412
Uslu T, Bülbül IA, Çubuk D (2015) An investigation of the effects of open leadership on
organizational innovativeness and corporate entrepreneurship. Procedia Soc Behav Sci
195:1166–1175. doi:10.1016/j.sbspro.2015.06.169
Utterback JM, Acee HJ (2005) Disruptive technologies: an expanded view. Int J Innov Manage 9
(1):1–17
Winata S (2008) The economic determinants of entrepreneurial activity: evidence from a Bayesian
approach. Massey University, Palmerston North
Womack JP, Jones DT (1996) Lean thinking. Simon & Schuster, New York
Xiao S, Zhao S (2012) Financial development, government ownership of banks and firm innova-
tion. J Int Money Fin 31(4):880–906
Yang X, Borland J (1991) A microeconomic mechanism for economic growth. J Polit Econ
99:460–482
Young A (1993) Invention and bounded learning by doing. J Polit Econ 101(3):443–472
Tuna Uslu is an Assistant Professor at Istanbul Gedik University, Graduate Program of Business
Management, Istanbul, Turkey. He is also head of the Sport Management Department. Dr. Uslu
has a BS in Economics and Business Administration from Istanbul Bilgi University, an MSc in
Total Quality Management from Dokuz Eylul University and a PhD in Organizational Behavior
from Marmara University. His research interests lie in fields including micro-economics,
strategic management, sport management, cognitive psychology, and industrial and organizational
psychology. He has taught Business Management, Entrepreneurship and Facility Planning,
Strategies and Methods of International Trade, Quality Management Systems, International
Transportation and Logistics, Social Psychology, and Management and Organization courses,
among others, at both graduate and undergraduate levels. He has written several articles and book
chapters, received awards, and participated in more than a hundred national and international
conferences. He serves on the editorial boards of the International Journal of Business and
Management, Management and Organizational Studies, the Journal of Management and
Sustainability, the International Journal of Economic Behavior and Organization and the
International Journal of Psychological Studies, and has been an ad hoc reviewer for journals such
as the Information Resources Management Journal and the Review of Innovation and
Competitiveness.
Emerging Trends in the Post-Regulatory
Environment: The Importance of Instilling
Trust
Semen Son-Turan
Abstract The financial services industry is one of the most critical pillars of
economic growth and sustainable development in any country. As such, the findings
of the 2016 Edelman Trust Barometer, which has measured trust in institutions with
more than 33,000 respondents in 28 countries over the last 15 years, are highly
alarming. The financial services industry ranks among the lowest, with a trust rating
of a mere 51 % on a global basis. Despite this darkened outlook, some areas seem
promising: sustainability management, responsible innovation, and organized,
systemic efforts to increase transparency, comparability, accountability and
reliability. Although the recent crises in financial markets have led regulators to
agree that a mutual effort is needed to develop procedures for increased compliance
standards and to accelerate the harmonization of accounting and financial reporting
standards, the industry faces an imminent challenge: low levels of trust in financial
services. In this chapter, the author discusses how to rebuild the trust and reputation
of the industry.
1 Introduction
There are some industries in which trust, confidence, the feeling of being in "good
hands", and a gut instinct that a particular company is "the right one" for you weigh
relatively heavily as decision factors when considering whether to start or continue
working with a specific institution. Financial services institutions are intermediaries
with whom most people entrust their nest eggs, relying on these financial "trustees"
to keep them safe and to return them with "interest" when asked to do so. Be it for
investing or even speculating rather than saving purposes, risk-savvy investors, too,
exercise judgment over whether their financial institution is worth the opportunity
cost of not working with an alternative competitor.
S. Son-Turan (*)
Department of Business Administration, MEF University, Ayazaga Cad. No. 4, Maslak
Sariyer, Istanbul, Turkey
e-mail: semen.son@mef.edu.tr
The study of trust has long attracted diverse groups of researchers. Various
measures, definitions and drivers of trust have been proposed, depending on the
discipline of the scholar, the types of stakeholders and the specifics of the industry
(Huberman 2001; Tyler and Stanley 2007; Guiso et al. 2008).
The concept of trust in relation to financial services can be addressed in a
systemic context (with respect to financial markets and their instruments) or on an
institutional basis (i.e., the confidence customers have in local banks, their stock
brokers or insurance agents). Clearly, the financial system is comprised of markets,
institutions, instruments and stakeholders, all of which interact and create chain
reactions, causing spillover effects that can even be contagious at the international
level. This chapter, however, focuses on the institutional perspective and on how
financial institutions in particular can and do tackle customer trust, confidence and
reputation-related issues.
Without risk or uncertainty about the outcomes of certain actions, trust would
not be needed. Trust is inherently associated with vulnerability, which individuals
are willing to accept on the basis of positive expectations about the intentions or
behavior of another in a situation of interdependence and risk (Ennew and Sekhon
2007).
The financial services industry is by far one of the most versatile industries,
which also determines its risk and return potential. In that sense it is uniquely fragile,
as it relies mostly on human sentiment. What makes it so unique and, at the same
time, so volatile can easily be understood by examining the size and scope of financial
innovations, some of which are shown in Fig. 1. As trade began to flourish, so did
the financial system. In ancient Greece and during the Roman Empire, lenders
based in temples made loans, took deposits and changed money. Archeology from
this period in ancient China and India also shows evidence of money-lending
activity. Medieval and Renaissance Italy, particularly the affluent cities and wealthy
merchants of Florence, Venice and Genoa, is credited with the greatest role in the
development of the modern banking system. Seventeenth-century Amsterdam set
the stage for many financial innovations, such as the first joint stock company in
history, the Dutch East India Company, often considered the first multinational
corporation in the world and the first company to issue stock. Undoubtedly, the
invention of the automated teller machine (ATM) at the end of the 1960s, the advent
of telephone banking by the mid 1980s and the bloom of Internet banking in the
1990s laid the foundation of today's "modern" financial services industry, with
concurrent regulatory gaps either paving the way to the misuse of innovation or
constraints opening up the stage for disruptive competitors, such as new-generation
financial intermediaries like crowdfunding platforms, or cryptocurrencies like
Bitcoin.
Marked by global macroeconomic instability and increased disruption, the
financial services landscape, which comprises retail banks, insurance companies,
investment banks, and accounting, audit and consumer finance companies, among
others, clearly needs to restore trust and boost its clientele's confidence.
Fig. 1 A short account of financial services history. Source: Author's own elaboration; information is drawn from various sources (http://www3.weforum.org/docs/WEF_FS_RethinkingFinancialInnovation_Report_2012.pdf, https://bitcoinmagazine.com/articles/quick-history-cryptocurrencies-bbtc-bitcoin-1397682630, http://www.money-zine.com/investing/investing/collateralized-mortgage-obligations/, https://www.imf.org/external/pubs/ft/wp/2010/wp10164.pdf, http://w4.stern.nyu.edu/research/technological_change_and_fin_innovation_in_banking.pdf, http://www.freedman-chicago.com/ec4i/History-of-Crowdfunding.pdf)
The 2016 Edelman Trust Barometer1 reveals that the financial services industry,
although on an upward trend since 2012 with an eight-point increase, ranked
last in the 28-country survey of the general population with a trust rating of 51 %.
1 Source: http://www.edelman.com. The 2016 Edelman Trust Barometer surveyed more than 33,000 respondents, with an oversample of 1150 general population respondents ages 18 and over, and 500 informed public respondents in the U.S. and China and 200 informed public respondents in all other countries, representing 15 % of the total population across 28 countries.
348 S. Son-Turan
According to the results, the public is not only interested in adherence to profit-
related goals but is also perceptive of the societal contributions of firms.
Integrity and engagement are among the potential drivers of lower trust levels.
The previous, 2015 Edelman Trust Barometer, on the other hand, pointed out that
only technology-related innovation in the financial services industry, that is,
electronic and mobile payments, garnered more trust than the industry itself.
According to a PwC report (PwC 2014),2 the problem the industry faces is
bigger than trust: it is the apathy and frustration of clients who feel that all
financial services institutions are the same. Anxieties have multiple drivers; except
in the investment banking sector, which is most influenced by press coverage,
personal experience is the most significant factor determining the trust level,
followed by press coverage, transparency of prices and terms and conditions, and
word of mouth. According to the same report, the factors that might improve
consumer trust include greater transparency on products and services (46 %), stricter
codes of conduct for employees (41 %), changes to remuneration rules (40 %) and
improved internal governance (37 %) (Fig. 2).
Reputation and trust are closely related. Jaffer et al. (2014) assert that strong
trustworthiness, that is, willingness and competence in keeping commitments,
requires that the institutions responsible for delivering an obligation both render an
account of their performance and be held accountable for it. To that end, informed
and objective performance appraisal, clear and enforceable standards of what ought
to be done, and clear, accessible communication of what has been done are
necessary.
Kindleberger and Aliber (2005) explain that the history of the financial services
industry has been the stage for a multitude of bubbles, scandals, crashes, panics and
fraudulent activity. Clearly, recent financial crises, such as the 2008 US Housing
Bubble resulting in the sub-prime mortgage crisis (the financial crisis of 2008), the
2001 US Internet stock crash, the 1985–1989 bubble in real estate and stocks in
Finland, Norway and Sweden, and the bubble in real estate and stocks in Thailand,
Malaysia, Indonesia and several other Asian countries have not contributed
positively to customers' perceptions of financial services institutions.
2 The report is based on analysis of a survey of over 2000 people across the UK.
Kindleberger and Aliber differentiate between bubbles that are swindles and those
that are not. According to the authors, the Mississippi Bubble was not a swindle; the South
Sea Bubble was. A bubble is said to generally start with an apparently legitimate or
at least legal purpose. Accordingly, what became the Mississippi Bubble initially
started as the Compagnie d’Occident, to which the Law system added the farming-
out of national tax collections and the Banque. In the South Sea Bubble on the other
hand, the monopoly of trade in the South Atlantic is said to be purely incidental
(Kindleberger and Aliber 2005: 190).
Whether or not investors in financial markets can differentiate between purposeful
and incidental financial tragedy, though, is a different concern.
2 Background
Be it retail or wholesale finance, the sources and drivers of trust (or distrust) are
more or less the same across the globe.
Llewellyn (2014) discusses the importance of trust in financial services, with
particular emphasis on the UK. According to the author, the structural and
behavioural elements as a result of which trust has been lost include: (1) a
succession of high-profile scandals, coupled with the (complex) nature of financial
products and services and the vulnerability of retail customers; (2) numerous
episodes of mis-selling of some financial products; (3) a lack of diversity with
respect to the ownership structure of financial firms, corporate governance
arrangements, capital structure and, primarily, business models, the latter reducing
consumer choice and effective competition; (4) the fact that relationship banking
has given way to transactional banking, which in turn has promoted a sales culture
and potentially hazardous incentive structures within banking and other financial
firms; (5) a serious erosion in the application of the principles of the "Treating
Customers Fairly" regime, which was imposed on the retail financial services
industry by the then UK regulator, the Financial Services Authority (FSA); (6) the
low priority given to ethical standards within financial firms; (7) the existence of
perverse incentive structures inherent in the shareholder value maximization model
and within financial firms, in forms such as bonuses and salaries; (8) opportunistic
cross-subsidization by life assurance firms, which offer new customers better
returns than existing ones; (9) unjustifiably high and complex charges in a complex
intermediation setting and the lack of transparent pricing; (10) the lack of truly
effective competition in some retail markets; (11) the lack of access to financial
products and services for some consumers (financial inclusion issues); and (12) the
fact that the independent advisory market is weak owing to the exit of several retail
banks as a result of regulatory changes.
3 https://www.accenture.com/us-en/insight-financial-services-technology-rebuild-consumer-confidence.aspx
should also adopt a more sophisticated approach that explores how innovation and
new technologies can simplify the consumer experience and enhance their interac-
tion with financial services. According to the report, this would enable the delivery
of faster, safer and more convenient ways for consumers to handle their finances,
independent of government intervention. Furthermore, with regard to the proposition
that financial crises have been the major cause of declining trust, the findings
show that more long-term drivers, such as the distinctive nature of consumer
finance products, are more instrumental in defining the concept of trust. Finally,
the report proposes two interventions: First, the regulator should create a kite-mark
for a wide range of privately provided ‘trusted financial products’, from current
accounts to pensions, which would conform to mandated standards and act as
market norms against which all other products could be compared. Second, having
secured product quality through the ‘trusted product’ kite-mark, competition
between providers should be strengthened.
However, whether regulation is a substitute for reputation is another issue and
beyond the scope of this article.
A Global Consumer Banking Survey (EY 2014), which includes responses from
over 32,000 retail banking consumers across 43 countries, explores the role of trust
in creating customer advocates and how valuable trust is to the overall banking
relationship. It established that the single most sought-after benefit needing
improvement is the transparency of fees and the simplicity of offers and
communication. Secondly, while customers are satisfied with the convenience of
traditional banking, their expectations are constantly rising as new technologies
and consumer benefits develop. The findings of the report suggest that trust is
mostly associated with customers' experience of how they are treated, followed by
communication and problem solving.
3 Literature Review
The role of trust, confidence and reputation in financial services has been addressed
frequently in academic literature. Tyler and Stanley (2007), in their qualitative
research based on 147 in-depth interviews with corporate bankers and their clients,
find that small companies are more trusting than large corporates. Furthermore, the
authors establish that bankers use calculative and operational trust and are cynical
about their counterparts' trustworthiness. Pi et al. (2012) propose a framework of
the intention to continuously adopt online financial services and suggest that
(1) website trust influences the intention to continuously adopt online financial
services, (2) the cognitive trust of online customers influences affective trust,
(3) transaction security, website and company awareness, prior Internet experience
and navigation functions directly influence the cognitive trust of online customers,
and (4) transaction security is the only factor that influences the affective trust of
online customers. Bejou et al. (1998), examining the relationship between trust,
ethics and relationship satisfaction, establish that, from the customer's perspective,
the determinants of relationship satisfaction are thought
4 Conclusion
Compliance, especially since the Sarbanes-Oxley Act, has been thought to be an
effective tool not only for hindering potential fraudulent activity but also for
instilling the trust lost in financial services. However, surveys like the previously
mentioned Edelman Trust Barometer imply that these measures have not been
enough and that the public, and especially the millennials, who form a huge
percentage of the near-future financial services customer base, care about social
responsibility rather than simple control and compliance, which are more or less de
facto mechanisms nowadays.
The mind of a typical member of Generation Z (“Gen Z”4) is a crowded place
(Holland 2013). Gen Z-ers are already the biggest generational group in the U.S.,
having overtaken the millennials in what Sparks & Honey describe as a coming
"demographic tsunami" (Bershidsky 2014).
4 Individuals born between 1995 and 2012, according to http://www.socialmarketing.org/newsletter/features/generation3.htm
An article by the Wall Street Journal
(WSJ 2016) reports that the first wave of Gen Z’s 1.8 million job candidates will
enter the labor force in May 2016 and, like preceding generations, they come
weighted with unique characteristics determined partly by the events and technol-
ogy that helped shape their formative years. The same article highlights findings of
a study from Randstad Holdings and Millennial Branding that portrays the digital
dependency of this generation: 84 % of Gen Z sleep with their phones. We
increasingly see headlines in popular media such as "Gen Z is about to rock the
banking industry" (WSJ 2016), "'Generation Z' is entrepreneurial, wants to chart its
own future" (Northeastern News 2014) and "The money mind-set of Generation Z"
(Holland 2013).
To conclude, this chapter has portrayed a major concern for the financial services
industry: the declining trust of customers and investors and the concurrent loss of
reputation. Understanding the dynamics underlying these developments will help
financial services companies revamp their product and service offerings by
adapting to changing social, environmental and economic conditions. They are
advised, in particular, to gain a deep understanding of the needs and demands of
the newer generations and to embrace their values. Secondly, transparency in
management and reporting, with universally agreed-upon rules and regulations,
should be a de facto understanding across the industry.
References
Bejou D, Ennew CT, Palmer A (1998) Trust, ethics and relationship satisfaction. Int J Bank Mark
16(4):170–175
Bershidsky L (2014) Here comes generation Z. 18 Jun 2014. http://www.bloombergview.com/
articles/2014-06-18/nailing-generation-z
Brown M, Whysall P (2010) Performance, reputation, and social responsibility in the UK’s
financial services: a post-‘credit crunch’ interpretation. Serv Ind J 30(12):1991–2006
Choi H, Varian H (2012) Predicting the present with Google Trends. Econ Rec 88(s1):2–9
Ennew C, Sekhon H (2007) Measuring trust in financial services: the trust index. Consum Policy
Rev 17(2):62
EY (2014) Winning through experience. EY Global Consumer Banking Survey 2014. http://www.
ey.com/Publication/vwLUAssets/EY_-_Global_Consumer_Banking_Survey_2014/$FILE/EY-
Global-Consumer-Banking-Survey-2014.pdf
Gill C (2008) Restoring consumer confidence in financial services. Int J Bank Mark 26(2):148–152
Guiso L, Sapienza P, Zingales L (2008) Trusting the stock market. J Fin 63(6):2557–2600
Herman J (2015) Innovators improve consumer confidence in the Financial Solutions Lab. 30 Mar
2015. https://www.uschamberfoundation.org/blog/post/innovators-improve-consumer-confidence-financial-solutions-lab/42924
Holland K (2013) The money mind-set of Generation Z. 13 Sep 2013. http://www.cnbc.com/id/
101026186
Howcroft B, Hamilton R, Hewer P (2007) Customer involvement and interaction in retail banking:
an examination of risk and confidence in the purchase of financial products. J Serv Mark 21
(7):481–491
1 Introduction
Culture is the entirety of values, norms, beliefs and assumptions that govern
individuals' attitudes and behavior. Accordingly, culture causes individuals'
attitudes and behaviors to exhibit similarities and at the same time allows
Recently, finance and international business scholars have studied cultural factors
as legitimate variables in explaining financial decision-making processes at the
individual and institutional levels. Since culture might play an important role in
economic behavior such as financial decisions (Chang and Noorbakhsh 2009),
researchers have tried to determine the dimensions of culture and to contribute to
the understanding of national culture dimensions.
Hofstede’s cultural dimensions provided researchers with the most comprehen-
sive framework to analyze the effects of cultural values on business organizations
358 E.H. Cetenak et al.
rules, and intolerance toward deviant ideas and actions (Hofstede 1980; Podrug
2011). People in uncertainty-avoidant cultures favor an orderly structure in their
organizations, institutions, and personal relations and prefer well-anticipated events
(Mihet 2013). Uncertainty avoidance is also linked to preferences such as rules,
stability, uniformity, and especially closely related to psychological characteristics
widely discussed in behavioral financial economics such as conservatism and risk
aversion (Chen et al. 2015). These cultural properties influence people's attitudes
towards risk at the micro level (Mihet 2013). Expectedly, people do not want to take
risks in uncertainty-avoidant societies.
The opposite holds for people with a high tolerance for uncertainty. People in
low uncertainty-avoidant cultures often exhibit a low sense of urgency in ambigu-
ous, surprising, or unstructured situations. In contrast, people in high uncertainty-
avoiding cultures feel more anxious in such situations, and therefore tend to take
immediate action to reduce the level of ambiguity (Chen et al. 2015).
The power-distance dimension measures “the extent to which less powerful
members of institutions and organizations within a country expect and accept that
power is distributed unequally” (Hofstede 1983, 1998; Dimitratos et al. 2011; Mihet
2013). In the organizations and institutions of high power-distance countries,
individuals know whom to obey. Formal communication channels work from the
upper levels towards the lower levels. In these systems, uncertainty is reduced by
power distance. Institutions and all units have clearly defined processes (Sargut 2001). On the
contrary, decision-making is more likely to be decentralized in low power-distance
countries. Thus, organizational structures are fairly decentralized with flat hierar-
chical pyramids in such countries. Further, management in low power-distance
countries is more likely to delegate decision-making power (Dimitratos et al. 2011).
Masculinity stands for a society in which social gender roles are clearly distinct:
men are supposed to be assertive, tough and focused on material success, and
women are supposed to be modest, tender, and concerned with quality of life;
while Femininity indicates a society in which social gender roles overlap: both men
and women are supposed to be modest, tender and concerned with quality of life
(Hofstede 1983; Podrug 2011). In high-masculinity societies, propensities such as
competition, money-making and audacity stand at the forefront, while humanitarian
propensities may lag behind. In feminine societies human interaction is important
and there is great emphasis on the quality of life (Sargut 2001).
In addition to individual and organizational variables, cultural background is an
effective criterion in the decision-making process. According to Li et al. (2013),
individualism has a positive and significant association with corporate
risk-taking, whereas uncertainty avoidance has a negative and significant one.
Cultural values such as individualism and uncertainty avoidance may cause differ-
ent legal frameworks as well as different laws concerning investor protection and
creditor rights, which in turn may affect corporate risk taking. For example, more
individualistic countries have legal systems that support individual freedom and
autonomy, which may encourage corporate risk taking (Rehbein 2014).
Griffin et al. (2012) examined the effects of culture on firms in the manufactur-
ing sector in the period 1997–2006. They are the only ones who use a hierarchical
360 E.H. Cetenak et al.
Additionally, we test the effects of firm-specific variables (Return on Assets
and Size) and country-specific variables (Ease of Doing Business and Total
Credits Provided by the Financial Sector) on these financial decisions.
In this study we examined the relationship between cultural dimensions and certain
fundamental financial decisions by using multilevel mixed models which are basi-
cally statistical models of parameters that vary at more than one level (Raudenbush
and Bryk 2002). Multilevel modelling (also known as hierarchical linear, nested
or mixed modelling) is a generalization of regression methods, and as such can
be used for a variety of purposes, including prediction, data reduction, and causal
inference from experiments and observational studies (Gelman 2012).
Multilevel data structures used in multilevel models are very common in the
social sciences (Albright and Marinova 2010), e.g. students may be nested within
schools, voters within districts, firms within industry or country. This type of
The Effect of National Culture on Corporate Financial Decisions 361
multilevel data structure may cause several violations when using standard OLS
regression (Kayo and Kimura 2011: 363), such as correlated errors, biased
estimates of coefficient standard errors, and wrongful interpretation of the
results and significance of the predictor variables (Mihet 2013).
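To see concretely why clustered observations violate OLS's independence assumption, the sketch below (our illustration, not the authors' code; all variable names and parameter values are hypothetical) simulates firms nested in countries with a shared country-level intercept and estimates the intraclass correlation (ICC), i.e. the share of outcome variance sitting at the country level:

```python
# Illustrative sketch: firms nested in countries share a country-level
# random intercept, so their outcomes are correlated within countries.
import numpy as np

rng = np.random.default_rng(42)
n_countries, n_firms = 20, 50
sigma_u, sigma_e = 0.5, 1.0            # country-level and firm-level sd

u = rng.normal(0, sigma_u, n_countries)            # country intercepts
y = u[:, None] + rng.normal(0, sigma_e, (n_countries, n_firms))

# One-way ANOVA (method-of-moments) variance components
grand = y.mean()
msb = n_firms * ((y.mean(axis=1) - grand) ** 2).sum() / (n_countries - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() \
      / (n_countries * (n_firms - 1))
var_u = (msb - msw) / n_firms          # between-country variance
icc = var_u / (var_u + msw)            # close to 0.25/(0.25+1.0) = 0.2
print(round(icc, 2))
```

A non-trivial ICC means firms from the same country are not independent draws, which is exactly the situation multilevel models are built to handle.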
Within a multilevel framework, multilevel models allow us to test multilevel
(industry- or country-level) theories simultaneously. Secondly, with multilevel
models we can handle unbalanced data easily. For example, if the number of
firms varies widely across industries or countries, a multilevel mixed-model
approach lets us detect all of those multilevel effects separately. Furthermore, via
multilevel mixed models we can add cross-level interactions into a model and by
doing so we can see not only firm, industry or country effects but also their cross-
level interactions (Mihet 2013).
A firm's financial decisions may be affected by the characteristics of its
industry and its home country, besides its own individual characteristics.
Thus, in order to explain the behaviour of a firm properly, each level should be
considered in the analysis. In recent years, statistical methods that take into account
multilevel data have gained popularity (Albright and Marinova 2010).
In this study, we analyze three levels of financial decision determinants in
20 selected countries for the year 2014 (for the list of countries, please see Appendix).
Most of the countries are OECD countries. We excluded some of the European
countries because they exhibit cultural similarities with each other and may have
caused the study to be biased. For the U.S.A., we only included firms quoted on
the NYSE with SIC codes from 2000 to 3999, to avoid heterogeneity problems
caused by industry diversity. We conducted the analyses only on non-financial firms.
The first level in our study is ‘firm’, while the second level is ‘industry’, and the
third level is ‘country’. By applying multilevel models, we assume that observations
across industries and countries are correlated amongst themselves. Similarly, it is
rational to suppose that firms operating in the same industry and in the same country
behave similarly regarding financing decisions. We also assume that all of our
firm-level independent variables have random slopes, which are affected by both
industry and country factors. The basic model we fit in this study is given below:
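The equation itself appears to have been lost in extraction. Based on the description that follows (three nested error terms, and firm-level predictors whose slopes vary by industry and country), a plausible reconstruction is:

```latex
% Our reconstruction of the three-level mixed model described in the text
y_{ijk} = \beta_{0} + \boldsymbol{\beta}_{ij}^{\top}\mathbf{x}_{ijk}
          + u_{i} + r_{ij} + e_{ijk},
\qquad
u_{i} \sim N(0,\sigma_{u}^{2}), \quad
r_{ij} \sim N(0,\sigma_{r}^{2}), \quad
e_{ijk} \sim N(0,\sigma_{e}^{2})
```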
Here i denotes country, j industry and k firm. Additionally, u_i, r_ij and e_ijk
are random error terms representing the variance across country, industry-country
and firm-industry-country, respectively; each is normally distributed with zero
mean and variance σ².
We have identified seven dependent variables, one for each of seven different
models: the leverage ratio, Altman Z score, R&D expenses, SG&A expenses,
working capital investment, retained earnings and discretionary accruals.
The data used in the analysis are given in Table 1. In order to calculate firm level
Table 1 (continued)

Variables | Description | Source | Mean | SD
Country-level control variables
Ease of doing business | Economies are ranked on their ease of doing business, from 1 to 189. A high ease of doing business ranking means the regulatory environment is more conducive to the starting and operation of a local firm. | World Bank | 32.09 | 35.68
Credit from private sector | Domestic credit provided by the financial sector; includes all credit to various sectors on a gross basis (% of GDP). | World Bank | 143.7 | 58.39
variables, income statement and balance sheet items were used by the authors. The
ease of doing business and credit from private sector variables were obtained from
the World Bank, and Hofstede's cultural dimension variables were obtained from
Hofstede's website. Table 1 also presents the definition, source and descriptive
statistics of each variable used.
Leverage was calculated as total debt to total assets ratio and used as proxy for
firms’ capital structure. The mean of Leverage was 0.24, which indicates an overall
low leverage ratio for the sample. The Altman Z score is used as proxy for risk or
distance from bankruptcy and was calculated by following Mackie-Mason’s (1990)
modified Altman Z score formula (see Table 1). The average of the Z scores is
1.415. R&D, SG&A expenses, working capital and retained earnings were all
scaled dividing by total assets. The average of R&D was 0.04, the average of
SG&A was 0.20 and the average of working capital and retained earnings were 0.39
and 0.23, respectively. Discretionary accruals were calculated based on the
modified Jones model (Jones 1991) as adjusted by Dechow et al. (1995).
Discretionary accruals are an indication of a firm's earnings management
decisions and can be considered a clue to manipulation of earnings. The average was 0.08.
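As a sketch of this procedure (our illustration with simulated data, not the authors' code; the coefficient values are hypothetical), discretionary accruals under the modified Jones model are the residuals from regressing total accruals on the inverse of lagged assets, the change in revenues net of receivables, and gross PPE, all scaled by lagged assets:

```python
# Illustrative sketch of the modified Jones model (Dechow et al. 1995).
# Real studies estimate the regression separately per group (here the
# text uses year-industry-country groups); we fit one pooled regression.
import numpy as np

rng = np.random.default_rng(7)
n = 200
inv_assets = 1.0 / rng.uniform(50, 500, n)       # 1 / lagged assets
d_rev_rec  = rng.normal(0.05, 0.10, n)           # (dREV - dREC) / assets
ppe        = rng.uniform(0.2, 0.8, n)            # PPE / lagged assets

# Simulated total accruals: a "normal" component plus small noise
total_acc = 0.1 * inv_assets + 0.3 * d_rev_rec - 0.05 * ppe \
            + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), inv_assets, d_rev_rec, ppe])
coef, *_ = np.linalg.lstsq(X, total_acc, rcond=None)

# Discretionary accruals = regression residuals; the study uses their
# absolute value, since magnitude matters more than direction.
disc_acc = np.abs(total_acc - X @ coef)
print(round(disc_acc.mean(), 3))   # mean |DA|, small by construction
```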
Hofstede’s four dimension indexes are independent variables in our analysis:
uncertainty-avoidance, masculinity, individualism and power-distance. The aver-
age of the uncertainty score was 63.92. Higher scores denote higher avoidance of
uncertainty. The average of masculinity score was 52.52, and the averages of
individualism and power-distance indexes were 49.90 and 60.21, respectively.
5 Results
Table 2 presents the results. Our dependent variables are leverage ratio, Altman z
score, R&D and SG&A expenses, working capital investment, retained earnings
and discretionary accruals. Size and return on assets are firm level control variables.
Ease of doing business and credit from private sector are country level control
variables. Most of the cultural variables have a significant effect on financial
decisions. In addition, the firm-level control variable ‘size’ has an effect
on all financial decisions, while ROA has an effect on most of them. Similarly, as can
be seen from the table, the ‘ease of doing business’ and ‘credit from private sector’
variables have an effect on most of the financial decisions. The models are
significant as a whole.
6 Conclusion
In this study, we have aimed to determine the effect of national culture on financial
decisions. Firstly, we have given some information on the concepts of culture and
national culture. Then, we have theoretically discussed the effects of culture and
national culture dimensions, as defined by Hofstede, on the financial decisions.
Finally, we examined the empirical evidence of the effect of national culture on
various financial decisions by utilizing multilevel models.
Our first dependent variable was leverage ratio, which is a proxy for capital
structure. Leverage ratio denotes how extensively a company uses debt. It is some-
times directly associated with the riskiness of the firm, hence risk-taking. However, it
would be inadequate to say that the lower the leverage ratio, the less risky the business
is. In general, if a firm’s leverage ratio is too high, it’s a signal that the firm may be in
financial distress. But, at the same time a leverage ratio that is too low may be a sign
that the firm is over-relying on equity to finance its business, which can be costly and
inefficient. In our findings, leverage ratio is positively and significantly correlated with
both uncertainty avoidance and masculinity, while it is negatively and significantly
correlated with individualism. Masculinity, which denotes such traits as money-
making and competition, also affects risk-taking behavior. The tax advantage obtained
by using debt financing may encourage managers who are keen on increasing profits to
use leverage as a tool to do so. As individualism increases, leverage ratio decreases.
Altman Z Score’s gives an indication of risk or distance from bankruptcy as well
as financial performance. A higher Z score denotes lower riskiness for the firm.
Based on our results, uncertainty avoidance, individualism and power distance are
positively and significantly correlated with the Z score. As uncertainty avoidance
increases, risk avoidance increases. Higher individualism and power-distance also
denote risk avoidance and a better financial performance.
R&D expenditures are instrumental for future rather than short-term
earnings. Thus, there is a trade-off for firms. A firm with high R&D expenditures
takes on a certain level of risk. Our findings indicate that the R&D expenditure is
negatively and significantly correlated with uncertainty avoidance and power-
distance. Risk avoiding managers prefer not to spend too much on R&D and
similarly an authoritative management culture (high power distance) that prefers
clearly defined rules isn’t expected to promote R&D expenditures.
SG&A expenditures include management salaries, bonuses, marketing and even
some R&D expenses, so we can interpret the results for these expenses similarly
to R&D expenditures. These are expenses made not only for today's earnings but
also for future benefits. They are correlated positively with individualism
and negatively with power distance, which indicates that as individualism increases,
SG&A expenses increase, and as power distance becomes higher, these expenses get
lower.
Working capital investments usually follow three main motivations: operations,
financial prudence and/or speculation. Working capital investments are signifi-
cantly and negatively correlated with all cultural dimensions examined here. The
negative relationship with uncertainty avoidance can be explained by risk avoid-
ance (higher working capital means financial flexibility and lower risk of cash
constraint). At the same time, the results also suggest that masculine, authoritative
and individualistic management styles prefer lower working capital investment. This
implies lower accounts receivable, lower cash and inventory, and higher
accounts payable levels.
The variable retained earnings represents the portion of profits that the firm, with
various motives, chooses not to distribute as dividends to shareholders but to
retain. It has a positive and significant correlation with uncertainty avoidance
and power distance at the 5 % level, and a positive and significant correlation
with individualism at the 10 % level. This finding suggests that risk-avoiding
firms retain more earnings. At the same time, as individualism and authoritative
management levels increase, management prefers not to distribute dividends and
to keep higher retained earnings.
Discretionary accruals were calculated based on the modified Jones model (Jones
1991) as adjusted by Dechow et al. (1995); they are the residuals of the modified
Jones model obtained from year-industry-country estimation. We considered the
absolute value of discretionary accruals because the magnitude of the residuals
is more important than their direction. Discretionary accruals are an indication of a
firm's earnings management decisions and can be considered a clue to manipulation
of earnings. As individualism and power distance increase, such earnings
management practices decrease. One interpretation is that as individualism
increases, managers become more avoidant of unethical behavior. At the same time,
because power distance reduces agency conflict, it results in lower discretionary
accruals.
As a whole, the findings in our study suggest that firms' financial decisions
cannot be explained by the risk-return formula alone. Cultural influences such as
power, control and individualism play a crucial role in the financial
decision-making process.
Appendix
References
Albright JJ, Marinova DM (2010) Estimating multilevel models using SPSS, Stata, SAS, and
R. Indiana University, Bloomington, IN
Chang CH, Lin SJ (2015) The effects of national culture and behavioral pitfalls on investors’
decision-making: herding behavior in international stock markets. Int Rev Econ Fin
37:380–392
Chang K, Noorbakhsh A (2009) Does national culture affect international corporate cash hold-
ings? J Multinatl Fin Manage 19(5):323–342
Chen Y, Dou PY, Rhee SG, Truong C, Veeraraghavan M (2015) National culture and corporate
cash holdings around the world. J Bank Fin 50:1–18
Dechow PM, Sloan RG, Sweeney AP (1995) Detecting earnings management. Acc Rev
70:193–225
Dimitratos P, Petrou A, Plakoyiannaki E, Johnson JE (2011) Strategic decision-making processes
in internationalization: does national culture of the focal firm matter? J World Bus 46
(2):194–204
Gelman A (2012) Multilevel (hierarchical) modeling: what it can and cannot do. Technometrics
48:432–435
Griffin D, Li K, Yue H, Zhao L (2009) Cultural values and corporate risk-taking. University of
British Columbia and Peking University Working Paper
Griffin DW, Li K, Yue H, Zhao L (2012) How does culture influence corporate risk-taking?
doi:10.2139/ssrn.2021550. Available at SSRN: http://ssrn.com/abstract=2021550
Hofstede G (1980) Motivation, leadership, and organization: do American theories apply abroad?
Organ Dyn 9(1):42–63
Hofstede G (1983) National cultures in the four dimensions: a research-based theory of cultural
differences among nations. Int Stud Manage Org 13(1–2):46–74
Hofstede G (1998) Attitudes, values and organizational culture: disentangling the concepts. Org
Stud 19(3):477–498
Jaggi B, Low PY (2000) Impact of culture, market forces, and legal system on financial disclo-
sures. Int J Acc 35(4):495–519
Jones JJ (1991) Earnings management during import relief investigations. J Acc Res 29:193–228
Kanagaretnam K, Lim CY, Lobo GJ (2011) Effects of national culture on earnings quality of
banks. J Int Bus Stud 42(6):853–874
Kayo EK, Kimura H (2011) Hierarchical determinants of capital structure. J Bank Fin 35
(2):358–371
Kurtz RS (2003) Organizational culture, decision making, and integrity: the National Park Service
and Exxon Valdez. Public Integr 5(4)
Li K, Griffin D, Yue H, Zhao L (2013) How does culture influence corporate risk-taking? J Corp
Fin 23:1–22
Mackie‐Mason JK (1990) Do taxes affect corporate financing decisions? J Fin 45(5):1471–1493
Mihet R (2013) Effects of culture on firm risk-taking: a cross-country and cross-industry analysis.
J Cult Econ 37(1):109–151
Nolder C, Riley TJ (2013) Effects of differences in national culture on auditors’ judgments and
decisions: a literature review of cross-cultural auditing studies from a judgment and decision
making perspective. Auditing 33(2):141–164
Podrug N (2011) Influence of national culture on decision-making style. SE Eur J Econ Bus 6
(1):37–44
Raudenbush SW, Bryk AS (2002) Hierarchical linear models: applications and data analysis
methods, vol 1. Sage, London
Rehbein K (2014) Does culture influence corporate risk taking? Acad Manage Perspect 28(1):1–3
Sargut AS (2001) Kültürler Arası Farklılaşma ve Yönetim. İmge Kitabevi, Ankara
Emin Huseyin Cetenak is an Assistant Professor of Accounting and Finance at Nevsehir Haci
Bektas Veli University, Department of Management, Nevsehir, Turkey. Dr. Cetenak holds BS
and MBA degrees in Business Administration and a PhD in Finance (2012), all from Cukurova
University. His research fields include corporate finance, corporate governance and working
capital management. He has taught Financial Management, Corporate Finance, International
Finance, Working Capital Management and Financial Risk Management courses, among
others, at both graduate and undergraduate levels.
Elif Acar is an instructor of finance at Adana Science and Technology University, Department of
Management Information Systems, Adana, Turkey. Ms. Acar has a BA in Economics from
Bogaziçi University (1994), an MBA from the University of Hartford, CT (USA, 1998) with an
academic excellence scholarship, and is currently pursuing her PhD in finance at Çukurova
University in Adana. Ms. Acar has extensive working experience in the finance field prior to
her academic studies. She worked in the banking sector in Turkey during 1995–1997, then at
Standard & Poor’s credit rating agency in New York and in London for a total of 12 years as a
corporate and project finance rating analyst with managerial responsibilities. She has conducted
corporate credit analyses on many projects in the energy and utilities sector, alternative energy
projects, public private partnerships, and infrastructure projects. Her research interests are corpo-
rate finance, capital structure, capital markets, information technology risk, and financial decision
making process. She has taught international finance, banking and financial markets at the
undergraduate level. She also gives seminars to entrepreneurs and mentors of entrepreneurs
through the university’s continuing education center and various university-industry partnership
organizations. Ms. Acar is bilingual in Turkish and English.
Human Side of Strategic Alliances,
Cooperations and Manoeuvrings During
Recession and Crisis
Tuna Uslu
Abstract Together with the globalizing economy, it is no longer possible for any
system to survive by ignoring market changes and transformations. A change
taking place anywhere in the world triggers complex processes and affects
everyone by growing in waves. Successful business conduct today is based on
predicting the growth speed of these waves and on the ability to carry out
strategic cooperations and manoeuvres accordingly. Sometimes these fluctuations
also trigger serious crises. Apart from the shocks created in organizational
structures, periods of crisis have complex effects on people. Some people
approach these events with hesitation, while other people or organizations have
the skills to turn these processes into opportunities. Practical examples show
that organizations that adapt to the new conditions by simplifying and shedding
burdens during the contraction process are able to emerge in a better condition
than before the crisis. This section discusses how organizations can become
human-oriented when acting strategically during strategic alliances, cooperations
and manoeuvrings.
1 Introduction
T. Uslu (*)
Graduate Program of Business Management, Istanbul Gedik University, Kartal, Istanbul,
Turkey
e-mail: tuna.uslu@gedik.edu.tr
transformation period (Nguyen and Kleiner 2003). From the psychological
perspective, the integration strategies and plans of organizations should be
reviewed in relation to the moods of managers, supervisors and personnel
(Mirvis and Marks 1992).
Company mergers are a desired event for company stakeholders, while the
situation is a little different for employees. Company acquisitions and mergers
are like infections: it is normal that the fever of the organization rises and the
body of the organization mounts a resistance (deGeus 1997). The mistakes that
cause failure of mergers are usually made in the post-merger integration process
(Simpson 2000). In particular, the failure to assign the required importance to the
human factor during the integration process is among the significant causes of
failure (Hutchinson 2002). It is emphasized that the emotions and personal
contributions of the members of the organization should be taken into
consideration for a new formation to succeed (Syrjala and Tuomo 2007). The
management reorganizes the human resources of the merging businesses in this
period to make sure that personnel not fitting the structure of the new business
are discharged or leave voluntarily (O'Rourke 1989). If we study mergers taking
the human factor into consideration, every year thousands of people lose or
change their jobs upon mergers, which obviously may cause psychological
traumas. Individuals losing their jobs or those who cannot adapt to their new
work environment are negatively affected by merger activities (Kusstatscher and
Cooper 2005).
During crisis periods, businesses may resort to a series of options that have a
direct or indirect negative effect on the rights, health and security of their
employees in order to ensure adaptation. The leading elements of a crisis
adaptation strategy are reducing costs, decreasing scale, closing down some
businesses and units, and narrowing employment. Shrinking is the first option
chosen by businesses to reduce their expenses, and it can be defined as a passage
from the current organization structure to the required organization structure. In
this sense, businesses reduce their production in the event of crisis, avoid
creating new employment, dismiss employees, spread temporary employment, and
transfer certain parts of production to suppliers and subcontractors (ILO 2009).
Economic crises may increase inequality between income groups as well as death
rates among both adults and children. Suicide rates among low-income young men
may increase, and geographical and ethnic inequality with respect to life
expectancy and death rates has been found to increase during crisis periods
(Mills 2010). Suicide-based deaths increase in periods of reduced economic
growth and shrinkage (Bezruchka 2009). Since economic crises and shrinkage have
such large and serious reflections on human psychology, their effects on
individuals and employees should be managed by public and organization
managers. It can be said that a crisis period starts in a business when disputes
among employees increase, the effectiveness of business operations decreases,
the business image is hurt, and it becomes gradually impossible for the business
to achieve its goals (Fink 1986).
The word crisis is a concept that is used in daily life and in almost every part of life.
Crisis is a very delicate issue that takes place in all organizations like non-profit
organizations, state organizations, service organizations, small partnerships, strate-
gic cooperations and international organizations (King 2002). An important feature
of crisis is that it is a circumstance including events that may cause significant
organization losses and involve time pressure for making decisions (Mitrof 1992).
Organizational crises affect each unit and individual of the organization in waves
and leave significant destruction behind. Crises are the circumstances that threaten
the priority goals of the business, involve limited time to prevent, shock decision
makers when take place and therefore cause high stress. Another essential feature
that distinguishes crises from ordinary circumstances is that it involves the require-
ment of immediate intervention to the emergency. One needs to act quickly in crisis
periods. In this sense, a crisis period can be defined as the changes that require
urgent response and rapid adaptation (Puchan 2001). As the response to crisis to be
given by the organization is determined and guided by individuals, one should
study first the managerial, then the individual and then the organization reactions
(Milburn et al. 1983). In addition, the effects of crisis on employees are very
important for the organization and organizational activities (Podolok 2002).
In the crisis period, there may be shocking stress reactions, symptoms of
violence, distress from problems, depression, exposure to assault and sadness.
The concerned stress may cause long term mental and physical diseases as well
as family problems. In such periods, the management of the organization should
have its personnel feel that it cares about them (Persons 1995). In crisis periods,
people experience "hidden anger", "mutual accusations" and "communication
disorder". These feelings spread rapidly in workplaces, and employees start to
think that they are the ones wronged the most, instead of sharing and thus easing
their feelings. The variables that affect an employee's thoughts about
his job include salary, promotion chances, social benefits, managers, colleagues,
working conditions, communication, security, efficiency and quality of job. Each of
these variables has various effects on job satisfaction (Berry 1997). In a crisis,
the only factor that motivates employees to do their job is the fear of being
unemployed; employees may show a tendency to work extra hours because of the
fear of losing their jobs (Uslu 2012).
Corporate strategies and crisis intervention plans affect employees in different
ways and may play a role in employees developing positive or negative attitudes.
For this reason, managers seek new structures and transformations that will bring
competitive advantage to their organization and enable their institutions to
survive. At the same time, this change also affects the relations between the
employee and the organization. For employees, not having information about the
future and being nervous about the uncertainty may lead to a tendency to resist
the intervention process (Uslu 2012).
Crisis management is not a discipline that can be learnt in the middle of a storm;
it should be learnt when there is no cloud on the horizon (Hesselbein 2002). The
manager should be able to estimate long-term circumstances well in order to avoid
possible future crises. However, many organizations evaluate short-term
conditions quickly while ignoring the long-term conditions (Schleh 1974: 19).
According to the literature, this approach constitutes the basis for
unpreparedness and failure in crisis.
According to one view, failure in a crisis actually prepares us for both present
and future crises. In particular, a business which has not had a crisis before
will be caught unprepared if it gets wrong signals from the market (Silver 1992:
13). Studies have found no significant relationship between crisis preparedness
and either prior crisis experience or technological risk in the industry, while
organizations with high performance in the market were found to be more prepared
for crises (Carmeli and Schaubroeck 2008). Some companies do not panic in a
crisis environment but develop various products and production processes by
avoiding excessive reactions and creating different tactics and crisis plans for
alternative possibilities. They attempt not to reflect increases in production
costs in product prices; thus the expected inflation rate is taken into
consideration in determining prices. In sales, they attempt to increase sales
volume by presenting attractive price offers to customers (Barton 1994; Mitroff
1988). In a sense, the businesses that turn crises into opportunities are the
organizations that use proactive approaches in their operations and that
constantly learn, improve and develop themselves even in normal times, beyond
merely being prepared.
Organizations use certain methods to achieve their strategic, tactical and oper-
ational goals. These methods they use are also perceived and explained by the
society somehow. There is a relation between the rules applied by the
organization, its way of behaviour, the perception of the organization by
society, and the performance of the organization. If its behaviours and
applications are accepted, the organization's image will improve, and this will
bring financial support to the organization's sales and investment opportunities.
The benefits obtained by the employees in this case will result in more work,
protection of the institution and integration with the institution (Bromley
1993). In particular, the institution's human resources policies are the factors
that shape the image employees hold of their own organization, its cultural
structure and vision, as well as the image formed outside the institution
(Dowling 1993). In order to create the image of a strong organization, the needs
of the employees and their expectations from their organization should be met.
Employees generally need a vision and to be proud of their jobs. Employees also
expect a shared organizational culture, a communication climate operating in all
directions, and career opportunities (Schutz and Cook 1986). Employees will be
able to experience an internal transformation in compliance with the process.
Structural premises interact with personality traits and empower employees,
while also shaping employees' attitudes toward the situation.
As a result of this transformation, they will be able to acquire a role where
they can express and represent themselves in this uncertain atmosphere or newly
established balance. These effects lead to the social identity formation of
individuals and are reflected in observable and organizational outputs through an
external representation process (Uslu 2014). Employees in this process first
experience an internal transformation and are repositioned, and then externally
describe themselves through this new identity and move towards outcomes (Fig. 1).
Employees want to be sure that they are safe, particularly in periods of crisis.
They need to trust their leaders and to experience the internal peace that comes
from loyalty to leaders. Only in this way can employees take an effective role in
helping the organization overcome short-term circumstances and resolve the crisis
in a short time (Mitroff 2001: 19). The most important priority in preparing a
crisis plan is to ensure maximum safety for the employees and to present them
with a psychologically peaceful environment (Perra and Morrison 1997).
Beyond that, crisis plans should identify the root causes of a crisis and the one or more factors that may contribute to it. In addition, they should be able to prevent an emergency from escalating and address possible serious outcomes (Harris 1996). Good management of a crisis period means identifying the factors behind the crisis in a measured manner, creating crisis teams, taking corrective measures to avoid long-term problems, and preparing a flexible emergency plan against a possible future crisis (Allen 1986).
In crisis conditions, creating an independent crisis team helps achieve a more effective solution. Assigning tasks, in addition to normal operations, to new groups formed to manage the crisis period helps reduce employee tension and encourages the drive for success. These temporary and even independent working groups are a form of temporary coordination among managers from different units, and they try to resolve a distorted structure or a specific problem (Knowles and Saxberg 1988).
376 T. Uslu
Considering that there may be conflicts of authority at times of recession, and that the managers controlling employees' behaviour may be changed or reduced in number, trust and delegation of authority play a critical role at such times in enabling the company to perform its essential functions. On the other hand, the feeling of trust between management and employees decreases at times of recession. The main cause is that employees begin to doubt the openness of top management, to think that management is concerned only with its own needs and is not doing its job well, and to believe that the company is no longer reliable (Mishra 1998). Conflicts between employees, their thoughts on decisions made in the organization, concerns about work-related problems, and hesitation to speak out about neglect and improprieties can do serious damage to institutions (Morrison and Milliken 2000). How employees perceive institutional management and communication methods is a determining factor in job satisfaction and performance (Zhui et al. 2004).
When reducing management expenses, one should be careful not to damage the basic functions of the organization and not to create excessive opposition among employees (Chang and Campo-Flores 1980). How a recession is handled is more critical to the success of the operation than the recession itself. Studies have shown that downsizing actions aimed merely at reducing the number of employees usually end in failure. By contrast, more success is achieved when a company reduces its expenses through more comprehensive actions that change its strategies, processes, controls, products and services, guided by the criteria of increasing quality and minimizing the impact on employees (Mishra 1998).
A widespread perception among employees that equality and justice are not observed in workplace relations and practices during processes such as crises, downsizing or strategic cooperations creates an environment that negatively affects employee health (Wilkinson 2001). Similarly, many studies on mergers and acquisitions deal only with the financial function and strategy selection of mergers and ignore the ethical problems in the process (Lin and Wei 2006), which has a negative effect on employees. There are two ethical problems in merger and acquisition operations (Werhane 1988). One is the violation of employees' rights, as employees lose their jobs after mergers. The other is the violation of shareholders' rights: although shareholders are the people most affected by the operation, they receive little information about their liabilities, obligations and the benefits they will obtain from the merger.
An important element in preventing an organizational crisis is ensuring the flow of correct and sufficient information. Depending on the capability of the system, quality information in the decision process requires an effective information flow that avoids overloading the system (Smart and Vertinsky 1977: 640). To obtain information about a crisis, everyone in the organization needs to be heard. When diagnosing the crisis, it is necessary to draw on independent observers outside the business just as much as on the people inside it (Augustine 1995, 2000: 29). The most important issues during a crisis are communication and the exchange of information, since most of the conflicts and problems in the process are caused by misunderstandings. When the situation becomes problematic, words should be chosen more carefully and actions taken more meticulously (Coombs 2001; Goldsmith 2002). In addition, the crisis and all its causes should be explained to employees during the process. Information should be given on how management will handle the crisis, and the measures to be taken should be explained (Mishra 1998). If employees do not receive answers that relieve their personal concerns, they cannot show the performance expected of them during the crisis. Management should use the "mobile management" technique during the crisis period: the leader should speak to employees and colleagues face to face and one to one. Employees should also be able to feel that management cares about their expectations; their concerns should be noticed by management, and their views listened to and appreciated (Sherman 2001: 30–31). If management acts with a sense of responsibility and cares about internal communication, employees will be the organization's most willing and effective advocates, which will enable them to easily overcome the problems they face (Cohn 1991: 20). In addition, communication among these people after the crisis feeds the roots of innovation (Hurst 1995).
An organization's ability to expand its capacity to determine its own future is of great importance during a crisis, which is an unplanned development. Organizations that constantly improve themselves are more resistant to sudden changes. For this, however, learning in the organization must be continuous, or a process of cultural change must be started that radically changes the learning approach. The learning organization is centered first of all on a change of mentality: a learning organization is a place that allows people to form and discover their own realities. How it changes or reorganizes is based on the constant expansion and organization of a learning organization's capacity to determine its own future (Solomon 1994).
The lean organization is another concept closely related to, and emerging in practice alongside, concepts such as delayering, zero hierarchy, downsizing and reorganization. Lean organization refers to the redefinition and organization of functions, departments and processes so that they make a positive contribution to value creation (Womack and Jones 1996). The concept involves integrating the organizational structure to ensure a faster response to the quality and standards demanded by consumers. In lean organizations, the primary objective is to eliminate activities and positions that do not create added value and to bring the decision maker and the person doing the job as close together as possible. This is a structure that is free from detail, does not delay work and can react promptly. The most important differences between "lean" and "Fordist" production include the creation of a work organization that makes more use of the intellectual knowledge of the workforce, which is one of the principles of lean management (Lewchuk and Robertson 1997). Teamwork is also the most important aspect of lean production practices (Kirkman 1997: 735). The importance of lean organizations has increased because of shrinking time frames and the need to take quick decisions and react promptly to circumstances, particularly in periods of crisis. The lean organization model, which can respond promptly to change and avoids centralization and hierarchy, supports the structural functionality of organizations in preventing or monitoring crises.
5 Conclusion
The most widespread strategic cooperations are mergers, acquisitions and assignments, joint ventures and licence agreements. It is highly remarkable that these strategies take place particularly during and after crisis periods. As noted in the literature, the process of mergers and acquisitions may threaten employees' ability to maintain their organizational identities. Therefore, merger and acquisition operations need to be carried out with specific pre-planning and with support for, and improvement of, employees' positions in the workplace. Employees will then be able to experience an internal transformation in line with the process. Structural premises interact with personality traits and empower employees while also shaping their attitudes toward the situation (Uslu 2014: 278). Structured innovations in the process have been found to empower process management activities and to reduce employees' organizational cynicism and intention to leave (Uslu 2014: 309). The leader's open and ethical communication also plays an important role in employees' organizational commitment through process management (Uslu 2014: 310). As a result of the transformation, employees will be able to acquire a role in which they can express and represent themselves in this uncertain atmosphere or newly established balance. These effects lead to the formation of individuals' social identities and are reflected in observable organizational outputs through an external representation process (Uslu 2014: 270).
In general, the biggest negative effects on employees in organizations undergoing critical processes such as strategic cooperations are experienced in the post-merger integration process. Especially in this period, employees perceive a weakening of positive guidance within the framework of positive leadership and of open and ethical communication. Their work enthusiasm and goal-oriented hopes can therefore be broken. This causes a reduction in the use of organizational resources during the negotiation and contracting process, as well as a reduction of psychological ownership in the subsequent integration process. In this period, employees' organizational cynicism increases while their organizational commitment and job satisfaction weaken (Uslu 2014: 316).
Change and crisis management is not a discipline that can be learnt in the middle of a storm; it should be learnt when there is not a cloud on the horizon. Structural transformations are operations that need to be planned by top management at the organizational, divisional and individual levels before the process begins. Therefore, whether the operation is a merger, acquisition, assignment or discharge, it is observed that transformations attempted without determining the procedure, the innovation and process stages, the corporate infrastructure, the organizational communication instruments, the leadership approach and the human resources practices fail (Uslu 2014: 318).
The priority of management at the first stage of the transformation is to bring a new structure and understanding to the organization according to the changing conditions so that the organization can achieve its goals and objectives. In a sense, this is the process of re-organizing and addressing the management
References
Allen JE (1986) Beyond time management: organizing the organization, 1st edn. Addison-Wesley,
Boston, MA. ISBN 0-201-15793-4
Augustine NR (1995) Managing the crisis you tried to prevent. Harvard Bus Rev 1(4):147–158
Augustine NR (2000) Managing the crisis you tried to prevent. Harvard business review on crisis
management. Harvard Business School, Boston, pp 1–32
Barton L (1994) Crisis management: preparing for and managing disasters. Cornell Restaurant
Admin Q 35(2):63
Berry LM (1997) Psychology at work. McGraw Hill, San Francisco, CA
Bezruchka S (2009) The effect of economic recession on population health. Can Med Assoc J 181
(5):281–282
Bijlsma-Frankema K (2001) On managing cultural integration and cultural change processes in
mergers and acquisitions. J Eur Ind Train 25(4):192–207
Bromley BD (1993) Reputation image and impression management. Wiley, London
Bruner RF (2004) Applied mergers and acquisitions. Wiley, New York
Buono AF, Bowditch JL (2003) The human side of mergers and acquisitions: managing collisions
between people, cultures, and organizations. Beard Books, Washington, DC
Burnett JJ (1998) A strategic approach to managing crises. Public Relat Rev 24(4):475–488
Carey DC, Ogden D (2004) The human side of M & A: how CEOs leverage the most important
asset in deal making. Oxford University Press, Oxford. ISBN 978-0-19-514096-5
Carmeli A, Schaubroeck J (2008) Organizational crisis-preparedness: the importance of learning
from failures. Long Range Plann 41:177–196
Cartwright S, Cooper CL (1993) The psychological impact of mergers and acquisition on the
individual: a study of building society managers. Hum Relat 46:327–347
Cartwright S, Schoenberg R (2006) Thirty years of mergers and acquisitions research: recent
advances and future opportunities. Br J Manage 17(1):1–5
Chakrabarti A, Mitchell W (2005) A corporate level perspective on acquisitions and integration.
In: Cooper CL, Finkelstein S (eds) Advances in mergers and acquisitions, 4th edn. Elsevier,
Oxford
Chang YN, Campo-Flores F (1980) Business policy and strategy. Goodyear, Santa Monica, CA
Cohn RJ (1991) Pre-crisis management. Exec Excell 8(10):20
Coombs TW (2001) Teaching the crisis management/communication course. Public Relat Rev 27
(1):89–101
Covin TJ, Sightler KW, Kolenko TA, Tudor RK (1996) An investigation of post-acquisition
satisfaction with the merger. J Appl Behav Sci 32:125–136
De Cock C, Rickards T (1996) Thinking about organizational change: towards two kinds of
process intervention. Int J Org Anal 4(3):233–251
de Silva SR (1997) The changing focus of industrial relations and human resources management.
In: Paper presented at Industrial Labour Organization in Asia-Pacific in the Twenty-First
Century, Turin, Italy, 5–13 May 1997. ILO, ACT/EMP Publications
deGeus A (1997) The living company. Longview Publishing, Chicago, IL
Deloitte (2010). Deloitte M&A Reports, Deloitte Turkey
Dessler G (1986) Organization theory integrating structure and behavior, 2nd edn. Prentice-Hall,
Englewood Cliffs, NJ, p 449
Dowling GR (1993) Developing your company image into a corporate asset. Long Range Plann 26
(2):101–109
Elmuti D, Kathawala Y (2001) An overview of strategic alliances. Manage Decis 39(3):205–217
Fink S (1986) Crisis management: planning for the inevitable. Amacom Press, New York
Gertsen MC, Soderberg AM, Torp JE (1998) Different approaches to understanding of culture in
mergers and acquisitions, cultural dimensions of international mergers and acquisitions. de
Gruyter, Berlin, pp 17–38
Goldsmith B (2002) Leadership in crisis: best practice from people who’ve been there. Am Water
Works Assoc J 94(1):32–34
Griffen MA, Neal A, Parker SK (2007) A new model of work role performance: positive behavior
in uncertain and interdependent contexts. Acad Manage J 50:327–347
Hammer M (1990) Reengineering work: don't automate, obliterate. Harvard Bus Rev Jul–
Aug:104–112
Hammer M, Champy J (1993) Reengineering the corporation. Nicholas Brealy, London
Hansen G, Wernerfelt B (1989) Determinants of firm performance: the relative importance of
economic and organizational factors. Strateg Manage J 17:399–411
Harris J (1996) Sharpen your team’s skills in project management. McGraw-Hill Education,
London
Hesselbein F (2002) Crisis management: a leadership imperative. Acad Manage J 34(2):63
Hurst DK (1995) Crisis & renewal: meeting the challenge of organizational change. In: Tushman
ML, de Ven AHV (eds) From the management of innovation and change series. Harvard
Business School Press, Boston, MA
Hutchinson S (2002) Managing the people side of mergers, why communication is critical for the
success of mergers and acquisitions. Melcrum 6(5):26
ILO (2009) Health and life at work: a basic human right, 28 April World Day for safety and health
at work. International Labor Organization, Geneva, pp 7–8
Kavanagh M, Ashkanasy N (2006) The impact of leadership and change management strategy on
organizational culture and individual acceptance of change during a merger. Br J Manage
17:83–105
King G (2002) Crisis management and team effectiveness: a closer examination. J Bus Ethics 41
(3):235–250
Kirkman B (1997) The impact of cultural values on employee resistance to teams: toward a model
of globalized self-managing work team effectiveness. Acad Manage Rev 22(3):730–757
Knowles HP, Saxberg BD (1988) Organization leadership of planned and unplanned change.
Future 20(3):258–259
Krug JA, Aguilera RV (2004) Top management team turnover in mergers & acquisitions. Adv
Merge Acquis 4:121–149
Kusstatscher V, Cooper CL (2005) Managing emotions in mergers and acquisitions. Edward
Elgar, Cheltenham, p 31
Latack JC (1986) Coping with job stress: measures and future decisions for scale development. J
Appl Psychol 71(3):377–385
Lewchuk W, Robertson D (1997) Production without empowerment: work reorganization from the
perspective of motor vehicle workers. Capital Class 63:37–64
Lin YY, Wei YC (2006) The role of business ethics in merger and acquisition success: an
empirical study. J Bus Ethics 69:95–109
Lucio M, Stuart M (2004) Swimming against the tide: social partnership, mutual gains and the
revival of tired HRM. Int J Hum Resour Manage 15(2):410–422
Marks ML (1982) Merging human resources: a review of current research. Mergers Acquis
17:50–55
Marks ML, Mirvis PH (1992) Track the impact of mergers and acquisitions. Pers J 71:70–79
McHugh M (1997) The stress factor: another item for the change management agenda? J Org
Change Manage 10(4):345–362
Milburn TW, Schuler RS, Watman KH (1983) Organizational crisis. Part I: Definition and
conceptualization. Hum Relat 36(12):1141–1160; Part II: Strategies and responses. Hum
Relat 36(12):1161–1180
Mills C (2010) Health, employment and recession the impact of the global crisis on health
inequities in New Zealand. Policy Q 6(4)
Mirvis PH, Marks ML (1992) Managing the merger: making it work. Prentice Hall, Upper Saddle
River, NJ
Mishra EK (1998) Preserving employee morale during downsizing. Sloan Manage Rev 18
Mitroff II (1988) Crisis management: cutting through the confusion. Sloan Manage Rev 1
(Winter):1820
Mitroff II (1992) Effective crisis management. Acad Manage Exec 1(4):283–292
Mitroff II (2001) Crisis leadership. Exec Excell 18(8):19–20
Morrison EW, Milliken FJ (2000) Organizational silence: a barrier to change and development in a
pluralistic world. Acad Manage Rev 25:706–725
Nguyen H, Kleiner BH (2003) The effective management of mergers. Leadersh Org Dev J
24:447–454
O’Rourke JT (1989) Post-merger integration: the Ernst & Young management guide to mergers
and acquisitions. Wiley, New York, pp 220–221
Olie R (1994) Shades of culture and institutions in international mergers. Org Stud 15(3):381–405
O’Shaughnessy KC, Flanagan DJ (1998) Determinants of layoff announcements following
M&As: an empirical investigation. Strateg Manage J 19:989–999
Pearson C, Clair JA (2003) Reframing crisis management. Acad Manage Rev 23(1):61–62
Pearson C, Mitroff II (1993) From crisis prone to crisis prepared: a framework for crisis manage-
ment. Acad Manage Exec 7(1):48–59
Perra RD, Morrison S (1997) Preparing for a crisis. Educ Leadersh 55(2):43
Persons W (1995) Crisis management. Career Dev Int 1(5):27
Podolok A (2002) Crisis management teams. Risk Manage 49(9):56
Puchan H (2001) The Mercedes-Benz A-class crisis. Corp Commun 6(1)
Quinn RE, Spreitzer GM, Hart SL (1996) Becoming a master manager: a competency framework.
Wiley, New York
Romanelli E, Tushman ME (1994) Organizational transformation as punctuated equilibrium: an
empirical test. Acad Manage J 37(5):1145
Rousseau DM (1988) The construction of climate in organization research. Int Rev Ind Org
Psychol 3:139–159
Ruppel CP, Harrington SJ (2000) The relationship of communication, ethical work climate, and
trust to commitment and innovation. J Bus Ethics 24(4):313–328
Schleh EC (1974) Management tactician. McGraw-Hill Education, New York
Schneider B, Reichers AE (1983) On the etiology of climates. Pers Psychol 36:19–39
Schoenberg R (2000) The influence of cultural compatibility within cross-border acquisitions: a
review. Adv Merge Acquis 1:43–59
Schraeder M (2001) Identifying employee resistance during the threat of a merger: an analysis of
employee perceptions and commitment to an organization in a pre-merger context.
Mid-Atlantic J Bus 37:191–203
Schutz P, Cook J (1986) Porsche on nichemanship. Harvard Bus Rev Mar–Apr:98–106
Sheaffer Z, Negrin RM (2003) Executives’ orientation as indicators of crisis management policies
and practices. J Manage Stud 40(2):577–578
Sherman R (2001) How to communicate during times of crisis (Professionals at work). Bus Credit
103(10):30–34
Silver AD (1992) The turnaround survival guide: strategies for the company in crisis. Kaplan
Publishing, London
Silver AD (2014) Managing corporate communications in the age of restructuring, crisis, and
litigation: revisiting groupthink in the boardroom. J Ross, Plantation, FL
Simpson C (2000) Integration framework supporting successful mergers. Merge Acquis Can 12
(10):3
Smart C, Vertinsky I (1977) Designs for crisis decision units. Admin Sci Q 22:640–657
Solomon CM (1994) HR facilitates the learning organization concept. Pers J 59
Syrjala J, Tuomo T (2007) Ethical aspects in Nordic business mergers: the case of electro-
business. J Bus Ethics 80:531–545
Uslu T (2012) Investigation of the effects of the structural changes in organizations on employees.
In: 20th National management and organization congress, May 24–26. Proceedings Book.
Dokuz Eylul University, pp 45–49
Uslu T (2014) Perception of organizational commitment, job satisfaction and turnover intention in
M&A process: a multivariate positive psychology model. Unpublished PhD Thesis, Marmara
University, Department of Business Administration
Uslu T (2015) Study of the effect of re-organization process on the employee performance through
qualitative and quantitative methods. In: International congress on economy administration
and market surveys, Istanbul, p 107, 4–5 Dec 2015
Wanberg CR, Banas JT (2000) Predictors and outcomes of openness to changes in a reorganizing
workplace. J Appl Psychol 85(1):132–142
Weber Y (1996) Corporate cultural fit and performance in mergers and acquisitions. Hum Relat
49:1181–1193
Weisaeth L, Knudsen O, Tomessen A (2002) Technological disasters, crisis management and
leadership stress. J Hazard Mater 93(1):36–37
Werhane PH (1988) Two ethical issues in mergers and acquisitions. J Bus Ethics 7:41–45
Wilkinson C (2001) Fundamentals of health at work: the social dimensions. Taylor & Francis,
London, pp 9–10
Womack JP, Jones DT (1996) Lean thinking. Simon & Schuster, New York
Zhui Y, May SK, Rosenfeld LB (2004) Information adequacy and job satisfaction during merger
and acquisition. Manage Commun Q 18(2):241–270
Tuna Uslu is an Assistant Professor at Istanbul Gedik University, Graduate Program of Business
Management, Istanbul, Turkey. He is also head of the Sport Management Department. Dr. Uslu
has a BS in Economics and Business Administration from Istanbul Bilgi University, an MSc in
Total Quality Management from Dokuz Eylul University and a PhD in Organizational Behavior
from Marmara University. His research interests lie in fields including micro-economics, strategic management, sport management, cognitive psychology, and industrial and organizational psychology. He has taught Business Management, Entrepreneurship and Facility Planning, Strategies and Methods of International Trade, Quality Management Systems, International Transportation and Logistics, Social Psychology, and Management and Organization courses, among others, at both graduate and undergraduate levels. He has several articles, awards and book chapters, and has participated in more than a hundred national and international conferences. He is on the editorial boards of the International Journal of Business and Management, Management and Organizational Studies, the Journal of Management and Sustainability, the International Journal of Economic Behavior and Organization and the International Journal of Psychological Studies, and has been an ad hoc reviewer for journals such as the Information Resources Management Journal and the Review of Innovation and Competitiveness.