
1 Introduction

Information privacy is a multi-disciplinary and crucial topic for understanding the digital world [4, 65]. Information privacy mainly relates to personal data stored in information systems, such as medical records, financial data, photos, and videos. In this research, we focus on online privacy, where personal data are shared over the Internet.

Current research on information privacy highlights issues such as the privacy concerns of online users [8, 17, 74] and the so-called “privacy paradox”, the inconsistency between users’ privacy-related behavior and their stated privacy concerns [40, 44, 81]. Another main strand of research concerns Privacy-Enhancing Technologies (PETs) [58].

In the information age, privacy has become a luxury to maintain, as data privacy can be violated on the Internet through technical means such as cookies or the tracking of online activities [11, 57]. However, the rapid growth of the Internet and what it has brought to people’s lives (especially during the past ten years) is truly astonishing. The Internet makes people’s lives incredibly convenient, and websites will probably remain an important communication channel, along with direct messaging applications.

Privacy, however, is not just an Information Technology (IT) problem, although in many cases it is. Many psychological, social, and cultural factors play a significant role in the field of privacy. Human behavior is a considerable variable, as individuals interact with others in online environments, exchanging private information and making decisions about their privacy [15].

The variety of information that individuals share online can potentially characterize them [13, 54]. The mechanisms that individuals use when making online sharing decisions are the main focus of this research.

The individual decision process with respect to privacy is affected by multiple factors. Incomplete information, bounded rationality, and systematic psychological deviations are considerable variables that influence individuals’ privacy behavior [2, 10]. First, incomplete information refers to privacy decision-making where third parties share personal information about an individual without her being part of the transaction. How personal information will be used might be known only to a subset of the parties making decisions (information asymmetry); thus, risk can be hard to calculate, as it may depend on unknown random variables. Benefits and costs associated with privacy intrusions and protection are complex, multifaceted, and context-specific. They are frequently bundled with other products and services (e.g., a search engine query can prompt the desired result but can also give observers information about the searcher’s interests), and they are often realized only after privacy violations have taken place. They can be monetary but also immaterial and, thus, difficult to quantify.

Second, individuals would be unable to act in an optimal way even if they had access to complete information. Especially when individuals have to manage huge volumes of data and make decisions about the protection or disclosure of personal information, bounded rationality limits their ability to process and memorize all their actions. They rely instead on simplified, not necessarily rational, models, strategies, and heuristics [6].

Third, individuals might deviate from the rational strategy even if they had access to complete information and could successfully calculate optimization strategies for their privacy-sensitive decisions. A vast body of economics and psychology literature has revealed several forms of systematic psychological deviation from rationality that affect individual decision-making [36, 76]. For example, in addition to their cognitive and computational bounds, individuals are influenced by motivational limitations and misrepresentations of personal utility. Research in psychology also documents how individuals mispredict their own future preferences or draw inaccurate conclusions from past choices [5]. In addition, individuals often suffer from self-control problems, in particular the tendency to trade off costs and benefits in ways that damage their future utility in favor of immediate gratification. Individuals’ behavior can also be guided by social preferences or norms, such as fairness or altruism. Many of these deviations apply naturally to privacy-sensitive scenarios. Any of these factors might influence decision-making behavior inside and outside the privacy domain, although not all of them need always be present. Empirical evidence of their influence on privacy decision-making would not necessarily imply that individuals act recklessly or make choices against their own best interest. It would, however, imply biases and limitations in the individual decision process that we should consider when designing public privacy policy and PETs.

2 Privacy Trade-Offs in the Digital Age

What are the privacy implications of behavioral decision-making in online transactions? To answer this question, we must first consider what privacy stands for. Scholars have debated for decades exactly what that right entails [61]. Undoubtedly, privacy is a fundamental human right [79], but also a “chameleon” that changes meaning depending on context [37]. A search of the literature for a definition of privacy reveals clear disarray: nobody seems to have a very clear idea of what the right to privacy is [53]. As Solove [72] points out, privacy means different things to different people.

In 1890, Warren and Brandeis [79] described privacy as the protection of an individual’s space and the right to be let alone. Other authors have defined privacy as control over personal information [80], or as an aspect of dignity, integrity, and human freedom [68]. Nonetheless, all approaches have something in common: a reference to the boundary between private and public.

Privacy in the modern world has two dimensions: first, the identity of a person, and second, the way personal information is used. During their daily online transactions as consumers of products and services, individuals have many issues to consider and decisions to make related to privacy. Consumers seek maximum benefit and minimum cost for themselves. Firms, on the other hand, can benefit from the ability to learn so much about their customers. Through this prism, scientists working on behavioral decision-making focus their research on these trade-offs and on the protection (or sharing) of information [4].

Privacy transactions nowadays occur in three different types of markets [3]. First, there are transactions for non-privacy goods in which consumers often reveal personal information, which may be collected, analyzed, and processed in some way. In this case, the potential secondary use of information should be considered a possibility. The second type of privacy-related transaction occurs where firms provide consumers with free products or services (e.g., search engines, online social networks, free cloud services). In these transactions, consumers provide personal information directly, although the exchange of services for personal data is not always visible. The third type of privacy-related transaction occurs in the market for privacy tools. For example, consumers may acquire a PET to protect their transactions or hide their browsing behavior [7].

Analysis of consumers’ personal data can improve firms’ marketing capabilities and increase revenues through targeted offers. Consequently, firms employ innovative strategies to allure consumers into providing more personal information and to shape their preferences [60]. By observing consumers’ behavior, firms can learn how to improve their services and turn to price discrimination strategies for clear profit [9]. On the other hand, consumers benefit from targeted advertising strategies, since advertisements are tailored to their interests. Firms and consumers can both benefit from such targeting: the former reduce the cost of communicating with consumers, and the latter easily gain useful information [75].

Finally, a more intangible but also important form of indirect consumer cost is related to the fact that the more an individual’s data is shared with other parties, the greater the bargaining advantage those parties gain in future transactions with that individual. While consumers receive offers for products, data holders accumulate information about them over time and across platforms and transactions. This data permits the creation of a detailed dossier of a consumer’s preferences and tastes, and the prediction of her future behavior [29].

Results from the literature on privacy transactions show that decision-making about the collection and diffusion of private information by firms and other third parties will almost always raise issues for private life. Consumers seem to act shortsightedly when trade-offs involve short-term benefits and long-term costs of privacy invasions. This suggests that consumers may not always behave rationally when facing privacy trade-offs. Current research discusses the privacy paradox phenomenon, where individuals face obstacles in making privacy-sensitive decisions because of incomplete information, bounded access to the available information, and the many deviations and behavioral biases suggested by behavioral decision research [2, 6].

3 Information Privacy in Cloud Computing: A Game Theory Approach

In the literature, the adoption and implementation of cloud computing technology have become an important milestone for modern organizations, inseparably connected with the protection or disclosure of personal information. A four-factor analysis of the human component, technology, organization, and environment is used to understand cloud computing adoption [43, 46, 51]. Cloud computing adoption by organizations remains a utopia if individual users are not familiar with cloud technology. Sharma et al. [70] point to studies from the field of information systems in which behavioral constructs are key factors influencing the individual user’s adoption of a new technology [12, 24, 41, 77]. Sharma et al. [70] examine whether and to what extent factors such as perceived usefulness, perceived ease of use, computer self-efficacy, and trust affect individual users’ adoption of cloud technologies, and they find that these factors are indeed important.

A major inhibiting factor is the loss of control over the storage of critical data and the outsourced nature of the service. The challenge for cloud providers is to identify and understand the concerns of privacy-sensitive stakeholders and adopt security practices that meet their requirements [19]. Misunderstanding the privacy concerns of end-users may lead to loss of business, as users may either stop using a perceivably insecure or privacy-abusing service, or falsify the information they provide, thereby minimizing the potential for profit via personalized advertising. An end-user may give fake data if she believes that the service provider is going to abuse the privacy agreement and sell personal data derived from a cloud-based subscription to a third party [16].

Di Vimercati et al. [78] underline that the significant benefit of elasticity in clouds has attracted companies and individual users to cloud technologies. At the same time, this benefit has proved harmful to users’ privacy, as security threats and a potential loss of control by data owners exist; in such cases, the adoption of the cloud computing paradigm is diminished. The European Network and Information Security Agency (ENISA) [1] lists loss of control over data as a top risk for cloud computing. Likewise, in 2013 the Cloud Security Alliance (CSA) listed data breaches and data loss among the top nine threats in cloud computing [14, 32]. The new complexity of the cloud paradigm (e.g., distribution and virtualization), the class of data involved (e.g., sensitive data), and the fact that CSPs might not be fully trustworthy all increase the security and privacy obstacles to cloud adoption.

Game theory emerges here as an interesting tool to explore the aforementioned issues, as it can be used to interpret stakeholder interactions and interdependencies across the above scenarios. For example, Rajbhandari and Snekkenes [62] implemented a game-theoretic approach to analyze privacy risks, in place of traditional probabilistic risk analysis (PRA). Their scenario is based on an online bookstore to which the user must subscribe in order to access a service. Two players take part in this game: the user and the online bookstore. The user can provide either genuine or fake information, whereas the bookstore can either sell the user’s information to a third party or respect it. A mixed-strategy Nash equilibrium was chosen for solving the game, with the user’s negative payoffs describing quantitatively the level of privacy risk.
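To make the structure concrete, the following minimal sketch computes a mixed-strategy Nash equilibrium for a two-player game of this shape. The payoff values are illustrative assumptions, not those of Rajbhandari and Snekkenes; only the strategy sets (genuine/fake, respect/sell) follow the scenario above.

```python
# Sketch of the two-player privacy game described above, with hypothetical
# payoffs. Players: the user (genuine vs. fake data) and the bookstore
# (respect vs. sell the user's information).

import numpy as np

# Rows: user strategies [genuine, fake]; columns: bookstore [respect, sell].
# U[i, j] = user's payoff, B[i, j] = bookstore's payoff (illustrative values).
U = np.array([[ 2, -4],   # genuine data: good service, but costly if sold
              [ 0, -1]])  # fake data: degraded service, little to lose
B = np.array([[ 3,  5],   # respect: steady business; sell: short-term gain
              [ 1, -2]])  # fake data makes sold information worthless

# In a 2x2 mixed equilibrium each player randomizes so that the opponent is
# indifferent between her two pure strategies.
# Let p = P(user plays genuine), q = P(bookstore plays respect).

# Bookstore indifference: p*B[0,0] + (1-p)*B[1,0] = p*B[0,1] + (1-p)*B[1,1]
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
# User indifference: q*U[0,0] + (1-q)*U[0,1] = q*U[1,0] + (1-q)*U[1,1]
q = (U[1, 1] - U[0, 1]) / (U[0, 0] - U[0, 1] - U[1, 0] + U[1, 1])

user_payoff = q * U[0, 0] + (1 - q) * U[0, 1]  # expected equilibrium payoff
print(f"P(genuine) = {p:.2f}, P(respect) = {q:.2f}")
print(f"User's expected equilibrium payoff (privacy risk proxy): {user_payoff:.2f}")
```

With these assumed payoffs the user’s equilibrium payoff is negative, which is exactly the quantity the cited approach reads as a measure of privacy risk.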

Snekkenes [71] applies Conflicting Incentives Risk Analysis (CIRA) to a case where a bank and a customer are involved in a deal. Snekkenes attempts to identify who takes the role of the risk owner in data breach incidents and which utility factors weigh on the risk owner’s perception of utility. The CIRA approach identifies stakeholders, actions, and payoffs. Each action can be viewed as a strategy in a potentially complex game, where implementing the action amounts to participating in a game. CIRA shows how this method can be used to identify privacy risks and human behavior.

Also, according to Hausken [33], the behavioral dimension is a very important factor in estimating risk. Conflict behavior, which is recorded in individuals’ choices, can be integrated into a probabilistic risk analysis and analyzed through game theory. Resnick [66] examined the use of “cheap pseudonyms” as a way to model reputation in Internet interactions between stakeholders. This was a game of multiple players in which users adopted pseudonyms while interacting in the Internet world and, in each period, had the option either to continue playing with the current pseudonym or to find a new one. Suboptimal equilibria are found in this repeated prisoner’s-dilemma type of game, and methods of limiting identity changes are suggested.
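The intuition behind cheap pseudonyms can be shown with a back-of-the-envelope calculation. The sketch below is our simplification, not Resnick’s actual model; all payoffs and the discount factor are assumed values. It compares the discounted payoff of cooperating under a reputation mechanism with the payoff of defecting and repeatedly re-entering under fresh pseudonyms.

```python
# Why free identity changes undermine cooperation in a repeated prisoner's
# dilemma with reputation (illustrative parameters, not Resnick's model).

C_PAYOFF = 2.0   # mutual cooperation, per period
D_PAYOFF = 3.0   # one-period gain from defecting against a cooperator
P_PAYOFF = 0.0   # mutual defection (grim-trigger punishment), per period
DELTA = 0.9      # discount factor

def discounted(per_period):
    """Present value of a constant per-period payoff stream."""
    return per_period / (1.0 - DELTA)

def best_deviation(reset_cost):
    """Deviator's value: defect, then either keep the tarnished identity
    (punished forever) or pay reset_cost each period for a fresh pseudonym."""
    keep_identity = D_PAYOFF + DELTA * discounted(P_PAYOFF)
    new_pseudonym = discounted(D_PAYOFF - reset_cost)
    return max(keep_identity, new_pseudonym)

for reset_cost in (0.0, 0.5, 1.5):
    cooperate = discounted(C_PAYOFF)
    deviate = best_deviation(reset_cost)
    status = "sustainable" if cooperate >= deviate else "unravels"
    print(f"reset cost {reset_cost:3.1f}: cooperate={cooperate:5.1f}, "
          f"deviate={deviate:5.1f} -> cooperation {status}")
```

When the reset cost is zero, defecting under an endless stream of fresh pseudonyms dominates cooperation, so cooperation unravels; once identity changes become sufficiently costly, cooperation is sustainable again, which is the rationale for limiting identity changes.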

Cai et al. [20] introduce a game-theoretic approach to managing decision errors, as there is a gap between strategic decisions and actions. They study the effects of decision errors on the optimal equilibrium strategies of the firm and the user. Cavusoglu and Raghunathan [22] propose a game-theoretic model for determining whether a provider should invest in high- or low-cost ICT, and they compare game theory and decision theory approaches. They show that when firms choose their action before attackers choose theirs (a sequential game), firms gain the maximum payoff. Moreover, when firms use knowledge from previous hacker attacks to estimate future hacker effort, the distance between the results of the decision-theoretic and game-theoretic approaches diminishes.
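A sequential game of this kind can be solved by backward induction. The sketch below uses hypothetical payoffs (not Cavusoglu and Raghunathan’s) to show how a firm that commits first to high security can deter an attacker who best-responds to the observed investment.

```python
# Backward induction in a firm-vs-hacker sequential game (illustrative payoffs).
# The firm moves first (high or low security investment); the hacker observes
# the investment and best-responds.

# payoffs[(firm_action, hacker_action)] = (firm_payoff, hacker_payoff)
payoffs = {
    ("high", "attack"):    (-4.0, -2.0),  # costly defense, attack likely fails
    ("high", "no_attack"): (-3.0,  0.0),  # costly defense, no incident
    ("low",  "attack"):    (-8.0,  3.0),  # cheap defense, breach is expensive
    ("low",  "no_attack"): (-1.0,  0.0),  # cheap defense, no incident
}

def hacker_best_response(firm_action):
    return max(("attack", "no_attack"),
               key=lambda h: payoffs[(firm_action, h)][1])

def solve_sequential():
    # The firm anticipates the hacker's best response to each investment level.
    firm = max(("high", "low"),
               key=lambda f: payoffs[(f, hacker_best_response(f))][0])
    return firm, hacker_best_response(firm)

firm, hacker = solve_sequential()
print(f"Sequential play: firm invests {firm}, hacker chooses {hacker}, "
      f"firm payoff {payoffs[(firm, hacker)][0]}")
```

Under these assumed payoffs, committing to high security deters the attack (firm payoff -3), whereas low security invites it (firm payoff -8); this commitment value is why the first-moving firm gains the maximum payoff in the sequential setting.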

Gao and Zhong [31] address the problem of distorted incentives for stakeholders in an electronic environment, applying differential game theory to a case where two competing firms offer the same product to customers and each can influence the value of its information assets by changing pricing rates. To assure consumers that they do not risk losing sensitive information, and also to increase consumer demand, firms usually integrate their security investment strategies. The researchers reveal that higher consumer demand loss and more targeted attacks deter both firms from an aggressive defense policy against hackers; the firms would rather decrease the negative effect of hacker attacks by lowering their pricing rates.

In sum, research applying game theory to online privacy-related decision-making has shown that it can yield credible insights into privacy-related behavior.

4 Impact of Consumer Trust in Cloud Services

Sato [67] reports that 88% of consumers worldwide are worried about the loss of their data. Who has access to their data? Where are consumers’ data physically stored? Can cloud service providers (CSPs) find ways to gain consumers’ trust? Is a CSP’s pursuit of consumer trust a value-for-money strategy? These are typical questions that consumers and CSPs raise about trust in clouds and online environments.

Ramachandran and Chang [63] highlight key issues associated with data security in the cloud. One key factor for cloud adoption is building trust when storing and computing sensitive data in the cloud. Trust in e-services offered in virtual online environments is a major topic for consumers and cloud service providers alike, as well as for cloud researchers, and it is strongly tied to online security. McKnight et al. [49] indicate three significant trust components as prominent factors for new ICT adoption: ability, integrity, and good will. Ability corresponds to the CSP’s efficiency in resources and skills, which should not deter consumers from adopting cloud technologies. Integrity refers to the CSP’s obligation to comply with regulations, and good will means that the CSP gives priority to consumers’ needs.

Sharma et al. [70] suggest that trust in clouds has a positive and significant relationship with an individual’s decision to adopt cloud computing services. In clouds, users often want to share sensitive information, and CSPs should ensure their privacy [39]. Svantesson and Clarke [73] suggested that CSPs should apply policies that assure users their data are safe and thus attract them to use clouds.

Consumers trust CSPs only to the extent that the perceived risk is low and the convenience payoff is high. Pearson [59] argues that when customers have to decide about trusting CSPs with services that exchange personal data, they should consider the organization’s operational, security, privacy, and compliance requirements and choose what best suits them.

5 Asymmetric Information and Strategic Stakeholders Interaction in Clouds

Asymmetric information is a concept often encountered in commercial transactions between sellers and buyers, or end-users and service providers, where one party has more information than the other. Potentially, this can lead to a harmful situation, as one party can take advantage of the other party’s lack of knowledge. Information asymmetries are commonly met in principal-agent problems, where misinformation arises and the communication process is affected [23].

Principal-agent problems occur when an entity (the agent) makes decisions on behalf of another entity: the principal is “a person who authorizes an agent to act with a third trusted party” [18, 27]. A dilemma exists when the agreement between the participants is not respected and the agent is motivated to act for his own personal gain, contrary to the principal’s interest. Principals do not know enough about whether an agreement has been satisfied; therefore, their decisions are taken under risk and uncertainty and involve costs for both parties. This information problem can be solved if the third trusted party provides incentives for the agents to act appropriately, in accordance with the principals. In terms of game theory, the rules should be changed so that rational agents are confronted with what the principal desires [18].
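A minimal formalization of this incentive problem, in our own notation rather than that of the cited works, is an incentive-compatibility constraint:

\[
w(a^*) - c(a^*) \;\ge\; w(a) - c(a) + g(a) \qquad \forall a \neq a^*,
\]

where \(a^*\) is the action the principal desires, \(w(\cdot)\) is the incentive provided by the trusted third party, \(c(\cdot)\) is the agent’s effort cost, and \(g(a)\) is the agent’s private gain from deviating. The rules of the game are “changed” by choosing \(w\) so that the constraint holds, making the principal’s desired action the agent’s rational choice.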

McKinney and Yoos [48] argue that information is almost always underspecified for an unbounded variety of problems, and the involved agents (the so-called stakeholders) almost always act without full information about their decisions. While information risk has been studied extensively in recent decades, there is no risk premium for information asymmetry [34]. Easley and O’Hara [26] argue that information asymmetry creates what they call information risk; their model shows that additional private information from consumers yields higher expected returns for the involved agents.

For an agent, a risk premium is the minimum economic benefit by which the expected return from a decision made under risk must exceed the known return of a risk-free decision in which full information is provided to the involved stakeholders. A rational agent is risk averse: he attempts to reduce uncertainty when exposed to information asymmetry, and the utility of such a strategic move is expected to be high in many cases. For such risky outcomes, a decision-maker adopts a criterion as a rule of choice, whereby strategic moves with a higher expected value are simply the preferred ones [55].
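This notion can be stated compactly. Under the standard expected-utility formulation (our notation, not specific to the cited works), the risk premium \(\pi\) of a risky prospect \(X\) for an agent with concave utility \(u\) and initial wealth \(W\) is defined by

\[
\mathbb{E}\big[u(W + X)\big] \;=\; u\big(W + \mathbb{E}[X] - \pi\big),
\]

so a risk-averse agent (\(\pi > 0\)) prefers the risky decision over the risk-free one only if its expected return exceeds the risk-free return by at least \(\pi\).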

From a game-theoretic perspective, preferences over uncertain outcomes can be represented through expected utility over the relevant risky choices. Where this expected-utility hypothesis is satisfied, it can prove useful in explaining choices that seem to contradict the expected value criterion. Asymmetric information in the cloud introduces scenarios in which stakeholders (consumers and service providers) interact strategically. A game theory approach based on trust is thus a useful tool for explaining conflict and cooperation between intelligent, rational decision-makers.

Njilla et al. [52] introduce a game-theoretic model for trust in clouds, suggesting that risk and trust are two behavioral factors that influence decision-making in uncertain environments such as cloud markets, where consumers do not seem to have full control over their stored data. They adopt a game-theoretic approach to establishing a relationship between trust and the factors that affect risk assessment. The scenario involves three players: end-users, service providers, and attackers. The provider defends the system’s infrastructure against attackers, while end-users are tempted not to trust an online service in case of data privacy breaches. Njilla et al. [52] propose a game model that mitigates cyber-attack behavior. They analyze different solutions obtained from the Nash equilibrium (NE) and find that frequent attacks, combined with the provider’s ability to mitigate the loss, may cause the attacker to be detected and caught; it is thus possible that the attacker will refrain from attacking because of the high risk and penalties. But what are the gain and the loss when the provider invests in security and the attacker nevertheless attacks and succeeds, compromising users’ private data? What are the payoffs of each player in this case? These remain open questions.

Maghrabi and Pfluegel [47] use game theory from an end-user perspective to assess the risks of moving to public clouds. While previous works focus on helping the cloud provider assess risk, they develop a model of the benefits and costs associated with attacks on the end-user’s assets in order to help the user decide whether or not to adopt the cloud. The end-user is covered by a Service Level Agreement (SLA), which promises protection against external attacks.

Douss et al. [25] propose a game-based trust model for mobile ad hoc networks. Assuring reputation and establishing trust between collaborating parties is, indirectly, a way to provide a secure online environment. The authors suggest an evaluation model for trust values; they apply computational methods and develop a framework for trust establishment.

Li et al. [42] study price-bidding strategies when multiple users interact and compete for resource usage in cloud computing. The cloud services are available to end-users in a pay-as-you-go manner [38, 56]. A non-cooperative game model is developed with multiple cloud users, where each cloud user has incomplete and asymmetric information about the other users. The authors work with utility functions that incorporate “time efficiency” parameters to calculate each user’s net profit, in order to help users decide whether to use the cloud service. For a cloud provider, the income is the amount of money users pay for resource usage [50]. A rational user will maximize his net reward, \(R = U - P\), where \(U\) is the utility of choosing the cloud service and \(P\) is the payment, by choosing the appropriate bidding strategy. However, it is irrational for a cloud provider to provision enough resources for all potential requests at a specific time; therefore, cloud users compete for resource usage. These strategic interactions are analyzed from a game-theoretic perspective, and the existence of a Nash equilibrium is confirmed via a proposed near-equilibrium price-bidding algorithm. For future research, a good idea is to study cloud users’ choice among different cloud providers or to determine a properly mixed bidding strategy.
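The flavor of such a bidding game can be conveyed with a simple best-response sketch. The code below is our simplification, not Li et al.’s actual algorithm: capacity, valuations, the proportional allocation rule, and the logarithmic utility are all illustrative assumptions; only the net-reward structure \(R = U - P\) follows the description above.

```python
# Iterated best-response bidding for shared cloud capacity (illustrative).
# Each user's allocation is proportional to her bid, her payment equals her
# bid, and her net reward is R = U - P with a concave utility of the share.

import math

CAPACITY = 10.0                 # total divisible resource (assumption)
VALUES = [8.0, 5.0, 3.0]        # users' marginal valuations (assumptions)

def net_reward(value, my_bid, others_total):
    if my_bid <= 0:
        return 0.0
    share = my_bid / (my_bid + others_total)        # proportional allocation
    utility = value * math.log(1.0 + CAPACITY * share)
    return utility - my_bid                          # R = U - P

def best_response(value, others_total, grid=2000, max_bid=10.0):
    """Grid search for the bid maximizing a user's net reward."""
    bids = [max_bid * k / grid for k in range(1, grid + 1)]
    return max(bids, key=lambda b: net_reward(value, b, others_total))

# Iterated best responses; a fixed point approximates a Nash equilibrium.
bids = [1.0] * len(VALUES)
for _ in range(50):
    for i, v in enumerate(VALUES):
        bids[i] = best_response(v, sum(bids) - bids[i])
print("Approximate equilibrium bids:", [round(b, 2) for b in bids])
```

Once the bid profile stops changing, no user can improve her net reward unilaterally, which is precisely the (approximate) Nash equilibrium condition the cited algorithm targets.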

Fagnani et al. [28] consider a network of units (e.g., smartphones or tablets) whose users have decided to make an external backup of their data and are also able to offer space to store the data of other connected units. They propose a peer-to-peer storage game model and design an algorithm that makes units interact and store data backups from connected neighbors. The algorithm converges to a Nash equilibrium of the game, but several challenges arise for future research on stakeholders’ interactions in a more trusted environment.

Moreover, Jebalia et al. [35] analyze the resource allocation problem in cloud computing, where users compete to gain more space to run their applications and store their data. They develop a resource allocation model based on a cooperative game approach, in which cloud providers supply a large number of resources in order to maximize profit and combine the adoption of security mechanisms with payoff maximization.

Security and privacy are often positioned as opposing concepts. Much of the focus is on reducing the cost of establishing a trustworthy infrastructure in cloud computing, which gradually requires disclosing private information, leading to models of trading privacy for trust [52, 69]. Lilien et al. [45] likewise point out the tension between maintaining a high level of privacy and establishing trust for transactions in cloud environments. Users with a particular interest in concealing private information are asked by cloud providers for a set of corresponding credentials that establish trust in them. The trade-off problem is to reveal the minimum number of credentials that satisfies the trust requirements while assuring minimum loss of the user’s privacy.

Raya et al. [64] suggest a game-theoretic trust-privacy trade-off model that gives stakeholders incentives to build trust while keeping privacy loss to a minimum. Individual players do not trust cloud providers unless they receive an appropriate incentive.

Gal-Oz et al. [30] introduce a trade-off approach for studying the relationship between trust and privacy in online transactions. They suggest that pseudonyms are a necessary component for maintaining privacy, since pseudonyms prevent association with transaction identities while preserving a level of reputation; the more a pseudonym is used, the more reputation it accrues.

Given the above major issues, we note that any application relying on emerging cloud computing technology should consider the various possible threats. The problem is the lack of a clearly defined notion of such risk that would help cloud users make proper choices and cloud service providers avert threats efficiently.

6 Conclusions

A game-theoretic approach is adopted as a very general language for modeling choices by agents whose outcomes are affected by the actions of other agents. Game theory assumes that players choose the strategies that maximize the utility of game outcomes, given their beliefs about what others will do.

The most challenging question is often how beliefs are formed. Most approaches suggest that beliefs are derived from what other players are likely to do. Game theory focuses on preferences and the formation of beliefs. An equilibrium specifies not only a strategy for each player but also a belief for each player: each belief is a probability distribution over the other players’ types, given the type of the player holding that belief. Players arrive at reasonable beliefs by requiring them to be consistent with the other players’ equilibrium choices.
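In standard notation (a generic formalization, not tied to any specific model cited here), such an equilibrium couples strategies with beliefs: player \(i\) of type \(t_i\) holds a belief \(\mu_i(t_{-i} \mid t_i)\) over the other players’ types and plays

\[
s_i(t_i) \;\in\; \arg\max_{s_i} \sum_{t_{-i}} \mu_i(t_{-i} \mid t_i)\, u_i\big(s_i, s_{-i}(t_{-i});\, t_i, t_{-i}\big),
\]

with the beliefs \(\mu_i\) consistent with the underlying type distribution and the other players’ equilibrium strategies \(s_{-i}\).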

However, some limits arise. First, many games that occur in social life are so complex that, at a given time, players cannot form accurate beliefs about what other players will choose, and therefore cannot choose equilibrium strategies. So what strategies might be chosen by players with bounded rationality, and how does a repeated game help players improve their strategic choices? Second, in empirical work, only realized payoffs are easily measured (e.g., prices in auctions). A huge variety of experiments shows that game theory sometimes explains behavior adequately and is sometimes badly rejected by behavioral and process data [21]. This inference can be used to create a more general theory that matches the standard theory where it is accurate and explains the cases in which it is badly rejected. This emerging approach, called “behavioral game theory”, uses analytical game theory to explain observed violations by incorporating bounds on rationality.

Game theory is the standard theory for analyzing cases where individuals or firms interact: for example, the strategic interactions of privacy-sensitive end-users of cloud-based mobile apps, e-commerce transactions between sellers and consumers, and many other social dilemmas such as the provision of public goods. Behavioral game theory introduces psychological parameters that enrich the rational scenario and give a motivational basis for players’ behavior. Representation, social preferences over outcomes, initial conditions, and learning are the basic components of a precise analysis [21].

In this work, we focus on information privacy in cyberspace transactions. Cyberspace is shorthand for the web of consumer electronics, computers, and communication networks that interconnects the world. The potential surveillance of electronic activities presents a serious threat to information privacy. The collection and use of private information have caused serious privacy-invasion concerns among consumers, creating a personalization-privacy trade-off. The key approach to addressing privacy concerns is the protection of privacy through the implementation of fair information practices, a set of standards governing the collection and use of personal information. We take a game-theoretic approach to explore firms’ motivation for privacy protection and its impact on competition and social welfare in the context of product and price personalization. We find that privacy protection can work as a competition-mitigating mechanism by generating asymmetry in the consumer segments to which firms offer personalization, enhancing the firms’ profit-extraction abilities. In equilibrium, both symmetric and asymmetric choices of privacy protection by the firms can result, depending on the size of the personalization scope and the investment cost of protection. Further, as consumers become more concerned about their privacy, it is more likely that all firms adopt privacy protection strategies. From a welfare perspective, we show that autonomous choices of privacy protection by personalizing firms can improve social welfare at the expense of consumer welfare. We further find that regulation enforcing the implementation of fair information practices can be efficient from the social welfare perspective, mainly by limiting firms’ incentives to exploit the competition-mitigation effect.