
1 Introduction

In 1890, legal scholars Warren and Brandeis famously wrote that privacy consists in “the right to be let alone” [1, p. 193]. They asserted that a person’s right to be let alone is violated not only when an unwanted or uninvited individual accesses that person’s physical space, but also when that individual accesses facts and information about them without their prior knowledge and permission [1]. Core to this construct of information privacy is the idea that, at a minimum, individuals have a legally protected interest (if not a right as such) (Footnote 1) in controlling the time, place, manner, and audience of disclosures of their personal information. Individuals are particularly vulnerable, and their interest in controlling their personal information is greatest, when the information in question can be connected back to them to the exclusion of any other individual. That is, our interest in information privacy is perhaps highest when our personally identifiable information (PII) is involved. PII refers to information about an individual that can be connected back to that individual to the exclusion of any other [2]. Because online transactions necessarily involve a computer with specific internet protocol (IP) and media access control (MAC) addresses, these transactions can easily be tied back to the individual who was logged in at that IP address at that time. In other words, virtually any online transaction involving personal information in fact involves PII, rather than anonymized information. Thus, as the use of computers to collect, store, and disseminate information about us has become virtually ubiquitous, protecting our information privacy – controlling who learns what about us, how they learn it, and when they learn it – has increasingly become a challenge.

Organizations often assert that collecting and using customers’ PII is necessary to facilitate computer-based interaction between the individual and the organization: it allows the organization to tailor and personalize the goods and services offered to a specific customer more effectively and efficiently. Customer PII is also exchanged between and among organizations. Disclosing PII is thus part of the cost individuals incur when obtaining goods, services, or even information online. Increasingly, moreover, there is no viable alternative to completing the online transaction and disclosing one’s PII: engaging in this particular type of human-computer interaction requires disclosure of some amount of PII. The result is that individuals who are unwilling to disclose their PII may be denied access to the goods, services, and even information available to those who are willing. This, in turn, produces a significant imbalance in bargaining power between the individual consumer and the organizations with which they interact online. While this shift in bargaining power has become increasingly noticeable during the 2010s, privacy scholar Gandy [3] observed as long ago as the mid-1990s that:

[T]he power that the individual is able to exercise over the organization when she withholds personal information is almost always insignificant in comparison with the power brought to bear when the organization chooses to withhold goods or services unless the information is provided [3, p. 19].

Given omnipresent and increasing inter-organizational competition, organizations engaging with customers online should begin to imagine and plan for a universe in which there are providers of online goods and services who do not barter in their customers’ PII, and who allow consumers to retain meaningful control over it. In this regard, organizations engaging with customers online (i.e., through a computer interface) should revisit their information privacy policies and practices as if such a universe already existed; this, in turn, requires them to acknowledge their customers’ actual concerns more fully and to understand their information behaviors around information privacy when engaging with the virtual marketplace. A 2014 Pew Research Center study, which investigated the American public’s perceptions of personal privacy, provides some initial insights [4]: lack of control over the further dissemination of information is a core concern. 91% of the study participants indicated that they believe consumers have lost control over their personal information online.

Consumers are clearly concerned about online information privacy. However, the continued proliferation of goods and services available online suggests that few individuals actually modify their online information-sharing behaviors – much less simply decline to disclose information online in the first instance. Privacy scholars Norberg, Horne and Horne [5] referred to this phenomenon – wherein we disclose personal information online even though we know there are risks associated with doing so – as the privacy paradox.

To understand the privacy paradox, we need to first understand what drives our basic propensity to disclose PII online. Our research thus aims to explore online information privacy behaviors – that is, those behavioral factors that influence our decisions to disclose PII online. More specifically, we examine the privacy paradox through the lens of the privacy calculus framework. That is, we start by examining the privacy paradox as being explained by a risk/reward analysis. We raise the research question: which behavioral factors are salient in forming our perception of risk vis-à-vis disclosing our PII online?

In this paper, we first provide a brief discussion of privacy calculus and introduce several theoretical frameworks that may inform an individual’s online information privacy behavior. Next, we outline and position the specific behavioral factors included in the study, which comprise our theoretical framework. Then, we describe our research method and present our data analysis. Finally, we present our conclusions, acknowledge the limitations of our study, and outline the future direction of our research.

2 Privacy Calculus

There are a number of theoretical behavioral frameworks that have the potential to explain our apparent propensity to share our PII online despite realizing that doing so exposes us to the risk of having that information compromised. In particular, information privacy behavior has frequently been discussed in terms of a behavioral risk/reward analysis, referred to in the literature as privacy calculus [6].

Privacy calculus was initially proposed by Culnan and Bies [6] as an extension of a broader analysis by Laufer and Wolfe [7]. It suggests that, when an individual interacts with an organization or institution, their propensity to disclose personal or private information is essentially a function of whether they believe they are being given a fair exchange for that information – that is, whether the benefits they receive outweigh, or at least are not outweighed by, the potential risks undertaken in disclosing the information. In essence, this model suggests that, in order to entice or encourage individuals to disclose personal information, an organization or institution ought both to enhance the rewards and benefits offered as part of the transaction and to work to reduce perceived risk. To this latter end, Culnan and Bies [6] position trust as having a direct (negative) influence on perceived risk. In particular, they point out that trust is undermined when an organization discloses information to someone other than the person the customer initially expected or authorized to receive it – particularly when the customer considers some or all of that information to be sensitive.
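
Stated schematically – in our notation, not a formula appearing in [6] – the calculus predicts disclosure when the perceived benefit B of the exchange is not outweighed by the perceived risk R, with trust T acting to lower R:

    \[ \text{disclose} \iff B \ge R, \qquad \frac{\partial R}{\partial T} < 0 \]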

Dinev and Hart [8] likewise take the perspective that the propensity to disclose personal information online is a function of an individual’s cost/benefit analysis with respect to the online transaction. Accordingly, they proposed a privacy calculus model in which the result of the cost/benefit analysis is framed as a “cumulative antecedent to information disclosure” [8, p. 62]. Smith, Dinev and Xu [9] examined the role of privacy concern in influencing the propensity to disclose PII, particularly the extent to which an individual’s level of privacy concern might inform their risk analysis.

3 Study Framework

Privacy calculus analysis offers a straightforward way to think about our online information behaviors (specifically, our decision to either disclose or withhold our PII when interacting online). However, this risk/reward analysis does not address foundational questions concerning how we perceive the potential risk and potential reward of disclosing or withholding PII when interacting online. To attempt to answer these questions, we must look at other behavioral factors. We start with the foundational factors suggested in Bauer’s seminal work on perceived risk [10], analyzing the salience of each to an individual’s decision to disclose or withhold information when interacting online. We also consider the salience of an individual’s fundamental level of privacy concern to that disclose/withhold decision [8, 9, 11]. Finally, we consider the potential salience of other, more general behavioral factors. For example, an individual’s decision to disclose or withhold PII when interacting online may be informed by the subjective or social norms to which they have been exposed, their attitude toward disclosing PII or interacting online, or their sense of perceived behavioral control with respect to their ability to retain control over and protect their PII.

Figure 1 below depicts our initial framework, which incorporates each of the foregoing factors. In the following subsections, we discuss and define each of these factors.

Fig. 1. Initial study framework

3.1 Perceived Risk

Bauer defined perceived risk as consisting of two fundamental components: uncertainty of outcome and seriousness of outcome [10]. He further wrote that perceived risk includes multiple types of risk – social, psychological, time, financial, and performance. Featherman and Pavlou expanded on Bauer’s discussion, framing “uncertainty” in terms of perceived vulnerability and “seriousness” as “perceived severity” [12]. Their discussion of perceived risk also added privacy risk as an additional risk facet [12].

Following Featherman and Pavlou, in the context of our study we consider the extent to which, in deciding whether to disclose PII online, an individual considers both the likelihood of their PII being compromised (perceived vulnerability) and the specific consequences of having the information compromised (perceived severity). We suggest that the basic perceived risk framework holds up in this context, and hypothesize that both perceived vulnerability and perceived severity are salient factors in an individual’s decision whether or not to disclose PII online.
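
One schematic way to express this two-component view – again our notation, not a formula from [10] or [12] – treats perceived risk as the product of perceived vulnerability (the subjective likelihood of compromise) and perceived severity (the subjective cost if a compromise occurs):

    \[ R_{\text{perceived}} = P_{\text{vulnerability}} \times S_{\text{severity}} \]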

3.2 Privacy Concern

“Privacy concern” may be broadly thought of as referring to how much concern or anxiety an individual experiences with respect to the treatment of their PII. At a high level, privacy concern speaks to a variety of potential PII-handling issues – specific events or occurrences that we can point to as making us generally concerned about the privacy of our PII online.

At a more fundamental level, though, an individual’s level of privacy concern may also be examined from the perspective of its antecedent or component factors. As Smith, Dinev and Xu [9] assert, one such antecedent factor is privacy experiences, which refers to the individual’s own experiences, whether negative or positive, with the privacy of their personal information [9, 13]. For example, has the individual ever had their personal information compromised? Has their PII been compromised more than once? What were some of the consequences they faced? Smith et al. also suggest that privacy awareness is an antecedent to privacy concern. Privacy awareness refers generally to the extent to which an individual knows that (1) there are risks associated with disclosing PII online; and (2) there are ways to mitigate these risks [9].

It is tempting to believe that privacy awareness and privacy experience have a significant influence on an individual’s decision to disclose or withhold PII when interacting online. After all, shouldn’t we learn from bad experiences? Shouldn’t we avoid negative situations if we are aware of them in advance? However, from the perspective of which behavioral factors inform an individual’s perceived risk in disclosing PII online, we hypothesize neither of these factors will prove salient.

Perceived Trustworthiness.

Smith et al. [9] further posit that privacy concern both informs and is informed by other behavioral factors, including trust. Deutsch defined trust in terms of the personal characteristics of the person being trusted (the trustee), as they are perceived by the person who trusts (the trustor) [14]. The trustor’s assessment of the trustee’s ability to perform the particular task in question, as well as the trustee’s general reliability and overall predictability, are the key components in the formation of trust. Deutsch further framed trust in terms of the trustor allowing themselves to be vulnerable to the trustee in a given situation: the trustor effectively gives up control over the situation (including its outcome) to the trustee [15].

One of the defining characteristics of online transactions, however, is that the individual provides their PII to an anonymous entity over which they have little or no control or oversight. The trust analysis in this case is somewhat different: the inquiry is not so much whether the online organization (or individual) is capable or reliable; rather, the trustor is more likely to be asking how likely it is that their PII will be compromised or mishandled in some way, and how bad the consequences would be. The trustor relies not only on the ability, but also on the integrity and benevolence, of the online organization with which they are interacting to protect their PII and ensure it is not compromised or mishandled. These elements (ability, integrity, and benevolence) are hallmarks not of trust, but of the subjective construct of perceived trustworthiness [cf. 16,17,18,19,20]. Accordingly, we assert that perceived trustworthiness, rather than trust, is the more appropriate factor to consider in our framework.

We hypothesize that perceived trustworthiness is a significant, salient factor in an individual’s decision to disclose or withhold PII online. Moreover, as suggested above, the perceived trustworthiness analysis speaks directly to the perception of risk. The more we feel that an online organization has not only the ability and integrity to “do the right thing” by us, but also the benevolence to do so, the less likely we are to perceive that interacting with them is risky for us.

3.3 Perceived Behavioral Control: Cyber Self-efficacy

Ajzen [21] asserted that perceived behavioral control is best understood as a hierarchical construct consisting of two lower-order factors – perceived self-efficacy and perceived controllability – that collectively inform the higher-order factor, perceived behavioral control. To distinguish between the two: perceived self-efficacy refers to the level of ease or difficulty the individual ascribes to the particular behavior, while perceived controllability refers to the degree to which actual performance is up to (within the control of) the individual.

We note that perceived self-efficacy and perceived controllability are aggregative concepts. Virtually all tasks and behaviors consist of multiple steps. The individual performing the behavior may feel that most (if not all) of these steps are easy to do (perceived self-efficacy) or within their control; but if they feel that one of these steps is very difficult, they may well decide not to perform the behavior – even knowing that performing all of the steps would achieve the desired outcome. Similarly, an individual who feels that most of the steps are within their control may nonetheless not perform (or even attempt) the behavior because they believe they will ultimately lack resources for which they must rely on others (materials, time, money, personnel). Framed in these terms, an individual may have high perceived self-efficacy but, due to low perceived controllability (i.e., as to the outcome), not attempt the behavior. Conversely, an individual may know that performing certain behaviors guarantees a desired outcome (if you save enough money, you can purchase a new TV); however, if they have low perceived self-efficacy (they feel unable to save, or have too many financial commitments), they will likely not attempt the behavior even though it would, in fact, achieve the desired outcome. In this regard, Ajzen [21] suggested that perceived behavioral control should be understood to refer simply to the individual’s sense that attempting the performance of a particular behavior is within their purview, rather than to the outcome per se.

In the context of disclosing PII online, what we term “cyber self-efficacy” considers, on the one hand, the extent to which an individual believes they can identify situations in which it is or is not safe to disclose PII, and what information is or is not safe to disclose in a given situation (control over task), and, on the other hand, the individual’s belief that they can control what information is or is not disclosed and who accesses and uses it (control over outcome). Our study framework thus considers the extent to which cyber self-efficacy informs an individual’s decision to disclose or withhold PII when interacting online. We predict that cyber self-efficacy is an important factor in the disclose/withhold decision.

3.4 Attitude

Allport [22] defines an individual’s attitude as “a mental and neural state of readiness, organized through experience, exerting a directive or dynamic influence upon the individual’s response to all objects and situations with which it is related” [22, p. 810]. He notes that the effect of attitude on behavior can be viewed on a continuum, from driving (motivating) behavior to simply directing it. Allport [22] further notes that attitude had generally been discussed and studied as a binary concept (i.e., favorable/unfavorable; good/bad; positive/negative). Fishbein [23] defines attitude similarly, referring to attitudes as “learned predispositions” that cause us to respond to specific people, places, and things in a particular, predetermined way (whether favorable or unfavorable).

In our study framework, we investigate whether an individual’s attitude toward interacting online (whether with organizations or individuals) influences their intention or propensity to disclose PII online. In particular, we consider whether or not an individual’s perceived risk in disclosing PII online is informed by their fundamental attitude toward interacting online. We posit that attitude is not a salient factor in predicting or explaining an individual’s intention or propensity to disclose PII online. Accordingly, we further hypothesize that attitude is also not a salient factor in the individual’s perception of the risk associated with disclosing PII online.

3.5 Social Norms

Social norms, resulting from general consensus and/or negotiation within a society, are a collective construct: they provide an informal, custom-based, and socially enforced set of rules for acceptable behavior within that society [24]. These rules encompass the customs, traditions, standards, mores, and fashions of the particular culture, as well as similar indicia of membership in or belonging to that society. Social norms reflect the standard or characteristic behaviors expected of, from, and by members of that society or culture. Accordingly, they provide guidance in situations where expectations are ambiguous and the appropriate response behavior is uncertain. Moreover, based on the principles or rules of conduct established in the society, social norms serve both to foster desirable behavior and to sanction undesirable behavior.

Cialdini and Trost [24] described two types of social normative behavior: descriptive norms and injunctive norms. Descriptive norms function to encourage socially desirable behavior: they influence behavior by providing a positive, socially successful model on which to base one’s own behavior. Injunctive norms influence behavior by imposing sanctions of varying severity on a society’s miscreants. These sanctions are often loosely constructed, informal forms of indictment within one’s own social network (Footnote 2). The two types of norms are not mutually exclusive, and may serve to inform each other.

Fishbein and Ajzen [25] discussed social norms with specific reference to how they influence behavioral intention and outcome – i.e., how they influence us to perform (or not perform) a particular behavior. They note that descriptive norms have both direct and indirect effects on behavior. That is, we observe the direct effect of a behavior: is it rewarded or punished? Such observation may also inform our awareness of a related injunctive norm. We may also observe that – whether initially rewarded or not – a behavior may have indirect consequences, i.e., subsequent positive or negative consequences. Moreover, we can observe any resources we may require, or barriers we may need to overcome, in order to perform the behavior. Fishbein and Ajzen’s framework incorporates both injunctive and descriptive social norms into what they refer to as subjective norms, which they define as “an individual’s perception that most people who are important to [them] think [they] should (or should not) perform a particular behavior” [25, p. 131]. In other words, subjective norms are one’s perception of the applicable social norms, and reflect the “total social pressure experienced with respect to a given behavior” [25, p. 131].

It is tempting to believe that online information privacy behaviors are influenced by social norms. On a daily basis, we observe others interacting and sharing PII online. If they do it, why shouldn’t we? If they have a problem resulting from sharing PII online, shouldn’t we think twice before doing it ourselves? However, we predict that the influence of subjective norms is actually not significant in terms of our ultimate decision to disclose or withhold PII online.

4 Method

This study investigates the extent to which the behavioral factors discussed in the previous section inform, and are in turn informed by, an individual’s subjective assessment of the risks and rewards associated with disclosing PII online in a particular transaction. The basic model remains the straightforward privacy calculus model described above; however, we include the behavioral factors described in the previous section in our framework to determine which, if any, are salient in explaining and predicting how the risk/reward analysis is conducted. The data were analyzed using exploratory factor analysis (EFA) techniques.

4.1 Data Collection and Survey Instrument

The survey was administered to students enrolled in an undergraduate-level technical writing course at a large southeastern state university. A total of 69 surveys were returned, of which 67 were complete and used for analysis. The participants were all between the ages of 18 and 40, with the vast majority between 18 and 25 (n = 64). Approximately two-thirds (n = 47) were male. Most (n = 51) indicated having previous information technology (IT) experience.

The survey included questions covering perceived behavioral control, attitude, and social norms, adapted from Ajzen’s Theory of Planned Behavior (TPB) model questionnaire [26] to be specific to online information privacy behaviors. The adapted TPB items were combined with questions intended to measure the participant’s level of privacy concern (as defined in Smith, Milberg and Burke [13] and Smith, Dinev and Xu [9]), as well as perceived risk. Questions concerning privacy calculus, suggested by Dinev and Hart [8], were also incorporated.

The instrument consisted of 39 items, each scored on a 7-point Likert-type scale (−3 = strongly disagree; 0 = neutral; 3 = strongly agree). Two questions concerning the participants’ declared intention to disclose personal information online within a specified timeframe were excluded as ultimately irrelevant to the research questions; accordingly, the final analysis covered 37 observed variables. Anonymized demographic information (i.e., age range and gender) was also collected, but not analyzed for this study.

4.2 Analysis

Because we have no preconceived ideas about how the factors described in the previous sections may interact in this specific context, our first step is to apply EFA techniques to the survey data. EFA allows us to determine whether the data surface any (or all) of these factors as salient in explaining online information privacy behavior. We performed our analysis using IBM SPSS (v25). We used principal axis factoring as the extraction method and, to facilitate interpretation of the factors, applied factor rotation (specifically, oblimin with Kaiser normalization).
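
For readers who wish to reproduce this pipeline outside SPSS, the following is a minimal sketch in Python using the factor_analyzer package. The file name and item-per-column layout are hypothetical, and SPSS and factor_analyzer may differ slightly in their implementations of principal axis factoring and oblimin rotation.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity,
        calculate_kmo,
    )

    # Hypothetical data file: one row per respondent, one column per survey item.
    df = pd.read_csv("survey_items.csv")

    # Sampling adequacy checks reported in the text.
    chi_square, p_value = calculate_bartlett_sphericity(df)
    _, kmo_overall = calculate_kmo(df)
    print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}; KMO = {kmo_overall:.3f}")

    # Determine the number of factors via the Kaiser criterion (eigenvalues > 1).
    fa = FactorAnalyzer(rotation=None, method="principal")
    fa.fit(df)
    eigenvalues, _ = fa.get_eigenvalues()
    n_factors = int((eigenvalues > 1).sum())

    # Principal axis factoring with oblique (oblimin) rotation.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(df)
    communalities = pd.Series(fa.get_communalities(), index=df.columns)
    print(communalities.sort_values())  # extraction values, lowest first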

As a preliminary matter in conducting EFA, we consider both Bartlett’s test of sphericity and the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy. The first iteration of EFA yielded a Bartlett’s test result that was statistically significant (p < 0.001) (Footnote 3). Thus, although the KMO was marginal (.546), we felt justified in proceeding with the EFA. The extraction values in the communalities table, reflecting the amount of variance in each measured variable that can be reproduced by the factors as a whole [27], were fairly homogeneous. The majority of the extraction values fell between .5 and .8, with only items 4 and 10 falling below .4. In this first iteration, 12 factors were surfaced with initial eigenvalues > 1 [28, 29]. Cumulatively, these factors explain approximately 66.3% of the variance between and among the 37 variables.

As noted above, items 4 and 10 had the least explanatory value of the initial 37 items (i.e., measured variables). While there is no magic cut-off for extraction values, the factors as a set did not reproduce more than 40% of the variance of either item. Accordingly, items 4 and 10 were not useful in surfacing salient factors, and we removed them in our second iteration. The results of Bartlett’s test and the KMO measure were similar to those obtained in the first iteration, although the KMO value was slightly higher (.569). The extraction values resulting from this iteration were again fairly homogeneous, with the exception of item 16 (which was < .4). Eleven factors were surfaced with eigenvalues > 1. Cumulatively, these 11 factors also explain 66.3% of the variance between and among the remaining 35 variables.

When we removed item 16 in the next iteration of EFA, the KMO value again improved, but was still marginal (.578). The extraction values in the communalities table were all > .4, with the exception of item 35. Although there were again 11 factors with eigenvalues > 1, the overall percentage of variance between and among the remaining 34 variables explained by these factors increased to 67.33%.

Continuing to eliminate variables with limited contributions to the factors being surfaced, we next removed item 35. The KMO value increased to .582. The extraction values remained homogeneous, with the exception of one variable (item 14), which was < .4. This fourth EFA iteration surfaced 10 factors with eigenvalues > 1; the overall percentage of variance between and among the remaining 33 variables explained by these 10 factors decreased slightly to 65.89%.

When we next removed item 14, the KMO value decreased slightly, to .578. The number of factors with eigenvalues > 1 remained at 10 at the end of this fifth iteration. However, the overall percentage of variance between and among the remaining 32 variables explained by these 10 factors increased to 66.7%.

Finally, in our sixth and last iteration of EFA, we removed item 20 – the only item with an extraction value < .4 in the previous iteration. This iteration yielded no further variables with extraction values < .4. In addition, there was virtually no change in the KMO value from the previous iteration, and there were still 10 factors with eigenvalues > 1. These 10 factors explain 67.69% of the variance between and among the remaining 31 variables.
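
The iterative item-removal procedure described above can be expressed as a simple loop. The sketch below (continuing from the earlier code) captures the logic we followed rather than the SPSS runs themselves; the .4 communality cut-off is the working threshold used in the text.

    import numpy as np

    def prune_low_communality_items(df, threshold=0.4):
        """Refit the EFA repeatedly, dropping the weakest item whenever its
        extraction value (communality) falls below `threshold`."""
        items = list(df.columns)
        while True:
            corr = df[items].corr()
            n_factors = int((np.linalg.eigvalsh(corr) > 1).sum())  # Kaiser criterion
            fa = FactorAnalyzer(n_factors=n_factors, method="principal",
                                rotation="oblimin")
            fa.fit(df[items])
            communalities = pd.Series(fa.get_communalities(), index=items)
            weakest = communalities.idxmin()
            if communalities[weakest] >= threshold:
                return items, fa
            items.remove(weakest)  # items 4, 10, 16, 35, 14, and 20 in our runs

    retained_items, final_fa = prune_low_communality_items(df)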

Given the level of homogeneity achieved in the extraction values of the remaining variables, we suggest that further extraction would be of limited value. Accordingly, we next examined the corresponding pattern matrix to identify which items correspond to which factors (i.e., which variables load most strongly on which factors). This part of the analysis revealed four factors with strong loadings of multiple items. Three of these four factors had at least four item structure coefficients > |.6|, which is generally held to be the minimum factor loading needed for factors consisting of four or fewer variables [27]. The fourth factor (perceived controllability) had four items loading on it, of which three were > |.6|. The final EFA pattern and structure coefficient matrix is presented in Table 1 (Appendix), which summarizes the factors and loadings in our final model.
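
The pattern matrix inspection can likewise be automated. The sketch below, continuing from the code above, flags for each factor the items whose loadings exceed the |.6| rule of thumb [27]:

    # fa.loadings_ holds the rotated (pattern) loadings after an oblique rotation.
    loadings = pd.DataFrame(
        final_fa.loadings_,
        index=retained_items,
        columns=[f"F{i + 1}" for i in range(final_fa.loadings_.shape[1])],
    )
    for factor in loadings.columns:
        strong = loadings.index[loadings[factor].abs() > 0.6].tolist()
        if strong:
            print(factor, "->", strong)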

Table 1. Pattern and structure coefficient matrix

Figure 2 below depicts the study framework as tested. The lighter arrows and circles reflect factors that did not load well with the survey data, while the darker arrows and circles denote factors with good loadings. As can be seen, while EFA reduced the data collected across the original 37 items to 10 factors, our final framework includes just four factors. Four of the 10 factors had only weak loadings (< .4). In addition, items 11, 15, 19, 22, 28, and 29 loaded across three additional factors, but did not load cohesively enough for the resulting factors to be meaningful. Accordingly, these weak-loading and non-loading items were eliminated, leaving us with four factors.

Fig. 2. Tested study framework

4.3 Discussion of Research Question

Based on the results of the EFA discussed above, we can answer our research question as follows: perceived trustworthiness, cyber self-efficacy, and perceived vulnerability appear to be the most salient of the factors we studied in predicting or explaining an individual’s decision to disclose or withhold PII when interacting online. With respect to an individual’s perception of the risk in disclosing PII online, we can likewise assert that these are salient factors. Our findings are illustrated in Fig. 3 below, which depicts our final study framework. Note that, because both of the lower-order factors of which cyber self-efficacy is comprised loaded well, our final framework includes the higher-order factor only.

Fig. 3. Final study framework

5 Conclusions, Limitations, and Future Work

Applying this study’s findings in an organizational context, we note that organizations that emphasize normative behaviors or attempt to influence attitudes in order to make individuals more likely to transact and disclose PII online are perhaps missing the more significant behavioral influencers. Likewise, severity of outcome does not appear to be a salient behavioral factor in this context, suggesting that the consequences of having their PII compromised matter less to customers and users than the likelihood of the compromise itself. Further, and as suggested by the privacy paradox, even making individuals aware of the potential for the PII they disclose online to be compromised – including by invoking their own experiences with the issue – seems insufficient to deter disclosure.

A main limitation of the study discussed in this paper is the small sample size (n = 67). However, as Thompson [27] discusses, sample size may be less important when the factors have structure coefficients greater than |.6|, as was the case here (see Appendix). In addition, the external validity of the study is limited because only a single, fairly homogeneous group from a single location participated in the survey.

Future research will focus on the factors surfaced by the EFA – perceived vulnerability and perceived trustworthiness, along with cyber self-efficacy – to assess how effectively these three factors explain (or predict) the propensity to disclose personal information. As part of this future work, we will not only examine these factors individually vis-à-vis online information privacy behavior, but will also operationalize each of them in an experimental design in order to gauge the relative importance of each in an individual’s risk/reward analysis when disclosing PII online. We will also explore the relationships between and among these factors, to assess whether one or more of them influences and/or moderates the interactions between and among the others.