
A Psycholinguistics-inspired Method to Counter IP Theft Using Fake Documents

Published: 12 June 2024

Abstract

Intellectual property (IP) theft is a growing problem. We build on prior work to deter IP theft by generating n fake versions of a technical document so a thief has to expend time and effort in identifying the correct document. Our new SbFAKE framework proposes, for the first time, a novel combination of language processing, optimization, and the psycholinguistic concept of surprisal to generate a set of such fakes. We start by combining psycholinguistic-based surprisal scores and optimization to generate two bilevel surprisal optimization problems (an Explicit one and a simpler Implicit one) whose solutions correspond directly to the desired set of fakes. As bilevel problems are usually hard to solve, we then show that these two bilevel surprisal optimization problems can each be reduced to equivalent surprisal-based linear programs. We performed detailed parameter tuning experiments and identified the best parameters for each of these algorithms. We then tested these two variants of SbFAKE (with their best parameter settings) against the best performing prior work in the field. Our experiments show that SbFAKE is able to more effectively generate convincing fakes than past work. In addition, we show that replacing words in an original document with words having similar surprisal scores generates greater levels of deception.

1 Introduction

Intellectual property theft is a growing problem in Europe and the USA. According to the FBI, the annual cost incurred by the US economy because of intellectual property theft lies between 225 and 600 billion USD.1 This is an enormous number. In fact, the situation is considered to be so severe that in 2023, the US House of Representatives formed a bipartisan select committee to “investigate and submit policy recommendations” on the topic.2
The FORGE [8] framework addresses this problem by suggesting that when an inventor creates a document containing intellectual property, a process should automatically trigger the creation of n fake versions of the document (e.g., \(n=49\) or \(n=99\) or even more). An adversary, faced with \((n+1)\) versions of a given document, will need to sift through all \((n+1)\) documents to find the real one. This causes delays, imposes costs, increases frustration, and increases uncertainty for the adversary. Even if he finds the real document, he may not be sure that it is real. And if he identifies a fake document as the real thing and chooses to act on that basis, then the impact on his operations may be devastating. Several successor efforts build upon FORGE in various ways [1, 21, 26, 49]. Nevertheless, all of these efforts seek to generate n fake documents by replacing words or concepts (e.g., n-grams) in an original document with new words/concepts so the fake is “close enough” to the original to be believable, yet “far enough” to be wrong.
However, these past efforts fail to take advantage of existing knowledge about the process of human language understanding. Deception involves inducing a state of incorrect belief in a user’s mind, and therefore there may be significant value in looking to psycholinguistics for inspiration, specifically at how people get from a sentence to the information that is ultimately derived from it.
For some time, a measurement known as surprisal has been one of the main workhorses in cognitive science research on sentence understanding [19, 20, 41]. Surprisal measures the quantity of information conveyed by a probabilistic event x, \(-\log p(x)\), where the base of the logarithm is often 2 so the quantity of information is measured in bits. Notice that the familiar definition of Shannon entropy for a random variable X, \(H(X) = \sum _x -p(x)\log p(x)\), can be written as \(H(X) = E\left[-\log p(x)\right]\); that is, entropy is the expected value of surprisal. In psycholinguistics, the distribution most commonly of interest is the conditional probability of a word given its preceding context, \(p(w_i|w_1 \ldots w_{i-1})\). When this probability is high, the next word \(w_i\) conveys little information given its context and its surprisal is low. Conversely, the surprisal for \(w_i\) is high when it is unexpected given the context.
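To make the definition concrete, the following minimal Python sketch (illustrative only, not part of the SbFAKE implementation) computes surprisal in bits from a toy conditional distribution and verifies that entropy is the expected value of surprisal.

```python
import math

def surprisal_bits(p):
    """Surprisal of an event with probability p, in bits: -log2 p."""
    return -math.log2(p)

# Toy conditional distribution p(w_i | "Slice an apple through at its ...");
# the probabilities here are made up for illustration.
next_word_probs = {"equator": 0.70, "core": 0.20, "stem": 0.09, "spaceship": 0.01}

for word, p in next_word_probs.items():
    print(f"{word:>10}: {surprisal_bits(p):5.2f} bits")

# Entropy is the expected value of surprisal over the distribution.
entropy = sum(p * surprisal_bits(p) for p in next_word_probs.values())
print(f"entropy = {entropy:.2f} bits")
```

A highly expected continuation such as "equator" here carries only about 0.51 bits, while an unexpected one such as "spaceship" carries about 6.64 bits.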
Psycholinguists and neurolinguists often adopt a linking hypothesis [14, 44] in which surprisal is connected with processing effort [5, 11, 38, 42, 45]. For example, word-level surprisal estimated using corpus-based probabilities has been correlated with indices of human sentence processing effort using fMRI [4, 6, 22, 39], MEG [7, 13], EEG [31, 34, 37], reading times [11, 18, 40, 42, 48], and pupillometric measurements [2]. In another influential line of work, Nobel Laureate Daniel Kahneman [25] drew a connection between effort and attention: In a system where mental energy is a limited resource, he observes, the terms “exert effort” and “pay attention” can to a great extent be considered synonymous (p. 8).
Putting these ideas together leads to our working hypothesis related to deception. We know that a word’s surprisal in context is connected to cognitive effort in processing it, and we know that effort is connected to attention. Therefore, it is plausible to surmise that there is a relationship between surprisal and attention. Adding the last piece of the puzzle, there is an obvious connection between attention and deception—the common idiom “slipping something under the radar” encapsulates the idea that deception is not deception if it is noticed, i.e., if it rises to a conscious level of attention. We therefore conjecture that optimizing success at deception should involve surprisal as a crucial variable to manipulate.
In this article, we explore this hypothesis by introducing methodology for Surprisal-based Fake (SbFAKE for short) document generation. In particular, we develop objective functions involving surprisal, such as substituting words that minimize surprisal, in optimization-based fake document generation (cf. References [1, 8]).
To our knowledge, we are the first to introduce methods inspired by research on human sentence understanding in service of improving the generation of documents that will deceive adversaries. In addition to that core idea, our principal contributions are the following: First, we contribute a bilevel optimization program that takes an original document and specifies how to substitute words in the original document with replacements (using the concept of surprisal) so n fake versions of the document are generated. Second, because bilevel optimization is computationally challenging, we develop methods to scale this computation by showing how a single-level linear program can do the job in an equivalent manner. Finally, we use a prototype implementation to conduct an extensive set of experiments with human subjects, looking at the ability to detect deceptive documents in two domains, computer science and chemistry.3
Our experiments: (i) identified the best parameters under which to run SbFAKE, (ii) showed that SbFAKE under these best parameters beat the WE-FORGE system [1] (which had previously been shown to beat FORGE [8]), (iii) yielded interesting results about the surprisingness of the words being replaced in the original document and the surprisingness of the replacement words, and (iv) demonstrated interesting relationships between the amount of time subjects spent reading documents, their attentiveness, and their ability to detect the real document. In short, SbFAKE beats the state-of-the-art along many dimensions and also reveals new insights about how words’ surprisal is linked to their ability to help deceive human users.
The organization of this article is as follows: Section 2 describes prior work on this topic. Section 3 provides a bird’s-eye view of the architecture of the SbFAKE framework. Section 4 shows how we combine the psycholinguistic concept of surprisal and concepts from operations research to set up a bilevel optimization problem to solve the problem of finding n fake versions of an original document. Because solving bilevel optimization problems is hard, Section 5 shows how to convert these bilevel optimization problems into single-level linear programming problems. Section 6 contains details of the prototype implementation of the SbFAKE system, along with the results of experiments conducted under appropriate IRB authorization.

2 Related Work

The goal of reducing IP theft has been in the minds of cybersecurity researchers for years. While much of this effort has gone into traditional security instruments (e.g., firewalls to keep intruders out [9], encryption to protect secret data [47], network and/or transaction monitoring [27]) to identify malicious activity, recent work has focused on generating fake versions of documents to deter IP theft.
The history of using fakes to deceive an adversary is not new. Reference [43] was one of the first to propose the use of honeypots to identify insiders within an organization who are stealing sensitive data. Honey files [51] created files with attractive sounding names such as passwords.doc so attackers would be drawn toward those files—if a user touched those files, then s/he would be assumed to be malicious. Reference [46] proposed using fake DNS information and HTML comments to lead attackers astray. Reference [35] provides a comprehensive survey of honeypots in the literature, but conspicuously says little about work involving natural language processing.
Separately from this work, there is growing interest in combating data breaches through the use of fake data [10, 15, 33] and intentionally falsified information [30]. Because those efforts focus on relational databases (usually only tables), we do not describe them in detail here.
Concurrently with efforts to combat data breaches with fake data, there has been a noticeable increase in research on the idea of generating n fake versions of a real, technical document to deter IP theft. Unlike honeypots, the idea here is not to identify an attacker, but to impose costs on the attacker once they steal a tranche of documents from a compromised system, even if the victim does not know they have been hacked. FORGE [8] was the first system to propose this idea. It extracted concepts from a network, built a multilevel concept graph to find words to replace in the original document, and then used an ontology to find appropriate replacement words.4 FORGE suffered from several issues: First, it required a good ontology for the domain of documents being protected from IP theft, but such ontologies were not always present. Second, by first choosing a word to replace in a first phase (without considering the quality of the potential replacements of that word), it often forced suboptimal choices in a second phase where the replacements were chosen. The WE-FORGE system [1] improved upon FORGE by eliminating both of these flaws. Rather than using an ontology, WE-FORGE automatically associated an embedding vector [28] with each word or concept in an original document as well as in some underlying background corpus. All of these word embeddings (and hence the words themselves) were then run through a clustering algorithm [50] to generate clusters. Replacements for a word were selected from the cluster containing the word. Word-Replacement pairs were selected simultaneously by solving an optimization problem. WE-FORGE was shown to outperform FORGE in its ability to deceive experts by generating high-quality documents.
Subsequent work focused on the fact that technical documents can be multimodal, e.g., they may contain tables or images or diagrams or equations/formulas. Probabilistic logic graphs [21] provided a single framework based on graphs to represent knowledge expressed via such multimodal structures. Reference [49] proposed a mechanism to generate fake equations that were sufficiently close to the real equation to be credible, yet sufficiently different to be wrong.

3 SbFAKE Architecture

Figure 1 shows the architecture of the SbFAKE system.
Fig. 1.
Fig. 1. Architecture of the SbFAKE system.
SbFAKE works with a given domain. For instance, in our work, we tested SbFAKE’s ability to generate fake documents in the computer science and the chemistry domains. Once a domain has been chosen, it works in two phases. A first pre-processing step learns some parameters from the given domain of interest. This is shown at the bottom of Figure 1, below the horizontal line. A second “operational use” phase kicks in when we try to generate fake versions of a specific document.

3.1 Pre-processing Phase

The pre-processing phase consists of five major steps.
(1)
Domain-specific Corpus: We must first build a corpus of documents related to a domain of interest (e.g., chemistry, computer science). For instance, if a pharmaceutical firm wants to use SbFAKE to generate fake drug designs, then they would build a corpus of documents related to drug designs. If an automobile company wants to use SbFAKE, then they would build a corpus of documents related to car designs.
(2)
Word Pruning: The domain-specific corpus may include a large number of irrelevant words (from the point of view of generating fake documents). For example, we would like to remove stop words as well as numerous other common words. In the case of technical documents, we may not care about adverbs, adjectives, and prepositions; we can eliminate such words through part-of-speech tagging tools. Note that pruning words does not mean they are ignored—just that they are not considered in the next two steps. Pruned words still play a role in surprisal score calculations, because this score depends on the context in which the word appears.
(3)
Token Embeddings: We automatically learn a token embedding [32] for each word retained in the previous step; the embedding is a numeric vector that represents the token and can reflect the relationship between different words.
(4)
Token Embedding Clusters: The token embedding of a word reflects its relationship with other words. If two words are similar enough, then they may be able to replace each other. We therefore develop word clusters using standard clustering methods [16].5
(5)
IDF Computation: Inverse document frequency captures the rarity of a word c among all documents in the corpus \(\mathcal {D}\), i.e., \(\log \frac{|\mathcal {D}|}{|\{d\in \mathcal {D}:c\in d\}|}\). We compute the inverse document frequency for each token of interest.
As none of these pre-processing steps is particularly novel, we describe them here solely for the purpose of completeness.
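For concreteness, the sketch below strings these steps together, assuming NLTK for part-of-speech pruning, gensim's Word2Vec for token embeddings, and scikit-learn's KMeans for clustering; these specific libraries and parameter values are illustrative choices rather than a description of the exact SbFAKE implementation.

```python
import math
from collections import defaultdict

import nltk                      # requires the 'punkt' and POS-tagger data packages
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def preprocess(corpus_docs, n_clusters=50):
    """corpus_docs: list of raw document strings from the chosen domain."""
    # (2) Word pruning: keep only nouns, found via part-of-speech tagging.
    tokenized = [nltk.word_tokenize(doc.lower()) for doc in corpus_docs]
    kept = set()
    for tokens in tokenized:
        for word, tag in nltk.pos_tag(tokens):
            if tag.startswith("NN"):            # nouns only (SbFAKE replaces nouns)
                kept.add(word)

    # (3) Token embeddings: learn a numeric vector for each word in the corpus.
    emb_model = Word2Vec(tokenized, vector_size=100, min_count=1, epochs=10)

    # (4) Token-embedding clusters: candidate replacements come from the same cluster.
    words = sorted(w for w in kept if w in emb_model.wv)
    labels = KMeans(n_clusters=min(n_clusters, len(words)), n_init=10) \
        .fit_predict([emb_model.wv[w] for w in words])
    clusters = defaultdict(list)
    for word, label in zip(words, labels):
        clusters[label].append(word)

    # (5) IDF(c) = log(|D| / |{d in D : c in d}|) for each kept word.
    idf = {w: math.log(len(corpus_docs) / sum(1 for t in tokenized if w in t))
           for w in words}
    return clusters, idf, emb_model
```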
Table 1.
Text Word | 3-gram Surprisal | 5-gram Surprisal | LSTM Surprisal
Slice | 5.074764252 | 4.95673418 | 2.683528423
an | 2.890086651 | 3.744663239 | 9.646591187
apple | 3.209666252 | 2.970785856 | 2.102390766
through | 4.21676302 | 3.760940552 | 10.96848202
at | 2.873981714 | 2.779854774 | 12.09079266
its | 2.82177043 | 2.510228872 | 8.735381126
equator | 2.913905144 | 3.381773949 | 10.03571224
and | 1.429305553 | 1.847350597 | 4.142323017
you | 2.88518095 | 2.783628941 | 14.68394089
will | 1.649069548 | 1.56226635 | 6.160648346
find | 1.302235842 | 1.155592442 | 6.539898396
five | 4.01618576 | 4.170496464 | 11.69524097
small | 3.12305975 | 3.117207289 | 9.547338486
chambers | 4.295228004 | 4.38272047 | 9.957602501
Table 1. An Example for the Word-by-word Surprisal Metrics in a Sentence from the Book, The Little Prince

3.2 Operational Use

Once the first phase is complete, the SbFAKE system is ready to generate fakes in the chosen domain. The second phase takes a document d as input and generates a set \(\mathcal {F}\) of fake versions of d. This phase involves the following steps:
(1)
Key Token Extraction: We first extract key tokens of interest in the document d.
(2)
Token-by-token Surprisal Metrics: The token-by-token surprisal metric measures how unexpected a token is, given prior context [4]. Formally, the surprisal score is \(-\log p(w_i\mid w_1,\ldots ,w_{i-1})\), where \(w_i\) is a word and \(w_1,\ldots ,w_{i-1}\) are the words immediately preceding it. As shown in Table 1, n-gram surprisal scores are usually based on the 3-gram language model or the 5-gram language model [12], and the LSTM surprisal score is based on a long short-term memory (LSTM) language model [18]. For each extracted token c in d, we will compute the surprisal scores, and we will also use the same context to compute the corresponding surprisal scores for each word in the same cluster \(\mathcal {C}(c)\). It is important to note that a surprisal score for a token is related to the context (e.g., sentence) in which it appears. A token may have a low surprisal score in some contexts and a high surprisal score in other contexts, even within the same document.
(3)
TFIDF Computation: Given a document, we can compute the term frequency for each token of interest, i.e., the number of times a token occurs in d. We can then compute the term-frequency inverse document frequency score, i.e., \(TFIDF(c)=TF(c)\times IDF(c)\).
(4)
SSIDF: Given a word and its surprisal and idf scores, we propose a new metric called SSIDF. We set \(SSIDF(c)={IDF(c)\over S(c)}\). The intuition behind SSIDF is that words that are both surprising and rare may be particularly bad to use as replacements.
Once these computations are complete, we develop optimization methods to compute the set of fake documents. These optimization algorithms are at the very heart of the novelty of this article.
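To illustrate how these quantities can be computed, the following sketch uses an off-the-shelf GPT-2 model from the Hugging Face transformers library (the actual SbFAKE implementation uses a fine-tuned GPT-2 and an LSTM language model, so the exact scores will differ); the SSIDF helper follows the definition given above.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(sentence):
    """Per-token surprisal (in bits) under GPT-2; the first token is skipped
    because it has no left context within the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    scores = []
    for i in range(1, ids.shape[1]):
        lp = log_probs[0, i - 1, ids[0, i]].item()   # log p(w_i | w_1 ... w_{i-1})
        scores.append((tokenizer.decode([ids[0, i].item()]), -lp / math.log(2)))
    return scores

def tfidf(term_freq, idf):
    """TFIDF(c) = TF(c) x IDF(c)."""
    return term_freq * idf

def ssidf(idf, surprisal):
    """SSIDF(c) = IDF(c) / S(c), per the definition above."""
    return idf / surprisal

print(token_surprisals("Slice an apple through at its equator."))
```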

4 Generating Fake Documents

At this stage, each occurrence of a token in a document has an associated surprisal score. However, the same token may appear in different parts of the document with different surprisal scores. We can set the surprisal score of a token in a given document to be any aggregate (e.g., mean, median) of the set of surprisal scores of the token in the document.6
For example, consider the sentences (S1) and (S2) both taken from a US patent [23].
Sentence S1
Thus, for context, from 0.8 to 1.2 mol of catalyst can be employed per mole of chloroacetyl chloride.
The surprisal scores of “chloroacetyl” and “chloride” in S1 are 22.39 and 16.19, respectively.
Sentence S2
There is a need for a technically simple and highly efficient method for preparing 2,5-dimethylphenyl acetic acid.
The surprisal score of “dimethylphenyl” in S2 is 19.28.
Consider the case of the “How many of each animal did Moses take on the Ark?” example. This deception succeeded because the word “Moses” is not too surprising in the context of the word “Ark.” Had we instead posed this question as “How many of each animal did Oprah take on the Ark?” then the deception would have been much less effective, because people do not mentally associate Oprah with Biblical themes. As a consequence, we hypothesized that the surprisal score of a word being replaced must not be too high. We therefore provided a surprisal score interval for each word being replaced so very surprising words are not replaced. At the same time, we do not want the words being replaced to be very unsurprising—replacing such words may not have the desired effect of making the generated fake “wrong enough.”
In sentence S1 above, for instance, we might consider replacing “chloroacetyl” with “pyridyl” and “chloride” with “iodide” to yield sentence S3 below.
Sentence S3
Thus, for context, from 0.8 to 1.2 mol of catalyst can be employed per mole of pyridyl iodide.
The surprisal scores of “pyridyl” and “iodide” are 20.54 and 17.31, respectively. These surprisal scores are similar to those of the words “chloroacetyl” and “chloride” that they, respectively, replace.
The reader will notice that (S3) is intuitively a credible replacement for (S1). The rest of this section will show how such fakes can be generated automatically (and, in fact, S3 is generated automatically from S1 by one of the algorithms we discuss below).
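Procedurally, the intuition behind this example is to pick, from the cluster of a word being replaced, a candidate whose surprisal in the same context is closest to that of the original word while remaining in a desired interval. The sketch below is illustrative only (the candidate words other than “pyridyl” and their scores are made up); the optimization problems that follow make these choices jointly over all words and all fakes.

```python
def pick_replacement(orig_surprisal, candidates, interval=(0.0, 25.0)):
    """candidates: dict mapping candidate word -> its surprisal in the same context.
    Returns the in-interval candidate whose surprisal is closest to the original's."""
    lo, hi = interval
    feasible = {w: s for w, s in candidates.items() if lo <= s <= hi}
    if not feasible:
        return None
    return min(feasible, key=lambda w: abs(feasible[w] - orig_surprisal))

# "chloroacetyl" has surprisal 22.39 in S1; "pyridyl" (20.54) is the closest candidate.
print(pick_replacement(22.39, {"pyridyl": 20.54, "benzoyl": 14.10, "acetyl": 9.80}))
```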

4.1 An Explicit Bilevel Surprisal Optimization Problem

Given a surprisal score interval, we will try to replace some tokens whose score is in this interval to generate a set of fake documents. We need to find the interval that will give us a set of documents whose deceptive capabilities are the highest. In addition, we need to consider which token we should use to replace a token in the original document to make a fake document look real, while at the same time ensuring that the resulting set of fake documents look different from one another.
Fig. 2.
Fig. 2. Explicit bilevel surprisal optimization problem.
We can formulate this as a bilevel optimization problem: Determine the interval first and then determine the best set of fake documents. In this section, we will formulate this problem with several objective functions for each approach. Figure 2 shows the natural formulation of our fake document generation problem as a bilevel optimization problem. The notation used and its intuitive explanation are as follows:
\(\mathcal {C}\) is the set of tokens in the original document d.
\(S(c)\) is the surprisal score of token c (aggregated across all occurrences of c in the original document). Throughout this section, we slightly abuse notation and allow \(S(c)\) to be any one of a number of different types of surprisal scores such as those in References [4, 12, 18], as well as variants thereof such as the SSIDF score proposed earlier in this article. In fact, as new methods of evaluating surprisal are developed, those, too, can be used as potential definitions of \(S(c)\)—and everything in this section will continue to work with those new definitions.
\(I=[I_1,I_2]\) is the surprisal score interval for each token in \(\mathcal {C}\) of the original document.
\(\mathcal {C}(c)\) is the set of tokens from the corpus of documents that can be used as replacements for c—these are tokens in the same cluster as c. In addition, the tokens in \(\mathcal {C}(c)\) should have surprisal scores in the interval \(I^{\prime }=[I^{\prime }_1,I^{\prime }_2]\).
\(\mathcal {F}\) is a set of fake documents. For now, we can think of each of these as a copy of the original document—once we solve the optimization problem in Figure 2, we will make the appropriate replacements to generate the fake documents.
\(\text{fake}(\mathcal {F})\) is a function to measure deceptive capabilities. We will give specific examples of this function later.
\(X_{f,c,c^{\prime }}=1\) says that token c in document f is replaced by \(c^{\prime }\), and \(X_{f,c,c^{\prime }}=0\) says that c in document f is not replaced by \(c^{\prime }\). \(dist(c,c^{\prime })\) is the distance between c and \(c^{\prime }\).
Equation (3) ensures that at least \(\alpha\) tokens are replaced. This partially ensures that the fakes are sufficiently different from the original. Equation (4) ensures that the distance between each fake document and the original document is not less than a threshold \(\beta\). This, too, ensures that the fakes are sufficiently different from the original. Equation (5) says that we never replace concept c in document f with \(c^{\prime }\) if the surprisal score of c is in the desired interval \([I_1,I_2]\) but the surprisal score of \(c^{\prime }\) is less than \(I_1\). Intuitively, this means that \(c^{\prime }\) has a surprisal score outside the desired interval. Later in this article, we will experimentally identify the desired interval that leads to maximal deception. Equation (6) is similar, but this time the surprisal score of \(c^{\prime }\) exceeds \(I_2\). Equation (7) says that we never replace concept c in document f with \(c^{\prime }\) if the surprisal score of c is outside the desired interval \([I_1,I_2]\) by being smaller than \(I_1\). Again, this is because once we have discovered an interval for the surprisal score that maximizes deception, we want the surprisal score of replaced concepts to be within that desired interval. Similarly, Equation (8) says that we never replace concept c in document f with \(c^{\prime }\) if the surprisal score of c is outside the desired interval \([I_1,I_2]\) by being greater than \(I_2\). The final constraint, Equation (9), says that all the variables \(X_{f,c,c^{\prime }}\) are binary, i.e., set to either 0 (indicating that c is not replaced by \(c^{\prime }\) in document f) or 1 (indicating that c is replaced by \(c^{\prime }\) in document f). This is because for each potential fake file f, a concept c is either replaced by concept \(c^{\prime }\) or not.
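In plain Python, the feasibility test that constraints (5)–(8) jointly encode can be sketched as follows (the interval values are illustrative; in the optimization itself, this test is enforced through the binary variables \(X_{f,c,c^{\prime }}\)):

```python
def substitution_allowed(s_c, s_c_prime, I=(0.3, 0.7), I_prime=(0.3, 0.7)):
    """True iff replacing c (surprisal s_c) by c' (surprisal s_c_prime) is permitted:
    both scores must lie in their respective intervals; otherwise X_{f,c,c'} is forced to 0."""
    return (I[0] <= s_c <= I[1]) and (I_prime[0] <= s_c_prime <= I_prime[1])
```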

4.2 The Objective Function

The objective function uses the term \(\text{fake}~(\mathcal {F})\) whose definition has not been provided thus far. This expression could be defined in many ways. One way is as follows:
\begin{align} -\sum _{f\in \mathcal {F},c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}|\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}|. \end{align}
(10)
This formulation ensures that a fake document is close to the original document and that two fake documents are different from each other. This is important, because we do not want all the fake documents to look exactly the same. \(TFIDF(c)\) is the well-known product of the term frequency of c and the inverse document frequency of c, and \(\lambda \gt 0\) is a constant.
We can also consider a variant of the above formulation. Suppose \(\tau \in [0,\max _{f\in \mathcal {F}}\sum _{ c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)]\). We can now try to make sure that the distance between each fake document and the original document is close to \(\tau\) by updating the above objective function as follows:
\begin{align} -\sum _{f\in \mathcal {F}}|\sum _{c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}-\tau |+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}|\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}|. \end{align}
(11)
This formulation allows us to adjust the distance between each fake document and the original document to make sure that they are not too close and not too far. When \(\tau =0\), Equation (11) is equivalent to Equation (10).
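For intuition, the two objective functions can be evaluated for a candidate assignment of the binary variables with a few lines of Python; the nested-dictionary data structures below are illustrative and not part of the formulation itself.

```python
def objective_eq10(X, dist, tfidf, lam):
    """X[f][c][cp] in {0,1}; dist[c][cp], tfidf[c] are floats; lam > 0.
    All fakes f are assumed to index the same tokens c and candidates cp."""
    closeness = sum(dist[c][cp] * tfidf[c] * X[f][c][cp]
                    for f in X for c in X[f] for cp in X[f][c])
    diversity = sum(abs(sum(X[f][c].values()) - sum(X[fp][c].values()))
                    for f in X for fp in X for c in X[f])
    return -closeness + lam * diversity

def objective_eq11(X, dist, tfidf, lam, tau):
    """Variant that keeps each fake's distance from the original close to tau."""
    per_fake = [sum(dist[c][cp] * tfidf[c] * X[f][c][cp]
                    for c in X[f] for cp in X[f][c]) for f in X]
    diversity = sum(abs(sum(X[f][c].values()) - sum(X[fp][c].values()))
                    for f in X for fp in X for c in X[f])
    return -sum(abs(v - tau) for v in per_fake) + lam * diversity
```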

4.3 An Implicit Bilevel Surprisal Optimization Problem

One problem with using the explicit bilevel surprisal optimization problem formulation above is that the number of variables can be enormous. If the corpus of documents from the domain of interest is large, then the set of tokens in that corpus can be enormous—and the explicit formulation has one variable \(X_{f,c,c^{\prime }}\) for each concept \(c^{\prime }\) in the corpus. For example, if we want to generate, say, 100 fake documents from an original document containing 1,000 tokens with a domain corpus containing 100K tokens, then we may have on the order of \(10^{10}\) variables \(X_{f,c,c^{\prime }}\), which would likely cause any real-world application to struggle. In this section, we try to replace the \(X_{f,c,c^{\prime }}\) variables with \(X_{f,c}\) variables to tame this complexity. To do this, we use the “Implicit” Bilevel Optimization Problem in Figure 3.
Fig. 3.
Fig. 3. Implicit bi-level optimization problem formulation.
The notation in Figure 3 is similar to that in Figure 2, but there are some differences worth noting.
\(X_{f,c}=1\) means that the token c in document f is replaced in fake f (but without specifying which concept in \(\mathcal {C}(c)\) is the replacement). \(X_{f,c}=0\) means c in document f is not replaced.
Equation (14) ensures that at least \(\alpha\) tokens are replaced, while Equation (15) ensures that the distance between each fake document and the original document is not less than a threshold \(\beta\). \(\overline{dist}(c)\) is the average distance between c and each element in \(\mathcal {C}(c)\).
As in the case of the explicit approach, we may now formulate our objective function \(\text{fake}(\mathcal {F})\) in terms of the \(X_{f,c}\) variables. One way to do this is as given below:
\begin{align} -\sum _{f\in \mathcal {F},c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}| X_{f,c}- X_{f^{\prime },c}|, \end{align}
(19)
which ensures that a fake document is close to the original document and two fake documents are distinguishable. \(TFIDF(c)\) is the product of the term frequency of c and the inverse document frequency of c, and \(\lambda \gt 0\) is a constant. And as in the case of the explicit approach, if \(\tau \in [0,\max _{f\in \mathcal {F}}\sum _{c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)]\), then we can define a version of the objective function, which ensures that the distance between each fake document and the original document is close to \(\tau\).
\begin{align} -\sum _{f\in \mathcal {F}}|\sum _{ c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}-\tau |+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}| X_{f,c}- X_{f^{\prime },c}|, \end{align}
(20)
which allows us to adjust the distance between each fake and the original document to make sure that they are neither too close nor too far. When \(\tau =0\), Equation (20) is equivalent to Equation (19).

5 Transforming the Bilevel Program into A Linear Program

The replacement of the \(X_{f,c,c^{\prime }}\) variables in the explicit bilevel surprisal optimization problem with \(X_{f,c}\) variables provides one potential performance improvement. However, bilevel optimization problems are still hard to solve. The goal of this section is to transform the bilevel programs into linear programs.
In Equations (1)–(9) of the explicit bilevel surprisal optimization problem, we want to use \(c^{\prime }\) to replace c so \(I_1\le S(c)\le I_2\) and \(I^{\prime }_1\le S(c^{\prime })\le I^{\prime }_2\), i.e.,
\begin{align} &S(c)-I_1\ge 0 \end{align}
(21)
\begin{align} &I_2-S(c)\ge 0 \end{align}
(22)
\begin{align} &S(c^{\prime })-I^{\prime }_1\ge 0 \end{align}
(23)
\begin{align} &I^{\prime }_2-S(c^{\prime })\ge 0. \end{align}
(24)
Recall, as stated earlier, that \(S(c)\) could be any arbitrary but fixed surprisal function such as those proposed in References [4, 12, 18], the SSIDF metric proposed earlier in this article, or, in fact, any function that is hypothesized to measure the quality of word replacements. Note that for any pair of tokens \(c,c^{\prime }\) not satisfying the above constraints, \(X_{f,c,c^{\prime }}\) should be 0, indicating that we cannot use \(c^{\prime }\) to replace c. To make this happen, for each pair \(c,c^{\prime }\) we introduce a binary variable \(I_{c,c^{\prime },i}\) for each of the above four constraints, satisfying:
\begin{align} &S(c)-I_1\ge I_{c,c^{\prime },1}-1 \end{align}
(25)
\begin{align} &S(c)-I_1\le I_{c,c^{\prime },1} \end{align}
(26)
\begin{align} &I_2-S(c)\ge I_{c,c^{\prime },2}-1 \end{align}
(27)
\begin{align} &I_2-S(c)\le I_{c,c^{\prime },2} \end{align}
(28)
\begin{align} &S(c^{\prime })-I^{\prime }_1\ge I_{c,c^{\prime },3}-1 \end{align}
(29)
\begin{align} &S(c^{\prime })-I^{\prime }_1\le I_{c,c^{\prime },3} \end{align}
(30)
\begin{align} &I^{\prime }_2-S(c^{\prime })\ge I_{c,c^{\prime },4}-1 \end{align}
(31)
\begin{align} &I^{\prime }_2-S(c^{\prime })\le I_{c,c^{\prime },4}, \end{align}
(32)
which assumes that the surprisal score values have been normalized to \([0,1]\). For example, we have \(I_{c,c^{\prime },1}=1\) if \(S(c)-I_1\ge 0\); otherwise, \(I_{c,c^{\prime },1}=0\). Then, we have:
\begin{align} &X_{f,c,c^{\prime }}\le I_{c,c^{\prime },1} \end{align}
(33)
\begin{align} &X_{f,c,c^{\prime }}\le I_{c,c^{\prime },2} \end{align}
(34)
\begin{align} &X_{f,c,c^{\prime }}\le I_{c,c^{\prime },3} \end{align}
(35)
\begin{align} &X_{f,c,c^{\prime }}\le I_{c,c^{\prime },4}, \end{align}
(36)
which will make sure that \(X_{f,c,c^{\prime }}=0\) if c or \(c^{\prime }\) is not in the given interval. Therefore, we can transform the bilevel program in Equations (1)–(9) into the linear program shown in Figure 4.
Fig. 4.
Fig. 4. Explicit linear surprisal program formulation.
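As an illustration of how these indicator constraints can be written down in practice, the following CVXPY fragment (a sketch only; the surprisal values and intervals are illustrative, and the actual SbFAKE implementation may organize this differently) encodes constraints (25)–(36) for a single pair \((c,c^{\prime })\):

```python
import cvxpy as cp

S_c, S_cp = 0.62, 0.55      # normalized surprisal scores of c and of the candidate c'
I1, I2 = 0.3, 0.7           # interval for words being replaced
I1p, I2p = 0.3, 0.7         # interval for replacement words

X = cp.Variable(boolean=True)        # X_{f,c,c'}
I = cp.Variable(4, boolean=True)     # I_{c,c',1}, ..., I_{c,c',4}

constraints = [
    S_c - I1 >= I[0] - 1,   S_c - I1 <= I[0],     # (25)-(26)
    I2 - S_c >= I[1] - 1,   I2 - S_c <= I[1],     # (27)-(28)
    S_cp - I1p >= I[2] - 1, S_cp - I1p <= I[2],   # (29)-(30)
    I2p - S_cp >= I[3] - 1, I2p - S_cp <= I[3],   # (31)-(32)
    X <= I[0], X <= I[1], X <= I[2], X <= I[3],   # (33)-(36)
]
# Maximizing X tells us whether this particular substitution is permitted at all.
prob = cp.Problem(cp.Maximize(X), constraints)
prob.solve(solver=cp.GUROBI)   # any MIP-capable solver installed locally will do
print(X.value)                 # 1.0 here: both scores lie inside their intervals
```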
Similarly, the implicit bilevel program in Equations (12)-(18) can be transformed into the linear surprisal program shown in Figure 5.
Fig. 5.
Fig. 5. Implicit linear surprisal program formulation.
The objective function contains absolute values of variables; e.g., the objective function \(\text{fake}(\mathcal {F})\) in Equation (10) contains the term \(\sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}|\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}|\). For each absolute value term, we introduce variables \(Y_{f,f^{\prime },c}\) and use the following linear constraints to ensure that \(\sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}Y_{f,f^{\prime },c}=\sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}|\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}|\):
\begin{align} \sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }} + YB_{f,f^{\prime },c}&\ge Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(60)
\begin{align} - \left(\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}\right) + Y(1-B_{f,f^{\prime },c})&\ge Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(61)
\begin{align} \sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }} &\le Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(62)
\begin{align} - \left(\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}\right) &\le Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(63)
\begin{align} Y_{f,f^{\prime },c}&\ge 0\quad \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(64)
\begin{align} B_{f,f^{\prime },c}&\in \lbrace 0,1\rbrace \quad \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}, \end{align}
(65)
where Y is a large constant (at least the upper bound of \(Y_{f,f^{\prime },c}\), i.e., \(|\mathcal {C}|^2\)), \(B_{f,f^{\prime },c}=0\) means \(Y_{f,f^{\prime },c}=\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }}\ge 0\), and \(B_{f,f^{\prime },c}=1\) means \(Y_{f,f^{\prime },c}=-(\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f,c,c^{\prime }}-\sum _{c^{\prime }\in \mathcal {C}(c)}X_{f^{\prime },c,c^{\prime }})\ge 0\). Then, the objective function \(\text{fake}(\mathcal {F})\) in Equation (10) becomes the following linear objective function:
\begin{align} -\sum _{f\in \mathcal {F},c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}Y_{f,f^{\prime },c}, \end{align}
(66)
with constraints in Equations (60)–(65). Similarly, the objective function \(\text{fake}(\mathcal {F})\) in Equation (11) becomes the following linear objective function:
\begin{align} -\sum _{f\in \mathcal {F}}Y_f +\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}}Y_{f,f^{\prime },c}, \end{align}
(67)
with the help of constraints in Equations (60)–(65) and the following constraints:
\begin{align} \sum _{c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}-\tau + Y^{\prime }B_f&\ge Y_f \quad \forall f\in \mathcal {F} \end{align}
(68)
\begin{align} - \left(\sum _{c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}-\tau \right) + Y^{\prime }(1-B_f)&\ge Y_f \quad \forall f\in \mathcal {F} \end{align}
(69)
\begin{align} \sum _{c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}-\tau &\le Y_f \quad \forall f\in \mathcal {F} \end{align}
(70)
\begin{align} - \left(\sum _{c\in \mathcal {C},c^{\prime }\in \mathcal {C}(c)}dist(c,c^{\prime })TFIDF(c)X_{f,c,c^{\prime }}-\tau \right) &\le Y_f \quad \forall f\in \mathcal {F} \end{align}
(71)
\begin{align} Y_f&\ge 0\quad \quad \forall f\in \mathcal {F} \end{align}
(72)
\begin{align} B_f&\in \lbrace 0,1\rbrace \quad \forall f\in \mathcal {F}. \end{align}
(73)
Similarly, the objective function \(\text{fake}(\mathcal {F})\) in Equation (19) becomes the following linear objective function:
\begin{align} -\sum _{f\in \mathcal {F},c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}} Y_{f,f^{\prime },c}, \end{align}
(74)
with the following constraints:
\begin{align} X_{f,c}- X_{f^{\prime },c} + YB_{f,f^{\prime },c}&\ge Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(75)
\begin{align} -(X_{f,c}- X_{f^{\prime },c}) + Y(1-B_{f,f^{\prime },c})&\ge Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(76)
\begin{align} X_{f,c}- X_{f^{\prime },c} &\le Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(77)
\begin{align} -(X_{f,c}- X_{f^{\prime },c}) &\le Y_{f,f^{\prime },c} \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(78)
\begin{align} Y_{f,f^{\prime },c}&\ge 0\quad \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F} \end{align}
(79)
\begin{align} B_{f,f^{\prime },c}&\in \lbrace 0,1\rbrace \quad \forall c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}. \end{align}
(80)
The objective function \(\text{fake}(\mathcal {F})\) in Equation (20) becomes the following linear objective function:
\begin{align} -\sum _{f\in \mathcal {F}}Y_f+\lambda \sum _{c\in \mathcal {C},f,f^{\prime }\in \mathcal {F}} Y_{f,f^{\prime },c}, \end{align}
(81)
with the help of the above constraints for \(Y_{f,f^{\prime },c}\) and the following constraints for \(Y_f\):
\begin{align} \sum _{ c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}-\tau + Y^{\prime }B_f&\ge Y_f \quad \forall f\in \mathcal {F} \end{align}
(82)
\begin{align} - \left(\sum _{ c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}-\tau \right) + Y^{\prime }(1-B_f)&\ge Y_f \quad \forall f\in \mathcal {F} \end{align}
(83)
\begin{align} \sum _{ c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}-\tau &\le Y_f \quad \forall f\in \mathcal {F} \end{align}
(84)
\begin{align} - \left(\sum _{ c\in \mathcal {C}}\overline{dist}(c)TFIDF(c)X_{f,c}-\tau \right) &\le Y_f \quad \forall f\in \mathcal {F} \end{align}
(85)
\begin{align} Y_f&\ge 0\quad \quad \forall f\in \mathcal {F} \end{align}
(86)
\begin{align} B_f&\in \lbrace 0,1\rbrace \quad \forall f\in \mathcal {F}. \end{align}
(87)
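For completeness, the fragment below sketches, again in CVXPY, the absolute-value linearization of constraints (75)–(80) for the implicit variables \(X_{f,c}\); the dimensions and the big-M constant are illustrative.

```python
import cvxpy as cp

n_fakes, n_tokens = 3, 5
bigM = n_tokens ** 2        # an upper bound on Y_{f,f',c}, mirroring the constant Y above

X = cp.Variable((n_fakes, n_tokens), boolean=True)   # X_{f,c}
Y, B, constraints = {}, {}, []
for f in range(n_fakes):
    for fp in range(n_fakes):
        for c in range(n_tokens):
            Y[f, fp, c] = cp.Variable(nonneg=True)    # Y_{f,f',c} >= 0        (79)
            B[f, fp, c] = cp.Variable(boolean=True)   # B_{f,f',c} in {0,1}    (80)
            d = X[f, c] - X[fp, c]
            constraints += [
                d + bigM * B[f, fp, c] >= Y[f, fp, c],          # (75)
                -d + bigM * (1 - B[f, fp, c]) >= Y[f, fp, c],   # (76)
                d <= Y[f, fp, c],                               # (77)
                -d <= Y[f, fp, c],                              # (78)
            ]
# sum(Y.values()) is the lambda-weighted diversity term in the linear objective (74);
# at an optimum that rewards large Y, each Y_{f,f',c} equals |X_{f,c} - X_{f',c}|.
diversity_term = sum(Y.values())
```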

6 Implementation and Experiments

The implementation of the SbFAKE system consisted of 980 lines of code written in Python. In addition, the implementation used multiple Python libraries including the Natural Language Toolkit, scikit-learn, SciPy, and PyTorch. The optimization problems were set up using the CVXPY library and solved with Gurobi. We calculated surprisal scores via two different models: an LSTM-based model and a fine-tuned GPT-2 model. All experiments were run on a machine with an Intel i7-10750H CPU and 64 GB RAM.
Our experiments fall into three broad categories: (i) finding the best parameter settings for the SbFAKE system, (ii) comparing SbFAKE against the state-of-the-art existing system, and (iii) assessing specific hypotheses regarding the SbFAKE system. We describe each of these below.

6.1 Finding the Best Parameters for SbFAKE

To find the best hyperparameters for SbFAKE, we conducted an experiment (with 40 human subjects) at the sentence level for both the Computer Science and the Chemistry domains and varied the following parameters:
(1)
Type of optimization: We compared the explicit versus implicit linear program optimizations described in Section 4.
(2)
Surprisal score versus SSIDF: We also varied the type of scoring considered to include both surprisal scores and SSIDF (described in Section 3.2). Recall that SSIDF combines the surprisal score with inverse document frequency.
(3)
Model that was used to compute surprisal score: We used both LSTM-based and fine-tuned GPT-based models to calculate the surprisal score [29].
(4)
Score interval: The fake document generation problem requires specifying an interval \([I_1,I_2]\). In our experiments, we compared five different combinations of \(I_1\) and \(I_2\): [0.0, 1.0], [0.3, 0.7], [0.0, 0.4], [0.6, 1.0], and [0.4, 0.6].
(5)
Others: Based on some quick initial testing, we set the parameters \(\alpha = 0.8\), \(\beta = 0.8\), and \(\lambda = 0.6\) throughout our experiments. These parameters were introduced in Equations (3), (15), and (10), respectively.
We present the best 16 parameter settings in Table 2.
Table 2.
Optimization | Model | Score | Interval | Choice 1 | Choice 2 | Choice 3 | Total
explicit | lstm | surprisal | [0.0, 1.0] | 0.103 (p=0.1226) | 0.123 (p=0.0095) | 0.075 (p=0.2984) | 0.1 (p=0.0265)
explicit | lstm | surprisal | [0.6, 1.0] | 0.14 (p=0.0162) | 0.095 (p=0.0932) | 0.05 (p=0.6195) | 0.09 (p=0.0706)
implicit | lstm | surprisal | [0.6, 1.0] | 0.071 (p=0.3733) | 0.11 (p=0.0308) | 0.065 (p=0.4223) | 0.084 (p=0.117)
explicit | gpt2 | ssidf | [0.0, 1.0] | 0.048 (p=0.6131) | 0.05 (p=0.6342) | 0.15 (p=0.0013) | 0.083 (p=0.1264)
explicit | gpt2 | surprisal | [0.0, 1.0] | 0.053 (p=0.5617) | 0.075 (p=0.2777) | 0.108 (p=0.0524) | 0.078 (p=0.181)
implicit | lstm | surprisal | [0.0, 0.4] | 0.13 (p=0.0309) | 0.07 (p=0.3426) | 0.028 (p=0.8478) | 0.076 (p=0.206)
explicit | lstm | surprisal | [0.0, 0.4] | 0.103 (p=0.1226) | 0.038 (p=0.7827) | 0.033 (p=0.8061) | 0.05 (p=0.6517)
implicit | gpt2 | surprisal | [0.0, 1.0] | 0.065 (p=0.4345) | 0.043 (p=0.7256) | 0.065 (p=0.4223) | 0.058 (p=0.5041)
implicit | lstm | surprisal | [0.0, 1.0] | 0.055 (p=0.5407) | 0.058 (p=0.5172) | 0.05 (p=0.6195) | 0.054 (p=0.5794)
explicit | lstm | ssidf | [0.4, 0.6] | 0.035 (p=0.7362) | 0.045 (p=0.7004) | 0.075 (p=0.2984) | 0.052 (p=0.6161)
implicit | lstm | ssidf | [0.4, 0.6] | 0.023 (p=0.828) | 0.073 (p=0.3031) | 0.033 (p=0.8061) | 0.043 (p=0.7615)
Authentic version | — | — | — | 0.055 (p=0.5407) | 0.0375 (p=0.7879) | 0.035 (p=0.7881) | 0.0425 (p=0.7687)
implicit | gpt2 | ssidf | [0.0, 1.0] | 0.02 (p=0.8468) | 0.048 (p=0.662) | 0.053 (p=0.5817) | 0.04 (p=0.8038)
explicit | gpt2 | ssidf | [0.3, 0.7] | 0.028 (p=0.793) | 0.035 (p=0.8138) | 0.055 (p=0.5548) | 0.039 (p=0.8165)
explicit | lstm | ssidf | [0.3, 0.7] | 0.018 (p=0.8593) | 0.038 (p=0.7827) | 0.053 (p=0.5817) | 0.036 (p=0.8515)
implicit | lstm | ssidf | [0.3, 0.7] | 0.038 (p=0.7099) | 0.028 (p=0.8745) | 0.03 (p=0.8319) | 0.033 (p=0.8824)
implicit | gpt2 | ssidf | [0.3, 0.7] | 0.013 (p=0.8865) | 0.038 (p=0.7827) | 0.045 (p=0.6792) | 0.033 (p=0.8824)
Table 2. The Table Displays Parameter Combinations Used
The columns “Choice 1,” “Choice 2,” and “Choice 3” indicate the percentage of survey participants who selected each combination as their first, second, and third choice, respectively. The “Total” values represent the overall percentage of participants who chose the combination in any of the positions. The “Authentic Version” row corresponds to the case when the original document is selected.
The rows correspond to different parameter settings (in descending order of the quality of performance as measured by the last column). The columns in Table 2 can be interpreted as follows: The “Choice 1” column shows the percentage of subjects whose first choice was the result generated by a given parameter setting (or the original document, which corresponds to the “Authentic version” row). Similarly, the “Choice 2” and “Choice 3” columns show the percentage of subjects whose second and third choice, respectively, was the result generated by a given parameter setting (or the original document). The last column shows the percentage of subjects whose top 3 choices included the sentences generated by the given parameter setting.
Results. We see that the best parameter setting (based on any one of the top 3) used the explicit algorithm with the LSTM-based surprisal model and a surprisal interval of \([0.0,1.0]\) (i.e., the surprisal interval was unconstrained). This corresponds to the first row in Table 2. However, if we consider only the top choice, then the second row, which has the same parameters except that the surprisal interval is \([0.6,1.0]\), is best.
The results show that out of the 16 combinations examined, 11 beat out the “authentic” version. These results are statistically significant via a Student’s t-test (\(p \lt 0.001\)). These combinations were chosen more frequently as one of the three preferred choices. Consequently, based on these findings, we recommend selecting the following three parameter combinations that demonstrated the highest efficiency:
(1)
explicit, lstm, surprisal, [0.0, 1.0] (we refer to this setting later in the article as SbFAKE[E]-01). The “01” denotes the surprisal interval used.
(2)
explicit, lstm, surprisal, [0.6, 1.0] (we refer to this setting later in the article as SbFAKE[E]-61). The “61” denotes the surprisal interval used.
(3)
implicit, lstm, surprisal, [0.6, 1.0] (we refer to this setting later in the article as SbFAKE[I]).
Each of these three parameter settings beats the next best one in a statistically significant manner (\(p \lt 0.001\) in all cases).

6.2 Comparison with WE-FORGE

This section describes an experiment that used the three best parameter settings discovered above to compare the deception ability of SbFAKE with WE-FORGE [1], which represents the best prior work on fake document generation. WE-FORGE was already shown to be superior to the earlier FORGE [8] system and hence SbFAKE was only compared with WE-FORGE.
A total of 40 participants were assigned to two surveys, each focusing on documents from either the field of Chemistry or Computer Science.7 Each survey comprised 10 tasks, with each task presenting participants with 11 documents. Only one of these documents was the original; the remaining 10 were generated using five distinct algorithms (the three best settings of SbFAKE and two versions of WE-FORGE8), with each algorithm contributing two fabricated documents per task. The primary objective of each task was for participants to identify and select the three documents they believed to be authentic. Because the five algorithms were compared head-to-head on the same tasks, and the results of these head-to-head comparisons are reported in this section, this is a fair comparison.
Furthermore, as part of this experiment, we recorded the duration of time participants spent on the task. In addition, we included four questions related to the documents they were working with to assess their attentiveness to the task.
Table 3.
Parameter value | Choice 1 | Choice 2 | Choice 3 | Total
Original | 0.085 | 0.06 | 0.1 | 0.082
WE-FORGE[I] | 0.24 | 0.2 | 0.135 | 0.192
WE-FORGE[E] | 0.175 | 0.165 | 0.14 | 0.160
SbFAKE[I] | 0.145 | 0.165 | 0.195 | 0.168
SbFAKE[E]-61 | 0.22 | 0.215 | 0.175 | 0.203
SbFAKE[E]-01 | 0.135 | 0.195 | 0.255 | 0.195
Table 3. The Experiment Results for Documents from the Chemistry Field
Table 4.
Parameter value | Choice 1 | Choice 2 | Choice 3 | Total
Original | 0.15 | 0.07 | 0.095 | 0.105
WE-FORGE[I] | 0.125 | 0.15 | 0.18 | 0.152
WE-FORGE[E] | 0.17 | 0.2 | 0.18 | 0.183
SbFAKE[I] | 0.22 | 0.18 | 0.16 | 0.187
SbFAKE[E]-61 | 0.17 | 0.195 | 0.185 | 0.183
SbFAKE[E]-01 | 0.165 | 0.205 | 0.2 | 0.190
Table 4. The Experiment Results for Documents from the Computer Science Field
Table 3 shows the results on the Chemistry dataset, while Table 4 shows the results on the Computer Science data.
Results on Chemistry documents. Table 3 shows the results. Columns “Choice 1,” “Choice 2,” “Choice 3,” and “Total” are interpreted in the same way as in Table 2. We see that if only the top choice is considered, WE-FORGE[I] does the best and beats all versions of SbFAKE. But if we consider the top 3, as was done in the WE-FORGE paper [1] and as we did when tuning hyperparameters in Table 2, then SbFAKE[E]-61 is the top performer.
Results on Computer Science documents. Table 4 shows the results on the CS dataset. Here, we see that all three versions of SbFAKE outperform both versions of WE-FORGE. In fact, WE-FORGE[I] (which was the top performer w.r.t. Chemistry documents) is the worst performer. All these results were statistically significant at the \(p \lt 0.001\) level.
Results on combined documents. Table 5 presents the aggregated outcomes for documents from both datasets. We see that all versions of SbFAKE beat both versions of WE-FORGE, that SbFAKE[E] is the best, and that the surprisal intervals of [0,1] and [0.6,1] give essentially the same overall performance. The two pairwise comparisons of WE-FORGE with SbFAKE[E] and SbFAKE[I] show that WE-FORGE exhibits weaker performance in a statistically significant manner (\(p \lt 10^{-10}\)). The claim that SbFAKE[E] is better than SbFAKE[I] is also statistically significant (\(p \lt 10^{-10}\)).
Table 5.
Parameter value | Choice 1 | Choice 2 | Choice 3 | Total
Original | 0.1175 | 0.065 | 0.0975 | 0.093
WE-FORGE[I] | 0.1825 | 0.175 | 0.1575 | 0.172
WE-FORGE[E] | 0.1725 | 0.1825 | 0.16 | 0.172
SbFAKE[I] | 0.1825 | 0.1725 | 0.1775 | 0.178
SbFAKE[E]-61 | 0.195 | 0.205 | 0.18 | 0.193
SbFAKE[E]-01 | 0.15 | 0.2 | 0.2275 | 0.193
Table 5. The Experimental Results for Documents from Both Fields

6.3 Impact of Surprisal Scores

We also explored three hypotheses relating to surprisal scores.
We first define the deception rate of a fake document as the percentage of human subjects who chose that document as one of their top 3 choices for the real document. For instance, if a fake document d was placed in the top 3 by 10 out of 40 human subjects in the two experiments combined, then the deception rate of that document is 10/40 = 25%. A higher deception rate for a document suggests that it has higher efficacy in achieving deception.
Choosing to Replace Low Surprisal Score Words Achieves Higher Deception. Suppose d is the original document and f is a fake version of it. We defined the average surprisal score \(AS(f,d)\) to be the average of the average surprisal scores of occurrences of words in d that were replaced to generate f. Our hypothesis was that as \(AS(f,d)\) increased, the deception rate would go down.
Figure 6 shows the result. We see that an increase in \(AS(f,d)\) increases deception up to a point (surprisal score around 15), but any further increase causes the deception rate to go down. This suggests that deception is not achieved when the words chosen for replacement are very unsurprising, nor when they are too surprising. The “sweet spot” is somewhere in the middle, with surprisal scores around 15.
Fig. 6.
Fig. 6. Deception rate of fake documents selected in top-3 (y-axis) as average surprisal score of words selected for replacement changes (x-axis).
Choosing Low Surprisal Score Words as the Replacements Achieves Higher Deception. This time, our hypothesis was that as the average surprisal score of the words chosen to be replacements increased (once they were substituted into the original document to generate a fake), the deception rate would go down. In other words, this time around, we looked at the replacement words (not the words being replaced) after substitution to create the fake.
Fig. 7.
Fig. 7. Deception rate of fake documents selected in top-3 (y-axis) as average surprisal score of replacement words changes (x-axis).
Figure 7 shows the result. We see that an increase in the average surprisal score of the replacement words (after substitution into the fake) increases deception up to a point (surprisal score somewhere between 16 and 18), but any further increase causes the deception rate to go down. This suggests that deception is not achieved when the words chosen as replacements are very unsurprising, nor when they are too surprising. The “sweet spot” is somewhere in the middle, with surprisal scores around 16–18.
Choosing Replacement Words whose Surprisal Score Is About the Same as the Replaced Words Achieves Higher Deception. This time, our hypothesis was that as the difference between the average surprisal score of the words being replaced and that of their replacements (once substituted into the original document to generate a fake) increased, the deception rate would go down.
Fig. 8.
Fig. 8. Deception rate of fake documents selected in top-3 (y-axis) as the difference between average surprisal score of replaced and replacement words changes (x-axis).
Figure 8 shows the result. We see that the hypothesis is in fact more or less valid. As long as the difference between the average surprisal score of the replaced words and the replacement words is less than or equal to two, the deception rate is high. After that, the deception rate drops precipitously.

6.4 Additional Hypotheses

We also examined some other hypotheses. Our experiments allowed us to monitor the amount of time the human subjects were active when responding to a task. We also periodically included attention checks.
Influence of time. Our hypothesis posits that the duration of time an individual invests in the experiment is proportional to the likelihood of successfully finding the original document (in the subject’s top 3 choices).
Fig. 9.
Fig. 9. Time spent vs. probability of choosing the right answer in the top 3 choices made.
Figure 9 shows the time spent by the subject on the experiment (x-axis) against the probability that one of his/her top-3 answers was correct. To our surprise, the hypothesis is not completely true. We see an interesting pattern. Subjects who spent too little or too much time were the ones who were most deceived. In contrast, subjects who spent a “modest” amount of time were more successful in finding the original document. Those who spent relatively little time may not have been as diligent as they should have been and perhaps were quickly fooled. Those who spent a lot of time were perhaps unable to uncover the real document despite strong effort. Those who spent a modest amount of time (two–three hours for computer science, over four hours for chemistry) had perhaps achieved the right tradeoff in terms of time spent vs. accuracy in finding the real document.
Influence of attentiveness. We performed four attention checks during the time subjects were participating in the experiment. Simple questions about the data were posed and were supposed to be answered correctly. We hypothesized that the subjects who answered these attention check queries correctly were more likely to find the real document than those who did not.
Fig. 10. Percentage of attention checks answered correctly (x-axis) vs. probability of choosing the right answer in the top 3 choices made.
Figure 10 shows the results. We see that the results are inconclusive, regardless of the dataset considered.

6.4.1 Score Type Hypothesis.

One of the most notable findings was the contrast between surprisal scores and ssidf scores. As Table 6 shows, surprisal scores produced significantly better outcomes in terms of deceiving adversaries: fakes generated using surprisal scores were chosen more frequently than those generated using ssidf scores.
Parameter value     Choice 1    Choice 2    Choice 3    Total
Original            0.055       0.038       0.035       0.043
Surprisal scores    0.730       0.610       0.473       0.603
Ssidf               0.220       0.353       0.493       0.355
Table 6. The Numbers Reflect the Probability of the Participant Choosing a Document Generated Using a Specific Type of Score

6.4.2 Model Type Hypothesis.

Turning to the model used to produce surprisal scores, Table 7 shows that fakes generated with scores from the LSTM-based model were preferred over those generated with GPT-2-based scores, at almost the same rate as in the previous hypothesis.
Parameter value     Choice 1    Choice 2    Choice 3    Total
Original            0.055       0.038       0.035       0.043
LSTM                0.720       0.675       0.490       0.628
GPT-2               0.225       0.288       0.475       0.355
Table 7. The Numbers Reflect the Probability of the Participant Choosing a Document Generated Using a Specific Model

6.4.3 Interval Hypothesis.

Table 8 reports the results for the surprisal-score intervals (thresholds) used in the optimization. The best option is to not exclude any words on the basis of their surprisal scores, i.e., to use the full interval [0.0, 1.0].
Parameter value     Choice 1    Choice 2    Choice 3    Total
Original            0.055       0.038       0.035       0.043
[0.0, 1.0]          0.343       0.395       0.500       0.413
[0.3, 0.7]          0.095       0.138       0.181       0.138
[0.0, 0.4]          0.233       0.108       0.060       0.133
[0.6, 1.0]          0.218       0.205       0.115       0.179
[0.4, 0.6]          0.058       0.118       0.108       0.094
Table 8. The Numbers Reflect the Probability of the Participant Choosing a Document Generated Using a Specific Interval in the Optimization

6.4.4 Optimization Type Hypothesis.

As for the optimization type itself, Table 9 shows that explicit optimization outperforms implicit optimization.
Parameter value     Choice 1    Choice 2    Choice 3    Total
Original            0.055       0.038       0.035       0.043
Explicit            0.525       0.498       0.575       0.540
Implicit            0.420       0.465       0.3675      0.355
Table 9. The Numbers Reflect the Probability of the Participant Choosing a Document Generated Using a Specific Optimization Type

6.4.5 Runtime Assessments.

Another experiment compared the runtime of explicit optimization vs. implicit optimization. The overall runtime is highly dependent on the original text, but the optimization step itself is around 40 times faster with implicit optimization than with explicit optimization.
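This kind of comparison can be reproduced with a simple timing harness along the following lines; solve_explicit and solve_implicit are hypothetical placeholders for the two optimization routines, not functions defined in this article.

import time

def mean_runtime(solver, *args, repeats=5):
    # Average wall-clock time (in seconds) of solver(*args) over several runs.
    runs = []
    for _ in range(repeats):
        start = time.perf_counter()
        solver(*args)
        runs.append(time.perf_counter() - start)
    return sum(runs) / len(runs)

# Hypothetical usage, assuming solve_explicit and solve_implicit exist:
# speedup = mean_runtime(solve_explicit, doc) / mean_runtime(solve_implicit, doc)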
Fig. 11. Variation in runtime of the implicit optimization algorithm. (a) shows how the runtime changes as we vary the number of words in the original document (x-axis) and as we vary the number of fake documents we wish to return. (b) shows how the runtime changes as we vary the number of nouns in the original document (x-axis) and as we vary the number of fake documents we wish to return.
We also checked the absolute runtimes of the implicit optimization. For this, we ran two experiments. In the first, we computed the runtime of the implicit algorithm when we varied the number of words in the original document (Figure 11(a)) and also varied the number of fakes desired. In the second, we computed the runtime of the implicit algorithm when we varied the number of nouns in the original document (Figure 11(b)) and also varied the number of fakes desired. Recall that in SbFAKE, we only replace nouns. The times reported are quite fast, just 1–2 seconds in the scenarios considered. But these times do not include the time to compute the surprisal scores.
Surprisal score computation is relatively expensive, as shown in Figure 12, and can take several minutes for long documents.
Regardless, the total time to generate a number of fake versions of even long documents is a matter of minutes—maybe 10 minutes maximum—which is deemed acceptable by potential users we have spoken with. In the worst case, the person tasked with creating the fakes merely pushes a “generate fakes” button and goes for a short coffee break or the like.
Fig. 12. Variation in runtime to compute surprisal scores of (a) all words in the document and (b) all nouns in the document.

7 Conclusion and Future Work

There is now growing interest in combating intellectual property theft through the use of fake data [10, 15, 33] and intentionally falsified information [30]. One strand of this general trend uses fake documents: we create a set of fake versions that are similar enough to the original to be credible, yet sufficiently different to be wrong [3, 17, 24, 36]. We develop a novel system called SbFAKE that makes use of surprisal, an information-theoretic measure widely used in research on human sentence understanding. Bringing together surprisal, natural language processing, and optimization, we show that our novel method for generating a set of fake versions of a document significantly outperforms the state-of-the-art fake document system, WE-FORGE [1].
The SbFAKE framework and method we have developed is, to the best of our knowledge, the first framework that combines the power of psychology with NLP and optimization methods to generate fake documents that are more likely to deceive an IP thief than the best prior work. More generally, we have established that drawing insights from cognitive science research on human language processing improves computational methods for deceptive text generation, suggesting the potential for cognitive science research on human language understanding to contribute novel solutions to other problems. Our technical approach shows how any surprisal scoring method can be used to automatically construct a bilevel optimization problem whose solution corresponds to a desired set of n fake documents. We further show that this bilevel optimization problem can be transformed into an equivalent single-level optimization problem that can be solved efficiently to generate effective fakes. This approach can be improved over the coming years, e.g., by developing more sophisticated surprisal models and by enhancing scalability so that batches of fake documents can be generated (i.e., n fake documents generated simultaneously for each of a batch of k original documents).
A first direction for future work is to develop specialized data structures to scale the speed with which we can compute the surprisal scores of all words (or all nouns) in the original document. Currently, computing the surprisal scores of all the words in the document dominates the runtime of the SbFAKE framework.
A second direction for future work is to look at the relationship between surprisal scores and large language models (LLMs). In an LLM, we compute the probability that a concept c (or a word) will appear in a given context. Text is generated by inserting high probability words in a given context, e.g., the partial sentence “He enjoyed his trip to New...” would likely be completed with the word “York” when an LLM is used. One interesting hypothesis is that modern LLMs might be better than other models at suggesting plausible replacement words because they use a much larger context than other kinds of language models and encode much more world knowledge because they are trained on vastly more text. In particular, this suggests that we can leverage LLMs to better identify replacement concepts.
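As a rough illustration of this idea, the sketch below ranks candidate replacement words by the probability a causal language model assigns to them in a given left context, as in the “New York” example above. GPT-2 stands in here for a larger LLM, and scoring only each candidate's first sub-token is a simplifying assumption; this is not part of the current SbFAKE implementation.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def rank_candidates(context, candidates):
    # Rank candidate continuations by the model's next-token probability
    # given the left context. Only each candidate's first sub-token is
    # scored, which is a simplification.
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    scored = []
    for cand in candidates:
        first_id = tokenizer(" " + cand).input_ids[0]  # leading space: GPT-2 BPE convention
        scored.append((cand, probs[first_id].item()))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# e.g., rank_candidates("He enjoyed his trip to New", ["York", "Delhi", "Jersey"])
# would be expected to rank "York" highest.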
We plan to study these issues in further detail in future work.

Footnotes

3
Human subject experimentation was conducted with IRB approval.
4
A natural question to ask is: How would the legitimate user identify the real document from a sea of fakes? To achieve this, FORGE proposed a method based on Message Authentication Codes (MACs). Each document, real or fake, has a public key embedded in it. A user’s private key, used in conjunction with the public key, allows the authorized user to verify whether a specific document is real or fake. As this problem has already been solved, we do not address it further here, but refer the reader to Reference [8] for details.
5
Our implementation used k-means clustering, but this can be swapped out with any other clustering method if so desired.
6
In our experiments, we used the mean.
7
All human subjects were recruited via Amazon Mechanical Turk. We selected participants from the U.S. to limit the number of variables in our study and to ensure a background suitable for understanding the documents provided in the surveys. Subjects were required to have a graduate degree in the US for the Chemistry survey, and to report Software & IT Services as their employment industry for the Computer Science survey. We used attention checks and monitored the amount of time the subjects spent on each task of the survey to ensure high-quality results. All experiments were conducted under IRB authorization.
8
The authors of WE-FORGE were kind enough to generate the documents for us to use in the experiments.

References

[1]
Almas Abdibayev, Dongkai Chen, Haipeng Chen, Deepti Poluru, and V. S. Subrahmanian. 2021. Using word embeddings to deter intellectual property theft through automated generation of fake documents. ACM Trans. Manag. Inf. Syst. 12, 2 (2021), 1–22.
[2]
Christine S. Ankener, Mirjana Sekicki, and Maria Staudte. 2018. The influence of visual uncertainty on word surprisal and processing effort. Front. Psychol. 9 (2018), 2387.
[3]
Yu Aoike, Masaki Kamizono, Masashi Eto, Noriko Matsumoto, and Norihiko Yoshida. 2021. Decoy-file-based deception without usability degradation. In IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE’21). IEEE, 1–7.
[4]
Shohini Bhattasali and Philip Resnik. 2021. Using surprisal and fMRI to map the neural bases of broad and local contextual prediction during natural language comprehension. In Findings of the Association for Computational Linguistics (ACL-IJCNLP’21). Association for Computational Linguistics, 3786–3798.
[5]
Marisa Ferrara Boston, John Hale, Reinhold Kliegl, Umesh Patil, and Shravan Vasishth. 2008. Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam sentence corpus. J. Eye Movem. Res. 2, 1 (2008).
[6]
Jonathan R. Brennan, Edward P. Stabler, Sarah E. Van Wagenen, Wen-Ming Luh, and John T. Hale. 2016. Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain Lang. 157 (2016), 81–94.
[7]
Christian Brodbeck, Shohini Bhattasali, Aura A. L. Cruz Heredia, Philip Resnik, Jonathan Z. Simon, and Ellen Lau. 2022. Parallel processing in speech perception with local and global representations of linguistic context. ELife 11 (2022), e72056.
[8]
Tanmoy Chakraborty, Sushil Jajodia, Jonathan Katz, Antonio Picariello, Giancarlo Sperli, and V. S. Subrahmanian. 2019. A fake online repository generation engine for cyber deception. IEEE Trans. Depend. Sec. Comput. 18, 2 (2019), 518–533.
[9]
Huashan Chen, Jin-Hee Cho, and Shouhuai Xu. 2018. Quantifying the security effectiveness of firewalls and DMZs. In 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. 1–11.
[10]
Haipeng Chen, Sushil Jajodia, Jing Liu, Noseong Park, Vadim Sokolov, and VS Subrahmanian. 2019. FakeTables: Using GANs to generate functional dependency preserving tables with bounded real data. In International Joint Conference on Artificial Intelligence. 2074–2080.
[11]
Vera Demberg and Frank Keller. 2008. Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition 109, 2 (2008), 193–210.
[12]
Vera Demberg, Asad Sayeed, Philip Gorinski, and Nikolaos Engonopoulos. 2012. Syntactic surprisal affects spoken word duration in conversational contexts. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. 356–367.
[13]
Peter W. Donhauser and Sylvain Baillet. 2020. Two distinct neural timescales for predictive speech processing. Neuron 105, 2 (2020), 385–393.
[14]
David Embick and David Poeppel. 2015. Towards a computational (ist) neurobiology of language: Correlational, integrated and explanatory neurolinguistics. Lang., Cogn. Neurosci. 30, 4 (2015), 357–366.
[15]
Mohammad Esmaeilpour, Nourhene Chaalia, Adel Abusitta, François-Xavier Devailly, Wissem Maazoun, and Patrick Cardinal. 2022. Bi-discriminator GAN for tabular data synthesis. Pattern Recogn. Lett. 159 (2022), 204–210.
[16]
Absalom E. Ezugwu, Abiodun M. Ikotun, Olaide O. Oyelade, Laith Abualigah, Jeffery O. Agushaka, Christopher I. Eke, and Andronicus A. Akinyelu. 2022. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects. Eng. Applic. Artif. Intell. 110 (2022), 104743.
[17]
Yun Feng, Baoxu Liu, Yue Zhang, Jinli Zhang, Chaoge Liu, and Qixu Liu. 2021. Automated honey document generation using genetic algorithm. In 16th International Conference on Wireless Algorithms, Systems, and Applications (WASA’21). Springer, 20–28.
[18]
Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL’18). 10–18.
[19]
John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In 2nd Meeting of the North American Chapter of the Association for Computational Linguistics.
[20]
John Hale. 2016. Information-theoretical complexity metrics. Lang. Ling. Compass 10, 9 (2016), 397–412.
[21]
Qian Han, Cristian Molinaro, Antonio Picariello, Giancarlo Sperli, Venkatramanan S. Subrahmanian, and Yanhai Xiong. 2021. Generating fake documents using probabilistic logic graphs. IEEE Trans. Depend. Sec. Comput. 19, 4 (2021), 2428–2441.
[22]
John M. Henderson, Wonil Choi, Matthew W. Lowder, and Fernanda Ferreira. 2016. Language structure in the brain: A fixation-related fMRI study of syntactic surprisal in reading. Neuroimage 132 (2016), 293–300.
[23]
Thomas Himmler. 2009. Method for producing 2,5-dimethylphenyl acetic acid. U.S. Patent US7629476B2, Dec. 2009.
[24]
Yibo Hu, Yu Lin, Erick Skorupa Parolin, Latifur Khan, and Kevin Hamlen. 2022. Controllable fake document infilling for cyber deception. arXiv preprint arXiv:2210.09917 (2022).
[25]
Daniel Kahneman. 1973. Attention and Effort. Prentice-Hall.
[26]
Snow Kang, Cristian Molinaro, Andrea Pugliese, and V. S. Subrahmanian. 2021. Randomized generation of adversary-aware fake knowledge graphs to combat intellectual property theft. In AAAI Conference on Artificial Intelligence, Vol. 35. 4155–4163.
[27]
Arun Kumar, Ananya Bandyopadhyay, H. Bhoomika, Ishan Singhania, and Krupal Shah. 2018. Analysis of network traffic and security through log aggregation. Int. J. Comput. Sci. Inf. Secur. 16, 6 (2018).
[28]
Siwei Lai, Kang Liu, Shizhu He, and Jun Zhao. 2016. How to generate a good word embedding. IEEE Intell. Syst. 31, 6 (2016), 5–14.
[29]
Roger Levy. 2013. Memory and surprisal in human sentence comprehension. Sent. Process. 78 (2013), 142–195.
[30]
Huanruo Li, Yunfei Guo, Penghao Sun, Yawen Wang, and Shumin Huo. 2022. An optimal defensive deception framework for the container-based cloud with deep reinforcement learning. IET Inf. Secur. 16, 3 (2022), 178–192.
[31]
Jiaxuan Li and Allyson Ettinger. 2023. Heuristic interpretation as rational inference: A computational model of the N400 and P600 in language processing. Cognition 233 (2023), 105359.
[32]
Qi Liu, Matt J. Kusner, and Phil Blunsom. 2020. A survey on contextual embeddings. arXiv preprint arXiv:2003.07278 (2020).
[33]
Tongyu Liu, Ju Fan, Guoliang Li, Nan Tang, and Xiaoyong Du. 2023. Tabular data synthesis with generative adversarial networks: Design space and optimizations. VLDB J. (2023), 1–26.
[34]
James A. Michaelov, Megan D. Bardolph, Cyma K. Van Petten, Benjamin K. Bergen, and Seana Coulson. 2023. Strong prediction: Language model surprisal explains multiple N400 effects. Cognitive Computational Neuroscience of Language (2023), 1–71.
[35]
Marcin Nawrocki, Matthias Wählisch, Thomas C. Schmidt, Christian Keil, and Jochen Schönfelder. 2016. A survey on honeypot software and data analysis. arXiv preprint arXiv:1608.06249 (2016).
[36]
Erick Skorupa Parolin, Yibo Hu, Latifur Khan, Patrick T. Brandt, Javier Osorio, and Vito D’Orazio. 2022. Confli-T5: An AutoPrompt pipeline for conflict related text augmentation. In IEEE International Conference on Big Data (Big Data’22). IEEE, 1906–1913.
[37]
Milena Rabovsky, Steven S. Hansen, and James L. McClelland. 2018. Modelling the N400 brain potential as change in a probabilistic representation of meaning. Nat. Hum. Behav. 2, 9 (2018), 693–705.
[38]
Brian Roark, Asaf Bachrach, Carlos Cardenas, and Christophe Pallier. 2009. Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Conference on Empirical Methods in Natural Language Processing. 324–333.
[39]
Cory Shain, Idan Asher Blank, Marten van Schijndel, William Schuler, and Evelina Fedorenko. 2020. fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychol. 138 (2020), 107307.
[40]
Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Philip Levy. 2024. Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences 121, 10 (2024).
[41]
Nathaniel J. Smith and Roger Levy. 2008. Optimal processing times in reading: A formal model and empirical investigation. In Annual Meeting of the Cognitive Science Society, Vol. 30.
[42]
Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition 128, 3 (2013), 302–319.
[43]
Lance Spitzner. 2003. Honeypots: Catching the insider threat. In 19th Annual Computer Security Applications Conference. IEEE, 170–179.
[44]
Michael K. Tanenhaus. 2004. On-line sentence processing: Past, present, and future. In The On-line Study of Sentence Comprehension. Psychology Press, 371–394.
[45]
Marten Van Schijndel and William Schuler. 2015. Hierarchic syntax improves reading time prediction. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 1597–1605.
[46]
Nikos Virvilis, Bart Vanautgaerden, and Oscar Serrano Serrano. 2014. Changing the game: The art of deceiving sophisticated attackers. In 6th International Conference On Cyber Conflict (CyCon’14). IEEE, 87–97.
[47]
Na Wang, Junsong Fu, Bharat K. Bhargava, and Jiwen Zeng. 2018. Efficient retrieval over documents encrypted by attributes in cloud computing. IEEE Trans. Inf. Forens. Secur. 13, 10 (2018), 2653–2667.
[48]
Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the predictive power of neural language models for human real-time comprehension behavior. arXiv preprint arXiv:2006.01912 (2020).
[49]
Yanhai Xiong, Giridhar Kaushik Ramachandran, Rajesh Ganesan, Sushil Jajodia, and V. S. Subrahmanian. 2020. Generating realistic fake equations in order to reduce intellectual property theft. IEEE Trans. Depend. Sec. Comput. 19, 3 (2020), 1434–1445.
[50]
Rui Xu and Donald Wunsch. 2005. Survey of clustering algorithms. IEEE Trans. Neural Netw. 16, 3 (2005), 645–678.
[51]
Jim Yuill, Mike Zappe, Dorothy Denning, and Fred Feer. 2004. Honeyfiles: Deceptive files for intrusion detection. In 5th Annual IEEE SMC Information Assurance Workshop. IEEE, 116–122.
