Article

Distance-Based Knowledge Measure for Intuitionistic Fuzzy Sets with Its Application in Decision Making

1 Graduate School, Air Force Engineering University, Xi’an 710051, China
2 School of Air and Missile Defense, Air Force Engineering University, Xi’an 710051, China
* Authors to whom correspondence should be addressed.
Entropy 2021, 23(9), 1119; https://doi.org/10.3390/e23091119
Submission received: 4 August 2021 / Revised: 23 August 2021 / Accepted: 25 August 2021 / Published: 28 August 2021
(This article belongs to the Special Issue Recent Progress of Deng Entropy)

Abstract: Much attention has been paid to constructing an applicable knowledge measure or uncertainty measure for Atanassov’s intuitionistic fuzzy sets (AIFSs). However, many of these measures were developed from intuitionistic fuzzy entropy, which cannot really reflect the knowledge amount associated with an AIFS. Some knowledge measures were constructed based on the distinction between an AIFS and its complementary set, which may lead to information loss in decision making. In this paper, the knowledge amount of an AIFS is quantified by calculating the distance from the AIFS to the AIFS with maximum uncertainty. Axiomatic properties for the definition of knowledge measure are extended to a more general level. The new knowledge measure is then developed based on an intuitionistic fuzzy distance measure. The properties of the proposed distance-based knowledge measure are investigated via mathematical analysis and numerical examples. The proposed knowledge measure is finally applied to solve multi-attribute group decision-making (MAGDM) problems with intuitionistic fuzzy information, and the new MAGDM method is used to evaluate the threat level of malicious code. Experimental results in malicious code threat evaluation demonstrate the effectiveness and validity of the proposed method.

1. Introduction

Atanassov [1,2] developed the concept of the intuitionistic fuzzy set on the basis of Zadeh’s fuzzy set [3]. Atanassov’s intuitionistic fuzzy sets (AIFSs) relax the condition that the non-membership degree and the membership degree sum to 1. AIFSs are a generalization of fuzzy sets and are themselves a particular case of other types of generalized fuzzy sets [4,5]. Moreover, AIFSs are identical to interval-valued fuzzy sets (IVFSs) from a mathematical perspective [6]. In an AIFS, the hesitation degree is the difference between one and the sum of the membership and non-membership grades. The hesitation degree contributes much serviceability to the depiction of uncertain information. Researchers have paid much attention to intuitionistic fuzzy set theory owing to its advantages in modeling uncertain information systems [7]. The theory of intuitionistic fuzzy sets has been successfully applied in many fields, including uncertainty reasoning [8] and decision making [9,10]. The connection between AIFSs and other uncertainty theories is also attracting increasing interest [11,12,13,14,15,16,17,18].
Zadeh [3] first introduced the notion of entropy to fuzzy sets to measure the uncertainty or fuzziness in a fuzzy set. The notion of fuzzy entropy defined for fuzzy sets is partially similar to the concept of Shannon entropy [19], which was initially defined in probability theory. De Luca and Termini [20] developed the axiomatic definition of entropy and then proposed a kind of non-probabilistic fuzzy entropy. Later, Burillo and Bustince [21] first axiomatically defined the measure of intuitionistic entropy, which was determined merely by the hesitation degree. Unlike the entropy measures created by Burillo and Bustince [21], the entropy measure for intuitionistic fuzzy sets developed by Szmidt and Kacprzyk [22] was defined based on the ratio of two distance values. An axiomatic definition of intuitionistic fuzzy entropy was also presented by Szmidt and Kacprzyk [22]. Following their work, many authors [23,24,25,26] have done a great deal of work concentrating on the definition of entropy measures. Some research has also focused on the entropy of AIFSs and its application in the evaluation of attribute weighting vectors [9,10]. It has been pointed out by Szmidt et al. [27] that an entropy measure cannot capture all the uncertainty hidden in an AIFS. Thus, it may be difficult to develop a satisfactory uncertainty measure for AIFSs by an entropy measure alone. The difference between entropy and hesitation in measuring the uncertainty of AIFSs has been pointed out by Pal et al. [28]. In [28], it was claimed that the combination of entropy and hesitation may furnish an effective way to measure the total uncertainty hidden in an AIFS.
Generally, a knowledge measure is related to the useful information provided by an AIFS. From the perspective of information theory, much information indicates a great amount of knowledge, which is helpful for decision making. Therefore, the notion of knowledge measure can be regarded as the complementary concept of a total uncertainty measure, rather than of an entropy measure. This means that less total uncertainty always accompanies a greater amount of knowledge. With the purpose of making an evident distinction between types of intuitionistic fuzzy information, Szmidt et al. [27] took both intuitionistic fuzzy entropy and hesitation into consideration to develop a knowledge measure for AIFSs, in which the intuitionistic fuzzy entropy was defined by quantifying the ratio between the nearer and farther distances. This knowledge measure has been used to estimate the weight of each attribute to solve multi-attribute decision-making (MADM) problems [29]. Nguyen [30] developed a novel knowledge measure by measuring the distance from an AIFS to the most uncertain AIFS. It seems that this knowledge measure can describe fuzziness and intuitionism in AIFSs well. However, the use of the normalized Euclidean distance brings another problem, namely that the relation between fuzziness and knowledge cannot be completely reflected. Recently, Guo [29] put forward a new axiomatic definition for the knowledge measure of AIFSs. A new and highly robust model was introduced in [31] to quantify the knowledge amount of an AIFS. Built on measuring the difference between an AIFS and its complement, the new model proposed by Guo [31] has been widely used to define entropy measures for AIFSs [32,33]. However, the combination of the two parts in Guo’s model [31] lacks a clear physical interpretation. Several years ago, Das et al. [34] performed a comprehensive review of axiomatic definitions of information measures of AIFSs and investigated their relationships, covering entropy measures, knowledge measures, distance measures, and similarity measures.
The above analysis demonstrates that the topic of knowledge measures for AIFSs is still open for debate, and it continues to attract considerable attention. Most research on knowledge and uncertainty measures of AIFSs focuses on the difference between an AIFS and its complement. Only a few knowledge measures are constructed by measuring the distinction between an AIFS and the AIFS with maximum or minimum uncertainty. Although Nguyen [30] opened up this new way of studying knowledge measures of AIFSs, further exploration is needed to improve this kind of knowledge measure and realize a desirable knowledge measure for AIFSs. This motivates us to present a new method to measure the knowledge of AIFSs based on a novel intuitionistic fuzzy distance, which is defined based on the transformation from an intuitionistic fuzzy value (IFV) to an interval value. An axiomatic definition of the knowledge measure of AIFSs will also be formulated from a more general point of view. Moreover, we will further explore the proposed knowledge measure’s properties and compare it with other measures based on numerical examples to demonstrate its performance. We will then apply it to the problem of intuitionistic fuzzy multi-attribute group decision making (MAGDM).
The remainder of this study is structured as follows. Several concepts regarding AIFSs are explained in Section 2. In Section 3, a new type of distance measure for AIFSs is developed, followed by the proposal and discussion of the distance-based knowledge measure in Section 4. In Section 5, the proposed distance and knowledge measures are used to develop a new method for solving MAGDM problems under intuitionistic fuzzy conditions. An application of the new MAGDM method is presented in Section 6 to illustrate its performance. Conclusions are presented in Section 7.

2. Preliminaries

Here, we briefly recount some background knowledge about AIFSs for ease of subsequent exposition.
Definition 1.
Letting a non-empty set X = {x_1, x_2, …, x_n} be the universe of discourse, a fuzzy set A in X is then defined as follows [3]:
A = \{ \langle x, \mu_A(x) \rangle \mid x \in X \} \tag{1}
where μ_A: X → [0, 1] is the membership degree.
Definition 2.
The intuitionistic fuzzy set B in X = {x_1, x_2, …, x_n} as defined by Atanassov can be expressed as [1]:
B = \{ \langle x, \mu_B(x), v_B(x) \rangle \mid x \in X \} \tag{2}
where μ_B: X → [0, 1] and v_B: X → [0, 1] are the membership degree and non-membership degree, respectively, with the condition
0 \le \mu_B(x) + v_B(x) \le 1 \tag{3}
The hesitation degree of AIFS B defined in X is denoted π_B. For each x ∈ X, the hesitation degree is calculated by the expression that follows:
\pi_B(x) = 1 - \mu_B(x) - v_B(x) \tag{4}
Apparently, we can obtain π_B(x) ∈ [0, 1], ∀x ∈ X. π_B(x) is also referred to as the intuitionistic index of x to B. A greater π_B(x) indicates more vagueness. It is apparent that when π_B(x) = 0, ∀x ∈ X, the AIFS degenerates into an ordinary fuzzy set.
For two AIFSs A and B defined in X, the following relations were defined in [1]: A ⊆ B if and only if μ_A(x) ≤ μ_B(x) and v_A(x) ≥ v_B(x) for each x ∈ X. The complement of B is denoted B^C [1], and can be obtained by B^C = { <x, v_B(x), μ_B(x)> | x ∈ X }.
It has been proved that AIFSs and IVFSs are mathematically identical [4,6], and they can be converted to each other. Thus, for an AIFS B defined in X and x ∈ X, we can use an interval [μ_B(x), 1 − v_B(x)] to express the membership and non-membership grades of x with respect to B. We can see this as the interval-valued interpretation of an AIFS, in which μ_B(x) and 1 − v_B(x) represent the lower and upper bounds of the membership degree, respectively. Apparently, [μ_B(x), 1 − v_B(x)] is a valid interval, since μ_B(x) ≤ 1 − v_B(x) always holds when μ_B(x) + v_B(x) ≤ 1. The correspondence between AIFSs and IVFSs holds only from the mathematical point of view; if we explore their conceptual explanation and practical application, they may differ in the description of uncertainty [9,35].
In what follows, AIFSs(X) is used to denote the set consisting of all AIFSs defined in X. Generally, the couple <μ_B(x), v_B(x)> is also called an IFV for clarity.
Definition 3.
For two IFVs a = <μ_a, v_a> and b = <μ_b, v_b>, the partial order between them is defined as a ⊆ b ⇔ μ_a ≤ μ_b, v_a ≥ v_b [1].
For all IFVs, based on this partial order, we can obtain the smallest IFV as <0, 1>, denoted by 0, and the largest IFV as <1, 0>, denoted by 1.
To rank multiple IFVs with a linear order, Chen and Tan [36] defined the score function of an IFV as S(a) = μ_a − v_a. Following the concept of the score function, Hong and Choi [37] developed an accuracy function H(a) = μ_a + v_a to depict the accuracy of an IFV a = <μ_a, v_a>. Xu [38] then proposed a ranking-order relation between two IFVs a and b, which can be equivalently stated as follows.
For their score functions, if S ( a ) is greater than S ( b ) , then a is greater than b , and vice versa.
If S ( a ) and S ( b ) are equal, we consider the following cases: (1) if H ( a ) is equal to H ( b ) , then a and b are equal; and (2) if H ( a ) is greater than H ( b ) , then a is greater than b ; and vice versa.
Based on the above order relation, a linear order over multiple IFVs can be obtained.
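Xu’s ranking rule above can be sketched in Python; the tuple representation of an IFV and the function names are our own illustration:

```python
def score(a):
    """Score function S(a) = mu - v (Chen and Tan)."""
    mu, v = a
    return mu - v

def accuracy(a):
    """Accuracy function H(a) = mu + v (Hong and Choi)."""
    mu, v = a
    return mu + v

def rank_key(a):
    """Xu's linear order: compare scores first, break ties by accuracy."""
    return (score(a), accuracy(a))

# <0.5, 0.3> and <0.4, 0.2> have equal scores (0.2),
# so the tie is broken by the higher accuracy of <0.5, 0.3>.
ifvs = [(0.4, 0.2), (0.5, 0.3), (0.6, 0.1)]
ranked = sorted(ifvs, key=rank_key, reverse=True)
```

Sorting by the pair (score, accuracy) reproduces the two-step comparison: the accuracy component only matters when the scores coincide.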
We know that similarity and distance measures are important in the research of fuzzy set theory [39]. Likewise, the construction of similarity and distance measures for AIFSs plays an important role in AIFS theory [23,40,41,42,43,44,45,46,47,48,49,50], and such measures are helpful for the comparison of intuitionistic fuzzy information [24,25].
Definition 4.
A mapping D: AIFSs(X) × AIFSs(X) → [0, 1] is called a distance measure between two AIFSs A and B defined in X if D(A, B) satisfies the following properties [23]:
  • (DP1) 0 ≤ D(A, B) ≤ 1;
  • (DP2) D(A, B) = 0 if and only if A = B;
  • (DP3) D(A, B) = D(B, A);
  • (DP4) If A ⊆ B ⊆ C, then D(A, B) ≤ D(A, C) and D(B, C) ≤ D(A, C).
Definition 5.
A mapping S: AIFSs(X) × AIFSs(X) → [0, 1] is called a similarity measure between two AIFSs A and B defined in X if S(A, B) satisfies the following properties [40]:
  • (SP1) 0 ≤ S(A, B) ≤ 1;
  • (SP2) S(A, B) = 1 if and only if A = B;
  • (SP3) S(A, B) = S(B, A);
  • (SP4) If A ⊆ B ⊆ C, then S(A, B) ≥ S(A, C) and S(B, C) ≥ S(A, C).
Similarity and distance measures are usually regarded as a pair of dual concepts. Thus, distance measures can be used to define similarity measures, and vice versa.
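As a minimal sketch of this duality, any distance measure satisfying (DP1)–(DP4) yields a similarity measure via S = 1 − D; the construction below and all names in it are our own illustration:

```python
def similarity_from_distance(distance):
    """Turn a distance measure D into its dual similarity S = 1 - D.

    If D satisfies (DP1)-(DP4), then S = 1 - D satisfies (SP1)-(SP4):
    boundedness, S(A,B) = 1 iff A = B, symmetry, and monotonicity
    under set inclusion.
    """
    return lambda A, B: 1.0 - distance(A, B)

# Toy distance on single IFVs (normalized Hamming on one element).
def d_hamming(a, b):
    (ma, va), (mb, vb) = a, b
    pa, pb = 1 - ma - va, 1 - mb - vb  # hesitation degrees
    return 0.5 * (abs(ma - mb) + abs(va - vb) + abs(pa - pb))

s = similarity_from_distance(d_hamming)
```

Identical IFVs then attain similarity 1, while the extreme pair <1, 0> and <0, 1> attains similarity 0, matching (SP1) and (SP2).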

3. New Intuitionistic Fuzzy Distance Measure

In past years, numerous similarity and distance measures have been advanced [7,39,45]. However, some may lead to unreasonable results in practical applications [7], and some newly defined distance/similarity measures have complicated expressions [39,45], which makes them unsuitable for constructing knowledge measures for AIFSs. Thus, it is necessary to define a desirable distance measure to assist us in developing a new knowledge measure. Here, we propose a new distance measure for AIFSs by borrowing a distance measure for interval values. It has been claimed that an AIFS can be represented in the form of an interval-valued fuzzy set [5]. Based on this relation, an intuitionistic fuzzy distance measure can be developed based on interval comparison.

3.1. Interval-Comparison-Based Distance Measure for AIFSs

An AIFS B = { <x, μ_B(x), v_B(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n} indicates that the membership degree of x_i to B is uncertain, with lower and upper bounds of μ_B(x_i) and 1 − v_B(x_i), respectively. That is to say, the membership grade of x_i to B lies in an interval [μ_B(x_i), 1 − v_B(x_i)], i = 1, 2, …, n. Thus, we can measure the distance between AIFSs A and B defined in X = {x_1, x_2, …, x_n} by comparing the interval values [μ_A(x_i), 1 − v_A(x_i)] and [μ_B(x_i), 1 − v_B(x_i)], i = 1, 2, …, n.
In [51], the authors reviewed distances between interval values. They pointed out that the distance measure d_TD proposed in [52] is not a metric distance, since for an interval value a = [a_1, a_2], d_TD(a, a) = 0 does not always hold. Thus, Irpino and Verde [51] proposed a Wasserstein distance from the point of view of a one-dimensional uniform distribution, rather than the two-dimensional uniform distribution adopted in [52]. The following definition gives the Wasserstein distance measure between interval values.
Definition 6.
Given two interval values a = [a_1, a_2] and b = [b_1, b_2] with a, b ⊆ [0, 1], the distance between them is defined as [51]:
d_I(a, b) = \sqrt{\left(\frac{a_1 + a_2}{2} - \frac{b_1 + b_2}{2}\right)^2 + \frac{1}{3}\left(\frac{a_2 - a_1}{2} - \frac{b_2 - b_1}{2}\right)^2} \tag{5}
Thus, for each i ∈ {1, 2, …, n}, and for A_{x_i} = <μ_A(x_i), v_A(x_i)> and B_{x_i} = <μ_B(x_i), v_B(x_i)>, the distance between their corresponding interval values [μ_A(x_i), 1 − v_A(x_i)] and [μ_B(x_i), 1 − v_B(x_i)] can be expressed by
d_I(A_{x_i}, B_{x_i}) = \sqrt{\left(\frac{\mu_A(x_i) + 1 - v_A(x_i)}{2} - \frac{\mu_B(x_i) + 1 - v_B(x_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{1 - v_A(x_i) - \mu_A(x_i)}{2} - \frac{1 - v_B(x_i) - \mu_B(x_i)}{2}\right)^2} \tag{6}
which can also be expressed as
d_I(A_{x_i}, B_{x_i}) = \sqrt{\left(\frac{\mu_A(x_i) - v_A(x_i)}{2} - \frac{\mu_B(x_i) - v_B(x_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{\mu_A(x_i) + v_A(x_i)}{2} - \frac{\mu_B(x_i) + v_B(x_i)}{2}\right)^2} \tag{7}
or
d_I(A_{x_i}, B_{x_i}) = \sqrt{\left(\frac{\mu_A(x_i) - v_A(x_i)}{2} - \frac{\mu_B(x_i) - v_B(x_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{\pi_A(x_i)}{2} - \frac{\pi_B(x_i)}{2}\right)^2} \tag{8}
Since all the parameters μ_A(x_i), v_A(x_i), π_A(x_i), μ_B(x_i), v_B(x_i), and π_B(x_i) take values in the interval [0, 1], we have −1 ≤ μ_A(x_i) − v_A(x_i) ≤ 1 and −1 ≤ μ_B(x_i) − v_B(x_i) ≤ 1. The maximum value of d_I(A_{x_i}, B_{x_i}) is then 1, which is attained when A_{x_i} = <0, 1>, B_{x_i} = <1, 0> or A_{x_i} = <1, 0>, B_{x_i} = <0, 1>. Thus, the relation 0 ≤ d_I(A_{x_i}, B_{x_i}) ≤ 1 can be obtained.
According to the analysis above, we are able to define a new distance measure for Atanassov’s intuitionistic fuzzy sets. Given two AIFSs A = { <x, μ_A(x), v_A(x)> | x ∈ X } and B = { <x, μ_B(x), v_B(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}, the distance between them is calculated by the expression that follows:
D_I(A, B) = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\left(\frac{\mu_A(x_i) - v_A(x_i)}{2} - \frac{\mu_B(x_i) - v_B(x_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{\mu_A(x_i) + v_A(x_i)}{2} - \frac{\mu_B(x_i) + v_B(x_i)}{2}\right)^2} \tag{9}
Theorem 1.
For AIFSs A = { <x, μ_A(x), v_A(x)> | x ∈ X } and B = { <x, μ_B(x), v_B(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}, D_I(A, B) is a distance measure between A and B.
For the sake of readability, we provide the proof process of Theorem 1 in Appendix A.
Considering the weight of each x_i, i = 1, 2, …, n, the distance between AIFSs A = { <x, μ_A(x), v_A(x)> | x ∈ X } and B = { <x, μ_B(x), v_B(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n} can be measured as
D_{WI}(A, B) = \sum_{i=1}^{n} w_i \sqrt{\left(\frac{\mu_A(x_i) - v_A(x_i)}{2} - \frac{\mu_B(x_i) - v_B(x_i)}{2}\right)^2 + \frac{1}{3}\left(\frac{\mu_A(x_i) + v_A(x_i)}{2} - \frac{\mu_B(x_i) + v_B(x_i)}{2}\right)^2} \tag{10}
where w_i is the weight of x_i, i = 1, 2, …, n, with w_i ∈ [0, 1] and \sum_{i=1}^{n} w_i = 1.
Theorem 2.
D_{WI}(A, B) is a distance measure between AIFSs A = { <x, μ_A(x), v_A(x)> | x ∈ X } and B = { <x, μ_B(x), v_B(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}.
Its proof can be implemented in the same way as the proof of Theorem 1.
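Equations (9) and (10) admit a direct transcription in Python; this is a sketch with our own function names, representing each IFV as a (μ, v) tuple:

```python
from math import sqrt

def d_term(a, b):
    """Per-element distance of Eq. (7): compare two IFVs through the
    midpoints and half-widths of their interval representations."""
    (ma, va), (mb, vb) = a, b
    mid = (ma - va) / 2 - (mb - vb) / 2          # midpoint difference
    spread = (ma + va) / 2 - (mb + vb) / 2       # half-width difference
    return sqrt(mid ** 2 + spread ** 2 / 3)

def D_I(A, B):
    """Distance of Eq. (9): mean per-element distance over the universe."""
    return sum(d_term(a, b) for a, b in zip(A, B)) / len(A)

def D_WI(A, B, w):
    """Weighted distance of Eq. (10); w must be non-negative and sum to 1."""
    return sum(wi * d_term(a, b) for wi, (a, b) in zip(w, zip(A, B)))
```

For the extreme IFVs <0, 1> and <1, 0> the per-element distance is 1, the maximum noted above, and with equal weights D_WI coincides with D_I.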

3.2. Comparative Analysis

To demonstrate the ability of the new distance measure to distinguish information in the form of intuitionistic fuzzy sets, we use numerical examples to conduct a comparative analysis. Owing to the complementary relation between distance and similarity measures, the following widely used measures, defined for two AIFSs A = { <x, μ_A(x), v_A(x)> | x ∈ X } and B = { <x, μ_B(x), v_B(x)> | x ∈ X } in X = {x_1, x_2, …, x_n}, will be used for comparison.
  • Hamming distance [53]:
    D_{NH}(A, B) = \frac{1}{2n}\sum_{i=1}^{n}\left(\left|\mu_A(x_i) - \mu_B(x_i)\right| + \left|v_A(x_i) - v_B(x_i)\right| + \left|\pi_A(x_i) - \pi_B(x_i)\right|\right) \tag{11}
  • Euclidean distance [53]:
    D_{NE}(A, B) = \sqrt{\frac{1}{2n}\sum_{i=1}^{n}\left(\left(\mu_A(x_i) - \mu_B(x_i)\right)^2 + \left(v_A(x_i) - v_B(x_i)\right)^2 + \left(\pi_A(x_i) - \pi_B(x_i)\right)^2\right)} \tag{12}
  • Distance measure of Wang and Xin [23]:
    D_W(A, B) = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{\left|\mu_A(x_i) - \mu_B(x_i)\right| + \left|v_A(x_i) - v_B(x_i)\right|}{4} + \frac{\max\left(\left|\mu_A(x_i) - \mu_B(x_i)\right|, \left|v_A(x_i) - v_B(x_i)\right|\right)}{2}\right) \tag{13}
  • Ye’s cosine similarity measure C_IFS [54]:
    C_{IFS}(A, B) = \frac{1}{n}\sum_{i=1}^{n}\frac{\mu_A(x_i)\mu_B(x_i) + v_A(x_i)v_B(x_i)}{\sqrt{\mu_A(x_i)^2 + v_A(x_i)^2}\,\sqrt{\mu_B(x_i)^2 + v_B(x_i)^2}} \tag{14}
Example 1.
Three patterns are presented by AIFSs defined in  X = { x 1 , x 2 , x 3 } and are given as
A1 = { <x1, 0.4, 0.5>, <x2, 0.7, 0.1>, <x3, 0.3, 0.3> },
A2 = { <x1, 0.5, 0.4>, <x2, 0.7, 0.2>, <x3, 0.4, 0.3> },
A3 = { <x1, 0.4, 0.5>, <x2, 0.7, 0.1>, <x3, 0.4, 0.3> }.
A sample B = { < x 1 , 0.1 , 0.1 > , < x 2 , 1 , 0 > , < x 3 , 0 , 1 > } is given to be classified.
Using Equations (11) and (12), we obtain
DNH(A1,B) = DNH(A2,B) = DNH(A3,B) = 0.483,
DNE(A1,B) = DNE(A2,B) = DNE(A3,B) = 0.442.
Using the proposed distance measure DI(A,B), we obtain
DI(A1,B) = 0.3098, DI(A2,B) = 0.3389, DI(A3,B) = 0.3244.
We note in this example that the Hamming and Euclidean distances cannot be used to determine the pattern of B. The new proposed measure DI can classify B as pattern A1 because the distance between B and A1 is the least.
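The three-way tie under the conventional measures can be checked numerically. The sketch below (helper names are ours) transcribes Equations (11) and (12) and verifies only the tie itself, which is the point of the example:

```python
from math import sqrt

A1 = [(0.4, 0.5), (0.7, 0.1), (0.3, 0.3)]
A2 = [(0.5, 0.4), (0.7, 0.2), (0.4, 0.3)]
A3 = [(0.4, 0.5), (0.7, 0.1), (0.4, 0.3)]
B  = [(0.1, 0.1), (1.0, 0.0), (0.0, 1.0)]

def pi(a):
    """Hesitation degree of an IFV (mu, v)."""
    return 1 - a[0] - a[1]

def d_nh(A, B):
    """Normalized Hamming distance, Eq. (11)."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(pi(a) - pi(b))
               for a, b in zip(A, B)) / (2 * len(A))

def d_ne(A, B):
    """Normalized Euclidean distance, Eq. (12)."""
    return sqrt(sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                    + (pi(a) - pi(b)) ** 2
                    for a, b in zip(A, B)) / (2 * len(A)))

# Both measures assign B identical distances to A1, A2, and A3,
# so neither can classify the sample.
```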
Example 2.
Three patterns are presented by AIFSs defined in  X = { x 1 , x 2 , x 3 , x 4 } and are given as
A1 = { <x1, 0.3, 0.4>, <x2, 0.3, 0.4>, <x3, 0.6, 0.1>, <x4, 0.6, 0.1> },
A2 = { <x1, 0.4, 0.4>, <x2, 0.3, 0.5>, <x3, 0.7, 0.1>, <x4, 0.6, 0.2> },
A3 = { <x1, 0.4, 0.4>, <x2, 0.3, 0.4>, <x3, 0.7, 0.1>, <x4, 0.6, 0.1> }.
A sample to be classified is given as
B = { < x 1 , 0.35 , 0.65 > , < x 2 , 0.55 , 0.45 > , < x 3 , 0.65 , 0.1 > , < x 4 , 0.6 , 0.15 > } .
Using Equation (13), we can obtain: DW(A1,B) = DW(A2,B) = DW(A3,B) = 0.119.
Using our proposed distance measure DI(A,B), we obtain
DI(A1,B) = 0.0806, DI(A2,B) = 0.0948, DI(A3,B) = 0.0877.
These results show that the class of B cannot be determined based on the distance measure proposed by Wang and Xin [23]. Based on our proposed distance measure, the minimum distance between B and the three patterns is DI(A1,B) = 0.0806; therefore, sample B is classified as pattern A1.
Example 3.
Three patterns expressed by AIFSs which are defined in  X = { x 1 , x 2 } are given as
A1 = { <x1, 0.4, 0.4>, <x2, 0.3, 0.3> }, A2 = { <x1, 0.2, 0.2>, <x2, 0.3, 0.3> }, A3 = { <x1, 0.1, 0.1>, <x2, 0.5, 0.5> }.
An unknown sample to be recognized is given by
B = { < x 1 , 0.1 , 0.1 > , < x 2 , 0.5 , 0.5 > } .
Using Equation (14), we can get: CIFS(A1,B) = CIFS(A2,B) = CIFS(A3,B) = 1.
Using the proposed distance measure DI(A,B), we obtain:
DI(A1,B) = 0.1443, DI(A2,B) = 0.0866, DI(A3,B) = 0.
It is obvious that sample B is identical to pattern A3, but sample B may be classified as A1, A2, and A3 simultaneously based on the cosine similarity, which is counter-intuitive. It can be seen that our distance measure can be used in classifying sample B as A3 due to the zero distance between them.
The above examples show that our proposed distance measure is effective in differentiating the information conveyed by different AIFSs. It can be easily proved that the choice of attribute weights will not change the conclusion obtained based on each example. Moreover, we note that the cosine similarity may be undefined when there is a zero denominator. The developed distance measures can overcome such deficiencies, so these examples indicate that the proposed distance measures are reasonable and effective in discriminating intuitionistic fuzzy information.

4. Knowledge Measure of AIFSs Based on DI

Suppose that A = { <x, μ_A(x), v_A(x)> | x ∈ X } is an AIFS defined in X = {x_1, x_2, …, x_n}. Its knowledge measure K should intuitively satisfy some properties. It is rational that the knowledge measure K must be a non-negative function determined by μ_A(x) and v_A(x). The knowledge amount of A should be identical to that of its complement, i.e., K(A) = K(A^C). When the AIFS A reduces to a classical Zadeh fuzzy set, a negative correlation should exist between the knowledge measure and fuzziness. It is well established that the fuzziness of a Zadeh fuzzy set determines its fuzzy entropy, and both are negatively correlated with |μ_A(x) − v_A(x)| [22]. So the knowledge measure K(A) should be monotonically increasing with respect to |μ_A(x) − v_A(x)|. Moreover, we note that a crisp set provides the maximum amount of information, so the knowledge amount of a crisp set reaches the maximum value K_max = 1. Conversely, the case where ∀x ∈ X, μ_A(x) = v_A(x) = 0 means full ignorance, so the knowledge amount reaches its minimum value K_min = 0. In addition, in the case of μ_A(x_i) = v_A(x_i) = a ≥ 0, we have π_A(x_i) = 1 − 2a. Thus, a smaller a indicates a greater hesitation degree π_A(x_i), which leads to a greater uncertainty degree and a smaller knowledge amount.
Considering these intuitive properties, we give the following definition to describe the axiomatic properties of the knowledge measure for AIFSs.
Definition 7.
If a mapping K: AIFSs(X) → [0, 1] satisfies the following properties, it is called a knowledge measure of an AIFS A defined in X = {x_1, x_2, …, x_n}:
  • (KP1) K(A) = 1 if and only if A is a crisp set.
  • (KP2) K(A) = 0 if and only if π_A(x_i) = 1, ∀i ∈ {1, 2, …, n}.
  • (KP3) K(A) increases with |μ_A(x_i) − v_A(x_i)| for fixed π_A(x_i), and decreases with π_A(x_i) if |μ_A(x_i) − v_A(x_i)| is unchanged, i = 1, 2, …, n.
  • (KP4) K(A^C) = K(A).
Since knowledge and entropy measures are always regarded as two complementary concepts, we discuss these properties by comparing them with those of entropy measures. The third property defined in [22] for intuitionistic fuzzy entropy, denoted as E, is stated as: E(B) ≤ E(A) if B is less fuzzy than A, i.e., ∀x ∈ X, (1) μ_B(x) ≤ μ_A(x) and v_B(x) ≥ v_A(x) for μ_A(x) ≤ v_A(x), or (2) μ_B(x) ≥ μ_A(x) and v_B(x) ≤ v_A(x) for μ_A(x) ≥ v_A(x).
The first condition indicates that μ_B(x_i) ≤ μ_A(x_i) ≤ v_A(x_i) ≤ v_B(x_i) and |μ_B(x_i) − v_B(x_i)| ≥ |μ_A(x_i) − v_A(x_i)|. Similarly, the second condition implies that μ_B(x_i) ≥ μ_A(x_i) ≥ v_A(x_i) ≥ v_B(x_i) and |μ_B(x_i) − v_B(x_i)| ≥ |μ_A(x_i) − v_A(x_i)|. Therefore, the entropy measure of an AIFS decreases with |μ_A(x_i) − v_A(x_i)|, i.e., E(B) ≤ E(A) if |μ_B(x_i) − v_B(x_i)| ≥ |μ_A(x_i) − v_A(x_i)|, i = 1, 2, …, n, which is related to property KP3. However, this property of intuitionistic fuzzy entropy does not consider the influence of the hesitation degree. It may not be sensible to discuss the relationship between fuzziness and intuitionistic fuzzy entropy if the hesitation degree is not fixed. Moreover, since |μ_A(x_i) − v_A(x_i)| ≥ |μ_B(x_i) − v_B(x_i)| cannot always induce μ_A(x_i) ≤ μ_B(x_i) ≤ v_B(x_i) ≤ v_A(x_i) or μ_A(x_i) ≥ μ_B(x_i) ≥ v_B(x_i) ≥ v_A(x_i), the property that E(A) ≤ E(B) if |μ_A(x_i) − v_A(x_i)| ≥ |μ_B(x_i) − v_B(x_i)| is more general than the third property listed in [22]. Thus, for the relation between knowledge and fuzziness, our proposed axiomatic property is made more general by relaxing the formal constraint through |μ(x) − v(x)|. However, such relaxation does not cause an unreliable measure of the knowledge amount, because of the limitation imposed by the hesitation degree, as will be illustrated later. This also demonstrates the possibility and reasonability of further exploring the relation between the entropy measure and the knowledge measure of AIFSs. We point out that the entropy of an AIFS reaches its peak value when the membership and non-membership degrees are identical for all elements [22]. This is analogous to the entropy measure of fuzzy sets, which solely concerns the relation between membership degree and non-membership degree.
Therefore, entropy and knowledge measures are not just complementary concepts; they differ both in viewpoint and in what they focus on. Fuzzy entropy merely depicts the difference between an AIFS and a crisp set, which is denoted as fuzziness, while a knowledge measure quantifies the closeness between an AIFS and a crisp set, taking both fuzziness and hesitancy into account.
Following the axiomatic properties in Definition 7, we can create knowledge measures for AIFSs via a mapping F: D → [0, 1], where D = {(x, y) ∈ [0, 1] × [0, 1] | x + y ≤ 1}, and F must satisfy the following conditions:
  • (C1) F(x, y) = 1 if and only if |x − y| = 1.
  • (C2) F(x, y) = 0 if and only if x = y = 0.
  • (C3) For a fixed x + y, F(x, y) increases as |x − y| increases.
  • (C4) For a fixed |x − y|, F(x, y) increases as x + y increases.
  • (C5) F(x, y) = F(y, x).
For (x, y) ∈ D, we can effortlessly obtain many functions F satisfying the above conditions, such as F(x, y) = (|x − y| + x + y)/2 and F(x, y) = x² + y². Using these functions, we can construct knowledge measures for AIFSs. Given an AIFS A = { <x, μ_A(x), v_A(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}, its knowledge measure K can be expressed by K = \sum_{i=1}^{n} F(μ_A(x_i), v_A(x_i))/n. In this way, many knowledge measures can be created for AIFSs, but most may lack specific physical meaning. This motivates us to construct knowledge measures with both clear physical significance and axiomatic mathematical properties.
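The two example generator functions and the averaging construction can be sketched as follows (function names are ours); a crisp set and the fully ignorant AIFS recover the boundary values required by KP1 and KP2:

```python
def F1(x, y):
    """F(x, y) = (|x - y| + x + y) / 2 on the triangle x + y <= 1."""
    return (abs(x - y) + x + y) / 2

def F2(x, y):
    """F(x, y) = x**2 + y**2 on the triangle x + y <= 1."""
    return x * x + y * y

def knowledge_from_F(F, A):
    """K(A) = (1/n) * sum of F over the IFVs (mu, v) of A."""
    return sum(F(mu, v) for mu, v in A) / len(A)

crisp = [(1.0, 0.0), (0.0, 1.0)]   # crisp set: knowledge should be 1
void  = [(0.0, 0.0), (0.0, 0.0)]   # full ignorance: knowledge should be 0
```

Both generators attain 1 exactly on the crisp IFVs <1, 0> and <0, 1> and 0 only at <0, 0>, consistent with (C1) and (C2).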

4.1. Construction of Knowledge Measure

From the second property KP2, we can conclude that the AIFS F = { x , 0 , 0 | x X } conveys the least knowledge. The amount of knowledge conveyed by an AIFS A can be reflected by the distance between A and F. The greater the distance between them, the greater the knowledge amount the AIFS A conveys, prompting us to devise a knowledge measure according to the distance from A to F.
For an AIFS A = { <x, μ_A(x), v_A(x)> } defined in X = {x}, the distance between A and F = { <x, 0, 0> } can be calculated by Equation (9):
D_I(A, F) = \sqrt{\left(\frac{\mu_A(x) - v_A(x)}{2}\right)^2 + \frac{1}{3}\left(\frac{\mu_A(x) + v_A(x)}{2}\right)^2} \tag{15}
Equation (15) can be further written as
D_I(A, F) = \frac{1}{2}\sqrt{\left(\mu_A(x) - v_A(x)\right)^2 + \frac{1}{3}\left(\mu_A(x) + v_A(x)\right)^2} \tag{16}
Considering the conditions 0 ≤ μ_A(x) ≤ 1, 0 ≤ v_A(x) ≤ 1, and 0 ≤ μ_A(x) + v_A(x) ≤ 1, we have −1 ≤ μ_A(x) − v_A(x) ≤ 1. Since the conditions μ_A(x) + v_A(x) = 1 and |μ_A(x) − v_A(x)| = 1 can be satisfied simultaneously, the maximum value of D_I(A, F) is √3/3. Thus, the distance between A and F can be normalized by multiplying by √3, giving the following form:
D_{NI}(A, F) = \frac{\sqrt{3}}{2}\sqrt{\left(\mu_A(x) - v_A(x)\right)^2 + \frac{1}{3}\left(\mu_A(x) + v_A(x)\right)^2} \tag{17}
We can then construct a knowledge measure for AIFSs defined in the discourse universe X = {x} as follows:
K_I(A) = \frac{\sqrt{3}}{2}\sqrt{\left(\mu_A(x) - v_A(x)\right)^2 + \frac{1}{3}\left(\mu_A(x) + v_A(x)\right)^2} \tag{18}
Generally, for an AIFS A = { <x, μ_A(x), v_A(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}, its knowledge amount can be quantified by
K_I(A) = \frac{\sqrt{3}}{2n}\sum_{i=1}^{n}\sqrt{\left(\mu_A(x_i) - v_A(x_i)\right)^2 + \frac{1}{3}\left(\mu_A(x_i) + v_A(x_i)\right)^2} \tag{19}
Theorem 3.
For an AIFS A = { <x, μ_A(x), v_A(x)> | x ∈ X } defined in X = {x_1, x_2, …, x_n}, the function K_I(A) defined by Equation (19) is a knowledge measure of the AIFS A.
Theorem 3 is proved in Appendix B.
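Equation (19) admits a direct transcription (a sketch; the function name and tuple representation are ours):

```python
from math import sqrt

def K_I(A):
    """Knowledge measure of Eq. (19) for a list of IFVs (mu, v)."""
    total = sum(sqrt((mu - v) ** 2 + (mu + v) ** 2 / 3) for mu, v in A)
    return sqrt(3) / 2 * total / len(A)
```

For μ = v the summand reduces to 2μ/√3, so K_I collapses to the mean membership grade, and swapping μ and v (i.e., taking the complement) leaves the value unchanged, in line with KP4.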

4.2. Numerical Examples

Here, the performance of the proposed knowledge measure KI will be examined considering some numerical examples.
Example 4.
Four AIFSs A1, A2, A3 and A4 are defined in universe X = {x}. They are given as
A 1 = x , 0.5 , 0.5 ,   A 2 = x , 0.3 , 0.3 ,   A 3 = x , 0.2 , 0.2 ,   A 4 = x , 0 , 0 .
The entropy measures presented in [23,55,56,57,58] cannot discriminate these AIFSs, since those measures are defined according to the difference between membership and non-membership degrees. The membership and non-membership degrees are identical within each of these four AIFSs, so all four may be regarded as having maximal entropy, which implies a minimal knowledge amount conveyed by them. However, according to the proposed knowledge measure KI, we have
K I ( A 1 ) = 0.5 ,   K I ( A 2 ) = 0.3 ,   K I ( A 3 ) = 0.2 ,   K I ( A 4 ) = 0
It can be seen that these four different AIFSs differ greatly from each other from the viewpoint of knowledge amount. This is helpful for handling such extreme cases with identical supporting and opposing degrees. From the definition of K_I, we find that, when μ_A(x) = v_A(x) for all x ∈ X, the calculation of K_I reduces to
K_I(A) = \frac{1}{n}\sum_{i=1}^{n}\mu_A(x_i)
which indicates that the knowledge amount increases with μ_A(x_i) under the condition μ_A(x_i) = v_A(x_i), i ∈ {1, 2, …, n}. This useful feature coincides with intuitive analysis.
To further demonstrate the discriminability of the knowledge measure K_I, Figure 1 depicts the knowledge amount associated with an AIFS A defined in X = {x}. The value of K_I(A) is reflected by the color assigned to each point (μ_A(x), v_A(x)) in the simplex. The figure is symmetric about the line μ_A(x) = v_A(x), which illustrates the property K_I(A^C) = K_I(A). Along this symmetry line, the rising trend of the knowledge amount is clear. As shown in Figure 1, the maximum knowledge amount is attained at the two points (0, 1) and (1, 0), and the minimum at the point (0, 0).
Example 5.
Let X = {6,7,8,9,10} be the discourse universe, an AIFS A in X is defined as:
A = { ⟨6, 0.1, 0.8⟩, ⟨7, 0.3, 0.5⟩, ⟨8, 0.5, 0.4⟩, ⟨9, 0.9, 0⟩, ⟨10, 1, 0⟩ }.
De et al. [59] defined an exponent operation for an AIFS A defined in X. Given a non-negative real number m, A^m is defined as
A^m = { ⟨x, (μ_A(x))^m, 1 − (1 − v_A(x))^m⟩ | x ∈ X }  (21)
Based on the operations in Equation (21), we have
A^0.5 = { ⟨6, 0.316, 0.553⟩, ⟨7, 0.548, 0.293⟩, ⟨8, 0.707, 0.225⟩, ⟨9, 0.949, 0⟩, ⟨10, 1, 0⟩ },
A^2 = { ⟨6, 0.010, 0.960⟩, ⟨7, 0.090, 0.750⟩, ⟨8, 0.250, 0.640⟩, ⟨9, 0.810, 0⟩, ⟨10, 1, 0⟩ },
A^3 = { ⟨6, 0.001, 0.992⟩, ⟨7, 0.027, 0.875⟩, ⟨8, 0.125, 0.784⟩, ⟨9, 0.729, 0⟩, ⟨10, 1, 0⟩ },
A^4 = { ⟨6, 0.0001, 0.998⟩, ⟨7, 0.008, 0.938⟩, ⟨8, 0.062, 0.870⟩, ⟨9, 0.656, 0⟩, ⟨10, 1, 0⟩ }.
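Under the assumption that an AIFS is stored as (membership, non-membership) pairs, the operation in Equation (21) is a one-liner, and the sets above can be reproduced directly (the function name `aifs_power` is ours):

```python
def aifs_power(aifs, m):
    """Exponent operation of De et al. [59]: A^m = <mu^m, 1 - (1 - v)^m>."""
    return [(mu ** m, 1.0 - (1.0 - v) ** m) for mu, v in aifs]

# The AIFS "LARGE" from Example 5, as (membership, non-membership) pairs.
A = [(0.1, 0.8), (0.3, 0.5), (0.5, 0.4), (0.9, 0.0), (1.0, 0.0)]
A_half = aifs_power(A, 0.5)  # first element ~= (0.316, 0.553)
A_sq = aifs_power(A, 2)      # first element ~= (0.010, 0.960)
```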
Considering the characterization analysis of linguistic variables, we can regard AIFS A as "LARGE" in X. Correspondingly, AIFSs A^0.5, A^2, A^3, and A^4 can be regarded as "More or less LARGE," "Very LARGE," "Quite very LARGE," and "Very very LARGE," respectively.
Intuitively, from A0.5 to A4, the uncertainty hidden in them becomes less and the knowledge amount conveyed by them increases. Therefore, the following relations hold:
E(A^0.5) > E(A) > E(A^2) > E(A^3) > E(A^4)  (22)
K(A^0.5) < K(A) < K(A^2) < K(A^3) < K(A^4)  (23)
To make a comparison, the entropy and knowledge measures listed in Table 1 are used. It is worth noting that some of the entropy measures in the table were initially designed for interval-valued fuzzy sets [56,57]. These entropy measures are modified for AIFSs based on their connection with interval-valued fuzzy sets. We present the results obtained based on the different measures in Table 2 to facilitate comparative analysis.
From Table 2, we can see that entropy measures EZL, EZB, EBB, ESK, EHC, ES, and EZJ induce the following relations:
E_ZL(A) > E_ZL(A^0.5) > E_ZL(A^2) > E_ZL(A^3) > E_ZL(A^4),
E_ZB(A) > E_ZB(A^0.5) > E_ZB(A^2) > E_ZB(A^3) > E_ZB(A^4),
E_BB(A) > E_BB(A^0.5) > E_BB(A^2) > E_BB(A^3) = E_BB(A^4),
E_SK(A) > E_SK(A^0.5) > E_SK(A^2) > E_SK(A^3) > E_SK(A^4),
E_HC(A) > E_HC(A^0.5) > E_HC(A^2) > E_HC(A^3) > E_HC(A^4),
E_S(A) > E_S(A^0.5) > E_S(A^2) > E_S(A^3) > E_S(A^4),
E_ZJ(A) > E_ZJ(A^0.5) > E_ZJ(A^2) > E_ZJ(A^3) > E_ZJ(A^4).
Because these measures assign AIFS A^0.5 less entropy than AIFS A, contrary to the intuitive relation in Equation (22), entropy measures E_ZL, E_ZB, E_ZE, E_BB, E_SK, and E_ZJ do not perform as well as the other entropy measures. From the point of view of knowledge amount, the results obtained by K_SKB, K_N, and K_G are not so reasonable, since the counter-intuitive relations K_SKB(A^0.5) > K_SKB(A), K_N(A^0.5) > K_N(A), and K_G(A^0.5) > K_G(A) hold. However, our developed knowledge measure K_I produces the rational result K_I(A^0.5) < K_I(A) < K_I(A^2) < K_I(A^3) < K_I(A^4). Thus, it is demonstrated that half of the entropy measures in Table 1 cannot reflect the uncertainty hidden in these AIFSs. Although several knowledge measures have been presented, they are not able to distinguish the nuances of knowledge amount in different AIFSs. Our developed knowledge measure thus outperforms the other knowledge measures by providing persuasive results that comply with intuitive analysis.
For a further investigation of the performance of the proposed knowledge measure, we modify the AIFS “LARGE” defined in X = {6,7,8,9,10} by increasing the non-membership degree of element “8” and reducing its hesitant degree. The modified AIFS “LARGE” is given as
B = { ⟨6, 0.1, 0.8⟩, ⟨7, 0.3, 0.5⟩, ⟨8, 0.5, 0.5⟩, ⟨9, 0.9, 0⟩, ⟨10, 1, 0⟩ }.
Through the operation shown in Equation (21), the following AIFSs related to B can be generated:
B^0.5 = { ⟨6, 0.316, 0.553⟩, ⟨7, 0.548, 0.293⟩, ⟨8, 0.707, 0.293⟩, ⟨9, 0.949, 0⟩, ⟨10, 1, 0⟩ },
B^2 = { ⟨6, 0.010, 0.960⟩, ⟨7, 0.090, 0.750⟩, ⟨8, 0.250, 0.750⟩, ⟨9, 0.810, 0⟩, ⟨10, 1, 0⟩ },
B^3 = { ⟨6, 0.001, 0.992⟩, ⟨7, 0.027, 0.875⟩, ⟨8, 0.125, 0.875⟩, ⟨9, 0.729, 0⟩, ⟨10, 1, 0⟩ },
B^4 = { ⟨6, 0.0001, 0.998⟩, ⟨7, 0.008, 0.938⟩, ⟨8, 0.062, 0.938⟩, ⟨9, 0.656, 0⟩, ⟨10, 1, 0⟩ }.
According to the entropy and knowledge measures listed in Table 1, we obtain the comparative results as shown in Table 3.
It can be seen that AIFS B still has more entropy than AIFS B0.5 when entropy measures EZL, EZB, EZE, EBB, ESK, and EZJ are considered. The ordered results obtained based on these entropy measures are
E_ZL(B) > E_ZL(B^0.5) > E_ZL(B^2) > E_ZL(B^3) > E_ZL(B^4),
E_ZB(B) > E_ZB(B^0.5) > E_ZB(B^2) > E_ZB(B^3) > E_ZB(B^4),
E_ZE(B) > E_ZE(B^0.5) > E_ZE(B^2) > E_ZE(B^3) > E_ZE(B^4),
E_BB(B) > E_BB(B^0.5) > E_BB(B^2) > E_BB(B^3) > E_BB(B^4),
E_SK(B) > E_SK(B^0.5) > E_SK(B^2) > E_SK(B^3) > E_SK(B^4),
E_ZJ(B) > E_ZJ(B^0.5) > E_ZJ(B^2) > E_ZJ(B^3) > E_ZJ(B^4).
It can be seen that these ranked orders do not satisfy the intuitive analysis in Equation (22), while the other entropy measures induce desirable results. In this example, E_HC and E_S perform well, but E_ZE performs poorly, which illustrates that these entropy measures are not robust enough.
Moreover, the results produced by knowledge measures K_SKB, K_N, and K_G are also unreasonable, shown as:
K_SKB(B) < K_SKB(B^0.5) < K_SKB(B^2) < K_SKB(B^3) < K_SKB(B^4),
K_N(B) < K_N(B^0.5) < K_N(B^2) < K_N(B^3) < K_N(B^4),
K_G(B) < K_G(B^0.5) < K_G(B^2) < K_G(B^3) < K_G(B^4).
However, our proposed knowledge measure KI indicates that:
K_I(B^0.5) < K_I(B) < K_I(B^2) < K_I(B^3) < K_I(B^4).
Thus, the knowledge measures K_SKB, K_N, and K_G are still not suitable for differentiating the knowledge amounts conveyed by AIFSs. The effectiveness of the proposed knowledge measure K_I is once again indicated by this example.
From the above examples, we conclude that entropy measures E_ZL, E_ZB, E_ZE, E_BB, E_HC, E_S, E_SK, and E_ZJ perform poorly because of their lack of robustness and discriminability. The proposed knowledge measure performs much better than knowledge measures K_SKB, K_N, and K_G. The performance of entropy measures E_A, E_ZC, E_ZD, E_VS, E_LDL and the proposed knowledge measure K_I in Table 3 may suggest that less entropy indicates a greater knowledge amount. Nevertheless, the relationship between entropy and knowledge measure is limited and conditional, as was discussed previously.
The above analysis indicates an effective way to define a knowledge measure for AIFSs based on a metric distance measure d_AIFS for AIFSs.

5. New Method for Solving MAGDM Problems

Since the inception of AIFSs, many researchers have been dedicated to exploring applications of AIFSs along with their mathematical mechanisms. One important application area of AIFSs is multi-attribute group decision making (MAGDM) [28,30,36,38,62,63]. In the MAGDM problem, because of the limitation of experts' knowledge and time pressure, uncertain or incomplete information may be provided in the evaluation of each alternative. Therefore, a suitable model should be constructed to depict the incomplete information. By introducing the hesitancy degree, AIFSs can describe the uncertainty caused both by fuzziness and by lack of knowledge. Moreover, incomplete information can be aggregated in a direct way with the help of intuitionistic fuzzy aggregation operators. Thus, AIFSs are accepted by many researchers as one effective tool for solving MAGDM problems. The application of AIFSs to MAGDM problems has attracted many researchers because of a series of open topics in this area, such as the determination of attribute weights, effective aggregation operators for AIFSs, the ranking of alternatives based on IFVs, and the construction of intuitionistic fuzzy models from incomplete information.
Here, we put forth a new method with which to solve intuitionistic fuzzy MAGDM problems. We develop the approach according to the proposed intuitionistic fuzzy distance measure and distance-based knowledge measure. The intuitionistic fuzzy MAGDM problem is depicted as follows.
G = {G_1, G_2, …, G_m} is the set consisting of all threat levels. A = {A_1, A_2, …, A_n} is the set containing all attributes considered in evaluating the threat level. E = {E_1, E_2, …, E_s} is the set of all decision makers who evaluate the threat levels. The weight of attribute A_i is w_i, i = 1, 2, …, n, with \sum_{i=1}^{n} w_i = 1. All weights are expressed by the weight vector w = (w_1, w_2, …, w_n)^T. Each decision maker is assigned a weighting factor λ_j, j = 1, 2, …, s, with \sum_{j=1}^{s} λ_j = 1. Decision maker E_k (k = 1, 2, …, s) gives the decision matrix expressed by IFVs as:
             A_1                  A_2                  …    A_n
R^k =  G_1   ⟨μ_11^k, v_11^k⟩    ⟨μ_12^k, v_12^k⟩    …    ⟨μ_1n^k, v_1n^k⟩
       G_2   ⟨μ_21^k, v_21^k⟩    ⟨μ_22^k, v_22^k⟩    …    ⟨μ_2n^k, v_2n^k⟩
       ⋮
       G_m   ⟨μ_m1^k, v_m1^k⟩    ⟨μ_m2^k, v_m2^k⟩    …    ⟨μ_mn^k, v_mn^k⟩
where r i j k = μ i j k , v i j k is an IFV representing the evaluation result of alternative Gi according to attribute Aj.
If the attribute weights are unknown, this MAGDM problem can be solved by the following steps.
Step 1. Determine attribute weights
In most cases, the weighting factor of each attribute is partly known or completely unknown due to limited time and expert knowledge. Thus, determining the weighting vector of all attributes is necessary. Several approaches have been put forward to assess the importance of all attributes in decision making.
Li et al. [62] developed a TOPSIS-based method to obtain interval-valued weight factors for all attributes, which may cause information loss in the process of decision making. Wei [64] proposed an optimization model to derive the attribute weighting vector, implemented by maximizing the deviation between all evaluation results under an attribute. Regarding the hesitancy degree as an entropy measure, Ye [10] developed an entropy-based method to evaluate the attribute weight vector.
Note that Wei’s method [64] is based on the idea of maximizing the deviation, while Ye’s method [10] is based on the idea of minimizing the entropy. Combining Wei’s [64] and Ye’s ideas [10], Xia and Xu [9] proposed an entropy-/cross-entropy-based model to determine the attribute weighting vector, in which they utilize the cross-entropy to describe the deviation between IFVs. Borrowing the idea of Xia and Xu [9], we develop a model using the proposed distance measure DI and the knowledge measure KI to determine attribute weights.
For decision maker Ek, the average divergence of alternative Gi from all other alternatives under attribute Aj can be measured as
DIV_{ij}^{k} = \frac{1}{m-1}\sum_{p=1}^{m} D_I\left(r_{ij}^{k}, r_{pj}^{k}\right)
Based on distance measure D_I and knowledge measure K_I, the overall divergence and the knowledge amount of all information provided by E_k under attribute A_j can be measured, respectively, as
DIV_{j}^{k} = \sum_{i=1}^{m} DIV_{ij}^{k} = \frac{1}{m-1}\sum_{p=1}^{m}\sum_{q=1}^{m} D_I\left(r_{pj}^{k}, r_{qj}^{k}\right)
K_{j}^{k} = \sum_{p=1}^{m} K_I\left(r_{pj}^{k}\right)
Considering the weighting factor of each decision maker, we can obtain the total difference among all alternatives and the total amount of knowledge with respect to attribute Aj as
DIV_j = \sum_{k=1}^{s}\lambda_k\,\frac{1}{m-1}\sum_{p=1}^{m}\sum_{q=1}^{m} D_I\left(r_{pj}^{k}, r_{qj}^{k}\right),
K_j = \sum_{k=1}^{s}\lambda_k\sum_{p=1}^{m} K_I\left(r_{pj}^{k}\right).
Generally, if the evaluation information of all alternatives under an attribute is quite different from each other, it means that this attribute provides much discriminative information, and thus it should be more important. Conversely, if there is little difference among the evaluation results of all alternatives obtained with respect to one attribute, then this attribute is less important. We also have the sense that a greater amount of knowledge conveyed by the information under an attribute indicates that the information provided is more helpful for decision making. Therefore, this particular attribute is more important. Based on the above analysis, we establish an optimal model with which to calculate the weighting vector w of all attributes as
\max\; T = \sum_{j=1}^{n} w_j \sum_{k=1}^{s}\lambda_k\sum_{p=1}^{m}\left[K_I\left(r_{pj}^{k}\right) + \frac{1}{m-1}\sum_{q=1}^{m} D_I\left(r_{pj}^{k}, r_{qj}^{k}\right)\right]
\text{s.t.}\quad w \in H,\quad \sum_{j=1}^{n} w_j = 1,\quad w_j \geq 0,\; j = 1, 2, \ldots, n.
where H is the set containing all of the available (possibly incomplete) information on the attribute weights.
In particular, if there is no additional information about the weighting vector, i.e., each attribute’s weighting factor is totally unknown, the weighting factor of attribute Aj ( j = 1 , 2 , , n ) can be calculated as
w_j = \frac{\sum_{k=1}^{s}\lambda_k\left(K_j^k + DIV_j^k\right)}{\sum_{j=1}^{n}\sum_{k=1}^{s}\lambda_k\left(K_j^k + DIV_j^k\right)} = \frac{\sum_{k=1}^{s}\lambda_k\sum_{p=1}^{m}\left[K_I(r_{pj}^k) + \frac{1}{m-1}\sum_{q=1}^{m} D_I(r_{pj}^k, r_{qj}^k)\right]}{\sum_{j=1}^{n}\sum_{k=1}^{s}\lambda_k\sum_{p=1}^{m}\left[K_I(r_{pj}^k) + \frac{1}{m-1}\sum_{q=1}^{m} D_I(r_{pj}^k, r_{qj}^k)\right]}  (31)
Step 2. Use the intuitionistic fuzzy weighted averaging (IFWA) operator proposed in [38] and the weighting vector λ = (λ_1, λ_2, …, λ_s)^T to aggregate the individual intuitionistic fuzzy decision matrices R^k = (⟨μ_ij^k, v_ij^k⟩)_{m×n} (k = 1, 2, …, s) into a collective decision matrix with intuitionistic fuzzy information, denoted R = (r_ij)_{m×n}.
Step 3. Use the aggregation operator IFWA and attribute weighting vector w to aggregate the evaluation results r i 1 , r i 2 , , r i n of each alternative Gi ( i = 1 , 2 , , m ) under all attributes to get an IFV Zi ( i = 1 , 2 , , m ) denoting the aggregated evaluation result of alternative Gi ( i = 1 , 2 , , m ).
Step 4. Calculate both the score function and accuracy function of IFVs Z 1 , Z 2 , , Z m .
Step 5. Rank all alternatives according to the score function and accuracy function of IFVs Z 1 , Z 2 , , Z m to obtain the priority order.
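Steps 3–5 rest on the IFWA operator of [38], which aggregates IFVs ⟨μ_i, v_i⟩ with weights w_i into ⟨1 − Π(1 − μ_i)^{w_i}, Π v_i^{w_i}⟩, and on the score function S(⟨μ, v⟩) = μ − v. A minimal Python sketch (function names are ours) is:

```python
import math

def ifwa(ifvs, weights):
    """IFWA operator: aggregates IFVs <mu_i, v_i> with weights w_i into
    <1 - prod((1 - mu_i)^w_i), prod(v_i^w_i)>."""
    mu = 1.0 - math.prod((1.0 - m) ** w for (m, _), w in zip(ifvs, weights))
    v = math.prod(nv ** w for (_, nv), w in zip(ifvs, weights))
    return (mu, v)

def score(ifv):
    """Score function S(<mu, v>) = mu - v used to rank aggregated IFVs."""
    return ifv[0] - ifv[1]

# Check against the paper: row G1 of the single-expert decision matrix
# in Example 7 (Section 6), aggregated with the attribute weights
# derived there, reproduces Z1 = <0.4102, 0.4852> up to rounding.
g1 = [(0.5, 0.4), (0.6, 0.3), (0.3, 0.6), (0.2, 0.7)]
w = (0.2563, 0.2142, 0.2696, 0.2600)
z1 = ifwa(g1, w)
```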

6. Application on Evaluation of Malicious Code Threat

Here, the method proposed in Section 5 for solving MAGDM problems is applied to the evaluation of malicious code threat degree.
Example 6.
In a battle of cyber defense, the cyber-defense unit aims to choose the target with the highest threat to attack. In cyberspace security, cyber security researchers need to evaluate the threats caused by malicious code. In this way, the most dangerous threat can be addressed first, and then the other threats can be addressed.
The threat degrees of five malicious codes (G1, G2, G3, G4, G5) are evaluated by four experts (E1, E2, E3, E4) with respect to the following five attributes:
(1)
A1, the resource consumption;
(2)
A2, the destruction ability;
(3)
A3, the anti-detection ability;
(4)
A4, the self-starting ability;
(5)
A5, the diffusion ability.
The weighting vector of the four experts is λ = (0.3, 0.2, 0.3, 0.2)^T. The associated weighting factor for the hybrid aggregation of the four experts is η = (0.155, 0.345, 0.345, 0.155)^T, which is derived by the normal-distribution-based method shown in [63]. The threat degree of each malicious code evaluated by the four experts is expressed by the following four intuitionistic fuzzy decision matrices:
             A_1          A_2          A_3          A_4          A_5
R^1 =  G_1   ⟨0.4, 0.5⟩   ⟨0.5, 0.2⟩   ⟨0.6, 0.2⟩   ⟨0.8, 0.1⟩   ⟨0.7, 0.3⟩
       G_2   ⟨0.6, 0.2⟩   ⟨0.7, 0.2⟩   ⟨0.3, 0.4⟩   ⟨0.5, 0.1⟩   ⟨0.8, 0.2⟩
       G_3   ⟨0.7, 0.3⟩   ⟨0.8, 0.1⟩   ⟨0.5, 0.5⟩   ⟨0.3, 0.2⟩   ⟨0.6, 0.3⟩
       G_4   ⟨0.4, 0.3⟩   ⟨0.7, 0.1⟩   ⟨0.6, 0.1⟩   ⟨0.4, 0.3⟩   ⟨0.9, 0.1⟩
       G_5   ⟨0.8, 0.1⟩   ⟨0.3, 0.4⟩   ⟨0.4, 0.5⟩   ⟨0.7, 0.2⟩   ⟨0.5, 0.2⟩

R^2 =  G_1   ⟨0.5, 0.3⟩   ⟨0.6, 0.1⟩   ⟨0.7, 0.3⟩   ⟨0.7, 0.1⟩   ⟨0.8, 0.2⟩
       G_2   ⟨0.7, 0.2⟩   ⟨0.6, 0.2⟩   ⟨0.4, 0.4⟩   ⟨0.6, 0.2⟩   ⟨0.7, 0.3⟩
       G_3   ⟨0.5, 0.3⟩   ⟨0.7, 0.2⟩   ⟨0.6, 0.3⟩   ⟨0.4, 0.2⟩   ⟨0.6, 0.1⟩
       G_4   ⟨0.5, 0.4⟩   ⟨0.8, 0.1⟩   ⟨0.4, 0.2⟩   ⟨0.7, 0.2⟩   ⟨0.7, 0.3⟩
       G_5   ⟨0.7, 0.3⟩   ⟨0.5, 0.4⟩   ⟨0.6, 0.3⟩   ⟨0.6, 0.2⟩   ⟨0.5, 0.1⟩

R^3 =  G_1   ⟨0.6, 0.3⟩   ⟨0.5, 0.2⟩   ⟨0.6, 0.4⟩   ⟨0.8, 0.1⟩   ⟨0.7, 0.3⟩
       G_2   ⟨0.8, 0.2⟩   ⟨0.5, 0.3⟩   ⟨0.6, 0.4⟩   ⟨0.5, 0.2⟩   ⟨0.6, 0.3⟩
       G_3   ⟨0.6, 0.1⟩   ⟨0.8, 0.2⟩   ⟨0.7, 0.3⟩   ⟨0.4, 0.2⟩   ⟨0.8, 0.1⟩
       G_4   ⟨0.6, 0.3⟩   ⟨0.6, 0.1⟩   ⟨0.5, 0.4⟩   ⟨0.9, 0.1⟩   ⟨0.5, 0.2⟩
       G_5   ⟨0.8, 0.1⟩   ⟨0.6, 0.2⟩   ⟨0.7, 0.3⟩   ⟨0.5, 0.2⟩   ⟨0.7, 0.1⟩

R^4 =  G_1   ⟨0.3, 0.4⟩   ⟨0.9, 0.1⟩   ⟨0.8, 0.1⟩   ⟨0.5, 0.5⟩   ⟨0.4, 0.6⟩
       G_2   ⟨0.7, 0.1⟩   ⟨0.7, 0.3⟩   ⟨0.4, 0.2⟩   ⟨0.8, 0.2⟩   ⟨0.3, 0.1⟩
       G_3   ⟨0.4, 0.1⟩   ⟨0.5, 0.2⟩   ⟨0.8, 0.1⟩   ⟨0.6, 0.2⟩   ⟨0.6, 0.3⟩
       G_4   ⟨0.8, 0.2⟩   ⟨0.5, 0.1⟩   ⟨0.6, 0.4⟩   ⟨0.7, 0.2⟩   ⟨0.7, 0.2⟩
       G_5   ⟨0.6, 0.1⟩   ⟨0.8, 0.2⟩   ⟨0.7, 0.2⟩   ⟨0.6, 0.3⟩   ⟨0.8, 0.1⟩
Case 1.
First, we suppose that the weight of each attribute is totally unknown.
We then use the proposed method shown in Equation (31) to establish the weighting vectors of five attributes. We solve this problem according to the next steps:
(1)
Using the distance measure DI and knowledge measure KI to get the average divergence and the amount of knowledge under all attributes for all decision makers, we obtain the divergence and knowledge matrix, respectively, as
DIV = [1.0067  0.9763  0.9274  0.9013  0.7288
       0.5312  0.7012  0.5487  0.4928  0.4707
       0.6048  0.5614  0.8524  0.9013  0.6275
       0.9661  0.7473  0.8370  0.7270  1.1840],

K = [2.7112  2.8317  2.4047  2.4629  3.1393
     2.5628  2.9237  2.3939  2.6850  2.9527
     3.0721  2.6788  2.8276  2.4629  2.9745
     2.6547  3.0779  3.0100  2.8944  2.6928].
The element div_kj of matrix DIV represents the overall average divergence provided by decision maker E_k under attribute A_j, and k_kj of matrix K represents the knowledge amount provided by E_k under A_j.
(2)
Given the weight vector λ = ( 0.3 , 0.2 , 0.3 , 0.2 ) T , we obtain the attribute weight vector based on Equation (31):
w = (0.2011, 0.2036, 0.1955, 0.1908, 0.2090)^T.
(3)
Aggregating all decision makers' decision matrices with the IFWA operator, we obtain the collective decision matrix:
           A_1                 A_2                 A_3                 A_4                 A_5
R =  G_1   ⟨0.4717, 0.3704⟩   ⟨0.6534, 0.1516⟩   ⟨0.7330, 0.1534⟩   ⟨0.7395, 0.1380⟩   ⟨0.6822, 0.3178⟩
     G_2   ⟨0.7104, 0.1741⟩   ⟨0.6296, 0.2449⟩   ⟨0.4050, 0.2828⟩   ⟨0.6019, 0.1320⟩   ⟨0.6569, 0.2132⟩
     G_3   ⟨0.5839, 0.1732⟩   ⟨0.7395, 0.1625⟩   ⟨0.5795, 0.2486⟩   ⟨0.3931, 0.2000⟩   ⟨0.6751, 0.1732⟩
     G_4   ⟨0.5888, 0.2930⟩   ⟨0.6660, 0.1000⟩   ⟨0.7138, 0.1516⟩   ⟨0.5453, 0.2551⟩   ⟨0.7485, 0.1762⟩
     G_5   ⟨0.7509, 0.1246⟩   ⟨0.5963, 0.2828⟩   ⟨0.5440, 0.2855⟩   ⟨0.6634, 0.2169⟩   ⟨0.6429, 0.1231⟩
(4)
Based on the vector w = (0.2011, 0.2036, 0.1955, 0.1908, 0.2090)^T, we aggregate the threat degree of each target under all attributes using the IFWA operator to obtain
Z_1 = ⟨0.6666, 0.2085⟩,  Z_2 = ⟨0.6141, 0.2031⟩,  Z_3 = ⟨0.6132, 0.1886⟩,  Z_4 = ⟨0.6622, 0.1812⟩,  Z_5 = ⟨0.6421, 0.1920⟩.
(5)
The score function of Z1, Z2, Z3, Z4, Z5 can be calculated as:
S ( Z 1 ) = 0.4581 ,   S ( Z 2 ) = 0.4111 ,   S ( Z 3 ) = 0.4246 ,   S ( Z 4 ) = 0.4810 ,   S ( Z 5 ) = 0.4501 .
(6)
According to the score grades, we obtain the ranking order R of all malicious codes’ threat degree as
G_4 ≻ G_1 ≻ G_5 ≻ G_3 ≻ G_2.
Based on the method proposed in [9], when E_M^1.5 and CE_M^1.5 are used, the attribute weights are obtained as w_a = (0.1940, 0.2238, 0.1330, 0.2117, 0.2375)^T, and the final ranking order is R_a: G_4 ≻ G_5 ≻ G_1 ≻ G_3 ≻ G_2. When E_N^1 and CE_N^1 are used, the attribute weights are obtained as w_b = (0.1931, 0.2219, 0.1325, 0.2133, 0.2392)^T, and the final ranking order is R_b: G_4 ≻ G_5 ≻ G_1 ≻ G_3 ≻ G_2. It is notable that the final ranking order obtained using the method proposed in Section 5 is not completely identical to that obtained in [9]. However, all methods yield the same optimal alternative, G_4. Since solving the MAGDM problem aims at obtaining the best choice, the order of the other alternatives may not be of concern. We measure the similarity between two weighting vectors by the cosine of the angle between them, denoted Sim:
Sim(w_1, w_2) = \frac{w_1^T w_2}{\sqrt{w_1^T w_1}\,\sqrt{w_2^T w_2}},
The consensus level between two ranking orders R_1 and R_2 is calculated by Spearman's rank correlation coefficient [65]:
\rho(R_1, R_2) = 1 - \frac{6\sum_{i=1}^{p}\left(r_i^{(1)} - r_i^{(2)}\right)^2}{p\left(p^2 - 1\right)},
where p is the number of alternatives, and r_i^{(1)} and r_i^{(2)} are the positions of alternative G_i in ranking orders R_1 and R_2, respectively.
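Both comparison statistics are easy to reproduce. In the sketch below (function names are ours), the weight vectors are those of Case 1, and the ranking positions are read directly off R and R_a:

```python
import math

def cosine_sim(w1, w2):
    """Sim(w1, w2): cosine of the angle between two weight vectors."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = math.sqrt(sum(a * a for a in w1))
    n2 = math.sqrt(sum(b * b for b in w2))
    return dot / (n1 * n2)

def spearman_rho(r1, r2):
    """Spearman's rank correlation; r1[i], r2[i] are the positions of G_{i+1}."""
    p = len(r1)
    d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - 6 * d2 / (p * (p ** 2 - 1))

w = (0.2011, 0.2036, 0.1955, 0.1908, 0.2090)
wa = (0.1940, 0.2238, 0.1330, 0.2117, 0.2375)
rank_R = (2, 5, 4, 1, 3)   # R:  G4 > G1 > G5 > G3 > G2
rank_Ra = (3, 5, 4, 1, 2)  # Ra: G4 > G5 > G1 > G3 > G2
sim = cosine_sim(w, wa)              # ~= 0.9863
rho = spearman_rho(rank_R, rank_Ra)  # = 0.9
```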
We then obtain
Sim(w, w_a) = 0.9863;  Sim(w, w_b) = 0.9859;  ρ(R, R_a) = ρ(R, R_b) = 0.9.
These results indicate that the attribute weights obtained by the proposed method are quite similar to those yielded in [9]. Moreover, the ranking orders are at a high consensus level. It is demonstrated that the proposed method is effective for solving MAGDM problems.
Case 2.
We suppose that the attribute weights are partially known, given by the following constraint set:
H = {w_1 ≥ 0.1; 0.2 ≤ w_2 ≤ 0.3; w_3 ≥ 0.15; 0.2 ≤ w_4 ≤ 0.3; 0.3 ≤ w_5 ≤ 0.4}.
We can then use the following optimal model to get the attribute weighting vector:
\max\; T = (3.5614, 3.6045, 3.4614, 3.3783, 3.7011)\, w
\text{s.t.}\quad w \in H,\quad \sum_{j=1}^{5} w_j = 1,\quad w_j \geq 0,\; j = 1, 2, \ldots, 5,
and we obtain the weighting vector as w = (0.1, 0.2, 0.15, 0.2, 0.35)^T.
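The model above is a linear program over box constraints plus a normalization constraint, so any LP solver applies; for this special structure, a simple greedy routine also gives the exact optimum. The sketch below is ours and assumes the bounds in H read w_1 ≥ 0.1, 0.2 ≤ w_2 ≤ 0.3, w_3 ≥ 0.15, 0.2 ≤ w_4 ≤ 0.3, 0.3 ≤ w_5 ≤ 0.4 (inequality directions inferred from the reported optimum):

```python
def solve_weight_lp(coeffs, lows, highs):
    """Maximize coeffs . w subject to lows <= w <= highs and sum(w) = 1.

    Greedy water-filling: start every weight at its lower bound, then
    spend the remaining mass on the largest objective coefficients
    first. Exact for a linear objective over a box intersected with
    the probability simplex.
    """
    w = list(lows)
    remaining = 1.0 - sum(lows)
    for j in sorted(range(len(coeffs)), key=lambda j: -coeffs[j]):
        step = min(remaining, highs[j] - lows[j])
        w[j] += step
        remaining -= step
    return w

# Case 2 of Example 6: objective coefficients from the model above and
# the box bounds we read out of H.
coeffs = (3.5614, 3.6045, 3.4614, 3.3783, 3.7011)
lows = (0.10, 0.20, 0.15, 0.20, 0.30)
highs = (1.00, 0.30, 1.00, 0.30, 0.40)
w = solve_weight_lp(coeffs, lows, highs)  # ~= [0.1, 0.2, 0.15, 0.2, 0.35]
```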
Using the weighting vector w, we obtain the aggregated threat grades of each malicious code by the IFWA operator:
Z_1 = ⟨0.6815, 0.2110⟩,  Z_2 = ⟨0.6168, 0.2036⟩,  Z_3 = ⟨0.6247, 0.1858⟩,  Z_4 = ⟨0.6791, 0.1743⟩,  Z_5 = ⟨0.6334, 0.1849⟩,
and their scores are calculated as
S ( Z 1 ) = 0.4704 ,   S ( Z 2 ) = 0.4131 ,   S ( Z 3 ) = 0.4389 ,   S ( Z 4 ) = 0.5048 ,   S ( Z 5 ) = 0.4484 ,
respectively. Then, we obtain the ranking order as
G_4 ≻ G_1 ≻ G_5 ≻ G_3 ≻ G_2.
If there is only one expert in a MAGDM problem, we do not need to fuse the results of different experts. Thus, we can deal with such cases by evaluating the attribute weight vector and then aggregating all the results under the different attributes. We use another example to compare the proposed method with other methods.
Example 7.
The cyber-defense unit will attack the malicious code with the maximum threat grade. In cyberspace security, cyber security researchers evaluate their own protection capabilities by evaluating malicious codes, and can judge the order in which malicious codes are difficult to discover in the system.
There are five pieces of malicious code for their choice:
  • G1, a backdoor;
  • G2, a Trojan-PWS;
  • G3, a Worm;
  • G4, a Trojan-Spy;
  • G5, a Trojan-Downloader.
The cyber security researchers evaluate these five malicious codes based on the following four attributes:
  • A1, the resource consumption;
  • A2, the self-starting ability;
  • A3, the concealment ability;
  • A4, the self-protection ability.
The results of evaluation using intuitionistic fuzzy information are
           A_1          A_2          A_3          A_4
R =  G_1   ⟨0.5, 0.4⟩   ⟨0.6, 0.3⟩   ⟨0.3, 0.6⟩   ⟨0.2, 0.7⟩
     G_2   ⟨0.7, 0.3⟩   ⟨0.7, 0.2⟩   ⟨0.7, 0.2⟩   ⟨0.4, 0.5⟩
     G_3   ⟨0.6, 0.4⟩   ⟨0.5, 0.4⟩   ⟨0.5, 0.3⟩   ⟨0.6, 0.3⟩
     G_4   ⟨0.8, 0.1⟩   ⟨0.6, 0.3⟩   ⟨0.3, 0.4⟩   ⟨0.2, 0.6⟩
     G_5   ⟨0.6, 0.2⟩   ⟨0.4, 0.3⟩   ⟨0.7, 0.1⟩   ⟨0.5, 0.3⟩
Case 1.
There is no information available for all attributes’ weights.
(1)
Using the distance measure DI and knowledge measure KI to get the average divergence and the amount of knowledge under all attributes, we obtain the divergence and knowledge matrix, respectively, shown as
DIV = (0.7468, 0.5484, 1.2190, 1.1117),  K = (2.8798, 2.4825, 2.5963, 2.5674)
The elements div_i and k_i of vectors DIV and K represent the average divergence degree and the knowledge amount under attribute A_i, respectively.
(2)
The weight factor of attribute Ai can be calculated as
w_i = \frac{div_i + k_i}{\sum_{j=1}^{4}\left(div_j + k_j\right)}, \quad i = 1, 2, 3, 4
We then obtain the weighting vector as w = (0.2563, 0.2142, 0.2696, 0.2600)^T.
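The weight formula above is a plain normalization of div_i + k_i, as a short sketch shows (the function name is ours):

```python
def attribute_weights(div, k):
    """Single-expert attribute weights: w_i normalizes div_i + k_i."""
    totals = [d + q for d, q in zip(div, k)]
    s = sum(totals)
    return [t / s for t in totals]

# Divergence and knowledge vectors from Example 7, Case 1.
div = (0.7468, 0.5484, 1.2190, 1.1117)
k = (2.8798, 2.4825, 2.5963, 2.5674)
w = attribute_weights(div, k)  # ~= (0.2563, 0.2142, 0.2696, 0.2600)
```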
(3)
Aggregate the evaluation results of each target under all attributes based on the weighting vector w and the IFWA operator. The final threat grades of five malicious code are:
Z_1 = ⟨0.4102, 0.4852⟩,  Z_2 = ⟨0.6408, 0.2816⟩,  Z_3 = ⟨0.5544, 0.3435⟩,  Z_4 = ⟨0.5337, 0.2930⟩,  Z_5 = ⟨0.5722, 0.2011⟩.
(4)
The score grades of all alternatives are computed as
S ( Z 1 ) = 0.0750 ,   S ( Z 2 ) = 0.3592 ,   S ( Z 3 ) = 0.2109 ,   S ( Z 4 ) = 0.2407 ,   S ( Z 5 ) = 0.3711 .
(5)
Thus, we rank all alternatives in order R as G_5 ≻ G_2 ≻ G_4 ≻ G_3 ≻ G_1.
For further analysis, we compare these results with the solution of Xia and Xu's method [9]. The weighting vector they obtained is w_c = (0.2659, 0.2486, 0.2370, 0.2486)^T and the ranking order is R_c: G_5 ≻ G_2 ≻ G_3 ≻ G_4 ≻ G_1. These ranking orders differ slightly owing to the distinction between the intuitionistic fuzzy measures used, but both yield the same optimal alternative, G_5.
We also obtain S i m ( w , w c ) = 0.9975 and ρ ( R , R c ) = 0.9 , indicating that the results achieved based on the method proposed in Section 5 are quite close to the results in [9].
Case 2.
Suppose that partial information on the attribute weights is available:
H = {0.15 ≤ w_1 ≤ 0.2; 0.16 ≤ w_2 ≤ 0.18; 0.3 ≤ w_3 ≤ 0.35; 0.3 ≤ w_4 ≤ 0.45}.
We can then build an optimization model to calculate the attribute weights:
\max\; T = (3.6266, 3.0310, 3.8153, 3.6791)\, w
\text{s.t.}\quad w \in H,\quad \sum_{j=1}^{4} w_j = 1,
and can obtain the weight vector:
w = (0.20, 0.16, 0.34, 0.30)^T.
Aggregating the threat grades of each target under all attributes using the IFWA operator, we obtain
Z_1 = ⟨0.3371, 0.5186⟩,  Z_2 = ⟨0.6307, 0.2855⟩,  Z_3 = ⟨0.5528, 0.3327⟩,  Z_4 = ⟨0.4814, 0.3270⟩,  Z_5 = ⟨0.5862, 0.1904⟩.
The score grades of these IFVs representing each target’s threat degree can be obtained as
S ( Z 1 ) = 0.1415 ,   S ( Z 2 ) = 0.3451 ,   S ( Z 3 ) = 0.2201 ,   S ( Z 4 ) = 0.1545 ,   S ( Z 5 ) = 0.3958 .
By comparing the score grades of the five IFVs, the ranking order of these five malicious codes' threat degrees can be obtained as G_5 ≻ G_2 ≻ G_3 ≻ G_4 ≻ G_1.
Using the method proposed in [9], the attribute weights are obtained as w_d = (0.19, 0.16, 0.35, 0.30)^T, and the corresponding ranking order is R_d: G_5 ≻ G_2 ≻ G_3 ≻ G_4 ≻ G_1.
It is shown that the weighting vector obtained by the proposed method is very close to that obtained by Xia and Xu [9] when partial information on the attribute weights is provided; the similarity degree between them is Sim(w, w_d) = 0.9996. The order yielded by our proposed method is identical to R_d, which appears to result from the constraints imposed by the partially known weight information.
These illustrative examples reveal the necessity of utilizing distance and knowledge measures to establish the attribute weights. They further demonstrate that the method proposed here handles intuitionistic fuzzy MAGDM problems reasonably and effectively, and they illustrate the applicability of our proposed knowledge measure. The method in [9] uses more complex entropy/cross-entropy measures with additional parameters but without specific physical meaning. Moreover, the hybrid aggregation operator used in [8] needs an associated weight vector to aggregate intuitionistic fuzzy information. Compared with the entropy/cross-entropy measures in [9], our distance and knowledge measures, with relatively concise expressions and specific physical meaning, also obtain reasonable solutions with the help of the original IFWA operator. Thus, our proposed method is more practical and easier to implement for solving MAGDM problems.

7. Conclusions

In this paper, we propose a knowledge measure based on our proposed intuitionistic fuzzy distance measure for the purpose of measuring the knowledge amount of AIFSs more accurately. The axiomatic definition of knowledge measure is refined from a more general view, after which we investigate the properties of the new distance-based knowledge measure. Mathematical analysis and numerical examples are provided to illustrate the proposed knowledge measure’s properties. To demonstrate the applicability of the proposed distance-based knowledge measure, we apply it to develop a new method of solving MAGDM problems with intuitionistic fuzzy information. Application examples combined with comparative analysis illustrate the effectiveness and rationality of our method.
In this paper, we only present a knowledge measure based on our proposed distance measure. The main features of the proposed knowledge measure are its succinct expression, good properties, and evident physical significance. This offers a new perspective on knowledge measures and uncertainty measures. Other kinds of knowledge measures can be obtained if other distance measures are applied, so the exploration of reasonable distance measures is critical for the definition of knowledge measures. Conversely, based on the relation between distance measures and uncertainty measures, new distance measures can also be developed from reasonable knowledge measures. Furthermore, syncretic research on distance, similarity, knowledge, and uncertainty measures is also attractive and worthwhile.

Author Contributions

Conceptualization, X.W. and Y.S.; methodology, X.W.; validation, Y.W., X.W. and Y.S.; formal analysis, X.W.; writing-original draft preparation, X.W.; writing-review and editing, Y.S.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of China under Grants No. 61703426, No. 61876189 and No. 61806219, by the Post-Doctoral Science Foundation of China under Grant 2018M633680, by the Young Talent Fund of the University Association for Science and Technology in Shaanxi, China, under Grant No. 20190108, and by the Innovation Capability Support Plan of Shaanxi, China, under Grant No. 2020KJXX-065.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1. 
(1) Given $D_I(A,B)=0$, we obtain, for all $i\in\{1,2,\dots,n\}$,
$$\frac{\mu_A(x_i)-v_A(x_i)}{2}=\frac{\mu_B(x_i)-v_B(x_i)}{2},\qquad \frac{\mu_A(x_i)+v_A(x_i)}{2}=\frac{\mu_B(x_i)+v_B(x_i)}{2},$$
which can be written identically as
$$\mu_A(x_i)-v_A(x_i)=\mu_B(x_i)-v_B(x_i),\qquad \mu_A(x_i)+v_A(x_i)=\mu_B(x_i)+v_B(x_i).$$
We then obtain $\mu_A(x_i)=\mu_B(x_i)$ and $v_A(x_i)=v_B(x_i)$ by adding and subtracting the above equations, respectively, $i=1,2,\dots,n$. Hence, for all elements $x\in X$, $\mu_A(x)=\mu_B(x)$ and $v_A(x)=v_B(x)$ hold simultaneously, which indicates that $A=B$.
For two AIFSs, $A$ and $B$, defined in $X=\{x_1,x_2,\dots,x_n\}$, we have the following relation:
$$A=B \iff \forall i\in\{1,2,\dots,n\},\ \mu_A(x_i)=\mu_B(x_i),\ v_A(x_i)=v_B(x_i) \implies D_I(A,B)=0.$$
We can conclude from the above analysis that $D_I(A,B)=0 \iff A=B$.
(2) It is straightforward that $D_I(A,B)=D_I(B,A)$.
(3) Three AIFSs, $A$, $B$, and $C$, defined in $X=\{x_1,x_2,\dots,x_n\}$, can be expressed as $A=\{\langle x,\mu_A(x),v_A(x)\rangle \mid x\in X\}$, $B=\{\langle x,\mu_B(x),v_B(x)\rangle \mid x\in X\}$, and $C=\{\langle x,\mu_C(x),v_C(x)\rangle \mid x\in X\}$, respectively. Considering the condition $A\subseteq B\subseteq C$, we have the relations $\mu_A(x_i)\le\mu_B(x_i)\le\mu_C(x_i)$ and $v_A(x_i)\ge v_B(x_i)\ge v_C(x_i)$.
The distance between AIFSs $A$ and $B$ can be written as
$$D_I(A,B)=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{\mu_A(x_i)-v_A(x_i)}{2}-\frac{\mu_B(x_i)-v_B(x_i)}{2}\right)^2+\frac{1}{3}\left(\frac{\mu_A(x_i)+v_A(x_i)}{2}-\frac{\mu_B(x_i)+v_B(x_i)}{2}\right)^2\right]},$$
and the distances $D_I(A,C)$ and $D_I(B,C)$ between $A$ and $C$ and between $B$ and $C$ take the analogous form with the corresponding membership and non-membership functions. We then construct a function $f(x,y)$ of two variables as
$$f(x,y)=\left[(x-y)-(a-b)\right]^2+\frac{1}{3}\left[(x+y)-(a+b)\right]^2,$$
where $0\le x\le 1$, $0\le y\le 1$, $0\le a\le 1$, and $0\le b\le 1$.
The partial derivatives of $f$ with respect to $x$ and $y$ can be obtained as follows:
$$\frac{\partial f}{\partial x}=2\left[(x-y)-(a-b)\right]+\frac{2}{3}\left[(x+y)-(a+b)\right]=\frac{8}{3}(x-a)+\frac{4}{3}(b-y)=\frac{4}{3}\left[2(x-a)+(b-y)\right],$$
$$\frac{\partial f}{\partial y}=-2\left[(x-y)-(a-b)\right]+\frac{2}{3}\left[(x+y)-(a+b)\right]=\frac{8}{3}(y-b)+\frac{4}{3}(a-x)=\frac{4}{3}\left[2(y-b)+(a-x)\right].$$
(i) Given the conditions $0\le a\le x\le 1$ and $0\le y\le b\le 1$, we obtain $\partial f/\partial x\ge 0$ and $\partial f/\partial y\le 0$. Thus, $f(x,y)$ is an increasing function of variable $x$ and a decreasing function of variable $y$.
Letting $a=\mu_A(x_i)$ and $b=v_A(x_i)$, we have $a=\mu_A(x_i)\le\mu_B(x_i)\le\mu_C(x_i)$ and $b=v_A(x_i)\ge v_B(x_i)\ge v_C(x_i)$, $i\in\{1,2,\dots,n\}$. Considering the monotonicity of $f(x,y)$, we have $f(\mu_B(x_i),v_B(x_i))\le f(\mu_C(x_i),v_C(x_i))$, $i\in\{1,2,\dots,n\}$.
Under the conditions $a=\mu_A(x_i)$, $b=v_A(x_i)$, $i\in\{1,2,\dots,n\}$, the following expressions hold:
$$D_I(A,C)=\frac{1}{2}\sqrt{\frac{1}{n}\sum_{i=1}^{n}f(\mu_C(x_i),v_C(x_i))},\qquad D_I(A,B)=\frac{1}{2}\sqrt{\frac{1}{n}\sum_{i=1}^{n}f(\mu_B(x_i),v_B(x_i))}.$$
Therefore, we have $D_I(A,B)\le D_I(A,C)$.
(ii) Under the conditions $0\le x\le a\le 1$ and $0\le b\le y\le 1$, we have $\partial f/\partial x\le 0$ and $\partial f/\partial y\ge 0$. Thus, $f(x,y)$ is a decreasing function of variable $x$ and an increasing function of variable $y$.
Setting $a=\mu_C(x_i)$ and $b=v_C(x_i)$, we have $\mu_A(x_i)\le\mu_B(x_i)\le\mu_C(x_i)=a$ and $v_A(x_i)\ge v_B(x_i)\ge v_C(x_i)=b$, $i\in\{1,2,\dots,n\}$. Considering the monotonicity of $f(x,y)$, we have $f(\mu_B(x_i),v_B(x_i))\le f(\mu_A(x_i),v_A(x_i))$, $i\in\{1,2,\dots,n\}$.
For $a=\mu_C(x_i)$, $b=v_C(x_i)$, $i\in\{1,2,\dots,n\}$, the following expressions hold:
$$D_I(A,C)=\frac{1}{2}\sqrt{\frac{1}{n}\sum_{i=1}^{n}f(\mu_A(x_i),v_A(x_i))},\qquad D_I(B,C)=\frac{1}{2}\sqrt{\frac{1}{n}\sum_{i=1}^{n}f(\mu_B(x_i),v_B(x_i))}.$$
Thus, we have $D_I(B,C)\le D_I(A,C)$.
Taking (i) and (ii) into account, we conclude that $D_I(A,B)\le D_I(A,C)$ and $D_I(B,C)\le D_I(A,C)$ under the condition $A\subseteq B\subseteq C$.
(4) The expression of $D_I(A,B)$ indicates that $D_I(A,B)\ge 0$, and $D_I(A,B)=0$ if $A=B$.
Define two AIFSs in $X=\{x_1,x_2,\dots,x_n\}$ as $F=\{\langle x,0,1\rangle \mid x\in X\}$ and $F^{C}=\{\langle x,1,0\rangle \mid x\in X\}$. For two AIFSs $A=\{\langle x,\mu_A(x),v_A(x)\rangle \mid x\in X\}$ and $B=\{\langle x,\mu_B(x),v_B(x)\rangle \mid x\in X\}$, we have $F\subseteq A\subseteq F^{C}$ and $F\subseteq B\subseteq F^{C}$ based on the basic relation between AIFSs.
These conditions imply that $D_I(A,B)\le D_I(F,F^{C})$. Since AIFSs $A$ and $B$ are arbitrary, the relation $D_I(A,B)\le D_I(F,F^{C})$ holds in the set of all AIFSs defined in $X$.
By Equation (9), the distance between $F$ and $F^{C}$ can be calculated as $D_I(F,F^{C})=1$. Hence, $D_I(A,B)\le 1$.
From the above analysis, we obtain $0\le D_I(A,B)\le 1$.
Therefore, the above analysis indicates that $D_I(A,B)$ satisfies all axiomatic conditions of a distance measure, and thus $D_I(A,B)$ is a distance measure for AIFSs. □
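The boundary and monotonicity arguments above can be checked numerically. Below is a minimal Python sketch of the distance measure as we reconstruct it from this proof; the function name `d_i` and the representation of an AIFS as a list of (mu, v) pairs are our own illustrative conventions, not notation from the paper.

```python
import math

def d_i(a, b):
    """Distance D_I between two AIFSs, each given as a list of (mu, v)
    pairs over the same universe X = {x_1, ..., x_n}."""
    n = len(a)
    total = 0.0
    for (mu_a, v_a), (mu_b, v_b) in zip(a, b):
        total += ((mu_a - v_a) / 2 - (mu_b - v_b) / 2) ** 2 \
               + ((mu_a + v_a) / 2 - (mu_b + v_b) / 2) ** 2 / 3
    return math.sqrt(total / n)

# Boundary case from the proof: F = <x, 0, 1> and F^C = <x, 1, 0>.
F, Fc = [(0.0, 1.0)], [(1.0, 0.0)]
assert abs(d_i(F, Fc) - 1.0) < 1e-12                       # property (4): maximum is 1

A, B, C = [(0.2, 0.6)], [(0.4, 0.4)], [(0.7, 0.1)]         # here A ⊆ B ⊆ C
assert d_i(A, A) == 0.0                                    # property (1)
assert d_i(A, B) == d_i(B, A)                              # property (2): symmetry
assert d_i(A, B) <= d_i(A, C) and d_i(B, C) <= d_i(A, C)   # property (3)
```

The example AIFSs A, B, and C are arbitrary values chosen only to satisfy the inclusion chain used in part (3) of the proof.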

Appendix B

Proof of Theorem 3.
To be a knowledge measure of AIFSs, $K_I(A)$ defined in Equation (19) must satisfy all axiomatic properties defined in Definition 7.
(KP1) Let $A$ be a crisp set. We then have $\mu_A(x_i)=1$, $v_A(x_i)=0$ or $\mu_A(x_i)=0$, $v_A(x_i)=1$, $i=1,2,\dots,n$, which implies that $\mu_A(x_i)+v_A(x_i)=1$ and $\left|\mu_A(x_i)-v_A(x_i)\right|=1$. Thus, $K_I(A)=1$.
Under the conditions $0\le\mu_A(x_i)\le 1$, $0\le v_A(x_i)\le 1$, and $0\le\mu_A(x_i)+v_A(x_i)\le 1$, $K_I(A)=1$ can be obtained only in the case of $\mu_A(x_i)+v_A(x_i)=1$ and $\left|\mu_A(x_i)-v_A(x_i)\right|=1$, $i\in\{1,2,\dots,n\}$. This indicates that for each $i\in\{1,2,\dots,n\}$, $\mu_A(x_i)=1$, $v_A(x_i)=0$ or $\mu_A(x_i)=0$, $v_A(x_i)=1$, which means that $A$ is a crisp set.
Hence, $K_I(A)=1$ if and only if $A$ is a crisp set.
(KP2) In the case where $\pi_A(x_i)=1$ for all $i\in\{1,2,\dots,n\}$, we have $\mu_A(x_i)=v_A(x_i)=0$, $i\in\{1,2,\dots,n\}$. Then $K_I(A)=0$ can be obtained by Equation (19).
The form of Equation (19) indicates that $K_I(A)=0$ can be obtained only in the case of $\frac{\mu_A(x_i)-v_A(x_i)}{2}=0$ and $\frac{\mu_A(x_i)+v_A(x_i)}{2}=0$ for all $i\in\{1,2,\dots,n\}$. This implies that $\mu_A(x_i)-v_A(x_i)=0$ and $\mu_A(x_i)+v_A(x_i)=0$, $i\in\{1,2,\dots,n\}$. Therefore, we have $\mu_A(x_i)=v_A(x_i)=0$ and $\pi_A(x_i)=1$, $i\in\{1,2,\dots,n\}$.
Thus, $K_I(A)$ complies with the property KP2.
(KP3) The expression of $K_I(A)$ can be rewritten as
$$K_I(A)=\sqrt{\frac{3}{4n}\sum_{i=1}^{n}\left[\left|\mu_A(x_i)-v_A(x_i)\right|^2+\frac{1}{3}\left(1-\pi_A(x_i)\right)^2\right]}.$$
It is clear that $K_I(A)$ is monotonically increasing with $\left|\mu_A(x_i)-v_A(x_i)\right|$ if $\pi_A(x_i)$ is fixed.
Since $0\le\pi_A(x_i)\le 1$ and hence $0\le 1-\pi_A(x_i)\le 1$, we can easily prove that $K_I(A)$ is monotonically decreasing with $\pi_A(x_i)$ when $\left|\mu_A(x_i)-v_A(x_i)\right|$ remains unchanged.
Thus, $K_I(A)$ complies with the property KP3.
(KP4) By the definition of $A^{C}$, it is evident that $K_I(A^{C})=K_I(A)$.
We note that $K_I(A)$ defined in Equation (19) complies with all properties in the axiomatic definition of a knowledge measure, so it is a knowledge measure for AIFSs. □
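As a quick sanity check on properties KP1, KP2, and KP4, here is a small Python sketch of the knowledge measure in the rewritten form used above. The function name `k_i` and the (mu, v)-pair input format are illustrative assumptions; the formula is our reconstruction of Equation (19).

```python
import math

def k_i(a):
    """Knowledge measure K_I of an AIFS given as a list of (mu, v) pairs,
    with hesitancy pi = 1 - mu - v."""
    n = len(a)
    total = 0.0
    for mu, v in a:
        pi = 1.0 - mu - v
        total += (mu - v) ** 2 + (1.0 - pi) ** 2 / 3
    return math.sqrt(3.0 * total / (4.0 * n))

assert abs(k_i([(1.0, 0.0), (0.0, 1.0)]) - 1.0) < 1e-12    # KP1: crisp set gives 1
assert k_i([(0.0, 0.0)]) == 0.0                            # KP2: pi = 1 gives 0
assert abs(k_i([(0.3, 0.4)]) - k_i([(0.4, 0.3)])) < 1e-12  # KP4: K_I(A^C) = K_I(A)
```

Note that swapping mu and v in each pair yields the complementary set, so the last assertion exercises KP4 directly.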

References

  1. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  2. Atanassov, K. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–46. [Google Scholar] [CrossRef]
  3. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  4. Bustince, H.; Barrenechea, M.E.; Fernandez, J.P.; Xu, Z.; Bedregal, B.; Montero, J.; Hagras, H.; Herrera, F.; De Baets, B. A historical account of types of fuzzy sets and their relationships. IEEE Trans. Fuzzy Syst. 2016, 24, 179–194. [Google Scholar] [CrossRef] [Green Version]
  5. Couso, I.; Bustince, H. From fuzzy sets to interval-valued and Atanassov intuitionistic fuzzy sets: A unified view of different axiomatic measures. IEEE Trans. Fuzzy Syst. 2019, 27, 362–371. [Google Scholar] [CrossRef]
  6. Atanassov, K.T.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
  7. Bustince, H. Indicator of inclusion grade for interval-valued fuzzy sets. Application to approximate reasoning based on interval-valued fuzzy sets. Int. J. Approx. Reason. 2000, 23, 137–209. [Google Scholar] [CrossRef] [Green Version]
  8. Papakostas, G.A.; Hatzimichailidis, A.G.; Kaburlasos, V.G. Distance and similarity measures between intuitionistic fuzzy sets: A comparative analysis from a pattern recognition point of view. Pattern Recognit. Lett. 2013, 34, 1609–1622. [Google Scholar] [CrossRef]
  9. Xia, M.; Xu, Z. Entropy/cross entropy-based group decision making under intuitionistic fuzzy environment. Inf. Fusion 2012, 13, 31–47. [Google Scholar] [CrossRef]
  10. Ye, J. Fuzzy decision-making method based on the weighted correlation coefficient under intuitionistic fuzzy environment. Eur. J. Oper. Res. 2010, 205, 202–204. [Google Scholar] [CrossRef]
  11. Liu, Z.; Pan, Q.; Dezert, J.; Martin, A. Combination of classifiers with optimal weight based on evidential reasoning. IEEE Trans. Fuzzy Syst. 2018, 26, 1217–1230. [Google Scholar] [CrossRef] [Green Version]
  12. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32. [Google Scholar] [CrossRef]
  13. Song, Y.; Wang, X.; Lei, L.; Quan, W.; Huang, W. An evidential view of similarity measure for Atanassov’s intuitionistic fuzzy sets. J. Intell. Fuzzy Syst. 2016, 31, 1653–1668. [Google Scholar] [CrossRef]
  14. Wang, X.; Song, Y. Uncertainty measure in evidence theory with its applications. Appl. Intell. 2018, 48, 1672–1688. [Google Scholar] [CrossRef]
  15. Kumar, K.; Chen, S.-M. Multiattribute decision making based on the improved intuitionistic fuzzy Einstein weighted averaging operator of intuitionistic fuzzy values. Inf. Sci. 2021, 568, 369–383. [Google Scholar] [CrossRef]
  16. Kamal, K.; Chen, S.-M. Multiattribute decision making based on interval-valued intuitionistic fuzzy values, score function of connection numbers, and the set pair analysis theory. Inf. Sci. 2021, 551, 100–112. [Google Scholar]
  17. Peng, W.A.; Deng Feng, L.; Ping Ping, L.; Jiang, B.-Q. An information-based score function of interval-valued intuitionistic fuzzy sets and its application in multiattribute decision making. Soft Comput. 2020, 25, 1913–1923. [Google Scholar]
  18. Li, S.; Yang, J.; Wang, G.; Xu, T. Multi-granularity distance measure for interval-valued intuitionistic fuzzy concepts. Inf. Sci. 2021, 570, 599–622. [Google Scholar] [CrossRef]
  19. Shannon, C.E. A mathematical theory of communication. Mob. Comput. Commun. Rev. 2001, 5, 3–55. [Google Scholar] [CrossRef]
  20. De Luca, A.; Termini, S. A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory. Inf. Control 1972, 20, 301–312. [Google Scholar] [CrossRef] [Green Version]
  21. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 1996, 78, 305–316. [Google Scholar] [CrossRef]
  22. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [Google Scholar] [CrossRef]
  23. Wang, W.; Xin, X. Distance measure between intuitionistic fuzzy sets. Pattern Recognit. Lett. 2005, 26, 2063–2069. [Google Scholar] [CrossRef]
  24. Garg, H. Generalized intuitionistic fuzzy entropy-based approach for solving multi-attribute decision-making problems with unknown attribute weights. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2019, 89, 129–139. [Google Scholar] [CrossRef]
  25. Garg, H.; Kaur, J. A Novel (R,S)-norm entropy measure of intuitionistic fuzzy sets and its applications in multi-attribute decision-making. Mathematics 2018, 6, 92. [Google Scholar] [CrossRef] [Green Version]
  26. Song, Y.; Wang, X.; Wu, W.; Lei, L.; Quan, W. Uncertainty measure for Atanassov’s intuitionistic fuzzy sets. Appl. Intell. 2017, 46, 757–774. [Google Scholar] [CrossRef]
  27. Szmidt, E.; Kacprzyk, J.; Bujnowski, P. How to measure the amount of knowledge conveyed by Atanassov’s intuitionistic fuzzy sets. Inf. Sci. 2014, 257, 276–285. [Google Scholar] [CrossRef]
  28. Pal, N.R.; Bustince, H.; Pagola, M.; Mukherjee, U.K.; Goswami, D.P.; Beliakov, G. Uncertainties with Atanassov’s intuitionistic fuzzy sets: Fuzziness and lack of knowledge. Inf. Sci. 2013, 228, 61–74. [Google Scholar] [CrossRef]
  29. Das, S.; Dutta, B.; Guha, D. Weight computation of criteria in a decision-making problem by knowledge measure with intuitionistic fuzzy set and interval-valued intuitionistic fuzzy set. Soft Comput. 2016, 20, 3421–3442. [Google Scholar] [CrossRef]
  30. Nguyen, H. A new knowledge-based measure for intuitionistic fuzzy sets and its application in multiple attribute group decision making. Expert. Syst. Appl. 2015, 42, 8766–8774. [Google Scholar] [CrossRef]
  31. Guo, K. Knowledge measures for Atanassov’s intuitionistic fuzzy sets. IEEE Trans. Fuzzy Syst. 2016, 24, 1072–1078. [Google Scholar] [CrossRef]
  32. Mao, J.; Yao, D.; Wang, C. A novel cross-entropy and entropy measures of IFSs and their applications. Knowl.-Based Syst. 2013, 48, 37–45. [Google Scholar] [CrossRef]
  33. Yager, R.R. On the measure of fuzziness and negation. Part I. Membership in unit interval. Int. J. Gen. Syst. 1979, 5, 221–229. [Google Scholar] [CrossRef]
  34. Das, S.; Guha, D.; Mesiar, R. Information measures in the intuitionistic fuzzy framework and their relationships. IEEE Trans. Fuzzy Syst. 2018, 26, 1626–1637. [Google Scholar] [CrossRef]
  35. Montero, J.; Gómez, D.; Bustince, H. On the relevance of some families of fuzzy sets. Fuzzy Sets Syst. 2007, 158, 2429–2442. [Google Scholar] [CrossRef]
  36. Chen, S.M.; Tan, J.M. Handling multicriteria fuzzy decision making problems based on vague set theory. Fuzzy Sets Syst. 1994, 67, 163–172. [Google Scholar] [CrossRef]
  37. Hong, D.H.; Choi, C.H. Multicriteria fuzzy decision-making problems based on vague set theory. Fuzzy Sets Syst. 2000, 114, 103–113. [Google Scholar] [CrossRef]
  38. Xu, Z. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187. [Google Scholar]
  39. Bustince, H.; Barrenechea, E.; Pagola, M. Image thresholding using restricted equivalence functions and maximizing the measures of similarity. Fuzzy Sets Syst. 2007, 158, 496–516. [Google Scholar] [CrossRef]
  40. Chen, S.M.; Chang, C.H. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on transformation techniques with applications to pattern recognition. Inf. Sci. 2015, 291, 96–114. [Google Scholar] [CrossRef]
  41. Song, Y.; Zhu, J.; Lei, L.; Wang, X. Self-adaptive combination method for temporal evidence based on negotiation strategy. Sci. China Inf. Sci. 2020, 63, 210204:1–210204:13. [Google Scholar] [CrossRef]
  42. Lei, L.; Song, Y.; Luo, X. A new re-encoding ECOC using reject option. Appl. Intell. 2020, 50, 3090–3100. [Google Scholar] [CrossRef]
  43. Song, Y.; Fu, Q.; Wang, Y.; Wang, X. Divergence-based cross entropy and uncertainty measures of Atanassov’s intuitionistic fuzzy sets with their application in decision making. Appl. Soft Comput. 2019, 84, 105703. [Google Scholar] [CrossRef]
  44. Li, D.; Cheng, C. New similarity measures of intuitionistic fuzzy sets and application to pattern recognition. Pattern Recognit. Lett. 2002, 23, 221–225. [Google Scholar]
  45. Song, Y.; Wang, X.; Lei, L.; Xue, A. A novel similarity measure on intuitionistic fuzzy sets with its applications. Appl. Intell. 2015, 42, 252–261. [Google Scholar] [CrossRef]
  46. Garg, H.; Kumar, K. A novel exponential distance and its based TOPSIS method for interval-valued intuitionistic fuzzy sets using connection number of SPA theory. Artif. Intell. Rev. 2020, 53, 595–624. [Google Scholar] [CrossRef]
  47. Garg, H.; Kumar, K. Distance measures for connection number sets based on set pair analysis and its applications to decision-making process. Appl. Intell. 2018, 48, 3346–3359. [Google Scholar] [CrossRef]
  48. Rani, D.; Garg, H. Distance measures between the complex intuitionistic fuzzy sets and its applications to the decision-making process. Int. J. Uncertain. Quantif. 2017, 7, 423–439. [Google Scholar] [CrossRef]
  49. Garg, H.; Kumar, K. A novel possibility measure to interval-valued intuitionistic fuzzy set using connection number of set pair analysis and its applications. Neural Comput. Applic. 2020, 32, 3337–3348. [Google Scholar] [CrossRef]
  50. Song, Y.; Wang, X.; Quan, W.; Huang, W. A new approach to construct similarity measure for intuitionistic fuzzy sets. Soft Comput. 2019, 23, 1985–1998. [Google Scholar] [CrossRef]
  51. Irpino, A.; Verde, R. Dynamic clustering of interval data using a Wasserstein-based distance. Pattern Recognit. Lett. 2008, 29, 1648–1658. [Google Scholar] [CrossRef]
  52. Tran, L.; Duckstein, L. Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Sets Syst. 2002, 130, 331–341. [Google Scholar] [CrossRef]
  53. Szmidt, E.; Kacprzyk, J. Distances between intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114, 505–518. [Google Scholar] [CrossRef]
  54. Ye, J. Cosine similarity measures for intuitionistic fuzzy sets and their applications. Math. Comput. Model. 2011, 53, 91–97. [Google Scholar] [CrossRef]
  55. Li, J.Q.; Deng, G.N.; Li, H.X.; Zeng, W.Y. The relationship between similarity measure and entropy of intuitionistic fuzzy sets. Inf. Sci. 2012, 188, 314–321. [Google Scholar] [CrossRef]
  56. Zeng, W.Y.; Li, H.X. Relationship between similarity measure and entropy of interval-valued fuzzy sets. Fuzzy Sets Syst. 2006, 157, 1477–1484. [Google Scholar] [CrossRef]
  57. Zhang, H.Y.; Zhang, W.X.; Mei, C.L. Entropy of interval-valued fuzzy sets based on distance and its relationship with similarity measure. Knowl. Based Syst. 2009, 22, 449–454. [Google Scholar] [CrossRef]
  58. Zhang, Q.S.; Jiang, S.Y. A note on information entropy measures for vague sets and its applications. Inf. Sci. 2008, 178, 4184–4191. [Google Scholar] [CrossRef]
  59. De, S.K.; Biswas, R.; Roy, A.R. Some operations on intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114, 477–484. [Google Scholar] [CrossRef]
  60. Hung, W.L.; Yang, M.S. Fuzzy entropy on intuitionistic fuzzy sets. Int. J. Intell. Syst. 2006, 21, 443–451. [Google Scholar] [CrossRef]
  61. Vlachos, I.K.; Sergiadis, G.D. Intuitionistic fuzzy information applications to pattern recognition. Pattern Recognit. Lett. 2007, 28, 197–206. [Google Scholar] [CrossRef]
  62. Li, D.; Wang, Y.; Liu, S.; Shan, F. Fractional programming methodology for multi-attribute group decision-making using IFS. Appl. Soft Comput. 2009, 9, 219–225. [Google Scholar] [CrossRef]
  63. Xu, Z. An overview of methods for determining OWA weights. Int. J. Intell. Syst. 2005, 20, 843–865. [Google Scholar] [CrossRef]
  64. Wei, G.W. Maximizing deviation method for multiple attribute decision making in intuitionistic fuzzy setting. Knowl.-Based Syst. 2008, 21, 833–836. [Google Scholar] [CrossRef]
  65. Kahraman, C.; Engin, O.; Kabak, O.; Kayaet, I. Information systems outsourcing decisions using a group decision-making approach. Eng. Appl. Artif. Intell. 2009, 22, 832–841. [Google Scholar] [CrossRef]
Figure 1. Knowledge amount $K_I$ of AIFSs defined in $X=\{x\}$.
Table 1. Entropy/knowledge measures used for comparative analysis.

Zeng and Li [56]: $E_{ZL}(A)=1-\frac{1}{n}\sum_{i=1}^{n}\left|\mu_A(x_i)-v_A(x_i)\right|$
Zhang, Zhang, and Mei [57]: $E_{ZA}(A)=1-\sqrt{\frac{2}{n}\sum_{i=1}^{n}\left[\left(\mu_A(x_i)-0.5\right)^2+\left(1-v_A(x_i)-0.5\right)^2\right]}$
Zhang, Zhang, and Mei [57]: $E_{ZB}(A)=1-\frac{1}{n}\sum_{i=1}^{n}\left[\left|\mu_A(x_i)-0.5\right|+\left|1-v_A(x_i)-0.5\right|\right]$
Zhang, Zhang, and Mei [57]: $E_{ZC}(A)=1-\frac{2}{n}\sum_{i=1}^{n}\max\left(\left|\mu_A(x_i)-0.5\right|,\left|1-v_A(x_i)-0.5\right|\right)$
Zhang, Zhang, and Mei [57]: $E_{ZD}(A)=1-\frac{4}{n}\sum_{i=1}^{n}\max\left(\left(\mu_A(x_i)-0.5\right)^2,\left(1-v_A(x_i)-0.5\right)^2\right)$
Zhang, Zhang, and Mei [57]: $E_{ZE}(A)=1-\frac{2}{n}\sum_{i=1}^{n}\left[\frac{\left|\mu_A(x_i)-0.5\right|+\left|1-v_A(x_i)-0.5\right|}{4}+\frac{\max\left(\left|\mu_A(x_i)-0.5\right|,\left|1-v_A(x_i)-0.5\right|\right)}{2}\right]$
Burillo and Bustince [21]: $E_{BB}(A)=\frac{1}{n}\sum_{i=1}^{n}\left(1-\mu_A(x_i)-v_A(x_i)\right)$
Szmidt and Kacprzyk [22]: $E_{SK}(A)=\frac{1}{n}\sum_{i=1}^{n}\frac{\min\left(\mu_A(x_i),v_A(x_i)\right)+\pi_A(x_i)}{\max\left(\mu_A(x_i),v_A(x_i)\right)+\pi_A(x_i)}$
Hung and Yang [60]: $E_{HC2}(A)=\frac{1}{n}\sum_{i=1}^{n}\left(1-\mu_A(x_i)^2-v_A(x_i)^2-\pi_A(x_i)^2\right)$
Hung and Yang [60]: $E_{S}(A)=-\frac{1}{n}\sum_{i=1}^{n}\left[\mu_A(x_i)\ln\mu_A(x_i)+v_A(x_i)\ln v_A(x_i)+\pi_A(x_i)\ln\pi_A(x_i)\right]$
Vlachos and Sergiadis [61]: $E_{VS}(A)=-\frac{1}{n\ln 2}\sum_{i=1}^{n}\left[\mu_A(x_i)\ln\mu_A(x_i)+v_A(x_i)\ln v_A(x_i)-\left(1-\pi_A(x_i)\right)\ln\left(1-\pi_A(x_i)\right)\right]+\frac{1}{n}\sum_{i=1}^{n}\pi_A(x_i)$
Zhang and Jiang [58]: $E_{ZJ}(A)=\frac{1}{n}\sum_{i=1}^{n}\frac{\min\left(\mu_A(x_i),v_A(x_i)\right)}{\max\left(\mu_A(x_i),v_A(x_i)\right)}$
Li, Deng, Li, et al. [55]: $E_{LDL}(A)=1-\frac{1}{2n}\sum_{i=1}^{n}\left(\left|\mu_A(x_i)-v_A(x_i)\right|^3+\left|\mu_A(x_i)-v_A(x_i)\right|\right)$
Szmidt, Kacprzyk, and Bujnowski [27]: $K_{SKB}(A)=1-\frac{1}{2n}\sum_{i=1}^{n}\left[\frac{\min\left(\mu_A(x_i),v_A(x_i)\right)+\pi_A(x_i)}{\max\left(\mu_A(x_i),v_A(x_i)\right)+\pi_A(x_i)}+\pi_A(x_i)\right]$
Nguyen [30]: $K_{N}(A)=\frac{1}{n\sqrt{2}}\sum_{i=1}^{n}\sqrt{\mu_A(x_i)^2+v_A(x_i)^2+\left(\mu_A(x_i)+v_A(x_i)\right)^2}$
Guo [31]: $K_{G}(A)=1-\frac{1}{2n}\sum_{i=1}^{n}\left(1-\left|\mu_A(x_i)-v_A(x_i)\right|\right)\left(1+\pi_A(x_i)\right)$
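To make the comparison reproducible, the following Python sketch implements two of the measures listed in Table 1, the Szmidt–Kacprzyk entropy E_SK and Guo's knowledge measure K_G. The formulas follow our reading of the table entries, and the function names and the (mu, v)-pair input format are illustrative assumptions.

```python
def e_sk(a):
    """Szmidt-Kacprzyk entropy E_SK; a is a list of (mu, v) pairs,
    with hesitancy pi = 1 - mu - v."""
    total = 0.0
    for mu, v in a:
        pi = 1.0 - mu - v
        total += (min(mu, v) + pi) / (max(mu, v) + pi)
    return total / len(a)

def k_g(a):
    """Guo's knowledge measure K_G; a is a list of (mu, v) pairs."""
    total = 0.0
    for mu, v in a:
        pi = 1.0 - mu - v
        total += (1.0 - abs(mu - v)) * (1.0 + pi)
    return 1.0 - total / (2.0 * len(a))

crisp, unknown = [(1.0, 0.0)], [(0.0, 0.0)]
assert e_sk(crisp) == 0.0 and e_sk(unknown) == 1.0   # entropy: 0 on crisp, 1 on fully unknown
assert k_g(crisp) == 1.0 and k_g(unknown) == 0.0     # knowledge measure: the reverse
```

The two assertions illustrate the complementary behavior of entropy and knowledge measures at the extreme cases, which is the contrast that Tables 2 and 3 examine on graded examples.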
Table 2. Comparative results of all AIFSs with respect to A (counter-intuitive results are in bold type).

Measure | A^0.5 | A | A^2 | A^3 | A^4
E_ZL | 0.4156 | 0.4200 | 0.2380 | 0.1546 | 0.1217
E_ZA | 0.3214 | 0.3043 | 0.1974 | 0.1330 | 0.0979
E_ZB | 0.4156 | 0.4200 | 0.2380 | 0.1546 | 0.1217
E_ZC | 0.3338 | 0.3200 | 0.1400 | 0.0612 | 0.0283
E_ZD | 0.2777 | 0.2463 | 0.1188 | 0.0562 | 0.0271
E_ZE | 0.3747 | 0.3700 | 0.1890 | 0.1079 | 0.0750
E_BB | 0.0818 | 0.1000 | 0.0980 | 0.0934 | 0.0934
E_SK | 0.3446 | 0.3740 | 0.1970 | 0.1309 | 0.1094
E_HC2 | 0.3416 | 0.3440 | 0.2610 | 0.1993 | 0.1613
E_S | 0.5811 | 0.5874 | 0.4555 | 0.3489 | 0.2778
E_VS | 0.5518 | 0.5217 | 0.3491 | 0.2357 | 0.1733
E_ZJ | 0.2851 | 0.3050 | 0.1042 | 0.0383 | 0.0161
E_LDL | 0.5083 | 0.5019 | 0.3454 | 0.2516 | 0.2001
K_SKB | 0.7868 | 0.7630 | 0.8525 | 0.8879 | 0.8986
K_N | 0.8585 | 0.8471 | 0.8738 | 0.8927 | 0.8999
K_G | 0.7665 | 0.7610 | 0.8651 | 0.9108 | 0.9257
K_I | 0.7059 | 0.7098 | 0.8066 | 0.8624 | 0.8858
Table 3. Comparative results of all AIFSs with respect to B (counter-intuitive results are in bold type).

Measure | B^0.5 | B | B^2 | B^3 | B^4
E_ZL | 0.4291 | 0.4400 | 0.2160 | 0.1364 | 0.1082
E_ZA | 0.3310 | 0.3072 | 0.1868 | 0.1193 | 0.0859
E_ZB | 0.4291 | 0.4400 | 0.2160 | 0.1364 | 0.1082
E_ZC | 0.3608 | 0.3600 | 0.1400 | 0.0612 | 0.0283
E_ZD | 0.2960 | 0.2517 | 0.1188 | 0.0562 | 0.0271
E_ZE | 0.3950 | 0.4000 | 0.1780 | 0.0988 | 0.0683
E_BB | 0.0683 | 0.0800 | 0.0760 | 0.0752 | 0.0800
E_SK | 0.3518 | 0.4073 | 0.1677 | 0.1101 | 0.0950
E_HC2 | 0.3355 | 0.3280 | 0.2328 | 0.1708 | 0.1379
E_S | 0.5494 | 0.5374 | 0.3929 | 0.2905 | 0.2295
E_VS | 0.5640 | 0.5233 | 0.3369 | 0.2212 | 0.1612
E_ZJ | 0.3042 | 0.3450 | 0.0927 | 0.0349 | 0.0151
E_LDL | 0.5191 | 0.5120 | 0.3279 | 0.2290 | 0.1791
K_SKB | 0.7899 | 0.7563 | 0.8782 | 0.9074 | 0.9125
K_N | 0.8680 | 0.8641 | 0.8950 | 0.9108 | 0.9133
K_G | 0.7633 | 0.7600 | 0.8828 | 0.9230 | 0.9337
K_I | 0.7038 | 0.7182 | 0.8272 | 0.8804 | 0.8992

Share and Cite

Wu, X.; Song, Y.; Wang, Y. Distance-Based Knowledge Measure for Intuitionistic Fuzzy Sets with Its Application in Decision Making. Entropy 2021, 23, 1119. https://doi.org/10.3390/e23091119
