Abstract
The information aggregation operator plays a key role in group decision making problems. The aim of this paper is to investigate information aggregation operator methods under the picture fuzzy environment with the help of Einstein norm operations. The picture fuzzy set is an extended version of the intuitionistic fuzzy set, which not only considers the degree of acceptance or rejection but also takes into account the neutral degree during the analysis. Under these environments, some basic aggregation operators, namely picture fuzzy Einstein weighted and Einstein ordered weighted operators, are proposed in this paper. Some properties of these aggregation operators are discussed in detail. Further, a group decision making problem is illustrated and validated through a numerical example. A comparative analysis of the proposed and existing studies is performed to show the validity of the proposed operators.
Introduction
The core idea of fuzzy set (FS) theory was first developed by Zadeh [55] in 1965. In this theory, Zadeh discussed only the positive membership degree of an element. FS theory has been applied in many fields of the real world, such as clustering analysis [51], decision making problems [29], medical diagnosis [12], and pattern recognition [32]. Unfortunately, FS theory falls short because it carries no information about the negative membership degree. Atanassov covered this gap by including the negative membership degree: the core idea of intuitionistic fuzzy set (IFS) theory was developed by Atanassov [4] in 1986. IFS theory is an extension of FS theory in which both the positive and the negative membership degrees are considered, and the sum of the positive and negative membership degrees is less than or equal to 1. After the introduction of IFS theory, many researchers played an important role in developing it further and proposed different techniques for processing information values by utilizing different operators [8, 9, 18, 24, 26, 28, 34], information measures [7, 35, 39], and score and accuracy functions [25] under these environments. In particular, information aggregation is an interesting and important research topic in IFS theory that has been receiving more and more attention since the work of Xu et al. [52] in 2010. Atanassov [4, 6] defined some basic operations and relations of IFSs, including intersection, union, complement, algebraic sum and algebraic product, and proved an equality relation between IFSs [5]. However, the above operators use the algebraic t-norm and t-conorm (one particular Archimedean pair) for the aggregation process, while the Einstein t-norm and t-conorm provide a good alternative approximation for the sum and product of intuitionistic fuzzy numbers (IFNs) in place of the algebraic sum and product. Wang and Liu [44] proposed some geometric aggregation operators based on Einstein operations for intuitionistic fuzzy information, and Wang and Liu [45] proposed averaging operators using Einstein operations. Zhao and Wei [57] defined hybrid averaging and geometric aggregation operators by using Einstein operations. Apart from this, various researchers have paid attention to IFSs for aggregating different alternatives using different aggregation operators [13, 14, 19,20,21, 23, 27, 33, 38, 53, 54, 56].
In real life, there are problems that cannot be represented in IFS theory. For example, in a voting situation, human opinions may include more answers of the types: yes, no, abstain and refusal. Therefore, Cuong [10] covered this gap by adding a neutral membership function to IFS theory. Cuong [10] introduced the core idea of the picture fuzzy set (PFS) model, which is an extension of the IFS model. In PFS theory, a neutral degree is added alongside the positive and negative membership degrees of IFS theory, and the only constraint is that the sum of the positive, neutral and negative membership degrees is less than or equal to 1. In 2014, Phong et al. [37] developed some compositions of picture fuzzy relations. In 2015, Singh developed correlation coefficients for PFSs. Cuong et al. [11], in 2015, gave the core idea of some fuzzy logic operations for PFSs. Thong et al. [41] developed a multi-variable fuzzy forecasting approach utilizing picture fuzzy clustering and a picture fuzzy rule interpolation system. Son [40] presented generalized picture distance measures and their applications. Wei [46] introduced the picture fuzzy cross-entropy for MADM problems, and Wei [47] introduced picture fuzzy aggregation operators and their applications. The projection model for MADM in a picture fuzzy environment was presented by Wei et al. [50]. Bipolar 2-tuple linguistic aggregation operators for MADM were introduced by Lu et al. [36]. In 2017, Wei [48] developed the concept of some cosine similarity measures for PFSs, and Wei [49] introduced the basic idea of the picture 2-tuple linguistic Bonferroni mean operator and its application to MADM problems. Apart from these, other scholars working in the field of picture fuzzy set theory have introduced different types of decision making approaches (Wang et al. [42], Wang et al. [43]). Different types of aggregation operators have also been defined for cubic fuzzy numbers, Pythagorean fuzzy numbers [31] and spherical fuzzy numbers [2, 3, 15,16,17, 30].
It is clear that the above aggregation operators are based on the algebraic operational laws of PFSs for carrying out the combination process. The basic algebraic operations of PFSs are the algebraic product and the algebraic sum, which are not the only operations that can be chosen to model the intersection and union of PFSs. A good alternative to the algebraic product is the Einstein product, which typically gives the same smooth approximation as the algebraic product; likewise, a good alternative to the algebraic sum, for the union, is the Einstein sum. Moreover, there is little investigation in the literature of aggregation techniques that use Einstein operations on PFSs for aggregating a collection of information. Therefore, the focus of this paper is to develop some information aggregation operators based on Einstein operations on PFSs.
The remaining part of this paper is organized as follows. In the "Preliminaries" section, we give some basic definitions of IFSs, PFSs and the score and accuracy functions. In the "Einstein operations of picture fuzzy sets" section, we propose picture fuzzy Einstein operations. In the "Picture fuzzy Einstein arithmetic averaging operators" section, we introduce some picture fuzzy Einstein arithmetic averaging operators. In the "Application of the picture fuzzy Einstein weighted averaging operator to multiple attribute decision making" section, we discuss the application of the picture fuzzy Einstein weighted averaging operator to a multiple attribute decision making problem and solve an MADM problem to illustrate the practicality of the picture fuzzy Einstein operators. The conclusion of the paper is given in the last section.
Preliminaries
Definition 1
[4, 6] An IFS \(\beta\) defined on \(\hat{U}\ne \phi\) is an ordered pair of the form
where \(\mu _{\beta }\left( \hat{u}\right),\Upsilon _{\beta }\left( \hat{u} \right) \in \left[ 0,1\right]\) and defined as \(\mu _{\beta }\left( \hat{u} \right),\Upsilon _{\beta }\left( \hat{u}\right) :\hat{U}\rightarrow \left[ 0,1\right]\) for all \(\hat{u}\in \hat{U}.\) Hence, \(\mu _{\beta }\left( \hat{u}\right),\Upsilon _{\beta }\left( \hat{u}\right)\) are called the degree of membership and non-membership functions, respectively. The pair \(\left\langle \mu _{\beta }\left( \hat{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) is called the IFN or IPV, where \(\mu _{\beta }\left( \hat{u}\right)\) and \(\Upsilon _{\beta }\left( \hat{u} \right)\) satisfy the following condition for all \(\hat{u}\in \hat{U}\).
Definition 2
[22] A PFS \(\beta\) on \(\hat{U}\ne \phi\) is defined as
where \(0\le \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \le 1\) are called the membership, neutral and non-membership degrees of the function, respectively, satisfying the condition \(\mu _{\beta }\left( \hat{u}\right) +\eta _{\beta }\left( \breve{u}\right) +\Upsilon _{\beta }\left( \hat{u}\right) \in \left[ 0,1\right]\) for all \(\hat{u}\in \hat{U}.\) Furthermore, for all \(\hat{u}\in \hat{U},\) \(\Phi _{\beta }=1-\mu _{\beta }\left( \hat{u}\right) -\eta _{\beta }\left( \breve{u}\right) -\Upsilon _{\beta }\left( \hat{u}\right)\) is said to be the degree of refusal membership, and the pair \(\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) is called the PFN or PFV. Note that every IFS can be written as

If we take \(\eta _{\beta }\left( \breve{u}\right) \ne 0\) in Eq. (3), then we obtain a PFS
Basically, PFS models are used in cases in which human opinions involve more answers, i.e., "yes", "no", "abstain" and "refusal". A group of students in a department is a good example of a PFS. Suppose the group is asked about visiting two places, Islamabad and Lahore: some students want to visit Islamabad but not Lahore (membership), some want to visit Lahore but not Islamabad (non-membership), some would like to visit both places (neutral students), and a few students do not want to visit either place (refusal).
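To make the PFS/PFN notions above concrete, a small sketch is given below (Python; the class name, helper and the numeric values are our own illustrative choices, not taken from the paper). It stores a PFN as a triple \((\mu, \eta, \Upsilon )\), enforces the constraint \(\mu +\eta +\Upsilon \le 1\) and returns the refusal degree \(\Phi =1-\mu -\eta -\Upsilon .\)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PFN:
    """A picture fuzzy number: membership, neutral and non-membership degrees."""
    mu: float    # positive membership
    eta: float   # neutral membership
    ups: float   # negative (non-)membership

    def __post_init__(self):
        # each degree lies in [0, 1] and their sum may not exceed 1
        if not all(0.0 <= d <= 1.0 for d in (self.mu, self.eta, self.ups)):
            raise ValueError("each degree must lie in [0, 1]")
        if self.mu + self.eta + self.ups > 1.0 + 1e-12:
            raise ValueError("mu + eta + ups must not exceed 1")

    @property
    def refusal(self) -> float:
        """Degree of refusal membership: 1 - mu - eta - ups."""
        return 1.0 - self.mu - self.eta - self.ups

# a hypothetical voter: partly for, partly neutral, partly against, rest refusal
v = PFN(0.5, 0.2, 0.2)
print(v.refusal)   # about 0.1
```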
Definition 3
[22] Let \(\beta =\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) and \(\varrho =\left\langle \mu _{\varrho }\left( \hat{u}\right),\eta _{\varrho }\left( \breve{u}\right),\Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle\) be two PFNs on \(\hat{U}.\) Then:
1) \(\beta \otimes \varrho =\left\langle \mu _{\beta }\left( \hat{u}\right) \cdot \mu _{\varrho }\left( \hat{u}\right),\ \eta _{\beta }\left( \breve{u}\right) +\eta _{\varrho }\left( \breve{u}\right) -\eta _{\beta }\left( \breve{u}\right) \cdot \eta _{\varrho }\left( \breve{u}\right),\ \Upsilon _{\beta }\left( \hat{u}\right) +\Upsilon _{\varrho }\left( \hat{u}\right) -\Upsilon _{\beta }\left( \hat{u}\right) \cdot \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle ;\)

2) \(\beta \oplus \varrho =\left\langle \mu _{\beta }\left( \hat{u}\right) +\mu _{\varrho }\left( \hat{u}\right) -\mu _{\beta }\left( \hat{u}\right) \cdot \mu _{\varrho }\left( \hat{u}\right),\ \eta _{\beta }\left( \breve{u}\right) \cdot \eta _{\varrho }\left( \breve{u}\right),\ \Upsilon _{\beta }\left( \hat{u}\right) \cdot \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle ;\)

3) \(\lambda .\beta =\left\langle 1-\left( 1-\mu _{\beta }\left( \hat{u}\right) \right) ^{\lambda },\ \left( \eta _{\beta }\left( \breve{u}\right) \right) ^{\lambda },\ \left( \Upsilon _{\beta }\left( \hat{u}\right) \right) ^{\lambda }\right\rangle ;\)

4) \(\beta ^{\lambda }=\left\langle \left( \mu _{\beta }\left( \hat{u}\right) \right) ^{\lambda },\ 1-\left( 1-\eta _{\beta }\left( \breve{u}\right) \right) ^{\lambda },\ 1-\left( 1-\Upsilon _{\beta }\left( \hat{u}\right) \right) ^{\lambda }\right\rangle .\)
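For readers who prefer code, the four algebraic operations of Definition 3 translate directly into the short sketch below (Python; PFNs are plain \((\mu, \eta, \Upsilon )\) tuples and the function names are ours).

```python
from typing import Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def pfn_mult(b: PFN, r: PFN) -> PFN:
    """Algebraic product (Definition 3, item 1)."""
    return (b[0] * r[0],
            b[1] + r[1] - b[1] * r[1],
            b[2] + r[2] - b[2] * r[2])

def pfn_add(b: PFN, r: PFN) -> PFN:
    """Algebraic sum (Definition 3, item 2)."""
    return (b[0] + r[0] - b[0] * r[0],
            b[1] * r[1],
            b[2] * r[2])

def pfn_scale(lam: float, b: PFN) -> PFN:
    """Scalar multiple lambda . beta (Definition 3, item 3)."""
    return (1 - (1 - b[0]) ** lam, b[1] ** lam, b[2] ** lam)

def pfn_power(b: PFN, lam: float) -> PFN:
    """Power beta ** lambda (Definition 3, item 4)."""
    return (b[0] ** lam, 1 - (1 - b[1]) ** lam, 1 - (1 - b[2]) ** lam)

# quick check with two arbitrary PFNs
beta, rho = (0.2, 0.4, 0.3), (0.3, 0.5, 0.1)
print(pfn_add(beta, rho))    # (0.44, 0.2, 0.03)
print(pfn_scale(2, beta))    # (0.36, 0.16, 0.09)
```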
Definition 4
[22] Let \(\beta =\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right),\Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) be a PFN, the score function and accuracy function of \(\beta\) are defined as
and
Definition 5
[22] Let \(\beta =\left\langle \mu _{\beta }\left( \hat{u}\right),\eta _{\beta }\left( \breve{u}\right), \Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) and \(\varrho =\left\langle \mu _{\varrho }\left( \hat{u}\right), \eta _{\varrho }\left( \breve{u}\right), \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle\) be two PFNs. Then the following comparison rules can be used:
(1) If \(S\left( \beta \right) >S\left( \varrho \right),\) then \(\beta >\varrho ;\)

(2) If \(S\left( \beta \right) =S\left( \varrho \right),\) then

- if \(H\left( \beta \right) >H\left( \varrho \right),\) then \(\beta >\varrho ;\)
- if \(H\left( \beta \right) =H\left( \varrho \right),\) then \(\beta =\varrho .\)
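The comparison procedure of Definition 5 (compare scores first, break ties with the accuracy function) can be sketched as follows. Since the formulas of the score and accuracy functions of Definition 4 are not reproduced above, they are passed in as parameters, and the concrete forms used in the demonstration are assumed placeholders rather than the paper's equations.

```python
from typing import Callable, Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def compare(beta: PFN, rho: PFN,
            score: Callable[[PFN], float],
            accuracy: Callable[[PFN], float]) -> int:
    """Return +1 if beta > rho, -1 if beta < rho, 0 if equal (Definition 5)."""
    if score(beta) != score(rho):
        return 1 if score(beta) > score(rho) else -1
    # scores tie: fall back to the accuracy function
    if accuracy(beta) != accuracy(rho):
        return 1 if accuracy(beta) > accuracy(rho) else -1
    return 0

# assumed placeholder forms (NOT the paper's formulas for S and H):
S = lambda b: b[0] - b[2]          # e.g. membership minus non-membership
H = lambda b: b[0] + b[1] + b[2]   # e.g. total commitment

print(compare((0.6, 0.1, 0.2), (0.3, 0.5, 0.1), S, H))  # 1 -> first PFN ranks higher
```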
Definition 6
[22] Let \(\left( \beta _{1},\beta _{2},\ldots, \beta _{n}\right)\) be a family of PFNs. Then the picture fuzzy weighted averaging (PFWA) operator is defined as follows:
Definition 7
[22] Let \(\left( \beta _{1},\beta _{2},\ldots, \beta _{n}\right)\) be a family of PFNs. Then the picture fuzzy ordered weighted averaging (PFOWA) operator is defined as
where \(\left( \delta (1),\delta (2),\ldots, \delta (n)\right)\) is a permutation of \(\left( 1,2,\ldots, n\right)\) such that \(\beta _{\delta \left( p \right) }\le _{L^{*}}\beta _{\delta \left( p-1\right) }\) for all \(p=2,3,\ldots, n,\) and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) is the associated weighting vector of the PFOWA operator such that \(\varpi _{p}\in \left[ 0,1\right]\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1.\)
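As a reference point for the Einstein variants developed later, a sketch of the PFWA/PFOWA operators is given below. The displayed formula of Definition 6 is not reproduced above, so the closed form coded here is an assumption (the commonly used form \(\left( 1-\prod (1-\mu _{p})^{\varpi _{p}},\ \prod \eta _{p}^{\varpi _{p}},\ \prod \Upsilon _{p}^{\varpi _{p}}\right)\)), chosen because it is consistent with the comparison made in Example 1 below.

```python
from math import prod
from typing import List, Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def pfwa(pfns: List[PFN], w: List[float]) -> PFN:
    """Picture fuzzy weighted averaging (assumed standard closed form)."""
    mu  = 1 - prod((1 - b[0]) ** wp for b, wp in zip(pfns, w))
    eta = prod(b[1] ** wp for b, wp in zip(pfns, w))
    ups = prod(b[2] ** wp for b, wp in zip(pfns, w))
    return (mu, eta, ups)

def pfowa(pfns: List[PFN], w: List[float], order: List[int]) -> PFN:
    """PFOWA: apply the positional weights to the pre-ordered arguments.

    `order` is the permutation delta of Definition 7 (largest PFN first),
    obtained from whatever comparison rule is adopted."""
    return pfwa([pfns[i] for i in order], w)
```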
Einstein operations of picture fuzzy sets
In this part of the paper, we present the Einstein operations and discuss some basic properties of these operations on PFSs. Let the t-norm T and the t-conorm S be the Einstein product \(T_{\varepsilon }\) and the Einstein sum \(S_{\varepsilon }\), respectively; then the generalized union and intersection of two PFSs \(\beta\) and \(\varrho\) become the Einstein sum and the Einstein product, respectively, given as follows.
Furthermore, we can derive the following forms:
Definition 8
Let \(\beta =\left\langle \mu _{\beta }\left( \hat{u}\right), \eta _{\beta }\left( \breve{u}\right), \Upsilon _{\beta }\left( \hat{u}\right) \right\rangle\) and \(\varrho =\left\langle \mu _{\varrho }\left( \hat{u}\right), \eta _{\varrho }\left( \breve{u}\right), \Upsilon _{\varrho }\left( \hat{u}\right) \right\rangle\) be two PFNs. Then
Corollary 1
Let \(\beta\) be a PFS and \(\lambda\) be any positive real number; then \({\lambda .}_{\varepsilon }\beta\) is also a PFS, i.e.,
Proof
Since \(0\le \mu _{\beta }(\breve{u}),\eta _{\beta }(\breve{u}),\Upsilon _{\beta }(\breve{u})\le 1\) and \(0\le \mu _{\beta }(\breve{u})+\eta _{\beta }(\breve{u})+\Upsilon _{\beta }(\breve{u})\le 1,\) we have \(1-\Upsilon _{\beta }(\breve{u})\ge \mu _{\beta }(\breve{u})\ge 0,\) \([1-\mu _{\beta }(\breve{u})]^{\lambda }\ge [\Upsilon _{\beta }(\breve{u})]^{\lambda }\) and \([1-\mu _{\beta }(\breve{u})]^{\lambda }\ge [\eta _{\beta }(\breve{u})]^{\lambda },\) and then we have
and
and
Thus from the above, we can write as,
Furthermore, we have
iff \(\mu _{\beta }(\breve{u})=\eta _{\beta }(\breve{u})=\Upsilon _{\beta }(\breve{u})=0\) and

iff \(\mu _{\beta }(\breve{u})+\eta _{\beta }(\breve{u})+\Upsilon _{\beta }(\breve{u})=1.\) Thus \({\lambda .}_{\varepsilon }\beta\) is a PFS for any positive real number \({\lambda .}\) \(\square\)
Theorem 1
Let \(\beta\) be a PFS and \(\lambda\) any positive integer; then
Proof
We use mathematical induction to prove that the above result holds for every positive integer \({\lambda }\); call the statement \(Q(\lambda ).\) First, we show that \(Q(\lambda )\) is true for \(\lambda =1.\) Since

\(Q(\lambda )\) is true for \(\lambda =1,\) i.e., \(Q(1)\) holds. Assume that \(Q(\lambda )\) holds for \(\lambda =k\). We now prove it for \(\lambda =k+1,\) i.e., \(\left( k+1\right) ._{\varepsilon }\beta =\overbrace{\beta \oplus _{\varepsilon }\beta \oplus _{\varepsilon }\cdots \oplus _{\varepsilon }\beta }^{k+1}.\) Using the Einstein sum on PFSs, we have
Hence \(Q(k+1)\) holds, and therefore \(Q(\lambda )\) holds for every positive integer \(\lambda .\) \(\square\)
Theorem 2
Let \(\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right)\), \(\beta _{1}=(\mu _{\beta _{1}},\eta _{\beta _{1}},\Upsilon _{\beta _{1}})\) and \(\beta _{2}=(\mu _{\beta _{2}},\eta _{\beta _{2}},\Upsilon _{\beta _{2}})\) be three PFNs; then both \(\beta _{3}=\beta _{1}\oplus _{\varepsilon }\beta _{2}\) and \(\beta _{4}={\lambda .}_{\varepsilon }\beta\) (\(\lambda >0\)) are also PFSs.
Proof
The result follows directly from Corollary 1. We now discuss some special cases of \(\lambda\) and \(\beta .\)
By definition,

$$\begin{aligned} {\lambda .}_{\varepsilon }{\beta }=\left( \frac{[1+{\mu }_{{\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{[1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{\beta }}]^{\lambda }},\ \frac{2[\eta _{{\beta }}]^{\lambda }}{[2-\eta _{{\beta }}]^{\lambda }+[\eta _{{\beta }}]^{\lambda }},\ \frac{2[{\Upsilon }_{{\beta }}]^{\lambda }}{[2-{\Upsilon }_{{\beta }}]^{\lambda }+[{\Upsilon }_{{\beta }}]^{\lambda }}\right), \end{aligned}$$

from which the following special cases follow directly.

(1) If \(\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(1,0,0),\) then \({\lambda .}_{\varepsilon }(1,0,0)=(1,0,0).\)

(2) If \(\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(0,1,1),\) i.e., \(\mu _{\beta }=0,\) \(\eta _{\beta }=1\) and \(\Upsilon _{\beta }=1,\) then \({\lambda .}_{\varepsilon }(0,1,1)=(0,1,1).\)

(3) If \(\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =(0,0,0),\) i.e., \(\mu _{\beta }=\eta _{\beta }=\Upsilon _{\beta }=0,\) then \({\lambda .}_{\varepsilon }(0,0,0)=(0,0,0).\)

(4) If \(\lambda \rightarrow 0\) and \(0<\mu _{\beta },\eta _{\beta },\Upsilon _{\beta }<1,\) then \({\lambda .}_{\varepsilon }{\beta }\rightarrow (0,1,1)\) as \(\lambda \rightarrow 0.\)

(5) If \(\lambda \rightarrow +\infty\) and \(0<\mu _{\beta },\eta _{\beta },\Upsilon _{\beta }<1,\) then \({\lambda .}_{\varepsilon }{\beta }\rightarrow (1,0,0).\) Indeed,

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\frac{[1+{\mu }_{{\beta }}]^{\lambda }-[1-{\mu }_{{\beta }}]^{\lambda }}{[1+{\mu }_{{\beta }}]^{\lambda }+[1-{\mu }_{{\beta }}]^{\lambda }}=\lim _{\lambda \rightarrow +\infty }\frac{1^{\lambda }-\left( \frac{1-{\mu }_{{\beta }}}{1+{\mu }_{{\beta }}}\right) ^{\lambda }}{1^{\lambda }+\left( \frac{1-{\mu }_{{\beta }}}{1+{\mu }_{{\beta }}}\right) ^{\lambda }}=\frac{1-0}{1+0}=1. \end{aligned}$$

Since \(0<\eta _{\beta }<1\Leftrightarrow 0<2\eta _{\beta }<2\Leftrightarrow \eta _{\beta }<2-\eta _{\beta }\Leftrightarrow 1<\frac{2-\eta _{\beta }}{\eta _{\beta }},\) we have \(\lim _{\lambda \rightarrow +\infty }\left( \frac{2-\eta _{\beta }}{\eta _{\beta }}\right) ^{\lambda }=+\infty ;\) thus \(\lim _{\lambda \rightarrow +\infty }\frac{2(\eta _{\beta })^{\lambda }}{(2-\eta _{\beta })^{\lambda }+(\eta _{\beta })^{\lambda }}=\lim _{\lambda \rightarrow +\infty }\frac{2\cdot 1^{\lambda }}{\left( \frac{2-\eta _{\beta }}{\eta _{\beta }}\right) ^{\lambda }+1^{\lambda }}=0.\)

Similarly, \(0<\Upsilon _{\beta }<1\Leftrightarrow 0<2\Upsilon _{\beta }<2\Leftrightarrow \Upsilon _{\beta }<2-\Upsilon _{\beta }\Leftrightarrow 1<\frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }},\) so \(\lim _{\lambda \rightarrow +\infty }\left( \frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }}\right) ^{\lambda }=+\infty ;\) thus \(\lim _{\lambda \rightarrow +\infty }\frac{2(\Upsilon _{\beta })^{\lambda }}{(2-\Upsilon _{\beta })^{\lambda }+(\Upsilon _{\beta })^{\lambda }}=\lim _{\lambda \rightarrow +\infty }\frac{2\cdot 1^{\lambda }}{\left( \frac{2-\Upsilon _{\beta }}{\Upsilon _{\beta }}\right) ^{\lambda }+1^{\lambda }}=0.\)

(6) If \(\lambda =1,\) then \({\lambda .}_{\varepsilon }{\beta }=\left( \frac{[1+{\mu }_{{\beta }}]-[1-{\mu }_{{\beta }}]}{[1+{\mu }_{{\beta }}]+[1-{\mu }_{{\beta }}]},\ \frac{2[\eta _{{\beta }}]}{[2-\eta _{{\beta }}]+[\eta _{{\beta }}]},\ \frac{2[{\Upsilon }_{{\beta }}]}{[2-{\Upsilon }_{{\beta }}]+[{\Upsilon }_{{\beta }}]}\right) =\left( {\mu }_{{\beta }},\eta _{{\beta }},{\Upsilon }_{{\beta }}\right),\) i.e., \({1.}_{\varepsilon }{\beta }={\beta }.\)
\(\square\)
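The explicit expression for \({\lambda .}_{\varepsilon }\beta\) used in the cases above can be evaluated numerically; the sketch below (Python, helper names ours) checks that \(\lambda =1\) reproduces \(\beta\), that \((1,0,0)\), \((0,1,1)\) and \((0,0,0)\) are fixed points, and that each component stays inside \([0,1]\) for positive \(\lambda .\)

```python
from typing import Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def einstein_scale(lam: float, b: PFN) -> PFN:
    """lambda ._epsilon beta, using the explicit form displayed above."""
    mu, eta, ups = b
    return (((1 + mu) ** lam - (1 - mu) ** lam) / ((1 + mu) ** lam + (1 - mu) ** lam),
            2 * eta ** lam / ((2 - eta) ** lam + eta ** lam),
            2 * ups ** lam / ((2 - ups) ** lam + ups ** lam))

# lambda = 1 reproduces beta (case 6), and the degenerate PFNs are fixed points (cases 1-3)
beta = (0.2, 0.4, 0.3)
assert all(abs(x - y) < 1e-12 for x, y in zip(einstein_scale(1, beta), beta))
for fixed in [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0), (0.0, 0.0, 0.0)]:
    assert einstein_scale(3, fixed) == fixed

# each component stays in [0, 1] for any positive lambda
for lam in (0.1, 2.0, 50.0):
    assert all(0.0 <= c <= 1.0 for c in einstein_scale(lam, beta))
```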
Proposition 1
Let \(\beta =\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right)\), \(\beta _{1}=(\mu _{\beta _{1}},\eta _{\beta _{1}},\Upsilon _{\beta _{1}})\) and \(\beta _{2}=(\mu _{\beta _{2}},\eta _{\beta _{2}},\Upsilon _{\beta _{2}})\) be three PFNs and \(\lambda, \lambda _{1},\lambda _{2}>0;\) then we have:
(1) \(\beta _{1}\oplus _{\varepsilon }\beta _{2}=\beta _{2}\oplus _{\varepsilon }\beta _{1};\)

(2) \({\lambda .}_{\varepsilon }\left( \beta _{1}\oplus _{\varepsilon }\beta _{2}\right) = {\lambda .}_{\varepsilon }\beta _{1}\oplus _{\varepsilon }\lambda ._{\varepsilon }\beta _{2};\)

(3) \(\lambda _{1}._{\varepsilon }\beta \oplus _{\varepsilon }\lambda _{2}._{\varepsilon }\beta =(\lambda _{1}+\lambda _{2})._{\varepsilon }\beta ;\)

(4) \((\lambda _{1}.\lambda _{2})._{\varepsilon }\beta =\lambda _{1}._{\varepsilon }(\lambda _{2}._{\varepsilon }\beta ).\)
Proof
(1) It is trivial.
(2) We transform
into the following form
and let \(a=(1+\mu _{\beta _{1}})(1+\mu _{\beta _{2}}),\) \(b=(1-\mu _{\beta _{1}})(1-\mu _{\beta _{2}}),\) \(c=\eta _{\beta _{1}}.\eta _{\beta _{2}},\) \(d=(2-\eta _{\beta _{1}})(2-\eta _{\beta _{2}}),\)
\(e=\Upsilon _{\beta _{1}}.\Upsilon _{\beta _{2}},\)\(f=(2-\Upsilon _{\beta _{1}})(2-\Upsilon _{\beta _{2}});\)
By the Einstein operation law, we have
In addition, since
and
let \(a_{1}=(1+\mu _{\beta _{1}})^{\lambda },\) \(b_{1}=(1-\mu _{\beta _{1}})^{\lambda },\) \(c_{1}=(\eta _{\beta _{1}})^{\lambda },\) \(e_{1}=(\Upsilon _{\beta _{1}})^{\lambda },\) \(a_{2}=(1+\mu _{\beta _{2}})^{\lambda },\) \(b_{2}=(1-\mu _{\beta _{2}})^{\lambda },\)

\(c_{2}=(\eta _{\beta _{2}})^{\lambda },\) \(e_{2}=(\Upsilon _{\beta _{2}})^{\lambda },\) \(d_{1}=(2-\eta _{\beta _{1}})^{\lambda },\) \(d_{2}=(2-\eta _{\beta _{2}})^{\lambda },\) \(f_{1}=(2-\Upsilon _{\beta _{1}})^{\lambda },\) \(f_{2}=(2-\Upsilon _{\beta _{2}})^{\lambda };\) then
and
By the Einstein operation law, it follows that
Hence \({\lambda .}_{\varepsilon }\left( \beta _{1}\oplus _{\varepsilon }\beta _{2}\right) = {\lambda .}_{\varepsilon }\beta _{1}\oplus _{\varepsilon }{\lambda .}_{\varepsilon }\beta _{2}.\)
(3) Since
and
where \(\lambda _{1}>0,\lambda _{2}>0;\) let \(a_{1}=(1+\mu _{\beta })^{\lambda _{1}},\) \(b_{1}=(1-\mu _{\beta })^{\lambda _{1}},\) \(c_{1}=(\eta _{\beta })^{\lambda _{1}},\) \(e_{1}=(\Upsilon _{\beta })^{\lambda _{1}},\)

\(a_{2}=(1+\mu _{\beta })^{\lambda _{2}},\) \(b_{2}=(1-\mu _{\beta })^{\lambda _{2}},\) \(c_{2}=(\eta _{\beta })^{\lambda _{2}},\) \(e_{2}=(\Upsilon _{\beta })^{\lambda _{2}},\) \(d_{1}=(2-\eta _{\beta })^{\lambda _{1}},\) \(d_{2}=(2-\eta _{\beta })^{\lambda _{2}},\)
\(f_{1}=(2-\Upsilon _{\beta })^{\lambda _{1}},\)\(f_{2}=(2-\Upsilon _{\beta })^{\lambda _{2}};\)then
And
By the Einstein operation law, it follows that
(4) \((\lambda _{1}.\lambda _{2})._{\varepsilon }\beta =\lambda _{1}._{\varepsilon }(\lambda _{2}._{\varepsilon }\beta )\)
Since
Let \(a=(1+\mu _{\beta })^{\lambda _{2}},\) \(b=(1-\mu _{\beta })^{\lambda _{2}},\) \(c=(\eta _{\beta })^{\lambda _{2}},\) \(e=(\Upsilon _{\beta })^{\lambda _{2}},\) \(d=(2-\eta _{\beta })^{\lambda _{2}},\) \(f=(2-\Upsilon _{\beta })^{\lambda _{2}};\)
By the Einstein law, it follows that
\(\square\)
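The four laws of Proposition 1 can also be checked numerically. The Einstein sum coded below follows the \(a,\ldots, f\) pattern used in the proof (\(a=(1+\mu _{\beta _{1}})(1+\mu _{\beta _{2}})\), etc., combined as \(\left( \frac{a-b}{a+b},\frac{2c}{d+c},\frac{2e}{f+e}\right)\)); the helper names and the PFN values chosen for the check are our own.

```python
from typing import Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def einstein_sum(b1: PFN, b2: PFN) -> PFN:
    """beta1 (+)_epsilon beta2, following the a..f pattern of the proof above."""
    a = (1 + b1[0]) * (1 + b2[0]); b = (1 - b1[0]) * (1 - b2[0])
    c = b1[1] * b2[1];             d = (2 - b1[1]) * (2 - b2[1])
    e = b1[2] * b2[2];             f = (2 - b1[2]) * (2 - b2[2])
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * e / (f + e))

def einstein_scale(lam: float, x: PFN) -> PFN:
    """lambda ._epsilon beta, as in Theorem 2."""
    mu, eta, ups = x
    return (((1 + mu) ** lam - (1 - mu) ** lam) / ((1 + mu) ** lam + (1 - mu) ** lam),
            2 * eta ** lam / ((2 - eta) ** lam + eta ** lam),
            2 * ups ** lam / ((2 - ups) ** lam + ups ** lam))

def close(x: PFN, y: PFN) -> bool:
    return all(abs(a - b) < 1e-9 for a, b in zip(x, y))

b1, b2, lam, l1, l2 = (0.2, 0.4, 0.3), (0.3, 0.5, 0.1), 2.5, 1.3, 0.7
assert close(einstein_sum(b1, b2), einstein_sum(b2, b1))                               # law (1)
assert close(einstein_scale(lam, einstein_sum(b1, b2)),
             einstein_sum(einstein_scale(lam, b1), einstein_scale(lam, b2)))           # law (2)
assert close(einstein_sum(einstein_scale(l1, b1), einstein_scale(l2, b1)),
             einstein_scale(l1 + l2, b1))                                              # law (3)
assert close(einstein_scale(l1 * l2, b1), einstein_scale(l1, einstein_scale(l2, b1)))  # law (4)
```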
Picture fuzzy Einstein arithmetic averaging operators
In this section, we develop some Einstein aggregation operators for picture fuzzy information, namely the picture fuzzy Einstein weighted averaging operator and the picture fuzzy Einstein ordered weighted averaging operator, and we discuss some basic properties of these picture fuzzy Einstein aggregation operators.
Definition 9
Let \(\beta _{p}=(\mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}),\left( p =1,2,\ldots, n\right)\) be a family of PFNs and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\left( p =1,2,\ldots, n\right),\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p =1,2,\ldots, n\right)\) and \(\sum _{p =1}^{n}\varpi _{p}=1;\) then, a PFEWA operator of dimension n is a mapping \({\hbox {PFEWA}}:\left( L^{*}\right) ^{n}\rightarrow L^{*},\) and
Theorem 3
Let \(\beta _{p}=(\mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}})\) \(\left( p =1,2,\ldots, n\right)\) be a family of PFNs; then their aggregated value obtained by using the PFEWA operator is also a PFN, and
where \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) is the weighting vector of \(\beta _{p}\left( p=1,2,\ldots, n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right]\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1.\)
Proof
The first result follows easily from Theorem 2. We now use mathematical induction to prove Eq. (15). Clearly, Eq. (15) holds for \(n=1.\) Assume that Eq. (15) holds for \(n=k,\) i.e.,
Then if \(n=k+1,\) we have
Let \(a_{1}=\prod \nolimits _{p=1}^{k}(1+\mu _{\beta _{p}})^{\varpi _{p}},\) \(b_{1}=\prod \nolimits _{p=1}^{k}(1-\mu _{\beta _{p}})^{\varpi _{p}},\) \(c_{1}=\prod \nolimits _{p=1}^{k}(\eta _{\beta _{p}})^{\varpi _{p}},\) \(d_{1}=\prod \nolimits _{p=1}^{k}(\Upsilon _{\beta _{p}})^{\varpi _{p}},\) \(e_{1}=\prod \nolimits _{p=1}^{k}(2-\eta _{\beta _{p}})^{\varpi _{p}},\) \(f_{1}=\prod \nolimits _{p=1}^{k}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}},\) \(a_{2}=\left( 1+\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}},\) \(b_{2}=\left( 1-\mu _{\beta _{k+1}}\right) ^{\varpi _{k+1}},\) \(c_{2}=\left( \eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}},\)

\(d_{2}=\left( \Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}},\) \(e_{2}=\left( 2-\eta _{\beta _{k+1}}\right) ^{\varpi _{k+1}},\) \(f_{2}=\left( 2-\Upsilon _{\beta _{k+1}}\right) ^{\varpi _{k+1}};\)

Then \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{k})=\left( \frac{a_{1}-b_{1}}{a_{1}+b_{1}},\frac{2c_{1}}{e_{1}+c_{1}},\frac{2d_{1}}{f_{1}+d_{1}}\right)\) and \(\varpi _{k+1}._{\varepsilon }\beta _{k+1}=\left( \frac{a_{2}-b_{2}}{a_{2}+b_{2}},\frac{2c_{2}}{e_{2}+c_{2}},\frac{2d_{2}}{f_{2}+d_{2}}\right) ;\) thus,
by the Einstein operational law, we have
i.e., Eq. (15) is true for \(n=k+1.\)

Therefore, Eq. (15) holds for all n, which completes the proof of the theorem. \(\square\)
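The closed form of Theorem 3 translates directly into code; a sketch (Python, function name ours) that computes the six weighted products appearing in the proof and combines them is given below. Example 1 below can be reproduced with it.

```python
from math import prod
from typing import List, Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def pfewa(pfns: List[PFN], w: List[float]) -> PFN:
    """Picture fuzzy Einstein weighted averaging (closed form of Theorem 3)."""
    assert abs(sum(w) - 1.0) < 1e-9 and all(0.0 <= wp <= 1.0 for wp in w)
    a = prod((1 + x[0]) ** wp for x, wp in zip(pfns, w))   # prod (1 + mu_p)^w_p
    b = prod((1 - x[0]) ** wp for x, wp in zip(pfns, w))   # prod (1 - mu_p)^w_p
    c = prod(x[1] ** wp for x, wp in zip(pfns, w))         # prod eta_p^w_p
    d = prod((2 - x[1]) ** wp for x, wp in zip(pfns, w))   # prod (2 - eta_p)^w_p
    e = prod(x[2] ** wp for x, wp in zip(pfns, w))         # prod ups_p^w_p
    f = prod((2 - x[2]) ** wp for x, wp in zip(pfns, w))   # prod (2 - ups_p)^w_p
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * e / (f + e))
```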
Lemma 1
[51, 54] Let \(\beta _{p}>0,\varpi _{p}>0\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then

with equality if and only if \(\beta _{1}=\beta _{2}=\cdots =\beta _{n}.\)
Corollary 2
The PFWA and PFEWA operators have the following relation:

where \(\beta _{p}\) \(\left( p=1,2,\ldots, n\right)\) is a family of PFNs and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) is the weighting vector of \(\beta _{p}\left( p=1,2,\ldots, n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right]\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1.\)
Proof
Since
then
where that equality holds if and only if \(\mu _{\beta _{1}}=\mu _{\beta _{2}}=\cdots =\mu _{\beta _{n}}.\)
In addition, since \(\prod \nolimits _{p=1}^{n}(2-\eta _{\beta _{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\eta _{\beta _{p}})^{\varpi _{p}}\le \sum _{p=1}^{n}\varpi _{p}(2-\eta _{\beta _{p}})+\sum _{p=1}^{n}\varpi _{p}\eta _{\beta _{p}}=2,\) then

where that equality holds if and only if \(\eta _{\beta _{1}}=\eta _{\beta _{2}}=\cdots =\eta _{\beta _{n}}.\)
Let \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta }^{*},\eta _{\beta }^{*}\right) =\beta ^{*}\) and \(PFWA_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\eta _{\beta }\right) =\beta ;\) then (17) and (18) are transformed into the form \(\mu _{\beta }^{*}\le \mu _{\beta }\) and \(\eta _{\beta }^{*}\ge \eta _{\beta },\) respectively. Thus
If \(s(\beta ^{*})<s(\beta ),\) then by Definition 5, for every \(\varpi,\)we have
If \(s(\beta ^{*})=s(\beta ),\) i.e., \(\mu _{\beta }^{*}-\eta _{\beta }^{*}=\mu _{\beta }-\eta _{\beta },\) then by the conditions \(\mu _{\beta }^{*}\le \mu _{\beta }\) and \(\eta _{\beta }^{*}\ge \eta _{\beta },\) we have \(\mu _{\beta }^{*}=\mu _{\beta }\) and \(\eta _{\beta }^{*}=\eta _{\beta };\)
Thus \(h(\beta ^{*})=\mu _{\beta }^{*}+\eta _{\beta }^{*}=\mu _{\beta }+\eta _{\beta }=h(\beta ),\) in this case, from Definition 5, it follows that
From (19) and (20), we know that (16) always holds, i.e.,
where that equality holds if and only if \(\beta _{1}=\beta _{2}=\cdots =\beta _{n}.\)
In addition, since \(\prod \nolimits _{p=1}^{n}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}}+\prod \nolimits _{p=1}^{n}(\Upsilon _{\beta _{p}})^{\varpi _{p}}\le \sum _{p=1}^{n}\varpi _{p}(2-\Upsilon _{\beta _{p}})+\sum _{p=1}^{n}\varpi _{p}\Upsilon _{\beta _{p}}=2,\) then
where that equality holds if and only if \(\Upsilon _{\beta _{1}}=\Upsilon _{\beta _{2}}=\cdots =\Upsilon _{\beta _{n}}.\)
Let \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta }^{*},\Upsilon _{\beta }^{*}\right) =\beta ^{*}\) and \({\hbox {PFWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\Upsilon _{\beta }\right) =\beta ;\) then (17) and (21) are transformed into the form \(\mu _{\beta }^{*}\le \mu _{\beta }\) and \(\Upsilon _{\beta }^{*}\ge \Upsilon _{\beta },\) respectively. Thus
If \(s(\beta ^{*})<s(\beta ),\) then by Definition 5, for every \(\varpi,\)we have
If \(s(\beta ^{*})=s(\beta ),\) i.e., \(\mu _{\beta }^{*}-\Upsilon _{\beta }^{*}=\mu _{\beta }-\Upsilon _{\beta },\) then by the conditions \(\mu _{\beta }^{*}\le \mu _{\beta }\) and \(\Upsilon _{\beta }^{*}\ge \Upsilon _{\beta },\) we have \(\mu _{\beta }^{*}=\mu _{\beta }\) and \(\Upsilon _{\beta }^{*}=\Upsilon _{\beta };\) thus \(h(\beta ^{*})=\mu _{\beta }^{*}+\Upsilon _{\beta }^{*}=\mu _{\beta }+\Upsilon _{\beta }=h(\beta );\) in this case, from Definition 5, it follows that
From (22) and (23), we know that (16) always holds, i.e.,
where that equality holds if and only if \(\beta _{1}=\beta _{2}=\cdots =\beta _{n}.\)\(\square\)
Example 1
Let \(\beta _{1}=\left( 0.2,0.4,0.3\right), \beta _{2}=\left( 0.3,0.5,0.1\right), \beta _{3}=\left( 0.1,0.2,0.4\right)\) and \(\beta _{4}=\left( 0.6,0.1,0.2\right)\) be four PFVs and \(\varpi =\left( 0.2,0.1,0.3,0.4\right) ^{T}\) be the weighting vector of \(\beta _{p}\left( p=1,2,3,4\right).\) Then \(\prod \nolimits _{p=1}^{4}(1+\mu _{\beta _{p}})^{\varpi _{p}}=1.3221,\) \(\prod \nolimits _{p=1}^{4}(1-\mu _{\beta _{p}})^{\varpi _{p}}=0.6197,\) \(2\prod \nolimits _{p=1}^{4}\left( \eta _{\beta _{p}}\right) ^{\varpi _{p}}=0.3816,\) \(\prod \nolimits _{p=1}^{4}\left( \eta _{\beta _{p}}\right) ^{\varpi _{p}}=0.1908,\) \(\prod \nolimits _{p=1}^{4}(2-\eta _{\beta _{p}})^{\varpi _{p}}=1.7637,\) \(2\prod \nolimits _{p=1}^{4}(\Upsilon _{\beta _{p}})^{\varpi _{p}}=0.4982,\) \(\prod \nolimits _{p=1}^{4}(\Upsilon _{\beta _{p}})^{\varpi _{p}}=0.2491,\) \(\prod \nolimits _{p=1}^{4}(2-\Upsilon _{\beta _{p}})^{\varpi _{p}}=1.7269.\)
If we use the PFWA operator, which was developed by Xu [53], to aggregate the PFVs \(\beta _{p}\left( p=1,2,3,4\right),\) then we have
It is clear that \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},\beta _{3},\beta _{4})<{\hbox {PFWA}}_{\varpi }(\beta _{1},\beta _{2},\beta _{3},\beta _{4}).\)
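For completeness, the partial products of Example 1 can be reproduced as follows (Python). The PFWA value printed for comparison uses the assumed standard closed form mentioned earlier, since the paper's PFWA formula is not reproduced above; the printed numbers agree with the values listed in the example up to rounding.

```python
from math import prod

betas = [(0.2, 0.4, 0.3), (0.3, 0.5, 0.1), (0.1, 0.2, 0.4), (0.6, 0.1, 0.2)]
w     = [0.2, 0.1, 0.3, 0.4]

a = prod((1 + m) ** wp for (m, _, _), wp in zip(betas, w))   # ~1.3221
b = prod((1 - m) ** wp for (m, _, _), wp in zip(betas, w))   # ~0.6197
c = prod(e ** wp for (_, e, _), wp in zip(betas, w))         # ~0.1908
d = prod((2 - e) ** wp for (_, e, _), wp in zip(betas, w))   # ~1.7637
g = prod(u ** wp for (_, _, u), wp in zip(betas, w))         # ~0.2491
f = prod((2 - u) ** wp for (_, _, u), wp in zip(betas, w))   # ~1.7269

pfewa_value = ((a - b) / (a + b), 2 * c / (d + c), 2 * g / (f + g))
pfwa_value  = (1 - b, c, g)   # assumed standard PFWA closed form, for comparison only
print(pfewa_value)  # roughly (0.362, 0.195, 0.252)
print(pfwa_value)   # roughly (0.380, 0.191, 0.249)
```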
Proposition 2
Let \(\beta _{p}=\left( \mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}\right), \left( p=1,2,\ldots, n\right)\) be a family of PFNs and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\left( p=1,2,\ldots, n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right],\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then, we have the following.
(1) Idempotency If all\(\beta _{p}\)are equal, i.e.,\(\beta _{p}=\beta\)for all\(p=1,2,\ldots, n\), then
Proof
Since \(\beta _{p}=\beta\) for all \(p=1,2,\ldots, n,\) i.e., \(\mu _{\beta _{p}}=\mu _{\beta },\eta _{\beta _{p}}=\eta _{\beta }\) and \(\Upsilon _{\beta _{p}}=\Upsilon _{\beta },\)\(p=1,2,\ldots, n,\) then
\(\square\)

(2) Boundary

where \(\beta _{\min }=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}\) and \(\beta _{\max }=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.\)
Proof
Let \(f(r)=\frac{1-r}{1+r},\)\(r\in [0,1];\) then \(f^{^{\prime }}(r)=[ \frac{1-r}{1+r}]^{^{\prime }}=\frac{-2}{\left( 1+r\right) ^{2}}<0;\) thus f(x) is decreasing function. Since \(\mu _{\beta _{\min }}\le \mu _{\beta _{p}}\le \mu _{\beta _{\max }},\) for all p, then \(f\left( \mu _{\beta _{\max }}\right) \le f\left( \mu _{\beta _{p}}\right) \le f\left( \mu _{\beta _{\min }}\right),\) for all p, i.e.,\(\frac{1-\mu _{\beta _{\max }}}{ 1+\mu _{\beta _{\max }}}\le \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}} \le \frac{1-\mu _{\beta _{\min }}}{1+\mu _{\beta _{\min }}},\left( p=1,2,\ldots n\right).\)Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\left( p=1,2,\ldots n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then for all p, we have
i.e.,
Let \(g(y)=\frac{2-y}{y},\)\(y\in (0,1];\) then \(g^{^{\prime }}(y)=\frac{-2 }{y^{2}}<0,\) be decreasing function on (0, 1]. Since \(\eta _{\beta _{\max }}\le \eta _{{\beta }_{p}}\le \eta _{\beta _{\min }},\) for all p, where \(0<\eta _{\beta _{\max }},\)then \(g\left( \eta _{\beta _{\min }}\right) \le g\left( \eta _{\beta _{p}}\right) \le g\left( \eta _{\beta _{\max }}\right),\) for all p, i.e.,\(\frac{2-\eta _{\beta _{\min }}}{\eta _{\beta _{\min }}}\le \frac{2-\eta _{\beta _{p}}}{\eta _{\beta _{p}}}\le \frac{ 2-\eta _{\beta _{\max }}}{\eta _{\beta _{\max }}},\left( p=1,2,\ldots, n\right).\)
Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weight vector of \(\beta _{p}\left( p=1,2,\ldots n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then for all p, we have
Thus
i.e.
Note that (27) also holds even if \(\eta _{\beta _{\max }}=0.\)
Let \(h(z)=\frac{2-z}{z},\) \(z\in (0,1];\) then \(h^{^{\prime }}(z)=\frac{-2}{z^{2}}<0,\) so h is a decreasing function on (0, 1]. Since \(\Upsilon _{\beta _{\max }}\le \Upsilon _{\beta _{p}}\le \Upsilon _{\beta _{\min }},\) for all p, where \(0<\Upsilon _{\beta _{\max }},\) then \(h\left( \Upsilon _{\beta _{\min }}\right) \le h\left( \Upsilon _{\beta _{p}}\right) \le h\left( \Upsilon _{\beta _{\max }}\right),\) for all p, i.e., \(\frac{2-\Upsilon _{\beta _{\min }}}{\Upsilon _{\beta _{\min }}}\le \frac{2-\Upsilon _{\beta _{p}}}{\Upsilon _{\beta _{p}}}\le \frac{2-\Upsilon _{\beta _{\max }}}{\Upsilon _{\beta _{\max }}},\left( p=1,2,\ldots, n\right).\)
Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then for all p, we have
Thus
i.e.,
Note that (28) also holds even if \(\Upsilon _{\beta _{\max }}=0.\)
Let \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =\beta ;\) Then (26), (27) and (28) are transformed into the following forms, respectively;
\(\square\)
(3) Monotonicity Let \(\beta _{p}=\left( \mu _{\beta _{p}},\eta _{\beta _{p}},\Upsilon _{\beta _{p}}\right)\) and \(\beta _{p}^{*}=\left( \mu _{\beta _{p}^{*}},\eta _{\beta _{p}^{*}},\Upsilon _{\beta _{p}^{*}}\right)\)\(\left( p=1,2,\ldots, n\right)\) be a two family of PFNs, and \(\beta _{p}\le _{L}\beta _{p}^{*},\) i.e.,\(\mu _{\beta _{p}}\le \mu _{\beta _{p}^{*}},\eta _{\beta _{p}}\ge \eta _{\beta _{p}^{*}}\) and \(\Upsilon _{\beta _{p}}\ge \Upsilon _{\beta _{p}^{*}}\), for all p; then
Proof
Let \(f(r)=\frac{1-r}{1+r},\)\(r\in [ 0,1],\) be a decreasing function \(\mu _{\beta _{p}}\le \mu _{\beta _{p}^{*}},\) for all p, then \(f\left( \mu _{\beta _{p}^{*}}\right) \le f\left( \mu _{\beta _{p}}\right),\) for all \(p=1,2,\ldots, n,\)i.e.,\(\frac{1-\mu _{\beta _{p}^{*}}}{1+\mu _{\beta _{p}^{*}}}\le \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}}, \left( p=1,2,\ldots n\right).\)
Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\)\(\left( p=1,2,\ldots, n\right)\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then for all p, we have \(\left( \frac{1-\mu _{\beta _{p}^{*}}}{1+\mu _{\beta _{p}^{*}}}\right) ^{\varpi _{p}}\le \left( \frac{1-\mu _{\beta _{p}}}{1+\mu _{\beta _{p}}}\right) ^{\varpi _{p}},\left( p=1,2,\ldots, n\right).\) Thus
i.e.,
Let \(g(y)=\frac{2-y}{y},\)\(y\in (0,1],\) be the decreasing function on (0, 1]. Since \(\eta _{\beta _{p}}\ge \eta _{\beta _{p}^{*}}>0,\) for all p, then \(g\left( \eta _{{\beta }_{p}^{*}}\right) \ge g\left( \eta _{ {\beta }_{p}}\right),\) i.e.,\(\frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\ge \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta }_{p}}},\left( p=1,2,\ldots n\right).\) Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) we have \(\left( \frac{2-\eta _{{\beta } _{p}^{*}}}{\eta _{{\beta }_{p}^{*}}}\right) ^{\varpi _{p}}\ge \left( \frac{2-\eta _{{\beta }_{p}}}{\eta _{{\beta } _{p}}}\right) ^{\varpi _{p}},\left( p=1,2,\ldots, n\right).\) Thus
i.e.,
Note that (31) also holds even \(\eta _{{\beta }_{p}}=\eta _{{\beta }_{p}^{*}}=0,\) for all p,
Let \(h(z)=\frac{2-z}{z},\) be the decreasing function on (0, 1]. Since \(\Upsilon _{{\beta }_{p}}\ge \Upsilon _{{\beta }_{p}^{*}}>0,\) for all p, then \(h\left( \Upsilon _{{\beta }_{p}^{*}}\right) \ge h\left( \Upsilon _{{\beta }_{p}}\right).\) i.e.,\(\frac{ 2-\Upsilon _{{\beta }_{p}^{*}}}{\Upsilon _{{\beta } _{p}^{*}}}\ge \frac{2-\Upsilon _{{\beta }_{p}}}{\Upsilon _{{\beta }_{p}}},\left( p=1,2,\ldots, n\right).\) Let \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of \(\beta _{p}\) such that \(\varpi _{p}\in \left[ 0,1\right],\)\(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) we have \(\left( \frac{2-\Upsilon _{{\beta }_{p}^{*}}}{\Upsilon _{{\beta } _{p}^{*}}}\right) ^{\varpi _{p}}\ge \left( \frac{2-\Upsilon _{{\beta }_{p}}}{\Upsilon _{{\beta }_{p}}}\right) ^{\varpi _{p}}.\) Thus
i.e.,
Note that (32) also holds even \(\Upsilon _{{\beta }_{p}}=\Upsilon _{ {\beta }_{p}^{*}}=0,\) for all p,
Let \({\hbox {PFEWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{n})=\left( \mu _{\beta },\eta _{\beta },\Upsilon _{\beta }\right) =\beta\) and \({\hbox {PFEWA}}_{\varpi }(\beta _{1}^{*},\beta _{2}^{*},{\ldots},\beta _{n}^{*})=\left( \mu _{\beta ^{*}},\eta _{\beta ^{*}},\Upsilon _{\beta ^{*}}\right) =\beta ^{*}\) Then (30), (31) and (32) are transformed into the following forms, respectively;
\(\square\)
Picture fuzzy Einstein ordered weighted averaging operator
In this section, we develop the picture fuzzy Einstein ordered weighted averaging aggregation operator to aggregate picture fuzzy information and discuss some of its basic properties.
Definition 10
Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta }_{p}},\Upsilon _{{\beta }_{p}}\right) \left( p=1,2,\ldots, n\right)\) be a family of PFNs. A picture fuzzy Einstein OWA operator of dimension n is a mapping \({\hbox {PFEOWA}}:\left( L^{*}\right) ^{n}\rightarrow L^{*},\) which has an associated weighting vector \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) such that \(\varpi _{p}\in \left[ 0,1\right], p=1,2,\ldots, n,\) and \(\sum _{p=1}^{n}\varpi _{p}=1,\) and
such that \(\left( \delta \left( 1\right), \delta \left( 2\right), \ldots, \delta \left( n\right) \right)\) is a permutation of \(\left( 1,2,\ldots, n\right)\) with \(\beta _{\delta \left( p+1\right) }\le _{L}\beta _{\delta \left( p\right) }\) for all \(p=1,2,\ldots, n-1.\)
Theorem 4
Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta }_{p}},\Upsilon _{{\beta }_{p}}\right)\) \(\left( p=1,2,\ldots, n\right)\) be a family of PFNs; then the value aggregated by using the PFEOWA operator is again a PFV, and

where \(\left( \delta \left( 1\right), \delta \left( 2\right), \ldots, \delta \left( n\right) \right)\) is a permutation of \(\left( 1,2,\ldots, n\right)\) with \(\beta _{\delta \left( p+1\right) }\le _{L}\beta _{\delta \left( p\right) }\) for all \(p=1,2,\ldots, n-1,\) and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) is the weighting vector of the PFEOWA operator such that \(\varpi _{p}\in \left[ 0,1\right]\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1.\)
Proof
The proof of this theorem is similar to that of Theorem 3. \(\square\)
Corollary 3
The PFOWA operator and PFEOWA operator have the following relation
where \(\beta _{p}\) \(\left( p=1,2,\ldots, n\right)\) is a family of PFNs and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) is the weighting vector of \(\beta _{p}\) such that \(\varpi _{p}\in \left[ 0,1\right]\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1.\)
Proof
The proof is similar to that of Corollary 2. \(\square\)
Example 2
Let \(\beta _{1}=\left( 0.2,0.3,0.4\right), \beta _{2}=\left( 0.1,0.5,0.3\right), \beta _{3}=\left( 0.3,0.2,0.4\right),\)\(\beta _{4}=\left( 0.1,0.2,0.3\right)\) and \(\beta _{5}=\left( 0.3,0.1,0.4\right)\) be a five PFVs and the PFOWA operator has an associated vector \(\varpi =\left( 0.113,0.256,0.132,0.403,0.096\right) ^{T}\). Since \(\beta _{2}<\beta _{1}<\beta _{4}<\beta _{3}<\beta _{5},\) then \(\beta _{\delta \left( 1\right) }=\beta _{5}=\left( 0.3,0.1,0.4\right),\)\(\beta _{\delta \left( 2\right) }=\beta _{3}=\left( 0.3,0.2,0.4\right),\)\(\beta _{\delta \left( 3\right) }=\beta _{4}=\left( 0.1,0.2,0.3\right),\)\(\beta _{\delta \left( 4\right) }=\beta _{1}=\left( 0.2,0.3,0.4\right),\)\(\beta _{\delta \left( 5\right) }=\)\(\beta _{2}=\left( 0.1,0.5,0.3\right).\) Then, we compute the following partial values: \(\prod \nolimits _{p=1}^{5}(1+\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.2115,\)\(\prod \nolimits _{p=1}^{5}(1-\mu _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=0.7821,\)\(\prod \nolimits _{p=1}^{5}\eta _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}=0.2378,\)\(\prod \nolimits _{p=1}^{5}(2-\eta _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.7391,\prod \nolimits _{p=1}^{5}\Upsilon _{\beta _{\delta \left( p\right) }}^{\varpi _{p}}\)\(=0.3746,\)\(\prod \nolimits _{p=1}^{5}(2-\Upsilon _{\beta _{\delta \left( p\right) }})^{\varpi _{p}}=1.6222.\) By (45), it follows that
Example 3
If we use the PFOWA operator, which was developed by Xu [52], to aggregate the PFVs \(\beta _{p}\) \(\left( p=1,2,\ldots, 5\right),\) then we have
It is clear that \({\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5})<{\hbox {PFOWA}}_{\varpi }(\beta _{1},\beta _{2},\ldots, \beta _{5}).\)
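Example 2 can be checked the same way: the arguments are first rearranged into the stated descending order and the Einstein weighted products are then applied positionally. A self-contained sketch is given below (Python); the ordering is taken directly from the example rather than recomputed, since it depends on the comparison rules of Definition 5.

```python
from math import prod

# beta_1 .. beta_5 from Example 2 and the associated (positional) weights
betas = [(0.2, 0.3, 0.4), (0.1, 0.5, 0.3), (0.3, 0.2, 0.4), (0.1, 0.2, 0.3), (0.3, 0.1, 0.4)]
w     = [0.113, 0.256, 0.132, 0.403, 0.096]

# descending order beta_5 > beta_3 > beta_4 > beta_1 > beta_2, as stated in the example
order   = [4, 2, 3, 0, 1]
ordered = [betas[i] for i in order]

a = prod((1 + m) ** wp for (m, _, _), wp in zip(ordered, w))   # ~1.2115
b = prod((1 - m) ** wp for (m, _, _), wp in zip(ordered, w))   # ~0.7821
c = prod(e ** wp for (_, e, _), wp in zip(ordered, w))         # ~0.2378
d = prod((2 - e) ** wp for (_, e, _), wp in zip(ordered, w))   # ~1.7391
u = prod(t ** wp for (_, _, t), wp in zip(ordered, w))         # ~0.3746
f = prod((2 - t) ** wp for (_, _, t), wp in zip(ordered, w))   # ~1.6222

pfeowa_value = ((a - b) / (a + b), 2 * c / (d + c), 2 * u / (f + u))
print(pfeowa_value)   # roughly (0.215, 0.241, 0.375)
```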
Similar to the PFOWA operator, the PFEOWA operator has the following properties.
Proposition 3
Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)\) be a family of PFNs, and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots \varpi _{n}\right) ^{T}\) be the weighting vector of the PFEOWA operator, such that \(\varpi _{p}\in \left[ 0,1\right],\) \(\left( p=1,2,\ldots, n\right)\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\) then, we have the following.
(1) Idempotency: If all \(\beta _{p}\) are equal, i.e., \(\beta _{p}=\beta\) for all \(p=1,2,\ldots, n,\) then

$$\begin{aligned} {\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})={\beta } \end{aligned}$$

(2) Boundary:

$$\begin{aligned} {\beta }_{\min }\le {\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n})\le {\beta }_{\max } \end{aligned}$$

where \(\beta _{\min }=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}\) and \(\beta _{\max }=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\}.\)

(3) Monotonicity: Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta }_{p}},\Upsilon _{{\beta }_{p}}\right)\) and \(\beta _{p}^{*}=\left( \mu _{{\beta }_{p}^{*}},\eta _{{\beta }_{p}^{*}},\Upsilon _{{\beta }_{p}^{*}}\right)\) \(\left( p=1,2,\ldots, n\right)\) be two families of PFVs with \(\beta _{p}\le _{L}\beta _{p}^{*},\) i.e., \(\mu _{{\beta }_{p}}\le \mu _{{\beta }_{p}^{*}},\eta _{{\beta }_{p}}\ge \eta _{{\beta }_{p}^{*}}\) and \(\Upsilon _{{\beta }_{p}}\ge \Upsilon _{{\beta }_{p}^{*}}\), for all p; then

$$\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad \le {\hbox {PFEOWA}}_{\varpi }({\beta }_{1}^{*},{\beta } _{2}^{*},{\ldots },{\beta }_{n}^{*}). \end{aligned}$$

(4) Commutativity: Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta }_{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)\) be a family of PFNs; then for every \(\varpi\)

$$\begin{aligned}&{\hbox {PFEOWA}}_{\varpi }({\beta }_{1},{\beta }_{2},{\ldots},{\beta }_{n}) \\ &\quad ={\hbox {PFEOWA}}_{\varpi }({\beta }_{1}^{*},{\beta } _{2}^{*},{\ldots},{\beta }_{n}^{*}). \end{aligned}$$
where \((\beta _{1}^{*},\beta _{2}^{*},{\ldots},\beta _{n}^{*})\) is any permutation of \((\beta _{1},\beta _{2},{\ldots},\beta _{n}).\)
Besides the aforementioned properties, the PFEOWA operator has the following desirable results.
Proposition 4
Let \(\beta _{p}=\left( \mu _{{\beta }_{p}},\eta _{{\beta } _{p}},\Upsilon _{{\beta }_{p}}\right), \left( p=1,2,\ldots, n\right)\) be a family of PFVs, and \(\varpi =\left( \varpi _{1},\varpi _{2},\ldots, \varpi _{n}\right) ^{T}\) be the weighting vector of the PFEOWA operator, such that \(\varpi _{p}\in \left[ 0,1\right]\) and \(\sum _{p=1}^{n}\varpi _{p}=1;\left( p=1,2,\ldots, n\right)\) then, we have the following.
(1) If \(\varpi =\left( 1,0,\ldots, 0\right) ^{T},\) then \({\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\max \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\} ;\)

(2) If \(\varpi =\left( 0,0,\ldots, 1\right) ^{T},\) then \({\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\min \left\{ \beta _{1},\beta _{2},{\ldots},\beta _{n}\right\} ;\)

(3) If \(\varpi _{j}=1\) and \(\varpi _{p}=0\) \(\left( p\ne j\right),\) then \({\hbox {PFEOWA}}_{\varpi }(\beta _{1},\beta _{2},{\ldots},\beta _{n})=\beta _{\delta \left( j\right) },\) where \(\beta _{\delta \left( j\right) }\) is the jth largest of \(\beta _{p}\left( p=1,2,\ldots, n\right).\)
Application of the picture fuzzy Einstein weighted averaging operator to multiple attribute decision making
MADM problems are common in everyday decision environments. An MADM problem consists of finding the best compromise solution from all feasible alternatives assessed on multiple attributes. Let the discrete sets of alternatives and attributes be \(A=\left\{ A_{1},A_{2},\ldots, A_{n}\right\}\) and \(C=\left\{ C_{1},C_{2},\ldots, C_{m}\right\}\), respectively. Suppose the decision maker provides the PFVs of the alternatives \(A_{i}\) \(\left( i=1,2,\ldots, n\right)\) on the attributes \(C_{j}\) \(\left( j=1,2,\ldots, m\right)\) as \(k_{ij}=\left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right),\) where \(\mu _{ij},\eta _{ij}\) and \(\Upsilon _{ij}\) indicate the degrees to which the alternative \(A_{i}\) satisfies, is neutral towards, and does not satisfy the attribute \(C_{j}\), respectively, with \(0\le \mu _{ij}+\eta _{ij}+\Upsilon _{ij}\le 1\). Hence, an MADM problem can be concisely expressed as a picture fuzzy decision matrix \(K=\left( k_{ij}\right) _{n\times m}=\left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) _{n\times m}\).
Step 1 Find the normalized picture fuzzy decision matrix. Generally, attributes are of two types, benefit and cost; in other words, the attribute set C can be divided into two subsets, \(C_{1}\) and \(C_{2}\), containing the benefit attributes and the cost attributes, respectively. If all attributes of an MADM problem are of the same type (all benefit or all cost), then the rating values do not need normalization. If the attributes are of different types (benefit and cost), the rating values of one type can be converted into the other by means of the following formula.
Hence, we obtain the normalized picture fuzzy decision matrix \(S=\left( s_{ij}\right) _{n\times m}=\left( \left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) \right) _{n\times m},\) where \(k_{ij}^{c}\) is the complement of \(k_{ij}\).
Step 2 Utilize the PFWA and PFEWA operators to aggregate all the rating values \(s_{ij}\left( j=1,2,\ldots, m\right)\) of the ith row and obtain the overall rating value \(s_{i}\) corresponding to the alternative \(A_{i}\).
Step 3 Compute the score of each overall rating value \(s_{i}\) by using Eq. (4). Then rank the alternatives \(A_{i}(i=1,2,\ldots, n)\) in descending order of their score values and select the best one.
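The three steps can be put together in a short end-to-end sketch (Python). The normalization uses the complement \((\Upsilon, \eta, \mu )\) for the cost attributes, the aggregation uses the PFEWA closed form of Theorem 3, and, because Eq. (4) is not reproduced above, the score used for the final ranking is only an assumed placeholder; the small decision matrix at the end is hypothetical.

```python
from math import prod
from typing import List, Tuple

PFN = Tuple[float, float, float]  # (mu, eta, ups)

def normalize(row: List[PFN], cost_cols: set) -> List[PFN]:
    """Step 1: replace cost-attribute ratings by their complements (ups, eta, mu)."""
    return [(u, e, m) if j in cost_cols else (m, e, u)
            for j, (m, e, u) in enumerate(row)]

def pfewa(pfns: List[PFN], w: List[float]) -> PFN:
    """Step 2: aggregate one alternative's ratings with the PFEWA closed form."""
    a = prod((1 + x[0]) ** wp for x, wp in zip(pfns, w))
    b = prod((1 - x[0]) ** wp for x, wp in zip(pfns, w))
    c = prod(x[1] ** wp for x, wp in zip(pfns, w))
    d = prod((2 - x[1]) ** wp for x, wp in zip(pfns, w))
    e = prod(x[2] ** wp for x, wp in zip(pfns, w))
    f = prod((2 - x[2]) ** wp for x, wp in zip(pfns, w))
    return ((a - b) / (a + b), 2 * c / (d + c), 2 * e / (f + e))

def rank(matrix: List[List[PFN]], w: List[float], cost_cols: set) -> List[int]:
    """Step 3: rank alternatives by descending score of the aggregated PFNs."""
    score = lambda p: p[0] - p[2]        # assumed placeholder, NOT the paper's Eq. (4)
    totals = [pfewa(normalize(row, cost_cols), w) for row in matrix]
    return sorted(range(len(matrix)), key=lambda i: score(totals[i]), reverse=True)

# hypothetical 2-alternative, 3-attribute illustration (attribute 2 is a cost attribute)
K = [[(0.5, 0.2, 0.2), (0.4, 0.3, 0.2), (0.2, 0.1, 0.6)],
     [(0.3, 0.3, 0.3), (0.6, 0.1, 0.2), (0.5, 0.2, 0.2)]]
print(rank(K, [0.4, 0.3, 0.3], cost_cols={2}))   # indices of alternatives, best first
```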
Illustrative example
Let us consider a numerical example of a decision making problem in which the decision maker considers three different companies for the investment of money. Let \(A_{1},A_{2}\) and \(A_{3}\) represent a car company, a food company and an arms company, respectively, and let the criteria be: \(C_{1}\) risk analysis, \(C_{2}\) growth analysis, \(C_{3}\) investment cost, \(C_{4}\) social impact, \(C_{5}\) operating cost, \(C_{6}\) environmental impact analysis and \(C_{7}\) other factors. The opinions of the decision maker about these three companies with respect to the seven criteria are represented in Table 1.
Based on Table 1, we get the picture fuzzy decision matrix \(K=\left( k_{ij}\right) _{3\times 7}=\left( \left( \mu _{ij},\eta _{ij},\Upsilon _{ij}\right) \right) _{3\times 7}\). In this problem, the attributes \(C_{3}\) and \(C_{7}\) are cost attributes and all the others are benefit attributes; using Eq. (34), the picture fuzzy decision matrix K is transformed into the normalized matrix shown in Table 2. Let \(\varpi =(0.05,0.15,0.20,0.20,0.15,0.15,0.10)^{T}\) be the attribute weight vector. Using the PFEWA and PFWA operators, respectively, we obtain the overall rating values and the ranking orders of the alternatives \(A_{i}\left( i=1,2,3\right)\) given in Table 3.
It is clearly seen from Table 3 that using different operators, the overall rating values of the alternatives are different, but the ranking orders of the alternatives remain the same, and therefore, the best option is \(A_{1}\).
Comparison analysis
This section presents a comparative analysis of several existing aggregation operators for picture fuzzy information with the proposed Einstein aggregation operators. The rankings of the alternatives obtained by the existing methods and by the proposed operators are shown in the table below.
Overall ranking of the alternatives

| Operator | Ranking |
|---|---|
| PFWG [1] (existing) | \(H_{4}>H_{1}>H_{2}>H_{3}\) |
| PFOWG [1] (existing) | \(H_{4}>H_{2}>H_{1}>H_{3}\) |
| PFHWG [1] (existing) | \(H_{4}>H_{1}>H_{2}>H_{3}\) |
| GPFHWG [1] (existing) | \(H_{4}>H_{1}>H_{2}>H_{3}\) |
| PFEWA (proposed) | \(H_{4}>H_{1}>H_{2}>H_{3}\) |
| PFEOWA (proposed) | \(H_{4}>H_{1}>H_{2}>H_{3}\) |
The best alternative is \(H_{4}.\) The result obtained utilizing the Einstein weighted averaging operators is the same as the results of the existing methods. Hence, this study proposes novel Einstein aggregation operators to aggregate picture fuzzy information more effectively and efficiently. Utilizing the proposed Einstein aggregation operators, we find the best alternative from the set of alternatives given by the decision maker. Hence, the proposed MCDM technique based on these operators can be used to find the best alternative in decision support system applications.
Conclusion
In this paper, we have investigated the multiple attribute decision making (MADM) problem based on arithmetic aggregation operators and Einstein operations with picture fuzzy information. Motivated by the ideas of traditional arithmetic aggregation operators and Einstein operations, we have developed some aggregation operators for aggregating picture fuzzy information: the picture fuzzy Einstein aggregation operators. We have then utilized these operators to develop approaches for solving picture fuzzy multiple attribute decision making problems. Finally, a practical example concerning the investment of money in different companies is given to verify the developed approach and to demonstrate its practicality and effectiveness. In the future, the application of the proposed aggregation operators of PFSs needs to be explored in decision making, risk analysis and many other uncertain and fuzzy environments.
References
Ashraf, S., Mahmood, T., Abdullah, S., Khan, Q.: Different approaches to multi-criteria group decision making problems for picture fuzzy environment. Bull. Braz. Math. Soc. New Ser. (2018). https://doi.org/10.1007/s00574-018-0103-y
Ashraf, S., Abdullah, S., Mahmood, T., Ghani, F., Mahmood, T.: Spherical fuzzy sets and their applications in multi-attribute decision making problems. J. Intell. Fuzzy Syst. (Preprint), pp. 1–16. https://doi.org/10.3233/JIFS-172009
Ashraf, S., Abdullah, S.: Spherical aggregation operators and their application in multiattribute group decision-making. Int. J. Intell. Syst. 34(3), 493–523 (2019)
Atanassov, K.T.: Intuitionistic fuzzy sets. Fuzzy Sets Syst. 20, 87–96 (1986)
Atanassov, K.T.: An equality between intuitionistic fuzzy sets. Fuzzy Sets Syst. 79, 257–258 (1996)
Atanassov, K.T.: Intuitionistic Fuzzy Sets, pp. 1–137. Physica, Heidelberg (1999)
Arora, R., Garg, H.: A robust correlation coefficient measure of dual hesitant fuzzy soft sets and their application in decision making. Eng. Appl. Artif. Intell. 72, 80–92 (2018)
Chen, S.M., Chang, C.H.: Fuzzy multiattribute decision making based on transformation techniques of intuitionistic fuzzy values and intuitionistic fuzzy geometric averaging operators. Inf. Sci. 352, 133–149 (2016)
Chen, S.M., Cheng, S.H., Tsai, W.H.: Multiple attribute group decision making based on interval-valued intuitionistic fuzzy aggregation operators and transformation techniques of intervalvalued intuitionistic fuzzy values. Inf. Sci. 367–368(1), 418–442 (2016)
Cuong, B.C.: Picture fuzzy sets-first results. part 2, seminar neuro-fuzzy systems with applications. Institute of Mathematics, Hanoi (2013)
Cuong, B.C., Van Hai, P.: Some fuzzy logic operators for picture fuzzy sets. In: 2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), pp. 132–137. IEEE (2015, October)
De, S.K., Biswas, R., Roy, A.R.: An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst. 117(2), 209–213 (2001)
Deschrijver, G., Kerre, E.E.: A generalization of operators on intuitionistic fuzzy sets using triangular norms and conorms. Notes IFS 8(1), 19–27 (2002)
Deschrijver, G., Cornelis, C., Kerre, E.E.: On the representation of intuitionistic fuzzy t-norms and t-conorms. IEEE Trans. Fuzzy Syst. 12(1), 45–61 (2004)
Fahmi, A., Abdullah, S., Amin, F., Ali, A., Ahmad Khan, W.: Some geometric operators with triangular cubic linguistic hesitant fuzzy number and their application in group decision-making. J. Intell. Fuzzy Syst. 35, 2485–2499 (2018)
Fahmi, A., Amin, F., Abdullah, S., Ali, A.: Cubic fuzzy Einstein aggregation operators and its application to decision-making. Int. J. Syst. Sci. 49(11), 2385–2397 (2018)
Fahmi, A., Abdullah, S., Amin, F., Khan, M.S.A.: Trapezoidal cubic fuzzy number Einstein hybrid weighted averaging operators and its application to decision making. Soft Comput. (2018). https://doi.org/10.1007/s00500-018-3242-6
Goyal, M., Yadav, D., Tripathi, A.: Intuitionistic fuzzy genetic weighted averaging operator and its application for multiple attribute decision making in e-learning. Indian J. Sci. Technol. 9(1), 1–15 (2016)
Garg, H.: A new generalized Pythagorean fuzzy information aggregation using Einstein operations and its application to decision making. Int. J. Intell. Syst. 31(9), 886–920 (2016)
Garg, H.: Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 101, 53–69 (2016)
Garg, H.: Generalized Pythagorean fuzzy geometric aggregation operators using Einstein t-norm and t-conorm for multicriteria decision-making process. Int. J. Intell. Syst. 32(6), 597–630 (2017)
Garg, H.: Some picture fuzzy aggregation operators and their applications to multicriteria decision-making. Arab. J. Sci. Eng. 42(12), 5275–5290 (2017)
Garg, H.: Generalised Pythagorean fuzzy geometric interactive aggregation operators using Einstein operations and their application to decision making. J. Exp. Theor. Artif. Intell. 30, 1–32 (2018). https://doi.org/10.1080/0952813X.2018.1467497
Garg, H.: Some robust improved geometric aggregation operators under interval-valued intuitionistic fuzzy environment for multi-criteria decision-making process. J. Ind. Manag. Optim. 14(1), 283–308 (2018)
Garg, H.: A linear programming method based on an improved score function for interval-valued Pythagorean fuzzy numbers and its application to decision-making. Int. J. Uncertain. Fuzziness Knowl. based Syst. 26(01), 67–80 (2018)
Garg, H., Kumar, K.: An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set pair analysis theory and their application in decision making. Soft Comput. 22, 4959–4970 (2018)
Garg, H.: New exponential operational laws and their aggregation operators for interval-valued Pythagorean fuzzy multicriteria decision-making. Int. J. Intell. Syst. 33(3), 653–683 (2018)
Huang, J.Y.: Intuitionistic fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 27(1), 505–513 (2014)
Hong, D.H., Choi, C.H.: Multicriteria fuzzy decision-making problems based on vague set theory. Fuzzy Sets Syst. 114(1), 103–113 (2000)
Khan, M.S.A., Abdullah, S.: Interval-valued Pythagorean fuzzy GRA method for multiple-attribute decision making with incomplete weight information. Int. J. Intell. Syst. 33(8), 1689–1716 (2018)
Khan, A.A., Ashraf, S., Abdullah, S., Qiyas, M., Luo, J., Khan, S.U.: Pythagorean fuzzy Dombi aggregation operators and their application in decision support system. Symmetry 11(3), 383 (2019)
Khatibi, V., Montazer, G.A.: Intuitionistic fuzzy set vs. fuzzy set application in medical pattern recognition. Artif. Intell. Med. 47(1), 43–52 (2009)
Kaur, G., Garg, H.: Cubic intuitionistic fuzzy aggregation operators. Int. J. Uncertain. Quantif. 8(5), 405–427 (2018)
Kaur, G., Garg, H.: Multi-attribute decision-making based on Bonferroni mean operators under cubic intuitionistic fuzzy set environment. Entropy 20(1), 65 (2018)
Kumar, K., Garg, H.: TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment. Comput. Appl. Math. 37(2), 1319–1329 (2018)
Lu, M., Wei, G., Alsaadi, F.E., Hayat, T., Alsaedi, A.: Bipolar 2-tuple linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 33(2), 1197–1207 (2017)
Phong, P.H., Hieu, D.T., Ngan, R.T., Them, P.T.: Some compositions of picture fuzzy relations. In: Proceedings of the 7th National Conference on Fundamental and Applied Information Technology Research (FAIR’7), Thai Nguyen, pp. 19–20 (2014, June)
Rani, D., Garg, H.: Complex intuitionistic fuzzy power aggregation operators and their applications in multicriteria decision-making. Expert Syst. 35(6), e12325 (2018)
Singh, S., Garg, H.: Distance measures between type-2 intuitionistic fuzzy sets and their application to multicriteria decision-making process. Appl. Intell. 46(4), 788–799 (2017)
Son, L.H.: Generalized picture distance measure and applications to picture fuzzy clustering. Appl. Soft Comput. 46(C), 284–295 (2016)
Thong, P.H.: A new approach to multi-variable fuzzy forecasting using picture fuzzy clustering and picture fuzzy rule interpolation method. In: Knowledge and Systems Engineering, pp. 679–690. Springer, Cham (2015)
Wang, L., Peng, J.J., Wang, J.Q.: A multi-criteria decision-making framework for risk ranking of energy performance contracting project under picture fuzzy environment. J. Clean. Prod. 191, 105–118 (2018)
Wang, L., Zhang, H.Y., Wang, J.Q., Li, L.: Picture fuzzy normalized projection-based VIKOR method for the risk evaluation of construction project. Appl. Soft Comput. 64, 216–226 (2018)
Wang, W., Liu, X.: Intuitionistic fuzzy geometric aggregation operators based on Einstein operations. Int. J. Intell. Syst. 26(11), 1049–1075 (2011)
Wang, W., Liu, X.: Intuitionistic fuzzy information aggregation using Einstein operations. IEEE Trans. Fuzzy Syst. 20(5), 923–938 (2012)
Wei, G.: Picture fuzzy cross-entropy for multiple attribute decision making problems. J. Bus. Econ. Manag. 17(4), 491–502 (2016)
Wei, G.: Picture fuzzy aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 33(2), 713–724 (2017)
Wei, G.: Some cosine similarity measures for picture fuzzy sets and their applications to strategic decision making. Informatica 28(3), 547–564 (2017)
Wei, G.: Picture 2-tuple linguistic Bonferroni mean operators and their application to multiple attribute decision making. Int. J. Fuzzy Syst. 19(4), 997–1010 (2017)
Wei, G., Alsaadi, F.E., Hayat, T., Alsaedi, A.: Projection models for multiple attribute decision making with picture fuzzy information. Int. J. Mach. Learn. Cybern. 9(4), 713–719 (2018)
Xu, Z., Chen, J., Wu, J.: Clustering algorithm for intuitionistic fuzzy sets. Inf. Sci. 178(19), 3775–3790 (2008)
Xu, Z., Cai, X.: Recent advances in intuitionistic fuzzy information aggregation. Fuzzy Optim. Decis. Mak. 9(4), 359–381 (2010)
Xu, Z., Cai, X.: Intuitionistic fuzzy information aggregation. In: Intuitionistic Fuzzy Information Aggregation, pp. 1–102. Springer, Berlin (2012)
Xu, Z., Yager, R.R.: Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst. 35(4), 417–433 (2006)
Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
Zeng, S., Asharf, S., Arif, M., Abdullah, S.: Application of exponential Jensen picture fuzzy divergence measure in multi-criteria group decision making. Mathematics 7(2), 191 (2019)
Zhao, X., Wei, G.: Some intuitionistic fuzzy Einstein hybrid aggregation operators and their application to multiple attribute decision making. Knowl. Based Syst. 37, 472–479 (2013)