AHP-Group Decision Making: A Bayesian Approach Based on Mixtures for Group Pattern Identification

Published in Group Decision and Negotiation.

Abstract

This paper proposes a Bayesian estimation procedure to determine the priorities of the Analytic Hierarchy Process (AHP) in group decision making when there is a large number of actors and a prior consensus among them is not required. Using a hierarchical Bayesian approach based on mixtures to describe the prior distribution of the priorities in the multiplicative model traditionally used in the stochastic AHP, this methodology allows us to identify homogeneous groups of actors with different patterns of behaviour for the rankings of priorities. The proposed procedure is a two-step estimation algorithm: the first step carries out a global exploration of the model space by means of birth and death processes; the second carries out a local exploration by means of Gibbs sampling. The methodology is illustrated by the analysis of a case study adapted from a real e-democracy experiment developed for the City Council of Zaragoza (Spain).


References

  • Aczél J., Saaty T. L. (1983). Procedures for Synthesizing Ratio Judgements. Journal of Mathematical Psychology 27(1): 93–102

  • Aguarón J., Moreno-Jiménez J. M. (2000). Stability Intervals in the Analytic Hierarchy Process. European Journal of Operational Research 125: 114–133

  • Aguarón J., Moreno-Jiménez J. M. (2003). The Geometric Consistency Index. Approximated Thresholds. European Journal of Operational Research 147(1): 137–145

  • Aitchison J. (1986). The Statistical Analysis of Compositional Data. Chapman & Hall

  • Alho J., Kangas J., Kolehmainen O. (1996). Uncertainty in Expert Predictions of the Ecological Consequences of Forest Plans. Applied Statistics 45: 1–14

  • Alho J., Kangas J. (1997). Analyzing Uncertainties in Experts' Opinions of Forest Plan Performance. Forest Science 43: 521–528

  • Basak I. (1998). Probabilistic Judgements Specified Partially in the Analytic Hierarchy Process. European Journal of Operational Research 108: 153–164

  • Basak I. (2001). The Categorical Data Analysis Approach for Ratio Model of Pairwise Comparisons. European Journal of Operational Research 128: 532–544

  • Bryson N., Joseph A. (1999). Generating Consensus Priority Points Vectors: A Logarithmic Goal Programming Approach. Computers and Operations Research 80: 333–345

  • Casella G., Robert C. P. (1996). Rao-Blackwellisation of Sampling Schemes. Biometrika 83: 81–94

  • Crawford G., Williams C. (1985). A Note on the Analysis of Subjective Judgment Matrices. Journal of Mathematical Psychology 29: 387–405

  • Escobar M. T., Moreno-Jiménez J. M. (2000). Reciprocal Distributions in the Analytic Hierarchy Process. European Journal of Operational Research 123(1): 154–174

  • Escobar M. T., Moreno-Jiménez J. M. (2002). A Linkage between the Analytic Hierarchy Process and the Compromise Programming Models. Omega 30(5): 359–365

  • Escobar M. T., Moreno-Jiménez J. M. (2005). "Aggregation of Individual Preference Structures (AIPS)." In: Proceedings of the Group Decision and Negotiation Conference (GDN2005), CD, Vienna (Austria)

  • Escobar M. T., Aguarón J., Moreno-Jiménez J. M. (2000). "Consensus Building with Reciprocal Distributions in AHP." Presented at EURO XVII, 17th European Conference on Operational Research, Budapest (Hungary)

  • Escobar M. T., Aguarón J., Moreno-Jiménez J. M. (2001). Negotiation Processes in Highly Complex Problems [Procesos de Negociación en Problemas de Alta Complejidad]. In: Actas de la XV Reunión Asepelt-España, La Coruña

  • Fichtner J. (1986). On Deriving Priority Vectors from Matrices of Pairwise Comparisons. Socio-Economic Planning Sciences 20(6): 341–345

  • Forman E., Peniwati K. (1998). Aggregating Individual Judgments and Priorities with the Analytic Hierarchy Process. European Journal of Operational Research 108: 165–169

  • French S. (2004). Web-enabled Strategy GDSS, e-Democracy and Arrow's Theorem: A Bayesian Perspective (http://www.esf.org/ted, accessed 2005)

  • Gelman A., Rubin D. (1992). Inference from Iterative Simulation Using Multiple Sequences (with Discussion). Statistical Science 7: 457–511

  • Gelman A., Carlin J. B., Stern H. S., Rubin D. (2005). Bayesian Data Analysis, 2nd edn. Texts in Statistical Science, Chapman & Hall/CRC

  • Genest C., Rivest L. P. (1994). A Statistical Look at Saaty's Method of Estimating Pairwise Preferences Expressed on a Ratio Scale. Journal of Mathematical Psychology 38: 477–496

  • Geweke J. (1992). Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments (with discussion). In: Bernardo J. M., Berger J. O., Dawid A. P., Smith A. F. M. (eds), Bayesian Statistics, vol. 4. Oxford University Press, Oxford, pp 169–193

  • Gilks W. R., Richardson S., Spiegelhalter D. J. (eds) (1996). Markov Chain Monte Carlo in Practice. Chapman & Hall, London

  • González-Pachón J., Romero C. (1999). Distance-based Consensus Methods: A Goal Programming Approach. Omega 27: 341–347

  • Hand D., Mannila H., Smyth P. (2001). Principles of Data Mining. The MIT Press, Cambridge, Massachusetts

  • Laininen P., Hämäläinen R. P. (2003). Analyzing AHP-matrices by Regression. European Journal of Operational Research 148: 514–524

  • Leskinen P., Kangas J. (1998). Analyzing Uncertainties of Interval Judgement Data in Multiple-Criteria Evaluation of Forest Plans. Silva Fennica 32: 363–372

  • Moreno-Jiménez J. M. (2003). "Statistical Methods in the New Scientific Method" ["Los Métodos Estadísticos en el Nuevo Método Científico"]. In: Casas J. M., Pulido A. (eds), Información económica y técnicas de análisis en el siglo XXI, INE, pp. 331–348

  • Moreno-Jiménez J. M., Escobar M. T. (2000). Regret in the Analytic Hierarchy Process [El Pesar en el Proceso Analítico Jerárquico]. Estudios de Economía Aplicada 14: 95–115

  • Moreno-Jiménez J. M., Polasek W. (2003). E-Democracy and Knowledge. A Multicriteria Framework for the New Democratic Era. Journal of Multi-Criteria Decision Analysis 12: 163–176

  • Moreno-Jiménez J. M., Vargas L. G. (1993). A Probabilistic Study of Preference Structures in the Analytic Hierarchy Process with Interval Judgments. Mathematical and Computer Modelling 17(4–5): 73–81

  • Moreno-Jiménez J. M., Aguarón J., Escobar M. T., Turón A. (1999). The Multicriteria Procedural Rationality on SISDEMA. European Journal of Operational Research 119(2): 388–403

  • Moreno-Jiménez J. M., Aguarón J., Escobar M. T. (2001). Scientific Methodology in Environmental Valuation and Selection [Metodología científica en valoración y selección ambiental]. Pesquisa Operacional 21: 3–18

  • Moreno-Jiménez J. M., Aguarón J., Escobar M. T. (2002). "Decisional Tools for Consensus Building in AHP-Group Decision Making." In: 12th Mini EURO Conference, Brussels

  • Moreno-Jiménez J. M., Aguarón J., Raluy A., Turón A. (2005). A Spreadsheet Module for Consistent AHP-Consensus Building. Group Decision and Negotiation 14(2): 89–108

  • Moreno-Jiménez J. M., Salvador M., Turón A. (2005). "Preference Structures in AHP Group Decision Making." In: 2nd Compositional Data Analysis Workshop (CODAWork05), CD, Gerona (Spain)

  • Ramanathan R., Ganesh L. S. (1994). Group Preference Aggregation Methods Employed in AHP: An Evaluation and an Intrinsic Process for Deriving Members' Weightages. European Journal of Operational Research 79: 249–265

  • Richardson S., Green P. J. (1997). On Bayesian Analysis of Mixtures with an Unknown Number of Components. Journal of the Royal Statistical Society Series B 59(4): 731–792

  • Robert C. P., Casella G. (1999). Monte Carlo Statistical Methods. Springer-Verlag, New York

  • Saaty T. L. (1977). A Scaling Method for Priorities in Hierarchical Structures. Journal of Mathematical Psychology 15(3): 234–281

  • Saaty T. L. (1980). Multicriteria Decision Making: The Analytic Hierarchy Process. McGraw-Hill, New York

  • Saaty T. L. (1989). Group Decision-Making and the AHP. In: Golden B. L., Wasil E. A., Harker P. T. (eds), The Analytic Hierarchy Process: Applications and Studies. Springer-Verlag, New York, pp 59–67

  • Saaty T. L. (1994). Fundamentals of Decision Making. RWS Publications

  • Saaty T. L. (1996). The Analytic Network Process. RWS Publications

  • Stephens M. (2000). Bayesian Analysis of Mixture Models with an Unknown Number of Components: An Alternative to Reversible Jump Methods. Annals of Statistics 28: 40–74

  • Vargas L. G. (1982). Reciprocal Matrices with Random Coefficients. Mathematical Modelling 3: 69–81

  • Wei Q., Yan H., Ma J., Fan Z. (2000). A Compromise Weight for Multi-criteria Group Decision Making with Individual Preference. Journal of the Operational Research Society 51: 625–634


Acknowledgements

The authors wish to thank the three anonymous referees for their helpful observations and suggestions on two earlier versions of this paper, which we believe have significantly improved this revised text. They are also grateful to Stephen Wilkins for his help in preparing the final draft of the manuscript.

Author information

Correspondence to José María Moreno-Jiménez.

Additional information

Partially funded under the research project Electronic Government. Internet-based Complex Decision Making: e-democracy and e-cognocracy (Ref. PM2004-052) approved by the Regional Government of Aragon (Spain) as part of the multi-disciplinary projects programme.

Appendices

Appendix A

Posterior distribution of θ

In order to simplify the calculation of the posterior distribution, the auxiliary vectors \({\{{\bf z}_{k}=(z_{k1},\ldots,{z}_{km})^{\prime}; k\,=\,1,\ldots,r\}}\) are introduced. These vectors indicate the component of the mixture (3) to which decision maker D\({_k}\) belongs, in such a way that \({{\bf z}_{k}={\bf e}_{\ell,m}}\) (the \({\ell}\)th coordinate vector of \({{\bf R}^{m}}\)) with probability \({\pi_{\ell};\ \ell=1,\ldots,m}\).
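As a minimal numpy sketch of this latent-allocation device (function and variable names are hypothetical, not from the paper), each \({{\bf z}_k}\) can be generated as a one-hot row of an identity matrix:

```python
import numpy as np

def draw_indicators(pi, r, rng=None):
    """Draw the one-hot allocation vectors z_1, ..., z_r:
    z_k equals the coordinate vector e_l with probability pi[l],
    marking the mixture component of decision maker D_k."""
    rng = np.random.default_rng(rng)
    labels = rng.choice(len(pi), size=r, p=pi)  # component index per actor
    return np.eye(len(pi))[labels]              # rows are coordinate vectors e_l

z = draw_indicators([0.5, 0.3, 0.2], r=10, rng=0)
```

Each row of `z` then selects exactly one of the m mixture components for the corresponding actor.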

Using Bayes' theorem, the joint posterior distribution of \({{\varvec \theta}}\) and \({{\bf z}=({\bf z}_{1},\ldots,{\bf z}_{r})}\), \({\left({\varvec \theta},{\bf z}\right)\left|{{\bf y}}^{(1)},\ldots,{\bf y}^{(r)}\right.}\), is given by

$$ \begin{aligned} \left[\left(\varvec{\theta},{\bf z}\right)\left|\,{\bf y}^{(1)},\ldots,{\bf y}^{(r)}\right.\right] &\propto \left[{\bf y}^{(1)},\ldots,{\bf y}^{(r)}\left|\left(\varvec{\theta},{\bf z}\right)\right.\right]\left[\left(\varvec{\theta},{\bf z}\right)\right] = \left\{\prod\limits_{k=1}^{r}\left[{\bf y}^{(k)}\left|\left(\varvec{\theta},{\bf z}_{k}\right)\right.\right]\left[{\bf z}_{k}\left|\varvec{\theta}\right.\right]\right\}\left[\varvec{\theta}\right]\\ &\propto \prod\limits_{k=1}^{r}\left\{\left[{\bf y}^{(k)}\left|\varvec{\mu}^{(k)},\tau^{(k)}\right.\right]\left[\tau^{(k)}\right]\prod\limits_{\ell=1}^{m}\left[\varvec{\mu}^{(k)}\left|\varvec{\mu}_{G}^{(\ell)},\varvec{\Sigma}_{G}^{(\ell)}\right.\right]^{z_{k\ell}}\pi_{\ell}^{z_{k\ell}}\right\}\prod\limits_{\ell=1}^{m}\left\{\left[\varvec{\mu}_{G}^{(\ell)}\right]\left[\varvec{\Sigma}_{G}^{(\ell)}\right]\right\}\left[\varvec{\pi}\,|\,m\right]\left[m\right]\\ &\propto \prod\limits_{k=1}^{r}\left\{\left(\tau^{(k)}\right)^{\frac{J}{2}-1}\exp\left[-\tau^{(k)}\frac{\left({\bf y}^{(k)}-{\bf X}\varvec{\mu}^{(k)}\right)^{\prime}\left({\bf y}^{(k)}-{\bf X}\varvec{\mu}^{(k)}\right)}{2}\right]I_{(0,\infty)}\left(\tau^{(k)}\right)\right\}\\ &\quad\times\prod\limits_{\ell=1}^{m}\prod\limits_{k=1}^{r}\left\{\left|\varvec{\Sigma}_{G}^{(\ell)}\right|^{-\frac{z_{k\ell}}{2}}\exp\left[-\frac{z_{k\ell}\left(\varvec{\mu}^{(k)}-\varvec{\mu}_{G}^{(\ell)}\right)^{\prime}\left(\varvec{\Sigma}_{G}^{(\ell)}\right)^{-1}\left(\varvec{\mu}^{(k)}-\varvec{\mu}_{G}^{(\ell)}\right)}{2}\right]\right\}\\ &\quad\times\prod\limits_{\ell=1}^{m}\left\{\exp\left[-\tau_{\mu_{G}}\frac{\left(\varvec{\mu}_{G}^{(\ell)}\right)^{\prime}\varvec{\mu}_{G}^{(\ell)}}{2}\right]\left|\varvec{\Sigma}_{G}^{(\ell)}\right|^{-\frac{n_{0}+n-1}{2}}\exp\left[-\frac{1}{2}\,\hbox{trace}\left(n_{0}{\bf D}_{0}\left(\varvec{\Sigma}_{G}^{(\ell)}\right)^{-1}\right)\right]I_{S}\left(\varvec{\Sigma}_{G}^{(\ell)}\right)\right\}\\ &\quad\times\left\{\prod\limits_{\ell=1}^{m}\pi_{\ell}^{\sum_{k=1}^{r}z_{k\ell}}\right\}I_{P_{m}}\left(\varvec{\pi}\right)\frac{\lambda^{m}}{m!} \end{aligned} $$
(A.1)

where \({I_{A}(x)=1}\) if \({x\in A}\) and 0 otherwise, \({S=\{{{\varvec \Sigma}}_{(n-1)\times(n-1)}}\) symmetric and positive semi-definite\({\}}\), and \({P_{m}=\left\{{{\varvec \pi}}\in{\bf R}^{m}:\sum_{\ell=1}^{m}{\pi_\ell}=1\ {\rm and}\ \pi_\ell\geq 0;\ \ell=1,\ldots,m\right\}}\).

Given that the distribution (A.1) has no tractable analytical form, we use MCMC methods to draw a sample that allows us to make inferences on the different components of the parameter vector \({{\varvec \theta}}\). Appendix B describes the algorithm used to draw this sample.

Appendix B

Algorithm to draw a sample from the posterior distribution (A.1)

Note that, for each value of m, we have a two-level family of hierarchical models, given by the following equations:

$$ {\bf EQ}_{1}:\; y_{ij}^{(k)}=\mu_{i}^{(k)}-\mu_{j}^{(k)}+\varepsilon_{ij}^{(k)};\quad i=1,\ldots,n-1;\enspace j=i+1,\ldots,n;\enspace k=1,\ldots,r $$
$$ {\bf EQ}_{2,m}:\;{\varvec \mu}^{(k)}\sim G=\sum\limits_{\ell=1}^{m}\pi_\ell\,N_{n-1}\left({\varvec \mu}_{G}^{(\ell)},{\varvec \Sigma}_{G}^{(\ell)}\right);\quad k=1,\ldots,r $$

where the parameter vector is \({{\varvec \theta}=({\varvec \theta}_{1},m,{\varvec \theta}_{2,m})}\), with \({{\varvec \theta}_1=\left\{\left(\tau^{(k)},{\varvec \mu}^{(k)}\right),\;k=1,\ldots,r\right\}}\) and \({{\varvec \theta}_{2,m}=\left\{\left(\pi_\ell,{\varvec \mu}_{G}^{(\ell)},{\varvec \Sigma}_{G}^{(\ell)}\right),\;\ell=1,\ldots,m\right\}}\), whose dimensionality changes according to the value of m.
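The two-level model can be forward-simulated in a few lines. The sketch below (hypothetical names; for brevity a single common precision tau is used, whereas the paper has one \({\tau^{(k)}}\) per actor) draws each \({{\varvec \mu}^{(k)}}\) from the mixture EQ2 and then builds the log pairwise comparisons of EQ1, fixing the n-th log-priority at 0 as the reference:

```python
import numpy as np

def simulate_model(pi, mu_G, Sigma_G, tau, r, rng=None):
    """Simulate mu^(k) ~ sum_l pi_l N(mu_G[l], Sigma_G[l])  (EQ2), then
    y_ij^(k) = mu_i^(k) - mu_j^(k) + eps_ij^(k)  (EQ1),
    with mu_n = 0 as reference and eps ~ N(0, 1/tau)."""
    rng = np.random.default_rng(rng)
    m, n1 = mu_G.shape                        # m components, n-1 free coordinates
    labels = rng.choice(m, size=r, p=pi)
    mu = np.array([rng.multivariate_normal(mu_G[l], Sigma_G[l]) for l in labels])
    full = np.hstack([mu, np.zeros((r, 1))])  # append reference mu_n = 0
    i, j = np.triu_indices(n1 + 1, k=1)       # all pairs i < j
    eps = rng.normal(0.0, 1.0 / np.sqrt(tau), size=(r, len(i)))
    return full[:, i] - full[:, j] + eps, mu, labels

y, mu, labels = simulate_model(
    pi=[0.6, 0.4],
    mu_G=np.array([[0.0, 0.5], [1.0, -0.5]]),
    Sigma_G=np.array([0.1 * np.eye(2), 0.1 * np.eye(2)]),
    tau=4.0, r=20, rng=1)
```

With n = 3 alternatives each actor contributes J = n(n-1)/2 = 3 log comparisons.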

Therefore, there is a space of models underlying the problem, \({\{{\bf M}_{m}; m\,=\,1, 2, \ldots\}}\), where \({{\bf M}_{m}}\) is given by equations \({{\bf EQ}_{1}}\) and \({{\bf EQ}_{2,m}}\). With the aim of exploring this space and obtaining samples from the posterior distribution (A.1), we follow the methodology based on birth–death Markov processes developed by Stephens (2000).

In order to explain the algorithm, we define the set \({\Omega =\bigcup\limits_{{m}=1}^\infty {\Omega_{m}}}\) where

$$ \Omega_{m}=\left\{\left(\pi_1,{\varvec \mu}_{G}^{(1)},{\varvec \Sigma}_{G}^{(1)}\right),\ldots,\left(\pi_{m},{\varvec \mu}_{G}^{(m)},{\varvec \Sigma}_{G}^{(m)}\right):{\varvec \pi}\in {\bf P}_{m},\ {\varvec \mu}_{G}^{(\ell)}\in{\bf R}^{n-1},\ {\varvec \Sigma}_{G}^{(\ell)}\in S;\ \ell=1,\ldots,m\right\} $$

is the parameter space of the distribution G for a fixed number m of mixture components (3). Therefore, Ω constitutes the model space in which the process of global exploration is developed.

This global exploration process consists of choosing a move and implementing it. There are two types of possible moves: the incorporation (birth) of a new component of the mixture (3) or the removal (death) of one of its components. We define births and deaths on Ω as follows:

A birth or incorporation of a new component \({\left({\pi_{{m}+1} ,{{\varvec \mu}}}_{G}^{({m}+1)},{{\varvec \Sigma}}_{G}^{({ m}+1)}\right)}\) is said to occur when the process jumps from \({\Omega_{m}}\) to \({\Omega_{{m}+1}}\) where:

$$ \begin{aligned} \Omega_{m+1}=\left\{\left(\pi_1\left(1-\pi_{m+1}\right),{\varvec \mu}_{G}^{(1)},{\varvec \Sigma}_{G}^{(1)}\right),\ldots,\left(\pi_{m}\left(1-\pi_{m+1}\right),{\varvec \mu}_{G}^{(m)},{\varvec \Sigma}_{G}^{(m)}\right),\left(\pi_{m+1},{\varvec \mu}_{G}^{(m+1)},{\varvec \Sigma}_{G}^{(m+1)}\right)\right\}\\ \pi_{m+1}\sim\hbox{Beta}(1,m),\quad {\varvec \mu}_{G}^{(m+1)}\sim N_{n-1}\left({\bf 0}_{n-1},\frac{1}{\tau_{\mu_{G}}}{\bf I}_{n-1}\right)\quad\hbox{and}\quad{\varvec \Sigma}_{G}^{(m+1)}\sim\hbox{IW}\left(n_0,{\bf D}_0\right) \end{aligned} $$
(B.1)

The death or removal of the \({\ell}\)th component of \({\Omega_{m}}\), \({\left(\pi_\ell,{\varvec \mu}_{G}^{(\ell)},{\varvec \Sigma}_{G}^{(\ell)}\right)}\), is said to occur when the process jumps from \({\Omega_{m}}\) to \({\Omega_{m-1}^{(-\ell)}}\), where:

$$ \Omega_{m-1}^{(-\ell)}=\left\{\left(\frac{\pi_1}{1-\pi_\ell},{\varvec \mu}_{G}^{(1)},{\varvec \Sigma}_{G}^{(1)}\right),\ldots,\left(\frac{\pi_{\ell-1}}{1-\pi_\ell},{\varvec \mu}_{G}^{(\ell-1)},{\varvec \Sigma}_{G}^{(\ell-1)}\right),\left(\frac{\pi_{\ell+1}}{1-\pi_\ell},{\varvec \mu}_{G}^{(\ell+1)},{\varvec \Sigma}_{G}^{(\ell+1)}\right),\ldots,\left(\frac{\pi_{m}}{1-\pi_\ell},{\varvec \mu}_{G}^{(m)},{\varvec \Sigma}_{G}^{(m)}\right)\right\} $$
(B.2)
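The weight bookkeeping of the birth move (B.1) and the death move (B.2) can be sketched as follows (hypothetical helper names; drawing the new component's \({({\varvec \mu}_G,{\varvec \Sigma}_G)}\) from the prior is omitted here):

```python
import numpy as np

def birth(pi, pi_new):
    """Birth (B.1): shrink the existing weights by (1 - pi_new) and append
    the new component's weight pi_new ~ Beta(1, m)."""
    return np.append(np.asarray(pi) * (1 - pi_new), pi_new)

def death(pi, ell):
    """Death (B.2): drop component ell and renormalise the survivors
    by 1 / (1 - pi[ell])."""
    pi = np.asarray(pi)
    return np.delete(pi, ell) / (1 - pi[ell])

pi3 = birth(np.array([0.5, 0.3, 0.2]), pi_new=0.25)  # m = 3 -> 4
pi2 = death(np.array([0.5, 0.3, 0.2]), ell=1)        # m = 3 -> 2
```

Both moves preserve the simplex constraint: the resulting weights always sum to one.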

The algorithm consists of two steps that alternate in each iteration. In the first step, a global exploration of the model space \({\{{\bf M}_{m}, m\,=\,1, 2, \ldots\}}\) is carried out by means of the birth–death point processes developed by Stephens (2000). In the second, a local exploration of a model \({{\bf M}_{m}}\) with m fixed is carried out by applying Gibbs sampling (Robert and Casella 1999) to the model \({{\bf M}_{m}}\) during a number \({\hbox{IT}_{\rm GS}}\) of iterations fixed beforehand by the analyst.

The scheme of the algorithm is as follows:

Step 0: Start

The maximum number of algorithm iterations, \({\hbox{IT}_{\rm max}}\), and the number of iterations for the Gibbs sampling, \({\hbox{IT}_{\rm GS}}\), used for carrying out a Bayesian analysis of each model explored by the algorithm, are fixed. \({m^{(0)}}\) is drawn from a \({\hbox{Poisson}(\lambda_{0})}\) distribution and a set of objects

$$ \Omega_{m^{(0)}}^{(0)}=\left\{\left(\pi_1^{(0)},{\varvec \mu}_{G}^{(0,1)},{\varvec \Sigma}_{G}^{(0,1)}\right),\ldots,\left(\pi_{m^{(0)}}^{(0)},{\varvec \mu}_{G}^{(0,m^{(0)})},{\varvec \Sigma}_{G}^{(0,m^{(0)})}\right)\right\} $$

is obtained using, for example, the prior distribution (4)–(8). Thereafter, the auxiliary vectors \({{\bf z}^{(0)}=\left\{{{\bf z}_1^{(0)},\ldots,{\bf z}_{r}^{(0)}}\right\}}\) with \({{\bf z}_{k}^{(0)}=\left({{z}_{k1}^{(0)},\ldots,{z}_{{km}^{(0)}}^{(0)}}\right)^{\prime}}\); \({k=1,\ldots,r}\) are generated in such a way that \({{\bf z}_{k}^{(0)}}\) is equal to the \({\ell}\)th coordinate vector of \({{\bf R}^{{m}^{(0)}}}\) with probability \({\pi_\ell^{(0)}}\); \({\ell=1,\ldots,{m}^{(0)}}\). From these, \({\{{{\varvec \mu}}^{(0,{k})};\,{k}=1,\ldots,{r}\}}\) are generated by means of the normal distribution \({N_{{n}-1}\left({\sum_{\ell=1}^{{m}^{(0)}}{{z}_{{k}\ell}^{(0)}{{\varvec \mu}}}_{G}^{(0,\ell)}},\sum_{\ell=1}^{{m}^{(0)}}{{z}_{{k}\ell}^{(0)}{{\varvec \Sigma}}}_{G}^{(0,\ell)}\right)}\). The iteration counter it is initialised (it = 1) and the following steps are repeated until \({\hbox{it} > \hbox{IT}_{\rm max}}\).

Step 1: Local exploration by means of Gibbs sampling

Steps 1(a) to 1(f) are executed during \({\hbox{IT}_{\rm GS}}\) iterations

Step 1(a): Draw \({\left\{{\tau^{({{\rm it},k})};{k}=1,\ldots,{r}} \right\}}\) from

$$ \hbox{Gamma}\left({\frac{{J}+{n}_{1}}{2}, \frac{\left({{\bf y}^{({k})}-{\bf X}{{\varvec \mu}}^{({\rm it}-1, {k})}}\right)^{\prime}\left({{\bf y}^{({k})}-{\bf X}{{\varvec \mu}} ^{({\rm it}-1,{k})}}\right)+{d}_{1}}{2}}\right) $$
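Step 1(a) is a standard conjugate Gamma draw. A sketch (hypothetical names; \({n_1}\) and \({d_1}\) are the Gamma prior hyperparameters; note that numpy parameterises the Gamma by a scale = 1/rate):

```python
import numpy as np

def draw_tau(y, X, mu, n1, d1, rng=None):
    """Step 1(a): draw tau^(k) from Gamma((J + n1)/2, (RSS + d1)/2),
    where RSS = ||y^(k) - X mu^(k)||^2 and J = len(y) is the number
    of pairwise comparisons."""
    rng = np.random.default_rng(rng)
    resid = y - X @ mu
    shape = (len(y) + n1) / 2.0
    rate = (resid @ resid + d1) / 2.0
    return rng.gamma(shape, 1.0 / rate)   # numpy's second argument is the scale

# n = 3 alternatives: pairs (1,2), (1,3), (2,3); mu_3 = 0 as reference
X = np.array([[1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
tau = draw_tau(np.array([0.2, 1.0, 0.9]), X, np.array([1.0, 0.8]),
               n1=1.0, d1=1.0, rng=0)
```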

Step 1(b): Set \({{m}^{({\rm it})} = {m}^{({\rm it}-1)}}\) and draw \({\{{{\varvec \mu}}^{({{\rm it},k})};{k}=1,\ldots,{r}\}}\) from \({N_{{n}-1}\left({\bf MED}^{({k})},{\bf VAR}^{({k})}\right)}\) where

$$ {\bf MED}^{({k})} = {\bf VAR}^{({k})}\left({\tau}^{({\rm it},k)}\left({\bf X}^{\prime}{\bf y}^{({k})}\right)+\sum\limits_{\ell=1}^{{m}^{({\rm it})}}{z}_{{k}\ell}^{({\rm it}-1)}\left({{\varvec \Sigma}}_{G}^{({\rm it}-1,\ell)}\right)^{-1}{{\varvec \mu}}_{G}^{({\rm it}-1,\ell)}\right) $$
$$ {\bf VAR}^{({k})}=\left({{{\varvec \tau}}^{({{\rm it},k})}\left({{\bf X^{\prime}X}} \right)+\sum\limits_{\ell=1}^{{m}^{({\rm it})}}{{z}_{{k}\ell}^{({\rm it}-1)}\left({{{\varvec \Sigma}}_{G}^{({\rm it}-1,\ell)}}\right)^{-1}}}\right)^{-1} $$
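Since \({{\bf z}_k}\) is one-hot, only the allocated component contributes to the sums in MED and VAR. A sketch of Step 1(b) (hypothetical names):

```python
import numpy as np

def draw_mu_k(y, X, tau, z_k, mu_G, Sigma_G, rng=None):
    """Step 1(b): draw mu^(k) ~ N(MED, VAR), where VAR inverts the total
    precision (likelihood term tau X'X plus the allocated component's
    precision) and MED = VAR (tau X'y + Sigma_l^{-1} mu_G_l)."""
    rng = np.random.default_rng(rng)
    prec = tau * X.T @ X
    b = tau * X.T @ y
    for l, zl in enumerate(z_k):
        if zl:  # only the allocated mixture component enters
            Sinv = np.linalg.inv(Sigma_G[l])
            prec = prec + Sinv
            b = b + Sinv @ mu_G[l]
    VAR = np.linalg.inv(prec)
    return rng.multivariate_normal(VAR @ b, VAR)

X = np.array([[1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
mu_k = draw_mu_k(np.array([0.2, 1.0, 0.9]), X, tau=4.0, z_k=[1, 0],
                 mu_G=np.array([[1.0, 0.5], [-1.0, -0.5]]),
                 Sigma_G=np.array([np.eye(2), np.eye(2)]), rng=0)
```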

Step 1(c): Draw \({\left\{{{\varvec \Sigma}}_{G}^{({\rm it},\ell)};\ell =1,\ldots,{m}^{({\rm it})}\right\}}\) from \({\hbox{IW}(n_{\ell},{\bf D}_{\ell})}\) with \({n_{\ell}= {n}_{0}+1+\sum_{{k}=1}^{r} {{z}_{{k}\ell}^{({\rm it}-1)}}}\)

$$ {\bf D}_{\ell}={n}_{0}{\bf D}_{0}+\sum\limits_{{ k}=1}^{r} {{z}_{{k},\ell}^{({\rm it}-1)}\left({{\varvec \mu}}^{({{\rm it},k})}-{{\varvec \mu}}_{ G}^{\left({{\rm it}-1,\ell}\right)}\right)\left({{{\varvec \mu}}^{({ {\rm it},k})}-{{\varvec \mu}}_{G}^{\left( {{\rm it}-1,\ell}\right)}}\right)^{\prime}} $$

Step 1(d): Draw \({\left\{{{\varvec \mu}}_{G}^{({\rm it},\ell)};\ell=1,\ldots,{m}^{({\rm it})}\right\}}\) from \({{N}_{{n}-1}\left({{\bf MED}_{G}^{(\ell)},{\bf VAR}_{G}^{(\ell)}}\right)}\) where

$$ {\bf MED}_{G}^{(\ell)}={\bf VAR}_{G}^{(\ell)}\left( {\sum\limits_{{k}=1}^{r} {{z}_{{k}\ell}^{({\rm it}-1)} \left({\Sigma_{G}^{({\rm it},\ell)}}\right)^{-1}{{\varvec \mu}}^{({{\rm it},k})} }}\right) $$
$$ {\bf VAR}_{G}^{(\ell)}=\left(\sum\limits_{{k}=1}^{r}{z}_{{k}\ell}^{({\rm it}-1)}\left({\Sigma_{G}^{({\rm it},\ell)}}\right)^{-1}+\tau_{\mu_{G}}{\bf I}_{{n}-1}\right)^{-1} $$

Step 1(e): Draw \({\left\{{\bf z}_{k}^{({ \rm it})};{k}=1,\ldots,{r}\right\}}\) from \({\hbox{Mul}(1,{p}_1^{({k})} ,\ldots,{p}_{{m}^{({\rm it})}}^{({k})})}\) where

$$ {p}_\ell^{({k})}\propto{{\varvec \pi}}_\ell^{({\rm it}-1)}\left|{\Sigma_{G}^{({\rm it},\ell)}}\right|^{-1/2}\!\!\!\!\exp \left[{-\frac{1}{2}\left({{{\varvec \mu}}^{({{\rm it},k})}-{{\varvec \mu}}_{G}^{\left({{\rm it},\ell}\right)}}\right)^{\prime}\left({ {{\varvec \Sigma}}_{G}^{({\rm it},\ell)}}\right)^{-1}\!\!\!\!\!\left({{{\varvec \mu}}}^{({{\rm it},k})}-{{\varvec \mu}}_{G}^{\left({{\rm it},\ell}\right)} \right)}\right];\quad\ell=1,\ldots,{m}^{({\rm it})} $$
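In practice the allocation probabilities of Step 1(e) are best computed on the log scale and normalised with a log-sum-exp to avoid underflow. A sketch (hypothetical names):

```python
import numpy as np

def allocation_probs(mu_k, pi, mu_G, Sigma_G):
    """Step 1(e): p_l proportional to
    pi_l |Sigma_l|^{-1/2} exp[-(mu_k - mu_G_l)' Sigma_l^{-1} (mu_k - mu_G_l)/2],
    normalised over l = 1, ..., m."""
    logp = np.empty(len(pi))
    for l in range(len(pi)):
        d = mu_k - mu_G[l]
        _, logdet = np.linalg.slogdet(Sigma_G[l])
        logp[l] = (np.log(pi[l]) - 0.5 * logdet
                   - 0.5 * d @ np.linalg.solve(Sigma_G[l], d))
    p = np.exp(logp - logp.max())   # log-sum-exp normalisation
    return p / p.sum()

p = allocation_probs(np.array([0.9, 0.4]), np.array([0.6, 0.4]),
                     np.array([[1.0, 0.5], [-1.0, -0.5]]),
                     np.array([np.eye(2), np.eye(2)]))
```

`z_k` is then a single multinomial draw with these probabilities.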

Step 1(f): Draw \({{\varvec \pi}^{({\rm it})}=\left(\pi_1^{({\rm it})},\ldots,\pi_{{m}^{({\rm it})}}^{({\rm it})}\right)^{\prime}}\) from a Dirichlet \({\left(1+\sum_{{k}=1}^{r}{z}_{{k}1}^{({\rm it})},\ldots,1+\sum_{{k}=1}^{r}{z}_{{km}^{({\rm it})}}^{({\rm it})}\right)}\) distribution. Set it = it + 1.

Step 2: Global exploration of the model space.

Step 2(a): Set \({{\varvec \theta}^{({\rm it})}={\varvec \theta}^{({\rm it}-1)}}\), \({{\bf z}^{({\rm it})}={\bf z}^{({\rm it}-1)}}\), \({\Omega_{{m}^{({\rm it})}}^{({\rm it})}=\left\{\left(\pi_\ell^{({\rm it})},{{\varvec \mu}}_{G}^{({\rm it},\ell)},{{\varvec \Sigma}}_{G}^{({\rm it},\ell)}\right);\ell=1,\ldots,{m}^{({\rm it})}\right\}}\) and t = 0.

Step 2(b): If \({{m}^{({\rm it})} = 1}\), go to Step 2(d). Otherwise, calculate the death rate for each component of the mixture

$$ \delta_{\ell}=\frac{{L}\left(\Omega_{{m}^{({\rm it})}}^{({\rm it})\,(-\ell)}\right)}{{L}\left(\Omega_{{m}^{({\rm it})}}^{({\rm it})}\right)};\quad \ell = 1,\ldots,{m}^{({\rm it})} $$

where \({{L}(\Omega_{m})=\prod_{k=1}^{r}\left(\sum_{\ell=1}^{m}\pi_\ell\,\varphi\left({{\varvec \mu}}^{({k})};{{\varvec \mu}}_{G}^{(\ell)},{{\varvec \Sigma}}_{G}^{(\ell)}\right)\right)}\) is the likelihood function for the set of points \({\Omega_{m}}\), and \({\varphi({\bf x};{{\varvec \mu}},{{\varvec \Sigma}})}\) is the density function of a \({N_{{n}-1}({{\varvec \mu}},{{\varvec \Sigma}})}\) distribution evaluated at \({{\bf x}\in {\bf R}^{n-1}}\).
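The death rates are likelihood ratios, so they are safest computed as differences of log-likelihoods. A sketch (hypothetical names; the surviving weights are renormalised as in (B.2)):

```python
import numpy as np

def log_mixture_like(mu, pi, mu_G, Sigma_G):
    """log L(Omega) = sum_k log( sum_l pi_l phi(mu^(k); mu_G_l, Sigma_l) )."""
    total = 0.0
    for mu_k in mu:
        dens = 0.0
        for l in range(len(pi)):
            d = mu_k - mu_G[l]
            _, logdet = np.linalg.slogdet(Sigma_G[l])
            q = d @ np.linalg.solve(Sigma_G[l], d)
            dens += pi[l] * np.exp(-0.5 * (len(d) * np.log(2 * np.pi)
                                           + logdet + q))
        total += np.log(dens)
    return total

def death_rates(mu, pi, mu_G, Sigma_G):
    """Step 2(b): delta_l = L(Omega^(-l)) / L(Omega), the ratio taken on
    the log scale to avoid numerical underflow."""
    base = log_mixture_like(mu, pi, mu_G, Sigma_G)
    rates = []
    for l in range(len(pi)):
        keep = [j for j in range(len(pi)) if j != l]
        rates.append(np.exp(log_mixture_like(mu, pi[keep] / (1 - pi[l]),
                                             mu_G[keep], Sigma_G[keep]) - base))
    return np.array(rates)

mu = np.array([[0.9, 0.4], [1.1, 0.6], [-1.0, -0.4]])
delta = death_rates(mu, np.array([0.6, 0.4]),
                    np.array([[1.0, 0.5], [-1.0, -0.5]]),
                    np.array([np.eye(2), np.eye(2)]))
```

The total death rate \({\delta=\sum_\ell\delta_\ell}\) then enters the exponential waiting time of Step 2(c).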

Step 2(c): Simulate the time of the next jump \({t=t + v}\), where v is sampled from an exponential distribution with mean \({\frac{1}{\lambda _0+\delta}}\) where \({\delta= \sum_{\ell=1}^{{m}^{({\rm it})}}{\delta_\ell}}\) is the total death rate of the process. If \({t > \hbox{it}}\), return to Step 1 of the algorithm. Otherwise, go to Step 2(d).

Step 2(d): Simulate the type of jump. Here, we distinguish two cases:

Step 2(d1): If \({{m}^{({\rm it})}= 1}\), a birth happens following the procedure (B.1).

Step 2(d2): If \({{m}^{({\rm it})} > 1}\), a birth happens following the procedure (B.1) with probability \({\frac{\lambda_0}{\lambda_0+\delta}}\), and the death of the \({\ell}\)th component, following the procedure (B.2), happens with probability \({\frac{\delta_\ell}{\lambda_0+\delta}}\). In this case, the values of \({\left\{{\bf z}_{k}^{({\rm it})}:{\bf z}_{k}^{({\rm it}-1)}={\bf e}_\ell\right\}}\) are reassigned by applying Step 1(e).

In both cases, if the jump carried out is a birth, set \({{m}^{({\rm it})}={m}^{({\rm it})}+1}\), and if it is a death, set \({{m}^{({\rm it})}= {m}^{({\rm it})}{-}1}\). Return to Step 2(b).

As a result of the algorithm, a sample from the distribution (A.1) is obtained:

$$ \left\{\left({\varvec \theta}^{({\rm it})},{\bf z}^{({\rm it})}\right)=\left(\left(\left(\tau^{({\rm it},k)},{\varvec \mu}^{({\rm it},k)}\right)_{k=1}^{r},m^{({\rm it})},\left(\pi_\ell^{({\rm it})},{\varvec \mu}_{G}^{({\rm it},\ell)},{\varvec \Sigma}_{G}^{({\rm it},\ell)}\right)_{\ell=1}^{m^{({\rm it})}}\right),{\bf z}^{({\rm it})}\right);\ {\rm it}={\rm it}_0,{\rm it}_0+s,\ldots,{\rm IT}_{\rm max}\right\} $$
(B.3)

where \({\hbox{it}_0}\) is the estimated number of iterations needed for the process to converge to the stationary distribution (A.1) and s is the thinning lag used to remove the serial autocorrelation. These values can be estimated following the usual procedures in the literature (Robert and Casella 1999, Chap. 8).

This sample can be used to draw inferences on the different components of \({{{\varvec \theta}}}\). In particular, an estimate of the distribution G is given by \(E[G({{\varvec \mu}})\vert{\bf y}]\). This expectation can be calculated using the Rao-Blackwell estimator (Casella and Robert 1996) given by

$$ \frac{1}{\left({\hbox{IT}_{\rm max }-\hbox{it}_0+1}\right)} \sum\limits_{{\rm it}={\rm it}_0}^{{\rm IT}_{\rm max}} {\sum\limits_{\ell=1}^{{m}^{({\rm it})}}{{{\pi}}_\ell^{({\rm it})} {N}_{{n}-1}\left({{{\varvec \mu}}_{G}^{ ({\rm it},\ell)}, {{\varvec \Sigma}}_{G}^{({\rm it},\ell)}}\right)}} $$
(B.4)
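The Rao-Blackwell estimator is simply the average, over the retained iterations, of the fitted mixture density. A sketch (hypothetical names; `draws` holds one \({({\varvec \pi},{\varvec \mu}_G,{\varvec \Sigma}_G)}\) triple per retained iteration):

```python
import numpy as np

def mixture_pdf(x, pi, mu_G, Sigma_G):
    """Density of sum_l pi_l N_{n-1}(mu_G_l, Sigma_l) at the point x."""
    val = 0.0
    for l in range(len(pi)):
        d = x - mu_G[l]
        _, logdet = np.linalg.slogdet(Sigma_G[l])
        q = d @ np.linalg.solve(Sigma_G[l], d)
        val += pi[l] * np.exp(-0.5 * (len(x) * np.log(2 * np.pi) + logdet + q))
    return val

def rb_density(x, draws):
    """Rao-Blackwell estimate (B.4): average the mixture density over the
    retained MCMC iterations."""
    return np.mean([mixture_pdf(x, *t) for t in draws])

# one retained draw: a single standard normal component in R^2
draws = [(np.array([1.0]), np.array([[0.0, 0.0]]), np.array([np.eye(2)]))]
g0 = rb_density(np.zeros(2), draws)   # density of N_2(0, I) at the origin
```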

From (B.4) it is possible to calculate estimates of the distributions of the priorities \({\{w_{i}; i\,=\,1,\ldots,n\}}\) (with or without normalisation), as well as the posterior distribution of the preference structures, by means of Monte Carlo methods. From these preference structures it can be inferred whether there is consensus among the decision makers, or whether there are several modes that reveal the existence of several opinion groups. In the latter case, the groups can be located by using the samples of the individual priorities \({\{{{\varvec \mu}}^{({{\rm it},k})}; \hbox{it}=\hbox{it}_{0},\ldots,\hbox{IT}_{\rm max}, k\,=\,1,\ldots,r\}}\) and/or the indicators \({\{{\bf z}^{({\rm it})}; \hbox{it}=\hbox{it}_{0},\ldots,\hbox{IT}_{\rm max}\}}\). This could be carried out using classification algorithms. Alternatively, we could use perceptual maps that reflect the individual preferences for each alternative (see the example described in Section 4).
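As a sketch of this last step, under the log-priority parameterisation of EQ1 (an assumption of this illustration: the reference alternative's log-priority is fixed at 0, and priorities are obtained by exponentiating and normalising), each posterior draw of \({{\varvec \mu}}\) maps to a normalised priority vector:

```python
import numpy as np

def posterior_priorities(mu_draws):
    """Map sampled vectors of the n-1 free log-priorities (reference
    alternative fixed at log-priority 0, as assumed here) to normalised
    priorities with sum_i w_i = 1, then average over the MC sample."""
    mu_draws = np.asarray(mu_draws)
    full = np.hstack([mu_draws, np.zeros((len(mu_draws), 1))])
    w = np.exp(full)                       # back to the multiplicative scale
    w /= w.sum(axis=1, keepdims=True)      # normalise each draw
    return w.mean(axis=0)

w = posterior_priorities([[0.7, 0.1], [0.6, 0.2]])
```

Summaries of the per-draw vectors (modes, spread, group-wise averages) then support the consensus analysis described above.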


Cite this article

Gargallo, P., Moreno-Jiménez, J.M. & Salvador, M. AHP-Group Decision Making: A Bayesian Approach Based on Mixtures for Group Pattern Identification. Group Decis Negot 16, 485–506 (2007). https://doi.org/10.1007/s10726-006-9068-0
