This vignette is a comprehensive tutorial on how to use the mcdabench package in R for conducting Multi-Criteria Decision Analysis (MCDA). It presents the application of various MCDA methods to two simulated datasets, allowing users to understand and benchmark different approaches for decision support.
The latest version of the mcdabench package can be installed from CRAN with the following command:
# install.packages("mcdabench", dep=TRUE)
If you have already installed mcdabench, you can load it into the R working environment with the following command:
library(mcdabench)
As an example, the egrids dataset in the package contains simulated data representing different energy management strategies or system configurations for optimizing smart grids. The dataset includes 12 alternatives and 10 criteria, which evaluate smart grids in terms of efficiency, reliability, environmental compatibility, and cost-effectiveness.
# Load the data set
data(egrids)
# Extract the decision matrix, benefit-cost vector and weights
dmat <- egrids$dmat
bc <- egrids$bcvec
userwei <- egrids$weights
print(egrids)
## $dmat
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## G1 85 92 88 75 0.0050 120 98 0.30 95 1.20
## G2 80 90 85 78 0.0070 115 97 0.40 93 1.50
## G3 82 88 87 70 0.0040 110 95 0.35 96 1.10
## G4 78 85 82 80 0.0060 125 99 0.25 94 1.40
## G5 90 95 92 74 0.0055 118 96 0.33 97 1.30
## G6 88 91 89 76 0.0062 112 94 0.28 92 1.40
## G7 81 89 83 79 0.0071 130 100 0.22 98 1.60
## G8 76 83 80 77 0.0065 127 98 0.29 91 1.35
## G9 89 94 90 73 0.0058 122 97 0.27 90 1.25
## G10 87 90 88 72 0.0049 108 93 0.32 89 1.18
## G11 79 84 81 75 0.0067 117 96 0.31 95 1.50
## G12 77 86 79 71 0.0053 105 91 0.26 88 1.22
##
## $bcvec
## [1] 1 1 1 1 -1 -1 -1 -1 -1 -1
##
## $weights
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## 0.15 0.12 0.10 0.08 0.07 0.13 0.10 0.08 0.12 0.05
MCDA problems employ normalization to rescale different criteria and make them comparable. This process addresses the issue of varying units and scales across criteria by transforming them into a common scale. By doing so, normalization prevents biased evaluations and promotes consistency and fairness in decision-making, which allows for accurate comparisons between the available alternatives.
The calcnormal function in the mcdabench package offers various common normalization techniques. While “maxmin” (also known as max-min or min-max) normalization is typically applied, users can also choose from options such as “enhanced”, “linear”, “logarithmic”, “ratio”, “nonlinear”, “vector”, “sum”, “zavadskas”, and “zscore”, depending on their needs.
The following code snippet demonstrates the normalization of a decision matrix using four techniques available in the calcnormal function. For example, nmatrix1 is obtained using the “maxmin” normalization, which scales values to a range between 0 and 1, while nmatrix2 is normalized with the “sum” technique, where each criterion value is divided by the sum of all values for that criterion. The normalized matrix nmatrix1 is displayed below to give an idea of the result.
nmatrix1 <- calcnormal(dmat, bcvec=bc, type="maxmin")
nmatrix2 <- calcnormal(dmat, bcvec=bc, type="sum")
nmatrix3 <- calcnormal(dmat, bcvec=bc, type="vector")
nmatrix4 <- calcnormal(dmat, bcvec=bc, type="zavadskas")
round(nmatrix1, 3) # MaxMin normalized matrix
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## G1 0.643 0.750 0.692 0.5 0.677 0.40 0.222 0.556 0.3 0.80
## G2 0.286 0.583 0.462 0.8 0.032 0.60 0.333 0.000 0.5 0.20
## G3 0.429 0.417 0.615 0.0 1.000 0.80 0.556 0.278 0.2 1.00
## G4 0.143 0.167 0.231 1.0 0.355 0.20 0.111 0.833 0.4 0.40
## G5 1.000 1.000 1.000 0.4 0.516 0.48 0.444 0.389 0.1 0.60
## G6 0.857 0.667 0.769 0.6 0.290 0.72 0.667 0.667 0.6 0.40
## G7 0.357 0.500 0.308 0.9 0.000 0.00 0.000 1.000 0.0 0.00
## G8 0.000 0.000 0.077 0.7 0.194 0.12 0.222 0.611 0.7 0.50
## G9 0.929 0.917 0.846 0.3 0.419 0.32 0.333 0.722 0.8 0.70
## G10 0.786 0.583 0.692 0.2 0.710 0.88 0.778 0.444 0.9 0.84
## G11 0.214 0.083 0.154 0.5 0.129 0.52 0.444 0.500 0.3 0.20
## G12 0.071 0.250 0.000 0.1 0.581 1.00 1.000 0.778 1.0 0.76
corplot(nmatrix1, xlab="Alternative", ylab="Criterion", title="MaxMin Normalized Matrix")
To effectively visualize and compare the distribution of normalized values in MCDA studies, the mcdabench package offers the boxplotmcda function. This function is specifically designed to display the variability in the columns of MCDA matrices. The code below leverages boxplotmcda to generate boxplots for nmatrix1, nmatrix2, nmatrix3, and nmatrix4 for comparison purpose.
opar <- par(mfrow=c(2,2))
boxplotmcda(nmatrix1, mt = "MaxMin")
boxplotmcda(nmatrix2, mt = "Sum")
boxplotmcda(nmatrix3, mt = "Vector")
boxplotmcda(nmatrix4, mt = "Zavadskas-Turskis")
par(opar)
Above, the figure displays the distribution of normalized values for the decision matrix across 10 criteria (C1 to C10), resulting from the application of four different normalization options: “maxmin”, “sum”, “vector”, and “zavadskas”. Each boxplot visualizes how the values for each criterion are scaled by the respective normalization method.
The common “maxmin” normalization scales the values of each criterion to a common range, typically between 0 and 1. We can observe the central tendency (median) and the spread (variability) of the normalized values for each criterion after this linear scaling. The “sum” technique calculates the values of each criterion by dividing them by the sum of all values for that criterion. As a result, for each criterion, the normalized values will sum up to 1. The “Vector” normalization (also known as Euclidean norm normalization) scales the values of each criterion by dividing them by the Euclidean norm (or length) of the vector of values for that criterion. We can see that the scales and spreads of the normalized values can differ significantly from other methods. The “Zavadskas” normalization is another technique to bring the criteria values to a comparable scale. The resulting scale and distribution characteristics will depend on the specifics of the Zavadskas formula, which aims to provide a balanced and consistent scaling.
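These properties can be checked directly in base R. The sketch below uses a small hypothetical matrix (not the egrids data, and not the package's calcnormal function) to verify that “sum”-normalized columns sum to 1, while “vector”-normalized columns have unit Euclidean norm:

```r
# Hypothetical 3x2 benefit-criteria matrix for illustration
m <- matrix(c(85, 80, 82, 120, 115, 110), nrow = 3,
            dimnames = list(paste0("G", 1:3), c("C1", "C2")))
sum_norm <- sweep(m, 2, colSums(m), "/")          # "sum" normalization
vec_norm <- sweep(m, 2, sqrt(colSums(m^2)), "/")  # "vector" (Euclidean) normalization
colSums(sum_norm)          # each column sums to 1
sqrt(colSums(vec_norm^2))  # each column has unit Euclidean norm
```
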
According to the plots in the figure, “vector” and “zavadskas” normalization tend to preserve and amplify the inherent variability across the criteria more than “maxmin” and “sum”. This results in criteria with initially wider ranges exhibiting broader distributions post-normalization. Consequently, if the analysis aims to emphasize the natural variability of criteria, “vector” and “zavadskas” might be preferred, while “maxmin” and “sum” offer a more balanced comparison. The choice of normalization should thus align with the data characteristics and the analysis objectives.
In Multi-Criteria Decision Analysis (MCDA), criteria values are assigned importance coefficients, commonly known as weights which influence the final decision. These weights are typically determined by experts based on their knowledge and judgment. However, this subjective approach can be challenging due to biases and inconsistencies.
To address this, various objective weight calculation techniques have been developed, providing a systematic and data-driven alternative. Some widely used methods include Equal Weights, GINI, CRITIC, MEREC, Geometric Mean, Entropy, Standard Deviation, Rank Order Centroid (ROC), and Rank-Sum (RS). These techniques help ensure more balanced and transparent weight assignments, enhancing the reliability and fairness of decision-making processes.
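As an illustration of how such objective weights arise from the data alone, the entropy technique can be sketched in a few lines of base R (an illustrative implementation, not the package's calcweights; it assumes a strictly positive decision matrix):

```r
# Entropy weighting sketch: criteria with more dispersed values get larger weights
entropy_weights <- function(m) {
  p <- sweep(m, 2, colSums(m), "/")         # column-wise proportions
  e <- -colSums(p * log(p)) / log(nrow(m))  # entropy per criterion, in [0, 1]
  d <- 1 - e                                # degree of divergence
  d / sum(d)                                # weights normalized to sum to 1
}
m <- matrix(c(85, 80, 82, 120, 115, 110), nrow = 3,
            dimnames = list(paste0("G", 1:3), c("C1", "C2")))
round(entropy_weights(m), 4)
```
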
In the following code snippet, weights are calculated using various methods based on the decision matrix (nmatrix3), which was previously normalized using the “vector” technique.
critwei <- calcweights(nmatrix3, bcvec=bc, type="critic")
entwei <- calcweights(nmatrix3, bcvec=bc, type="entropy")
equwei <- calcweights(nmatrix3, bcvec=bc, type="equal")
giniwei <- calcweights(nmatrix3, bcvec=bc, type="gini")
sdevwei <- calcweights(nmatrix3, bcvec=bc, type="sdev")
merecwei <- calcweights(nmatrix3, bcvec=bc, type="merec")
mpsiwei <- calcweights(nmatrix3, bcvec=bc, type="mpsi")
geomwei <- calcweights(nmatrix3, bcvec=bc, type="geom")
rocwei <- calcweights(nmatrix3, bcvec=bc, type="roc")
rswei <- calcweights(nmatrix3, bcvec=bc, type="rs")
wmatrix <- cbind(Equal=equwei, Merec=merecwei, Geometric=geomwei, Mpsi=mpsiwei,
Gini=giniwei, Critic=critwei, Entropy=entwei, StdDev=sdevwei, Rs=rswei, Roc=rocwei)
print(round(wmatrix,3))
## Equal Merec Geometric Mpsi Gini Critic Entropy StdDev Rs Roc
## C1 0.1 0.098 0.086 0.128 0.142 0.066 0.168 0.079 0.182 0.341
## C2 0.1 0.099 0.094 0.106 0.103 0.051 0.089 0.057 0.164 0.171
## C3 0.1 0.099 0.091 0.111 0.119 0.057 0.119 0.066 0.145 0.114
## C4 0.1 0.100 0.098 0.103 0.101 0.090 0.084 0.056 0.127 0.085
## C5 0.1 0.103 0.125 0.094 0.152 0.180 0.192 0.210 0.109 0.068
## C6 0.1 0.100 0.098 0.100 0.064 0.079 0.034 0.088 0.091 0.057
## C7 0.1 0.102 0.122 0.085 0.026 0.032 0.006 0.036 0.073 0.049
## C8 0.1 0.099 0.091 0.074 0.151 0.273 0.200 0.212 0.055 0.043
## C9 0.1 0.100 0.101 0.106 0.033 0.046 0.009 0.046 0.036 0.038
## C10 0.1 0.099 0.094 0.094 0.109 0.125 0.099 0.150 0.018 0.034
Below, the profile plot and table illustrate the weights assigned to the ten criteria (C1-C10) by the various objective weighting methods following “vector” normalization. The “Equal” method provides a uniform baseline. Methods like “Merec” and “Geometric mean” yield relatively consistent weights across criteria. The GINI coefficient appears to follow a more moderate approach, exhibiting some differentiation in weights but generally less extreme than “Critic”, “Entropy”, “StdDev”, and “Roc”.
parcorplot(wmatrix, xl="Weighting Methods", yl="Weight", lt="Criteria")
corplot(wmatrix, xlab="Weighting Methods", ylab="Weight", title="Weights",
        colpal=c("gray","dodgerblue", "orange"))
Methods such as “Critic” and “StdDev” emphasize criteria with higher variability, while “Entropy” downweights less diverse criteria. “Roc” shows a highly skewed distribution. GINI, by measuring inequality in the distribution of criterion values, can highlight criteria where alternatives show greater disparity. This makes it a potential middle-ground option when some differentiation is desired but extreme weighting based solely on variance or information content might not be appropriate. The choice of method, including GINI, should depend on the specific MCDA problem and the desired balance in reflecting criteria differences.
Some MCDA methods utilize specific parameters unique to their algorithms. For example, VIKOR’s parameter v represents the weight of the strategy for maximum group utility, a value between 0 and 1. The values of these parameters are passed to the methodbench function as a list of parameters within the function’s params argument. If a parameter list is not provided, default values are used. To understand the specific parameters for each MCDA method, one should consult the respective function's help documentation.
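To illustrate the role of v, the sketch below computes a VIKOR-style compromise index Q in base R, blending group utility S with individual regret R (an illustrative sketch assuming an already normalized, benefit-oriented matrix; not the package's vikor implementation):

```r
# VIKOR-style Q index: v weighs group utility S against individual regret R
vikor_q <- function(nm, w, v = 0.5) {
  d <- sweep(1 - nm, 2, w, "*")   # weighted distances to the ideal value 1
  S <- rowSums(d)                 # group utility per alternative
  R <- apply(d, 1, max)           # individual (worst-criterion) regret
  v * (S - min(S)) / (max(S) - min(S)) +
    (1 - v) * (R - min(R)) / (max(R) - min(R))
}
set.seed(1)
nm <- matrix(runif(12), nrow = 4)           # hypothetical normalized matrix
q <- vikor_q(nm, w = rep(1/3, 3), v = 0.5)  # lower Q indicates a better compromise
rank(q)
```
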
paramlist <- list(
aras = list(),
aroman = list(lambda = 0.5, beta = 0.5),
codas = list(thr = 0.1),
cocoso = list(lambda = 0.5),
electre4 = list(p = 0.6, q = 0.4, v = 0.1),
fuca = list(),
gra = list(idesol = NULL, grdmethod = "sum", rho = 0.5),
mabac = list(),
macont6 = list(p = 0.5, q = 0.5, delta = 0.5, theta = 0.5),
marcos = list(),
mairca = list(),
maut = list(utilfuncs = NULL, normutil = TRUE, ss = 1),
mavt = list(valfuncs = NULL, normvals = TRUE, ss = 1),
megan = list(normethod = "maxmin", thr = 0, tht = "sdev"),
megan2 = list(normethod = "ratio", thr = NULL, tht = "p25"),
moora = list(),
ocra = list(),
oreste = list(domplot = FALSE),
promethee1 = list(),
promethee2 = list(),
promethee3 = list(strict = FALSE),
promethee4 = list(alpha = 0.2),
promethee5 = list(g = 0, l = 100),
promethee6 = list(varmethod = "abs_sum"),
ram = list(normethod = "sum"),
rov = list(normethod = "maxmin"),
smart = list(),
topsis = list(normethod = "maxmin"),
vikor = list(normethod = "maxmin", v = 0.5),
waspas = list(normethod = "linear", v = 0.5),
wpm = list(normethod = "vector")
)
To establish a preference ranking and make a decision, any suitable MCDA method can be used, and the MCDA field offers a multitude of such methods. One can utilize methods that have demonstrated successful outcomes in the literature. However, employing the methodbench function to benchmark across various methods can contribute to a more robust decision-making process.
This function currently allows for working with methods such as “aras”, “aroman”, “cocoso”, “codas”, “edas”, “elect4”, “fuca”, “gra”, “mabac”, “macont”, “mairca”, “marcos”, “maut”, “mavt”, “megan”, “megan2”, “moora”, “promethee1”, “promethee2”, “promethee3”, “promethee4”, “promethee5”, “promethee6”, “ram”, “rov”, “smart”, “topsis”, “vikor”, “waspas”, “wpm”, and “wsm”.
Below, the code chunk runs a comparison of multiple MCDA methods using dmat, the original decision matrix, considering bcvec, the benefit-cost vector, and equal criteria weights. The methodbench function from the mcdabench package takes a list of method names in methodlist and applies each of them to the data.
methodlist <- c("aras", "edas", "elect4", "fuca", "gra", "mabac", "codas", "marcos", "megan",
"moora", "promt2", "smart", "topsis", "vikor", "waspas")
equwei <- calcweights(dmat, bcvec = bc, type = "equal")
resmcda <- methodbench(dmatrix = dmat, bcvec = bc, weights = equwei,
mcdm = methodlist, params = paramlist)
The following code chunk first displays the structure of the resmcda object using the str function, which provides a summary of the result components built with the methodbench function above. Then, it extracts rankmat, the rank matrix in the resmcda object. This matrix contains the ranking of the alternatives according to each of the MCDA methods used in the comparison. Finally, rankmat is displayed to decide the preferences.
str(resmcda) # Structure of benchmarking object
## List of 3
## $ dmatrix: num [1:12, 1:10] 85 80 82 78 90 88 81 76 89 87 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:12] "G1" "G2" "G3" "G4" ...
## .. ..$ : chr [1:10] "C1" "C2" "C3" "C4" ...
## $ weights: Named num [1:10] 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
## ..- attr(*, "names")= chr [1:10] "C1" "C2" "C3" "C4" ...
## $ rankmat: num [1:14, 1:12] 5 9 5 4 6 5 5 5 5 7 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:14] "ARAS" "CODAS" "EDAS" "FUCA" ...
## .. ..$ : chr [1:12] "G1" "G2" "G3" "G4" ...
rankmat <- resmcda$rankmat # Ranking matrix
print(rankmat)
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## ARAS 5 12 3 8 7 6 10 9 2 1 11 4
## CODAS 9 1 11 8 3 5 2 6 7 10 4 12
## EDAS 5 12 2 8 7 6 9 10 4 1 11 3
## FUCA 4 3 10 7 2 6 1 9 5 11 8 12
## GRA 6 3 10 5 2 7 1 9 4 11 8 12
## MABAC 5 9 7 8 4 3 11 10 2 1 12 6
## MARCOS 5 9 7 8 4 3 11 10 2 1 12 6
## MEGAN 5 3 11 7 2 6 1 9 4 10 8 12
## MOORA 5 12 3 8 7 6 9 10 2 1 11 4
## PROMT2 7 6 8 10 3 2 11 12 4 1 9 5
## SMART 5 9 7 8 4 3 11 10 2 1 12 6
## TOPSIS 5 9 7 8 4 2 10 11 3 1 12 6
## VIKOR 4 10 7 8 5 2 12 11 1 3 9 6
## WASPAS 5 12 2 8 7 6 9 10 3 1 11 4
The following code chunk visualizes rankmat, the rank matrix, using the rankheatmap function, which creates a heatmap of the ranks produced by the benchmarked MCDA methods, allowing for a visual comparison of how differently these methods rank the alternatives. The arguments colpal=1 specifies a color palette, cellnotes=TRUE displays the rank values within the heatmap cells, and tcol="black" sets the color of the cell notes. The parcorplot function of the mcdabench package generates a parallel coordinate plot of the ranks, providing another way to compare the ranking patterns across the different MCDA methods.
rankheatmap(rankmat, colpal=1, cellnotes=TRUE, tcol="black")
A parallel coordinate plot is useful for observing whether the rankings of alternatives change in parallel across different MCDA algorithms. This makes it easy to see which methods alter the ranking of the alternatives.
corplot(rankmat, xlab="MCDM Methods", ylab="Alternatives", title="MCDA Methods", colpal=c("gray","green","dodgerblue"))
parcorplot(rankmat, xl="Alternatives", yl="Ranks", lt="MCDA Methods")
In the following code snippet, rankcompare uses the previously created rankmat to compare the ranking results returned by the MCDA methods. In this function call, nperms represents the number of permutations, nboot denotes the number of bootstrap resampling iterations, entropyopt specifies the entropy calculation method to be applied, alpha indicates the significance level for statistical tests, and padjmethod defines the p-value adjustment method to be used; if no adjustment is desired, 'none' should be entered.
Through the comparisons, correlations and similarities between the ranks found by the methods are analyzed, along with statistical tests on rank differences and rank entropy variations.
In this process, the rescomp object returns the following results:
src, the Spearman rank correlation matrix; wsrs, the WS similarity matrix; rangesim, the rank range similarity matrix; wilcox, the Wilcoxon rank sum test matrix; entper, the rank entropy matrix computed with permutations; and entboot, the rank entropy matrix computed with bootstrap resampling. The results of these comparisons are shown below:
rescomp <- rankcompare(rankmat, nperms = 100, nboot=100, entropyopt = "jsd",
alpha = 0.05, padjmethod = "fdr", biplot=FALSE)
print(rescomp$src) # Spearman rank correlations matrix
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 1 ** *** *** *** *** * *** ** ** ***
## CODAS -0.78 1 ** ** ** ** ** **
## EDAS 0.97 -0.82 1 ** ** *** ** ** * ***
## FUCA -0.47 0.78 -0.50 1 *** ***
## GRA -0.48 0.78 -0.52 0.97 1 ***
## MABAC 0.83 -0.41 0.76 -0.13 -0.16 1 *** ** *** *** *** *** **
## MARCOS 0.83 -0.41 0.76 -0.13 -0.16 1.00 1 ** *** *** *** *** **
## MEGAN -0.43 0.80 -0.48 0.99 0.97 -0.06 -0.06 1
## MOORA 0.99 -0.76 0.98 -0.41 -0.43 0.82 0.82 -0.38 1 * ** ** ** ***
## PROMT2 0.58 -0.16 0.55 -0.08 -0.13 0.87 0.87 -0.01 0.59 1 *** *** ***
## SMART 0.83 -0.41 0.76 -0.13 -0.16 1.00 1.00 -0.06 0.82 0.87 1 *** *** **
## TOPSIS 0.79 -0.37 0.75 -0.08 -0.13 0.99 0.99 -0.02 0.80 0.89 0.99 1 *** **
## VIKOR 0.78 -0.42 0.70 -0.14 -0.19 0.93 0.93 -0.09 0.78 0.83 0.93 0.92 1 **
## WASPAS 0.99 -0.78 0.99 -0.45 -0.47 0.78 0.78 -0.43 0.99 0.56 0.78 0.77 0.73 1
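Any entry of the src matrix can be reproduced from the corresponding rows of rankmat with base R's cor(). For example, using the ARAS and EDAS rank vectors shown earlier:

```r
# Rank vectors copied from the rankmat output above
aras <- c(5, 12, 3, 8, 7, 6, 10, 9, 2, 1, 11, 4)
edas <- c(5, 12, 2, 8, 7, 6, 9, 10, 4, 1, 11, 3)
cor(aras, edas, method = "spearman")  # about 0.972, printed as 0.97 above
```
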
print(rescomp$wsrs) # WS similarity matrix
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 0.26 0.93 0.30 0.32 0.92 0.92 0.36 1.00 0.85 0.92 0.89 0.79 0.96
## CODAS 0.17 0.19 0.82 0.82 0.31 0.31 0.82 0.19 0.47 0.31 0.33 0.24 0.19
## EDAS 0.95 0.19 0.20 0.20 0.81 0.81 0.23 0.95 0.80 0.81 0.81 0.70 0.98
## FUCA 0.32 0.85 0.37 0.97 0.38 0.38 0.99 0.36 0.43 0.38 0.43 0.30 0.36
## GRA 0.30 0.85 0.37 0.98 0.37 0.37 0.99 0.35 0.45 0.37 0.42 0.28 0.36
## MABAC 0.92 0.39 0.87 0.39 0.40 1.00 0.46 0.92 0.91 1.00 0.96 0.86 0.90
## MARCOS 0.92 0.39 0.87 0.39 0.40 1.00 0.46 0.92 0.91 1.00 0.96 0.86 0.90
## MEGAN 0.32 0.85 0.38 0.99 0.99 0.38 0.38 0.37 0.45 0.38 0.43 0.29 0.37
## MOORA 1.00 0.26 0.93 0.30 0.32 0.92 0.92 0.36 0.85 0.92 0.89 0.79 0.96
## PROMT2 0.80 0.44 0.81 0.38 0.36 0.93 0.93 0.43 0.80 0.93 0.96 0.84 0.81
## SMART 0.92 0.39 0.87 0.39 0.40 1.00 1.00 0.46 0.92 0.91 0.96 0.86 0.90
## TOPSIS 0.85 0.41 0.85 0.37 0.36 0.96 0.96 0.44 0.85 0.96 0.96 0.87 0.86
## VIKOR 0.80 0.48 0.70 0.57 0.58 0.89 0.89 0.62 0.80 0.80 0.89 0.87 0.75
## WASPAS 0.96 0.22 0.98 0.24 0.25 0.83 0.83 0.28 0.96 0.80 0.83 0.84 0.72
print(rescomp$rangesim) # Rank range similarity matrix
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 100.00 0.00 66.67 0.00 0.00 66.67 66.67 0.00 100.00 33.33 66.67 66.67 66.67 100.00
## CODAS 0.00 100.00 0.00 100.00 100.00 0.00 0.00 100.00 0.00 33.33 0.00 0.00 0.00 0.00
## EDAS 66.67 0.00 100.00 0.00 0.00 33.33 33.33 0.00 66.67 33.33 33.33 33.33 33.33 66.67
## FUCA 0.00 100.00 0.00 100.00 100.00 0.00 0.00 100.00 0.00 33.33 0.00 0.00 0.00 0.00
## GRA 0.00 100.00 0.00 100.00 100.00 0.00 0.00 100.00 0.00 33.33 0.00 0.00 0.00 0.00
## MABAC 66.67 0.00 33.33 0.00 0.00 100.00 100.00 0.00 66.67 66.67 100.00 100.00 100.00 66.67
## MARCOS 66.67 0.00 33.33 0.00 0.00 100.00 100.00 0.00 66.67 66.67 100.00 100.00 100.00 66.67
## MEGAN 0.00 100.00 0.00 100.00 100.00 0.00 0.00 100.00 0.00 33.33 0.00 0.00 0.00 0.00
## MOORA 100.00 0.00 66.67 0.00 0.00 66.67 66.67 0.00 100.00 33.33 66.67 66.67 66.67 100.00
## PROMT2 33.33 33.33 33.33 33.33 33.33 66.67 66.67 33.33 33.33 100.00 66.67 66.67 66.67 33.33
## SMART 66.67 0.00 33.33 0.00 0.00 100.00 100.00 0.00 66.67 66.67 100.00 100.00 100.00 66.67
## TOPSIS 66.67 0.00 33.33 0.00 0.00 100.00 100.00 0.00 66.67 66.67 100.00 100.00 100.00 66.67
## VIKOR 66.67 0.00 33.33 0.00 0.00 100.00 100.00 0.00 66.67 66.67 100.00 100.00 100.00 66.67
## WASPAS 100.00 0.00 66.67 0.00 0.00 66.67 66.67 0.00 100.00 33.33 66.67 66.67 66.67 100.00
print(rescomp$wilcox) # Wilcoxon rank sum test matrix
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## CODAS 31.50 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## EDAS 7.50 33.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## FUCA 28.50 32.50 35.50 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## GRA 34.00 33.00 34.00 5.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## MABAC 18.00 33.50 19.00 39.00 39.00 NaN 1.00 1.00 1.00 NaN 1.00 1.00 1.00
## MARCOS 18.00 33.50 19.00 39.00 39.00 0.00 1.00 1.00 1.00 NaN 1.00 1.00 1.00
## MEGAN 24.00 22.50 22.50 5.00 7.50 32.50 32.50 1.00 1.00 1.00 1.00 1.00 1.00
## MOORA 1.50 35.00 3.00 31.00 37.50 13.00 13.00 26.50 1.00 1.00 1.00 1.00 1.00
## PROMT2 32.50 33.00 28.00 36.50 31.00 26.50 26.50 32.50 32.50 1.00 1.00 1.00 1.00
## SMART 18.00 33.50 19.00 39.00 39.00 0.00 0.00 33.50 15.00 28.50 1.00 1.00 1.00
## TOPSIS 18.50 32.00 22.50 39.00 38.00 5.00 5.00 32.50 21.50 29.00 5.00 1.00 1.00
## VIKOR 31.50 34.00 33.00 35.50 41.50 21.00 21.00 42.00 32.50 28.00 21.00 14.00 1.00
## WASPAS 5.00 34.00 1.50 31.00 37.50 16.50 16.50 26.50 1.50 33.50 16.50 18.00 32.50
print(rescomp$entper) # Rank entropy matrix with permutations
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 0.03 1.00 0.23 0.15 1.00 1.00 0.14 1.00 0.99 1.00 1.00 0.99 1.00
## CODAS 1.32 0.04 1.00 0.99 0.24 0.21 1.00 0.05 0.50 0.17 0.24 0.26 0.07
## EDAS 0.04 1.34 0.12 0.14 1.00 1.00 0.17 1.00 0.99 1.00 1.00 1.00 1.00
## FUCA 1.13 0.22 1.16 1.00 0.56 0.46 1.00 0.18 0.52 0.48 0.50 0.49 0.23
## GRA 1.12 0.22 1.17 0.03 0.53 0.47 1.00 0.19 0.56 0.46 0.34 0.43 0.28
## MABAC 0.13 1.08 0.21 0.91 0.92 1.00 0.65 1.00 1.00 1.00 1.00 1.00 0.99
## MARCOS 0.13 1.08 0.21 0.91 0.92 0.00 0.56 1.00 1.00 1.00 1.00 1.00 1.00
## MEGAN 1.09 0.21 1.15 0.01 0.02 0.86 0.86 0.30 0.62 0.53 0.61 0.58 0.22
## MOORA 0.00 1.30 0.04 1.10 1.09 0.13 0.13 1.06 0.99 1.00 1.00 1.00 1.00
## PROMT2 0.29 0.91 0.33 0.87 0.90 0.09 0.09 0.83 0.29 1.00 1.00 1.00 0.99
## SMART 0.13 1.08 0.21 0.91 0.92 0.00 0.00 0.86 0.13 0.09 1.00 1.00 1.00
## TOPSIS 0.17 1.05 0.23 0.89 0.92 0.02 0.02 0.85 0.17 0.06 0.02 1.00 1.00
## VIKOR 0.25 1.08 0.38 0.90 0.92 0.12 0.12 0.86 0.25 0.23 0.12 0.15 0.99
## WASPAS 0.02 1.32 0.01 1.12 1.13 0.18 0.18 1.11 0.02 0.32 0.18 0.21 0.33
print(rescomp$entboot) # Rank entropy matrix with bootstrap
## ARAS CODAS EDAS FUCA GRA MABAC MARCOS MEGAN MOORA PROMT2 SMART TOPSIS VIKOR WASPAS
## ARAS 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## CODAS -0.318 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## EDAS 0.961 -0.342 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## FUCA -0.125 0.775 -0.158 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## GRA -0.124 0.775 -0.173 0.974 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## MABAC 0.873 -0.083 0.786 0.092 0.078 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## MARCOS 0.873 -0.083 0.786 0.092 0.078 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## MEGAN -0.093 0.786 -0.147 0.991 0.984 0.137 0.137 1.000 1.000 1.000 1.000 1.000 1.000
## MOORA 0.997 -0.302 0.964 -0.095 -0.094 0.871 0.871 -0.063 1.000 1.000 1.000 1.000 1.000
## PROMT2 0.708 0.090 0.674 0.130 0.100 0.914 0.914 0.173 0.710 1.000 1.000 1.000 1.000
## SMART 0.873 -0.083 0.786 0.092 0.078 1.000 1.000 0.137 0.871 0.914 1.000 1.000 1.000
## TOPSIS 0.827 -0.054 0.771 0.112 0.082 0.982 0.982 0.148 0.829 0.943 0.982 1.000 1.000
## VIKOR 0.749 -0.084 0.622 0.101 0.080 0.877 0.877 0.143 0.746 0.765 0.877 0.851 1.000
## WASPAS 0.982 -0.321 0.991 -0.125 -0.132 0.820 0.820 -0.106 0.985 0.681 0.820 0.795 0.672
rankheatmap(rescomp$rangesim, colpal=1, cellnotes=TRUE, tcol="black")
sccmatrix <- rankspearman(rankmat)$cormat
corplot(sccmatrix, xlab="MCDM Methods", ylab="MCDM Methods", title="Spearman Correlation Matrix")
Sensitivity analysis in MCDA evaluates how changes in input parameters impact the ranking or decision outcome. It helps assess the robustness and reliability of the decision model by examining the effect of variations in criteria weights or preference settings. Sensitivity analysis is typically performed to verify that the final ranking remains stable under such variations.
In the following code snippet, the sensana function performs a sensitivity analysis on rankmat. The obtained results are stored in ressens and displayed. Sensitivity analysis below assesses the stability of ranking outcomes across different methods, helping to determine which approaches produce consistent results and which lead to significant variations.
ressens <- sensana(rankmat)
print(ressens$stabtable) # Stability
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## SD 1.28 3.92 3.19 1.05 1.95 1.87 4.34 1.38 1.58 4.40 2.32 3.42
## CRV 0.24 0.50 0.47 0.13 0.45 0.42 0.56 0.14 0.49 1.14 0.23 0.49
## SD2 0.08 0.30 0.22 0.06 0.15 0.16 0.34 0.09 0.11 0.35 0.17 0.26
## SRSI 0.69 0.77 0.69 0.54 0.77 0.69 0.77 0.77 0.92 0.62 0.77 0.69
## RSI 0.71 0.64 0.64 0.79 0.71 0.71 0.64 0.71 0.64 0.79 0.71 0.71
print(ressens$sensscores) # Sensitivity score
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## 0.64 4.14 4.07 0.50 2.64 1.64 3.14 1.14 1.36 2.86 1.71 3.14
The stability analysis in stabtable reports measures that indicate how much the rankings of the alternatives fluctuate across the different MCDA methods:
Standard Deviation (SD): A higher SD indicates that the ranking of a particular alternative is more volatile across the different methods.
G10 (4.40), G7 (4.34), and G2 (3.92) show significant variation, meaning that their assigned rankings shift noticeably from method to method. G4 (1.05), G1 (1.28), and G8 (1.38) are much more stable, implying that their rankings remain relatively unchanged.
Coefficient of Rank Variation (CRV): This measures proportional ranking fluctuation. G10 (1.14) shows the highest sensitivity to the choice of method, while G4 (0.13) and G8 (0.14) show the lowest volatility, meaning they receive more consistent ranking outcomes.
Ranking Stability Index (RSI): Higher RSI values indicate robust ranking behavior; lower values suggest greater instability. G4 and G10 (0.79) exhibit the strongest stability, while G2, G3, G7, and G9 (0.64) are somewhat more sensitive, meaning their rankings can shift depending on the method applied.
The Sensitivity Scores (sensscores) represent the degree to which an alternative's ranking changes when input variations occur: G2 (4.14) and G3 (4.07) are the most sensitive alternatives, indicating that their rankings are highly affected by the choice of method, whereas G4 (0.50) and G1 (0.64) receive more stable rankings.
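These variability measures can be approximated directly from a rank matrix in base R. The sketch below assumes (an assumption, not confirmed by the package) that SD is the ordinary sample standard deviation of each alternative's ranks across methods, demonstrated on a small hypothetical rank matrix:

```r
# Hypothetical 3-method x 3-alternative rank matrix (rows = methods)
rankmat_demo <- rbind(M1 = c(1, 2, 3), M2 = c(3, 2, 1), M3 = c(1, 2, 3))
sds <- apply(rankmat_demo, 2, sd)  # rank SD per alternative (column)
round(sds, 2)  # the alternative ranked 2 by every method has SD 0
```
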
The Spearman correlation matrix (src) provides insights into how similar or different the rankings produced by the various MCDA methods are:
High positive correlations (\(\geq 0.90\)) between methods such as ARAS, EDAS, MOORA, and WASPAS indicate strong agreement in their rankings. These methods likely follow similar ranking principles or weighting strategies.
CODAS (-0.78 with ARAS) and FUCA (-0.47 with ARAS) show negative or weak correlations with this group, suggesting they provide alternative ranking perspectives compared to conventional MCDA methods.
GRA and MEGAN likewise demonstrate low or negative correlations with many methods, implying that they apply distinct evaluation techniques resulting in different rankings.
This analysis summarizes how different MCDA methods prioritize alternatives by applying various ranking aggregation techniques. It helps determine consistency in rankings across different methods and identifies which alternatives are preferred most frequently. The rankaggregate function is used to combine multiple rankings, in this case, from the rankmat (which contains rankings from the weight sensitivity analysis). The topk parameter is set to specifically consider the top 3 alternatives from each individual ranking when performing aggregation methods.
respref <- rankaggregate(rankmat, topk=3)
print(respref$preference_ranking)
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## TOPK3 10.5 6.0 6 10.5 4 3 6.0 10.5 2 1 10.5 8
## RANKSUM 5.0 10.0 6 9.0 3 4 8.0 11.0 1 2 12.0 7
## MEDIAN 4.0 9.0 7 8.0 3 5 10.0 11.0 2 1 12.0 6
## BORDACNT 5.0 10.0 6 9.0 3 4 8.0 11.0 1 2 12.0 7
## COPELAND 5.0 9.5 6 8.0 4 3 9.5 11.0 2 1 12.0 7
## KEMYNG 5.0 9.5 6 8.0 4 3 9.5 11.0 2 1 12.0 7
## MARKOV 5.0 8.0 6 10.0 4 3 7.0 12.0 2 1 11.0 9
print(respref$preference_table)
## Method Outranking
## 1 TOPK3 G10 > G9 > G6 > G5 > G2 = G3 = G7 > G12 > G1 = G4 = G8 = G11
## 2 RANKSUM G9 > G10 > G5 > G6 > G1 > G3 > G12 > G7 > G4 > G2 > G8 > G11
## 3 MEDIAN G10 > G9 > G5 > G1 > G6 > G12 > G3 > G4 > G2 > G7 > G8 > G11
## 4 BORDACNT G9 > G10 > G5 > G6 > G1 > G3 > G12 > G7 > G4 > G2 > G8 > G11
## 5 COPELAND G10 > G9 > G6 > G5 > G1 > G3 > G12 > G4 > G2 = G7 > G8 > G11
## 6 KEMYNG G10 > G9 > G6 > G5 > G1 > G3 > G12 > G4 > G2 = G7 > G8 > G11
## 7 MARKOV G10 > G9 > G6 > G5 > G1 > G3 > G7 > G2 > G12 > G4 > G11 > G8
The preference_ranking table provides the aggregated ranks for each alternative using various rank aggregation methods.
In the TOPK3 row, G1, G4, G8, and G11 share a rank of 10.5, while G2, G3, and G7 share rank 6.0. G10 consistently holds rank 1 for TOPK3, MEDIAN, COPELAND, KEMYNG, and MARKOV, but rank 2 for RANKSUM and BORDACNT. G9 is often ranked 2 (TOPK3, MEDIAN, COPELAND, KEMYNG, MARKOV) or 1 (RANKSUM, BORDACNT). G11 consistently appears as the lowest-ranked alternative (rank 12) across most methods, except for MARKOV (rank 11). This variability among aggregation methods highlights that while the underlying method rankings might be robust (as seen in the sensitivity analysis), the final consensus can depend on the specific aggregation method applied.
The preference_table explicitly describes the outranking relationships derived by each aggregation method.
G10 and G9 are consistently identified as the leading alternatives, appearing at the very top of most aggregated preference lists. Alternatives like G5 and G6 also generally perform well, often appearing in the top ranks after G10 and G9. G11 consistently appears at the very end of the aggregated rankings, indicating it is the least preferred alternative across most aggregation methods. G8, G2, G4, and G7 also frequently appear in the lower half of the rankings.
Many aggregation methods show ties or close groupings for alternatives that perform similarly. For example, in TOPK3, G2, G3, and G7 are tied, as are G1, G4, G8, and G11. This indicates that while a strict hierarchy might not always be present, groups of alternatives with comparable performance levels are identified.
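The Borda-count idea behind the BORDACNT row can be sketched in base R: each method awards n - rank points to an alternative, and the totals are re-ranked (an illustrative variant; the exact scoring used by rankaggregate may differ):

```r
# Borda-count aggregation sketch (rows = methods, columns = alternatives)
borda_aggregate <- function(rmat) {
  points <- colSums(ncol(rmat) - rmat)   # n - rank points summed over methods
  rank(-points, ties.method = "average") # higher total points = better rank
}
# Hypothetical 3-method x 3-alternative rank matrix
rmat <- rbind(M1 = c(1, 2, 3), M2 = c(2, 1, 3), M3 = c(1, 3, 2))
borda_aggregate(rmat)  # aggregated ranks 1, 2, 3
```
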
fod <- respref$flowdominance
if (!is.null(fod) && "BORDACNT" %in% rownames(fod)) {
flowplot(
fod["BORDACNT", ],
colpal = terrain.colors(ncol(fod)),
txtcol = "black",
orientation = "vertical"
)
}