Dominance-based rough set approach

The dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński.[1][2][3] The main change compared with classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits dealing with the inconsistencies typical of the consideration of criteria and preference-ordered decision classes.

Multicriteria classification (sorting)

Multicriteria classification (sorting) is one of the problems considered within MCDA and can be stated as follows: given a set of objects evaluated by a set of criteria (attributes with preference-ordered domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of an object's evaluations on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem of classification; however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference ordered. The problem of multicriteria classification is also referred to as the ordinal classification problem with monotonicity constraints and often appears in real-life applications where the ordinal and monotone properties follow from the domain knowledge about the problem.

As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes: bad, medium and good (notice that class good is preferred to medium and medium is preferred to bad). Each student is described by three criteria: the level in Physics, Mathematics and Literature, each taking one of three possible values: bad, medium and good. The criteria are preference-ordered, and improving the level in one of the subjects should not result in a worse global evaluation (class).

As a more serious example, consider the classification of bank clients, from the viewpoint of bankruptcy risk, into classes safe and risky. This may involve such characteristics as "return on equity (ROE)", "return on investment (ROI)" and "return on sales (ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk. Thus, these attributes are criteria. Neglecting this information in knowledge discovery may lead to wrong conclusions.

Data representation

Decision table

In DRSA, data are often presented using a particular form of decision table. Formally, a DRSA decision table is a 4-tuple $S = \langle U, Q, V, f \rangle$, where $U$ is a finite set of objects, $Q$ is a finite set of criteria, $V = \bigcup_{q \in Q} V_q$, where $V_q$ is the domain of the criterion $q$, and $f \colon U \times Q \to V$ is an information function such that $f(x,q) \in V_q$ for every $(x,q) \in U \times Q$. The set $Q$ is divided into condition criteria (a set $C \neq \emptyset$) and the decision criterion (class) $d$. Note that $f(x,q)$ is the evaluation of object $x$ on criterion $q \in C$, while $f(x,d)$ is the class assignment (decision value) of the object. An example of a decision table is shown in Table 1 below.
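
For readers who prefer a programmatic view, such a decision table can be encoded as a plain data structure; the following Python sketch encodes Table 1 from the Example section below. The integer encoding of the ordinal grades (bad = 0, medium = 1, good = 2) and all identifiers are assumptions made here for illustration, not part of DRSA itself.

```python
# A sketch of a DRSA decision table S = <U, Q, V, f>.
# The integer grade encoding is an assumption made for illustration only.
BAD, MEDIUM, GOOD = 0, 1, 2

# Objects with their evaluations on the condition criteria and the decision.
U = {
    "x1":  {"Math": MEDIUM, "Physics": MEDIUM, "Literature": BAD,    "score": BAD},
    "x2":  {"Math": GOOD,   "Physics": MEDIUM, "Literature": BAD,    "score": MEDIUM},
    "x3":  {"Math": MEDIUM, "Physics": GOOD,   "Literature": BAD,    "score": MEDIUM},
    "x4":  {"Math": BAD,    "Physics": MEDIUM, "Literature": GOOD,   "score": BAD},
    "x5":  {"Math": BAD,    "Physics": BAD,    "Literature": MEDIUM, "score": BAD},
    "x6":  {"Math": BAD,    "Physics": MEDIUM, "Literature": MEDIUM, "score": MEDIUM},
    "x7":  {"Math": GOOD,   "Physics": GOOD,   "Literature": BAD,    "score": GOOD},
    "x8":  {"Math": GOOD,   "Physics": MEDIUM, "Literature": MEDIUM, "score": MEDIUM},
    "x9":  {"Math": MEDIUM, "Physics": MEDIUM, "Literature": GOOD,   "score": GOOD},
    "x10": {"Math": GOOD,   "Physics": MEDIUM, "Literature": GOOD,   "score": GOOD},
}
C = ("Math", "Physics", "Literature")  # condition criteria
d = "score"                            # decision criterion (class)

def f(x, q):
    """Information function f(x, q): the evaluation of object x on criterion q."""
    return U[x][q]
```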

Outranking relation

It is assumed that the domain of a criterion $q \in Q$ is completely preordered by an outranking relation $\succeq_q$; $x \succeq_q y$ means that $x$ is at least as good as (outranks) $y$ with respect to the criterion $q$. Without loss of generality, we assume that the domain of $q$ is a subset of the reals, $V_q \subseteq \mathbb{R}$, and that the outranking relation is the simple order $\geq$ between real numbers, such that the following relation holds: $x \succeq_q y \iff f(x,q) \geq f(y,q)$. This relation is straightforward for a gain-type ("the more, the better") criterion, e.g. company profit. For a cost-type ("the less, the better") criterion, e.g. product price, this relation can be satisfied by negating the values from $V_q$.
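
As a minimal sketch of this negation trick (the criterion names and values below are hypothetical):

```python
# Hypothetical raw evaluations: "profit" is gain-type, "price" is cost-type.
raw = {"profit": 120.0, "price": 19.99}

# Negating the cost-type value turns "the less, the better" into "the more,
# the better", so x >=_q y can always be checked as f(x, q) >= f(y, q).
value = {"profit": raw["profit"], "price": -raw["price"]}
```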

Decision classes and class unions

Let $T = \{1, \ldots, n\}$. The domain of the decision criterion, $V_d$, consists of $n$ elements (without loss of generality we assume $V_d = T$) and induces a partition of $U$ into $n$ classes $\mathbf{Cl} = \{Cl_t, t \in T\}$, where $Cl_t = \{x \in U \colon f(x,d) = t\}$. Each object $x \in U$ is assigned to one and only one class $Cl_t$, $t \in T$. The classes are preference-ordered according to the increasing order of class indices, i.e. for all $r, s \in T$ such that $r \geq s$, the objects from $Cl_r$ are strictly preferred to the objects from $Cl_s$. For this reason, we can consider the upward and downward unions of classes, defined respectively as:

$Cl_t^{\geq} = \bigcup_{s \geq t} Cl_s, \qquad Cl_t^{\leq} = \bigcup_{s \leq t} Cl_s, \qquad t \in T$
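
Under the integer encoding assumed in the decision-table sketch above, the upward and downward unions can be computed directly; the function names are ours, introduced only for illustration:

```python
def upward_union(U, d, t):
    """Cl_t^>= : objects whose decision value is at least t."""
    return {x for x, row in U.items() if row[d] >= t}

def downward_union(U, d, t):
    """Cl_t^<= : objects whose decision value is at most t."""
    return {x for x, row in U.items() if row[d] <= t}

# Example: the union of at-least-medium students (Cl_2^>= of the example below):
# upward_union(U, d, MEDIUM) -> {'x2', 'x3', 'x6', 'x7', 'x8', 'x9', 'x10'}
```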

Main concepts

Dominance

We say that $x$ dominates $y$ with respect to $P \subseteq C$, denoted by $x D_P y$, if $x$ is at least as good as $y$ on every criterion from $P$, i.e. $x \succeq_q y$ for all $q \in P$. For each $P \subseteq C$, the dominance relation $D_P$ is reflexive and transitive, i.e. it is a partial pre-order. Given $P \subseteq C$ and $x \in U$, let

$D_P^{+}(x) = \{y \in U \colon y D_P x\}$
$D_P^{-}(x) = \{y \in U \colon x D_P y\}$

represent the P-dominating set and the P-dominated set with respect to $x \in U$, respectively.
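
Continuing the illustrative Python sketch (with the same assumed encoding and identifiers as above), the dominance relation and the two granules can be written as:

```python
def dominates(U, P, x, y):
    """x D_P y: x is at least as good as y on every criterion q in P."""
    return all(U[x][q] >= U[y][q] for q in P)

def dominating_set(U, P, x):
    """D_P^+(x): the objects that P-dominate x."""
    return {y for y in U if dominates(U, P, y, x)}

def dominated_set(U, P, x):
    """D_P^-(x): the objects P-dominated by x."""
    return {y for y in U if dominates(U, P, x, y)}
```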

Rough approximations

The key idea of the rough set philosophy is the approximation of one body of knowledge by another. In DRSA, the knowledge being approximated is a collection of upward and downward unions of decision classes, and the "granules of knowledge" used for the approximation are P-dominating and P-dominated sets.

The P-lower and the P-upper approximation of $Cl_t^{\geq}$, $t \in T$, with respect to $P \subseteq C$, denoted by ${\underline{P}}(Cl_t^{\geq})$ and ${\overline{P}}(Cl_t^{\geq})$ respectively, are defined as:

${\underline{P}}(Cl_t^{\geq}) = \{x \in U \colon D_P^{+}(x) \subseteq Cl_t^{\geq}\}$
${\overline{P}}(Cl_t^{\geq}) = \{x \in U \colon D_P^{-}(x) \cap Cl_t^{\geq} \neq \emptyset\}$

Analogously, the P-lower and the P-upper approximation of $Cl_t^{\leq}$, $t \in T$, with respect to $P \subseteq C$, denoted by ${\underline{P}}(Cl_t^{\leq})$ and ${\overline{P}}(Cl_t^{\leq})$ respectively, are defined as:

${\underline{P}}(Cl_t^{\leq}) = \{x \in U \colon D_P^{-}(x) \subseteq Cl_t^{\leq}\}$
${\overline{P}}(Cl_t^{\leq}) = \{x \in U \colon D_P^{+}(x) \cap Cl_t^{\leq} \neq \emptyset\}$

Lower approximations group the objects which certainly belong to the class union $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$). This certainty comes from the fact that an object $x \in U$ belongs to the lower approximation ${\underline{P}}(Cl_t^{\geq})$ (respectively ${\underline{P}}(Cl_t^{\leq})$) if no other object in $U$ contradicts this claim, i.e. every object $y \in U$ which P-dominates $x$ (respectively is P-dominated by $x$) also belongs to the class union $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$). Upper approximations group the objects which could belong to $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$), since an object $x \in U$ belongs to the upper approximation ${\overline{P}}(Cl_t^{\geq})$ (respectively ${\overline{P}}(Cl_t^{\leq})$) if there exists an object $y \in U$ P-dominated by $x$ (respectively P-dominating $x$) that belongs to the class union $Cl_t^{\geq}$ (respectively $Cl_t^{\leq}$).
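
These definitions translate almost literally into code; the sketch below reuses the helper functions from the earlier snippets and is, again, only an illustration under the assumed encoding:

```python
def lower_upward(U, P, d, t):
    """P-lower approximation of Cl_t^>= : D_P^+(x) must fit inside the union."""
    cl = upward_union(U, d, t)
    return {x for x in U if dominating_set(U, P, x) <= cl}

def upper_upward(U, P, d, t):
    """P-upper approximation of Cl_t^>= : D_P^-(x) must intersect the union."""
    cl = upward_union(U, d, t)
    return {x for x in U if dominated_set(U, P, x) & cl}

def lower_downward(U, P, d, t):
    """P-lower approximation of Cl_t^<= : D_P^-(x) must fit inside the union."""
    cl = downward_union(U, d, t)
    return {x for x in U if dominated_set(U, P, x) <= cl}

def upper_downward(U, P, d, t):
    """P-upper approximation of Cl_t^<= : D_P^+(x) must intersect the union."""
    cl = downward_union(U, d, t)
    return {x for x in U if dominating_set(U, P, x) & cl}
```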

The P-lower and P-upper approximations defined as above satisfy the following properties for all $t \in T$ and for any $P \subseteq C$:

${\underline{P}}(Cl_t^{\geq}) \subseteq Cl_t^{\geq} \subseteq {\overline{P}}(Cl_t^{\geq})$
${\underline{P}}(Cl_t^{\leq}) \subseteq Cl_t^{\leq} \subseteq {\overline{P}}(Cl_t^{\leq})$

The P-boundaries (P-doubtful regions) of $Cl_t^{\geq}$ and $Cl_t^{\leq}$ are defined as:

$Bn_P(Cl_t^{\geq}) = {\overline{P}}(Cl_t^{\geq}) - {\underline{P}}(Cl_t^{\geq})$
$Bn_P(Cl_t^{\leq}) = {\overline{P}}(Cl_t^{\leq}) - {\underline{P}}(Cl_t^{\leq})$
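
In the running Python sketch, the boundary regions are simply set differences of the approximations defined above:

```python
def boundary_upward(U, P, d, t):
    """Bn_P(Cl_t^>=): upper minus lower approximation of the upward union."""
    return upper_upward(U, P, d, t) - lower_upward(U, P, d, t)

def boundary_downward(U, P, d, t):
    """Bn_P(Cl_t^<=): upper minus lower approximation of the downward union."""
    return upper_downward(U, P, d, t) - lower_downward(U, P, d, t)
```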

Quality of approximation and reducts

The ratio

$\gamma_P(\mathbf{Cl}) = \frac{\left|U - \left(\left(\bigcup_{t \in T} Bn_P(Cl_t^{\geq})\right) \cup \left(\bigcup_{t \in T} Bn_P(Cl_t^{\leq})\right)\right)\right|}{|U|}$

defines the quality of approximation of the partition $\mathbf{Cl}$ into classes by means of the set of criteria $P$. This ratio expresses the relation between all the P-correctly classified objects and all the objects in the table.
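
In the running sketch, the quality of approximation is the fraction of objects lying outside every boundary region (reusing the functions above):

```python
def quality_of_approximation(U, P, d, T):
    """gamma_P(Cl): share of objects outside all P-boundary regions."""
    doubtful = set()
    for t in T:
        doubtful |= boundary_upward(U, P, d, t)
        doubtful |= boundary_downward(U, P, d, t)
    return (len(U) - len(doubtful)) / len(U)
```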

Every minimal subset $P \subseteq C$ such that $\gamma_P(\mathbf{Cl}) = \gamma_C(\mathbf{Cl})$ is called a reduct of $C$ and is denoted by $RED_{\mathbf{Cl}}(P)$. A decision table may have more than one reduct. The intersection of all reducts is known as the core.
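
A brute-force search over subsets of criteria, feasible only for small criterion sets, illustrates the notions of reduct and core; this is a sketch built on the functions above, not an efficient algorithm:

```python
from itertools import combinations

def reducts(U, C, d, T):
    """All minimal subsets P of C with gamma_P(Cl) = gamma_C(Cl)."""
    target = quality_of_approximation(U, C, d, T)
    found = []
    for size in range(1, len(C) + 1):
        for P in combinations(C, size):
            if quality_of_approximation(U, P, d, T) == target:
                # P is minimal only if no smaller reduct is contained in it.
                if not any(set(R) < set(P) for R in found):
                    found.append(P)
    return found

def core(U, C, d, T):
    """The core: criteria common to every reduct."""
    reds = reducts(U, C, d, T)
    return set(C).intersection(*map(set, reds)) if reds else set(C)
```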

Decision rules

On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table in terms of decision rules. Decision rules are expressions of the form if [condition] then [consequent] that represent a form of dependency between the condition criteria and the decision criterion. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes; and approximate rules are generated from boundary regions.

Certain rules have the following form:

if $f(x,q_1) \geq r_1$ and $f(x,q_2) \geq r_2$ and $\ldots$ and $f(x,q_p) \geq r_p$, then $x \in Cl_t^{\geq}$

if $f(x,q_1) \leq r_1$ and $f(x,q_2) \leq r_2$ and $\ldots$ and $f(x,q_p) \leq r_p$, then $x \in Cl_t^{\leq}$

Possible rules have a similar syntax; however, the consequent part of the rule has the form: $x$ could belong to $Cl_t^{\geq}$, or the form: $x$ could belong to $Cl_t^{\leq}$.

Finally, approximate rules have the syntax:

if $f(x,q_1) \geq r_1$ and $f(x,q_2) \geq r_2$ and $\ldots$ and $f(x,q_k) \geq r_k$ and $f(x,q_{k+1}) \leq r_{k+1}$ and $f(x,q_{k+2}) \leq r_{k+2}$ and $\ldots$ and $f(x,q_p) \leq r_p$, then $x \in Cl_s \cup Cl_{s+1} \cup \ldots \cup Cl_t$

The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table.
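
To illustrate how induced "at least" rules can be represented and applied, here is a small sketch; it reuses the grade encoding assumed in the earlier snippets, and its two rules simply restate rules 6 and 8 from the example in the next section:

```python
# Each certain ">=" rule is a pair (conditions, t); a condition (q, r) is read
# as f(x, q) >= r, and the consequent as "x belongs to Cl_t^>=".
at_least_rules = [
    ([("Literature", GOOD), ("Math", MEDIUM)], GOOD),  # rule 6 of the example
    ([("Math", GOOD)], MEDIUM),                        # rule 8 of the example
]

def rule_matches(obj, conditions):
    """True if the object satisfies every elementary condition of the rule."""
    return all(obj[q] >= r for q, r in conditions)

def at_least_class(obj, rules, default=BAD):
    """Strongest conclusion of the matching ">=" rules: the object belongs at
    least to the highest class among the consequents of the matched rules."""
    matched = [t for conditions, t in rules if rule_matches(obj, conditions)]
    return max(matched, default=default)

# A hypothetical new student: good at Math and Literature, medium at Physics.
new_student = {"Math": GOOD, "Physics": MEDIUM, "Literature": GOOD}
print(at_least_class(new_student, at_least_rules))  # -> 2, i.e. at least good
```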

Each decision rule should be minimal. Since a decision rule is an implication, a minimal decision rule is an implication such that there is no other implication with an antecedent of at least the same weakness (in other words, a rule using a subset of the elementary conditions and/or weaker elementary conditions) and a consequent of at least the same strength (in other words, a rule assigning objects to the same union or sub-union of classes).

A set of decision rules is complete if it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We call a set of decision rules minimal if it is complete and non-redundant, i.e. exclusion of any rule from this set makes it incomplete. One of three induction strategies can be adopted to obtain a set of decision rules:[4] generation of a minimal set of rules covering all objects of the decision table, generation of an exhaustive set of all rules for the decision table, or generation of a characteristic set of rules, each covering relatively many objects but not necessarily all of them.

The most popular rule induction algorithm for the dominance-based rough set approach is DOMLEM,[5] which generates a minimal set of rules.

Example

Consider the following problem of high school students’ evaluations:

Table 1: Example of high school evaluations

object (student)   $q_1$ (Mathematics)   $q_2$ (Physics)   $q_3$ (Literature)   $d$ (global score)
$x_1$              medium                medium            bad                  bad
$x_2$              good                  medium            bad                  medium
$x_3$              medium                good              bad                  medium
$x_4$              bad                   medium            good                 bad
$x_5$              bad                   bad               medium               bad
$x_6$              bad                   medium            medium               medium
$x_7$              good                  good              bad                  good
$x_8$              good                  medium            medium               medium
$x_9$              medium                medium            good                 good
$x_{10}$           good                  medium            good                 good

Each object (student) is described by three criteria $q_1, q_2, q_3$, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes: $Cl_1 = \{bad\}$, $Cl_2 = \{medium\}$ and $Cl_3 = \{good\}$. Thus, the following unions of classes were approximated:

$Cl_1^{\leq}$, i.e. the class of at most bad students,
$Cl_2^{\leq}$, i.e. the class of at most medium students,
$Cl_2^{\geq}$, i.e. the class of at least medium students,
$Cl_3^{\geq}$, i.e. the class of at least good students.

Notice that the evaluations of objects $x_4$ and $x_6$ are inconsistent, because $x_4$ has evaluations not worse than $x_6$ on all three criteria (and strictly better in Literature) but a worse global score.

Therefore, lower approximations of class unions consist of the following objects:

${\underline{P}}(Cl_1^{\leq}) = \{x_1, x_5\}$
${\underline{P}}(Cl_2^{\leq}) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_8\} = Cl_2^{\leq}$
${\underline{P}}(Cl_2^{\geq}) = \{x_2, x_3, x_7, x_8, x_9, x_{10}\}$
${\underline{P}}(Cl_3^{\geq}) = \{x_7, x_9, x_{10}\} = Cl_3^{\geq}$

Thus, only the class unions $Cl_1^{\leq}$ and $Cl_2^{\geq}$ cannot be approximated precisely. Their upper approximations are as follows:

${\overline{P}}(Cl_1^{\leq}) = \{x_1, x_4, x_5, x_6\}$
${\overline{P}}(Cl_2^{\geq}) = \{x_2, x_3, x_4, x_6, x_7, x_8, x_9, x_{10}\}$

while their boundary regions are:

$Bn_P(Cl_1^{\leq}) = Bn_P(Cl_2^{\geq}) = \{x_4, x_6\}$

Of course, since $Cl_2^{\leq}$ and $Cl_3^{\geq}$ are approximated precisely, we have ${\overline{P}}(Cl_2^{\leq}) = Cl_2^{\leq}$, ${\overline{P}}(Cl_3^{\geq}) = Cl_3^{\geq}$ and $Bn_P(Cl_2^{\leq}) = Bn_P(Cl_3^{\geq}) = \emptyset$.
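
These results can be reproduced with the Python sketches from the previous sections on the encoded Table 1, taking P = C, i.e. all three criteria (the element order of the printed sets may differ):

```python
P = C  # use all condition criteria

print(lower_downward(U, P, d, BAD))       # {'x1', 'x5'}
print(lower_upward(U, P, d, MEDIUM))      # {'x2', 'x3', 'x7', 'x8', 'x9', 'x10'}
print(boundary_downward(U, P, d, BAD))    # {'x4', 'x6'}
print(boundary_upward(U, P, d, MEDIUM))   # {'x4', 'x6'}
print(quality_of_approximation(U, P, d, (BAD, MEDIUM, GOOD)))  # 0.8
```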

The following minimal set of 10 rules can be induced from the decision table:

  1. if $Physics \leq bad$ then $student \leq bad$
  2. if $Literature \leq bad$ and $Physics \leq medium$ and $Math \leq medium$ then $student \leq bad$
  3. if $Math \leq bad$ then $student \leq medium$
  4. if $Literature \leq medium$ and $Physics \leq medium$ then $student \leq medium$
  5. if $Math \leq medium$ and $Literature \leq bad$ then $student \leq medium$
  6. if $Literature \geq good$ and $Math \geq medium$ then $student \geq good$
  7. if $Physics \geq good$ and $Math \geq good$ then $student \geq good$
  8. if $Math \geq good$ then $student \geq medium$
  9. if $Physics \geq good$ then $student \geq medium$
  10. if $Math \leq bad$ and $Physics \geq medium$ then $student = bad \lor medium$

The last rule is approximate, while the rest are certain.

Extensions

Multicriteria choice and ranking problems

The other two problems considered within multi-criteria decision analysis, multicriteria choice and ranking problems, can also be solved using the dominance-based rough set approach. This is done by converting the decision table into a pairwise comparison table (PCT).[1]

Variable-consistency DRSA

The definitions of the rough approximations are based on a strict application of the dominance principle. However, when defining non-ambiguous objects, it is reasonable to accept a limited proportion of negative examples, particularly for large decision tables. Such an extended version of DRSA is called the Variable-Consistency DRSA model (VC-DRSA).[6]

Stochastic DRSA

In real-life data, particularly for large datasets, the notions of rough approximations were found to be excessively restrictive. Therefore, an extension of DRSA based on a stochastic model (stochastic DRSA), which allows inconsistencies to some degree, has been introduced.[7] Having stated the probabilistic model for ordinal classification problems with monotonicity constraints, the concepts of lower approximations are extended to the stochastic case. The method is based on estimating the conditional probabilities using the nonparametric maximum likelihood method, which leads to the problem of isotonic regression.
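
A one-dimensional illustration of the isotonic regression step, using scikit-learn, is sketched below; note that this is only a simplified picture with invented data, since stochastic DRSA performs the estimation over the multi-dimensional dominance order rather than over a single criterion:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical evaluations on a single criterion and 0/1 class indicators,
# with one inconsistency (a better evaluation paired with a worse class).
evaluation = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indicator = np.array([0.0, 1.0, 0.0, 1.0, 1.0])

# Nonparametric, monotone (non-decreasing) estimate of the class probability.
probability = IsotonicRegression(increasing=True).fit_transform(evaluation, indicator)
print(probability)  # [0.  0.5 0.5 1.  1. ]
```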

Stochastic dominance-based rough sets can also be regarded as a sort of variable-consistency model.

Software

4eMka2 is a decision support system for multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on the Laboratory of Intelligent Decision Support Systems (IDSS) website.

See also

References

  1. ^ a b Greco, S., Matarazzo, B., Słowiński, R.: Rough sets theory for multi-criteria decision analysis. European Journal of Operational Research, 129, 1 (2001) 1–47
  2. ^ Greco, S., Matarazzo, B., Słowiński, R.: Multicriteria classification by dominance-based rough set approach. In: W.Kloesgen and J.Zytkow (eds.), Handbook of Data Mining and Knowledge Discovery, Oxford University Press, New York, 2002
  3. ^ Słowiński, R., Greco, S., Matarazzo, B.: Rough set based decision support. Chapter 16 [in]: E.K. Burke and G. Kendall (eds.), Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques, Springer-Verlag, New York (2005) 475–527
  4. ^ Stefanowski, J.: On rough set based approach to induction of decision rules. In: Skowron, A., Polkowski, L. (eds.), Rough Sets in Knowledge Discovery, Physica-Verlag, Heidelberg (1998) 500–529
  5. ^ Greco, S., Matarazzo, B., Słowiński, R., Stefanowski, J.: An Algorithm for Induction of Decision Rules Consistent with the Dominance Principle. In: Ziarko, W., Yao, Y. (eds.), Rough Sets and Current Trends in Computing, Lecture Notes in Artificial Intelligence 2005 (2001) 304–313, Springer-Verlag
  6. ^ Greco, S., Matarazzo, B., Słowiński, R., Stefanowski, J.: Variable consistency model of dominance-based rough set approach. In: Ziarko, W., Yao, Y. (eds.), Rough Sets and Current Trends in Computing, Lecture Notes in Artificial Intelligence 2005 (2001) 170–181, Springer-Verlag
  7. ^ Dembczyński, K., Greco, S., Kotłowski, W., Słowiński, R.: Statistical model for rough set approach to multicriteria classification. In Kok, J.N., Koronacki, J., de Mantaras, R.L., Matwin, S., Mladenic, D., Skowron, A. (eds.): Knowledge Discovery in Databases: PKDD 2007, Warsaw, Poland. Lecture Notes in Computer Science 4702 (2007) 164–175.

External links