


© 2015, IJARCSSE All Rights Reserved Page | 692

Volume 5, Issue 4, 2015 ISSN: 2277 128X

International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com

Machine Fault Identification Using Data Mining Technique and R Programming (Packages and Tools)

Apoorv Prasad*, Prof. Rajeswari C, Prof. Devendiran S

SITE, VIT Vellore, Tamil Nadu, India

Abstract— Rolling bearings are a type of bearing that uses cylinders to maintain the separation between the moving parts of the bearing. The main purpose of a roller bearing is to reduce rotational friction and to support radial and axial loads. Compared to ball bearings, roller bearings can support heavy radial loads and limited axial loads, and they can operate at moderate to high speeds.

Since rolling bearings are widely used in rotary machines, faults occurring in bearings must be detected as early as possible to avoid fatal breakdowns of machines that may lead to loss of production and human casualties. Faults in rolling element bearings are usually caused by localized defects in the outer race, the inner race, or the rolling elements (balls). Fault detection of mechanical components is therefore essential to increase productivity and reduce breakdowns.

This paper presents the use of fuzzy-rough sets, data-mining algorithms and R programming to identify the levels of fault in the components of rolling bearings.

Keywords— Data mining, redundant, missing data, sets, relations, attributes, data analysis, approach.

I. INTRODUCTION

Data mining is the process of discovering interesting knowledge from large amounts of data. The size of the data collected in many fields is increasing at an enormous rate; traditional data analysis has become ineffective, and developing methods for efficient data analysis has become a necessity. Data mining is an interdisciplinary field with contributions from many areas, such as statistics, machine learning, information retrieval and pattern recognition. The objectives of applying data-mining techniques in machine fault analysis are not only to identify faults in components but also to save human resources and to reduce costs.

Rough set theory was developed by Zdzislaw Pawlak in the early 1980s. Combined with data mining, it can be used for feature selection, feature extraction, data reduction, decision rule generation, and pattern extraction. It also helps to identify partial or total dependencies in data, to eliminate redundant data, and to handle null values, missing data, dynamic data and more.

R is a free software environment for statistical computing and graphics. It provides a wide variety of statistical and graphical techniques, can be extended easily via packages, and is widely used in both academia and industry.

II. ROUGH SET CONCEPTS

Rough Set Theory (RST), proposed by Pawlak (1982), is an extension of classical set theory for dealing with imprecise knowledge. It employs indiscernibility relations to evaluate to what extent two objects are identical or similar. Using these relations, one can construct two approximations of a concept:

Lower approximation: all instances which certainly belong to the concept/class.

Upper approximation: all instances which possibly belong to the concept.

One benefit of RST is that it does not require additional parameters to extract information.

A dataset is represented as an information system A = (U, A), where:

U: a finite, non-empty set of objects (instances/rows);

A: a finite, non-empty set of attributes, such that every a ∈ A is a function a : U → Va, where Va is the set of values of a.

A decision system A = (U, A ∪ {d}) is a special kind of information system, used in the context of classification, in which d is the decision attribute.

Indiscernibility relations: Given an information system (U, A), for any B ⊆ A the equivalence relation RB is defined by

RB = {(x, y) ∈ U² | ∀ a ∈ B, a(x) = a(y)}.

If (x, y) ∈ RB, then x and y have exactly the same values for the attributes in B. The equivalence classes of this so-called B-indiscernibility relation are denoted by [x]B.

Approximations: Given B ⊆ A, any X ⊆ U can be approximated using the information in B by constructing its B-lower and B-upper approximations:


RB ↓ X = {x ∈ U | [x]B ⊆ X},

RB ↑ X = {x ∈ U | [x]B ∩ X ≠ Ø}.

Positive region: the set of objects in U that can be certainly classified using the conditional attributes in B:

POSB = ⋃x ∈ U RB ↓ [x]d.

Boundary region: the set of objects x ∈ U that can be possibly, but not certainly, classified using the conditional attributes in B:

BNDB = ⋃x ∈ U RB ↑ [x]d \ ⋃x ∈ U RB ↓ [x]d.
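As a concrete illustration of the definitions above, the following Python sketch (this is not the RoughSets package, and the decision table values are hypothetical) computes the lower and upper approximations of a "fault" concept:

```python
# Illustrative sketch of RST approximations on a tiny decision table.
from collections import defaultdict

# Each object: (conditional attribute values, decision d).
table = {
    "x1": (("low", "low"), "normal"),
    "x2": (("low", "low"), "fault"),    # indiscernible from x1, other class
    "x3": (("high", "low"), "fault"),
    "x4": (("high", "high"), "fault"),
}

# B-indiscernibility classes [x]_B: objects with identical values on B.
eq_class = defaultdict(set)
for obj, (vals, _) in table.items():
    eq_class[vals].add(obj)

def approximations(X):
    """B-lower and B-upper approximations of a concept X subset of U."""
    lower = {o for o, (v, _) in table.items() if eq_class[v] <= X}
    upper = {o for o, (v, _) in table.items() if eq_class[v] & X}
    return lower, upper

fault = {o for o, (_, d) in table.items() if d == "fault"}
low, up = approximations(fault)
print(sorted(low))  # certainly faulty; x1/x2 fall in the boundary region
print(sorted(up))   # possibly faulty
```

Here x1 and x2 are indiscernible but belong to different classes, so they end up in the boundary region rather than the lower approximation.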

III. FUZZY ROUGH SET THEORY

Fuzzy rough set theory (FRST) was introduced by Dubois and Prade (1990) as a combination of fuzzy sets and RST. Instead of crisp indiscernibility relations, we can define fuzzy tolerance relations RB. For example, for any subset B of A,

ra(x, y) = 1 − |a(x) − a(y)| / |amax − amin|,

RB(x, y) = T(ra(x, y)), a ∈ B,

where ra is a fuzzy similarity measure and T is a t-norm.

Regarding the approximations, Radzikowska and Kerre define them as follows:

(RB ↓ X)(y) = inf x ∈ U I(RB(x, y), X(x)),

(RB ↑ X)(y) = sup x ∈ U T(RB(x, y), X(x)),

where I is an implicator, T is a t-norm, and RB is the fuzzy relation.

To construct the approximations, we therefore need to choose the type of relation, the fuzzy similarity measure, the implicator, and the t-norm.

A benefit of fuzzy rough set theory is that the data can be analysed without performing discretization.
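The formulas above can be traced on toy data. This Python sketch uses one possible choice of operators, the min t-norm and the Kleene-Dienes implicator I(a, b) = max(1 − a, b); the attribute values are hypothetical:

```python
# Toy trace of the FRST lower/upper approximations above.
a_vals = {"x1": 0.10, "x2": 0.15, "x3": 0.90}   # a single attribute a
a_min, a_max = min(a_vals.values()), max(a_vals.values())

def r_a(x, y):
    # fuzzy similarity: 1 - |a(x) - a(y)| / |a_max - a_min|
    return 1 - abs(a_vals[x] - a_vals[y]) / (a_max - a_min)

X = {"x1": 1.0, "x2": 1.0, "x3": 0.0}   # fuzzy membership in the concept X

def lower(y):
    # (R_B down X)(y) = inf over x of I(R_B(x, y), X(x)), Kleene-Dienes I
    return min(max(1 - r_a(x, y), X[x]) for x in a_vals)

def upper(y):
    # (R_B up X)(y) = sup over x of T(R_B(x, y), X(x)), min t-norm
    return max(min(r_a(x, y), X[x]) for x in a_vals)

print(lower("x1"), upper("x3"))   # x1 certainly in X; x3 barely touches X
```

Note that no discretization of the numeric attribute was needed, which is exactly the benefit mentioned above.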

IV. “ROUGH SETS” PACKAGE IN R

"RoughSets" is an R package integrating implementations of algorithms based on rough set and fuzzy rough set theory. It is aimed at both researchers and practitioners: researchers can build new models by supplying custom functions as parameters, while practitioners can use it directly for data analysis. It contains over 40 functions implementing basic concepts, missing value completion, discretization, feature selection, instance selection, rule induction (rule-based classifiers), and nearest neighbour-based classifiers.

CRAN: http://cran.r-project.org/package=RoughSets

Function names consist of three parts separated by points:

1. prefix:

BC: basic concepts,

D: discretization,

FS: feature selection,

IS: instance selection,

RI: rule induction,

C: nearest neighbour-based classifiers,

MV: missing value completion,

SF: supporting functions,

X: auxiliary functions.

2. suffix: there are three types: RST, FRST, and none; none means the function is used for both theories.

3. middle: the actual function name, which may consist of more than one word separated by points.

For example, FS.quickreduct.RST is a function for feature selection implementing QuickReduct based on rough set theory.

Fig 1. RoughSets Package in R


Implementation of Basic Concepts: The functions in this part are used as main components of the other application areas:

1. Indiscernibility relations: relations determining whether two objects are indiscernible by some attributes.
RST: equivalence relation (crisp).
FRST: crisp, tolerance, transitive kernel, transitive closure, and custom relations.

2. Lower and upper approximations: these approximations show whether objects can be classified with certainty or not.
RST: Pawlak's standard definition.
FRST: implicator/t-norm, vqrs, OWA, rfrs, fvprs, etc.

3. Positive region and degree of dependency: used to determine the objects included in the positive region and the corresponding degree of dependency.

4. Discernibility matrix: used to create the discernibility matrix showing the attributes that discern each pair of objects.

Especially for FRST, we can define custom functions via the following parameters: type.LU, type.aggregation, t.tnorm, and t.implicator.

Implementation of Missing Value Completion (MV): the process of dealing with missing values.

Wrapper function:

MV.missingValueCompletion(decision.table, type.method)

Implementation of Discretization (D): an approach for converting real-valued attributes into nominal ones in information systems.

D.max.discernibility.matrix.RST: discretization based on the maximal discernibility algorithm.
D.local.discernibility.matrix.RST: discretization based on a local strategy.
D.global.discernibility.heuristic.RST: discretization based on the global maximum discernibility heuristic.
D.discretize.quantiles.RST: discretization based on quantiles.
D.discretize.equal.intervals.RST: discretization based on equal interval sizes.

Wrapper function:

D.discretization.RST(decision.table, type.method, …)
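The last two unsupervised methods in the list, quantile-based and equal-interval discretization, reduce to computing cut points. A minimal Python sketch of the idea (not the package's implementation; the data are hypothetical):

```python
# Cut-point computation for equal-width and quantile discretization.
def equal_interval_cuts(values, k):
    """k equal-width bins -> k-1 cut points between min and max."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [lo + i * width for i in range(1, k)]

def quantile_cuts(values, k):
    """k equal-frequency bins -> k-1 cut points from sorted positions."""
    s = sorted(values)
    return [s[len(s) * i // k] for i in range(1, k)]

data = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8]
print(equal_interval_cuts(data, 2))   # midpoint of the range, ~6.45
print(quantile_cuts(data, 2))         # median-position value, 1.6
```

On skewed data such as vibration amplitudes, the two methods can produce very different bins, which is why the package offers both.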

Implementation of Feature Selection (FS): the process of finding a subset of attributes which gives the same quality as the complete feature set.

FS.quickreduct.RST: feature selection based on QuickReduct.
FS.quickreduct.FRST: feature selection based on fuzzy QuickReduct.
FS.greedy.heuristic.superreduct.RST: greedy heuristics for selecting a super-reduct.
FS.nearOpt.fvprs.FRST: feature selection based on near-optimal reduction.
FS.permutation.heuristic.RST: permutation heuristic for determining a reduct.

Wrapper functions:

FS.reduct.computation(decision.table, method, …)

Missing value completion methods (MV):

MV.deletionCases: deleting instances.
MV.mostCommonValResConcept: assigning the most common value of an attribute, restricted by concept.
MV.mostCommonVal: replacing with the attribute mean or most common value.
MV.globalClosestFit: the global closest fit.
MV.conceptClosestFit: the concept closest fit.


FS.feature.subset.computation(decision.table, method, …)

FS.all.reducts.computation(discernibilityMatrix)
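The idea behind QuickReduct (as in FS.quickreduct.RST) can be sketched in plain Python: greedily add the attribute that most increases the degree of dependency until it matches that of the full attribute set. The decision table below is hypothetical:

```python
# Greedy QuickReduct sketch using the RST dependency degree.
from collections import defaultdict

# rows: (conditional attribute values, decision). Attribute 2 is constant.
rows = [
    ((0, 0, 1), "normal"),
    ((0, 1, 1), "ball"),
    ((1, 0, 1), "inner"),
    ((1, 1, 1), "outer"),
]
n_attrs = 3

def gamma(attrs):
    """Dependency degree |POS_B(d)| / |U| for an attribute subset B."""
    groups = defaultdict(set)
    for vals, d in rows:
        groups[tuple(vals[a] for a in attrs)].add(d)
    consistent = sum(1 for vals, d in rows
                     if len(groups[tuple(vals[a] for a in attrs)]) == 1)
    return consistent / len(rows)

reduct, full = [], gamma(range(n_attrs))
while gamma(reduct) < full:
    best = max((a for a in range(n_attrs) if a not in reduct),
               key=lambda a: gamma(reduct + [a]))
    reduct.append(best)
print(sorted(reduct))  # the constant attribute 2 is never selected
```

The constant attribute contributes nothing to discernibility, so the reduct contains only attributes 0 and 1.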

Implementation of Instance Selection (IS): removing or replacing noisy or inconsistent instances in training datasets. In RST, it refers to evaluating each object included in the boundary region or the positive region, e.g., deleting objects in the boundary region.

IS.FRIS.FRST: fuzzy rough instance selection.
IS.FRPS.FRST: fuzzy rough prototype selection.

For example:

IS.FRPS.FRST(decision.table, type.alpha = "FRPS.1")

Implementation of Rule Induction (RI): approaches used to extract knowledge as IF-THEN rules.

RI.indiscernibilityBasedRules.RST: rule induction based on RST.
RI.hybridFS.FRST: the QuickRules algorithm.
RI.GFRS.FRST: generalized fuzzy rough set rule induction.

For example:

RI.indiscernibilityBasedRules.RST(decision.table, feature.set)

To predict on a new dataset:

predict(object, newdata, …)
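Indiscernibility-based rule induction can be illustrated in Python: every consistent equivalence class of the (reduced) conditional attributes yields one certain IF-THEN rule. The attribute names and values here are hypothetical:

```python
# Sketch of indiscernibility-based rule induction on a toy decision table.
from collections import defaultdict

attrs = ["kurtosis", "rms"]
rows = [
    ({"kurtosis": "high", "rms": "low"},  "ball"),
    ({"kurtosis": "high", "rms": "high"}, "inner"),
    ({"kurtosis": "low",  "rms": "high"}, "outer"),
    ({"kurtosis": "low",  "rms": "low"},  "normal"),
]

# Group objects into equivalence classes of the conditional attributes.
classes = defaultdict(set)
for cond, d in rows:
    classes[tuple(cond[a] for a in attrs)].add(d)

rules = []
for values, decisions in sorted(classes.items()):
    if len(decisions) == 1:  # consistent class -> one certain rule
        cond = " AND ".join(f"{a} = {v}" for a, v in zip(attrs, values))
        rules.append(f"IF {cond} THEN class = {decisions.pop()}")
for r in rules:
    print(r)
```

Inconsistent classes (same condition values, different decisions) would be skipped here; in practice they give possible rules via the upper approximation.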

Implementation of Nearest Neighbour-based Classifiers (C): these combine nearest neighbours (NN) with lower and upper approximations.

C.FRNN.FRST: fuzzy-rough nearest neighbours.
C.FRNN.O.FRST: fuzzy-rough ownership nearest neighbours.
C.POSNN.FRST: positive-region-based fuzzy-rough nearest neighbours.

For example:

control <- list(type.LU, k, type.aggregation, type.relation, t.implicator)
C.FRNN.FRST(decision.table, newdata, control)

Note: all values in the data must be numeric.

V. PROPOSED METHODOLOGY

Fig 2. Experimental Setup


In the following sub-sections we discuss in detail the step-by-step implementation of the proposed methodology.

Step 1: Raw Data.

The experimental setup shown in Fig 2 consists of a variable frequency drive (VFD), a three-phase 0.5 hp AC motor, a bearing, a belt drive, a gearbox, and a brake drum dynamometer with scale. A standard deep groove ball bearing (No. 60005) is used in this experiment. A triaxial accelerometer is fixed on the bearing block to measure the vibration signals. Using a 24-bit ATA0824DAQ51 data acquisition system, the signals were collected at a sampling frequency of 12800 Hz. The bearing is driven by the motor at a constant rotating speed of 1700 r/min. A constant load is applied by the brake drum dynamometer and the speed is monitored by a tachometer.

After signal processing, the data collected for the different bearing conditions, i.e. normal, ball, inner race and outer race, is stored in .csv format. Each condition's dataset contains 240,000 rows.

Step 2: Data Cleaning.

This is one of the most vital steps before initiating data mining on the collected data. In most cases the collected data may contain noise and even missing values. To achieve data cleaning, one must be able to remove the noise and handle the missing values. Noise can be removed by smoothing techniques, whereas the problem of missing values is solved by replacing a missing value with the most commonly occurring value for that attribute.

First, we took the 240,000-row datasets for the different bearing conditions, i.e. normal, ball, inner, outer. Then, for each condition, we divided 60,000 rows into ten different .csv files.

To cope with missing values, we either took the most commonly occurring value of that attribute, or took the mean of the previous ten values to determine the missing value.
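The two gap-filling strategies just described can be sketched in Python (None marks a missing reading; the signal values are hypothetical):

```python
# Missing-value completion: most common value, or mean of prior readings.
from collections import Counter
from statistics import mean

def fill_most_common(col):
    """Replace every gap with the most common observed value."""
    mode = Counter(v for v in col if v is not None).most_common(1)[0][0]
    return [mode if v is None else v for v in col]

def fill_rolling_mean(col, window=10):
    """Replace each gap with the mean of up to `window` previous values."""
    out = []
    for v in col:
        if v is None:
            prev = [x for x in out[-window:] if x is not None]
            v = mean(prev) if prev else None
        out.append(v)
    return out

signal = [0.5, 0.7, None, 0.6, 0.6]
print(fill_most_common(signal))    # gap -> 0.6, the most common value
print(fill_rolling_mean(signal))   # gap -> mean(0.5, 0.7) = 0.6
```

The rolling-mean variant respects local signal level, which suits slowly varying vibration statistics better than a global mode.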

Step 3: Feature Selection.

Feature selection extracts a subset of features that preserves the quality of the complete feature set. In other words, its purpose is to select the most important/useful features and to eliminate the unimportant ones.

As described above, we had 60,000 rows in 10 different files for each condition, i.e. normal, ball, inner, outer. For each of the ten files, we perform feature selection taking into account the following time-domain features:

1. Mean.
2. Variance.
3. Square Mean Root.
4. Root Mean Square.
5. Peak.
6. Skewness.
7. Kurtosis.
8. S-factor (shape factor).
9. C-factor (crest factor).
10. L-factor (clearance factor).
11. I-factor (impulse factor).
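Several of the listed features can be computed directly from a vibration window. A Python sketch of a subset of them, using the usual time-domain definitions (this is not the paper's exact code, and the input window is synthetic):

```python
# A few time-domain features of a vibration window, plain Python.
import math

def features(x):
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    smr = (sum(math.sqrt(abs(v)) for v in x) / n) ** 2   # square mean root
    peak = max(abs(v) for v in x)
    sd = math.sqrt(var)
    skew = sum((v - mu) ** 3 for v in x) / (n * sd ** 3) if sd else 0.0
    kurt = sum((v - mu) ** 4 for v in x) / (n * sd ** 4) if sd else 0.0
    mean_abs = sum(abs(v) for v in x) / n
    return {"mean": mu, "variance": var, "rms": rms, "smr": smr,
            "peak": peak, "skewness": skew, "kurtosis": kurt,
            "s_factor": rms / mean_abs,   # shape factor
            "c_factor": peak / rms}       # crest factor

f = features([0.0, 1.0, 0.0, -1.0] * 100)   # synthetic alternating signal
print(f["rms"], f["c_factor"])
```

Impulsive bearing faults typically raise the peak-sensitive ratios (crest factor, kurtosis) well before the RMS level changes, which is why such ratios are favoured for early fault detection.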

Step 4: Feature Reduction.

After performing feature selection, we perform feature reduction. In many cases it has been observed that, even after feature selection, adding new features leads to worse rather than better performance. This effect can be mitigated by:

1. redesigning the features,
2. selecting an appropriate subset among the existing features, and
3. combining existing features.

Linear transformations are particularly attractive because they are simple to compute and analytically tractable. Linear methods project the high-dimensional data into a lower-dimensional space, which reduces the complexity of estimation and classification and makes it possible to examine the multivariate data visually in two or three dimensions. Two classical approaches for finding optimal linear transformations are:

1. Principal Component Analysis (PCA): seeks a projection that best represents the data in a least-squares sense.
2. Linear Discriminant Analysis (LDA): seeks a projection that best separates the data in a least-squares sense.

In our proposed methodology we focus on Principal Component Analysis as the feature reduction technique.

Algorithm:

1. Mean-centre the data (optional).
2. Compute the covariance matrix of the dimensions.


3. Find the eigenvectors of the covariance matrix.
4. Sort the eigenvectors in decreasing order of eigenvalues.
5. Project onto the eigenvectors in order.

In detail:

6. Assume the data matrix B is of size m × n.
7. For each dimension i, compute the mean µi.
8. Mean-centre B by subtracting µi from each column i to get A.
9. Compute the covariance matrix C of size n × n.
10. If mean-centred, C = AᵀA.
11. Find the eigenvectors and corresponding eigenvalues (V, E) of C.
12. Sort the eigenvalues such that e1 ≥ e2 ≥ … ≥ en.
13. Project step-by-step onto the principal components v1, v2, etc.
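Steps 6 to 13 can be sketched with NumPy on synthetic data; the 1/(m−1) normalisation of the covariance is added here, and one dimension is made deliberately redundant to show the reduction:

```python
# PCA sketch following steps 6-13 above (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(200, 3))      # step 6: data matrix B, m x n
B[:, 2] = B[:, 0] * 2.0            # make the third dimension redundant

A = B - B.mean(axis=0)             # steps 7-8: mean-centre each column
C = A.T @ A / (len(A) - 1)         # steps 9-10: covariance matrix, n x n
evals, evecs = np.linalg.eigh(C)   # step 11: eigenpairs (ascending order)
order = np.argsort(evals)[::-1]    # step 12: sort eigenvalues descending
evals, evecs = evals[order], evecs[:, order]

k = 2                              # keep the two leading components
projected = A @ evecs[:, :k]       # step 13: project onto v1, v2
print(projected.shape)
```

Because the third column is an exact multiple of the first, the smallest eigenvalue is numerically zero and the data is fully represented by two components.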

VI. SIMULATION RESULTS

The following figures i.e. Fig 3 to Fig 12 represent the simulated results of the proposed model. Fig 3 – Fig 9 represents

the R software environment and the implementation of modules step-by-step. Fig 10 – Fig12 show the results obtained in

datasets after performing each step in data mining technique i.e. 1. Raw Data, 2. Data Cleaning, 3. Feature Selection, 4.

Feature Reduction.

Fig 3. File Selection.
Fig 4. Conversion of Data Frame to Decision Table.
Fig 5. Indiscernibility Relation.
Fig 6. Lower and Upper Approximation.
Fig 7. Positive Region.
Fig 8. Discernibility Matrix.
Fig 9. Feature Subset Quick Reduct.


Fig 10. Initial dataset with no data cleaning, feature selection or reducts.
Fig 11. Dataset after performing data cleaning, reducts and feature selection.
Fig 12. Dataset after performing feature reduction.

VII. CONCLUSIONS

In our proposed methodology, we have made use of an open-source programming platform, R, one of the most widely used environments for statistical computing. It is used by many researchers and data scientists for performing data mining. The RoughSets package is available in R, and we have made use of it in achieving our results. Finally, the choice between feature reduction and feature selection depends on the application domain and the specific training data. Feature selection leads to savings in computational costs, and the selected features retain their original physical interpretation. Feature reduction may provide better discriminative ability, but the new features may not have a clear physical meaning.

REFERENCES

[1] A. N. Pathak, Manu Sehgal, Divya Christopher, "A Study on Selective Data Mining Algorithms", International Journal of Computer Science Issues (IJCSI), 03/2011, Volume 8, Issue 2, Page 479.
[2] Mosima Anna Masethe, Hlaudi Daniel Masethe, "Prediction of Work Integrated Learning Placement Using Data Mining Algorithms", Lecture Notes in Engineering and Computer Science, 10/2014, Volume 2213, Issue 1, International Association of Engineers, Pages 353-357.
[3] Smyth, Padhraic, Pregibon, Daryl, Faloutsos, Christos, "Data-driven evolution of data mining algorithms", Communications of the ACM, 08/2002, Volume 45, Issue 8, Pages 33-37.
[4] S. Saraswathi, Dr. Mary Immaculate Sheela, "A Comparative Study of Various Clustering Algorithms in Data Mining", International Journal of Computer Science and Mobile Computing, 11/2014, Volume 3, Issue 11, Pages 422-428.


[5] Anderson, Russell K., "Prediction Algorithms for Data Mining", John Wiley & Sons, Ltd, 2013, ISBN 1119967546, 9781119967545, Pages 83-101.
[6] Sadoyan, Hovhannes, Zakarian, Armen, Mohanty, Pravansu, "Data mining algorithm for manufacturing process control", The International Journal of Advanced Manufacturing Technology, 03/2006, Volume 28, Issue 3, Pages 342-350.
[7] Prabhjot Kaur, Robin Parkash Mathur, "Implementation and Analysis of Clustering Algorithms in Data Mining", International Journal of Computers & Technology, 05/2013, Volume 6, Issue 1, Pages 232-236.
[8] P. K. Srimani, Malini M. Patil, "Massive Data Mining (MDM) on Data Streams Using Classification Algorithms", International Journal of Engineering Science and Technology, 06/2012, Volume 4, Issue 6.
[9] Dharmender Kumar, "Performance Analysis of Data Mining Algorithms to Generate Frequent Itemset", International Journal of Artificial Intelligence & Knowledge Discovery, 04/2011, Volume 1, Issue 2, Pages 1-5.
[10] Liu, Guilong, "Axiomatic systems for rough sets and fuzzy rough sets", International Journal of Approximate Reasoning, 2008, Volume 48, Issue 3, Pages 857-867.
[11] Nguyen, Hung Son, Pal, Sankar K., Skowron, Andrzej, "Rough sets and fuzzy sets in natural computing", Theoretical Computer Science, 2011, Volume 412, Issue 42, Pages 5816-5819.
[12] Norman Matloff, "The Art of R Programming", Edition 1, 10/2011, ISBN 9781593273842, 1593273843, Pages 400.
[13] Sylvia Tippmann, "Programming tools: Adventures with R", Nature, 01/2015, Volume 517, Issue 7532.