PRACTICAL FILE ON SOFT COMPUTING

B.E. (Computer Science & Engineering)

Submitted to: GGCT, Jabalpur

Submitted by: Brijesh Kumar Singh, 0208cs101037, VIII Sem


Index


S.No List of Experiments Date Sign

1 Study of Biological Neuron & Artificial Neural Networks.

2 Study of Various activation functions & their Matlab implementations.

3 WAP in C++ to implement Perceptron Training algorithm.

4 WAP in C++ to implement Delta learning rule.

5 Write an algorithm for Adaline N/W with flowchart.

6 Write an algorithm for Madaline N/W with flowchart.

7 WAP in C++ to implement Error Back Propagation Algorithm.

8 Study of Genetic Algorithm.

9 Write a MATLAB program to find union, intersection and complement of fuzzy sets.

10 Write a MATLAB program for maximizing f(x) = x^2 using GA.

Remarks:

Object: 1 Study of Biological Neuron & Artificial Neural Networks.

Biological Neuron

Artificial neural networks were born after McCulloch and Pitts introduced a set of simplified neurons in 1943. These neurons were represented as models of biological networks, turned into conceptual components for circuits that could perform computational tasks. The basic model of the artificial neuron is founded upon the functionality of the biological neuron. By definition, "Neurons are basic signaling units of the nervous system of a living being in which each neuron is a discrete cell whose several processes arise from its cell body."


The biological neuron has four main regions to its structure. The cell body, or soma, has two kinds of offshoots from it: the dendrites, and the axon, which ends in pre-synaptic terminals. The cell body is the heart of the cell. It contains the nucleus and maintains protein synthesis. A neuron has many dendrites, which look like a tree structure and receive signals from other neurons.

A single neuron usually has one axon, which expands off from a part of the cell body called the axon hillock. The axon's main purpose is to conduct electrical signals generated at the axon hillock down its length. These signals are called action potentials.

The other end of the axon may split into several branches, which end in a pre-synaptic terminal. The electrical signals (action potential) that the neurons use to convey the information of the brain are all identical. The brain can determine which type of information is being received based on the path of the signal.

The brain analyzes all patterns of signals sent, and from that information it interprets the type of information received. The myelin is a fatty tissue that insulates the axon. The non-insulated parts of the axon are called Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated. This ensures that the signal traveling down the axon is fast and constant.

The synapse is the area of contact between two neurons. They do not physically touch because they are separated by a cleft. The electric signals are sent through chemical interaction. The neuron sending the signal is called the pre-synaptic cell and the neuron receiving the signal is called the post-synaptic cell.

The electrical signals are generated by the membrane potential, which is based on differences in the concentration of sodium and potassium ions inside and outside the cell membrane.

Biological neurons can be classified by their function or by the quantity of processes they carry out. When they are classified by processes, they fall into three categories: Unipolar neurons, bipolar neurons and multipolar neurons.

Unipolar neurons have a single process. Their dendrites and axon are located on the same stem. These neurons are found in invertebrates.


Bipolar neurons have two processes. Their dendrites and axon emerge as two separate processes.

Multipolar neurons: These are commonly found in mammals. Some examples of these neurons are spinal motor neurons, pyramidal cells and Purkinje cells.

When biological neurons are classified by function they fall into three categories. The first group is sensory neurons. These neurons provide all information for perception and motor coordination. The second group provides information to muscles and glands; these are called motor neurons. The last group, the interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, is usually found in the brain and connects different parts of it. The other group, called local interneurons, is only used in local circuits.

Artificial Neural Network

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform; even so, a software implementation of a neural network can be made, with its advantages and disadvantages.

Advantages: A neural network can perform tasks that a linear program cannot. When an element of the neural network fails, it can continue without any problem, by virtue of its parallel nature. A neural network learns and does not need to be reprogrammed. It can be implemented in any application, and it can be implemented without any problem.

Disadvantages: The neural network needs training to operate. The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated. Large neural networks require high processing time.

Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.

Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two


important functions: as pattern classifiers and as nonlinear adaptive filters. We will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.

Definitions and Style of Computation

An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with lots of flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is composed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable.

It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if they are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data and the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice.

This operating procedure should be contrasted with the traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system automatically adjusts the parameters. So, it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to incrementally refine the


solution. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said about 10 years ago that "artificial neural networks are the second best way to implement a solution", motivated by the simplicity of their design and because of their universality, only shadowed by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.

Neural Network topologies

In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:

Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present, that is, no connections extending from outputs of units to inputs of units in the same layer or previous layers.

Recurrent neural networks that do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network will evolve to a stable state in which these activations do not change anymore. In other applications, the change of the activation values of the output neurons is significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).

Classical examples of feed-forward neural networks are the Perceptron and Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).

Training of Artificial neural networks

A neural network has to be configured such that the application of a set of inputs produces (either 'direct' or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.

We can categorize the learning situations into two distinct sorts. These are:


Supervised learning or Associative learning in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).

Unsupervised learning or Self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather the system must develop its own representation of the input stimuli.

Reinforcement Learning: This type of learning may be considered as an intermediate form of the above two types of learning. Here the learning machine does some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.


Object: 2 Study of Various activation functions & their Matlab implementations.

Activation functions

The activation function acts as a squashing function, such that the output of a neuron in a neural network is between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(·). First, there is the Threshold Function, which takes on a value of 0 if the summed input is less than a certain threshold value (v), and the value 1 if the summed input is greater than or equal to the threshold value.

Secondly, there is the Piecewise-Linear function. This function again can take on the values of 0 or 1, but can also take on values between that depending on the amplification factor in a certain region of linear operation.

Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of the sigmoid function is the hyperbolic tangent function.
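The three functions above can be sketched in C++ (the language used for the other programs in this file); this is a minimal illustration in which the threshold value and the gain of the piecewise-linear region are assumed parameters. The corresponding MATLAB Neural Network Toolbox transfer functions are hardlim, satlin, logsig and tansig.

```cpp
#include <cmath>

// Threshold function: 0 below the threshold v0, 1 at or above it.
double thresholdFn(double v, double v0 = 0.0) {
    return v >= v0 ? 1.0 : 0.0;
}

// Piecewise-linear function: linear (slope = gain) around 0,
// clipped to the range [0, 1].
double piecewiseLinear(double v, double gain = 1.0) {
    double y = 0.5 + gain * v;
    if (y < 0.0) return 0.0;
    if (y > 1.0) return 1.0;
    return y;
}

// Logistic sigmoid: smooth squashing into (0, 1).
double sigmoid(double v) {
    return 1.0 / (1.0 + std::exp(-v));
}

// Hyperbolic tangent: a sigmoid for the -1 to 1 range.
double tanhFn(double v) {
    return std::tanh(v);
}
```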


The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units and the different neural network topologies, and then learning strategies as a basis for an adaptive system.


Object: 3 Explain Perceptron Training Algorithm

Perceptron

The perceptron is an algorithm for supervised classification of an input into one of two possible outputs. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input. The learning algorithm for perceptrons is an online algorithm, in that it processes elements in the training set one at a time.

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt.[1]

In the context of artificial neural networks, the perceptron algorithm is also termed the single-layer perceptron, to distinguish it from the case of a multilayer perceptron, which is a more complicated neural network. As a linear classifier, the (single-layer) perceptron is the simplest kind of feedforward neural network.

The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value):

f(x) = 1 if w · x + b > 0, and 0 otherwise,

where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value.

The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable.

Perceptron Training Algorithm

Below is an example of a learning algorithm for a (single-layer) perceptron. For multilayer perceptrons, where a hidden layer exists, more complicated algorithms such as backpropagation must be used. Alternatively, methods such as the delta rule can be used if the function is non-linear and differentiable, although the one below will work as well.

When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation.


We first define some variables:

y = f(z) denotes the output from the perceptron for an input vector z.
b is the bias term, which in the example below we take to be 0.
D = {(x1,d1), ..., (xs,ds)} is the training set of s samples, where:

xj is the n-dimensional input vector, and
dj is the desired output value of the perceptron for that input.

We show the values of the nodes as follows:

xj,i is the value of the i-th node of the j-th training input vector.

To represent the weights:

wi is the i-th value in the weight vector, to be multiplied by the value of the i-th input node.

An extra dimension, with index 0, can be added to all input vectors, with xj,0 = 1, in which case w0 replaces the bias term. To show the time-dependence of w, we use:

wi(t) is the weight i at time t.
α is the learning rate, where 0 < α <= 1.

Too high a learning rate makes the perceptron periodically oscillate around the solution unless additional steps are taken.

The appropriate weights are applied to the inputs, and the resulting weighted sum passed to a function that produces the output y.

Learning algorithm steps

1. Initialise weights and threshold. Note that weights may be initialised by setting each weight node to 0 or to a small random value. In the example below, we choose the former.

2. For each sample j in our training set D, perform the following steps over the input xj and desired output dj:


2a. Calculate the actual output:

yj(t) = f[w(t) · xj]

2b. Adapt weights:

wi(t+1) = wi(t) + α (dj − yj(t)) xj,i , for all nodes 0 <= i <= n.

Step 2 is repeated until the iteration error is less than a user-specified error threshold γ, or a predetermined number of iterations have been completed. Note that the algorithm adapts the weights immediately after steps 2a and 2b are applied to a pair in the training set, rather than waiting until all pairs in the training set have undergone these steps.
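The steps above can be turned into a short C++ program. This is a sketch rather than the file's own listing: the zero initial weights follow step 1, while the learning rate and the AND-gate data used in testing are illustrative choices.

```cpp
#include <cstddef>
#include <vector>

// Single-layer perceptron with a threshold activation. w[0] is the
// bias weight (inputs are implicitly extended with x0 = 1, as in the
// "extra dimension" remark above).
struct Perceptron {
    std::vector<double> w;

    explicit Perceptron(std::size_t nInputs) : w(nInputs + 1, 0.0) {}

    // f(w . x + b): 1 if the weighted sum exceeds 0, else 0
    int predict(const std::vector<double>& x) const {
        double sum = w[0];                     // bias term (x0 = 1)
        for (std::size_t i = 0; i < x.size(); ++i) sum += w[i + 1] * x[i];
        return sum > 0.0 ? 1 : 0;
    }

    // One pass over the training set, adapting weights immediately
    // after each sample (steps 2a-2b); returns the number of errors.
    int trainEpoch(const std::vector<std::vector<double>>& X,
                   const std::vector<int>& d, double alpha) {
        int errors = 0;
        for (std::size_t j = 0; j < X.size(); ++j) {
            int delta = d[j] - predict(X[j]);  // d_j - y_j(t)
            if (delta != 0) {
                ++errors;
                w[0] += alpha * delta;
                for (std::size_t i = 0; i < X[j].size(); ++i)
                    w[i + 1] += alpha * delta * X[j][i];
            }
        }
        return errors;
    }
};
```

For a linearly separable set such as AND, repeated calls to trainEpoch eventually return 0 errors, which serves as the stopping condition of step 2.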


Object: 4 Write about Delta learning rule

Delta Rule

Delta rule is a generalization of the perceptron training algorithm. It extends the technique to continuous inputs and outputs. In the perceptron training algorithm a term delta is introduced, which is the difference between the desired (target) output T and the actual output A:

delta = (T − A)

Here, if delta = 0, the output is correct and nothing is done. If delta > 0, the output is incorrect and is 0; add each input to its corresponding weight. If delta < 0, the output is incorrect and is 1; subtract each input from its corresponding weight. A learning rate coefficient η multiplies the delta × xi product to allow control of the average size of weight changes:

Δi = η · delta · xi
wi(n+1) = wi(n) + Δi

where Δi is the correction associated with the i-th input xi, wi(n+1) is the value of the weight after adjustment, and wi(n) is the value of the weight before adjustment.

Implementation of Delta Rule

#include <iostream>
using namespace std;

int main() {
    float input[3], weight[3], d, a, del;
    for (int i = 0; i < 3; i++) {
        cout << "\nEnter input " << i << "\t";
        cin >> input[i];
        cout << "Initialize weight " << i << "\t";
        cin >> weight[i];
    }
    cout << "\nEnter the desired output\t";
    cin >> d;
    do {
        // actual output: thresholded weighted sum
        float net = 0;
        for (int i = 0; i < 3; i++) net += weight[i] * input[i];
        a = (net >= 0) ? 1 : 0;
        del = d - a;                      // delta = (T - A)
        if (del > 0)                      // output is 0, should be 1
            for (int i = 0; i < 3; i++) weight[i] += input[i];
        else if (del < 0)                 // output is 1, should be 0
            for (int i = 0; i < 3; i++) weight[i] -= input[i];
        cout << "\nValue of delta is " << del;
        if (del != 0) cout << "\nWeights have been adjusted";
    } while (del != 0);
    cout << "\nOutput is correct\n";
    return 0;
}


Object: 5 Write an algorithm for Adaline N/W with flowchart.

Adaline Network

The Adaline network training algorithm is as follows:

Step 0: Weights and bias are set to some random values, but not zero. Set the learning rate parameter α.
Step 1: Perform steps 2-6 when stopping condition is false.
Step 2: Perform steps 3-5 for each bipolar training pair s:t.
Step 3: Set activations for input units i = 1 to n:

xi = si

Step 4: Calculate the net input to the output unit.

yin = b + Σ xi wi   (sum over i = 1 to n)

Step 5: Update the weights and bias for i = 1 to n:

wi(new) = wi(old) + α (t − yin) xi

b(new) = b(old) + α (t − yin)

Step 6: If the highest weight change that occurred during training is smaller than a specified tolerance, then stop the training process; else continue. This is the test for the stopping condition of a network.

Testing Algorithm :

Step 0: Initialize the weights.
Step 1: Perform steps 2-4 for each bipolar input vector x.
Step 2: Set the activations of the input units to x.
Step 3: Calculate the net input to the output unit:

yin = b + Σ xi wi

Step 4: Apply the activation function over the net input calculated:

y = 1 if yin >= 0
    -1 if yin < 0
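A minimal C++ sketch of the Adaline training and testing algorithms above (the small starting weights, the learning rate and the bipolar AND training pairs used in testing are illustrative assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Adaline: the weights are adapted with the delta (LMS) rule against
// the *net* input yin, not against the thresholded output.
struct Adaline {
    std::vector<double> w;   // weights
    double b = 0.1;          // bias (small nonzero start, per Step 0)

    explicit Adaline(std::size_t n) : w(n, 0.1) {}

    // Step 4: net input yin = b + sum(xi * wi)
    double netInput(const std::vector<double>& x) const {
        double yin = b;
        for (std::size_t i = 0; i < x.size(); ++i) yin += x[i] * w[i];
        return yin;
    }

    // Steps 4-5 for one bipolar pair (x, t); returns the largest
    // weight change, used for the Step 6 stopping test.
    double trainPair(const std::vector<double>& x, double t, double alpha) {
        double err = t - netInput(x);
        double maxChange = std::fabs(alpha * err);        // bias change
        b += alpha * err;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double dw = alpha * err * x[i];
            w[i] += dw;
            maxChange = std::max(maxChange, std::fabs(dw));
        }
        return maxChange;
    }

    // Testing algorithm: bipolar threshold over the net input.
    int predict(const std::vector<double>& x) const {
        return netInput(x) >= 0.0 ? 1 : -1;
    }
};
```

Training stops when the largest weight change in an epoch falls below a tolerance, as in Step 6, or after a fixed number of epochs.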


Object: 6 Write an algorithm for Madaline N/W with flowchart.

Madaline Network

The Madaline network training algorithm is as follows:

Step 0: Initialize the weights. Also set the initial learning rate α.
Step 1: When stopping condition is false, perform steps 2-3.
Step 2: For each bipolar training pair s:t, perform steps 3-7.
Step 3: Activate input layer units for i = 1 to n:

xi = si

Step 4: Calculate the net input to each hidden Adaline unit:

zinj = bj + Σ (i = 1 to n) xi wij ,  j = 1 to m

Step 5: Calculate the output of each hidden unit:

zj = f(zinj)

Step 6: Find the output of the net:

yin = b0 + Σ (j = 1 to m) zj vj

y = f(yin)

Step 7: Calculate the error and update the weights:

1. If t = y, no weight updation is required.
2. If t ≠ y and t = +1, update weights on zj, where net input is closest to 0 (zero):

bj(new) = bj(old) + α (1 − zinj)
wij(new) = wij(old) + α (1 − zinj) xi

3. If t ≠ y and t = −1, update weights on zk whose net input is positive:

wik(new) = wik(old) + α (−1 − zink) xi
bk(new) = bk(old) + α (−1 − zink)

Step 8: Test for stopping condition.

Object: 7 Write a program to implement Error Back Propagation Algorithm

Algorithm for Error Back Propagation


Start with randomly chosen weights;
while MSE is unsatisfactory and computational bounds are not exceeded, do
    For each input pattern xp and desired output vector dp:
        Compute the hidden node outputs xj(1);
        Compute the network output vector o;
        Compute the error between o and the desired output vector dp;
        Modify the weights between hidden and output nodes:

            ΔWk,j(2,1) = η (dk − ok) ok (1 − ok) xj(1)

        Modify the weights between input and hidden nodes:

            ΔWj,i(1,0) = η Σk [ (dk − ok) ok (1 − ok) Wk,j(2,1) ] xj(1) (1 − xj(1)) xi(0)

    End for
End while

Program for Back propagation Algorithm

Program Code:

#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

const int NI = 2, NH = 2, NO = 1;   // input, hidden and output nodes

float sigmoid(float v) { return 1.0f / (1.0f + exp(-v)); }

int main() {
    // training patterns p and desired output vector t (XOR)
    float p[4][NI] = {{0,0},{0,1},{1,0},{1,1}};
    float t[4] = {0, 1, 1, 0};
    float w1[NH][NI+1], w2[NO][NH+1];   // +1 for the bias weight
    float eta = 0.5f;                   // learning rate
    srand(1);
    // start with randomly chosen weights in [-1, 1]
    for (int j = 0; j < NH; j++)
        for (int i = 0; i <= NI; i++) w1[j][i] = 2.0f*rand()/RAND_MAX - 1;
    for (int k = 0; k < NO; k++)
        for (int j = 0; j <= NH; j++) w2[k][j] = 2.0f*rand()/RAND_MAX - 1;
    for (int epoch = 0; epoch < 20000; epoch++) {
        float mse = 0;
        for (int s = 0; s < 4; s++) {
            float z[NH], o[NO], dk[NO], dj[NH];
            // compute hidden node outputs z[j]
            for (int j = 0; j < NH; j++) {
                float net = w1[j][NI];
                for (int i = 0; i < NI; i++) net += w1[j][i] * p[s][i];
                z[j] = sigmoid(net);
            }
            // compute the network output vector o and its error
            for (int k = 0; k < NO; k++) {
                float net = w2[k][NH];
                for (int j = 0; j < NH; j++) net += w2[k][j] * z[j];
                o[k] = sigmoid(net);
                dk[k] = (t[s] - o[k]) * o[k] * (1 - o[k]);
                mse += (t[s] - o[k]) * (t[s] - o[k]);
            }
            // back-propagate the error to the hidden nodes
            for (int j = 0; j < NH; j++) {
                float sum = 0;
                for (int k = 0; k < NO; k++) sum += dk[k] * w2[k][j];
                dj[j] = sum * z[j] * (1 - z[j]);
            }
            // modify the weights between hidden and output nodes
            for (int k = 0; k < NO; k++) {
                for (int j = 0; j < NH; j++) w2[k][j] += eta * dk[k] * z[j];
                w2[k][NH] += eta * dk[k];
            }
            // modify the weights between input and hidden nodes
            for (int j = 0; j < NH; j++) {
                for (int i = 0; i < NI; i++) w1[j][i] += eta * dj[j] * p[s][i];
                w1[j][NI] += eta * dj[j];
            }
        }
        if (epoch % 5000 == 0) cout << "epoch " << epoch << "  MSE = " << mse << "\n";
    }
    return 0;
}

Object: 8 Study of Genetic Algorithm

Genetic Algorithm


Professor John Holland in 1975 proposed an attractive class of computational models, called Genetic Algorithms (GA), that mimic the biological evolution process for solving problems in a wide domain. The mechanisms under GA have been analyzed and explained later by Goldberg, De Jong, Davis, Muehlenbein, Chakraborti, Fogel, Vose and many others. Genetic Algorithms have three major applications, namely, intelligent search, optimization and machine learning. Currently, Genetic Algorithms are used along with neural networks and fuzzy logic for solving more complex problems. Because of their joint usage in many problems, these together are often referred to by a generic name: "soft computing". A Genetic Algorithm operates through a simple cycle of stages:

i) Creation of a "population" of strings, ii) Evaluation of each string, iii) Selection of best strings, and iv) Genetic manipulation to create a new population of strings.

The cycle of a Genetic Algorithm is presented below.

Each cycle in Genetic Algorithms produces a new generation of possible solutions for a given problem. In the first phase, an initial population, describing representatives of the potential solution, is created to initiate the search process. The elements of the population are encoded into bit-strings, called chromosomes. The performance of the strings, often called fitness, is then evaluated with the help of some functions, representing the constraints of the problem. Depending on the fitness of the chromosomes, they are selected for a subsequent genetic manipulation process. It should be noted that the selection process is mainly responsible for assuring survival of the best-fit individuals. After selection of the population strings is over, the genetic manipulation process, consisting of two steps, is carried out. In the first step, the crossover operation that recombines the bits (genes) of each two selected strings (chromosomes) is executed. Various types of crossover operators are found in the literature. The single-point and two-point crossover operations are illustrated below.

The crossover points of any two chromosomes are selected randomly. The second step in the genetic manipulation process is termed mutation, where the bits at one or more randomly selected positions of the chromosomes are altered. The mutation process helps to overcome trapping at local maxima. The offspring produced by the genetic manipulation process form the next population to be evaluated.


Fig.: Mutation of a chromosome at the 5th bit position.

Example: The Genetic Algorithm cycle is illustrated in this example for maximizing the function f(x) = x^2 in the interval 0 <= x <= 31. In this example the fitness function is f(x) itself. The larger the functional value, the better is the fitness of the string. In this example, we start with 4 initial strings. The fitness value of the strings and the percentage fitness of the total are estimated in Table A. Since the fitness of the second string is large, we select 2 copies of the second string and one each for the first and fourth strings in the mating pool. The selection of the partners in the mating pool is also done randomly. Here in Table B, we selected the partner of string 1 to be the 2-nd string and the partner of the 4-th string to be the 2-nd string. The crossover points for the first-second and second-fourth strings have been selected after the 0-th and 2-nd bit positions respectively in Table B. The second generation of the population, without mutation in the first generation, is presented in Table C.


Table A

Table B:

Table C:
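The cycle of the example above (roulette-wheel selection, single-point crossover, mutation on 5-bit strings) can be sketched in C++. This is an illustration, not the toolbox program of Object 10; the population size, mutation rate and seed are assumed values.

```cpp
#include <cstdlib>
#include <vector>

// Fitness of a 5-bit chromosome x: f(x) = x^2.
unsigned fitness(unsigned x) { return x * x; }

// One GA run for maximizing f(x) = x^2 over x = 0..31.
unsigned geneticMax(int popSize = 4, int generations = 5,
                    double pMut = 0.01, unsigned seed = 1) {
    srand(seed);
    std::vector<unsigned> pop(popSize);
    for (auto& c : pop) c = rand() % 32;         // random 5-bit chromosomes
    unsigned best = 0;                           // best x seen so far
    for (int g = 0; g < generations; ++g) {
        // roulette-wheel selection into a mating pool
        unsigned long total = 0;
        for (unsigned c : pop) total += fitness(c);
        std::vector<unsigned> pool(popSize);
        for (int i = 0; i < popSize; ++i) {
            unsigned long r = total ? (unsigned long)rand() % total : 0;
            unsigned long acc = 0;
            pool[i] = pop.back();                // fallback if no slice hit
            for (unsigned c : pop) {
                acc += fitness(c);
                if (r < acc) { pool[i] = c; break; }
            }
        }
        // single-point crossover: swap the low `point` bits of a pair
        for (int i = 0; i + 1 < popSize; i += 2) {
            int point = 1 + rand() % 4;          // cut after bit 1..4
            unsigned mask = (1u << point) - 1;
            unsigned a = pool[i], b = pool[i + 1];
            pool[i]     = (a & ~mask) | (b & mask);
            pool[i + 1] = (b & ~mask) | (a & mask);
        }
        // mutation: flip each of the 5 bits with probability pMut
        for (auto& c : pool)
            for (int bit = 0; bit < 5; ++bit)
                if ((double)rand() / RAND_MAX < pMut) c ^= (1u << bit);
        pop = pool;
        for (unsigned c : pop)
            if (fitness(c) > fitness(best)) best = c;
    }
    return best;
}
```

With a larger population and more generations, the best string found approaches x = 31, the maximum of f(x) = x^2 on [0, 31].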

A Schema (or schemata in plural form) / hyperplane or similarity template is a genetic pattern with fixed values of 1 or 0 at some designated bit positions. For example, S = 01?1??1 is a 7-bit schema with fixed values at 4 bits and don't-care values, represented by ?, at the remaining 3 positions. Since 4 positions matter for this schema, we say that the schema contains 4 genes.

Deterministic Explanation of Holland's Observation

To explain Holland's observation in a deterministic manner let us make the following assumptions.

i) There are no recombinations or alterations to genes.
ii) Initially, a fraction f of the population possesses the schema S and those individuals reproduce at a fixed rate r.
iii) All other individuals lacking schema S reproduce at a rate s < r.

Thus with an initial population size of N, after t generations, we find N f r^t individuals possessing schema S, and the population of the rest of the individuals is N (1 − f) s^t. Therefore, the fraction of the individuals with schema S is given by

    N f r^t / (N f r^t + N (1 − f) s^t)  =  f r^t / (f r^t + (1 − f) s^t)

For small t and f, the above fraction reduces to f (r/s)^t, which means the population having the schema S increases exponentially at a rate (r/s). A stochastic proof of the above property will be presented shortly, vide a well-known theorem, called the fundamental theorem of Genetic Algorithms.

Stochastic Explanation of Genetic Algorithms

For presentation of the fundamental theorem of Genetic Algorithms, the following terminologies are defined in order.

Definition: The order of a schema H, denoted by O(H), is the number of fixed positions in the schema. For example, the order of schema H = ?001?1? is 4, since it contains 4 fixed positions.

Definition: The defining length of a schema H, denoted by d(H), is the distance between the outermost fixed positions in it. For example, the schema ?1?001 has a defining length d(H) = 4 − 0 = 4, while the d(H) of ???1?? is zero.

Definition: The schemas defined over L-bit strings may be geometrically interpreted as hyperplanes in an L-dimensional hyperspace (a binary vector space), with each L-bit string representing one corner point in the L-dimensional cube.


Object: 9. Consider the following fuzzy sets

A = [1 0.4 0.6 0.3]

B = [0.3 0.2 0.6 0.5]

Program to find union, intersection and complement of fuzzy sets

% Enter the two fuzzy sets

u = input('enter the first fuzzy set A');

v = input('enter the second fuzzy set B');

disp('Union of A and B');

w = max(u,v)

disp('Intersection of A and B');

p = min(u,v)

[m] = size(u);

disp('Complement of A');

q1 = ones(m)-u

[n] = size(v);

disp('Complement of B');

q2 = ones(n)-v


Output

enter the first fuzzy set A[1 0.4 0.6 0.3]

enter the second fuzzy set B[0.3 0.2 0.6 0.5]

Union of A and B

w=

1.0000 0.4000 0.6000 0.5000

Intersection of A and B

p =

0.3000 0.2000 0.6000 0.3000

Complement of A

q1 =

0 0.6000 0.4000 0.7000

Complement of B

q2 =

0.7000 0.8000 0.4000 0.5000


Object: 10. Write a MATLAB program for maximizing f(x) = x^2 using GA, where x ranges from 0 to 31. Perform 5 iterations only.

Program for Genetic Algorithm to maximize the function f(x) = x^2

clear all; clc;
% x ranges from 0 to 31; 2^5 = 32, so five bits are enough to
% represent x in binary
n = input('Enter no. of population in each iteration');
nit = input('Enter no. of iterations');
% Generate the initial population
[oldchrom] = initbp(n, 5)
% Field descriptor: the binary population is converted to integer
FieldD = [5; 0; 31; 0; 0; 1; 1]
for i = 1:nit
    phen = bindecod(oldchrom, FieldD, 3);  % phen gives the integer value of the population
    % obtain fitness value
    sqx = phen .^ 2;
    sumsqx = sum(sqx);
    avsqx = sumsqx / n;
    hsqx = max(sqx);
    pselect = sqx ./ sumsqx;
    sumpselect = sum(pselect);
    avpselect = sumpselect / n;
    hpselect = max(pselect);
    % apply roulette wheel selection
    FitnV = sqx;
    Nsel = 4;
    newchrix = selrws(FitnV, Nsel);
    newchrom = oldchrom(newchrix, :);
    % perform crossover
    crossrate = 1;
    newchromc = recsp(newchrom, crossrate);  % new population after crossover
    % perform mutation
    vlub = 0:31;
    mutrate = 0.001;
    newchromm = mutrandbin(newchromc, vlub, mutrate);  % new population after mutation
    disp('For iteration'); i
    disp('Population'); oldchrom
    disp('X'); phen
    disp('f(X)'); sqx
    oldchrom = newchromm;
end