Learning Robots Module Report

Intelligent Kinect-bot
applying a supervised learning algorithm to a simple humanoid to trigger basic social interaction, using Microsoft Kinect and Processing
Learning Robots Module 2012
student name: Tom Fejer
student number: s101866
[email protected]


1. Introduction

This report summarizes a two-week project carried out at the Eindhoven University of Technology in 2012. Theoretical and practical lectures were given by Emilia I. Barakova, Ph.D. and Dr. Jun Hu, on topics such as 'Learning as a design method' and how Matlab can be used with physical hardware in order to understand and experiment with learning algorithms in interaction design.

1.1 General Context - Principles of intelligent robotics

Connectionism1 is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many forms of connectionism, but the most common forms use neural network models. The term neural network was traditionally used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes.2

Real biological networks are very complex; artificial neural network algorithms abstract away this complexity and focus on the data-processing aspect that is most relevant to them. These algorithms can be used in various contexts: mimicking animal or human behavior, improving predictive performance, or reducing the generalization error of complex, changing systems.

1.2 General Goal - Applying learning methods to design

The general goal of this module is to gain a better understanding of different learning algorithms and to apply them in a valid scenario.

1.3 Concrete Goal - Create initial human-robot interaction (HRI)

Create an engaging interaction experience with a small humanoid robot interface that can simulate a basic social interaction by distinguishing a few small gestures and reacting to them.

2. Approach / Method

Define scenarios

Record data

Simplify and normalize the data

Add the data to the training algorithm

Add test data to the algorithm to verify the learning

1 "Connectionism - Wikipedia, the free encyclopedia." 2004. 12 Jun. 2012 <http://en.wikipedia.org/wiki/Connectionism>
2 "Neural network - Wikipedia, the free encyclopedia." 2003. 12 Jun. 2012 <http://en.wikipedia.org/wiki/Neural_network>
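The steps above can be sketched end-to-end as follows. This is a minimal illustration only: it uses a stand-in nearest-mean classifier instead of the neural network described later, and every name in it (WorkflowSketch, train, classify) is mine, not from the project code.

```java
// Minimal sketch of the workflow: train on recorded feature windows,
// then verify the learning on held-out test windows.
public class WorkflowSketch {

    // Steps 3-4: "training" here just stores the mean feature value per class.
    static float[] train(float[][] features, int[] labels, int numClasses) {
        float[] sum = new float[numClasses];
        int[] count = new int[numClasses];
        for (int i = 0; i < features.length; i++) {
            sum[labels[i]] += features[i][0];
            count[labels[i]]++;
        }
        for (int c = 0; c < numClasses; c++) sum[c] /= count[c];
        return sum;
    }

    // Step 5: classify a test window by the nearest class mean.
    static int classify(float[] classMeans, float[] feature) {
        int best = 0;
        for (int c = 1; c < classMeans.length; c++) {
            if (Math.abs(feature[0] - classMeans[c]) < Math.abs(feature[0] - classMeans[best])) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Step 2: recorded windows, each reduced to one feature (step 3).
        float[][] recorded = { {0.2f}, {0.8f}, {9.5f}, {10.5f} };
        int[] labels = { 0, 0, 1, 1 };          // 0 = wave, 1 = push
        float[] model = train(recorded, labels, 2);
        System.out.println(classify(model, new float[] { 0.4f })); // 0 (wave)
        System.out.println(classify(model, new float[] { 9.0f })); // 1 (push)
    }
}
```

The point of the sketch is the shape of the pipeline, not the classifier: in the project the `train`/`classify` roles are played by the neural network of section 3.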

3. Concept

3.1 Desired interaction scenarios

For demonstration purposes I chose two easily distinguishable hand gestures: waving and pushing. The robot would learn these gestures, and when it recognizes one it would give relevant feedback:

for the waving it would say 'Hi!'
for the pushing it would say 'High5!'

fig.1 - concept of interaction scenario

3.2.1 Data

For measuring the position of the hand I used a Microsoft Kinect sensor and the Hands3D example of the SimpleOpenNI library. First I measured the raw data coming from the Kinect (a few seconds of waving and a few seconds of pushing).

fig.2 - raw data: top part: x, y, z coordinates separately, bottom part combined

As you can see, the data is easily distinguishable, but to use it in the algorithm I had to normalize it in a way that reduces the amount of data while keeping it distinguishable.

3.2.2 Scaled data

To be able to use the data I defined one-second windows, which allowed me to read measurements at 16 Hz. First I used the average of these measurements, split into X, Y and Z coordinates.

fig.3 - average coordinates of waving (left) and push (right)
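As a sketch, the per-window averaging described above amounts to the following (the names are mine, not the project's; the appendix holds the actual Processing code):

```java
// Collapse one window of hand-position samples for a single axis into its mean.
public class WindowAverage {
    static float mean(float[] window) {
        float total = 0;
        for (float v : window) total += v;
        return total / window.length;
    }

    public static void main(String[] args) {
        // e.g. a short run of x-coordinates from the hand tracker
        float[] xs = { 1f, 2f, 3f, 4f };
        System.out.println(mean(xs)); // 2.5
    }
}
```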

Then I added the standard deviation3 (σ) of the measurement windows.

In statistics and probability theory, standard deviation (represented by the symbol σ) shows how much variation or "dispersion" exists from the average (mean, or expected value).
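In code, that definition reads as follows; this is a sketch with illustrative names, computing the population standard deviation of one window:

```java
// Population standard deviation of one measurement window:
// the square root of the mean squared deviation from the average.
public class WindowStdDev {
    static float stdDev(float[] window) {
        float mean = 0;
        for (float v : window) mean += v;
        mean /= window.length;
        float sumSq = 0;
        for (float v : window) sumSq += (v - mean) * (v - mean);
        return (float) Math.sqrt(sumSq / window.length);
    }

    public static void main(String[] args) {
        float[] w = { 2f, 4f, 4f, 4f, 5f, 5f, 7f, 9f };
        System.out.println(stdDev(w)); // 2.0  (mean 5, variance 4)
    }
}
```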

3 Newman, Dr. "The distribution of range in samples from a normal population, expressed in terms of an independent estimate of standard deviation." Biometrika (1939): 20-30.

fig.4 - standard deviation by coordinates of waving (left) and push (right)

Finally I added the average absolute difference (AAD)4.

fig.5 - average absolute difference by coordinates of waving (left) and push (right)

These three scaling methods provide my input data for the learning algorithm. The system therefore uses 9 inputs (3 coordinates per method) every second.

3.3 Learning algorithm

For my scenario I chose a supervised learning algorithm to teach the robot to recognize the two hand gestures. If I wanted to extend the application in the future, I could choose an unsupervised learning algorithm, which would create different clusters for different gestures. The main difference between the algorithms is this: the aim of the supervised algorithm is to minimize classification errors and reach the desired output (target variable), while the goal of the unsupervised algorithm is to create groups / subsets of the data, making the data points belonging to a class as similar as possible, or the differences between clusters as large as possible.
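The nine network inputs described above (mean, standard deviation and AAD for each of X, Y and Z) can be assembled per window roughly like this; a sketch with made-up names, the report's actual Processing code appears in the appendix:

```java
// Build the nine network inputs from one window of hand positions:
// indices 0-2 = per-axis mean, 3-5 = standard deviation, 6-8 = AAD.
public class FeatureVector {
    static float[] features(float[] xs, float[] ys, float[] zs) {
        float[] out = new float[9];
        float[][] axes = { xs, ys, zs };
        for (int a = 0; a < 3; a++) {
            float mean = 0;
            for (float v : axes[a]) mean += v;
            mean /= axes[a].length;
            float sumSq = 0, sumAbs = 0;
            for (float v : axes[a]) {
                sumSq += (v - mean) * (v - mean);
                sumAbs += Math.abs(v - mean);
            }
            out[a] = mean;                                           // average
            out[3 + a] = (float) Math.sqrt(sumSq / axes[a].length);  // std. deviation
            out[6 + a] = sumAbs / axes[a].length;                    // AAD
        }
        return out;
    }
}
```

The ordering matches the one used later when feeding the network: averages first, then standard deviations, then AADs.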

4 Chen, Zhenxue et al. "Small target detection algorithm based on average absolute difference maximum and background forecast." International Journal of Infrared and Millimeter Waves 28.1 (2007): 87-97.

fig.6 - supervised (left) and unsupervised (right) learning algorithm block diagram

3.4 Feed forward neural network

fig.7 - feed forward neural network diagram

The input layer receives input from the outside world and passes these values to the hidden layer. The value that reaches the hidden layer depends on the connections between the layers. Each connection has a weight; this weight multiplier can either increase or decrease the value coming from the input layer.5

The hidden layer then processes the values and passes them on to the output layer. The connections between the hidden layer and the output layer also have weights. The values in the output layer are processed and produce a final set of results. The final results can be used to make yes/no decisions, or to make certain classifications, and so on. In my case, I would like to receive sensor data from the Kinect, pass this information to the neural network, and get it to classify the data into two classes ('Hi!' or 'High5!').

3.5 Neuron

fig.8 - Sigmoid function - logistic curve

The main purpose of a neuron is to store the values that flow through the neural network.

5 "Arduino Basics: Neural Network (Part 1) - The Connection." 2011. 12 Jun. 2012 <http://arduinobasics.blogspot.com/2011/08/neural-network-part-1-connection.html>

They also do a bit of processing, through the use of an activation function. The activation function is quite an important feature of the neural network: it essentially enables the neuron to make decisions (yes/no and grey-zone) based on the values provided to it. The other feature of activation functions is the ability to map values of any magnitude into the range 0 to 1. The Processing program I used works with the sigmoid function6.
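Putting sections 3.4 and 3.5 together, a single neuron's computation can be sketched like this (illustrative weights and names; the full network implementation is in the appendix):

```java
// One neuron: weighted sum of inputs plus a bias, squashed by the sigmoid
// into the 0..1 range so it can act as a soft yes/no decision.
public class NeuronSketch {
    static float sigmoid(float x) {
        return 1f / (1f + (float) Math.exp(-x));
    }

    static float neuron(float[] inputs, float[] weights, float bias) {
        float sum = bias;
        for (int i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
        return sigmoid(sum);
    }

    public static void main(String[] args) {
        // With a zero weighted sum the sigmoid sits exactly in the middle.
        System.out.println(neuron(new float[] { 1f, 1f }, new float[] { 0f, 0f }, 0f)); // 0.5
        // Large positive input -> output near 1, large negative -> near 0.
        System.out.println(sigmoid(10f) > 0.99f);  // true
        System.out.println(sigmoid(-10f) < 0.01f); // true
    }
}
```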

3.6 Data flow

fig.9 - block diagram of the data flow in the Processing program

6 Yin, Xinyou et al. "A flexible sigmoid function of determinate growth." Annals of Botany 91.3 (2003): 361-371.

3.7 Architecture of the components

fig.10 - architecture of the robot concept

In an ideal set-up, the live sensor readings (from the Microsoft Kinect sensor) would be transferred to a computer via USB, then the processed data (using Processing) would send signals to the microcontroller (Arduino), which triggers visual and audio information on a display (Android cell-phone).

4.1 Conclusion

I successfully recorded gestures from the sensor (the training set), which I could feed to the algorithm; afterwards it managed to recognize a limited amount of recorded data. With the current set-up I could not manage to process the live sensor input: the architecture did not send serial data to the Arduino port while the Kinect sensor was running, so the planned audio-visual display did not work. In general conclusion, the robot with the Kinect sensor could learn and recognize gestures, but within the timeframe I could not create the interaction that I had planned.

4.2 Recommendations

Further tuning is required on both the hardware and the software side. As I was working on the project I saw a valid opportunity in developing a robot that you can teach hand or body movements, using the Kinect sensor's capabilities, and that activates different audio, visual or tactile feedback through open-source prototyping components. This would let us experiment with different human gesture- or movement-based interactions, and with properly chosen learning algorithms it would work much more efficiently than with preprogrammed values.

4.3 Reflection

I had a really hard time with this module and the concept I chose, so I had to request an extension to make the application and the algorithm work. As a designer I am not completely satisfied with the results, because my goal was to build the interface to experience the social interaction with a robot. Lots of difficulties came up during the process, and my impatience and hardware-focused approach did not help a lot. Overall it was a really challenging assignment; I learnt the basics of learning algorithms, knowledge which I will extend and use in the future. I became interested in social robotics (which I had never heard of before). If I had a chance to start the assignment over again, I would probably choose the option of having multiple partners, in order to build a working robot and not only the algorithm.

5. Appendix - Processing code

import SimpleOpenNI.*;

SimpleOpenNI context;
float zoomF = 0.5f;
float rotX = radians(180); // by default rotate the whole scene 180deg around the x-axis,
                           // the data from openni comes upside down
float rotY = radians(0);
boolean handsTrackFlag = false;
PVector handVec = new PVector();
ArrayList handVecList = new ArrayList();
int handVecListSize = 10;
String lastGesture = "";
String stringSeparator = " ";
PrintWriter output;
int waveLeft;
int waveRight;
int waveStatus = 0;
int pushStatus = 0;
int waveCounter;
int pushCounter;
PFont f;
ArrayList<Float>[] trainingWave; //= new ArrayList<Float>[7];
ArrayList<Float>[] trainingHi5; // = new ArrayList<Float>[7];
ArrayList myTrainingInputs = new ArrayList();
ArrayList myTrainingOutputs = new ArrayList();
float averageX, averageY, averageZ;
float standardDeviationX, standardDeviationY, standardDeviationZ;
float aadX = 0;
float aadZ = 0;
float aadY = 0;
int windowCounter = 0;
int windowSize = 15;
ArrayList<Float> numbersX, squaresX, absDiffX;
ArrayList<Float> numbersY, squaresY, absDiffY;
ArrayList<Float> numbersZ, squaresZ, absDiffZ;

float totalX = 0;float totalSquaresX = 0;float totalY = 0;float totalSquaresY = 0;float totalZ = 0;float totalSquaresZ = 0; float avAbsDiffX = 0;float avAbsDiffY = 0;float avAbsDiffZ = 0;float totalabsDiffX = 0;float totalabsDiffY = 0;float totalabsDiffZ = 0; float averageSquareX, averageSquareY, averageSquareZ; Date d;long startTime;boolean justStarted = true; boolean firsthi5 = true; boolean preTraining = true;boolean trainingSession = false;boolean realTimeSession = false;boolean firstTime = true; NeuralNetwork NN = new NeuralNetwork(); void setup(){ numbersX = new ArrayList<Float>(); squaresX = new ArrayList<Float>(); numbersY = new ArrayList<Float>(); squaresY = new ArrayList<Float>(); numbersZ = new ArrayList<Float>(); squaresZ = new ArrayList<Float>(); absDiffX = new ArrayList<Float>(); absDiffY = new ArrayList<Float>(); absDiffZ = new ArrayList<Float>(); NN.addLayer(9, 4); NN.addLayer(4, 6); NN.addLayer(6, 1); myTrainingInputs = new ArrayList(); myTrainingOutputs = new ArrayList(); size(1024, 768, P3D); // strange, get drawing error in the cameraFrustum if i use P3D, in opengl there is no problem //size(1024,768,OPENGL); context = new SimpleOpenNI(this); output = createWriter("neuralnetwork.txt"); f = loadFont("AdobeFanHeitiStd-Bold-48.vlw"); // disable mirror context.setMirror(false); // enable depthMap generation if (context.enableDepth() == false) { println("Can't open the depthMap, maybe the camera is not connected!");

exit(); return; } // enable hands + gesture generation context.enableGesture(); context.enableHands(); // add focus gestures; // here i do have some problems on the mac, i only recognize raiseHand ? Maybe cpu performance ? context.addGesture("Wave"); context.addGesture("Click"); context.addGesture("RaiseHand"); // set how smooth the hand capturing should be context.setSmoothingHands(.5); stroke(255, 255, 255); smooth(); perspective(radians(45), float(width)/float(height), 10.0f, 150000.0f); //---------------------------------------------NN setup} //----------------------------------------------------------DRAW-------------------------------------////////////void draw(){ // update the cam context.update(); background(0, 0, 0); // set the scene pos translate(width/2, height/2, 0); rotateX(rotX); rotateY(rotY); scale(zoomF); // draw the 3d point depth map int[] depthMap = context.depthMap(); int steps = 10; // to speed up the drawing, draw every third point int index; PVector realWorldPoint; translate(0, 0, -1000); // set the rotation center of the scene 1000 infront of the camera stroke(200); for (int y=0;y < context.depthHeight();y+=steps) { for (int x=0;x < context.depthWidth();x+=steps) { index = x + y * context.depthWidth(); if (depthMap[index] > 0) { // draw the projected point realWorldPoint = context.depthMapRealWorld()[index]; point(realWorldPoint.x, realWorldPoint.y, realWorldPoint.z); } } } // draw the tracked hand if (handsTrackFlag) { if (justStarted) { d = new Date(); startTime = d.getTime();

justStarted = false; } pushStyle(); stroke(255, 0, 0, 200); noFill(); Iterator itr = handVecList.iterator(); beginShape(); int hertzCounter = 0; long time = 0; while ( itr.hasNext () ) { PVector p = (PVector) itr.next(); vertex(p.x, p.y, p.z); hertzCounter++; if (hertzCounter==9) { windowCounter++; numbersX.add(handVec.x); numbersY.add(handVec.y); numbersZ.add(handVec.z); } if (windowCounter==windowSize) { Date dt = new Date(); time = dt.getTime(); //CALCULATE FOR X for (int i = 0; i < numbersX.size(); i++) totalX += numbersX.get(i); averageX = totalX / numbersX.size(); for (int i = 0; i < numbersX.size(); i++) squaresX.add(i, (numbersX.get(i) - int(averageX))*(numbersX.get(i) - int(averageX))); for (int i = 0; i < numbersX.size(); i++) totalSquaresX += squaresX.get(i); averageSquareX = totalSquaresX / squaresX.size(); standardDeviationX = int(sqrt(averageSquareX)); for (int i = 0; i < numbersX.size(); i++) absDiffX.add(abs(numbersX.get(i) - int(averageX))); for (int i = 0; i < numbersX.size(); i++) totalabsDiffX += absDiffX.get(i); avAbsDiffX = totalabsDiffX / numbersX.size(); //CALCULATE FOR Y for (int i = 0; i < numbersY.size(); i++) totalY += numbersY.get(i); averageY = totalY / numbersY.size(); for (int i = 0; i < numbersY.size(); i++) squaresY.add(i, (numbersY.get(i) - int(averageY))*(numbersY.get(i) - int(averageY))); for (int i = 0; i < numbersY.size(); i++) totalSquaresY += squaresY.get(i); averageSquareY = totalSquaresY / squaresY.size(); standardDeviationY = int(sqrt(averageSquareY)); for (int i = 0; i < numbersY.size(); i++) absDiffY.add(abs(numbersY.get(i) - int(averageY))); for (int i = 0; i < numbersY.size(); i++) totalabsDiffY += absDiffY.get(i); avAbsDiffY = totalabsDiffY / numbersY.size(); //CALCULATE FOR Z for (int i = 0; i < numbersZ.size(); i++) totalZ += numbersZ.get(i); averageZ = totalZ / numbersZ.size(); for (int i = 0; i < numbersZ.size(); i++) squaresZ.add(i, (numbersZ.get(i) - int(averageZ))*(numbersZ.get(i) - int(averageZ))); for (int i 
= 0; i < numbersZ.size(); i++) totalSquaresZ += squaresZ.get(i); averageSquareZ = totalSquaresZ / squaresZ.size(); standardDeviationZ = int(sqrt(averageSquareZ)); for (int i = 0; i < numbersZ.size(); i++) absDiffZ.add(abs(numbersZ.get(i) - int(averageZ))); for (int i = 0; i < numbersZ.size(); i++) totalabsDiffZ += absDiffZ.get(i); avAbsDiffZ = totalabsDiffZ / numbersZ.size();

output.print(time); output.print(stringSeparator); output.print (averageX); output.print(stringSeparator); output.print (averageY); output.print(stringSeparator); output.print (averageZ); output.print(stringSeparator); output.print (standardDeviationX); output.print(stringSeparator); output.print(standardDeviationY); output.print(stringSeparator); output.print(standardDeviationZ); output.print(stringSeparator); output.print(avAbsDiffX); output.print(stringSeparator); output.print(avAbsDiffY); output.print(stringSeparator); output.print(avAbsDiffZ); output.println(""); float[] myInputs; myInputs = new float[9]; myInputs [0]= averageX; myInputs [1]=averageY; myInputs [2]= averageZ; myInputs [3]= standardDeviationX; myInputs [4]=standardDeviationY; myInputs [5]=standardDeviationZ ; myInputs [6]=avAbsDiffX ; myInputs [7]=avAbsDiffY ; myInputs [8]=avAbsDiffZ ; int outputValue = 0; Date d2 = new Date(); if ((startTime+5000)<d2.getTime()) { if ((startTime+10000)<d2.getTime()) { if ((startTime+15000)<d2.getTime()) { println("Session: Real time session"); preTraining = false; trainingSession = false; //realTimeSession = true; } else { output.println("end"); output.println(""); //println("Session: Training session"); preTraining = false; trainingSession = true; realTimeSession = false; } } else { if(firsthi5){ output.println("hi5"); output.println(""); firsthi5 = false; } println("Session: Hi5"); outputValue = 0; } } else { println("Session: Waving"); outputValue = 1;

} if (preTraining) { float[] myOutputs= { outputValue }; println(myInputs[8]); myTrainingInputs.add(myInputs.clone()); myTrainingOutputs.add(myOutputs.clone()); } windowCounter=0; averageX = 0; averageY = 0; averageZ = 0; numbersX = new ArrayList<Float>(); squaresX = new ArrayList<Float>(); numbersY = new ArrayList<Float>(); squaresY = new ArrayList<Float>(); numbersZ = new ArrayList<Float>(); squaresZ = new ArrayList<Float>(); totalX = 0; totalSquaresX = 0; totalY = 0; totalSquaresY = 0; totalZ = 0; totalSquaresZ = 0; avAbsDiffX = 0; avAbsDiffY = 0; avAbsDiffZ = 0; absDiffX = new ArrayList<Float>(); absDiffY = new ArrayList<Float>(); absDiffZ = new ArrayList<Float>(); totalabsDiffX = 0; totalabsDiffY = 0; totalabsDiffZ = 0; if (trainingSession&&firstTime) { trainingSession = false; firstTime = false; println("--------------------------------------------"); println("Begin Training"); println(myTrainingInputs.size()); println(myTrainingOutputs.size()); NN.autoTrainNetwork(myTrainingInputs, myTrainingOutputs, 0.0001, 500000); println(""); println("End Training"); println(""); println("--------------------------------------------"); } if (realTimeSession) { NN.processInputsToOutputs(myInputs); float[] myOutputDataReal= { }; myOutputDataReal=NN.getOutputs(); print(myOutputDataReal[0]); println("Feed Forward: OUTPUT=" + myOutputDataReal[0]); } } textFont(f, 9); // STEP 4 Specify font to be used fill(255);

endShape(); stroke(255, 0, 0); strokeWeight(4); point(handVec.x, handVec.y, handVec.z); } popStyle(); // delay(1000); } // draw the kinect cam context.drawCamFrustum();} // -----------------------------------------------------------------//////////////////////////////// hand events void onCreateHands(int handId, PVector pos, float time){ // println("onCreateHands - handId: " + handId + ", pos: " + pos + ", time:" + time); handsTrackFlag = true; handVec = pos; handVecList.clear(); handVecList.add(pos);} void onUpdateHands(int handId, PVector pos, float time){ //println("onUpdateHandsCb - handId: " + handId + ", pos: " + pos + ", time:" + time); handVec = pos; handVecList.add(0, pos); if (handVecList.size() >= handVecListSize) { // remove the last point handVecList.remove(handVecList.size()-1); }} void onDestroyHands(int handId, float time){ handsTrackFlag = false; context.addGesture(lastGesture);} // -----------------------------------------------------------------void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition){ lastGesture = strGesture; context.removeGesture(strGesture); context.startTrackingHands(endPosition);} void onProgressGesture(String strGesture, PVector position, float progress){} // -----------------------------------------------------------------void keyPressed(){ switch(key) {

case ' ': context.setMirror(!context.mirror()); break; } switch(keyCode) { case LEFT: rotY += 0.1f; break; case RIGHT: rotY -= 0.1f; break; case UP: if (keyEvent.isShiftDown()) zoomF += 0.01f; else rotX += 0.1f; break; case DOWN: if (keyEvent.isShiftDown()) { zoomF -= 0.01f; if (zoomF < 0.01) zoomF = 0.01; } else rotX -= 0.1f; break; }} void exit() { output.close();} class Connection{ float connEntry; float weight; float connExit; //This is the default constructor for an Connection Connection(){ randomiseWeight(); } //A custom weight for this Connection constructor Connection(float tempWeight){ setWeight(tempWeight); } //Function to set the weight of this connection void setWeight(float tempWeight){ weight=tempWeight; } //Function to randomise the weight of this connection void randomiseWeight(){ setWeight(random(2)-1); } //Function to calculate and store the output of this Connection float calcConnExit(float tempInput){ connEntry = tempInput; connExit = connEntry * weight; return connExit; }}class Layer{ Neuron[] neurons = {};

float[] layerINPUTs={}; float[] actualOUTPUTs={}; float[] expectedOUTPUTs={}; float layerError; float learningRate; /* This is the default constructor for the Layer */ Layer(int numberConnections, int numberNeurons){ /* Add all the neurons and actualOUTPUTs to the layer */ for(int i=0; i<numberNeurons; i++){ Neuron tempNeuron = new Neuron(numberConnections); addNeuron(tempNeuron); addActualOUTPUT(); } } /* Function to add an input or output Neuron to this Layer */ void addNeuron(Neuron xNeuron){ neurons = (Neuron[]) append(neurons, xNeuron); } /* Function to get the number of neurons in this layer */ int getNeuronCount(){ return neurons.length; } /* Function to increment the size of the actualOUTPUTs array by one. */ void addActualOUTPUT(){ actualOUTPUTs = (float[]) expand(actualOUTPUTs,(actualOUTPUTs.length+1)); } /* Function to set the ENTIRE expectedOUTPUTs array in one go. */ void setExpectedOUTPUTs(float[] tempExpectedOUTPUTs){ expectedOUTPUTs=tempExpectedOUTPUTs; } /* Function to clear ALL values from the expectedOUTPUTs array */ void clearExpectedOUTPUT(){ expectedOUTPUTs = (float[]) expand(expectedOUTPUTs, 0); } /* Function to set the learning rate of the layer */ void setLearningRate(float tempLearningRate){ learningRate=tempLearningRate; } /* Function to set the inputs of this layer */ void setInputs(float[] tempInputs){ layerINPUTs=tempInputs; } /* Function to convert ALL the Neuron input values into Neuron output values in this layer, through a special activation function. */ void processInputsToOutputs(){ /* neuronCount is used a couple of times in this function. */ int neuronCount = getNeuronCount(); /* Check to make sure that there are neurons in this layer to process the inputs */ if(neuronCount>0) {

/* Check to make sure that the number of inputs matches the number of Neuron Connections. */ if(layerINPUTs.length!=neurons[0].getConnectionCount()){ println("Error in Layer: processInputsToOutputs: The number of inputs [" +layerINPUTs.length +"] do NOT match the number of Neuron connections [" + neurons[0].getConnectionCount() + "] in this layer"); exit(); } else { /* The number of inputs are fine : continue Calculate the actualOUTPUT of each neuron in this layer, based on their layerINPUTs (which were previously calculated). Add the value to the layer's actualOUTPUTs array. */ for(int i=0; i<neuronCount;i++){ actualOUTPUTs[i]=neurons[i].getNeuronOutput(layerINPUTs); } } }else{ println("Error in Layer: processInputsToOutputs: There are no Neurons in this layer"); exit(); } } /* Function to get the error of this layer */ float getLayerError(){ return layerError; } /* Function to set the error of this layer */ void setLayerError(float tempLayerError){ layerError=tempLayerError; } /* Function to increase the layerError by a certain amount */ void increaseLayerErrorBy(float tempLayerError){ layerError+=tempLayerError; } /* Function to calculate and set the deltaError of each neuron in the layer */ void setDeltaError(float[] expectedOutputData){ setExpectedOUTPUTs(expectedOutputData); int neuronCount = getNeuronCount(); /* Reset the layer error to 0 before cycling through each neuron */ setLayerError(0); for(int i=0; i<neuronCount;i++){ neurons[i].deltaError = actualOUTPUTs[i]*(1-actualOUTPUTs[i])*(expectedOUTPUTs[i]-actualOUTPUTs[i]); /* Increase the layer Error by the absolute difference between the calculated value (actualOUTPUT) and the expected value (expectedOUTPUT). 
*/ increaseLayerErrorBy(abs(expectedOUTPUTs[i]-actualOUTPUTs[i])); } } /* Function to train the layer : which uses a training set to adjust the connection weights and biases of the neurons in this layer */ void trainLayer(float tempLearningRate){ setLearningRate(tempLearningRate); int neuronCount = getNeuronCount(); for(int i=0; i<neuronCount;i++){ /* update the bias for neuron[i] */ neurons[i].bias += (learningRate * 1 * neurons[i].deltaError); /* update the weight of each connection for this neuron[i] */ for(int j=0; j<neurons[i].getConnectionCount(); j++){ neurons[i].connections[j].weight += (learningRate * neurons[i].connections[j].connEntry * neurons[i].deltaError);

} } }}class NeuralNetwork{ Layer[] layers = {}; float[] arrayOfInputs={}; float[] arrayOfOutputs={}; float learningRate; float networkError; float trainingError; int retrainChances=0; NeuralNetwork(){ /* the default learning rate of a neural network is set to 0.1, which can changed by the setLearningRate(lR) function. */ learningRate=0.1; } /* Function to add a Layer to the Neural Network */ void addLayer(int numConnections, int numNeurons){ layers = (Layer[]) append(layers, new Layer(numConnections,numNeurons)); } /* Function to return the number of layers in the neural network */ int getLayerCount(){ return layers.length; } /* Function to set the learningRate of the Neural Network */ void setLearningRate(float tempLearningRate){ learningRate=tempLearningRate; } /* Function to set the inputs of the neural network */ void setInputs(float[] tempInputs){ arrayOfInputs=tempInputs; } /* Function to set the inputs of a specified layer */ void setLayerInputs(float[] tempInputs, int layerIndex){ if(layerIndex>getLayerCount()-1){ println("NN Error: setLayerInputs: layerIndex=" + layerIndex + " exceeded limits= " + (getLayerCount()-1)); } else { layers[layerIndex].setInputs(tempInputs); } } /* Function to set the outputs of the neural network */ void setOutputs(float[] tempOutputs){ arrayOfOutputs=tempOutputs; } /* Function to return the outputs of the Neural Network */ float[] getOutputs(){ return arrayOfOutputs; }

/* Function to process the Neural Network's input values and convert them to an output pattern using ALL layers in the network */ void processInputsToOutputs(float[] tempInputs){ setInputs(tempInputs); /* Check to make sure that the number of NeuralNetwork inputs matches the Neuron Connection Count in the first layer. */ if(getLayerCount()>0){ if(arrayOfInputs.length!=layers[0].neurons[0].getConnectionCount()){ println("NN Error: processInputsToOutputs: The number of inputs ["+ arrayOfInputs.length +"] do NOT match the NN ["+ layers[0].neurons[0].getConnectionCount()+"]" ); exit(); } else { /* The number of inputs are fine : continue */ for(int i=0; i<getLayerCount(); i++){ /*Set the INPUTs for each layer: The first layer gets it's input data from the NN, whereas the 2nd and subsequent layers get their input data from the previous layer's actual output. */ if(i==0){ setLayerInputs(arrayOfInputs,i); } else { setLayerInputs(layers[i-1].actualOUTPUTs, i); } /* Now that the layer has had it's input values set, it can now process this data, and convert them into an output using the layer's neurons. The outputs will be used as inputs in the next layer (if available). */ layers[i].processInputsToOutputs(); } /* Once all the data has filtered through to the end of network, we can grab the actualOUTPUTs of the LAST layer These values become or will be set to the NN output values (arrayOfOutputs), through the setOutputs function call. */ setOutputs(layers[getLayerCount()-1].actualOUTPUTs); } }else{ println("Error: There are no layers in this Neural Network"); exit(); } } /* Function to train the entire network using an array. */ void trainNetwork(float[] inputData, float[] expectedOutputData){ /* Populate the ENTIRE network by processing the inputData. 
*/ processInputsToOutputs(inputData); /* train each layer - from back to front (back propagation) */ for(int i=getLayerCount()-1; i>-1; i--){ if(i==getLayerCount()-1){ layers[i].setDeltaError(expectedOutputData); layers[i].trainLayer(learningRate); networkError=layers[i].getLayerError(); } else { /* Calculate the expected value for each neuron in this layer (eg. HIDDEN LAYER) */ for(int j=0; j<layers[i].getNeuronCount(); j++){ /* Reset the delta error of this neuron to zero. */ layers[i].neurons[j].deltaError=0; /* The delta error of a hidden layer neuron is equal to the SUM of [the PRODUCT of the connection.weight and error of the neurons in the next layer(eg OUTPUT Layer)]. */ /* Connection#1 of each neuron in the output layer connect with Neuron#1 in the hidden layer */ for(int k=0; k<layers[i+1].getNeuronCount(); k++){ layers[i].neurons[j].deltaError += (layers[i+1].neurons[k].connections[j].weight * layers[i+1].neurons[k].deltaError); } /* Now that we have the sum of Errors x weights attached to this neuron. We must multiply it by the derivative of the activation function. */ layers[i].neurons[j].deltaError *= (layers[i].neurons[j].neuronOutputValue * (1-layers[i].neurons[j].neuronOutputValue)); }

/* Now that you have all the necessary fields populated, you can now Train this hidden layer and then clear the Expected outputs, ready for the next round. */ layers[i].trainLayer(learningRate); layers[i].clearExpectedOUTPUT(); } } } /* Function to train the entire network, using an array of input and expected data within an ArrayList */ void trainingCycle(ArrayList trainingInputData, ArrayList trainingExpectedData, Boolean trainRandomly){ int dataIndex; /* re-initialise the training Error with every cycle */ trainingError=0; /* Cycle through the training data either randomly or sequentially */ for(int i=0; i<trainingInputData.size(); i++){ if(trainRandomly){ dataIndex=(int) (random(trainingInputData.size())); } else { dataIndex=i; } trainNetwork((float[]) trainingInputData.get(dataIndex),(float[]) trainingExpectedData.get(dataIndex)); /* Use the networkError variable which is calculated at the end of each individual training session to calculate the entire trainingError. */ trainingError+=abs(networkError); } } /* Function to train the network until the Error is below a specific threshold */ void autoTrainNetwork(ArrayList trainingInputData2, ArrayList trainingExpectedData2, float trainingErrorTarget, int cycleLimit){ trainingError=9999; int trainingCounter=0; /* cycle through the training data until the trainingError gets below trainingErrorTarget (eg. 0.0005) or the training cycles have exceeded the cycleLimit variable (eg. 10000). */ while(trainingError>trainingErrorTarget && trainingCounter<cycleLimit){ /* re-initialise the training Error with every cycle */ trainingError=0; /* Cycle through the training data randomly */ trainingCycle(trainingInputData, trainingExpectedData, true); /* increment the training counter to prevent endless loop */ trainingCounter++; } /* Due to the random nature in which this neural network is trained. 
There may be occasions when the training error may drop below the threshold To check if this is the case, we will go through one more cycle (but sequentially this time), and check the trainingError for that cycle If the training error is still below the trainingErrorTarget, then we will end the training session. If the training error is above the trainingErrorTarget, we will continue to train. It will do this check a Maximum of 9 times. */ if(trainingCounter<cycleLimit){ trainingCycle(trainingInputData, trainingExpectedData, false); trainingCounter++;

if(trainingError>trainingErrorTarget){ if (retrainChances<10){ retrainChances++; autoTrainNetwork(trainingInputData, trainingExpectedData,trainingErrorTarget, cycleLimit); } } } else { println("CycleLimit has been reached. Has been retrained " + retrainChances + " times. Error is = " + trainingError); } }} class Neuron{ Connection[] connections={}; float bias; float neuronInputValue; float neuronOutputValue; float deltaError; //The default constructor for a Neuron Neuron(){ } /*The typical constructor of a Neuron - with random Bias and Connection weights */ Neuron(int numOfConnections){ randomiseBias(); for(int i=0; i<numOfConnections; i++){ Connection conn = new Connection(); addConnection(conn); } } //Function to add a Connection to this neuron void addConnection(Connection conn){ connections = (Connection[]) append(connections, conn); } /* Function to return the number of connections associated with this neuron.*/ int getConnectionCount(){ return connections.length; } //Function to set the bias of this Neron void setBias(float tempBias){ bias = tempBias; } //Function to randomise the bias of this Neuron void randomiseBias(){ setBias(random(1)); } /*Function to convert the inputValue to an outputValue Make sure that the number of connEntryValues matches the number of connections */ float getNeuronOutput(float[] connEntryValues){ if(connEntryValues.length!=getConnectionCount()){ println("Neuron Error: getNeuronOutput() : Wrong number of connEntryValues"); exit(); } neuronInputValue=0; /* First SUM all of the weighted connection values (connExit) attached to this neuron. This becomes the neuronInputValue. */ for(int i=0; i<getConnectionCount(); i++){ neuronInputValue+=connections[i].calcConnExit(connEntryValues[i]); } //Add the bias to the Neuron's inputValue neuronInputValue+=bias;

/* Send the inputValue through the activation function to produce the Neuron's outputValue */ neuronOutputValue=Activation(neuronInputValue); //Return the outputValue return neuronOutputValue; } //Activation function float Activation(float x){ float activatedValue = 1 / (1 + exp(-1 * x)); return activatedValue; } }