Hub Queue Size Analyzer: Implementing Neural Networks in Practice
Post on 22-Dec-2015
TRANSCRIPT
Choosing implementation method
• Create a service which takes an image from a camera in the DataArt Hub and performs image recognition
• An additional module decides the queue size based on the recognition results
• A neural network was recommended for the image recognition for the following reasons:
– We don't need an exact solution
– A recognition error is not fatal
– There is only a small number (4) of possible queue states
– Alternative solutions (image analysis with wavelets, pixel-level analysis, etc.) would take too long and cost too much to implement
Introducing Neural Networks
A computational model inspired by the structure and functional aspects of biological neural networks.
Neural Network Advantages
– We program only the structure of the system, not its behavior. The structure is universal and flexible.
– We provide an image on the input and get the recognition result on the output. Simple and fast recognition.
– We do not care about the algorithm for image analysis. No need for a PhD.
– We can reuse an existing neural network in similar AND different tasks with minor changes. Reusability.
– Parallel computations. Fast & furious.
How does it work
[Diagram: camera image → brightness adjustment → input layer → weights → output layer]
The processed image is sent to the input layer of the network. If the weights are correct, we get the desired result (queue size) on the output layer.
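The forward pass just described reduces to a couple of matrix products. The sketch below is illustrative only: the layer sizes, the sigmoid activation, and the 4-state output encoding are my assumptions, not the project's actual code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(image_pixels, w_hidden, w_output):
    """Feed a preprocessed image through a two-layer network.

    image_pixels: 1-D array of input brightness values
    w_hidden:  (n_hidden, n_input) weight matrix
    w_output:  (n_output, n_hidden) weight matrix
    Returns output activations; the largest one picks the queue state.
    """
    hidden = sigmoid(w_hidden @ image_pixels)
    return sigmoid(w_output @ hidden)

# 4 outputs, one per possible queue state (see the reasons slide above);
# 16 inputs and 8 hidden nodes are arbitrary illustrative sizes.
rng = np.random.default_rng(0)
w_h = rng.uniform(-0.1, 0.1, size=(8, 16))
w_o = rng.uniform(-0.1, 0.1, size=(4, 8))
out = forward(rng.random(16), w_h, w_o)
print(int(np.argmax(out)))  # index of the predicted queue state
```

With untrained (random) weights the answer is meaningless; training adjusts the weights until the largest output reliably matches the true queue size.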
How does it work
[Diagram: brightness adjustment → threshold, integration → result picture]
A threshold value and integration of the result are applied, and the result picture is created.
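The preprocessing chain on this slide (brightness adjustment, then threshold, then integration) can be sketched as follows. The block size, target brightness, and threshold value are assumptions; the slides do not give them.

```python
import numpy as np

def preprocess(gray, target_mean=128.0, threshold=0.5):
    """One possible reading of the slide's preprocessing pipeline.

    gray: 2-D array of pixel brightness values in [0, 255].
    """
    # Brightness adjustment: shift the image toward a target mean level
    adjusted = np.clip(gray + (target_mean - gray.mean()), 0, 255) / 255.0

    # Threshold: keep only pixels brighter than the cut-off
    binary = (adjusted > threshold).astype(float)

    # Integration: average 8x8 blocks so the result fits the input layer
    h, w = binary.shape
    blocks = binary[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
    return blocks.mean(axis=(1, 3)).ravel()

frame = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
features = preprocess(frame)
print(features.shape)  # (64,) -- one value per 8x8 block
```

The integration step also shrinks the input, which matters given the limited number of input nodes mentioned later.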
Implementation details
• All neural network logic was placed in a class library for reusability; this library can be used in other projects.
• An administrative tool was developed for monitoring the network's condition and running extra training when needed.
Administrative tool
• Creating and training a network
• Retraining existing networks
• Enlarging the training set
• Monitoring that all processes work correctly
• Monitoring the current network error and the dependability of the recognition result
• Saving and loading weights from a file
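One of these features, saving and loading weights from a file, is simple to sketch. The file name, matrix shapes, and the use of NumPy's archive format are assumptions for illustration, not the tool's actual implementation.

```python
import numpy as np

def save_weights(path, w_hidden, w_output):
    """Persist both weight matrices in one compressed archive."""
    np.savez_compressed(path, w_hidden=w_hidden, w_output=w_output)

def load_weights(path):
    """Restore the matrices saved by save_weights."""
    data = np.load(path)
    return data["w_hidden"], data["w_output"]

save_weights("queue_net.npz", np.zeros((8, 16)), np.zeros((4, 8)))
w_h, w_o = load_weights("queue_net.npz")
print(w_h.shape, w_o.shape)  # (8, 16) (4, 8)
```

Persisting weights is what makes "retraining existing networks" possible: a network can be restored, trained further on an enlarged set, and saved again.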
Neural Network minuses
• Training set creation requires a very careful approach:
– Each pattern must be representative
– The training set must cover all typical situations
– A large diversity of training data is needed for real-world operation
• We should always check what our network has learned
• Limited number of input, hidden and output nodes
• Heavy computation during the training process
NN Training
Trial and error method:
– Initial weights have to be small enough
– Feed the network with a sample data set (the training set)
– Get the output value
– Use the error (output minus target value) as the criterion of success in the training algorithm
– Each change is small; the number of iterations is big
Algorithm overview
1. Initialise the network with small random weights
   o maxWeight < 5.0 / (inputNodesNum * maxInputNodeValue)
2. Repeat for each input pattern in the input collection:
   1. Present an input pattern to the input layer of the network.
   2. Get the output values.
   3. Calculate the network's summary error.
   4. Reduce the error by changing the NN weights properly (back propagation).
   5. Propagate an error value back to each hidden neuron, proportional to its contribution to the network's activation error.
   6. Adjust the weights feeding each hidden neuron to reduce its contribution to the error for this input pattern.
3. Repeat step 2 until the network is suitably trained.
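The whole algorithm can be sketched as a minimal single-hidden-layer back-propagation loop. The toy XOR data, layer sizes, learning rate, and epoch count are illustrative assumptions; the weight-initialisation bound in step 1 is taken from the slide.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_weights(rng, n_out, n_in, max_input_value=1.0):
    # Step 1: small random weights, bounded as on the slide:
    #   maxWeight < 5.0 / (inputNodesNum * maxInputNodeValue)
    max_w = 5.0 / (n_in * max_input_value)
    return rng.uniform(-max_w, max_w, size=(n_out, n_in))

def train(patterns, targets, n_hidden=4, lr=0.5, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = patterns.shape[1], targets.shape[1]
    w_h = init_weights(rng, n_hidden, n_in)
    w_o = init_weights(rng, n_out, n_hidden)
    b_h, b_o = np.zeros(n_hidden), np.zeros(n_out)
    for _ in range(epochs):                      # step 3: repeat until trained
        for x, t in zip(patterns, targets):      # step 2: each input pattern
            hidden = sigmoid(w_h @ x + b_h)      # 2.1/2.2: forward pass
            out = sigmoid(w_o @ hidden + b_o)
            # 2.3/2.4: error at the output layer, scaled by sigmoid slope
            delta_o = (out - t) * out * (1.0 - out)
            # 2.5: propagate the error back, proportional to each hidden
            # neuron's contribution
            delta_h = (w_o.T @ delta_o) * hidden * (1.0 - hidden)
            # 2.6: small weight adjustments, over many iterations
            w_o -= lr * np.outer(delta_o, hidden)
            b_o -= lr * delta_o
            w_h -= lr * np.outer(delta_h, x)
            b_h -= lr * delta_h
    return w_h, b_h, w_o, b_o

# Toy stand-in for "image in, queue size out": learn XOR of two inputs.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([[0.0], [1.0], [1.0], [0.0]])
w_h, b_h, w_o, b_o = train(X, T)
for x in X:
    out = sigmoid(w_o @ sigmoid(w_h @ x + b_h) + b_o)
    print(x, round(float(out[0]), 2))
```

When training succeeds, the four outputs approach 0, 1, 1, 0; the "trial and error" nature of the method shows in that convergence depends on the initial weights, learning rate, and iteration count.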
Back Propagation
• A way to reduce the summary NN error and improve its performance
• The "blind paratrooper" method
• 3 steps:
What can be customized in NN?
• Number of nodes
• Target network error
• Maximum number of training iterations
• Learning rate
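These knobs might be grouped into a single configuration object. This is a minimal sketch with illustrative names and default values; none of them come from the project.

```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    """The tunable parameters listed on the slide (illustrative names)."""
    input_nodes: int = 64         # number of nodes per layer
    hidden_nodes: int = 16
    output_nodes: int = 4         # one per possible queue state
    target_error: float = 0.01    # stop once the summary error drops below this
    max_iterations: int = 10_000  # cap on training iterations
    learning_rate: float = 0.5    # step size for weight updates

cfg = TrainingConfig(hidden_nodes=32)
print(cfg.hidden_nodes, cfg.target_error)  # 32 0.01
```

Keeping these values in one place makes it easy for an administrative tool to retrain a network with different settings without code changes.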
Where NN has been used already
• Image/sound recognition
• Stock market prediction
• Data classification
• Medicine
• Detecting credit card fraud
• Forecast engines
• Geo-routing systems
• Aviation
• NASA
• Etc.